The Ideological is the Personal is the Political

It’s always curious to see how ideological mindsets, as psychological predispositions, play out on the large scale of populations and politics, movements and mass actions (Public Health, Collective Potential, and Pro-Social Behavior). But such patterns of behavior, in many ways, are more commonly and easily observed on the small scale of everyday life (On Rodents and Conservatives; & Orderliness and Animals). Politics proper, in fact, is only a small part of an ideological identity and worldview, as socially constructed and interpellated.

Think about some aspects that show up in the social science research. Conservative-mindedness, as opposed to liberal-mindedness, is marked by less tolerance for anything that is unfamiliar, unknown, or ambiguous. One trait this shows up in is low ‘Openness to Experience’, closely related to and overlapping with the trait called ‘Intellect’, which has much to do with cognitive complexity and flexibility, such as perspective-shifting. On the sociopolitical level, this corresponds to high rates of so-called ‘Traditionalism’, or rather norm enforcement, even enforcement of invented traditions (i.e., nostalgic revisionism).

The conservative mentality wants certainty, and will create it, or the perception of it, when it’s lacking. Studies show, for example, that conservatives are more prone to the backfire effect: the more their beliefs and biases are challenged, the stronger their sense of conviction tends to become. Certainty itself is a conservative principle because the need for it is so overpowering. This is why conservatism is so strongly linked to religiosity and dogmatism, as studies show; and also why social conservatism increases under the same conditions (e.g., pathogen exposure, real or imagined) that precipitate right-wing authoritarianism, as studies likewise show.

This can be thought of as morally neutral or, more precisely, context-dependent. It’s an evolved defensive response that promotes survival in the presence of genuine risks and threats. So, it can be beneficial under normal conditions of temporary problems that can be easily solved, resolved, or escaped. But unfortunately, we modern Americans live in a state of chronic stress that can turn this temporary state into a permanent ideological mentality. What is potentially pro-social, when deranged by pervasive stress and free-floating anxiety, becomes anti-social. Hence, a temporary down-leveling of Openness can permanently shut off the ability to change one’s mind, to see other perspectives, to take in new information.

On the individual level, this can manifest in simple and mundane ways. There is a conservative we know who is stereotypical in many ways. She strongly dislikes anything strange or different. The problem is that she prefers a known and familiar problem over an unknown and unfamiliar solution. A simple example: she regularly loses her phone and breaks the screen because it falls out of her pocket, or because she lays it down somewhere to keep it from falling out of her pocket. She could simply carry it in a fanny pack or belt holster, but she refuses to change because she has always carried it in her pocket. She’d rather break and lose her phone on a regular basis, since doing something new and different would feel uncomfortable.

As a personal consequence of personal behavior, that is her choice. But of course, nothing remains merely personal. It also informs all other areas of decision-making, including support of policies that affect others. You could point out the problems of capitalist realism to most conservatives and, in some cases, they might even acknowledge them. Yet they’ll typically struggle not only to imagine an alternative but to imagine a state of mind in which they’d prefer a potential solution over the known quantity of an imperfect system. To change anything, even to improve it or to lessen suffering and harm, is preferentially interpreted as risk, not potential. This is why narratives (ideological beliefs, conspiracy theories, talking points, etc.) have such staying power on the Right.

It’s not merely a lack of knowledge or critical thinking, although it’s not incidental that conservatives rightly see education as undermining conservative-mindedness. The thing is that the social conservatism and right-wing authoritarianism that begin as a defense mechanism themselves become the object of defense. Those on the Right become identified with a narrow ideological identity because they come to fear the loss of fear itself. They end up taking fear as the normal condition of human nature, rather than a passing response to a temporary situation. As such, it becomes an absolute conviction that There Is No Alternative (TINA); and any suggestion to the contrary is to be attacked.

In fear, they cling to fear in the wish that it will protect them from what they fear. But over time fear takes on a life of its own, ungrounded from anything specific. Even the possibility of losing fear becomes fearful. This is how they come to prioritize the known and familiar for no other reason than that it’s known and familiar, not unlike how an abused spouse will sometimes refuse to leave their abuser. But to confront this mentality will only antagonize it further into fear. The only way to undo it is to remove the stressful conditions underlying fear. A pathway out of fear has to become a real possibility that is viscerally felt. This sometimes begins with radical imagination slipped past the defenses in a pleasing and entertaining narrative, just for a moment to imagine that, yes, there are alternatives of hope and inspiration.

We Are All Liberals, and Always Have Been

Jonathan Haidt’s moral foundations theory gained traction some years back. His ideas aren’t brilliant or entirely original, but he is a catchy popularizer of social science. Still, there is some merit to his theory, even if there is plenty to criticize, as we have done previously. It is lacking and misleading in certain ways. For example, in talking about the individualizing moral foundations, Haidt has zero discussion of the personality trait openness.

That is the defining feature of liberal-mindedness. Openness is core to the liberal values of intellectuality, critical thinking, curiosity, truth-seeking, systems thinking, cognitive complexity, cognitive empathy, tolerance of ambiguity, tolerance of differences, etc. As an attitude, in combination with the individualizing moral foundations of fairness/reciprocity and harm/care, openness also powerfully informs major aspects of the liberal sense of egalitarianism and justice underlying social and political liberalism.

Openness represents everything that stands in opposition to the binding moral foundations: ingroup/loyalty, authority/respect, and purity/sanctity. Those other moral foundations, in being everything that openness is not, are what define conservatism, specifically social conservatism, and arguably are what makes conservatives prone to authoritarianism. One can think of authoritarianism as simply the binding moral foundations pushed to an extreme, such that the openness personality trait and the individualizing moral foundations are suppressed.

This is important for how the framing of the topic has been politicized. Haidt is a supposed ‘liberal’ who, in being conservative-minded, has made a name for himself by ‘courageously’ attacking liberalism and punching left, an old American tradition among pseudo-liberal elites. There has been an argument, originated by Haidt, that liberals are somehow deficient for lacking conservative-minded values. But that is inaccurate for a number of reasons. The unwillingness to conform, submit, and fear-monger is in itself a liberal value, not merely a lack of conservative values.

Anyway, maybe not all values are equal in the first place. One study indicates, instead, that the binding moral foundations are not necessarily inherent to human nature and so not on the same level. The so-called but misnamed individualizing moral foundations are what everyone is born with. That is to say no one is born a conservative or an authoritarian. Instead, we all come into this world with a liberal-minded sense of openness, fairness, and care. That very well might be the psychological baseline of the human species.

Yes, other research shows that stressful conditions (parasite load, real or imagined pathogen exposure, etc) increase both social conservatism and authoritarianism. But the evidence doesn’t indicate that chronic stress, as exists in the modern world, is the normal state of the human species. Would a well-functioning community with great public health, low inequality, a strong culture of trust, etc show much expression of conservative-mindedness at all? One suspects not. Certainly, traditional tribes like the Piraha don’t. Maybe physical health, psychological health, and moral health are inseparable.

In one sense, liberalism is a hothouse flower. It does require optimal conditions to thrive and bloom. But those optimal conditions are simply the conditions under which human nature evolved most of the time. We have a threat system that takes over under less-than-optimal conditions. If temporary, it won’t elicit authoritarianism. That only happens when stressors never can be resolved, lessened, or escaped; and so trauma sets in. One might speculate that is not the normal state of humanity. It may be true that we, in the modern West, are all liberals now. But maybe, under it all, we always were.

* * *

We Are All White Liberals Now
We Are All Egalitarians, and Always Have Been
We Are All Bleeding Heart Liberals Now

The role of cognitive resources in determining our moral intuitions:
Are we all liberals at heart?

by Jennifer Cole Wright and Galen Baril

The role of cognitive resources in determining our moral intuitions:
Are we all liberals at heart?

by Caroline Minott

Some researchers suspect that the differences in liberal and conservative moral foundations are a byproduct of Enlightenment philosophers “narrowing” the focus of morality down to harm and fairness. In this view, liberals still have binding foundation intuitions but actively override them. The current study asks the question: are the differences between liberals’ and conservatives’ moral foundations due to an unconscious cognitive overriding of binding foundation intuitions, or are they due to an enhancement of them? Since both of these conditions take effort, the researchers used self-regulation depletion/cognitive load tasks to get at participants’ automatic moral responses. […]

When cognitive resources were compromised, participants only responded strongly to the individualizing foundations (harm/fairness), with both liberals and conservatives deprioritizing the binding foundations (authority/in-group/purity). In other words, automatic moral reactions of conservatives turned out to be more like those of liberals. These findings suggest that harm and fairness could be core components of morality – for both liberals and conservatives. While many believed in an innate five-foundation moral code, in which liberals would narrow their foundations down to two, we may actually begin life with a two-foundation moral code. From here, conservatives emerge by way of expanding upon these two foundations (adding authority/ingroup/purity).

WEIRD Personality Traits as Stable Egoic Structure

Nearly every scientific field of study is facing a replication crisis and, although known about for decades now, it still has not been resolved. Most researchers are so limited in their knowledge and expertise that they lack any larger context of understanding. They simply don’t know what they don’t know, and there is no incentive in siloed professions to spend time understanding anything outside of one’s field. In science, the replication crisis has numerous causes, sometimes bad study design or the inherent difficulty of some areas of study. Nutrition studies, for example, have been dependent on epidemiological research that is based on correlations without being able to prove causation; and, on top of that, they often rely on notoriously unreliable self-reported food surveys where people have to guesstimate what they ate in the past, sometimes over a period of years. More recent research has shown that much of what we thought we knew simply is not true or has yet to be verified.

Another problem is what or who is studied. There are problems with the lab animals used because certain species adapt better to labs, even though other species are more similar in certain ways to humans. Researchers’ preference for lab mice, for example, is not unlike the guy looking for his keys under the streetlight because the light was better there. This problem applies to human subjects as well, in that they’ve mostly been white middle-class undergraduate college students in the United States because most research has been done in U.S. colleges; and, in medical studies of the past, this mostly involved men, which meant women in healthcare were treated as men without penises. The first part is known as the WEIRD bias (Western Educated Industrialized Rich Democratic), and it has particularly rocked the world of the social sciences. Take personality studies, where the leading theory has been the Big Five (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism), with an additional factor being added to form the HEXACO model (honesty-humility). Like so much else, it turns out that most of these personality traits don’t replicate outside of WEIRD and WEIRD-like populations. This challenge of non-WEIRD cultures and mentalities has been around a long time, as seen in the anthropological literature, but most experts in other fields have remained largely ignorant of what anthropologists have known for more than a century: that environment shapes mind, perception, and behavior.

The funny thing is that, even when studies have shown this problem with the Big Five, the WEIRD bias continues to hold sway over those trying to explain away the potential implications and to put the non-WEIRD results back into WEIRD boxes. This is done by asserting that the bad results are simply caused by social desirability bias and acquiescence bias, since the answers given by non-WEIRD individuals seem contradictory. The researchers and interpreters of the research refuse to take the results at face value, refuse to give the benefit of the doubt that these non-WEIRD people might be accurately reporting their experience. Pointing to these biases almost grasps what is going on, since these biases are about context, but it comes close only to miss the point. Non-WEIRD cultures and mentalities tend to be more context-dependent and so, unsurprisingly, give varying responses in being sensitive to how questions are asked, whereas the WEIRD egoic abstraction of rules and principles tends to operate the same across contexts. Only a highly WEIRD person would think it even possible to discover something entirely unrelated to context.

WEIRD personality traits are a kind of psychological rule-orientation: the individual adheres to a psychological heuristic of cognitive behavior, a strict and rigid maintenance of thought patterns that calcifies into an identity formation. The failure of cross-cultural understanding is that the very concept of a stable, unchanging personality might itself be part of the WEIRD bias and an exaggerated extension of the larger Axial Age shift when the ego theory of mind took hold, what some call Jaynesian consciousness in reference to Julian Jaynes’s theory about the disappearance of the bicameral mind, itself a variation of the bundle theory of mind. This was then magnified by mass literacy, beginning with the Protestant Reformation, which alters brain structure, as argued by Joseph Henrich. It might be that those very far distant from WEIRD culture not only lack WEIRD-style personality traits but also lack egoic personality structures. Most WEIRD people can’t acknowledge non-WEIRD mentalities, much less grok what they mean and how to imaginatively empathize with them. The sad part is this also demonstrates a lack of self-awareness, as the bundled mind essentially exists in all of us, something that can be observed by anyone looking into their own psyche — this is why contemplative traditions like Buddhism adhere to the bundle theory of mind.

Another explanation of this psychological change of personality traits is that agriculture and later industrialization increased labor specialization that generally passed down the generations. These work niches were originally, and largely still are, occupied by specific families, kin networks, castes, and communities over centuries or longer (e.g., feudal serfs and factory workers). This formed a stable environment and a stable culture that shaped the human psyche according to what was required. It is the opposite of hunter-gatherers, who are forced to be generalists in doing a wide variety of work. Agriculture had led to some gender specialization, but even that specialization was often limited. It is definitely true, though, that hunter-gatherers are far less specialized, with some like the egalitarian Piraha having little specialization at all, along with no permanent authority of any kind. It’s possible that this represents how humanity lived for most of evolution, when food was more abundant and life easier, as is the case where the Piraha live along a river surrounded by lush jungle. The study of the Piraha has helped challenge one area of WEIRD bias, that of seeing the world through a highly recursive literary culture. The Piraha apparently lack linguistic recursion; i.e., embedded phrases. By the way, they are an animistic culture with the typical bundled mind as overt 4E cognition (embodied, embedded, enactive, and extended). Such animistic cultures allow for personality fluidity, sometimes temporary possessions and at other times permanent identity changes.

Even gender specialization might be a somewhat recent invention, corresponding to the invention of the bow and arrow. For most of human existence, humans hunted with spears, and the evidence now points to spear hunting having required the whole tribe, including women. Some of the earliest rock art also portrays men holding the hands of children, which indicates that men were either involved with childcare or not kept separate from it, maybe because the children had to be brought along on the hunt with the whole tribe. So, even the theory that there are two genetically-determined personality types based on men hunting and women gathering rests on relatively recent changes. By the way, those changes were caused by the megafauna die-off. Smaller game replaced the megafauna, and hunting smaller game motivated the development of new hunting tools and techniques. The bow and arrow, once invented, allowed individuals to hunt alone, and this was more often an activity of men. This forced women to take up a separate labor niche. The lower nutrition level of lean small game also made necessary a greater reliance on plant foods, which meant horticulture and later agriculture. The plow, like the bow and arrow, created another domain of men’s work and further reinforced gender division.

The point is that not all hunting is the same, and so these different practices would create different personality structures. The same was probably true of gathering, particularly in terms of how early humans were also meat scavengers. As for the effect of the agricultural revolution, this is reminiscent of research done on wheat and rice farming in China. What was found is that the two populations fell into the stereotypical patterns of Western and Eastern thinking, with wheat-based populations having less context-dependent thinking and rice-based populations emphasizing context, even though both populations were Chinese. The explanation is that wheat farming is typically done by one person alone working a plow or now a tractor, whereas rice farming requires highly organized collective labor. Interestingly, China stands out in that psychopathy is found equally among both genders, unlike in the West and some other places where it is disproportionately found among males. It would be interesting to study whether this is primarily an effect of the larger populations involved in rice-growing and the culture that has developed around it. On a related note, research does show higher rates of psychopathy in urban areas than in rural areas. Is this simply because psychopaths prefer to remain anonymous in cities, or is there something about city life that promotes psychopathic neurocognition?

Anyway, wheat farming is as different from rice farming as bow-and-arrow hunting is from spear hunting. What stands out is that both rice farming and spear hunting are collective activities involving both genders, but wheat farming and bow-and-arrow hunting can be solitary activities that have tended to be done by men. In Western Europe, there never was rice farming. And, unlike certain populations, spear hunting in the West probably hasn’t been common in recent history. Yet there are still spear-hunting tribes in various places. Some of those also do persistence hunting, probably the original form of hunting. Anyway, hunter-gatherers in general need more adaptable minds because they are dealing with diverse tasks, often over large, diverse territories. This requires a more fluid and shifting mentality, one where the very concept of stable personality traits maybe simply does not apply to the same extent. Even in the West, research shows that personality traits can change over a lifetime and under different conditions, such as how liberals can basically turn into conservatives after a few beers. But it is true that modern WEIRD conditions are much more stable, with narrow niches of work and living, often with racial and class segregation, not to mention the repetitive nature of modern life with little change in activities from day to day, season to season.

This brings us to the worries some had in early modernity. Adam Smith thought public education was necessary because repetitive factory work made people stupid, which might be simply another way of saying that those individuals lose, or else never develop, cognitive flexibility, cognitive complexity, and cognitive diversity. Karl Marx explained this in terms of the transition from traditional labor, where an individual constructed a product from beginning to end, often involving multiple complex steps with various tools and techniques, each requiring different physical and cognitive skills. This gave the individual a great sense of accomplishment and pride, not to mention autonomy, as to be a tradesman was to have immense skill. The dumbing down of the workforce with industrial labor may have contributed to the WEIRD mentality. Even the average office worker experienced this narrowing down of activity. This allowed moderns to specialize, but in doing so they sacrificed all other aspects of development. This relates to the creation of stupid smart people, those who are only capable of doing one thing well but otherwise are clueless. It’s not hard to see how this has forced people into niche personalities and hence made possible theories about how to categorize such personalities.

* * *

Cognitive Scientist Shows How Culture Shapes Personality Traits
By Elizabeth Arakelian

Complex societies produce people with more varied personalities. […] But this covariation is neither random nor easily explained by genes. The social and ecological environments in which we develop, the scientists said, have a lot to do with how we develop. Our personalities are created by the patterns of behavior we exhibit that are relatively stable over time. But what creates those patterns, and why do they persist?

That’s the question Smaldino is exploring with collaborators from UC Santa Barbara, California State University Fullerton and the University of Richmond. Their research, published in the journal Nature Human Behaviour, suggests societies differ in the personality profiles of their members because of the different sociological niches in those societies. The diverse niches in a society — the occupational, social and other ways people navigate through daily life — constrain how an individual’s personality can develop.

Psychologists have traditionally relied upon the statistically derived “Big Five” personality traits to structure their research: openness, conscientiousness, extraversion, agreeableness and neuroticism.

Smaldino and his colleagues question the universality of this model in their work, instead exploring why certain traits — such as trust and sympathy or impulsivity and anxiety — bundle together as they do in particular places.

The researchers looked at personality data from more than 55 societies to show that more complex societies — those with a greater diversity of socioecological niches — tended to have less covariation among behavioral traits, leading to the appearance of more broad personality factors. They developed a computer model to create simulated environments that varied in their number of niches, which demonstrated the plausibility of their theory.

“The importance of socioecological niches basically comes down to this: How many ways are there to be a person in a given culture?” Smaldino said. “What are the number of successful strategies one can use to thrive? If you’re in a complex society, like the wealthy parts of America, there are just myriad ways to be.

“No matter how idiosyncratic you are, you can find a community that accepts you. On the other end of the spectrum, say in a small-scale foraging society, your behaviors are going to be a lot more constrained. This affects the ways in which behaviors cluster together, and the patterns that manifest as personality characteristics.”

Tests For the ‘Big Five’ Personality Traits Don’t Hold Up In Much of the World
by Megan Schmidt

So, why doesn’t the Big Five test hold up around the world? Lead author Rachid Laajaj, an economics researcher at the University of Los Andes in Colombia, said many of the reasons are rooted in literacy and education barriers. Many personality tests used in WEIRD countries are intended to be self-administered, designed for people who can read and write. But because of lower literacy rates in developing countries, tests may need to be given verbally. This introduces the possibility of translation or phrasing differences that could skew results.

Researchers also think that face-to-face questioning allows social desirability bias to creep into the process. This means that respondents may try to interpret social cues for a “right answer” or give answers they think would be viewed more favorably by others.

“Yea-saying,” or the tendency to agree with a statement even if it’s untrue, is also more common in developing countries, where there’s less access to education, the researchers say.

“People may have a harder time understanding abstract questions. Acquiescence bias may be accentuated when people do not fully understand, in which case it feels safer to just agree,” Laajaj said.

Additionally, the idea of personality tests — or personality itself — may not be a natural concept everywhere. Understandably, people who aren’t familiar with the idea of personality testing might be a bit wary of revealing personal details about themselves.

“Imagine that you live in a poor area and someone comes to you to ask you a bunch of questions, such as how hardworking you are, whether you get stressed easily or whether you are a polite person. If it is not common for you to fill out surveys, or if it’s not clear what will be done with it, you may, for example, care more about giving a good impression than being completely truthful,” Laajaj said.

Personality is not only about who but also where you are
by Dorsa Amir

To understand why industrialisation might be an influential force in the development of behaviour, it’s important to understand its legacy in the human story. The advent of agriculture 10,000 years ago launched perhaps the most profound transformation in the history of human life. No longer dependent on hunting or gathering for survival, people formed more complex societies with new cultural innovations. Some of the most important of these innovations involved new ways of accumulating, storing and trading resources. One effect of these changes, from a decision-making standpoint, was a reduction in uncertainty. Instead of relying on hard-to-predict resources such as prey, markets allowed us to create larger and more stable pools of resources.

As a result of these broader changes, markets might have also changed our perceptions of affordability. In WEIRD societies with more resources (remember that the R in WEIRD stands for rich) kids might feel that they can better afford strategies such as patience and risk-seeking. If they get unlucky and pull out a green marble and didn’t win any candy, that’s okay; it didn’t cost them that much. But for Shuar kids in the rainforest with less resources, the loss of that candy is a much bigger deal. They’d rather avoid the risk.

Over time, these successful strategies can stabilise and become recurrent strategies for interacting with our world. So, for instance, in an environment where the costs of waiting are high, people might be consistently impatient.

Other studies support the notion that personality is shaped more by the environment than previously thought. In work among Indigenous Tsimané adults in Bolivia, anthropologists from the University of California, Santa Barbara found weak support for the so-called ‘Big Five’ model of personality variation, which consists of openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. Similar patterns came from rural Senegalese farmers and the Aché in Paraguay. The Big Five model of personality, it turns out, is WEIRD.

In another recent paper, the anthropologist Paul Smaldino at the University of California, Merced and his collaborators followed up on these findings further, relating them to changes that were catalysed by industrialisation. They argue that, as societies become more complex, they lead to the development of more niches – or social and occupational roles that people can take. Different personality traits are more successful in some roles than others, and the more roles there are, the more diverse personality types can become.

As these new studies all suggest, our environments can have a profound impact on our personality traits. By expanding the circle of societies we work with, and approaching essentialist notions of personality with skepticism, we can better understand what makes us who we are.

A general theory of personality based on social selection and life-history theory
by Andreas Hofer

When it comes to personality psychology, the Big 5 (or Five-Factor Model/FFM) are still considered the gold standard, and many other personality tests, like the Myers-Briggs (MBTI), are considered pseudoscience. The FFM is even more useful and has more predictive power when a sixth dimension is added: honesty-humility (HEXACO model).

However, adding new personality dimensions is of little use when it comes to understanding human nature, as not even five factors are human universals. Two of the factors that are often associated with mental disorders (neuroticism and openness to experience) never even show up in non-Western societies, which are called “WEIRD” (Western, educated, industrialized, rich and democratic) by Joseph Henrich in The WEIRDest People in the World (2020). Henrich points out the Big 5 are indeed the WEIRD 5, as they are by no means human universals. Some societies yield only three or four factors. Subsistence-level economies often only have two factors. The Tsimane’ practise subsistence farming, and Henrich writes about them:

So, did the Tsimane’ reveal the WEIRD-5? No, not even close. The Tsimane’ data reveal only two dimensions of personality. No matter how you slice and dice the data, there’s just nothing like the WEIRD-5. Moreover, based on the clusters of characteristics associated with each of the Tsimane’’s two personality dimensions, neither matches up nicely with any of the WEIRD-5 dimensions […] these dimensions capture the two primary routes to social success among the Tsimane’, which can be described roughly as “interpersonal prosociality” and “industriousness.” The idea is that if you are Tsimane’, you can either focus on working harder on the aforementioned productive activities and skills like hunting and weaving, or you can devote your time and mental efforts to building a richer network of social relationships.

Rice, Psychology, and Innovation
by Joseph Henrich

Decades of experimental research show that, compared to most populations in the world, people from societies that are Western, Educated, Industrialized, Rich, and Democratic (WEIRD) (4) are psychologically unusual, being both highly individualistic and analytically minded. High levels of individualism mean that people see themselves as independent from others and as characterized by a set of largely positive attributes. They willingly invest in new relationships even outside their kin, tribal, or religious groups. By contrast, in most other societies, people are enmeshed in dense, enduring networks of kith and kin on which they depend for cooperation, security, and personal identity. In such collectivistic societies, property is often corporately owned by kinship units such as clans; inherited relationships are enduring and people invest heavily in them, often at the expense of outsiders, strangers, or abstract principles (4).

Psychologically, growing up in an individualistic social world biases one toward the use of analytical reasoning, whereas exposure to more collectivistic environments favors holistic approaches. Thinking analytically means breaking things down into their constituent parts and assigning properties to those parts. Similarities are judged according to rule-based categories, and current trends are expected to continue. Holistic thinking, by contrast, focuses on relationships between objects or people anchored in their concrete contexts. Similarity is judged overall, not on the basis of logical rules. Trends are expected to be cyclical.

Various lines of evidence suggest that greater individualism and more analytical thinking are linked to innovation, novelty, and creativity (5). But why would northern Europe have had greater individualism and more analytical thinking in the first place? China, for example, was technologically advanced, institutionally complex, and relatively educated by the end of the first millennium. Why would Europe have been more individualist and analytically oriented than China? […]

Sure enough, participants from provinces more dependent on paddy rice cultivation were less analytically minded. The effects were big: The average number of analytical matches increased by about 56% in going from all-rice to no-rice cultivation. The results hold both nationwide and for the counties in the central provinces along the rice-wheat (north-south) border, where other differences are minimized.

Participants from rice-growing provinces were also less individualistic, drawing themselves roughly the same size as their friends, whereas those from wheat provinces drew themselves 1.5 mm larger. [This moves them only part of the way toward WEIRD people: Americans draw themselves 6 mm bigger than they draw others, and Europeans draw themselves 3.5 mm bigger (6).] People from rice provinces were also more likely to reward their friends and less likely to punish them, showing the in-group favoritism characteristic of collectivistic populations.

So, patterns of crop cultivation appear linked to psychological differences, but can these patterns really explain differences in innovation? Talhelm et al. provide some evidence for this by showing that less dependence on rice is associated with more successful patents for new inventions. This doesn’t nail it, but is consistent with the broader idea and will no doubt drive much future inquiry. For example, these insights may help explain why the embers of an 11th century industrial revolution in China were smothered as northern invasions and climate change drove people into the southern rice paddy regions, where clans had an ecological edge, and by the emergence of state-level political and legal institutions that reinforced the power of clans (7).

We Are All White Liberals Now

“Before asking who should speak for liberalism, we should note that liberalism is doing very well on its own account. Almost everyone is a liberal, although nobody likes the label. This is largely because no matter what sort of liberal you are, there is another sort of liberal that you are not. . . In political terms, liberals are citizens of anywhere and therefore citizens of nowhere. They are the Ishmaels of political life, the wandering spirits, an influence in all tribes but a dominant force in none.”

Philip Collins, How did the word “liberal” become a political insult?

I previously criticized Zach Goldberg’s article on white liberals. He wanted to portray them not only as extremist ideologues but also as psychologically abnormal. At times, it comes across as a soft-pedaled conservative diatribe, but some of his analysis brings up good points.

It’s even more interesting when we ignore his conclusion and, instead, acknowledge that the average American is in general agreement with white liberals. White liberals may be a minority in the strict sense, particularly limiting ourselves to self-identified liberals, but “white liberalism” apparently has become the majority position. We are all white liberals now or most of us are, including an increasing number of non-whites and non-liberals. Embrace your inner white liberal!

Anyway, the relevant takeaway is that a real change is happening. I don’t know that white liberals are the canary in the coal mine or otherwise deserving of special treatment. But because the mainstream is so obsessed with them, they get all the credit and blame for so much that is happening. So, looking at this one demographic might tell us something about Americans in general and where American society is heading.

Considering most Americans are further left than the mainstream would like to admit, this really isn’t fundamentally an issue of white liberalism at all, of course. It’s just a way of distracting from the decades-long leftward lurch of public opinion and a shifting psychological profile of personality traits and moral values. That is all the more reason to look at what is happening among white liberals, if we take them as representative of something far broader. For all the condemnation they get, and some of it deserved, they are fascinating creatures.

Supposedly, for the first time in history, there is a demographic that has a pro-outgroup bias. White liberals state a more positive view of those not like them than those like them. What is not mentioned are other demographics like non-white liberals and leftists who might show this tendency even more strongly. There isn’t necessarily anything special about white liberals. It’s simply liberal-mindedness taking ever stronger hold in the American psyche and this showing up clearly first in particular demographics.

Goldberg speculates that the cause is the internet. White liberals are leading the way in embracing the new media, although that is probably true of social liberals in general (black liberals, Asian-American libertarians, Latinx social democrats, etc). Social liberals tend to be the most liberal-minded in being open to new experiences (FFM Openness, MBTI Intuition, etc). That openness, in this age of media proliferation, contributes to greater exposure to different views and ideas. For all our fear that social media feeds into echo chambers of disinfo and extremism, so far the internet has also been a powerful force of liberalization.

It’s easy to forget how radically liberal our society has become. Most American conservatives today are more liberal or even leftist than the average liberal was maybe only a century ago. So much of what we’ve come to regularly question, doubt, and challenge was simply accepted as normal reality and undeniable truth not that long ago. The American majority, white and non-white, is now far to the left of John Locke, the prototype of Anglo-American white liberalism. In place of that earliest and most respectable expression of Enlightenment thought, we are ever more embracing the radical and rabble-rousing liberal vision of Thomas Paine, the most important American founder now forgotten.

We can be transformed by this revolutionary liberal-mindedness or we can be shaped in reaction to it. But in either case, it has come to define our entire society. Indeed, we have all become white liberals, whatever that means. The white liberal is the symbolic force and totemic spirit of American society. Let us not forget, though, that the underlying moral potency of this white liberalism was always built around the radical other, slowly but surely brought into the fold in redefining not only what it means to be American but, more importantly, what it means to be human.

The threat and promise of a more inclusive empathy and more expansive identity was always the seed of irritation around which the pearl of idealism grew, from the Axial Age to the modern revolutionary era. In the egalitarian conviction of Thomas Paine, maybe we are coming closer to the time when we can all declare that we are citizens of the world. Imagine a global society where nearly everyone had a pro-outgroup bias, where a compassionate sense of the other was the moral mirror that we held up to ourselves, where we finally lived up to Jesus’ radical teaching that we are judged by the treatment of the least among us. Imagine…

* * *

America’s White Saviors
by Zach Goldberg

The Moral Foundations of the Modern White Liberal

A large body of work in this field consistently finds that liberals score significantly higher than conservatives on the personality trait “agreeableness” and more specifically on its sub-dimension of “compassion.” In social science studies like these, agreeableness represents the tendency to be altruistic, tender-minded, cooperative, trusting, forgiving, warm, helpful, and sympathetic. The trait is closely linked with empathy and compassion toward the suffering of others. […]

A substantial line of research reveals that, out of these moral considerations, liberals generally attach the most importance to the foundations of harm/care and fairness. While conservatives also tend to rate these foundations as important, their moral compass is broader and includes a greater concern for violations of purity (e.g., “whether or not someone was able to control his or her desires”), loyalty (e.g., “whether or not someone did something to betray his or her group”), and authority (e.g., “whether or not someone respected the traditions of society”). As with empathy, the liberal concern for harm/care and fairness relates to a larger set of targets (e.g., animals, the needy in other countries) than it does for conservatives, who are generally more concerned with threats to the in-group. The liberal conception of ‘harm’ is also far broader, which lowers the threshold at which their moral alarms are triggered.

[…] white liberals—especially the self-identified “very liberal”—are significantly more likely to report intense or extremely frequent feelings of tenderheartedness, protectiveness, and sensitivity when considering the circumstances of racial and ethnic out-group members. A related graph below displays the average differences in feelings of warmth (measured along a 0-100 scale) toward whites vs. nonwhites (i.e., Asians, Hispanics, and blacks) across different subgroups.

Remarkably, white liberals were the only subgroup exhibiting a pro-outgroup bias—meaning white liberals were more favorable toward nonwhites than toward whites, the only group to show this preference for a group other than their own. Indeed, on average, white liberals rated ethnic and racial minority groups 13 points (or half a standard deviation) warmer than whites. As is depicted in the graph below, this disparity in feelings of warmth toward ingroup vs. outgroup is even more pronounced among whites who consider themselves “very liberal,” where it widens to just under 20 points. Notably, while white liberals have consistently evinced weaker pro-ingroup biases than conservatives across time, the emergence and growth of a pro-outgroup bias is actually a very recent, and unprecedented, phenomenon.

Not surprisingly, data from the American National Elections Studies (ANES) shows white liberals scoring significantly higher on measures of ‘white privilege awareness’ (e.g., ‘how much does being white grant you unearned privileges in today’s society?’) and ‘white guilt’ (e.g., ‘how guilty do you feel about the privileges and benefits you receive as a white American?’). Both of these variables are strongly correlated with measures of liberal racial sympathy (or what is more traditionally referred to as ‘low racial resentment’)–the white liberal scores on which reached an ANES-high in 2016. Previous research has shown that these collective moral emotions, triggered by historical wrongdoing and perceptions that an in-group’s advantages and privileges are illegitimate, can increase support for reparative and humanitarian social policies. That is exactly what has happened in recent years as white liberals have become increasingly supportive of affirmative action, reparations, and increased immigration.

The Social Media Accelerant

[…] Data from the General Social Survey reveals a roughly 170% increase in the number of weekly hours, from 5 to 13.6, that people reported spending on the internet between 2000-2018. Between 2006 and 2018, the percentage of respondents listing the internet as their primary news source jumped roughly 33 percentage points from 14.2% to 47.6%. Turning to social media, data I pooled from the Pew Research Center shows a similar increase in the percentage of people reporting social media use between 2008-2016, from 34.8% to 73%. These increases have occurred among all whites, regardless of political affiliation, but not to the same degree. White liberals place ahead of conservatives on every one of these measures of internet use and social media exposure. They spend significantly more weekly hours on the internet; are significantly more likely to list the internet as their primary news source; and significantly more likely to consume news from and be politically active on social media. A 2016 Pew Racial Attitudes survey further shows that of the 74% of white liberals (vs. 55% of white conservatives) reporting social media use, roughly 44% (vs. 30% of white conservatives) say that at least some of the posts are about race or race relations. And, more generally, 70% of white liberals (vs. 51% of white conservatives) report discussing race relations or racial inequality with others either “sometimes” (39%) or “often” (31%).

An analysis of GoogleTrends data, graphed below, shows that the frequency of searches for race-related and “woke” terms has grown substantially since the beginning of the decade—a period that happens to coincide with the social media boom and the emergence of so-called hashtag activism (e.g., Occupy Wall Street, Black Lives Matter). This period also saw the rise of the Huffington Post—an online progressive blog and news site that prolifically opines on race-related issues. Whereas just 13% of white liberals reported regularly visiting the site in 2012, over 30% did in 2016. A similar pattern is observed for digital readership of The New York Times (NYT), which grew from 16% to 31% among white liberals between 2012 and 2016—during this same period, according to a recent content analysis I conducted—the percentage of Times articles mentioning race-related and woke terms saw unprecedented growth. For instance, whereas just 0.4% (or 334) of articles referred to racism in 2012, this figure had doubled by 2015 (to 0.87% or 813) and reached over 2% (or 2,353) by 2018. Interestingly, the number of monthly NYT articles mentioning racism also closely tracks Google search interest in the term.

Henry Adams on the Bundled Mind

Of all forms of pessimism, the metaphysical form was, for a historian, the least enticing. Of all studies, the one he would rather have avoided was that of his own mind. He knew no tragedy so heartrending as introspection, and the more, because–as Mephistopheles said of Marguerite–he was not the first. Nearly all the highest intelligence known to history had drowned itself in the reflection of its own thought, and the bovine survivors had rudely told the truth about it, without affecting the intelligent. One’s own time had not been exempt. Even since 1870 friends by scores had fallen victims to it. Within five-and-twenty years, a new library had grown out of it. Harvard College was a focus of the study; France supported hospitals for it; England published magazines of it. Nothing was easier than to take one’s mind in one’s hand, and ask one’s psychological friends what they made of it, and the more because it mattered so little to either party, since their minds, whatever they were, had pretty nearly ceased to reflect, and let them do what they liked with the small remnant, they could scarcely do anything very new with it. All one asked was to learn what they hoped to do.

Unfortunately the pursuit of ignorance in silence had, by this time, led the weary pilgrim [i.e., himself] into such mountains of ignorance that he could no longer see any path whatever, and could not even understand a signpost. He failed to fathom the depths of the new psychology, which proved to him that, on that side as on the mathematical side, his power of thought was atrophied, if, indeed, it ever existed. Since he could not fathom the science, he could only ask the simplest of questions: Did the new psychology hold that the ψυχή–soul or mind–was or was not a unit? He gathered from the books that the psychologists had, in a few cases, distinguished several personalities in the same mind, each conscious and constant, individual and exclusive.

The fact seemed scarcely surprising, since it had been a habit of mind from earliest recorded time, and equally familiar to the last acquaintance who had taken a drug or caught a fever, or eaten a Welsh rarebit before bed; for surely no one could follow the action of a vivid dream, and still need to be told that the actors evoked by his mind were not himself, but quite unknown to all he had ever recognized as self. The new psychology went further, and seemed convinced that it had actually split personality not only into dualism, but also into complex groups, like telephonic centres and systems, that might be isolated and called up at will, and whose physical action might be occult in the sense of strangeness to any known form of force.

Dualism seemed to have become as common as binary stars. Alternating personalities turned up constantly, even among one’s friends. The facts seemed certain, or at least as certain as other facts; all they needed was explanation.

This was not the business of the searcher of ignorance, who felt himself in no way responsible for causes. To his mind, the compound ψυχή took at once the form of a bicycle-rider, mechanically balancing himself by inhibiting all his inferior personalities, and sure to fall into the sub-conscious chaos below, if one of his inferior personalities got on top. The only absolute truth was the sub-conscious chaos below, which every one could feel when he sought it.

Whether the psychologists admitted it or not, mattered little to the student who, by the law of his profession, was engaged in studying his own mind. On him, the effect was surprising. He woke up with a shudder as though he had himself fallen off his bicycle. If his mind were really this sort of magnet, mechanically dispersing its lines of force when it went to sleep, and mechanically orienting them when it woke up–which was normal, the dispersion or orientation? The mind, like the body, kept its unity unless it happened to lose balance, but the professor of physics, who skipped on a pavement and hurt himself, knew no more than an idiot what knocked him down, though he did know–what the idiot could hardly do–that his normal condition was idiocy, or want of balance, and that his sanity was unstable artifice. His normal thought was dispersion, sleep, dream, inconsequence; the simultaneous action of different thought-centres without central control. His artificial balance was acquired habit. He was an acrobat, with a dwarf on his back, crossing a chasm on a slack-rope, and commonly breaking his neck.

By that path of newest science, one saw no unity ahead–nothing but a dissolving mind–and the historian felt himself driven back on thought as one continuous Force, without Race, Sex, School, Country, or Church.

The Education of Henry Adams
Chapter XXIX
“The Abyss of Ignorance” (1902)
pp. 432-434

(Credit to Ron Pavellas for bringing this passage to my notice.)

The Drugged Up Birth of Modernity

Below is a passage from a book I got for my birthday. I was skimming through this tome and came across a note from one of the later chapters. It discusses a theory about how new substances, caffeine and sugar, helped cause changes in mentality during colonialism, early modernity, and industrialization. I first came across a version of this theory back in the late ’90s or early Aughts, in a book I no longer own and haven’t been able to track down since.

So, it was nice coming across this brief summary with references. But in the other version, the argument was that these substances (including nicotine, cocaine, etc., along with a different kind of drug like opium) were central to the Enlightenment Age and the post-Enlightenment world, something only suggested by this author. This is a supporting theory for my larger theory on addictive substances, including some thoughts on how they replaced psychedelics, as written about previously: Sugar is an Addictive Drug, The Agricultural Mind, Diets and Systems, and “Yes, tea banished the fairies.” It has to do with what has built the rigid boundaries of modern egoic consciousness and hyper-individualism. It was a revolution of the mind.

Many have made arguments along these lines. It’s not hard to make the connection. Diverse leading figures over history have observed the important changes that followed as these substances were introduced and spread. In recent years, this line of thought has been catching on. Michael Pollan came out with an audiobook about the role coffee has played, “Caffeine: How Coffee and Tea Created the Modern World.” I haven’t listened to it because it’s only available through Audible and I don’t do business with Amazon, but reviews of it and interviews with Pollan about it make it sound fascinating. Pollan has many thoughts about psychedelics as well, although I’m not sure if he has talked about psychedelics in relation to stimulants. Steven Johnson has also written and talked about this.

As a side note, there is also an interesting point that connects rising drug addiction with an earlier era of moral panic, specifically a crisis of identity. There was a then new category of disease called neurasthenia, as first described by George Miller Beard. It replaced earlier notions of ‘nostalgia’ and ‘nerves’. In many ways, neurasthenia could be thought of as some kind of variant of mood disorder with some overlap with depression. But a passage from another work, also included below, indicates that drug addiction was closely linked in this developing ideology about the diseased mind and crippled self. At that stage, the relationship wasn’t entirely clear. All that was understood was that, in a fatigued and deficient state, increasing numbers turned to drugs as a coping mechanism.

Drugs may have helped to build modern civilization, but they quickly came to be seen as a threat as well. This concern was implicitly understood, and sometimes overtly acted upon, right from the beginning. With the colonial trade, laws were often quickly put in place to make sugar and coffee controlled substances. Sugar for a long time was only sold in pharmacies, and a number of fearful rulers tried to ban coffee outright, not unlike how psychedelics were treated in the 1960s. It’s not only in retrospect that these substances appear radicalizing and revolutionary within the mind and society. Many at the time realized that these addictive and often stimulating drugs (and one might even call sugar a drug) were powerful substances from the beginning. That is what made them such profitable commodities, requiring an emergent militaristic capitalism that was brutally violent in fulfilling this demand with forced labor.

* * *

The WEIRDest People in the World:
How the West Became Psychologically Peculiar and Particularly Prosperous
by Joseph Henrich
Ch. 13 “Escape Velocity”, section “More Inventive?”
p. 289, note 58

People’s industriousness may have been bolstered by new beverages: sugar mixed into caffeinated drinks—tea and coffee. These products only began arriving in Europe in large quantities after 1500, when overseas trade began to dramatically expand. The consumption of sugar, for example, rose 20-fold between 1663 and 1775. By the 18th century, sugary caffeinated beverages were not only becoming part of the daily consumption of the urban middle class, but they were also spreading into the working class. We know from his famous diary that Samuel Pepys was savoring coffee by 1660. The ability of these beverages to deliver quick energy—glucose and caffeine—may have provided innovators, industrialists, and laborers, as well as those engaged in intellectual exchanges at cafés (as opposed to taverns), with an extra edge in self-control, mental acuity, and productivity. While sugar, coffee, and tea had long been used elsewhere, no one had previously adopted the practice of mixing sugar into caffeinated drinks (Hersh and Voth, 2009; Nunn and Qian, 2010). Psychologists have linked the ingestion of glucose to greater self-control, though the mechanism is a matter of debate (Beedie and Lane, 2012; Gailliot and Baumeister, 2007; Inzlicht and Schmeichel, 2012; Sanders et al., 2012). The anthropologist Sidney Mintz (1986, p. 85) suggested that sugar helped create the industrial working class, writing that “by provisioning, sating—and, indeed, drugging—farm and factory workers, [sugar] sharply reduced the overall cost of creating and reproducing the metropolitan proletariat.”

“Mania Americana”: Narcotic Addiction and Modernity in the United States, 1870-1920
by Timothy A. Hickman

One such observer was George Miller Beard, the well-known physician who gave the name neurasthenia to the age’s most representative neurological disorder. In 1871 Beard wrote that drug use “has greatly extended and multiplied with the progress of civilization, and especially in modern times.” He found that drug use had spread through “the discovery and invention of new varieties [of narcotic], or new modifications of old varieties.” Alongside technological and scientific progress, Beard found another cause for the growth of drug use in “the influence of commerce, by which the products of each clime became the property of all.” He thus felt that a new economic interconnectedness had increased both the knowledge and the availability of the world’s regionally specific intoxicants. He wrote that “the ancient civilizations knew only of home made varieties; the moderns are content with nothing less than all of the best that the world produces.” Beard blamed modern progress for increased drug use, and he identified technological innovation and economic interconnectedness as the essence of modernity. Those were, of course, two central contributors to the modern cultural crisis. As we shall see, many experts believed that this particular form of (narcotic) interconnectedness produced a condition of interdependence, that it quite literally reduced those on the receiving end from even a nominal state of independence to an abject dependence on these chemical products and their suppliers.

There was probably no more influential authority on the relationship between a physical condition and its historical moment than George Miller Beard. In 1878 Beard used the term “neurasthenia” to define the “lack of nerve strength” that he believed was “a functional nervous disease of modern, and largely, though not entirely, of American origin.” He had made his vision of modern America clear two years earlier, writing that “three great inventions–the printing press, the steam engine, and the telegraph, are peculiar to our modern civilization, and they give it a character for which there is no precedent.” The direct consequence of these technological developments was that “the methods and incitements of brain-work have multiplied far in excess of average cerebral developments.” Neurasthenia was therefore “a malady that has developed mainly during the last half century.” It was, in short, “the cry of the system struggling with its environment.” Beard’s diagnosis is familiar, but less well known is his belief that a “susceptibility to stimulants and narcotics and various drugs” was among neurasthenia’s most attention-worthy symptoms. The new sensitivity to narcotics was “as unprecedented a fact as the telegraph, the railway, or the telephone.” Beard’s claim suggests that narcotic use might fruitfully be set alongside other diseases of “overcivilization,” including suicide, premarital sex (for women), and homosexuality. As Dr. W. E. Waugh wrote in 1894, the reasons for the emergence of the drug habit “are to be found in the conditions of modern life, and consist of the causative factors of suicide and insanity.” Waugh saw those afflictions as “the price we pay for our modern civilization.”24

Though Beard was most concerned with decreased tolerance–people seemed more vulnerable to intoxication and its side effects than they once were–he also worried that the changing modern environment exacerbated the development of the drug habit. Beard explained that a person whose nervous system had become “enfeebled” by the demands of modern society would naturally turn wherever he could for support, and thus “anything that gives ease, sedation, oblivion, such as chloral, chloroform, opium or alcohol, may be resorted to at first as an incident, and finally as a habit.” Not merely to overcome physical discomfort, but to obtain “the relief of exhaustion, deeper and more distressing than pain, do both men and women resort to the drug shop.” Neurasthenia was brought on “under the press and stimulus of the telegraph and railway,” and Beard believed that it provided “the philosophy of many cases of opium or alcohol inebriety.”25

* * *

Also see:

The Age of Intoxication
by Benjamin Breen

Drugs, Labor and Colonial Expansion
ed. by William Jankowiak and Daniel Bradburd

How psychoactive drugs shape human culture
by Greg Wadley

Under the influence
by Ed Lake

The Enlightenment: Psychoactive Globalisation
from The Pendulum of Psychoactive Drug Use

Tea Tuesdays: How Tea + Sugar Reshaped The British Empire
by Maria Godoy

Some Notes On Sugar and the Evolution of Industrial Capitalism
by Peter Machen

Coffee, Tea and Colonialism
from The Wilson Quarterly

From Beer to Caffeine: The Birth of Innovation
by Peter Diamandis

How caffeine changed the world
by Colleen Walsh

The War On Coffee
by Adam Gopnik

Coffee: The drink of the enlightenment
by Jane Louise Kandur

Coffee and the Enlightenment
by Stephen Hicks

Coffee Enlightenment? – Does drinking my morning coffee lead to enlightenment?
from Coffee Enlightenment

The Enlightenment Coffeehouses
by David Gurteen

How Caffeine Accelerated The Scientific Enlightenment
by Drew Dennis

How Cafe Culture Helped Make Good Ideas Happen
from All Things Considered

Coffee & the Age of Reason (17th Century)
from The Coffee Brewers

Philosophers Drinking Coffee: The Excessive Habits of Kant, Voltaire & Kierkegaard
by Colin Marshall

Coffee Cultivation and Exchange, 1400-1800
from University of California, Santa Cruz

Recursive Knots of the Mind

Listening to the news media, one often finds very little news actually being reported. It’s not for a lack of interesting stories and important issues to report upon. There is more than enough material to fill up the 24/7 news cycle without any repetition. On a particular news outlet, there was a panel discussing one of the many topics used to induce viewer outrage and hence advertising engagement (i.e., profit), but it’s not relevant exactly what they were saying. Over the entire segment, there was mostly opinionating and speculating, as expected. There wasn’t much substance. What was interesting is how the media personalities wielded rhetoric.

The closest the viewer got to actual information was a quote that was given no context or additional support. The quoted material, taken in isolation, was immediately submitted to an interpretation that amounted to an accusation of ill intent, treated as proof of guilt. Then that was repeated, such that the interpretation came to be treated as an established fact that stood in place of the quote. The quote itself, which could have been interpreted variously, had been reduced and then expanded upon. The result was a declarative set of claims and conclusions, without any need for further evidence. The quote was discarded, for it was never relevant in the first place. With belief-claim established as pseudo-fact, an entire narrative was spun as melodramatic infotainment.

What stood out was how most statements made were broad, sweeping generalizations and absolute assertions without any sourcing, argument, or qualification. Is that news? Not really. Rather, these agents of corporate media were, step by step, walking the viewers through the social construction of an ideological reality tunnel. The indoctrinated viewer could now re-create the ideological worldview as needed and teach others to do the same. It was fascinating to watch, and impressive in its own way. Yet it’s not as if there was anything brilliant or unusual about that ‘news’ segment or the hacks doing their job. It was all workmanlike but, nonetheless, highly effective in manipulating and molding the audience. It served its purpose.

I’m not sure why that particular segment caught my attention. It was some random thing playing on the television in the background as I was passing by. But something about it caused me to stop and perk up my ears. It got me thinking about the power of language. The thing is, this act of rhetorical manipulation wouldn’t be possible in all languages, although it’s a common feat in the global written languages that have had centuries to be adapted to literacy and the modern media environment. One common feature of the major languages is recursion. It’s so common that some took it as a universal trait of language, based on the assumption that it was built into a language module, a physical structure located somewhere in the human brain. Basically, the theory has been that we humans have a genetically-determined instinct for linguistic recursion, one of the building blocks of rhetoric that allows for these sometimes complex language games of persuasion.

That is the theory based on the linguistic cultures, primarily the United States, in which linguistic studies developed. “We tend to speak in sentences of multiple clauses,” writes Samantha Harvey, “not in clauses that have been separated out. Noam Chomsky has called these multiple clauses instances of recursion, and he thinks they’re what define human language. They reflect our unique ability to position a thought inside a thought, to move from the immediate to the abstract, to infinite other places and times. A circle in a spiral, a wheel within a wheel; a tunnel that you follow to a tunnel of its own. In theory, an infinitely long, recursive sentence is possible, says Chomsky; there is no limit to the mind’s capacity to embed one thought inside another. Our language is recursive because our minds are recursive. Infinitely windmilling” (The Shapeless Unease, A Year of Not Sleeping).

This is, it turns out, not true for all cultures; or so one side has argued in an ongoing debate. With that possibility in mind, Julie Sedivy suggests that, “the languages that many of us have grown up with are very different from the languages that have been spoken throughout the vast majority of human existence” (The Rise and Fall of the English Sentence). Take the example of the Piraha. Their language lacks any evidence of recursion. That isn’t to say the Piraha are incapable of recursive thought, but that is not the same thing as recursive language and what it makes possible. Before exploring linguistic recursion, let’s establish what non-linguistic recursion is. “If you go back to the Pirahã language,” writes Daniel Everett, “and you look at the stories that they tell, you do find recursion. You find that ideas are built inside of other ideas, and one part of the story is subordinate to another part of the story” (Recursion and Human Thought). Such basic recursive thought is true of all human societies and might be the case with some non-human species (Manuel Arturo Abreu, Organisms that do not exhibit recursion in communication still have the capacity for recursion).

There is much debate about who has recursion and who lacks it. Leading experts across numerous fields (linguistics, biology, mathematics, etc.) have yet to come to a consensus on defining recursion, much less agreeing about its universality. Yet others point to the diverse ways recursion might show up: “[W]hen deer look for food in the forest,” Everett mentions, “they often use recursive strategies to map their way across the forest and back, and take little side paths that can be analyzed as recursive paths.” But speaking of early hominids, Everett suggests that, “it would not have been necessary for them to have recursion to have language, at least according to the simple idea of language evolution as a sign progression and supported by some modern languages” (How Language Began: The Story of Humanity’s Greatest Invention). That is to say, language was an early evolutionary development, as was non-linguistic recursion, whereas the combination of the two was a much later cultural development. Linguistic recursion takes a cross-species neurocognitive ability and hijacks it toward advanced cultural ends.

This simple observation that syntactical recursion is culturally-structured, not genetically-determined, has been treated as if radical and heretical. “The dispute over Pirahã is curious in many respects, not least with regard to the fact that Everett is not the first linguist to claim that a language lacks embedded clauses and therewith recursion,” writes Robert D. Van Valin (Recursion and Human Thought). “In a series of important papers published in the late 70’s, the late MIT linguist Kenneth Hale argued that certain Australian Aboriginal languages lack embedding of the type found in Indo-European languages in their complex sentences and furthermore that one of them, Warlpiri, has a completely ‘flat’ syntactic structure. The latter claim was amended somewhat in the published version of the paper, but the point about the complex sentences remained valid. In the mid-1980’s, William Foley, a linguist at the University of Sydney, described Iatmul, a language of Papua New Guinea, as having non-hierarchical clause combining, i.e. no embedding of clauses in complex sentences, hence no recursion in the syntax.”

Beyond a total lack of recursion, there are plenty of other cultures where it’s severely restricted, of which Julie Sedivy gave some examples, from linguist Marianne Mithun, by way of contrast with English: “In English, 34 percent of clauses in conversational American English are embedded clauses. In Mohawk (spoken in Quebec), only 7 percent are. Gunwinggu (an Australian language) has 6 percent and Kathlamet (formerly spoken in Washington state) has only 2 percent. An English speaker might say: Would you teach me to make bread? But a Mohawk speaker would break this down into several short sentences, saying something like this: It will be possible? You will teach me. I will make bread. In English, you might say: He came near boys who were throwing spears at something. A Kathlamet approximation would go like this: He came near those boys. They were throwing spears at something then.”

“So the question arises,” asks Van Valin, “given that such claims go back a good thirty years, and the most important of them was from a former colleague of Chomsky’s, why has Everett’s claim engendered such controversy?” We don’t need to answer that question here, but it’s good to be reminded that this kind of thought about the power of culture, similar to linguistic relativity, is not a new insight. Everett was far from alone in noting the lack of recursion in some cultures. Yet he was viciously attacked by Chomsky and his acolytes. They tried to destroy his reputation. The sense of animosity remains in the field, as it was a fight for control and dominance. It wasn’t only about an obscure theoretical issue but an entire paradigm in framing human nature and the social condition.

What, you might wonder, is this recursion that has become the subject of an academic turf war? Why is it so important and what does it do? Through subordinate clauses with embedded phrases and qualifications, syntactic recursion makes possible the hierarchical ordering of value and meaning. Without it, all that is available for human communication are simple declarative statements, what is called parataxis as opposed to hypotaxis. Hypotactic communication, particularly as developed in written language, allows an immense complexity of linguistic structure and thought-forms, an extension of hypothesis and speculation. Recursion spirals out into entire fantasy worlds of ideological realism that are reified into a perception of supposed objective reality (what Everett calls dark matter of the mind, what Robert Anton Wilson calls a reality tunnel, and what anthropologists call a cosmology), in which we lose the ability to distinguish between the communication and what is communicated. There is the idea that the medium is the message — well, this puts a hurricane-level wind into that sail. This is language as advanced social construction, the foundation upon which are built vast civilizations and empires.
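The structural difference at stake here can be loosely sketched as a data structure. This is only an illustration under my own assumptions (the `Clause` type and `depth` function are hypothetical, not drawn from any linguistics toolkit): paratactic speech is a flat sequence of equally weighted clauses, while syntactic recursion lets one clause embed subordinates to arbitrary depth.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Clause:
    """A clause with optional subordinate (embedded) clauses."""
    text: str
    subordinates: List["Clause"] = field(default_factory=list)

def depth(clause: Clause) -> int:
    """Maximum embedding depth: 1 for a bare paratactic clause."""
    if not clause.subordinates:
        return 1
    return 1 + max(depth(c) for c in clause.subordinates)

# Paratactic rendering (the Kathlamet-style approximation quoted by
# Sedivy): each clause stands alone with equal weight.
paratactic = [
    Clause("He came near those boys."),
    Clause("They were throwing spears at something then."),
]

# Hypotactic rendering (the English version): one clause is
# subordinated inside another, creating a hierarchy of emphasis.
hypotactic = Clause(
    "He came near boys",
    [Clause("who were throwing spears at something")],
)

assert all(depth(c) == 1 for c in paratactic)
assert depth(hypotactic) == 2
```

The point is purely structural: nothing in the flat list ranks one clause above another, whereas the nested version builds in exactly the subordination that, as argued above, recursion makes possible.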

Paratactic speech, on the other hand, hews more closely to direct experience and so keeps the individual grounded in immediate and present reality. This is why Buddhists use paratactic language, in eschewing recursion, as a mindfulness practice to dismantle the imagined boundaries of the ego-mind. They do this by describing experience in the simplest of terms, such as “anger is arising here and now” and not “I’m angry, as happens every Friday, because the boss always finds a way to give me work for the weekend, even though I told him I had family visiting, and any of my coworkers could do this work but the boss always expects me to do it.” The Buddhist rhetorical practice eliminates the interpretation, such as in English, of subject-action-result. Anger is not a thing or an event. It is simply an experience that passes.

This might relate to why, in lacking linguistic recursion, the Piraha appear less likely to get stuck in states of stress and worry, and less likely to suffer from mood disorders such as anxiety and depression, with suicide being entirely unknown among them. There aren’t even what we might consider fundamental stages of development, as when we speak of the terrible twos and teenage angst. The Piraha go from toddlerhood to being weaned and then are basically a part of the adult world, with little fuss or frustration. And as adults, they get along well, such as apparently not holding onto anger with resentment and grudges.

This was exemplified by an incident that Daniel Everett recorded: “‘I mean, what are you going to do to him for shooting your dog?’ ‘I will do nothing. I won’t hurt my brother. He acted like a child. He did a bad thing. But he is drunk and his head is not working well. He should not have hurt my dog. It is like my child.’ Even when provoked, as Kaaboogí was now, the Pirahãs were able to respond with patience, love, and understanding, in ways rarely matched in any other culture I have encountered” (Don’t Sleep, There Are Snakes).

The same easygoing attitude was demonstrated in how they deal with a diversity of difficulties and conflicts, such as their informal process of ‘divorce’, where the abandoned partner grieves loudly and publicly for a few days but is perfectly fine when their former spouse returns to the village with a new spouse. Their serial monogamy, by the way, in such small tribes means that the majority of Piraha have had sex with the majority of other Piraha at some point — they are a close community. Maybe their language has much to do with their being able to simply experience emotions and then let them go. The lack of recursion might keep them from easily getting stuck in constructed narratives; and it could be noted that, although familiar with stories told by outsiders that they occasionally repeat, they lack a native storytelling tradition. This might indicate a close connection between recursion and narrative.

But recursion, for all the attention it gets, is only one aspect of this far different cultural mindset. In making this point, Arika Okrent writes: “Ironically, in the 2005 article that began the whole Chomsky/Everett debate, Everett barely touched on the notion that the Pirahã’s lack of recursion might challenge the theory of universal grammar. Instead, his aim was to show that the Pirahã cultural commitment to immediate, concrete experience permeated the very structure of their language: not embedding one phrase inside another was just one of the many ways that the Pirahã prioritised the here and now. Other evidence he adduced for this priority included the simplicity of the kinship system, the lack of numbers, and the absence of fiction or creation myths” (Is linguistics a science?). It’s an entire cultural worldview.

Another linguistic factor is the requirement that one speak very specifically when describing truth claims and attributing their sources. The Piraha don’t and, according to the limits built into their language, can’t speak in broad generalizations and abstractions. Their knowledge, although encyclopedic in relation to the world around them, is limited to what they have directly experienced, what they can deduce from what they have directly experienced, or what someone they personally know has directly experienced. They don’t even have any equivalent to ancestral or mythological knowledge about a perceived distant past. Everything is referred to in relation to its proximity to the broad here and now of living memory. This greater concrete specificity is observed among some other hunter-gatherer languages, such as the Peruvian Matses studied by David William Fleck (A grammar of Matses).

Such highly qualified but non-recursive language can also affect how one orients within time and space. We Westerners are used to the egoic perspective because it is built into our language, including referencing direction according to our individual view of the world. In giving directions, we’ll speak of turning left or right and going straight. But some tribal cultures like the Australian Guugu Yimithirr, as described by Guy Deutscher, express their sense of place according to the cardinal points in relation to the sun’s path in the sky. For another Australian Aboriginal group, the Pormpuraaw, cardinal directions also determine the perception of how time flows or else how events are spatially related (see Lera Boroditsky and Alice Gaby’s work, Remembrances of Times East: Absolute Spatial Representations of Time in an Australian Aboriginal Community). Yet time and space to the Piraha can seem even more radically alien to Western understanding, as space-time collapses into an all-purpose sense of transience with phenomenal experience flickering in and out of perceived reality — as explained by Samantha Harvey:

“There is a Pirahã word that Everett heard often and couldn’t deduce the meaning of: xibipiio. Sometimes it would be a noun, sometimes a verb, sometimes an adjective or adverb. So and so would xibipiio go upriver, and xibipiio come back. The fire flame would be xibipiio-ing. Over time Everett realised that it designated a concept, something like going in and out of experience – ‘crossing the border between experience and non-experience’. Anything not in the here and now disappears from experience, it xibipiios, and arrives back in experience as once again the here and now. There isn’t a ‘there’ or a ‘then’, there are just the things xibipiio-ing in and out of the here and now.

“There is no past or future tense as such in Pirahã; the language has two tense-like morphemes – remote things (not here and now) are appended by -a and proximate things (here and now) by -i. These morphemes don’t so much describe time as whether the thing spoken about is in the speaker’s direct experience or not. The Pirahã language doesn’t lay experiences out on a past–present–future continuum as almost every other language does. In English we can place events quite precisely on this continuum: it had rained, it rained, it has rained, it rains, it is raining, it will rain, it will have rained. The Pirahã can only say whether the rain is proximate (here) or not” (The Shapeless Unease, A Year of Not Sleeping).

Anything more convoluted than that is, to the Piraha mind, unnecessary or maybe just unimaginable — not even requiring color terms, numbers, and quantifying words like ‘many’, ‘most’, ‘every’ or ‘each’. Their kinship terminology is limited as well, such as a single word for mother and father. And they have the simplest pronoun inventory ever recorded. Unsurprisingly, their lexicon for describing time is sparse.

“Time leaks everywhere into English,” Harvey writes, “some ten per cent of the most commonly used words are expressions of time. The Pirahã language has almost no words that depict time. This is all of them: another day; now; already; day; night; low water; high water; full moon; during the day; noon; sunset/sunrise; early morning, before sunrise. Their words for these are literally descriptive – the expression for day is ‘in sun’, for noon ‘in sun big be’ and for night ‘be at fire’.” Time, like color and much else, is described by the Piraha in practical terms by association or comparison to something in their everyday lives. There is no abstract notion of 3:30 pm or blue — such concrete thought creates a different mentality (see the work of Luria and Lev Vygotsky, as related to the Flynn effect).

The only temporal sense that can be expressed by the Piraha is in speaking of the immediately observable natural environment, and it can’t be extended very far. Maybe this is because nothing they do requires much time and so time is as bountiful as the jungle around them. They don’t travel much and rarely over long distances. The food and materials they need are easily obtained near where they live. The practical application of time is barely relevant and they probably don’t perceive time in the way we do, as a finite resource and linear construct. Even the cyclical time of the mythological worldview would likely be unfamiliar to them.

Time is more similar to the flickering candle, such that it neither goes anywhere else as a trajectory nor repeats, but simply shifts in the expansive and inclusive present moment. Experience comes and goes without any need to speculate and posit lines of causation or greater patterns and cycles. Time doesn’t need to be controlled or measured. The Piraha even lack the obsession some premodern people have had with astrology and calendars, as they would serve little purpose for them. Even seasonal changes are limited and don’t have much practical implication. There is no time of the year that changes from cold to hot or from wet to dry. So, the main foods they eat are available year round.

Their lifestyle remains constant, as does the surrounding nature within their traditional territory. These optimal conditions might approximate the stable environment for much of hominid and human evolution in Africa. Look at another unique example, the Pygmy tribes, some of which are the only surviving human populations with 100% Homo sapiens genetics. The Pygmy live where human evolution began, and one can see similarities to the Piraha. Both tribes, living in equatorial rainforests, have a simple culture that has been resistant to outside influence, even though each tribe has been in contact with foreigners for centuries.

This social solidarity and cultural resilience is impressive. Writing about the Piraha, Daniel Everett said that, “My evangelism professor at Biola University, Dr. Curtis Mitchell, used to say, ‘You’ve gotta get ’em lost before you can get ’em saved’” (Don’t Sleep, There Are Snakes). The problem for a missionary is that tribes like the Piraha and Pygmies aren’t prone to feeling lost and maybe, at least in the case of the former, it’s partly their language that offers protection against evangelical rhetoric. Proselytizing becomes impossible when there is no recursive and abstract language in which to communicate theology, mythology, and history — all that is required to translate a written holy text like the Bible.

Unfortunately, the study of traditional Pygmy languages appears to be limited, but there is plenty of interesting anthropological evidence. C. M. Turnbull, in 1961, observed a BaMbuti Pygmy who became disoriented upon seeing open grasslands for the first time and thought distant buffalo were insects (Some observations regarding the experiences and behavior of the BaMbuti Pygmies). Distance perception and size constancy aren’t major factors if one has never stepped outside of the visual enclosure of thick jungle.

So, environment would likely be an influence on the immediacy principle since, in a dense forest, one cannot see very far. And that would surely become built into the native language. Another telling detail, similar to the Piraha, is that these BaMbuti lacked their own concepts about witchcraft and what Turnbull described as the ‘supernormal’, something they associated with outsiders such as the neighboring Bantu. As there is no distant visual space to a forest dweller, neither are there distant spiritual realities. With trees mostly blocking out the sky, maybe people have less tendency to ponder heavenly bodies and speculate about heavenly worlds.

There is some information about particular Pygmy tribes that maintained their traditional languages. If not entirely innumerate like the Piraha, the BaMbuti can only count up to four. As for what both BaMbuti and Piraha entirely lack, they have no terms for colors and so are forced to describe them by comparison. Also, beyond the simplest of decorations, these tribes don’t make visual art nor make much use of color as dyes. Rather than a focus on the visual, Turnbull states that, “the Pygmy has the most complex music in the whole of Africa.” In an environment that constrains vision, the auditory is so important that these Pygmy will even aim their hunting arrows by sound. This auditory orientation would strengthen the effect of oral culture and its accompanying mindset. Being so reliant on information from sound would emphasize the animistic sense of a world alive with voices. Indeed, rainforests are dense with noisy life.

This is hard for the modern Westerner to grasp, as we are immersed in a hyper-visual culture where non-human sounds of nature have been almost entirely eradicated. Also, as an agricultural civilization, the experience of open spaces and distant vision is common, even for urbanites. We value our large grassy lawns and parks, and we enjoy vacations to vast landscapes of oceans and lakes, mountains and canyons. This is particularly true of the United States, where most of the population lived in farm communities until a few generations ago. Open fields and open sky have been common. Even with the increase of audio in new media, the visual still dominates. And the sounds that new media brings are detached from the sensory perception of the environment we inhabit.

For the Piraha, it’s not that the visual is unimportant. Rather, its significance is transformed. They are obsessed with certain kinds of visual experiences, but of a far different quality. The visual environment of agriculture and urbanization is largely static and inanimate, surrounded as we are with manmade objects and architecture with only an occasional intrusion by wildlife or a stray animal. The Piraha, on the other hand, have to be constantly hyper-aware of other living beings as food sources but mostly as potential threats. Predators and poisonous creatures abound in the jungle.

Vision is central, even as it is constrained by the density of foliage. This surely shapes their amorphous sense of time, as shown in their language. They have a fascination and obsession with a certain kind of visuo-temporal phenomenon described by the aforementioned term ‘xibipiio’ that has no equivalent in English. The concept behind it is demonstrated by their habit of staring at flickering flames, as they also enjoy watching people appear and disappear within their visual field, such as around the bend of a river. This liminal quality is key to understanding their worldview and mindset. There is no time continuity of perception, no objective constancy of beingness.

This is felt in quite personal ways, as Piraha identity can flicker like a flame. There is something akin to spiritual possession in Piraha culture, although from their perspective it isn’t possession. When the spirit is present, the former person is simply absent. When asked where the person is, the simple answer will be given that they are not there. This identity shift can sometimes be permanent. In the forest, a visitation by a spirit might lead to a complete change of identity along with a new name. The previous identity no longer exists and will not return. This is an attribute of the bundled mind, a fundamental tenet of Buddhist thought as well. Buddhists seek to regain some essence of what for the Piraha is the social norm of lived reality.

This goes back to the non-recursive and paratactic quality of Piraha language. The shifting fluidity of perception and identity can’t be generalized or extended beyond the known and experienced specifics. And this has social consequences. Maybe we have much to learn from them. Their apparent invulnerability to the highly developed rhetoric of proselytizing missionaries is admirable. That is a nifty linguistic trick that we could adopt as a tool in the development of our intellectual and psychological defenses. We don’t have to become like the Piraha, but it could be useful to develop this skill as a discipline of the mind.

When finding ourselves pulled into some linguistic manipulation or trapped in a rhetorical framing, we can stop and turn our attention to language itself. How are we speaking and how are others speaking to us? Then we can bring our language back down to grounded reality by speaking simple statements, as the Buddhists do with their mindfulness practice. Slowly, we can learn how to untangle the recursive knots of the mind. It might have the added benefit, as seen with the Piraha, of developing some immunity toward the alluring mind virus of authoritarian thought control. The social hierarchy of power is dependent on the conceptual hierarchy of recursive rhetoric. This might explain the memetic pull of the reactionary mind and might demonstrate how we can use linguistic jujitsu to redirect these psychic energies.

What if authoritarianism doesn’t begin in the external world through distant institutions of power but instead begins in our minds, in the way we relate and communicate, as it shapes how we think and forms our identities at an unconscious level? Recursion is not only about the structure of language and the structure of thought. As a tool of rhetoric, recursion is how hierarchies are built. From kulturCritic, here is a great explanation of the relevance by way of the distinction between hypotaxis and parataxis:

“In short, recursion enables the construction of complex hypotactic language units rather than just simple paratactic ones. Parataxis, as I am sure you are all aware, is when each of your sentences in a larger grammatical unit carries equal weighting. Paratactic units usually have few, if any clauses, and more importantly, none of the clauses are subordinated one to another in a hierarchical scheme. Hypotaxis, on the other hand, occurs when clauses in sentences, or in larger grammatical wholes, are subordinated to one another, focusing attention on what is considered of greater importance or value within the semantic, syntactic, or larger logistic unit. In other words, recursion, by means of subordination, allows for the rudimentary and foundational element of hierarchization. Hierarchy, socio-economic and political, we might here add, is also one of the hallmarks of post-traditional societies […]

“As cultural historian Marvin Bram contends in The Recovery Of The West, “Parataxis suggests coordination more than subordination, and any number of sequences rather than a single correct sequence. Parataxis de-hierarchizes the world,” where the flat, coordinate, and non-orderliness of a paratactic world seems rather primitive or prosaic to the ever more civilized and tightly structured hypotactic logistic” (The Politics of Recursion: Hypotaxis, Hierarchy, and Healing).

Parataxis versus hypotaxis is egalitarianism versus hierarchy, coordination versus subordination, participatory versus disconnecting. In our modern sophistication, we take the latter way of being as normal, even inevitable. How could humans be otherwise? Our assumption is supported by our WEIRD bias, as nearly all alternative possibilities in the Western world have been eliminated, as have most other cultures that could challenge this false and illusory belief. The Piraha are one of the few remnants of a different way of being. They are part of an animistic world that is alive with psychic presence that is intimately a part of their shifting and extended identities.

There is an odd element about this other way of being that could be easy to overlook as incidental. The paratactic is repetitive, rhythmic, and resonant. Without recursion to create hierarchical orders of value, of meaning and significance, there are other ways to emphasize what is being said and so direct focus. There can be a musical or poetic quality to languages that make use of this style of speaking, such as meter being much more important to ancient storytelling. To return to the Piraha, they don’t only speak their language; they can also whistle and hum it, depending on context, but it cannot be written. A sing-song quality to spoken language might have been much more common prior to widespread literacy, particularly as it is a useful tool for oral traditions of mnemonics.

The closest the modern mind gets to this is through psychedelic use. “Hashish, then, is an assassin of referentiality, inducing a butterfly effect in thought. In [Walter] Benjamin, cannabis induces a parataxis wherein sentences less connect to each other through an explicit semantics than resonate together and summon coherence in the bardos between one statement and another. It is the silent murmur between sentences that is consistent while the sentences continually differentiate until, through repetition, an order appears” (Richard M. Doyle, Darwin’s Pharmacy, p. 107; see full passage as quoted in Psychedelics and Language).

This might not be a coincidence. The past three millennia of post-bicameral civilization have been a gradual replacement of non-addictive psychedelics by addictive substances, in particular stimulants (Agricultural Mind & Diets and Systems). These various plant-based drugs may have fundamentally altered the human mind at multiple points in human evolution. There are those like Terence McKenna who go so far as to suggest that psychedelics were at the origin of consciousness and language, but we don’t need to speculate about that here. Indeed, diverse research has shown that a number of psychedelics increase the personality and cognitive traits of openness, fantasy-proneness, and creativity (Scott Alexander, Why Were Early Psychedelicists So Weird?).

It should be noted that, though there isn’t a lot of focus on it, the Piraha are known to use a particular psychedelic, from the Parica tree (maybe containing N,N-dimethyltryptamine or DMT), that induces auditory hallucinations and verbosity (Siri von Reis Altschul, The Genus Anadenanthera In Amerindian Cultures). The human brain seems to have co-evolved with plant substances like the widespread psychedelic DMT. There is evidence that our bodies produce DMT, maybe in the pineal gland, and so even the Piraha have DMT coursing through their brains (Eric W. Dolan, Study provides evidence that DMT is produced naturally from neurons in the mammalian brain). Importantly, there might be various ways other substances, diet, cultural practices, etc., affect DMT levels. The Piraha do have experiences, such as contact with intelligent beings (i.e., spirits), that are common for those who imbibe DMT.

DMT is structurally similar to serotonin and melatonin; all three are tryptamines, derived from the amino acid tryptophan. Like serotonin and dopamine, DMT is a monoamine compound, and DMT shares receptors with both. DMT causes the body to produce more serotonin and increases the release of dopamine. We all carry DMT in our brains. It may play important roles, such as possibly helping the body operate at lower oxygen levels. Other psychedelics that we imbibe also act on the serotonin receptor.

* * *

Below is part of the post that is a work in progress:

hierarchy, egalitarianism, participatory reality and social order, organic, anarchy, democracy,

rhetorical strategies, social construction, ideological realism, symbolic ideology, symbolic conflation, metaphor, metonymy, locking mechanism, visceral/embodied,

narrative loops, counter-narratives, polarization, outrage

Joe, obsessing over the perfect pick-up line, had not asked out the cute girl at work.
Joe, having not asked out the cute girl at work, obsessed over the perfect pick-up line.
Joe, having been disfigured in an accident, had not asked out the cute girl at work.
Joe, having not asked out the cute girl at work because of his fear of rejection like happened last time he had a crush, became nervously obsessed with the perfect pick-up line as a distraction, the kind of obsession he had since he was disfigured in the accident, but he didn’t want anyone’s pity, especially not her pity, the one thing he dreaded more than rejection.

Take the example used by Everett: John’s brother’s house. This is a simple form of recursion and can be extended infinitely: John’s brother’s sister’s mother’s friend’s house. But the Piraha must state each noun phrase separately: ‘John has a brother. This brother has a house.’ Each additional noun phrase would be another sentence and so complicated thoughts could quickly become linguistically unwieldy. So, thinking complicated thoughts is, if not entirely precluded, strongly disincentivized by the structure of the language. The Piraha language is a finite language, in the way chess is a finite game, but that still leaves much capacity for communication. In fact, the strict limitations allow for kinds of thoughts that aren’t possible in highly recursive languages, and this could shape kinds of behaviors, perceptions, and identities that would be alien to the literary mind.
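For those who think better in code than in grammar, the contrast can be made concrete with a toy sketch (my own illustration, not Everett’s): a recursive possessive chain folds any number of nouns into a single phrase, while the paratactic version must spend a whole sentence on each link.

```python
def nested(names, thing):
    """Build one recursive noun phrase: "John's brother's ... house"."""
    phrase = names[0]
    for name in names[1:]:
        phrase = f"{phrase}'s {name}"  # each step embeds the prior phrase
    return f"{phrase}'s {thing}"

def paratactic(names, thing):
    """Flatten the same content into separate sentences, Piraha-style."""
    sentences = [f"{owner} has a {owned}." for owner, owned in zip(names, names[1:])]
    sentences.append(f"This {names[-1]} has a {thing}.")
    return " ".join(sentences)

print(nested(["John", "brother"], "house"))
# → John's brother's house
print(paratactic(["John", "brother"], "house"))
# → John has a brother. This brother has a house.
print(nested(["John", "brother", "sister", "mother", "friend"], "house"))
# → John's brother's sister's mother's friend's house
```

Note how the recursive version grows by embedding within one sentence, while the paratactic version grows by adding sentences, which is exactly why a complicated thought becomes unwieldy without recursion.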

Cultural tools such as linguistic recursion are like scaffolding that can be used to build structures according to various designs and for various purposes: cathedrals, apartment buildings, monuments, etc. But once construction is finished, the scaffolding can be removed and the structure will hold itself in place without further use of scaffolding, other than occasional need for maintenance, repairs, and renovations. With a lifetime of mental habits developed from reading and writing, speaking and hearing recursive language, it is built into our neurocognitive-cultural substructure and built into the institutions and systems we are enmeshed in — as part of what Everett calls “dark matter of the mind”. Recursive language, for the average person, is only used selectively and subtly such that it is rarely noticed, if noticed at all. But we are all intimately familiar with it in our experience. It slips past our guard.

One might qualify the role of syntactic recursion by acknowledging that other cultural tools might be able to achieve the same or similar ends. “Some oral languages do regularly embed clauses,” points out Julie Sedivy, “suggesting that writing is not necessary for complex syntax. But, as can be seen in a number of indigenous languages, longer and more complicated sentences often emerge as a result of contact with a written language.” The point remains that the most convoluted sentence structures all come out of literate and literary societies. Recursion remains the ultimate cultural tool for this purpose, but obviously no cultural tool is used in isolation. These highly developed cultural tools are primarily used in writing, not speech: “In current English, writing uses more varied vocabulary than conversational speech, and it uses rarer and longer words much more often. Certain structures (such as passive sentences, prepositional phrases, and relative clauses) appear more often in written than spoken language. Writers generally elaborate their ideas more explicitly through syntax whereas speakers leave more material implicit.”

Language is never static, though. These cultural tools are adapted to changes in media. “In fact, heavily recursive sentences like those found in the Declaration of Independence have already been dwindling in written English (as well as in German) for some time. According to texts analyzed by Brock Haussamen, the average sentence length in written English has shrunk since the 17th century from between 40-70 words to a more modest 20, with a significant paring down of the number of subordinate and relative clauses, passive sentences, explicit connectors between clauses, and off-the-beaten-path sentence structures.”
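Haussamen’s metric is simple enough to sketch in a few lines of code (my own toy version, not his actual method): split a text into sentences and average the word counts, which makes visible how paratactic and hypotactic styles score differently.

```python
import re

def avg_sentence_length(text):
    """Average words per sentence, splitting crudely on . ! ? runs."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

paratactic = "It rains. I take shelter. I stay dry."
hypotactic = "When it rains, unless I take shelter, I get wet."

print(avg_sentence_length(paratactic))  # → about 2.7 words per sentence
print(avg_sentence_length(hypotactic))  # → 10.0 words per sentence
```

On this crude measure, the 17th-century English average of 40-70 words per sentence versus the modern 20 reflects the same paring down of subordinate structure the passage describes.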

third man, ghost voice/note, repetition-compulsion, addiction, egoic consciousness, rigid boundaries

paratactic animal speech

writing as transformational: Julian Jaynes, Marshall McLuhan, WEIRD

* * *

The Politics of Recursion: Hypotaxis, Hierarchy, and Healing
by kulturCritic

In short, recursion enables the construction of complex hypotactic language units rather than just simple paratactic ones. Parataxis, as I am sure you are all aware, is when each of your sentences in a larger grammatical unit carries equal weighting. Paratactic units usually have few, if any clauses, and more importantly, none of the clauses are subordinated one to another in a hierarchical scheme. Hypotaxis, on the other hand, occurs when clauses in sentences, or in larger grammatical wholes, are subordinated to one another, focusing attention on what is considered of greater importance or value within the semantic, syntactic, or larger logistic unit. In other words, recursion, by means of subordination, allows for the rudimentary and foundational element of hierarchization. Hierarchy, socio-economic and political, we might here add, is also one of the hallmarks of post-traditional societies […]

As cultural historian Marvin Bram contends in The Recovery Of The West, “Parataxis suggests coordination more than subordination, and any number of sequences rather than a single correct sequence. Parataxis de-hierarchizes the world,” where the flat, coordinate, and non-orderliness of a paratactic world seems rather primitive or prosaic to the ever more civilized and tightly structured hypotactic logistic.  Bram continues:

Parataxis is concerned with the concrete thing itself, the local and contained, and the moment, rather than with relationships among abstract things and over-arching spatial and temporal schemes… Paratactic space and time make dramatic antitheses to their hypotactic counterparts.10

For example, a person walking down a forest path seeing paratactically will see much more than a person looking hypotactically along the same path but only seeing what is of interest to him.  The paratactic visual space will be fuller. As Bram concludes,

This phenomenon of paratactic persons taking in more of the world, living in a fuller world than hypotactic persons, has been reported time and time again by (hypotactic) travelers among (paratactic) traditional peoples. 11 […]

Yet, there was also born regret for the past poorly lived and anxiety over a future still uncertain, in short, the terror of an historical consciousness, and the realization that ‘one-day I too will die.’ As Bram reminds us,

In paratactic time there is little past because there are no complete logistic structures to be sought there, and there is little future because there is no need for a place in which to complete incomplete logistic structures.  There is certainly a present, gathering to itself much of the energy that hypotactic persons give to the past and future, and inhabited by full persons and full objects: a full present.  The present of hypotactic time often enough takes third place behind the past and the future, depleted of energy: an empty present. 12

But, what was lost in this transformation to the hypotactic word, in the subordination of thought and speech within the apparently universal grammar of literacy, univocity, and its newly appropriated voice – the sterile logic of syllogism and, finally, of mathematics?

The Shapeless Unease, A Year of Not Sleeping
by Samantha Harvey
pp. 42-51

Think of a sentence:

One day I’d like to write a story about a man who, while robbing a cash machine, loses his wedding ring and has to go back for it because his wife, a terrifying individual whose material needs have driven him to crime, will no doubt kill him if the ring is lost.

A sentence with multiple clauses, one clause buried within another like Russian dolls. If we take each doll out and line them up we get:

One day I’d like to write a story.
The story is about a man.
A man robs a cash machine.
A man loses his wedding ring.
A man goes back to the cash machine for his wedding ring.
A man has a wife.
The wife is terrifying.
The wife has many material needs.
The man is driven to crime by his wife.
The ring must not get lost.
The wife could kill the man.

We tend to speak in sentences of multiple clauses, not in clauses that have been separated out. Noam Chomsky has called these multiple clauses instances of recursion, and he thinks they’re what define human language. They reflect our unique ability to position a thought inside a thought, to move from the immediate to the abstract, to infinite other places and times. A circle in a spiral, a wheel within a wheel; a tunnel that you follow to a tunnel of its own. In theory, an infinitely long, recursive sentence is possible, says Chomsky; there is no limit to the mind’s capacity to embed one thought inside another. Our language is recursive because our minds are recursive. Infinitely windmilling.

But then came studies on the Pirahã people of the Brazilian Amazon, who do not make recursive sentences. Their language doesn’t permit them to make the sentence I made above, or even something like When it rains I’ll take shelter. For the Pirahã it would have to be It rains. I take shelter. They don’t embed a thought inside a thought, nor travel from one time or place to another within a single sentence.

When it rains, unless I take shelter, I get wet.
Unless I want to get wet, I take shelter when it rains.
So that I stay dry when it rains, I take shelter.

For the Pirahã tribe there are no sentences like these – there is none of this restless ranging from one hypothesis to another. Instead, It rains. I take shelter. Or, I take shelter. I don’t get wet. Or, I take shelter. I stay dry.

The Pirahã seem incapable of abstraction. They seem literal in the extreme – their ability to learn new grammar rules through a computerised game, by predicting which way an icon of a monkey would go when a type of sentence was generated, was thwarted in almost every case by their inability to see the monkey as real, and therefore to care what it would do next. They became fascinated and distracted by the icon, or by the colours on the screen. One of them fell asleep in the middle of the test. ‘They don’t do new things’ was the repeated assertion of Daniel Everett, the only westerner who has ever got anywhere near knowing and understanding the Pirahã language and culture. They don’t tell stories. They don’t make art. They have no supernatural or transcendental beliefs. They don’t have individual or collective memories that go back more than one or two generations. They don’t have fixed words for colours. They don’t have numbers.

Yet they are a bright, alert, capable, witty people who are one of the only tribes in the world to have survived – largely in the jungle – without any concession to the modern world. A meal might involve sucking the brains from a just-killed rat. A house is fronds of palm or a piece of leather strung over four sticks in the ground. They have no possessions. Their language might involve speaking, but it might also occur through whistling, singing or humming. And their experience of the present moment is seemingly absolute. ‘The Pirahã’s excitement at seeing a canoe go around a river bend is hard to describe,’ Everett writes. ‘They see this almost as travelling into another dimension.’ 6

[6 This and every of Everett’s quotations here is from his paper ‘Cultural Constraints on Grammar and Cognition in Pirahã’.]

There is a Pirahã word that Everett heard often and couldn’t deduce the meaning of: xibipiio. Sometimes it would be a noun, sometimes a verb, sometimes an adjective or adverb. So and so would xibipiio go upriver, and xibipiio come back. The fire flame would be xibipiio-ing. Over time Everett realised that it designated a concept, something like going in and out of experience – ‘crossing the border between experience and non-experience’. Anything not in the here and now disappears from experience, it xibipiios, and arrives back in experience as once again the here and now. There isn’t a ‘there’ or a ‘then’, there are just the things xibipiio-ing in and out of the here and now.

There is no past or future tense as such in Pirahã; the language has two tense-like morphemes – remote things (not here and now) are appended by -a and proximate things (here and now) by – i . These morphemes don’t so much describe time as whether the thing spoken about is in the speaker’s direct experience or not. The Pirahã language doesn’t lay experiences out on a past–present–future continuum as almost every other language does. In English we can place events quite precisely on this continuum: it had rained, it rained, it has rained, it rains, it is raining, it will rain, it will have rained. The Pirahã can only say whether the rain is proximate (here) or not.

They can then modify a verb to qualify the claims they make about it. If they say ‘It rained in the night’, the verb ‘rain’ will be modified by one of three morphemes to convey how they know it rained, i.e. whether they heard about it (someone told them), deduced it (saw the ground was wet in the morning), or saw/heard it for themselves. The Pirahã language and culture is not only literal but evidence-based. How do you know something happened? If the line of hearsay becomes too long, involving too many steps away from experience, the thing is no longer deemed to be of any importance to speak or think about. This is why they don’t have transcendental beliefs or collective memories and stories and myths that go back generations.

What a thing this is, to be so firmly entrenched in the here and now. What a thing. We are, I am, spread chaotically in time. Flung about. I can leap thirty-seven years in a moment; I can be six again, listening to my mum singing while she cleans the silver candelabra she treasures, that reminds her of a life she doesn’t have. I can sidestep into another possible version of myself now, one who made different, better decisions. I can rest my entire life on the cranky hinge of the word ‘if’. My life is when and until and yesterday and tomorrow and a minute ago and next year and then and again and forever and never.

Time leaks everywhere into English, some ten per cent of the most commonly used words are expressions of time. The Pirahã language has almost no words that depict time. This is all of them: another day; now; already; day; night; low water; high water; full moon; during the day; noon; sunset/sunrise; early morning, before sunrise. Their words for these are literally descriptive – the expression for day is ‘in sun’, for noon ‘in sun big be’ and for night ‘be at fire’.

Are there whole slices and movements of time that the Pirahã people don’t experience, then? If they can only speak in terms of ‘another day’, do they not experience ‘yesterday’ and ‘a year ago’ as different things? If something doesn’t exist in a language, does it also not exist in the minds of those who speak the language?

I wondered that when I tried to teach the perfect tense to Japanese students; there isn’t a perfect tense in Japanese. When I taught the sentence I have eaten I got blank looks, incomprehension. Why not just say I ate? Why say I have been to Europe when you could just say I went to Europe? I tried to illustrate: I ate (before, at some time you need to specify – this morning, all day yesterday); I have eaten (just now, I’m still full). Blank looks, incomprehension. In the perfect tense a period of time opens out, the past, not as separate from the present, but running up to and meeting the present. I have eaten; we’ve danced all night; it’s been a year. Do the Japanese not experience that segment of time? Or is it that they deal with it in other linguistic ways, or by inference and context?

Everett described the Pirahã’s mode of being as ‘live here and now’. If you live here and now, you don’t need recursion in language because there’s no conceptual need to join together ideas or states according to their order in time, or in terms of which causes which, or in terms of hypothetical outcomes. You don’t need a past or future tense if you’re living only now. You don’t need a large stock of words that try to nail down instances of time along a horizontal continuum from the distant past to the distant future, a continuum that also has an enormous elastic stretch into the vertical planes of virtual time, time as it intersects with space, time as happening elsewhere, real or imagined.

What would it be like to be a person of the Pirahã tribe? How would it be to not experience that continuum? For one’s mind to not be an infinitely recursive wheel within a wheel? It feels in some ways a relief, even to imagine such a mode of living, but it feels almost non-human too. And yet there the Pirahã are, as human as human can be. I can’t imagine it. I can’t imagine being anything but submersed in time, it ticking in every cell. […]

As for the continual vanishing of the now, well, here is also the continual birth of the now. A live birth, from a living now; there are no deaths, there’s no hiatus. It seems to me that now is the largest, most predictable and most durable of all things, and that the question isn’t so much: what is time but a set of nothings? But more: what is time but an indomitable something? An unscalable wall of now. When I think of the Pirahã I don’t imagine them cresting the brink of a collapsing moment, each step bringing an existential vertigo. I imagine them fishing, skinning animals, drinking, painting their faces, building shelter. It rains. We stay dry. Their here and now seems as solid to me as one brick – it rains – laid on another – we stay dry.

What would it be like to live and think like the Pirahã? For the world to be continually xibipiio-ing? No mad spooling out of events through time, all chain-linked and dragged each by the next, one event causing another, one event blamed for another, one past pain locked into a present pain to cause future pain; no. No things crossing the boundary from experience to non-experience. Just things disappearing and reappearing around the bend of the river. […]

[The author watched a digital clock while in an altered state from sickness. The numbers…] bore no relationship to me. They weren’t tugging in a forwards direction, they were just things gently changing, rearranging, in the same way that the clouds rearrange, and they were rearranging in a vast stillness. They were xibipiio-ing. Only: here I am. Then: here I am. Then: here I am. Is that akin to the Pirahã’s experience of time?

Is that where the dance is, the dance T. S. Eliot told us about when we read Four Quartets as uncomprehending teenagers? At the still point, there the dance is.

Harriet Tubman: Voice-Hearing Visionary

Origin of Harriet Tubman in the Persistence of the Bicameral Mind

The movie ‘Harriet’ came out this year, amidst pandemic and protest. The portrayal of Harriet Tubman’s life and her strange abilities reminds one of Julian Jaynes’ theory of the bicameral mind, as written about in what is now a classic work, The Origin of Consciousness in the Breakdown of the Bicameral Mind. Some background will help and so let’s look at the biographical details of what is known. This famous Underground Railroad conductor was born Araminta Harriet Ross in the early 1820s and, when younger, she was known as ‘Minty’. Her parents were religious, as she would later become. She might also have been exposed to the various church affiliations of her master’s extended family.

These influences were diverse, write James A. McGowan and William C. Kashatus in their book Harriet Tubman: A Biography (pp. 11-12): “As a child, Minty had been told Bible stories by her mother, and she was occasionally forced to attend the services held by Dr. Anthony Thompson, Jr., who was a licensed Methodist minister. But Minty and her parents might also have been influenced by Episcopal, Baptist, and Catholic teachings since the Pattisons, Thompsons, and Brodesses initially belonged to Anglican and Episcopal churches in Dorchester County before they became Methodists. In addition, some of the white Tubmans and Rosses were originally Catholic. Accordingly, Minty’s religious beliefs might have been a composite of several different Christian traditions that were adapted to the evangelical emphasis on spiritual freedom.”

Tubman’s mixed religious background was also noted by Kate C. Larson: “The “creolization” of this family more accurately reflects the blending of cultures from West Africa, Northern Europe, and local Indian peoples in the Chesapeake. As historian Mechal Sobel put it, this was a “world they made together.” By the time Tubman was born, first generation Africans were visible presences in Dorchester County […] Tubman and her family integrated a number of religious practices and beliefs into their daily lives, including Episcopal, Methodist, Baptist, Catholic, and even Quaker teachings, all religious denominations supported by local white masters and their neighbors who were intimately involved with Tubman’s family. Many slaves were required to attend the churches of their owners and temporary masters. Tubman’s religiosity, however, was a deeply personal spiritual experience, rooted in evangelical Christian teachings and familial traditions” (Harriet Ross Tubman).

Other scholars likewise agree, such as Robert Gudmestad: “Like many enslaved people, her belief system fused Christian and African beliefs” (Faith made Harriet Tubman fearless as she rescued slaves). This syncretism was made simpler by the commonalities traditional African religion had with Christianity or particular sects of Christianity: worship of one God who was supreme, relating to God as a helpful friend who could be heard and talked with (a commonality with Quakerism), belief in an eternal soul and an afterlife, rites and initiations involving immersion in water, etc. Early generations of slaves were often kept out of the churches and so this allowed folk religion to take on a life of its own with a slow merging of traditions, such as how African rhythms of mourning were incorporated into Gospel music.

Furthermore, religious fervor was at a peak in the early 1800s and it was part of the world Tubman’s parents lived in and that Tubman was born into. “Both races attended the massive camp meetings so Rit and Ben experienced these sporadic evangelical upsurges”, wrote Margaret Washington (Let The Circle Be Unbroken: The World Of Araminta (“Minty”) Ross Or The Making Of Harriet Tubman). “She grew up during the Second Great Awakening,” Gudmestad explained, “which was a Protestant religious revival in the United States. Preachers took the gospel of evangelical Christianity from place to place, and church membership flourished. Christians at this time believed that they needed to reform America in order to usher in Christ’s second coming.” Some during that restless period believed it was the End Times, as it was easier to imagine the world coming to an end than to imagine it becoming something else.

This would have been personally felt by Tubman. “A number of black female preachers,” Gudmestad goes on to say, “preached the message of revival and sanctification on Maryland’s Eastern Shore. Jarena Lee was the first authorized female preacher in the African Methodist Episcopal Church. It is not clear if Tubman attended any of Lee’s camp meetings, but she was inspired by the evangelist. She came to understand that women could hold religious authority.” The religious fervor was part of a growing political fervor, as the country moved toward Civil War. For blacks, Moses leading his people to freedom inspired more than faith and hope toward the afterlife.

Around the time of Tubman’s birth, there was the failed 1822 revolt planned by Denmark Vesey in South Carolina. Later in 1831, Nat Turner led his rebellion in nearby Virginia and that would’ve been an exciting event for enslaved blacks, especially a lonely young slave girl who at the time was being kept separate from her family and mercilessly whipped. Then throughout her teens and into her early twenties, there were numerous other uprisings: 1835–1838 Black Seminole Slave Rebellion, 1839 Amistad seizure, 1841 Creole case, 1842 Slave Revolt in the Cherokee Nation. The Creole case was the most successful slave revolt in United States history. Such tremendous events, one might imagine, could shape a young impressionable mind.

* * *

Harriet Tubman’s Ethno-Cultural Ancestry and Family Inheritance

Someone like Tubman didn’t come out of nowhere. “I am quite willing to acknowledge that she was almost an anomaly among her people,” wrote her early biographer Sarah Bradford, “and so far I can judge they all seem to be particularly intelligent, upright and religious people, and to have a strong feeling of family affection” (Harriet: The Moses of Her People). She earned her strong spirit honestly, from the black culture around her and as modeled by her parents. The spiritual inclinations, as with her knowledge of nature, came from her father: “As a clairvoyant, Minty believed that she inherited this second sense from her father, Ben. […] Listening to Ben’s stories, predictions and sharing his faith convinced Minty that an omniscient force protected her” (Margaret Washington, Let The Circle Be Unbroken: The World Of Araminta (“Minty”) Ross Or The Making Of Harriet Tubman). But it was her mother, in particular, who showed what it meant to be a fiercely protective woman when it came to family. When Tubman returned to free her family, including her elderly parents, she was acting on the values she was raised with:

“Rit struggled to keep her family together as slavery threatened to tear it apart. Edward Brodess sold three of her daughters (Linah, Mariah Ritty, and Soph), separating them from the family forever.[10] When a trader from Georgia approached Brodess about buying Rit’s youngest son, Moses, she hid him for a month, aided by other enslaved people and freedmen in the community.[11] At one point she confronted her owner about the sale.[12] Finally, Brodess and “the Georgia man” came toward the slave quarters to seize the child, where Rit told them, “You are after my son; but the first man that comes into my house, I will split his head open.”[12] Brodess backed away and abandoned the sale.[13] Tubman’s biographers agree that stories told about this event within the family influenced her belief in the possibilities of resistance.[13][14]” (Harriet Tubman, Wikipedia)

Whatever the cause, a strong moral sense developed in Tubman. Around the age of twelve or fifteen, there was an incident where she refused to help an overseer catch and tie up a runaway slave. Instead, she stood in front of the door and blocked his way. He threw an iron weight after the escapee, but it came up short when it hit her in the head, knocking her unconscious. She later said that it “broke my skull” and, though her master wanted to send her back to work, it took her a long time to recover. “The teenager remained in a coma for weeks,” writes M.W. Taylor, “lying on a bed of rags in the corner of her family’s windowless wooden cabin. Not until the following spring was she able to get up and walk unaided” (Harriet Tubman: Antislavery Activist, p. 16). Kate C. Larson says that, “It took months for her mother to nurse her back to health” (Harriet Ross Tubman).

Ever after, she had seizures and trance-like states (“spells”, “sleeping fits”, or “a sort of stupor or lethargy at times”), premonitions and prophetic visions (“vivid dreams”), and out-of-body and other shamanic-like experiences — possibly caused by temporal lobe epilepsy, narcolepsy, cataplexy, or hypersomnia. She claimed to have heard the voice of God that guided and protected her, that He “spoke directly to my soul”. She “prayed all the time” and “was always talking to the Lord”: “When I went to the horse trough to wash my face, and took up the water in my hands, I said, ‘Oh Lord, wash me, make me clean.’ When I took up the towel to wipe my face and hands, I cried, ‘Oh Lord, for Jesus’ sake, wipe away all my sins!’ ” (Sarah H. Bradford, Harriet, p. 11).

“During these hallucinatory states,” writes Gordon S. Johnson Jr., “she would also hear voices, screams, music, and rushing water, and feel as though her skin was on fire, while still aware of what was going on around her. The attacks could occur suddenly, without warning, even in the middle of a conversation. She would wake up and pick up the conversation where it left off a half hour later. In addition, Tubman would have terrible headaches, and would become more religious after the injury” (Harriet Tubman Suffered a TBI Early In Life).

While recuperating, she prayed for her master’s soul, that he might be saved and become a Christian. Her master’s behavior didn’t improve. In her stupor, no amount of whipping would arouse her. So he tried to sell her, but no one wanted to buy an injured and incapacitated slave, even though prior to the accident she had been hardworking and was able to do the work of a full-grown man. She didn’t want to be sold and separated from her family. One day she prayed that, if her master couldn’t be saved, the Lord should kill him and take him away. Shortly after, he did die and, with overwhelming guilt, she felt her prayer had been the cause.

Tubman’s experiences may have been shaped by African traditions, as there were many first generation slaves around. She would have been close to her immediate and extended family living in the area, as described by Professor Larson: “Harriet Tubman’s grandmother, Modesty, lived on Pattison’s property for an undetermined number of years after Rit left with Mary and moved to the Thompson plantation. Though the Thompson plantation sat about 6 miles to the west of the Pattison plantation and their neighbors along the Little Blackwater River near the bridge, their interactions were likely frequent and essential to maintaining social, political, and economic wellbeing” (Harriet Tubman Underground Railroad National Monument: Historic Resource Study).

An important familial link, as discussed above, was their shared religious inheritance. “Methodism was one source of strength, blending smoothly with cultural and religious traditions that survived the middle passage from Africa,” wrote Professor Larson. “First generation Africans, like her grandmother Modesty, embodied a living African connection and memory. Tubman’s religious fervor and trust in God to protect and guide her evolved from a fusion of these traditions” (see also Bradford, Scenes in the Life of Harriet Tubman). Tubman remained close to family living on nearby plantations, such as being hired out to do logging work with her father and quite likely hearing the same sermons, maybe sometimes clandestinely meeting in the “hidden church” of informal religious gatherings.

Her first biographer, Franklin Sanborn, said that she was “one degree removed from the wilds of Africa, her grandfather being an imported African of a chieftain family” and that, as “the grand-daughter of a slave imported from Africa,” she “has not a drop of white blood in her veins” (“The Late Araminta Davis: Better Known as ‘Moses’ or ‘Harriet Tubman’.” Franklin B. Sanborn Papers. Box 1, Folder 5. American Antiquarian Society). The latter claim of pure African ancestry has been disputed and was contradicted by other accounts, but at least part of her family was of recent African ancestry, as was common in that era, making her a second generation American in at least one line. With a living memory of the Old World, Tubman’s maternal grandmother Modesty Green would have been treated as what is called a griot, an elder who is a teacher, healer, and counselor; a keeper of knowledge, wisdom, and customs. She would have remembered the old world and had learned much about how to live in the new one, helping to shape the creole culture into which Tubman was born.

Modesty might have come from the Ashanti tribe of West Africa, specifically Ghana. She was sold as a slave sometime before 1785, the year Tubman’s mother Rittia (Rit, Ritty) Green was born. The Ashanti ethnicity was common in the region, writes Ann Malaspina: “During the eighteenth century, more than one million slaves were bought by British, Danish, and Dutch slave traders and shipped to the Americas from the Ashanti Empire on West Africa’s Gold Coast, a rich trading region. Many Ashanti slaves were sold to buyers in Maryland” (Harriet Tubman, p. 10). The Ashanti had a proud reputation and the ethnic culture made its presence known, such as in the “Asante proverbs that Harriet picked up as a young girl (‘Don’t test the depth of a river with both feet’)” (Catherine Clinton, Harriet Tubman). Along with the Ashanti, blacks of Igbo descent were numerous in the Tidewater region of Maryland and Virginia (Igbo Americans, Wikipedia). These cultures, along with the Kongo people, were known to be proud and loyal. Also, West Africa had a tradition of respect for women — as property owners and leaders, and sometimes as warriors.

It’s the reason the Tidewater plantation owners preferred them as slaves. The preference in the Deep South was different because down there plantations were large commercial operations with typically absentee owners, an aristocracy that spent most of its time in Charleston, in England, or elsewhere. Tidewater slaveholders had smaller plantations and were less prosperous. This meant they and their families lived close to slaves and, in some cases, would have worked with them. These Tidewater aristocrats were more likely to use the paternalistic rhetoric that identified slaves as part of the extended family, as often was literally the case after generations of close relations, with many of the plantation owner’s mulatto children, grandchildren, and cousins running around. Cultures like the Ashanti and Igbo, in being strongly devoted to their families and communities, could be manipulated to keep slaves from running away. The downside to this communal solidarity, from the slaveholder’s perspective, was that these ethnic groups were known to be disobedient and cause a lot of trouble, including some of the greatest slave rebellions.

Tubman is an exemplar of this Tidewater black culture. According to her own statements recorded by Frank C. Drake: “the old mammies to whom she told [her] dreams were wont to nod knowingly and say, ‘I reckon youse one o’ dem ‘Shantees’, chile.’ For they knew the tradition of the unconquerable Ashantee blood, which in a slave made him a thorn in the side of the planter or cane grower whose property he became, so that few of that race were in bondage” (“The Moses of Her People. Amazing Life work of Harriet Tubman,” New York Herald, New York, Sept. 22, 1907). The claim about her grandmother was confirmed by a piece from the year before Tubman’s death, written by Ann Fitzhugh Miller (granddaughter of Tubman’s friend Gerrit Smith), in reporting that Tubman believed her maternal grandmother had been “brought in a slave ship from Africa” (“Harriet Tubman,” American Review, August 1912, p. 420).

Professor Kate C. Larson concludes that, “It has been generally assumed at least one if not more of Tubman’s grandparents came directly from Africa” (Harriet Tubman Underground Railroad National Monument: Historic Resource Study). This is the reason for speculating about a more direct African influence or, at the very least, it shows how important an African identity was to Tubman’s sense of faith and spirituality. “Like many enslaved people, her belief system fused Christian and African beliefs,” Robert Gudmestad suggests. “Her belief that there was no separation between the physical and spiritual worlds was a direct result of African religious practices. Tubman literally believed that she moved between a physical existence and a spiritual experience where she sometimes flew over the land.”

* * *

Harriet Tubman’s Special Relationship with God and Archaic Authorization

Whatever was the original source and true nature of Harriet Tubman’s abilities, they did serve her well in freeing slaves and saving her from her pursuers. She always trusted her voices and visions, and would change her course of action in an instant, such as the time God told her to not continue down a road and so, without hesitation, she led her fellow fugitives across the rushing waters of an icy stream, but the “several stout men” in her care “refused to follow til they saw her safe on the other side”. Sarah Bradford goes on to say that, “The strange part of the story we found to be, that the masters of these men had put up the previous day, at the railroad station near where she left, an advertisement for them, offering a large reward for their apprehension; but they made a safe exit” (p. 45). Commenting on this incident, McGowan and Kashatus note, “Similar instances occurred on her rescue missions whenever Harriet was forced to make an important decision” (Harriet Tubman: A Biography, p. 62).

This divine guidance probably made her behavior erratic and unpredictable, always one step ahead (or one step to the side) of the slave-catchers — maybe not unlike the Trickster stories she likely heard growing up, as part of the folklore tradition in African-American communities or possibly picked up from Native Americans who still lived in the area. Maybe there is a reason both Trickster stories and voice-hearing are often found in oral cultures. The Trickster, as an archetype similar to salvific figures, exists between the divine and human — Jesus often played the role of Trickster. Looking more closely at this mentality might also tell us something about the bicameral mind.

Her visions and voice-hearing were also a comfort and assurance to her; and, as some suggested, this gave her “command over others’ minds” (Edna Cheney, “Moses”, The Freedmen’s Record, p. 35) — that is to say, when around her, people paid attention and did what they were told. She had the power of charisma and persuasion and, failing that, she had a gun that she was not afraid to use to good effect. She heard God’s voice in conviction and so she spoke with conviction. One was wise not to doubt her and, when leading slaves to freedom, she did not tolerate anyone challenging her authority. But it was in moments of solitude that she most strongly felt the divine. Based on interviews with Tubman in 1865, Edna Cheney conveyed it in the following way:

“When going on these journeys she often lay alone in the forests all night. Her whole soul was filled with awe of the mysterious Unseen Presence, which thrilled her with such depths of emotion, that all other care and fear vanished. Then she seemed to speak with her Maker “as a man talketh with his friend;” her child-like petitions had direct answers, and beautiful visions lifted her up above all doubt and anxiety into serene trust and faith. No man can be a hero without this faith in some form; the sense that he walks not in his own strength, but leaning on an almighty arm. Call it fate, destiny, what you will, Moses of old, Moses of to-day, believed it to be Almighty God” (p. 36).

Friends and co-conspirators described Tubman as having lacked the gnawing anxiety and doubt that, according to Julian Jaynes, has marked egoic consciousness since the collapse of Bronze Age civilization. “Great fears were entertained for her safety,” according to William Still, an African American abolitionist who personally knew her, “but she seemed wholly devoid of personal fear. The idea of being captured by slave-hunters or slave-holders, seemed never to enter her mind.” That kind of absolute courage and conviction, based on trust of voices and visions, is not common in the modern mind. Her example inspired and impressed many.

Thomas Garrett, a close confidante, said that, “I never met with any person, of any color, who had more confidence in the voice of God, as spoken direct to her soul. She has frequently told me that she talked with God, and he talked with her every day of her life, and she has declared to me that she felt no more fear of being arrested by her former master, or any other person, when in his immediate neighborhood, than she did in the State of New York, or Canada, for she said she never ventured only where God sent her, and her faith in a Supreme Power truly was great” (letter, 1868). As an aside, there is an interesting detail about her relationship with God — it was told by Samuel Hopkins Adams, grandson of Tubman’s friend and benefactor Samuel Miles Hopkins (brother of Tubman’s biographer Sarah Bradford): “Her relations with the Deity were personal, even intimate, though respectful on her part. He always addressed her as Araminta, which was her christened name” (“Slave in the Family”, Grandfather Stories, pp. 277-278; quoted by Jean M. Humez on p. 355 of Harriet Tubman: The Life and the Life Stories).

In summarizing her faith, Milton C. Sernett concluded that, “Tubman did not distinguish between seer and saint. She seems to have believed that her trust in the Lord enabled her to meet all of life’s exigencies with a confident foreknowledge of how things would turn out, a habit others found impressive, or uncanny, as the case may be” (Harriet Tubman: Myth, Memory, and History, p. 145). That is it. This supreme confidence did not come from herself. At one moment of uncertainty, she was faced with making a decision: “The Lord told me to do this. I said, ‘Oh Lord, I can’t—don’t ask me—take somebody else.’” God then spoke to her: “It’s you I want, Harriet Tubman” (Catherine Clinton, Harriet Tubman: The Road to Freedom).

Anyone familiar with Julian Jaynes’ theory of the bicameral mind would perk up at this discussion of voice-hearing, specifically of commanding voices with the undeniable and infallible power of archaic authorization. Besides this, he spoke of three other necessary components to the general bicameral paradigm, as relevant today as it was during the Bronze Age (The Origin of Consciousness in the Breakdown of the Bicameral Mind, p. 324):

  • “The collective cognitive imperative, or belief system, a culturally agreed-on expectancy or prescription which defines the particular form of a phenomenon and the roles to be acted out within that form”
  • “an induction or formally ritualized procedure whose function is the narrowing of consciousness by focusing attention on a small range of preoccupations”
  • “the trance itself, a response to both the preceding, characterized by a lessening of consciousness or its loss, the diminishing of the analog or its loss, resulting in a role that is accepted, tolerated, or encouraged by the group”

Collective cognitive imperative is central to what we are exploring here. Tubman grew up in a culture where such spiritual, paranormal, and shamanic experiences were still part of a living tradition, including traces of traditional African religion. She lacked doubt about this greater reality because almost everyone around her shared this sense of faith. As social creatures, humans are powerfully shaped by such shared culture. But at that point in early modernity when Tubman grew up, most of American society had lost the practices of induction and hence the ability to enter trances.

The Evangelical church, however, has long promoted trance experiences and trained people how to talk to God and listen for his voice (still does, in some cases: Tanya Luhrmann, When God Talks Back). Because of her brain condition, Tubman didn’t necessarily require induction, although her ritual of constant prayer probably helped. She went into trance apparently without having to try, one might say against her will. There is also another important contributing factor. Voice-hearing has historically been most common among non-literate, especially preliterate, societies — that is because the written word alters the human mind, as argued by many besides Jaynes: Marshall McLuhan, Walter Ong, etc. Such illiteracy would describe the American slave population since it was against the law for them to read and write.

* * *

Harriet Tubman’s Illiteracy and Storytelling Talent

This state of illiteracy included Tubman. During the Civil War, she spoke of a desire to become literate so as to “write her own life” (Cheney, p. 38), but there is no evidence she ever learned to write. “The blow to the head Tubman received at about thirteen may have been the root cause of her illiteracy. According to Cheney’s sketch, ‘The trouble in her head prevents her from applying closely to a book’” (Milton C. Sernett, Harriet Tubman: Myth, Memory, and History, p. 105). She remained her whole life fully immersed in an oral mindset. This was demonstrated by her heavy use of figurative language with concrete imagery, as when describing a Civil War battle — recorded by visiting historian Albert Bushnell Hart:

“And then we saw the lightning, and that was the guns; and then we heard the thunder, and that was the big guns; and then we heard the rain falling, and that was the drops of blood falling; and when we came to get in the crops, it was dead men that we reaped” (Slavery and Abolition, p. 209). Also, consider how she spoke of her personal experiences: “She loves to describe her visions, which are very real to her; but she must tell them word for word as they lie in her untutored mind, with endless repetitions and details, she cannot condense them, whatever be your haste. She has great dramatic power; the scene rises before you as she saw it, and her voice and language change with her different actors” (Cheney, pp. 36-37).

Elaborating on her storytelling talent, Jean M. Humez writes: “One of Earl Conrad’s informants who as a child had known Tubman in her old age reported: ‘there never was any variation in the stories she told, whether to me or to any other’ (Tatlock, 1939a). It is characteristic of the folklore performer trained in an oral culture to tell a story in precisely the right way each time. This is because the story itself is often regarded as a form of knowledge that will educate the young and be passed down through the generations. The storyteller must not weaken the story’s integrity with a poor performance” (Harriet Tubman: The Life and the Life Stories, p. 135).

This was also heard in how Tubman drew upon the down-to-earth style of old school religion: “Instead of the classical Greek ‘tricks of oratory’ to which the college-educated Higginson refers, Tubman drew upon homelier sources of eloquence, such as scriptures she would have heard preached in the South. She frequently employed a teaching technique made familiar in the New Testament Gospels—the ‘parable’ or narrative metaphor—to make her lessons persuasive and memorable” (Jean M. Humez, Harriet Tubman: The Life and the Life Stories, p. 135). She knew of Jesus’ message through the oral tellings of preachers, and that was fitting, since Jesus too taught effectively in the spoken word.

She was masterful. Even before a crowd of respectable whites, such as at abolitionist meetings, she could captivate an audience and move them to great emotion. Having witnessed a performance of Tubman’s oft-repeated story of former slave Joe’s arrival in Canada along with a rendition of the song he sang in joyous praise, Charlotte Forten recorded the impact it had on those present: “How exciting it was to hear her tell the story. And to hear the very scraps of jubilant hymns that he sang. She said the ladies crowded around them, and some laughed and some cried. My own eyes were full as I listened to her” (Charlotte Forten, journal entry, Saturday, January 31, 1862).

All of these ways of speaking are typical of those born in oral societies. As such, her illiteracy might have been key. “She is a rare instance,” as told in The Freedmen’s Record, “in the midst of high civilization and intellectual culture, of a being of great native powers, working powerfully, and to beneficent ends, entirely unaided by school or books” (Cheney, p. 34). Maybe the two factors are closely linked. Even in the ancient world, some of the most famous and respected oracles were given by the uneducated and illiterate, often women. Tubman did have something of the oracular about her, as she occasionally prophesied outcomes and coming events.

We mainly know of Tubman through the stories she told and retold of herself and her achievements, surely having been important in gaining support and raising funds in those early years when she needed provisions to make her trips to the South. She came from a storytelling tradition and, obviously, she knew how to entertain and persuade, to make real the plight of the still enslaved and the dangers it took to gain their freedom. She drew in her audience, as if they were there with bloodhounds tracking them, with their lives hanging in the balance of a single wrong decision or unfortunate turn of events.

One of her greatest talents was weaving song into her stories, but that was also part of oral culture. The slave’s life was filled with song, from morning to night. They sang in church and while at work, at births and burials. These songs were often stories, many of them taken from or inspired by the religion that was so much a part of their daily experience. Song itself was a form of language: “Tubman used spirituals to signal her arrival or as a secret code to tell of her plans. She also used spirituals to reassure those she was leading of their safety and to lift their spirits during the long journey to freedom” (M.W. Taylor, Harriet Tubman: Antislavery Activist, p. 18). She also used the song of birds and owls to communicate, something she may have learned from the African or Native American tradition.

Song defined Tubman, as much as did her spirituality. “Religious songs,” Jean M. Humez explains, “embellished Tubman’s oral storytelling performances and were frequently central plot elements in her most popular Underground Railroad stories. There was the story of teasing the thick-witted “master” the night before her escape by using a familiar Methodist song, “I’m Bound for the Promised Land,” to communicate to her family her intention to run away. Singing was also integral to her much-told story about coded communication with fugitives she had hidden in the woods. “Go Down, Moses” meant “stay hidden,” while a “Methodist air,” “Hail, oh hail, ye happy spirits,” meant “all clear” (Bradford, 1869)” (Harriet Tubman: The Life and the Life Stories, p. 136).

Humez goes on to say that, “Though she was able to capture and reproduce the lyrics for her readers, Bradford was evidently bewildered by Tubman’s musical performances in much the same way Cheney was by her spiritual testimony: ‘The air sung to these words was so wild, so full of plaintive minor strains, and unexpected quavers, that I would defy any white person to learn it, and often as I heard it, it was to me a constant surprise’ (Bradford, 1886, 35-36).” Her performances used a full range of expression, including through her movement. She would wave her arms and clap her hands, sway and stamp her feet, dance and gesture — according to the details of what she spoke and the rhythm of what she sang (Humez, p. 137). Orality is an embodied way of communicating.

* * *

Harriet Tubman’s Voice-Hearing and the Power of Oral Culture

Tubman may have been more talented and charismatic than most, but one suspects that such a commanding presence of speech and rhetorical persuasion would have been far more common among the enslaved who were raised in an oral culture, where language was one of the few sources of power in defense against those who wielded physical violence and political force. Survival required the ability to use language that was coded and veiled, symbolic and metaphorical, whether in conversation or song, in order to communicate without stating something directly for fear of being overheard.

Her display of orality would have impressed many whites simply because literacy and the literary mind had, by that point, become the norm among the well-off white abolitionists who came to hear her. Generations had passed since orality had been prevalent in mainstream American society, especially among the emerging liberal class. The traditional culture of the ancien régime had been eroding since the colonial era. There is a power in oral cultures that the modern mind has forgotten, but people like Tubman carried the last traces of oral culture into the 20th century; she finally died in her early 90s in 1913.

The bewilderment of whites, slave-catchers and abolitionists alike, by Tubman’s prowess makes one think of another example of the power of oral culture. The Mongol hordes, as they were perceived, acted in a way that was incomprehensible to the literate ruling elite of European feudalism. Genghis Khan established a mnemonic system used among his illiterate cavalry that allowed messages to be spread quickly and accurately. As all Mongols rode horses and carried all food with them, they were able to act collectively like a swarm and so could easily shift strategy in the middle of a battle. Oral culture had less rigid hierarchy. It was also highly religious and based in a shamanic tradition not unlike that of Africa. Genghis Khan regularly prayed to God, fasting for days until getting a clear message before he would leave on a military campaign. In similar fashion, Thomas Garrett said of Tubman: “She is a firm believer in spiritual manifestations […] she never goes on her missions of mercy without his (God’s) consent” (letter to Eliza Wigham, Dec. 27, 1856).

One imagines that, as with that Mongol leader, Tubman was so successful because she wielded archaic authorization. That was the underlying force of personality and persuasion that made her way of speaking and acting so compelling, for the voice of God spoke through her. It was a much greater way of being in the world, a porous self that extended much further and that could reach into the hearts and minds of others, apparently not limited to humans. Her “contemporaries noted that Tubman had a strange power over all animals—another indication of psychic ability—and insisted that she never feared the bloodhounds who dogged her trail when she became an Underground Railroad agent” (James A. McGowan & William C. Kashatus, Harriet Tubman: A Biography, pp. 10-11). Psychic ability, or simply a rare example of a well-functioning bicameral mind in the modern era?

Some people did perceive her as being psychic or otherwise having an uncanny perception, an ability to know things it seems she shouldn’t have been able to know. It depends on one’s psychological interpretation and theological persuasion. Her compatriot Thomas Garrett was also strongly religious in his commitment to abolitionism. “In fact,” state McGowan and Kashatus, “Garrett compared Harriet’s psychic ability to hear ‘the voice of God as spoken direct to her soul’ to the Quakers’ concept of an Inner Light, or a divine presence in each human being that allows them to do God’s will on earth. Because of their common emphasis on a mystical experience and a shared religious perspective, Tubman and the Quakers developed a mutual trust” (Harriet Tubman: A Biography, p. 62). A particular incident helps explain Garrett’s appraisal, from the same book (pp. 59-60):

“One late afternoon in mid-October 1856, Harriet arrived in Wilmington, Delaware, in need of funding for a rescue mission to the Eastern Shore. She went immediately to the office of Thomas Garrett, a white Quaker station master who also operated a hardware business in the town. “God sent me to you, Thomas,” said Harriet, dismissing the formality of a simple greeting. “He tells me you have money for me.” Amused by the request, Garrett jokingly asked: “Has God ever deceived thee?” “No,” she snapped. “I have always been liberal with thee, Harriet, and wish to be of assistance,” said the Quaker station master, stringing her along. “But I am not rich and cannot afford to give thee much.” Undeterred by the response, Harriet shot back: “God told me you’ve got money for me, and God never fools me!” Realizing that she was getting upset, Garrett cut to the chase: “Well, then, how much does thee need?” After reflecting a moment, Tubman said, “About 23 dollars.”

“The elderly Quaker shook his head in disbelief. Harriet’s request was almost exactly the amount he had received from an antislavery society in Scotland for her specific use. He went to his cash box, retrieved the donation, and handed it to his visitor. Smiling at her benefactor, Tubman took the cash, turned abruptly and marched out of the office. Astonished by the incident, Garrett later confided to another abolitionist that “there was something remarkable” about Harriet. “Whether it [was] clairvoyance or the divine impression on her mind, I cannot tell,” he admitted. “But I am certain she has a guide within herself other than the written word, for she never had any education.” By most accounts, Tubman’s behavior can be described as self-righteous, if not extremely presumptuous. But she viewed herself as being chosen by God for the special duty of a liberator. In fact, she admitted that she “felt like Moses,” the Old Testament prophet, because “the Lord told me to go down South and bring up my brothers and sisters.” When she expressed doubt about her abilities and suggested that the Lord “take somebody else,” He replied: “It’s you I want, Harriet Tubman.” With such a divine commission, Tubman was confident that her visions and actions—no matter how rude by 19th-century society’s standards—were condoned by the Almighty. Thomas Garrett understood that.”

There is no doubt she had an instinctive understanding that was built on an impressive awareness, a keen presence of mind — call it psychic or bicameral. With our rigid egoic boundaries and schizoid mentality, we inhabitants of this modern hyper-individualistic world have much to learn about the deeper realms of the bundled mind, of the multiplicity of self. We have made ourselves alien to our own human and animal nature, and we are the lesser for it. The post-bicameral loss of not only God’s voice but of a more expansive way of being is still felt in a nostalgic longing that continues to rule over us, ever leading to backlashes of the reactionary mind. Even with possible brain damage, Tubman was nowhere near as mentally crippled as we are with our prized ego-consciousness that shuts out all other voices and presences.

In the Western world, it would be hard to find such a fine specimen of visionary voice-hearing. Harriet Tubman had a genius about her, both genius in the modern sense of brilliance and genius in the ancient sense of a guiding spirit. If she were around today, she would likely be medicated and institutionalized or maybe imprisoned, as a threat to sane and civil society (Bruce Levine, “Sublime Madness”: Anarchists, Psychiatric Survivors, Emma Goldman & Harriet Tubman). Yet there are still other societies, including developed countries, in the world where this is not the case.

Tanya Luhrmann, as inspired by Julian Jaynes, went into anthropology, where she researches voice-hearing (her work on evangelicalism is briefly noted above). One study she did compared the experiences of voice-hearers in Ghana, India, and the United States (Differences in voice-hearing experiences of people with psychosis in the U.S.A., India and Ghana: interview-based study). Unlike here in this country, voice-hearers in certain non-Western cultures are not treated as mentally ill and, unsurprisingly, neither do they experience cruel and persecutory voices — quite the opposite, their voices being kind, affirming, and helpful, as was the case with Tubman.

“In the case of voice hearing, culture may also play a role in helping people cope. One study conducted by Luhrmann, the anthropologist, found that compared to their American counterparts, voice-hearing people diagnosed with schizophrenia in more collectivist cultures were more likely to perceive their voices as helpful and friendly, sometimes even resembling members of their friends and family. She adds that people who meet criteria for schizophrenia in India have better outcomes than their U.S. counterparts. She suspects this is because of “the negative salience” a diagnosis of schizophrenia holds in the U.S., as well as the greater rates of homelessness among people with schizophrenia in America” (Joseph Frankel, Psychics Who Hear Voices Could Be On to Something).

One suspects that the Ashanti and related African cultures that helped shape black traditions in Tubman’s Maryland are basically the same as the cultures still existing in Ghana to this day. After all, the Ashanti Empire that began in the early colonial era, in 1701, continued its rule well into the twentieth century, until 1957. If it’s true that her grandmother Modesty was Ashanti, that would go a long way in explaining the cultural background to Tubman’s voice-hearing. It’s been speculated that her father was the child of two Africans and that it was directly from him that she claimed to have inherited her peculiar talents. It’s possible that elements of the bicameral mind survived later in those West African societies and from there were carried across the Middle Passage.

* * *

The Friendship and Freedom of the Living God

It’s important to think about the bicameral mind by looking at real-world examples of voice-hearing. It might teach us something about what it means to be in relationship with a living God — a living world, a living experience of the greater mind, the bundled self (no matter one’s beliefs). Many Christians talk about such things, but few take them seriously, much less experience them or seek them out. That was what drew the Quakers to Tubman and others like her who were influenced by the African tradition of a living God. It wasn’t only a commonality of politics, in fighting for abolitionism and such. Rather, the politics was an expression of that particular kind of spiritual and epistemological experience.

To personally know God — or, if you prefer, to directly know concrete, lived reality — without the intervention of either priest or text or the equivalent can create immense power through authorization. It is an ability to act with confidence, rather than bowing down to external authority of hierarchical institutions, be it church clergy or plantation aristocracy. But it also avoids the other extreme, that of getting lost in the abstractions of the egoic consciousness that drain psychic reserves and make human will impotent. As Harriet Tubman proved, this other way of being can be a source of empowerment and liberation.

What made this possible is that she was not only illiterate but unchurched as well. In their own way, Quakers traditionally maintained a practice of being unchurched, avoiding certain formal church institutions such as the ministerial profession. Slaves, on the other hand, were often forced to be unchurched, in not being allowed to participate in formal religion. This would have helped maintain traditional African spiritual practice and experience. Interestingly, as J.E. Kennedy reports, one set of data found that “belief in the paranormal was positively related to religious faith but negatively related to religious participation” (The Polarization of Psi Beliefs; as discussed in NDE: Spirituality vs Religiosity). It’s ironic that formal religion (organized, institutionalized) and literacy, specifically in a text-based religion, have the powerful effect of disconnecting people from the experience of God. Yet the experience of God can break the spell of that mind virus.

The other thing is that, like African religion, the Quaker emphasis was on the communal. This might not seem obvious, given that Quakers believed in the individual’s relationship to God. That is where Tubman’s example is helpful. She too had an individual relationship to God, but her identity was also tied closely to kinship, community, and ancestry. We need to think more carefully about what is meant when we speak of individuality. One can gain one’s own private liberty by freeing oneself from shackled enslavement, that is to say changing one’s status from owned by another to owned by oneself (i.e., owned by the ego-self, in some ways an even harsher taskmaster). Freedom, however, is something else entirely. The etymology of ‘freedom’ is the same as that of ‘friend’. To be free is to be among friends, to be a member of a free society — one is reminded that, to Quakers and West Africans alike, there was an inclination to relate to God as a friend. Considering this simple but profound understanding, it wasn’t enough for Tubman to escape her oppressive bondage if she left behind everyone she loved.

Often she repeated her moral claim to either liberty or death, as if they were of equivalent value; whereas freedom is about life, and the essence of life is shared, as freedom is always about connection and relationship, about solidarity and belonging. She couldn’t be free alone and, under the will of something greater than her, she returned South to free her kith and kin. The year Harriet Tubman first sought freedom, 1849, was the same year as the birth of Emma Lazarus, a poet who would write some of the best-known words on slavery and oppression, including the simple statement that, “Until we are all free, we are none of us free.” About a century later, this was rephrased by Martin Luther King Jr. during the Civil Rights movement when he said, “No one is free until we are all free.” One could trace this insight back to the ancient world, as when Jesus said, “Truly I tell you, whatever you did for one of the least of these brothers and sisters of mine, you did for me.” That is freedom.

A living God lives among a living generation of people, a living community. “For where two or three gather in my name,” as Jesus also taught, “there am I with them.” Quakers had a tradition of living constitutionalism, something now associated with liberalism but originally rooted in a profound sense of the divine (Where Liberty and Freedom Converge). To the Quaker worldview, a constitution is a living agreement and expression of the Divine, a covenant between God and a specific people; this is related to why Quakers denied natural law that would usurp the authorization of this divine presence. A constitution is not a piece of paper nor the words upon it. Nor can a constitution be imposed upon other people outside of that community of souls. So, neither slaves nor following generations are beholden to a constitution enacted by someone else. This was why Thomas Jefferson assumed later Americans would forever seek out new constitutions to express their democratic voice as a people. But those who understood this best were Quakers; or those, like Thomas Paine, who were early on influenced by the Quaker faith.

Consider John Dickinson, who was raised as a Quaker and, after inheriting slaves, freed them. He is the author of the first draft of America’s first constitution, the Articles of Confederation, which was inspired by Quaker constitutionalism. The Articles of Confederation was a living document, in that its only power was the authority of every state agreeing to it with total consensus, with no change allowed without further consensus. The second constitution, simply known as the United States Constitution and unconstitutionally established according to the first constitution (The Vague and Ambiguous US Constitution), was designed to be a dead letter and has become famous for enshrining the institution of slavery. Rather than expressing a message of freedom, it was a new system of centralized power and authority. The deity invoked under this oppression is a dead god, a god of death. No one hears the voice of this false god, this demiurge.

Such a false idol can make no moral claim over a free people. As such, a free people assert their freedom by the simplest act of walking away, as did Harriet Tubman by following the water gourd pointing to the North Star, and as she repeated many times in guiding her people to what to them was the Promised Land. What guided her was the living voice of the living God. They had their own divine covenant that took precedence over any paper scribbled upon by a human hand.

* * *

Harriet Tubman, an Unsung Naturalist, Used Owl Calls as a Signal on the Underground Railroad
by Allison Keys, Audubon Magazine

“It was in those timber fields where she learned the skills necessary to be a successful conductor on the Underground Railroad,” Crenshaw explains, “including how to read the landscape, how to be comfortable in the woods, how to navigate and use the sounds that were natural in Dorchester County at the time.”

Underground Railroad Secret Codes
from Harriet Tubman Historical Society

Supporters of the Underground Railroad used words railroad conductors employed everyday to create their own code as secret language in order to help slaves escape. Railroad language was chosen because the railroad was an emerging form of transportation and its communication language was not widespread. Code words would be used in letters to “agents” so that if they were intercepted they could not be caught. Underground Railroad code was also used in songs sung by slaves to communicate among each other without their masters being aware.

Myths & Facts About Harriet Tubman
from National Park Service

Tubman sang two songs while operating her rescue missions. Both are listed in Sarah Bradford’s biography Scenes in the Life of Harriet Tubman: “Go Down Moses,” and, “Bound For the Promised Land.” Tubman said she changed the tempo of the songs to indicate whether it was safe to come out or not.

Songs of the Underground Railroad
from Harriet Tubman Historical Society

Songs were used in everyday life by African slaves. Singing was tradition brought from Africa by the first slaves; sometimes their songs are called spirituals. Singing served many purposes such as providing repetitive rhythm for repetitive manual work, inspiration and motivation. Singing was also use to express their values and solidarity with each other and during celebrations. Songs were used as tools to remember and communicate since the majority of slaves could not read.

Harriet Tubman and other slaves used songs as a strategy to communicate with slaves in their struggle for freedom. Coded songs contained words giving directions on how to escape also known as signal songs or where to meet known as map songs.

Songs used Biblical references and analogies of Biblical people, places and stories, comparing them to their own history of slavery. For example, “being bound for the land of Canaan” for a white person could mean ready to die and go to heaven; but to a slave it meant ready to go to Canada.

Scenes in the Life of Harriet Tubman
by Sarah Hopkins Bradford
pp. 25-27

After nightfall, the sound of a hymn sung at a distance comes upon the ears of the concealed and famished fugitives in the woods, and they know that their deliverer is at hand. They listen eagerly for the words she sings, for by them they are to be warned of danger, or informed of safety. Nearer and nearer comes the unseen singer, and the words are wafted to their ears:

Hail, oh hail ye happy spirits,
Death no more shall make you fear,
No grief nor sorrow, pain nor anger (anguish)
Shall no more distress you there.

Around him are ten thousan’ angels,
Always ready to ‘bey comman’.
Dey are always hobring round you,
Till you reach the hebbenly lan’.

Jesus, Jesus will go wid you;
He will lead you to his throne;
He who died has gone before you,
Trod de wine-press all alone.

He whose thunders shake creation;
He who bids the planets roll;
He who rides upon the temple, (tempest)
An’ his scepter sways de whole.

Dark and thorny is de desert,
Through de pilgrim makes his ways,
Yet beyon’ dis vale of sorrow,
Lies de fiel’s of endless days.

I give these words exactly as Harriet sang them to me to a sweet and simple Methodist air. “De first time I go by singing dis hymn, dey don’t come out to me,” she said, “till I listen if de coast is clar; den when I go back and sing it again, dey come out. But if I sing:

Moses go down in Egypt,
Till ole Pharo’ let me go;
Hadn’t been for Adam’s fall,
Shouldn’t hab to died at all,

den dey don’t come out, for dere’s danger in de way.”

Let The Circle Be Unbroken: The World Of Araminta (“Minty”) Ross Or The Making Of Harriet Tubman
by Margaret Washington

I. Building Communities
C. It Takes a Village to Raise a Child.

Enslaved African Americans came from a heritage that embraced concepts of solidarity in a descending order from the larger ethnic group, to the communal village, to the extended family to the nuclear family. Individualism (as opposed to individuality) was considered selfish and antithetical to the broader interests of a unit. Whether societies were matrilineal or patrilineal, nearly all were patriarchal (power rested with men). Nonetheless, the glue that bound the communal circle was the woman, considered the life giving force, the bearer of culture, essence of aesthetic beauty and key to a community’s longevity. Mothers, grandmothers, aunts, sisters etc. had oversight of children until puberty, when male and female rites of passage prepared them separately for their gendered communal roles. West African women were spiritually strong, morally respected, valued for their economic propensity, important in governance and in some cultures (Ashanti, Kongo, Ibo) powerful warriors. However devalued and exploited in America, Modesty, Rit and Minty exemplified how enslaved women resisted a sense of futility or fatalism and refashioned African attributes of beauty, dignity, self-worth and ethics. Enslaved women combed the waterways, forests and woods to obtain roots, herbs, leaves, sap, barks and other medicinal products for healing, amulets and even conjuration. Rit certainly used such remedies to nurse Minty back to health after extreme exhaustion, illnesses, beatings and her near fatal blow on the head. Rit learned these remedies and poultices from her mother Modesty and Harriet Tubman used them on the Underground Railroad. Their example reveals the significance of women to the community and that despite the assaults on the black family; it remained an institution, which even separation could not sever. […]

II ANCHORING THE SPIRIT
A. The Hidden Church: An African-Christian Synthesis.

If community was the base of African and African American life and culture, spirituality was the superstructure. Certainly enslaved people ultimately embraced Christianity. But for generations Southern whites feared exposing blacks to Christianity. The Bible’s Old Testament militant nationalism and New Testament’s spiritual  egalitarianism were not lost on African Americans, a few of whom were literate and the majority of whom felt that baptism was one kind of freedom.

Like most enslaved children, young Minty grew up outside of a church. However, since Ben Ross’s owner Anthony Thompson Sr., was a practicing Methodist, Minty’s family heard Christian sermons. But Edward Brodess was not devout and when he separated the Ross family, little Minty was hired out and did not receive white religious benevolence. But a tradition of black religion and spirituality existed independent of whites. In African culture, sacred worship embedded every aspect of life (rites of passage, marriage, funerals, child birth, etc.). Divine reverence was not confined to a building, a single ceremony or a specific day of the week. Spirituality was pervasive, expressive, emotional and evocative. Although the religious culture developed in America had African roots, the ravages of bondage created more social-spiritual convergences. In Minty’s world, spirituality was wrapped in temporal concerns affecting the individual, the family and the community. Worship was praising, praying, lamenting, hoping and drawing strength from each other. Long before Minty’s birth, Africans in America had created a “hidden church” where enslaved people gathered clandestinely (the woods, in cabins, in boats, in white people’s kitchens and even in the fields). In the hidden church they recounted religious and secular experiences; gave testimonies and created a space were women such as Rit could express the pain of having children sold or of trying to bring Minty back to life after her head was bashed in. In the hidden church, enslaved people created subversive songs, prayed for spiritual salvation, heavenly retribution and freedom.

Africans traveling the Maafa brought an ethos that merged the sacred and secular worlds. Enslaved African Americans embraced Christianity but also selectively adapted it to previous traditions and to their historical circumstances. Above all, they rejected incongruous white teachings meant to relegate blacks to perpetual slavery. Rather than being converted to Christianity as taught by whites, enslaved people converted Christianity to their own needs. Moreover, some significant African and Christian traditions had noteworthy commonalities.

Africans, like Christians believed in one God (Nzambi among the Bantu, Onyame among the Akan-Ashanti for example) who was the apex of all existence just as humanity was the center of earthly life. While gendered concepts of the African Supreme Being varied, like Jehovah, Africans’ God was revered, all-powerful and approachable. However, unlike Jehovah, the African Supreme Being was not feared, jealous nor wrathful. Other spirits exist in the African pantheon, like saints in Catholicism. But there was only one God. Hence, when whites spoke of a Supreme God, Africans understood. Harriet Tubman’s God was an all-powerful friend. According to Thomas Garrett, her close friend and a beloved Quaker Underground Railroad Conductor, Harriet spoke to God every day of her life. “I never knew anyone so confident of her faith,” said Garrett. (Letter in Bradford)

Africans, like Christians, believed in a soul, sometimes called the “heart” or “voice.” The soul was responsible for human behavior in life and was one’s spiritual existence after death. Some ethnicities had complicated concepts of the soul; others simply recognized the soul as the “little me in the big me” which lived on. Africans believed in honoring this life after death, especially as part of the kinship spiritual connection (ancestor reverence), which brought protection to the living. The curse of the dead was much dreaded in Africa and in America. Hence the importance of burial and funeral rites throughout the Diaspora, even today. A woman such as Harriet Tubman who embraced Christianity, also blended a spiritual syncretism that constructed a concept of the soul around moral ethics and faith imparted through the word of God, “as spoken to her soul” according to her friend Garrett. “She is a firm believer in spiritual manifestations . . . she never goes on her missions of mercy without his (God’s) consent.” (Garrett to Eliza Wigham, in McGowan, 135)

Water was a life giving force in African culture and the spirit world was under water. Throughout the African Diaspora, water represented divine transformations—birth, death, baptism and rebirth. For many enslaved people, accepting Christianity carried implications reminiscent of older traditions that surpassed what whites intended. In African cultures, an initiate received a “sacred bath” following a special protracted rite of passage symbolizing acceptance and integration into the community. Similarly, with Christianity enslaved people sought salvation through isolation, prayer, meditation, and communication with God through visions and signs from the natural environment. Baptism by total immersion represented final acceptance into the “ark of safety.” Although Methodists baptized by sprinkling, enslaved people insisted on going “down under” the water. They also equated spiritual transformation with secular change. Such thinking was Christian because the New Testament upheld spiritual egalitarianism. It was also African: One traveled briefly into the watery world of the ancestors as an uncivil “little spirit of the bush” full of individualistic anti-communal tendencies. One emerged from the water as a citizen of the community able to partake of all rights and privileges. The change was both divine and temporal; it was fervent, overwhelming and thoroughgoing. Canals, marshes, swamps and rivers surrounded African descended people on the Eastern Shore. Here they labored as slaves. Here they were baptized and hence constantly reminded of water’s spiritual and liberating significance.

Minty’s Christian conversion experience probably happened while working for the Stewarts in Caroline County. Whether because of that experience or her blow on the head, Minty insisted she spoke to God, had trances and saw visions that foretold future events. As a clairvoyant, Minty believed that she inherited this second sense from her father, Ben. Africans and African Americans believed that a clairvoyant person was born with a “caul” or “veil,” a portion of the birth membrane that remained on the head. They were seers and visionaries who communicated with the supernatural world and were under a special spiritual dispensation. Visions sometimes came while Minty worked, were accompanied by music and articulated in a different language. Minty also claimed exceptional power. When Edward Brodess sent slave traders to Ben’s cabin to inspect Minty, she prayed for God to cleanse Brodess’s heart and make him a good man or kill him. Brodess’ death convinced Minty that she had “prayed him to death.”1 Since his death put her in eminent danger of sale, Minty knew it was a sign from God to flee.

Northerners called Ben “a full-blooded Negro.” His parents were probably African born and told him the old Maafa adage that he passed on to Minty: some Africans could fly. Indeed, captured Ibo people committed suicide believing that their spirits flew back to Africa.2 Similarly, as Minty envisioned her escape, “She used to dream of flying over fiefs and towns, and rivers and mountings, looking down upon them ‘like a bird.'” When it appeared as if her strength would give out and she could not cross the river, “there would be ladies all dressed in white over there, and they would put our their arms and pull me across.” Listening to Ben’s stories, predictions and sharing his faith convinced Minty that an omniscient force protected her. In visions, she became a disembodied spirit observing earthly and heavenly scenes. Harriet Tubman told friends that God “called” her to activism against her wishes. She begged God to “get someone else” but to no avail. Since God called her, she depended on God to guide her away from danger.

Psychology in Religion or as a Religion

There is a strong connection between Islamic doctrine and, as Julian Jaynes wrote about, the post-bicameral experience of the lost divine, of God/gods gone silent. As a much later religious development, Islam took this sense of loss to a further extreme in the theological claim that neither God nor the angels any longer speak to humans (Islam as Worship of a Missing God; & Islamic Voice-Hearing), and that silence will continue until the end of time.

The divine supposedly can only be known about indirectly, by way of dreams and other means. This also makes Islam a much more text-based religion: since Muhammad wrote down his visions, there has been total divine silence. So there is greater focus on the power of language and textual analysis, as the only hope we have of sensing the voice of God in life is by reading the words of prophets who did hear God or, in the case of Muhammad, heard the archangel Gabriel speak on behalf of God.

In a way, this makes Islam a more modern religion, much further distant from bicameral voice-hearing. It was founded, after all, more than a half millennium following the earlier monotheistic revival in the post-axial era of the first century. So, Islam could be seen as an attempt to come to terms with a world ever more dominated by Jaynesian consciousness.

Evidence of this could be seen with Islamic psychology, ilm al-nafs. In the West, psychology developed more separately from and independently of religion, specifically Christianity and Judaism. But in Islam, psychological study and mental health became central to the religion itself and developed early on. That is a telling difference, so it seems to me.

Here is a possible explanation. Unlike the other monotheistic religions, the divine mind and voice in Islam is so distant as to have no immediate contact with the human world. This forces humans to study their own minds more carefully, including dreams, to sense the influence of the divine like reading the currents of the ocean by watching the ripples on the surface. This makes psychology to be potentially all the more important to Islam.

The West, instead, has largely replaced religion with psychology. This was necessary as religion had not as fully adapted itself to the new psychological mindset that emerged from Jaynesian consciousness. This leaves an uneasy relationship between religion and psychology for Western culture, something that is maybe less of an issue within Islam.

Islam has a more complicated and nuanced relationship to voice-hearing. This maybe requires a more psychological approach. The Islamic individual has a greater responsibility in determining the sources of voices, as part of religious practice.

The Islamic tradition sees religion and psychology as being inseparable. The psychologist Carl Jung, having developed mutual respect with the Islamic scholar Henry Corbin, agreed with that view in stating to Sigmund Freud that “religion can only be replaced by religion” (quoted in Peter Kingsley’s Catafalque). Jung argued that, “We must read the Bible or we shall not understand psychology. Our psychology, our whole lives, our language and imagery, are built upon the Bible.”

There is no way to remove religion from psychology. And all that we’ve accomplished in the modern West is to turn psychology into its own religion.

Balance of Egalitarianism and Hierarchy

David Graeber, an anthropologist, and David Wengrow, an archaeologist, have a theory that hunter-gatherer societies cycled between egalitarianism and hierarchy. That is to say, hierarchies were temporary and often seasonal. There was no permanent leadership or ruling caste, as seen in the fluid social order of still-surviving hunter-gatherers. This carried over into the early settlements, which were initially transitory meeting places, likely for feasts and festivals.

There are two questions that need to be answered. First, why did humans permanently settle down? Second, why did civilization get stuck in hierarchy? These questions have to be answered separately. For millennia into civilization, the egalitarian impulse persisted within many permanent settlements. There was no linear development from egalitarianism to hierarchy, no fall from the Garden of Eden.

Julian Jaynes, in his theorizing about the bicameral mind, offered a possible explanation. A contributing factor for permanent settlements would be that the speaking idols had to be kept in a single location, with agriculture developing as a later result. Then, as societies became more populous, complex, and expansive, hierarchies (as with moralizing gods) became more important to compensate for the communal limits of a voice-hearing social order.

That kind of hierarchy, though, was a much later development, especially in its extreme forms not seen until the Axial Age empires. The earlier bicameral societies had a more communal identity. That would’ve been true on the level of experience, as even the voices people heard were shared. There wasn’t an internal self separate from the communal identity and so no conflict between the individual member and larger society. One either fully belonged to and was immersed in that culture or not.

Large, complex hierarchies weren’t needed. Bicameralism began in small settlements that lacked police, court systems, standing armies, etc. — all the traits of the oppressively authoritarian hierarchies that came later, as did the simultaneous appearance of sexual moralizing and pornographic art. It wasn’t the threat of violent force by centralized authority and concentrated power that created and maintained the bicameral order but, as still seen with isolated indigenous tribes, shared identity and experience.

An example of this is that of early Egyptians. They were capable of impressive technological feats and yet they didn’t even have basic infrastructure like bridges. It appears they initially were a loose association of farmers organized around the bicameral culture of archaic authorization and, in the off-season, they built pyramids without coercion. Slavery was not required for this, as there is no evidence of forced labor.

In so many ways, this is alien to the conventional understanding of civilization. It is so radically strange that to many it seems impossible, especially when it gets described as ‘egalitarian’ in placing it in a framework of modern ideas. Mention primitive ‘communism’ or ‘anarchism’ and you’ll really lose most people. Nonetheless, however one wants to describe and label it, this is what the evidence points toward.

Here is another related thought. How societies went from bicameral mind to consciousness is well-trodden territory. But what about how bicameralism emerged from animism? They share enough similarities that I’ve referred to them as the animistic-bicameral complex. The bicameral mind seems like a variant or extension of the voice-hearing in animism.

Among hunter-gatherers, it was often costume and masks through which gods, spirits, and ancestors spoke. Any individual potentially could become the vessel of possession because, in the animistic view, all the world is alive with voices. So, how did this animistic voice-hearing become narrowed down to idol worship of corpses and statues?

I ask this because this is central to the question of why humans created permanent settlements. A god-king’s voice of authorization was so powerful that it persisted beyond his death. The corpse was turned into a mummy, as his voice was a living memory that kept speaking, and so god-houses were built. But how did the fluid practice of voice-hearing in animism become centralized in a god-king?

Did this begin with the rise of shamanism? Some hunter-gatherers don’t have shamans. But once the role of shaman becomes a permanent authority figure mediating with other realms, it’s not a large leap from a shaman-king to a god-king who could be fully deified in death. In that case, how did shamanism act as a transitional proto-bicameralism? In this, we might begin to discern the hitch upon which permanent hierarchy eventually got stuck.

I might point out that there is much disagreement in this area of scholarship, as expected. The position of Graeber and Wengrow is highly contested, even among those offering alternative interpretations of the evidence; see Peter Turchin (An Anarchist View of Human Social Evolution & A Feminist Perspective on Human Social Evolution) and Camilla Power (Gender egalitarianism made us human: patriarchy was too little, too late & Gender egalitarianism made us human: A response to David Graeber & David Wengrow’s ‘How to change the course of human history’).

But I don’t see the disagreements as being significant for the purposes here. Here is a basic point that Turchin explains: “The reason we say that foragers were fiercely egalitarian is because they practiced reverse dominance hierarchy” (from first link directly above). That seems to go straight to the original argument. Many other primates have social hierarchy, although not all. Some of the difference appears to be cultural, in that humans early in evolution appear to have developed cultural methods of enforcing egalitarianism. This cultural pattern has existed long enough to have fundamentally altered human nature.

According to Graeber and Wengrow, these egalitarian habits weren’t lost easily, even as society became larger and more complex. Modern authoritarian hierarchies represent a late development, a fraction of a percentage of human existence. They are far outside the human norm. In social science experiments, we see how the egalitarian impulse persists. Consider two examples. In one study, children naturally helped those in need, until someone paid them money to do so, shifting their motivation from intrinsic to extrinsic. Another study showed that most people, children and adults alike, will choose to punish wrongdoers even at personal cost.

This in-built egalitarianism is an old habit that doesn’t die easily no matter how it is suppressed or perverted by systems of authoritarian power. It is the psychological basis of a culture of trust that permanent hierarchies take advantage of through manipulation of human nature. The egalitarian impulse gets redirected in undermining egalitarianism. This is why modern societies are so unstable, as compared to the ancient societies that lasted for millennia.

That said, there is nothing wrong with genuine authority, expertise, and leadership — as seen even in the most radically egalitarian societies like the Piraha. Hierarchies are also part of our natural repertoire and only problematic when they fall out of balance with egalitarianism and so become entrenched. One way or another, human societies cycle between hierarchy and egalitarianism, whether it cycles on a regular basis or necessitates collapse. That is the point Walter Scheidel makes in his book, The Great Leveler. High inequality destabilizes society and always brings its own downfall.

We need to relearn that balance, if we hope to avoid mass disaster. Egalitarianism is not a utopian ideal. It’s simply the other side of human nature that gets forgotten.

* * *

Archaeology, anarchy, hierarchy, and the growth of inequality
by Andre Costopoulos

In some ways, I agree with both Graeber and Wengrow, and with Turchin. Models of the growth of social inequality have indeed emphasized a one-dimensional march, sometimes inevitable, from virtual equality and autonomy to strong inequality and centralization. I agree with Graeber and Wengrow that this is a mistaken view. Except I think humans have moved from strong inequality, to somewhat managed inequality, to strong inequality again.

The rise and fall of equality

Hierarchy, dominance, power, influence, politics, and violence are hallmarks not only of human social organization, but of that of our primate cousins. They are widespread among mammals. Inequality runs deep in our lineage, and our earliest identifiable human ancestors must have inherited it. But an amazing thing happened among Pleistocene humans. They developed strong social leveling mechanisms, which actively reduced inequality. Some of those mechanisms are still at work in our societies today: Ridicule at the expense of self-aggrandizers, carnival inversion as a reminder of the vulnerability of the powerful, ostracism of the controlling, or just walking away from conflict, for example.

Understanding the growth of equality in Pleistocene human communities is the big untackled project of Paleolithic archaeology, mostly because we assume they started from a state of egalitarianism and either degenerated or progressed from there, depending on your lens. Our broader evolutionary context argues they didn’t.

During the Holocene, under increasing sedentism and dependence on spatially bounded resources such as agricultural fields that represent significant energy investments, these mechanisms gradually failed to dampen the pressures for increasing centralization of power. However, even at the height of the Pleistocene egalitarian adaptation, there were elites if, using Turchin’s figure of the top one or two percent, we consider that the one or two most influential members in a network of a hundred are its elite. All the social leveling in the world could not contain influence. Influence, in the end, if wielded effectively, is power.

Ancient ‘megasites’ may reshape the history of the first cities
by Bruce Bower

No signs of a centralized government, a ruling dynasty, or wealth or social class disparities appear in the ancient settlement, the researchers say. Houses were largely alike in size and design. Excavations yielded few prestige goods, such as copper items and shell ornaments. Many examples of painted pottery and clay figurines typical of Trypillia culture turned up, and more than 6,300 animal bones unearthed at the site suggest residents ate a lot of beef and lamb. Those clues suggest daily life was much the same across Nebelivka’s various neighborhoods and quarters. […]

Though some of these sprawling sites had social inequality, egalitarian cities like Nebelivka were probably more widespread several thousand years ago than has typically been assumed, says archaeologist David Wengrow of University College London. Ancient ceremonial centers in China and Peru, for instance, were cities with sophisticated infrastructures that existed before any hints of bureaucratic control, he argues. Wengrow and anthropologist David Graeber of the London School of Economics and Political Science also made that argument in a 2018 essay in Eurozine, an online cultural magazine.

Councils of social equals governed many of the world’s earliest cities, including Trypillia megasites, Wengrow contends. Egalitarian rule may even have characterized Mesopotamian cities for their first few hundred years, a period that lacks archaeological evidence of royal burials, armies or large bureaucracies typical of early states, he suggests.

How to change the course of human history
by David Graeber and David Wengrow

Overwhelming evidence from archaeology, anthropology, and kindred disciplines is beginning to give us a fairly clear idea of what the last 40,000 years of human history really looked like, and in almost no way does it resemble the conventional narrative. Our species did not, in fact, spend most of its history in tiny bands; agriculture did not mark an irreversible threshold in social evolution; the first cities were often robustly egalitarian. Still, even as researchers have gradually come to a consensus on such questions, they remain strangely reluctant to announce their findings to the public – or even scholars in other disciplines – let alone reflect on the larger political implications. As a result, those writers who are reflecting on the ‘big questions’ of human history – Jared Diamond, Francis Fukuyama, Ian Morris, and others – still take Rousseau’s question (‘what is the origin of social inequality?’) as their starting point, and assume the larger story will begin with some kind of fall from primordial innocence.

Simply framing the question this way means making a series of assumptions, that 1. there is a thing called ‘inequality,’ 2. that it is a problem, and 3. that there was a time it did not exist. Since the financial crash of 2008, of course, and the upheavals that followed, the ‘problem of social inequality’ has been at the centre of political debate. There seems to be a consensus, among the intellectual and political classes, that levels of social inequality have spiralled out of control, and that most of the world’s problems result from this, in one way or another. Pointing this out is seen as a challenge to global power structures, but compare this to the way similar issues might have been discussed a generation earlier. Unlike terms such as ‘capital’ or ‘class power’, the word ‘equality’ is practically designed to lead to half-measures and compromise. One can imagine overthrowing capitalism or breaking the power of the state, but it’s very difficult to imagine eliminating ‘inequality’. In fact, it’s not obvious what doing so would even mean, since people are not all the same and nobody would particularly want them to be.

‘Inequality’ is a way of framing social problems appropriate to technocratic reformers, the kind of people who assume from the outset that any real vision of social transformation has long since been taken off the political table. It allows one to tinker with the numbers, argue about Gini coefficients and thresholds of dysfunction, readjust tax regimes or social welfare mechanisms, even shock the public with figures showing just how bad things have become (‘can you imagine? 0.1% of the world’s population controls over 50% of the wealth!’), all without addressing any of the factors that people actually object to about such ‘unequal’ social arrangements: for instance, that some manage to turn their wealth into power over others; or that other people end up being told their needs are not important, and their lives have no intrinsic worth. The latter, we are supposed to believe, is just the inevitable effect of inequality, and inequality, the inevitable result of living in any large, complex, urban, technologically sophisticated society. That is the real political message conveyed by endless invocations of an imaginary age of innocence, before the invention of inequality: that if we want to get rid of such problems entirely, we’d have to somehow get rid of 99.9% of the Earth’s population and go back to being tiny bands of foragers again. Otherwise, the best we can hope for is to adjust the size of the boot that will be stomping on our faces, forever, or perhaps to wrangle a bit more wiggle room in which some of us can at least temporarily duck out of its way.

Mainstream social science now seems mobilized to reinforce this sense of hopelessness.

Rethinking cities, from the ground up
by David Wengrow

Settlements inhabited by tens of thousands of people make their first appearance in human history around 6,000 years ago. In the earliest examples on each continent, we find the seedbed of our modern cities; but as those examples multiply, and our understanding grows, the possibility of fitting them all into some neat evolutionary scheme diminishes. It is not just that some early cities lack the expected features of class divisions, wealth monopolies, and hierarchies of administration. The emerging picture suggests not just variability, but conscious experimentation in urban form, from the very point of inception. Intriguingly, much of this evidence runs counter to the idea that cities marked a ‘great divide’ between rich and poor, shaped by the interests of governing elites.

In fact, surprisingly few early cities show signs of authoritarian rule. There is no evidence for the existence of monarchy in the first urban centres of the Middle East or South Asia, which date back to the fourth and early third millennia BCE; and even after the inception of kingship in Mesopotamia, written sources tell us that power in cities remained in the hands of self-governing councils and popular assemblies. In other parts of Eurasia we find persuasive evidence for collective strategies, which promoted egalitarian relations in key aspects of urban life, right from the beginning. At Mohenjo-daro, a city of perhaps 40,000 residents, founded on the banks of the Indus around 2600 BCE, material wealth was decoupled from religious and political authority, and much of the population lived in high quality housing. In Ukraine, a thousand years earlier, prehistoric settlements already existed on a similar scale, but with no associated evidence of monumental buildings, central administration, or marked differences of wealth. Instead we find circular arrangements of houses, each with its attached garden, forming neighbourhoods around assembly halls; an urban pattern of life, built and maintained from the bottom-up, which lasted in this form for over eight centuries.⁶

A similar picture of experimentation is emerging from the archaeology of the Americas. In the Valley of Mexico, despite decades of active searching, no evidence for monarchy has been found among the remains of Teotihuacan, which had its magnificent heyday around 400 CE. After an early phase of monumental construction, which raised up the Pyramids of the Sun and Moon, most of the city’s resources were channelled into a prodigious programme of public housing, providing multi-family apartments for its residents. Laid out on a uniform grid, these stone-built villas — with their finely plastered floors and walls, integral drainage facilities, and central courtyards — were available to citizens regardless of wealth, status, or ethnicity. Archaeologists at first considered them to be palaces, until they realised virtually the entire population of the city (all 100,000 of them) were living in such ‘palatial’ conditions.⁷

A millennium later, when Europeans first came to Mesoamerica, they found an urban civilisation of striking diversity. Kingship was ubiquitous in cities, but moderated by the power of urban wards known as calpolli, which took turns to fulfil the obligations of municipal government, distributing the highest offices among a broad sector of the altepetl (or city-state). Some cities veered towards absolutism, but others experimented with collective governance. Tlaxcalan, in the Valley of Puebla, went impressively far in the latter direction. On arrival, Cortés described a commercial arcadia, where the ‘order of government so far observed among the people resembles very much the republics of Venice, Genoa, and Pisa for there is no supreme overlord.’ Archaeology confirms the existence here of an indigenous republic, where the most imposing structures were not palaces or pyramid-temples, but the residences of ordinary citizens, constructed around district plazas to uniformly high standards, and raised up on grand earthen terraces.⁸

Contemporary archaeology shows that the ecology of early cities was also far more diverse, and less centralised than once believed. Small-scale gardening and animal keeping were often central to their economies, as were the resources of rivers and seas, and indeed the ongoing hunting and collecting of wild seasonal foods in forests or in marshes, depending on where in the world we happen to be.⁹ What we are gradually learning about history’s first city-dwellers is that they did not always leave a harsh footprint on the environment, or on each other; and there is a contemporary message here too. When today’s urbanites take to the streets, calling for the establishment of citizens’ assemblies to tackle issues of climate change, they are not going against the grain of history or social evolution, but with its flow. They are asking us to reclaim something of the spark of political creativity that first gave life to cities, in the hope of discerning a sustainable future for the planet we all share.

Farewell to the ‘Childhood of Man’
by Gyrus

[Robert] Lowie made similar arguments to [Pierre] Clastres, about conscious knowledge of hierarchies among hunter-gatherers. However, for reasons related to his concentration on Amazonian Indians, Clastres missed a crucial point in Lowie’s work. Lowie highlighted the fact that among many foragers, such as the Eskimos in the Arctic, egalitarianism and hierarchy exist within the same society at once, cycling from one to another through seasonal social gatherings and dispersals. Based on social responses to seasonal variations in the weather, and patterns in the migration of hunted animals, not to mention the very human urge to sometimes hang out with a lot of people and sometimes to get the hell away from them, foraging societies often create and then dismantle hierarchical arrangements on a year-by-year basis.

There seems to have been some confusion about exactly what the pattern was. Does hierarchy arise during gatherings? This would tally with sociologist Émile Durkheim’s famous idea that ‘the gods’ were a kind of primitive hypothesis personifying the emergent forces that social complexity brought about. People sensed the dynamics changing as they lived more closely in greater numbers, and attributed these new ‘transcendent’ dynamics to organised supernatural forces that bound society together. Religion and cosmology thus function as naive mystifications of social forces. Graeber detailed ethnographic examples where some kind of ‘police force’ arises during tribal gatherings, enforcing the etiquette and social expectations of the event, but returning to being everyday people when it’s all over.

But sometimes, the gatherings are occasions for the subversion of social order — as is well known in civilised festivals such as the Roman Saturnalia. Thus, the evidence seemed to be confusing, and the idea of seasonal variations in social order was neglected. After the ’60s, the dominant view became that ‘simple’ egalitarian hunter-gatherers were superseded by ‘complex’ hierarchical hunter-gatherers as a prelude to farming and civilisation.

Graeber and Wengrow argue that the evidence isn’t confusing: it’s simply that hunter-gatherers are far more politically sophisticated and experimental than we’ve realised. Many different variations, and variations on variations, have been tried over the vast spans of time that hunter-gatherers have existed (over 200,000 years, compared to the 12,000 or so years we know agriculture has been around). Clastres was right: people were never naive, and resistance to the formation of hierarchies is a significant part of our heritage. However, seasonal variations in social structures mean that hierarchies may never have been a ghostly object of resistance. They have probably been at least a temporary factor throughout our long history.¹ Sometimes they functioned, in this temporary guise, to facilitate socially positive events — though experience of their oppressive possibilities usually encouraged societies to keep them in check, and prevent them from becoming fixed.

How does this analysis change our sense of the human story? In its simplest form, it moves the debate from ‘how and when did hierarchy arise?’ to ‘how and when did we get stuck in the hierarchical mode?’. But this is merely the first stage in what Graeber and Wengrow promise is a larger project, which will include analysis of the persistence of egalitarianism among early civilisations, usually considered to be ‘after the fall’ into hierarchy.