Containment of Freedom

Human-constructed physical structures, from roads and channeled rivers to walls and buildings, are the templates of social and psychic structures. This is the foundation of social construction and constructivism, upon which superstructures are built. Julian Jaynes suggested this operates linguistically by way of metaphors, helping to create analog structures (e.g., inner mind-space). Whatever the mechanism, the underlying theory is that we can tell a lot about a society by the kinds of structures it uses, inhabits, and speaks about.

Jaynes, for his part, seems to have limited his speculations in this area to the container metaphor. That makes sense. It’s not only that actual containers (pouches, jugs, jars, barrels, boxes, etc.) became more common as civilization developed, beginning with the agricultural revolution and later increasing with surplus yields and wide-scale trade. All structures, from temples to houses to granaries, became more enclosed and hence more containing.

In contrast, there is the example of the Piraha with their animistic mentality (the term offered by Paul Otteson). At first, Marcel Kuijsten, the editor of many collections of Jaynesian scholarship, suggested that animistic mentality was a subset of bicameral mentality; but he clarified that his suggestion was tentative. We weren’t certain at first, and we’re now leaning more toward distinguishing the two. The reason has to do precisely with the container metaphor.

The Piraha don’t seem to make or use containers. They rarely store food, except occasionally smoking some fish for trade. Even their shelters are as simple as possible. The few objects they trade for (e.g., metal axes) are treated with little sense of value and no sense of possession, just left lying around for anyone to use; or else simply to be forgotten. It’s unsurprising they have an extremely uncontained sense of self, not to mention an unstructured social order.

To be accurate, it’s not that the extreme end of non-WEIRD mentality is actually unstructured. Rather, it is structured more according to the natural world. Hunter-gatherers often have a sense of self that is shaped by the immediate environment and sensory field. The Piraha live on a river, and so maybe it’s unsurprising that their very conception of reality is one that flows and shifts, that appears and disappears as if going around a bend.

The Australian Aborigines offer a middle position, as they already had basic agriculture, including granaries. Like many tribal people, they had highly structured the world around them, though early Westerners couldn’t see it. The whole world was a garden to be tended. The Aborigines managed water, fire, and animals, much as Native Americans did. Aboriginal Songlines were a geographic mapping of psyche, based on landscape markings, seasonal patterns, ecosystems, and ancient trails.

So, in reality, human experience is always structured. But maybe that isn’t quite right. Structure implies a struction, something that was constructed. Not all societies spend much time constructing, even if there is no society that doesn’t construct something. Even the Piraha make basic things as needed, albeit on a limited scale, with heavy emphasis on the latter point. The Piraha go to the extreme of not bothering to make jewelry or ornamented clothing. Neither do they construct stories, having no storytelling tradition, although they’ll sometimes repeat the stories they’ve heard outsiders tell.

Still, the Piraha do build things, such as shelters, bows and arrows, etc. But there is something unique about building containers, objects of little use to the Piraha. The archaic bicameral mentality, according to Jaynes, likewise wasn’t modeled according to the container metaphor. Yet the structures that had developed by the time of the Bronze Age were much more containing, in the proliferation of enclosed spaces. And containers proper were becoming more commonly used.

In this context, voice-hearing also seems to have become more structured, as opposed to the egalitarian and non-hierarchical voice-speaking (i.e., spirit ‘possession’) of the Piraha. The first permanent structures were not houses to be lived in, granaries to store food, or any such thing. They apparently were ritual sites, that is to say houses for the gods, god-kings, and ancestors. Mummified bodies or skulls were literally housed there, presumably maintained as an aid in hearing the voices of the dead or the voices that spoke through the dead.

Animistic tribes like the Piraha don’t do any such thing. There is no individual who permanently possesses or is possessed by archaic authorization. Spirits and the dead can speak through any number of people, as there are no authority figures of any sort, no shamans, healers, chiefs, or council of elders. As such, when any given person dies, it’s no more relevant than any other death. Access to the voices isn’t threatened because they are free-floating identities — one might consider them communal theories of mind.

All of that changed with the agricultural revolution, and that is where an important distinction begins. Bicameral mentality, not only with its temples and later urbanization but increasingly with walled city-states and emerging empires, was more contained than animistic mentality, if far less contained than Jaynesian consciousness. The difference was communal-containment versus self-containment, but still a containment of sorts in either case, as contrasted to animistic uncontainment.

Both the bicameral-minded and the consciousness-minded had hierarchies, separating them both from the extreme opposite end of animistic-minded laissez-faire egalitarianism. Since the Piraha don’t have any authority figures at all, hierarchical or otherwise, there is no one in a position to monopolize and control voice authorization. Hence, no enforced authoritarianism, although plenty of tribalistic conventionalism and conformism that is maintained merely through shared identity.

We could speculate that authoritarianism had already appeared, if barely, among the earliest bicameral-minded societies, following the agricultural revolution, since that was the beginning of new forms of extreme stress: overcrowding, resource competition, malnutrition, famine, infectious disease, etc. Indeed, research shows that such large-scale stressors are precisely the conditions of authoritarianism. Whenever it first appeared, we can safely assert that full-on authoritarianism was taking hold by the end of the Bronze Age.

We lean in the direction of the initial wave of bicameral-minded societies having been only partly and temporarily authoritarian, as conditions changed. But is partial and temporary authoritarianism actually authoritarian? We sense that it is not, or at least not in how we understand the term. Humans can collectively respond to threats, sometimes in oppressive ways, but without forming permanent authoritarian social orders. The threat response is built into the human psyche, as it’s an evolved survival instinct. Authoritarianism isn’t merely the threat response under normal conditions, for it only appears when stressors continue indefinitely without the option of resolution or escape; the response becomes stuck in the on position and so takes exaggerated form.

The entrenchment of authoritarianism, as a source of overwhelming and pervasive stress inducing mass anxiety and trauma, might be the very thing that was undermining bicameral mentality by the end of the Bronze Age. Maybe bicameral mentality required the lingering traces of the non-authoritarian animistic mentality. The problem was that bicameral mentality required the control of animistic mentality in order to control ever larger and more unwieldy populations, but this kind of social control is anathema to animistic communalism and egalitarianism.

If we accept that view, we could interpret bicameral mentality as a very long transitional phase from animistic mentality to Jaynesian consciousness. In a sense, it was never a stable order because it was built on an internal conflict. Over time, it demanded more and more authoritarianism, which undermined the very voice-hearing that held the society together. The bicameral-minded societies were the earliest attempts at making agriculture a sustainable social order. It was an experiment and no one knew what they were doing.

The container metaphor might offer us a central insight. To contain something is to control it. Hunter-gatherers often have little need for control, depending on how much or how little stress they are under. But once agricultural settlements become permanent, control becomes necessary for continued survival. Farmers can’t simply move on and go their separate ways. That was ever more true as urbanization increased, food systems complexified, and trade became interdependent. There was no second option. When drought or famine occurred, most of the population simply died. The containing structure of civilization sometimes became a death trap.

That could also be what distinguishes early bicameral mentality from late bicameral mentality. The earliest structures were apparently ritual sites that were visited, not places of settlement. And even the first settlements were typically temporary affairs. It took many millennia for permanent settlements to become more common, as large populations became dependent on agricultural foods. There was no turning back, in the way that was previously possible with small city-states that regularly dissolved back into herder and forager tribes.

Maybe what we mean by Jaynesian consciousness is simply civilization finally hitting a tipping point, the ending of the transitional phase of bicameral mentality. The pre-agricultural practices and cultures had finally and fully been forgotten from living memory, or were somehow no longer valid and applicable to altered conditions. When the Bronze Age collapse happened, it was a crisis because there was no other option remaining, no option of a return to animistic mentality. Large urban and farming populations can’t easily transition back into tribes of any sort.

That was a period of catastrophe, as the great empires fell like dominoes when hit by a series of natural disasters (volcanoes, earthquakes, tidal waves, wildfires, climatic changes, etc.) that led to famines, refugees, and marauders. Vast numbers were suddenly forced out of their settled, stable, and secure lifestyles. What little they brought with them, they carried in containers. The container was the one structure they could rely on when all other structures had been destroyed, lost, or left behind. It was an obvious step for the container metaphor to become psychologically potent.

Self-containment was something entirely new, but it was built on the psychic structures of the prior age. It meant the final and complete suppression of the animistic mentality as a social order. Yes, the bicameral-minded social order, as a transitional phase, was over; albeit the animistic mentality could never be completely eliminated, however suppressed and distorted it became. This is maybe why some associate modern authoritarianism with a return of repressed bicameral-minded impulses in their late-stage authoritarian form: stratified hierarchies, centralized power, expansionary imperialism, standing armies, long-distance warfare, brutal oppression, genocidal slaughter, mass enslavement, written laws, court systems, moralistic norms, etc.

We were thinking about this while reading an interview with Brian J. McVeigh, a student of Julian Jaynes, in the recently published collection Conversations on Consciousness and the Bicameral Mind, edited by Marcel Kuijsten. He was talking of the need to increase self-control to stabilize and optimize consciousness. We’ve come across him talking similarly in an earlier conversation he had with Jaynes, from Discussions with Julian Jaynes. That meeting with Jaynes took place on June 5, 1991. So, this is a longstanding view of McVeigh’s, going back more than three decades and spanning his entire professional career, since that was the same year he received his doctorate.

This commitment to a control-orientation was probably something he picked up from Jaynes himself, as the two seemed in agreement. That perspective is understandable. As a society, we’ve become committed to Jaynesian consciousness. Our entire society is ordered in terms of it and so, at this point, it might be path dependence. The only way might seem to be forward. But one might wonder if there is an inherent contradiction to Jaynesian consciousness, as happened before with bicameral mentality, an intrinsic and irresolvable conflict that will worsen over time until it becomes an existential crisis.

The success of Jaynesian consciousness might end up being its doom, specifically as complexity leads to stress, anxiety, and trauma that would elicit increasing threat responses. To contain means to control, initially at a communal level, and that is precisely what predisposed bicameral mentality over time to worsening authoritarianism. That then made empires possible, even though empires ultimately can’t operate according to bicameral mentality. It was an impossible situation that made collapse nearly inevitable.

Out of the wreckage, Jaynesian consciousness created a new order of control, but it came at a high price. Over the millennia, civilization has been on a boom and bust cycle with some of the busts being doozies. So, what if we are in a similar situation or else will get to that situation sometime in the future? We think of self-containment as self-control in making autonomy and independence possible. But maybe this is more of a perception than a reality. Only the controlled would imagine freedom as yet more control.

As a side note, the word ‘freedom’ originated among Germanic tribes, probably when they were still animistic. It is cognate with ‘friend’. To be free, in this sense, meant to belong to a free people, uncontrolled and uncontained, for the identity was shared and not enforced. It’s all about relationship, not individualism. So far, humans have never found a way to have individualism without authoritarianism, for individuals act individually and hence need to be controlled for social order, collective action, and public good. This is made clear in how Germanic ‘freedom’ is the opposite of Latin ‘liberty’, which, under the Roman Empire, simply meant not being legally enslaved in a slave-based society.

This is the reason Southern slaveholders fought for liberty, not for freedom. They could make statements like, “Give me liberty, or give me death!” Liberty only applied to those who owned themselves. Then again, going back at least to the Stoics, there was the beginning of a concept of self-ownership that even slaves could claim, as no one else could own one’s soul. This sense of individualism was in compliance with authoritarianism, as the liberty of self-identity didn’t require liberty of the body. This remains true with modern wage slavery. Unlike animistic and egalitarian tribes, modern humans have little freedom to do what they will, as we live under the constant threat of hunger and homelessness if we don’t comply with and submit to the system of control.

Do we really control ourselves at all? Benjamin Libet’s research would indicate otherwise, as we apparently only become conscious of our actions after they are initiated. Control is a narrative that we tell ourselves for comfort. Self-ownership of the propertied self, what a strange thing — as if the individual could be removed from the public sector and made into a private corporation. We know that the self can never be made into an actual object separate from enmeshment in the world and relationships. Yet self-ownership clothed in the Burkean moral imagination is ideological realism at the highest level. It’s so compelling, a hypnotic trance.

But one might suspect it’s a cognitive trap, a dead end. Isn’t this a metaphorical internalization and ideological interpellation where the ego-self is made into a tyrant and slaveholder of the psychic realm, a demiurgic and archonic overlord? It seems to be an odd self-enforced authoritarianism, where one part of the psyche comes to rule over the rest; or else merely made to appear so, in acting as a puppet dictator who rationalizes the forces actually outside of his control. Exactly who is owning and controlling? Who is being owned and controlled?

Is inner authoritarianism an improvement over external authoritarianism? Or are they mirroring each other? Aren’t they ultimately of the same cloth? Is this why so many authoritarian regimes, from the Nazis to the Stalinists, rhetorically praised the individual soldier, worker, etc? Is there ever the light of individualism without the shadow of authoritarianism? How is one free when inside a container one cannot get out of? If we truly seek freedom, we might want to consider a new metaphor, and that would require new structures from which to form new identities. But it’s unclear, at this point, that we are capable of transformation without collapse.

* * *

As an additional thought, we have doubts that Jaynes’ emphasis on metaphor is sufficient. That is why we pontificate on actual structures. All metaphors begin in the physical world. But we are still left with explaining why some structures become common metaphors and why some common metaphors become internalized as identity. To this extent, we are building upon Jaynes’ own theorizing. And we could refer back to other thoughts we’ve had along these lines. It’s not only that the structures of buildings and containers potentially shape the psyche. The most important factor might be how a key component of the civilizational project is the reshaping of the landscape, particularly in light of how central landscape has always been, such as with the earliest mnemonic systems of oral cultures, from the Australian Aborigines to the archaic Greeks.

This brings us to agriculture, as control of the earth itself (Enclosure of the Mind). But that is not how it began, in the earliest glimmers of the agricultural revolution. Even many millennia later, into the post-bicameral dark age, agriculture remained a rough and primitive endeavor of weedy fields. The cultivation of grains, at the time, wouldn’t necessarily have looked much different from wild grasslands. It took the Axial Age to bring about the systematization of farmland and farming practices (e.g., weed and ergot control) that would eventually make possible large and dependable surplus yields. Land reform, during modernity, took this to the next level as a nationalistic reform agenda to enforce what Brian J. McVeigh calls the ‘propertied self’. Every aspect of the landscape fell under greater control, from the plutocratic enclosure movement to technocratic land and water management. Nothing was left to remain uncontained and uncontrolled. Even ‘wilderness’ was to be carefully managed as part of bureaucratic park systems and national territories.

As external control has increased, so have the demands of internal self-control. Authoritarianism is ever more introjected. We can’t escape the oppression because it’s infected us, to such an extent we’ve become identified with the parasite. We can’t imagine anything else because our imagination is also contained, in having spent our entire lives within contained landscapes, especially with mass urbanization and city planning. It is near perfect epistemic closure; an all-encompassing ideological realism; a totalitarian interpellation. For all that tells us about our predicament, it’s still left to be determined what made it all possible, what motivated it in the first place, and what continually compelled humanity across millennia. The rarely discussed component is not just agriculture as a system and social order but what it produced.

This is seen right from the beginning of agriculture, when the state of health plummeted under the pressure of malnutrition and pestilence. The complete alteration of the human diet with farming, in particular, was one of the most profound changes humanity has ever experienced, maybe only equaled by the megafauna die-off that immediately preceded it, which caused the initial loss of nutrient density that turned humanity toward increased intake of plant foods. But it wasn’t only what was lost. In grains and dairy, there were substances that had not previously been a central part of what humans ate. Some of these substances appear to be addictive, along with affecting neurocognitive development (The Agricultural Mind). On top of that, there was what was being replaced.

Enclosure of the Mind

“[T]he chief matter . . . being now not the fruits of the earth, and the beasts that subsist on it, but the earth itself; as that which takes in, and carries with it all the rest.”
~ John Locke, Two Treatises of Government, 1689

“As long as we keep ourselves busy tilling the earth, there is no fear of any of us becoming wild.”
~ Michel Guillaume Jean de Crevecoeur, Letters From an American Farmer, 1782

“Inclosure came and trampled on the grave
Of labour’s rights and left the poor a slave …
And birds and trees and flowers without a name
I sighed when lawless law’s enclosure came.”
~ John Clare, The Mores, 1820

“Strangely enough they have a mind to till the soil and the love of possession is a disease with them. These people have made many rules that the rich may break but the poor may not. They take their tithes from the poor and weak to support the rich and those who rule.
“They claim this mother of ours, the earth, for their own and fence their neighbors away; they deface her with their buildings and their refuse. The nation is like a spring freshet that overruns its banks and destroys all that are in its path.”
~ Sitting Bull, Speech at the Powder River Council, 1877

“The time has arrived when we should definitely make up our minds to recognize the Indian as an individual and not as a member of a tribe. The General Allotment Act is a mighty pulverizing engine to break up the tribal mass. It acts directly upon the family and the individual.”
~ Teddy Roosevelt, Address to Congress about Dawes Act, 1901

The early modern period saw the legal push for land enclosure, privatization, and consolidation. It became a powerful force in the 18th century, which destroyed the ancien regime, destabilized the social order, and precipitated revolts and eventually revolution. This was central to Enlightenment thought in the creation or exacerbation of Jaynesian consciousness, post-bicameral nostalgia, Platonic/Cartesian anxiety, atomistic individualism, capitalist realism, social Darwinism, and WEIRD culture. In a short period of time, land reform, agricultural improvements, and technological advancements led to the first dependable grain surpluses, particularly the increase of wheat production, the sudden availability and affordability of white flour, and the industrial development of the high-carb standard American diet (SAD). Also, with colonial trade, tobacco, tea, and sugar replaced local smoking herbs and herb-infused beer. Heading into the 19th century and continuing into the next, all of this combined might have contributed to the disappearance of the fairies and the emergence of a crisis of identity, followed by moral panic along with the rise of widespread mental illness, drug addiction, and other diseases of civilization, which continue to worsen, not to mention increasing rates of such things as autism — all of it central to what one could call the agricultural mind, exacerbated by mass urbanization, industrialization, and big ag.

This is an ongoing line of speculation, but the land enclosure angle is somewhat new. We’ve previously written about the enclosure movement, privatization, and the loss of the Commons, as it obviously is one of the most central changes in recent history, arguably key to understanding nearly all other changes in modernity. It coincided not only with capitalism, corporatism, and industrialization but also colonial imperialism and its vast trade network. There really is no way of comprehending what all the fuss was about, from the English Peasants’ Revolt to the English Civil War to the American Revolution, without knowing how feudalism was forcefully and violently dismantled not by the peasants and serfs but by aristocrats and monarchs. Other economic practices and systems were seen as more profitable or otherwise attractive. Eliminating the feudal system of parishes and commons, for example, eliminated all of the inconvenient social obligations and traditional roles of noblesse oblige that constrained power according to the authorizing precedence of living tradition and custom. Part of the complaint of some aristocrats, including the more radical-minded like Thomas Jefferson, was that the ancien regime was perceived as oppressively confining to everyone, including the aristocracy. But to destroy that old order meant creating something radically new in its place, which would involve new subjectivities, identities, and roles.

That was the self-enforced task set before the Enlightenment thinkers and later reformers. Individuality and independence were praised, but some at the time admitted or hinted that these were not natural law and human birthright. They had to be artificially created. First off, let’s set down a distinction: “Like social constructionism, social constructivism states that people work together to construct artifacts. While social constructionism focuses on the artifacts that are created through the social interactions of a group, social constructivism focuses on an individual’s learning that takes place because of his or her interactions in a group” (Wikipedia). Another way of thinking about this was described by Richard M. Doyle: “The philosopher Louis Althusser used the language of “interpellation” to describe the function of ideology and its purchase on an individual subject to it, and he treats interpellation as precisely such a “calling out.” Rather than a vague overall system involving the repression of content or the production of illusion, ideology for Althusser functions through its ability to become an “interior” rhetorical force that is the very stuff of identity, at least any identity subject to being “hailed” by any authority it finds itself response-able to” (Darwin’s Pharmacy). A social artifact, once socially constructed, offers an affordance that unconsciously enforces the authorization of social constructivism through the interpellation of calling out a particular behavioral subjectivity we become identified with in responding. So, to give a concrete example, we are enacting the propertied self when, after seeing a no trespassing sign, we don’t cross a fence. We’ve been hailed by the authorization of an implicit ideological realism that makes a claim over us, constraining not only our behavior but more importantly our identity. But that response has to be taught, modeled, and internalized — fences and walls, like roads and sidewalks, become the infrastructure emblazoned upon the mind.

This civilizing process was more starkly apparent at the beginning of modernity because so much of what we take for granted, within this dominant ideological realism, did not yet exist. To establish private landholdings was necessary to form the structure for the propertied self, far beyond mere self-ownership in not being a slave (i.e., liberty). The danger, to the emerging capitalist class, was that there were competing structures of identity in the communal self and bundled mind that continued to assert themselves. Consider the elite intellectual William Godwin (1756–1836) who saw “associations as constructing their members’ subjectivities, not merely directing their energies incorrectly,” writes Robert Anderson. “In this sense, then, associations are analogous to what Louis Althusser calls Ideological State Apparatuses which provide material rituals and practices in which subjects recognize themselves. Unlike Althusser’s state apparatuses, which hail subjects as individuals, political associations, in Godwin’s view, construct a “common mass” subject, in which subjects are undifferentiated one from another. Since, as Sayer and Corrigan argue, the construction of subjectivity is central to the success of a nation-state, this function of political associations is no trivial matter” (“Ruinous Mixture”: Godwin, Enclosure and the Associated Self). Those like Godwin thought collectivities were a bad thing, since individualistic propertied elites such as himself represented the ideal of his utopian ideology. During this same era, George Washington warned of the threat of political parties, and one wonders if he had similar worries on his mind, considering his treatment of the collective action of Shays’ Rebellion. Robert Anderson explains what this entails:

“The Enclosure Movement, which yokes the realms of the subject and of property, gives some historical grounding for Julia Kristeva’s theory of the abject, which describes the psychic imperatives that drive the subject to distinguish itself from a “common mass.” This force, I am suggesting, determines the movement towards the enclosure of both the commons and the “self.” It concerns an anxiety about the “clean and proper” (“le propre”) boundaries of the self (“le propre”). The subject is constructed through a process of exclusion and boundary-defense which involves an attempt to ensure the singularity and integrity of the self within its boundaries, and an attempt to protect those boundaries of the self—not merely the self, but the boundaries themselves. Abjection names the process of “exclusion” through which “‘I’ expel myself” from indifferentiation and wildness/animality. The abject, then, threatens to “engulf” the subject because it is a reminder of what it must push aside in order to live. We can see this at work in Young’s claim that enclosure transformed the country from “boundless wilds and uncultivated wastes” into “well-peopled” “inclosures . . . cultivated in a most husband-like manner . . . and yielding an hundred times the produce.” It is to guard against the “ruinous Effects of a Mixture of opposite Interests” and the “untidiness” of common and use-rights, that enclosure takes place. It cleans and distinguishes le propre—the self, the property—from the “improper.” In his chapter on “The Principles of Property,” Godwin argues that property performs this very function. In spite of the great injustices it causes, the right to property is so “sacred” that no exertion or sacrifice to protect it can be too great (2.440-50). It creates an “essential” “sphere” which protects man from outside intervention, thereby freeing up a space for the operation of “private judgment,” which is necessary for the improvement of man” (2.433). This improvement is threatened if the self is not protected from being “resolve[d] . . . into one common mass” (1.289). Abjection, then, is the psychological engine for improvement.

“The history of enclosure bears out Kristeva’s argument that abjection is ultimately a reliance on the law, which “shapes the body into a territory protected by the ‘differentiations of proper-clean and improper-dirty’” (72). Thompson reveals the extent to which “reasons of improvement” had acquired the status of legal terminology, in particular as a justification for the enclosure of the commons (“Custom” 134-60 passim). A. W. B. Simpson’s A History of Land Law articulates the historical change from “communal rights” of the commons to individual rights, which both made possible and were produced by the enclosure: “[t]he tenurial system converted the villagers [who used the land as common village property] into tenants, and the theory of the law placed the freehold of most of the lands of the manor in the lord. . . . Thus a theory of individual ownership supplants earlier more egalitarian notions” of property. And with this change, common rights came to be seen as having originated “in the grant of the lord,” rather than as “customary rights associated with the communal system of agriculture practiced in primitive village communities.” In cases where enclosure was contested, however, court rulings often reversed the implicit chronology of “improvement” to suggest that enclosure was the natural state of property rather than an innovation.”

This demonstrates how the conservative authority of hierarchical individualism usurped the role of traditional authority of the ancestral commons, the latter a vestige of archaic authorization of the bicameral mind. The historical revisionism of the conservative project of individualistic privatization hints at the underlying reactionary mind that fuels radical transformation through the invented tradition of ideological realism dressed up in robes from the wardrobe of moral imagination, proclaiming it has always been this way and putting a narratized spell of historical amnesia upon Jaynesian consciousness — and so individuality erases the evidence of its own origins, like scaffolding removed from a cathedral after thousands of laborers built it over centuries. The threat of collective action of worker associations, labor unions, etc. is not that they represent something radically and entirely new but that they are old impulses/habits carried over from the lingering habitus of the ancien regime and traditional communities that keep challenging the radical modernity of reactionary conservatism. The conservative counterrevolution is itself revolutionary, as it is also authoritarian. As noted many times before, the hyper-individualist ideology of independence is inseparable from the dependence of authoritarianism (as violently oppressive militarism, totalitarianism, imperialism, and statism) — concentrated and centralized power, concentrated and centralized land ownership, concentrated and centralized psychic energy (withdrawn from the common world-self and enclosed). It requires concerted political effort and monopolization of violence to break apart communal land and identity. The capitalist self of hyper-individualism began with the wealthy elite precisely because they were the initial beneficiaries of the enclosure movement. They were enclosing not only land but their own minds and selves from the ancient common mass of the lingering traces of the bicameral mind. Many were thinking about these issues.

Thomas Jefferson and Thomas Paine’s land reform proposals are as much, if not more, about selfhood and social identity as they are about economics (the elimination of entail and primogeniture was intended as a direct attack on aristocracy). Neither trusted an elite to control all land and all benefits from land, but they (fatalistically?) accepted that the enclosure movement was irreversible or necessary for the society that was being created, even as they acknowledged the loss of freedom as demonstrated by Native Americans who could act freely precisely because they were acting within a commons (Benjamin Franklin also made such observations about the greater indigenous freedom and its attraction). These specific founders wanted to make all individuals either land owners (Jefferson’s yeoman farmers as republican citizens) or beneficiaries of land ownership (Paine’s citizen’s dividend), in both cases a response to the enclosure movement as it encroached on the New World through land consolidation. Self-development had been limited to the elite, but what if self-development could be made available to all? The most radical challenge of Enlightenment thought was that all of humanity, even women and the poor and non-Europeans, shared a common human nature and that self-cultivated individuality was a universal potential, while others saw it as a necessary demand and obligation (develop an individual self or be punished). Like these two, Adam Smith thought inequality opposed a free society of individual citizens. And for this reason, Smith worried that, as opposed to agriculture, the new industrial labor might dumb down the population, and so public education was necessary. Without land to cultivate as part of Jeffersonian republicanism, society would have to teach other methods of self-cultivation. Godwin likewise was concerned with education, motivated by a belief that every individual should independently research, analyze, and assess everything for themselves; such deification of individualism being an impossible ideal, of course; but that apparently was of no great concern to him because he was of a less practical bent, as opposed to Jefferson and Paine’s aspirations to offer real-world solutions. From Godwin’s perspective, the point was to create and enforce individualism, including actively destroying collectivities, and then everything else would presumably fall into place.

Godwin opposed the commoners re-creating the ancient practice of the commons for the very reason that it was such a natural and deeply entrenched impulse within the shared psyche. Later on, it would be for the same reason that collective adoptions had to be made illegal to destroy Shaker communities, collective land ownership had to be constrained to weaken Hutterite communities, and collective labor unions had to be busted to shatter working-class communities. Individualism isn’t created only once in the past but must be constantly re-created through the policies and actions of government, the punishment and coercion of law, and the encouragement of incentives and subsidies. Individualism is such a weak, unstable, and unnatural state that it would break apart without constantly being shored up and defended. The modern psyche is ever seeking to return to its origins in the bundled mind of bicameralism, animism, or some other variant. The inherent failure of individualism is regularly affirmed by how individualist realism is entirely dependent on collectivist institutions of state governments, state-created corporate charters, etc., such as giving greater rights, privileges, benefits, power, autonomy, and representation to corporate persons than to most individual humans. We are suffused with an authoritarian collectivism that is the actual system behind the charade of individualism. As with Edmund Burke, Godwin’s fear of combinations, mixings, and associations — the undifferentiated masses — expressed a fear of the impure and disorderly; like an obsessive-compulsive child forever lining up her toys and panicking whenever anyone touches them. This is the demand for principled consistency in the WEIRD mind, but the only principle is order for the sake of order, as a demonstration of hierarchical power to assert the authority that authorizes ideological realism. It must be an enforced order because the ancient organic orders of tribe, kinship, village, commons, etc. or the grassroots organizing of communities and workers can’t be trusted, because they can’t be controlled hierarchically through centralized authority and concentrated power. When the last traces of bicameral voices have been silenced, conservatives see hierarchy as the only authority left to command authorization, be it the hierarchy of Klan, church, military, or something similar.

Hierarchy, though, can only accomplish this if it has been narratized and internalized, by way of the interpellation of symbolic conflation where an ideological realism recedes from consciousness in becoming the calcified frame of thought and perception. This was what made the enclosure movement essential in reifying an abstract ideology. It had to be imprinted upon not only the human psyche but the land itself, the literal ground of psyche as our embodied sense of place. The early land reforms rigidified boundaries, regimented land ownership, and systematized infrastructure — roads were straightened and waterways channelized. As the echoes of the living bicameral voices of ancestral spirits were transformed into the written word as the “dead hand” of corpses (i.e., widespread literacy), the soil became mere dust and land mere property, with the earth being mapped and bounded. Some traditions, such as Quaker living constitutionalism, sought to hold onto the remnants, as part of the memory of a former British communalism. The living landscape invoked by Australian Aborigines maybe was not so different from the English practices of beating the bounds and wassailing that reinforced a collective enclosure of a shared subjectivity. Once the commons were gone, there were no bounds of the commons left to be ritually beaten as a community nor communal lands inhabited by spirits to be wassailed. Land reform was social reform and moral reform. Godwin described the education of the mind as like the cultivation of enclosed land, which reminds one that Lockean land rights were defined not merely by use but by cultivation or improvement of enclosed land (including John Locke’s constitutional defense of slavery; the propertied self going hand in hand with the embodied self literally being property to be owned; though Locke suggested a vague qualification about how much could be enclosed, which meant the rich could accumulate vast tracts of land as long as theoretically somewhere there was still land available for others), whereas the pre-Lockean land rights of Roger Williams acknowledged that any use of even non-enclosed land proved (demonstrated and expressed) ownership, which might simply have been an invocation of the old Charter of the Forest, “guaranteeing the right to commoning (recovered in 1217), which in turn recognized subsistence rights, e.g., the right to widow’s estovers (wood needed for housing repairs, implements, etc.), and to subsistence usufructs (the temporary use of another person’s land)” (Carolyn Lesjak, 1750 to the Present: Acts of Enclosure and Their Afterlife); some of these practices continuing into 19th century American property law and still barely hanging on today in certain Western countries.

It is intriguing to think about how recently this happened, but first consider where it began. “In the Middle Ages, fifty per cent or more of the land was commons, accessible to everybody,” says Mark Vernon (Spiritual Commons). Then the enclosures began. “Overall, the pace of enclosure rose dramatically after the 1760s as landowners turned to parliament for the legitimization of their claims,” writes Nina McQuown. “Michael Turner estimates that more than twenty percent of the area of England was enclosed by act of parliament between 1750 and 1819, the vast majority of these acts occurring after 1760 (32). A high concentration—twenty-one percent of the whole of acreage enclosed by parliament—was enclosed in the decades between 1770 and 1780 and in the years of high grain prices during the Napoleonic wars (Yelling 16). Although enclosure continued until the end of the nineteenth century, by 1815 only small and discontinuous patches of common fields remained” (“Rank Corpuscles”: Soil and Identity in Eighteenth Century Representations). Then some further details from Gary Snyder: “between 1709 and 1869 almost five million acres were transferred to private ownership, one acre in every seven. After 1869 there was a sudden reversal of sentiment called the ‘open space movement’ which ultimately halted enclosures and managed to preserve, via a spectacular lawsuit against the lords of fourteen manors, the Epping Forest.” To put that in context: following the English Civil War, the Glorious Revolution reinstated the monarchy in 1688, but there now was a powerful Parliament. That Parliament would be the agent of change, beginning to take strong actions in the next century. Not only were the commons privatized, but the colonies were legally constructed as for-profit corporations, along with the creation of quasi-governmental corporations like the East India Company. This led to complaints by the colonists demanding the king stand up to Parliament, but the monarchy no longer held the reins of power. Capitalism was now running the show.

Even then, the Charter of the Forest, the founding document of the Commons established in 1217, didn’t officially end until 1971. It almost made it to the end of the Cold War and a new millennium. One might suspect the Commons seemed too communist to be allowed to survive. If it had been maintained, the people might have gotten the wrong idea about who the country belonged to. Even as the politics of it is more than relevant, what made the enclosure movement a revolutionary moment was the transformation of the Western mind. The real issue was the enclosure of the common identity and moral imagination. That is why, as colonial imperialism took hold and expanded, the rhetoric so heavily focused on the symbolic ‘wilderness’ left remaining. Though the “percentage of wastelands—forests, fens, sheep walks, and moors—enclosed and improved during the period of parliamentary enclosure was relatively small,” writes Nina McQuown, they “loomed large in the imaginations of the propagandists responsible for encouraging the expansion of both enclosure and the innovative agricultural practice that it was thought to support.” Carolyn Lesjak writes that, “If enclosure in the 16th century was largely “by agreement” and, in fact, condemned by both the church and the government, who sided with the commoners’ claims regarding “common rights,” by the 1750s the government had taken the lead and over the course of the period from 1750-1830 passed over 4000 Acts of Enclosure, resulting in over 21% of the land (approximately 6.8 million acres) being enclosed (see Ellen Rosenman’s BRANCH essay on “Enclosure Acts and the Commons”). By the end of the century, virtually all the open fields in Britain were gone.” Everything had to be cultivated, even what was deemed useless. All material was to be fodder for improvement and progress, at least in the new mythos. “After the 1760s,” McQuown explains, as the “British improvers turned the logic and language of colonialism inward, towards the wastes,” they also turned inward to colonizing the uncultivated mind.

This makes one realize how false it is to blame everything on the later political revolutions and freethinking radicals. The enclosure movement actually began much earlier, around the 14th century and the time of the English Peasants’ Revolt. Even Parliament’s legal justifications and enforcement happened generations before the Boston Massacre and Boston Tea Party. This reform of land, self, and mind unsurprisingly preceded and then overlapped with the early modern revolutions. John Adams famously wrote that, “What do We mean by the Revolution? The War? That was no part of the Revolution. It was only an Effect and Consequence of it. The Revolution was in the Minds of the People, and this was effected, from 1760 to 1775, in the course of fifteen Years before a drop of blood was drawn at Lexington. The Records of thirteen Legislatures, the Pamphlets, Newspapers in all the Colonies ought be consulted, during that Period, to ascertain the Steps by which the public opinion was enlightened and informed concerning the Authority of Parliament over the Colonies.” His only error was limiting his scope to the colonies and not pushing it further back. Enclosure of land became reform of mind became revolution of society became rupture of history. The cultivation of farming that once followed astrological cycles of return (i.e., revolution) had ground down the bones of the dead into dust. Humanity was uprooted from the past and temporally dislocated in an abstract narrative, as cyclical time became linear and nostalgia became a disease. The colonists surely experienced this most clearly: the early waves of colonists largely consisted of the most destitute landless peasants, many recently evicted from the commons and feudal villages, often arriving as slave-like indentured servants and convict labor — one can imagine the desperation and despair they felt, as being sent to the early colonies was practically a death sentence.

The colonial era may seem like a distant time from the present, but we can sense how the world we now live in was shaped then. Most Westerners remain landless peasants. The commons that once defined a communal experience of reality remain only like the shadows of a nuclear blast, the traces of a living world that remains our ancient inheritance, however cut off we have become. It may seem the egoic boundaries of our individualism have toughened into place like scars, like the crust of parched earth. We feel tired and anxious from the constant effort of maintaining the walls of our mind, to keep the self separate from the world. It takes only a moment’s lapse, when our guard is let down, before we begin to sense what we have lost. An aching tenderness remains below. We are so hungry for connection that simply stepping into the commons of a forested park can feel like a spiritual experience for many people today. Yet such moments are mere glimpses, too often quickly forgotten again. We have no shared experience, no living memory to draw from. We have no solid ground to stand upon. And the path to a different world that existed in the past has been gated shut. Or so it seems. But is that true? Where else could we be but in the world? Nature knows no boundaries nor does the human psyche, if we root down deep enough into our own soil. There is no sense of self without a sense of place, for we mould ourselves out of the clay, as we breathe the dust of our ancestors.

“Landscape is memory, and memory in turn compresses to become the rich black seam that underlies our territory.”
~ Alan Moore, Coal Country, from Spirits of Place

“Every place has its own… proliferation of stories and every spatial practice constitutes a form of re-narrating or re-writing a place… Walking [into a place] affirms, suspects, tries out, transgresses, respects… haunted places are the only ones people can live in.”
~ Michel de Certeau, The Practice of Everyday Life

* * *

Southern United States: An Environmental History
by Donald E. Davis, pp. 136-7

Without question, Kentucky’s early reputation as a hunter’s paradise influenced public opinion about all those residing in the uplands during the early settlement period. In a world that equated agricultural improvements with civilization, Native Americans living in the backcountry were seen by most Anglo-Europeans as representing the lowest evolutionary stage of human development—hunter. So needing a rationale for conquering and subdividing the largely forested frontier, Kentucky and other Ohio Valley Native Americans became hunters in the minds of most Europeans, even though they were also accomplished agriculturalists. Not surprisingly, after frontier settlers had later adopted many of the same subsistence techniques and hunting practices of their Shawnee, Creek, and Cherokee neighbors, they too were ridiculed by authorities for their “backward” and “primitive” ways. The British military commander for North America, Thomas Gage, was already of the opinion in 1772 that white backcountry settlers “differ little from Indians in their manner of life” [Davies 1972-1981, V. 203]. Perhaps more to the point is frontier historian Stephen Aron, who, in paraphrasing Gage’s letter to the Earl of Hillsborough, wrote that backcountry residents “dressed like Indians, comported themselves like Indians, and indiscriminately consorted with one another like Indians” [Aron 1996, 14]. Hunting was blamed as the principal cause of the problem by both religious reformers and the ruling elite, who in their missionary visits and public appeals, tried to promote the latest agricultural reforms among the backwoods populace. Agreeing with the reformers, Crevecoeur, the celebrated author of Letters from an American Farmer, wrote that “as long as we keep ourselves busy tilling the earth, there is no fear of any of us becoming wild; it is the chase and the food it procures that have this strange effect” [Crevecoeur 1957, 215].

“Ruinous Mixture”: Godwin, Enclosure and the Associated Self
by Robert Anderson

In this argument, I turn on its head Godwin’s claim that the right to private property “flows from the very nature of man.” While Godwin argues that the right to property is “founded” on the “right of private judgment” which “flows from the very nature of man” (2.169-70), I will argue that this argument runs counter to his notion that private property “unavoidably suggests some species of law” to guarantee it (2.439). To be more specific, I argue that Godwin’s defense of the “sacred” and “essential” “sphere” surrounding the self (1.1.70, 1.257), which is necessary to protect it from being “resolved . . . into the common mass” (1.289), draws upon the conceptual framework which informs the rhetoric of the Enclosure Movement. In particular, I note his argument that cutting off the individual from the “common mass” is necessary for “improvement”—another term for enclosure. [….]

Part of his “extensive plan of freedom” involved the socialization of the self and (ideally) property and the rejection of all restraints on individual liberty; his “reprobation,” I argue, stems from this same defense of private judgment, which can be said to serve the conservative interests of the powers that be.

  1. The Subject of the Commons

Political associations came of age in the latter part of the eighteenth century in response to the upheavals wrought by the industrial revolution. Associations were contesting the state’s efforts to regulate subjectivities. Albert Goodwin recounts that in 1790 in the industrial center of Sheffield, for example, “the master scissorsmiths,” apprehensive of the collective power of striking scissor grinders, “called a general meeting of the town’s merchants and manufacturers ‘to oppose the unlawful combinations of the scissor grinders and the combinations of all other workmen.’” The same anxiety about the collective strength of the poor which led the Sheffield city leaders to oppose combinations also led to attempts to eradicate collective landholding arrangements by enclosing the commons. Following the passage of the Private Enclosure Act of 6 June 1791, in which 6,000 acres of commons were redistributed among the wealthy “local land-holders, tithe-owners and large freeholders,” an angry mob, comprising both peasants and industrial laborers, rioted, threatening to destroy “the lives and properties of the freeholders who had approved the enclosure” (165-67). The fact that the mob opposing enclosure included industrial laborers as well as peasant farmers whose land was being appropriated reveals the close connections between enclosure and industrial capitalism. Sayer and Corrigan make the connection between enclosure, capitalism, and subjectivity in this period more explicit.

But the great catastrophe which above all pervades the eighteenth century is the acceleration of the great “freeing” of labour (and thus making labour-power) that divides wage-labouring from generalized poverty; the long movement from service to employment, from provision to production/consumption, from political theatre to the individualism . . . of the vote: enclosures. (96)

As Marx argues, enclosure ensures that workers, expropriated from their means of subsistence, are thrust into relations of dependence on the capitalists.

Goodwin goes on to relate that the response of the commoners and laborers also took forms more organized and intellectual than rioting. “When ‘5 or 6 Mechanicks’ began to meet . . . to discuss ‘the enormous high prices of Provisions,’” they initiated the creation of political societies, associations, for the (self-) education of the working classes (166). They attempted, in the words of one charter, “to persuade their benighted brethren to defend themselves against private and public exploitation by the assertion of their natural rights” (qtd. in Goodwin 167). Political societies provided laborers with an organized forum—an institution—to exert influence on the opinions of their fellow laborers, and by extension, on society at large. Godwin opposes political associations on just this account. The “interference of an organized society” to influence “opinion” is “pernicious” (2.228). “[E]ach man must be taught to enquire and think for himself,” uninfluenced by either “sympathy or coercion,” guided only by “reason.” The “creeds” of political associations, on the other hand, encourage “each man to identify his creed with that of his neighbour” (1.288). He goes on to argue that sympathy, like a disease, is especially contagious among undisciplined laborers: “While the sympathy of opinion catches from man to man, especially among persons whose passions have been little used to the curb of judgment, actions may be determined upon, which the solitary reflections of all would have rejected” (1.294). Like the unenclosed commons, sympathy threatens the distinctions upon which general improvement is predicated: the “mind of one man is essentially distinct from the mind of another. If each do not preserve his individuality, the judgment of all will be feeble, and the progress of our common understanding inexpressibly retarded” (1.236).

1790, the year the Sheffield master scissorsmiths moved to outlaw the combinations of “grinders” and “workmen,” was also the year in which Edmund Burke published his Reflections on the Revolution in France. Burke reserved his greatest hostility—and fear—for the “confusion” of the “swinish multitude” (314). Reflections reveals the extent to which concerns about the collective power of the masses, the upheavals of the industrial revolution, and anxiety about the French Revolution are intertwined. The “French Revolution,” he argues, was brought about “by the most absurd and ridiculous . . . by the most contemptible instruments. Everything seems out of nature in this strange chaos of levity and ferocity, and all sorts of crimes jumbled together with all sorts of follies.” And further, it is a “monstrous tragi-comic scene” in which “the most opposite passions necessarily succeed, and sometimes mix with each other in the mind; alternate contempt and indignation; alternate laughter and tears; alternate scorn and horror.” Burke’s concern about the inappropriate mixture driving the French Revolution invokes a common rhetoric for disparaging forms of life among peasants and the laboring population. It appears, as I will argue, in condemnations of the “waste” and the “ruinous . . . Mixture of opposite Interests” in the subsistence economy of the commons, and in Godwin’s critique of the tumult of political associations—both of which are seen as threats to individual integrity and “progress.” It also appears in his analysis of the “mechanism of the human mind.”

“Rank Corpuscles”: Soil and Identity in Eighteenth Century Representations
by Nina Patricia Budabin McQuown

The teleology of improvement could even stretch towards man’s transcendence of matter itself. This idea is amply represented in a notorious reverie from Godwin’s first edition of An Enquiry Concerning Political Justice (1793), where Godwin projects the complete domination of matter—not only the matter of the soil, but also and especially the matter of the body—as the eventual outcome of human progress, beginning with its progress in agriculture. His logic traces a line from improved agriculture to a human transcendence of appetite, illness, and death: “[t]hree fourths of the habitable globe is now uncultivated. The parts already cultivated are capable of immeasurable improvements” (2: 861), he offers, and if we can gain control “over all other matter,” Godwin suggests,

“why not over the matter of our own bodies? If over matter at ever so great a distance, why not over matter which . . . we always carry about with us, and which is in all cases the medium of communication between that principle and the external universe? In a word, why may not man one day be immortal?” (2: 862)

Godwin’s questions are only the most succinct statement of the radical hope that is at the center of late eighteenth-century bourgeois liberalism, which, as Kramnick has argued, linked agricultural improvement to “middle-class disdain for the past, for history, and for custom” (Kramnick, “Eighteenth-Century Science” 9). For reformist thinkers, in all areas of human ambition, improvement was articulated as a break with the past and an optimistic orientation towards the future.

Even so, reformers relied on an analogy between human self-ownership and landownership that draws on inherited parallels between human bodily-economy and the social system.5 Reformers saw an obvious parallel between agriculturally improved land and the human subject, who, cut off by self-reliance from the prejudice of contemporaries as well as the inherited prejudices of the past, could “cultivate” himself towards perfection, so that, as Robert Anderson puts it, “[t]he moral economy and political economy merge in the social and semantic fields covered by ‘improvement’” (630). In the works of both Godwin and Priestley, both subjectivity and soil are divided into discrete properties whose content is to be determined by one and only one owner, protected by the integrity of the individual conscience from absorption into the “common mass” of human thought and opinion (620).6 Enclosure of both self and soil meant divestment from the influence of history—those ancient patriarchs and their prejudices—as much as from the influence of the rights of commonage. If earlier authors imagined the soil as disseminating ownership of England’s past, bearing it physically into the bodies of nationals, later eighteenth-century reformist authors often render the soil as a failed medium for the transmission of historical experience and lingering subjectivities. Such failure is, paradoxically, reinscribed as improvement. Priestley destroys the “foundation” for the prejudicial thought of the past, and Charlotte Smith, as we will see in the conclusion to this chapter, insists on a failure of communication between the present and an incomprehensible past that is buried well below reach of the ploughshare, and is in any case unworthy of transmission. Smith and Priestley deny the relevance of the past to the present because both prefer to build on a different foundation.

This chapter examines late eighteenth-century reformist representations of the soil primarily in the field of agricultural writing. It offers an analysis, first, of Arthur Young’s writing in support of the enclosure of waste soils in several works of the 1770s and 1780s. In contrast to the revolutionary rhetoric of Priestley, Godwin, and Smith, Arthur Young is usually thought of as a political conservative for his response to the French Revolution.7 Yet to call Young a conservative is to fail to appreciate the common ground he shared with progressives such as Godwin and Priestley in his advocacy for enclosure and against tithes and poor rates. Moreover, in the field of agriculture at least, Young was hardly an advocate for the careful and conservative restoration of the edifice of the past. For Young, the waste spaces of Britain must be rendered into an inviting blankness empty and available enough to rival the magnetism of America’s putatively untouched interior. We start by acknowledging the ways that his arguments for the enclosure of wastelands require the figuration of Britain as Locke’s tabula rasa, ripe for human improvement, and move on to a specific discussion of Young’s descriptions of moor soils as the prototypical waste, where we find him forcibly unearthing and dispersing the evidence of other histories and interests in the soil in order to make the past available for improvement towards a progressively more fertile future. In Young’s improvement and enclosure propaganda, we can see that eighteenth-century agricultural writing does not, like Dryden’s translation of the Georgics and Defoe’s Tour in this dissertation’s chapter two, simply mediate, reframe, or cover up relics that it cannot fit into an acceptable narrative of British history, or, like Powell and Philips, allow the concept of recirculation through the soil to provide an alternative, inarticulate, and immediate relation to the past. Nor does Young, like Smollett or Tull, suggest sequestration from the violating agency of decay. Instead, Young offers an improvement that actively un-earths the past. The coherence of Young’s improved Britain is based not on a hermeneutics of repression, where fragmented and conflicting histories are buried out of sight, but on the agricultural improver’s active recycling of the past into fertile soil that will produce a better future. His texts acknowledge the tangles of historical and legal relics and material and customary restraints in and on the soil in order to enact their exhumation and dispersal. By claiming and controlling the power of putrefaction to break down and disseminate relics, Young’s improver takes over the soil’s work of decay. He releases the value of the past for the production of future goods.

In fact, Young’s program—which became the program of the new Royal Agricultural Society in 1793—was so successful that by the end of the eighteenth century, the landscape of Britain was entirely changed. With private enclosures replacing open fields formerly held in common, it was divided into subdivisions set apart by hedgerows, ditches, walls, and straight(er) roads. Where Godwin imagined a mind that could be enclosed and cultivated like soil through improvements, the poet John Clare asserted that by the first decades of the nineteenth century the British landscape had indeed come to imitate the private boundaries of the individual conscience. In his poem on the enclosure of his native village in Northamptonshire, “The Moors,” for example, Clare shows,

“Fence meeting fence in owners’ little bounds
Of field and meadow, large as garden grounds,
In little parcels little minds to please,
With men and flocks imprisoned, ill at ease.”
(46-49)

For Clare as for others, the consonance of a private landscape and a private subjectivity came with a sense of loss, both of individual rights, and of continuity with the past, whose paths “are stopt—the rude philistine’s thrall / Is laid upon them and destroyed them all” (64-5). The sense that improvement had turned out to mean the parceling up of experience into discrete and discontinuous blocks led, for Godwin, to his eventual anxiety that the possibility of future progress had also been lost. How can men whose lives are so strongly separated engage in the communication that leads to human perfection? This anxiety motivates Godwin’s An Essay on Sepulchres (1809), a text in which Godwin ultimately abandons his advocacy for a historical soil, and proposes that dirt—literally the dust of the buried corpses of great men—could be the foundation of improvement by materializing cultural and historical continuity. Godwin’s Essay proposes a different kind of soil-fertility, land that fruits out in knowledge, experience, and sentiment instead of only food. Yet Godwin’s essay is unable to imagine an immediate and therefore open-ended relation between human bodies and the dust of the dead. He strives to secure stable access to corpses that are also subjects, with particular memories and ideas to represent to their living interlocutors. Intent on controlling the legacy that the past leaves for the future, Godwin can only approach the dead through the medium of their representations—both the texts they leave behind and the monuments he wants to erect at their gravesites. Ultimately, his Essay offers less a plan for the stable continuity of experience across generations than a revelation of the limits of what representations and mediums can accomplish when they refuse the immediate agency of soil.

The Early Modern 99%
by Harry C. Merritt

Reverberations of battle are the soundtrack to developments in England at the time, where King Charles I would be executed the following year and his kingdom transformed into a commonwealth. During the course of the film, the educated and principled Whitehead is forced into labor together with the alcoholic Jacob and the simpleton Friend by O’Neill, a rogue Irishman seeking self-enrichment. […]

England was not alone in its turmoil at this time — much of Europe and the growing number of territories it ruled across the globe experienced extraordinary upheaval during the sixteenth and seventeenth centuries.

Though the “General Crisis of the Seventeenth Century” thesis originally developed by Marxist historian Eric Hobsbawm has since been challenged and amended, a number of broad themes can still be distilled. Religious dissent and political radicalism challenged the authority of both the Catholic Church and monarchs who ruled by the grace of God. Conflicts like the Thirty Years War descended into endless nihilistic pillage and slaughter before lending themselves to the creation of the modern state system. The ruthless quest for precious metals and profits fueled the conquest of Native American peoples and the establishment of the Atlantic slave trade.

Perhaps one of the most powerful conceptualizations of this period can be found in Peter Linebaugh and Marcus Rediker’s book The Many-Headed Hydra: The Hidden History of the Revolutionary Atlantic. According to Linebaugh and Rediker, the ruling classes imagined themselves to be the latter-day incarnation of Hercules, laboring to bring order to a chaotic world. The embodiment of their enemy was the mythological Hydra, whose many heads represented its multifarious elements: religious dissenters, radical commoners, rebellious African slaves, fiercely independent Native Americans, and freethinking women.

In the Americas and on the Atlantic, “the plebeian commonism of the Old [World]” encountered “the primitive communism of the New World” and formed a hybrid, alternative vision that set itself against the emergent order of modernity. Late in A Field in England, a hallucinating Whitehead declares, “I am my own master”; this realization is precisely what the ruling classes feared most in the Hydra.

Despite its multitudes, the Hydra was ultimately unsuccessful at challenging the emerging capitalist, colonialist order of modernity. In the centuries since, it would be difficult to imagine a group that parallels the Hydra in its diversity, its utopianism, and the threat it poses to the ruling classes — that is, until today. The emergence of the 99% as a social grouping that has come to be dreaded and despised by members of the 1% reproduces the dynamics and the discourse of that era.

While a new era of globalization erodes the economic security of the vast majority in the US, the 1% and their political supporters insist that they work harder than the rest of us and thus that their ownership of nearly half of the world’s wealth is for the greater good. Recently, we have been treated to numerous declarations from members of the 1% suggesting that they are under threat from the 99%.

These shrill cries about impending repression — invoking Nazism seems popular — reveal the degree to which the 1% identify with one another and fear the masses. Like the Hydra, the 99% is a rhetorical construction rather than a social formation with clear class consciousness. Its very diversity constitutes its greatest weakness. The repeated spread, defeat, and resurrection of movements like Occupy Wall Street and Spain’s indignados resemble the scattered but persistent revolts of the Hydra. Today’s Occupy activists should recall that a revolutionary conspiracy by a group of New York City laborers — black and white, slave and free — emerged in 1741 out of a waterfront tavern just blocks from today’s Zuccotti Park. With goals that are simultaneously utopian and practical, these movements appeal to both the basic needs and the deepest desires of common people around the globe. […]

Alain Badiou sees “the invariant features of every real mass movement: egalitarianism, mass democracy, the invention of slogans, bravery, the speed of reactions” embodied in both Thomas Müntzer’s movement of the 1500s and in Tahrir Square of the 2010s. As disparate groups occupy public spaces from Cairo to Madrid to New York, asserting their rights and presenting an alternative vision of their societies, we should not forget the members of the Hydra who fought against the exploitation of the ruling classes in favor of another world during the early modern period.

Some will argue that our present time is too distant to draw many practical lessons from this period. But that does not mean we cannot look to its events, personages, and symbols for inspiration. By coincidence, the rainbow flag used by today’s LGBT and peace activists bears a striking resemblance to the rainbow flag Thomas Müntzer once used to rally the German peasantry — a fitting symbol in any period for uniting a diverse coalition and insisting that another world is possible.

The Effect of Land Allotment on Native American Households During the Assimilation Era
by Christian Dippel and Dustin Frye

Toward the end of the 19th century, with the conclusion of the Indian Wars and the closing of the frontier, reformers and the U.S. government turned their attention towards the cultural assimilation of Native Americans, ninety percent of whom were living on the reservations created in the previous decades. This is signified by the famous 1892 quote: “kill the Indian in him, and save the man.”1 Assimilation efforts were centered on land allotment policies that broke up tribally owned reservation lands into individually owned homestead-sized land allotments. As the Bureau of Indian Affairs (BIA) commissioner noted: “if there were no other reason [for allotment], the fact that individual ownership of property is the universal custom among civilized people of this country would be a sufficient reason for urging the handful of Indians to adopt it.” Allotment was the cornerstone of federal Indian policy from the passing of the General Allotment (or ‘Dawes’) Act in 1887 until it ended with the passing of the Indian Reorganization (or ‘Wheeler-Howard’) Act (IRA) in 1934 (Carlson, 1981, p. 18).

When a reservation was opened for allotment, all families on the reservation were given allotments, and these allotments were held in a trust managed by the local Indian agent (the BIA’s local superintendents in charge of a reservation). Trust-status meant allottees could not sell or collateralize an allotment. In order to obtain full ‘fee-simple’ legal title for their allotment, allottees had to be declared “competent” by the BIA agent (Carlson, 1981; Banner, 2009; Otis, 2014). In short, Indian allotment was designed as a conditional transfer program aimed at cultural assimilation. The first ‘treatment arm’ was an unconditional transfer program: receiving an allotment gave the allottee the unconditional right to use the land for their own purposes, as well as the right to leasing rents. The second treatment arm was only obtained conditional on proving one’s “competence.” Allotment’s conditional transfer arm (full title) was worth almost 20 times annual per capita incomes in our data, orders of magnitude larger than modern-day conditional transfer programs. Our paper is an investigation into how individual households responded to the incentives created by this program.

We hypothesize that individual allottees responded to the allotment policy’s incentive structure by signalling cultural assimilation to the BIA agents in order to be able to obtain full property rights over their allotments. First evidence of this comes from an annual panel of reservation aggregate data from the BIA’s annual reports from 1911 to 1934. In addition to schooling, these data include very direct measures of assimilation or assimilation-signalling, namely the number of “church-going Indians” and of those “wearing civilized dress.” We combine these data with the universe of Indian allotments, which the Bureau of Land Management (BLM) has digitized with geo-location and issuance year. In a within-reservation over-time comparison, we find that school attendance, the number of church-going Indians, and the number of those wearing civilized dress increased in lockstep with the expansion of allotment, even after controlling for potential changes in school and clerical infrastructure.
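To picture the empirical design, the within-reservation over-time comparison described above amounts to a two-way fixed effects regression. Below is a minimal sketch in Python under that assumption; the file and variable names (bia_panel.csv, church_going, cum_allotments, n_schools, n_clergy) are hypothetical stand-ins, not the authors’ actual data or code.

```python
# Minimal sketch of a within-reservation over-time comparison:
# reservation and year fixed effects, plus infrastructure controls.
# All file and column names below are hypothetical illustrations.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per reservation-year, 1911-1934.
df = pd.read_csv("bia_panel.csv")

# C(reservation) absorbs fixed differences across reservations;
# C(year) absorbs shocks common to all reservations in a given year.
model = smf.ols(
    "church_going ~ cum_allotments + n_schools + n_clergy"
    " + C(reservation) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["reservation"]})

print(model.summary())
```

Clustering the standard errors by reservation reflects that the outcomes are measured at the reservation level and are likely serially correlated within a reservation over time.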

1 Quote from a speech by Capt. Richard Pratt, founder of the first Indian boarding school. Appendix-Figure A1 shows one of the many “before/after” pictures one finds in association with the Assimilation Era.

Metaphorical Space and Enclosure in Old English Poetry
by Benjamin S. Waller

A Language Older Than Words
by Derrick Jensen, pp. 101-6

Only recently—especially after teaching at a university for a few years—have I come to understand why the process of schooling takes so long. Even when I was young it seemed to me that most classroom material could be presented and assimilated in four, maybe five, years. After you learn fractions and negative numbers in first or second grade, what new principles are taught in math until algebra in junior high? It’s the same with science, art, history, reading, certainly writing. Nearly everything I learned those years—and this was true for my friends as well—was gleaned through books and conversations outside class. It’s true to the point of cliché that most of the “crap” we learn in high school, as Simon and Garfunkel put it, is a bland stew of names, dates, and platitudes to be stored up the night before each test, then forgotten the moment the test is handed in.

During high school, I believed the primary purpose of school was to break children of the habit of daydreaming. If you force them to sit still long enough, eventually they tire even of sinking turn-around fadeaways at the buzzer to win NBA championships. Having sat in the back of the class lining rockets over the left field fence for the better part of thirteen years, I was ready to move on.

I’ve since come to understand the reason school lasts thirteen years. It takes that long to sufficiently break a child’s will. It is not easy to disconnect children’s wills, to disconnect them from their own experiences of the world in preparation for the lives of painful employment they will have to endure. Less time wouldn’t do it, and in fact, those who are especially slow go to college. For the exceedingly obstinate child there is graduate school.

I have nothing against education; it’s just that education—from the Latin root educere, meaning to lead forth or draw out, and originally a midwife’s term meaning to be present at the birth of—is not the primary function of schooling. I’m not saying by all this that Mrs. Calloway, my first-grade teacher, was trying to murder the souls of her tiny charges, any more than I’ve been trying to say that individual scientists are necessarily hell-bent on destroying the planet or that individual Christians necessarily hate women and hate their bodies. The problem is much worse than that: it is not merely personal nor even institutional (although the institutions we’ve created do mirror the destructiveness of our culture). It is implicit in the processes, and therefore virtually transparent.

Take the notion of assigning grades in school. Like the wages for which people later slave—once they’ve entered “the real world”—the primary function of grades is to offer an external reinforcement to coerce people to perform tasks they’d rather not do. Did anyone grade you when you learned how to fish? What grades did you get for pretending, shooting hoops, playing pinball, reading good books, kissing (“I’m sorry, dear, but you receive a C”), riding horses, swimming in the ocean, having intense conversations with close friends? On the other hand, how often have you returned, simply for the joy of it, to not only peruse your high school history textbook, but to memorize names and dates, and, once again for the joy of it, to have a teacher mark, in bright red, your answers as incorrect?

Underlying tests as given in school are the presumptions not only that correct answers to specific questions exist, but that these answers are known to authority figures and can be found in books. Tests also generally discourage communal problem solving. Equally important is the presumption that a primary purpose of school is to deliver information to students. Never asked is the question of how this information makes us better people, or better kissers, for that matter. Systematically—inherent in the process—direct personal experience is subsumed to external authority, and at every turn creativity, critical thought, and the questioning of fundamental assumptions (such as, for example, the role of schooling on one’s socialization) are discouraged.

If you don’t believe me, pretend for a moment you’re once again in school. Pretend further that you have before you the final test for a final required class. If you fail this test, you fail the class. While you may have enjoyed the process of schooling, and may even have enjoyed this class, you enjoyed neither enough to warrant repetition. Pretend the test consists of one essay question, and pretend you know the instructor well enough to understand that if you mimic the instructor’s opinions you’ll get a higher grade. If you disagree with the instructor—pretend, finally, that you do— you’ll be held to a higher standard of proof. What do you do? Do you speak your mind? Do you lead with your heart? Do you take risks? Do you explore? Do you write the best damn essay the school has ever seen, then return next year to retake the class? Or do you join with thousands—if not millions—of students who face this dilemma daily and who astutely bullshit their way through, knowing, after all, that C stands for Credit?

Grades, as is true once again for wages in later life, are an implicit acknowledgment that the process of schooling is insufficiently rewarding on its own grounds for people to participate of their own volition. If I go fishing, the time on the water— listening to frogs, smelling the rich black scent of decaying cattails, holding long conversations with my fishing partner, watching osprey dive to emerge holding wriggling trout—serves me well enough to make me want to return. And even if I have a bad day fishing, which, as the bumper sticker proclaims, is supposed to be “better than a good day at work,” I still receive the reward of dinner. The process and product are their own primary rewards. I fish; I catch fish; I eat fish. I enjoy getting better at fishing. I enjoy eating fish. No grades nor dollars are required to convince me to do it. Only when essential rewards disappear does the need for grades and dollars arise.

It could be argued that I’m missing the point, that the product of the years of homework and papers and tests is not the physical artifacts, nor the grades, nor the bits of information, but instead the graduates themselves. But that’s my point exactly, and we must ask ourselves what sort of product is that, from what sort of process.

A primary purpose of school—and this is true for our culture’s science and religion as well—is to lead us away from our own experience. The process of schooling does not give birth to human beings—as education should but never will so long as it springs from the collective consciousness of our culture—but instead it teaches us to value abstract rewards at the expense of our autonomy, curiosity, interior lives, and time. This lesson is crucial to individual economic success (“I love art,” my students would say, “but I’ve got to make a living”), to the perpetuation of our economic system (What if all those who hated their jobs quit?), and it is crucial, as should be clear by now, to the rationale that causes all mass atrocities.

Through the process of schooling, each fresh child is attenuated, muted, molded, made—like aluminum—malleable yet durable, and so prepared to compete in society, and ultimately to lead this society where it so obviously is headed. Schooling as it presently exists, like science before it and religion before that, is necessary to the continuation of our culture and to the spawning of a new species of human, ever more submissive to authority, ever more pliant, prepared, by thirteen years of sitting and receiving, sitting and regurgitating, sitting and waiting for the end, prepared for the rest of their lives to toil, to propagate, to never make waves, and to live each day with never an original thought nor even a shred of hope.

In Letters From an American Farmer, Michel Guillaume Jean de Crévecoeur noted: “There must be in the Indians’ social bond something singularly captivating, and far superior to anything to be boasted of among us; for thousands of Europeans are Indians, and we have no examples of even one of those Aborigines having from choice become Europeans.”

Benjamin Franklin was even more to the point: “No European who has tasted Savage Life can afterwards bear to live in our societies.” It was commonly noted that at prisoner exchanges, Indians ran joyously to their relatives while white captives had to be bound hand and foot to not run back to their captors.

It is small wonder, then, that from the beginning, whenever we have encountered an indigenous culture, we have had the Lord our God— replaced now by economic exigency—tell us that “thou shalt smite them; and utterly destroy them; thou shalt make no covenant with them, nor shew mercy unto them.” What seems at first aggression is in fact self preservation, a practical staunching of what would otherwise be an unmanageable and embarrassing flow of desertions.

The same self-preservation motivated my father’s actions when I was a child. To preserve the person that he had become, he had to smite and utterly destroy all who reminded him of what could have been, and of the person he once was, far beyond conscious memory, before his parents, too, out of self-preservation destroyed him. So he lashed out with fist, foot, voice, penis, all so he could forget, all so we could never know, ourselves, that alternatives to fear existed. Had he been able to destroy the stars to so destroy me, he would have done it. Had he been able to destroy the stars, as even now we are destroying the seas and forests and grasslands and deserts, he would have succeeded, I am sure, in destroying me.

In the eighteenth century, de Crévecoeur wrote, “As long as we keep ourselves busy tilling the earth, there is no fear of any of us becoming wild.” Though the wild outside diminishes each day, as do intact cultural alternatives, the fear of these alternatives remains. The fear shall remain so long as we live the way we do, and so long as there are alternatives we must avoid. The alternatives shall remain so long as there is life. We should not be surprised, then, that our culture as a whole must destroy all life and that we as individuals must not dwell upon the horrors we visit not only upon others but upon ourselves, that we dwell instead upon the daily earning of our bread, and beyond that pile upon ourselves project after project to keep ourselves always occupied, always unconscious of the fact that we do not have to live this way, always blindered to alternatives. For if we looked we might see, if we saw we might act, and if we acted we might take responsibility for our own lives. If we did that, what then?

Speaking Is Hearing

We modern people are used to hearing voices in our heads. This is taken as normal. The inner self that speaks inwardly arises in the individual hearing that self speak. Speaking is hearing. And hearing is authorization, what elicits a response and gives language psychological force and social persuasion.

When someone catches us muttering, we can feel exposed and often embarrassed. Usually, we didn’t even realize we were muttering, until someone asked us what we said or who we were talking to. Well, we were speaking to ourselves, or rather one of our selves was speaking to us. It was a private dialogue, and someone eavesdropping on us catches us off guard.

This muttering is the adult version of what Lev Vygotsky called private speech. It’s what children do in talking to themselves before they learn to internalize it. This private speech is social in nature, even though it only involves the individual, because it develops out of the language the child learned from parents speaking to them. So, the child learns to talk to themselves in the way their parents talked to them.

The internalization of this is imperfect and incomplete. This is why we can fall back on spoken private speech to help us hear ourselves think. But none of this necessarily happens consciously. Neither the speaker nor the listener in this self/selves-dialogue is typically the ego-mind. It’s other parts of ourselves that are talking to one another, and it mostly happens on automatic pilot.

We observed a related phenomenon in others. One person on multiple occasions was heard muttering when they didn’t think anyone else was listening, but it wasn’t clear that they were consciously listening either. The muttering was of a specific kind, that of echolalia. In each incident, the person had just left a conversation and, while walking away, repeated what they had just said. It’s as if the dialogue was somehow continuing or replaying.

The muttering might have been only one side of an ongoing dialogue. But as an outsider, we were only privy to the outwardly spoken voice. Maybe the muttering was a response to a comment or question we did not hear. What was said in the prior conversation with another human was then being inwardly conveyed to some part of the self. Not all of the inner selves had been present, and they needed to know what was said. Or something like that.

There is ongoing communication and translation between the inner and outer worlds. It’s amusing, partly because it’s so common. We all do such things, usually without realizing it, until someone catches us and forces us to take notice. But even then, we quickly rationalize our odd verbal behavior and just as quickly forget it again, as we slip back into our narrative of a single coherent egoic consciousness.

* * *

“What I tell you in the dark, speak in the daylight; what is whispered in your ear, proclaim from the roofs.”
~ Matthew 10:27

“There are almost always words inside my head. In fact, I’ve asked people I live with to not turn on the radio in the morning. When they asked why, they thought my answer was weird: because it’s louder than the voice in my head and I can’t perform my morning routine without that voice.”
~ Carla

“We are familiar with the idea of ‘inner speech’ as developed by Lev Vygotsky (curiously unused by Jaynes). It is part of our consciousness that we ‘talk to ourselves’, urging ourselves to do or not to do something, hearing what we have to say. One of the huge benefits of this linguistic consciousness, Jaynes speculates, is that our ancestors became capable of sustained work over time.”
~Ciarán Benson, The Cultural Psychology of Self

“In the truly bicameral period, while bicameral individuals heard the voices of gods and ancestors, no supernatural entity speaks through a mortal’s mouth (though given neurocultural plasticity, exceptions were possible). Bicameral hallucinations were organized and heard from the right hemisphere. But in possession, what is spoken is left hemispheric speech (the left hemisphere’s Broca’s area) controlled or guided by the right hemisphere’s Wernicke’s area. Like modern practitioners of spirit possession, a prophet would often not be aware of the divine message coming from his or her mouth (Jaynes, 1976, p. 353). The OT prophets may have been engaging in ‘hallucinatory echolalia.’ Echolalia is the phenomenon that occurs when an individual involuntarily repeats, parrot-like, the words of others. The causes of this disorder are varied. For individuals who were possessed, whether by Yahweh or another supernatural entity, this phenomenon becomes hallucinatory echolalia in which a person is compelled to repeat out loud the voices of the entity that is speaking to him or her.”
~Brian J. McVeigh, The Psychology of the Bible

The Spell of Inner Speech
Who are we hearing and talking to?
Reading Voices Into Our Minds

Harriet Tubman: Voice-Hearing Visionary

Origin of Harriet Tubman in the Persistence of the Bicameral Mind

The movie ‘Harriet’ came out this year, amidst pandemic and protest. The portrayal of Harriet Tubman’s life and her strange abilities reminds one of Julian Jaynes’ theory of the bicameral mind, as written about in what is now a classic work, The Origin of Consciousness in the Breakdown of the Bicameral Mind. Some background will help and so let’s look at the biographical details of what is known. This famous Underground Railroad conductor was born Araminta Harriet Ross in the early 1820s and, when younger, she was known as ‘Minty’. Her parents were religious, as she would later become. She might also have been exposed to the various church affiliations of her master’s extended family.

These influences were diverse, write James A. McGowan and William C. Kashatus in their book Harriet Tubman: A Biography (pp. 11-12): “As a child, Minty had been told Bible stories by her mother, and she was occasionally forced to attend the services held by Dr. Anthony Thompson, Jr., who was a licensed Methodist minister. But Minty and her parents might also have been influenced by Episcopal, Baptist, and Catholic teachings since the Pattisons, Thompsons, and Brodesses initially belonged to Anglican and Episcopal churches in Dorchester County before they became Methodists. In addition, some of the white Tubmans and Rosses were originally Catholic. Accordingly, Minty’s religious beliefs might have been a composite of several different Christian traditions that were adapted to the evangelical emphasis on spiritual freedom.”

Tubman’s mixed religious background was also noted by Kate C. Larson: “The ‘creolization’ of this family more accurately reflects the blending of cultures from West Africa, Northern Europe, and local Indian peoples in the Chesapeake. As historian Mechal Sobel put it, this was a ‘world they made together.’ By the time Tubman was born, first generation Africans were visible presences in Dorchester County […] Tubman and her family integrated a number of religious practices and beliefs into their daily lives, including Episcopal, Methodist, Baptist, Catholic, and even Quaker teachings, all religious denominations supported by local white masters and their neighbors who were intimately involved with Tubman’s family. Many slaves were required to attend the churches of their owners and temporary masters. Tubman’s religiosity, however, was a deeply personal spiritual experience, rooted in evangelical Christian teachings and familial traditions” (Harriet Ross Tubman).

Other scholars likewise agree, such as Robert Gudmestad: “Like many enslaved people, her belief system fused Christian and African beliefs” (Faith made Harriet Tubman fearless as she rescued slaves). This syncretism was made simpler by the commonalities traditional African religion had with Christianity or particular sects of Christianity: worship of one God who was supreme, relating to God as a helpful friend who could be heard and talked with (a commonality with Quakerism), belief in an eternal soul and an afterlife, rites and initiations involving immersion in water, etc. Early generations of slaves were often kept out of the churches and so this allowed folk religion to take on a life of its own with a slow merging of traditions, such as how African rhythms of mourning were incorporated into Gospel music.

Furthermore, religious fervor was at a peak in the early 1800s and it was part of the world Tubman’s parents lived in and that Tubman was born into. “Both races attended the massive camp meetings, so Rit and Ben experienced these sporadic evangelical upsurges,” wrote Margaret Washington (Let The Circle Be Unbroken: The World Of Araminta (“Minty”) Ross Or The Making Of Harriet Tubman). “She grew up during the Second Great Awakening,” Gudmestad explained, “which was a Protestant religious revival in the United States. Preachers took the gospel of evangelical Christianity from place to place, and church membership flourished. Christians at this time believed that they needed to reform America in order to usher in Christ’s second coming.” Some during that restless period believed it was the End Times, as it was easier to imagine the world coming to an end than to imagine it becoming something else.

This would have been personally felt by Tubman. “A number of black female preachers,” Gudmestad goes on to say, “preached the message of revival and sanctification on Maryland’s Eastern Shore. Jarena Lee was the first authorized female preacher in the African Methodist Episcopal Church. It is not clear if Tubman attended any of Lee’s camp meetings, but she was inspired by the evangelist. She came to understand that women could hold religious authority.” The religious fervor was part of a growing political fervor, as the country moved toward Civil War. For blacks, Moses leading his people to freedom inspired more than faith and hope toward the afterlife.

Around the time of Tubman’s birth, there was the failed 1822 revolt planned by Denmark Vesey in South Carolina. Later in 1831, Nat Turner led his rebellion in nearby Virginia and that would’ve been an exciting event for enslaved blacks, especially a lonely young slave girl who at the time was being kept separate from her family and mercilessly whipped. Then throughout her teens and into her early twenties, there were numerous other uprisings: 1835–1838 Black Seminole Slave Rebellion, 1839 Amistad seizure, 1841 Creole case, 1842 Slave Revolt in the Cherokee Nation. The Creole case was the most successful slave revolt in United States history. Such tremendous events, one might imagine, could shape a young impressionable mind.

* * *

Harriet Tubman’s Ethno-Cultural Ancestry and Family Inheritance

Someone like Tubman didn’t come out of nowhere. “I am quite willing to acknowledge that she was almost an anomaly among her people,” wrote her early biographer Sarah Bradford, “and so far I can judge they all seem to be particularly intelligent, upright and religious people, and to have a strong feeling of family affection” (Harriet: The Moses of Her People). She earned her strong spirit honestly, from the black culture around her and as modeled by her parents. The spiritual inclinations, as with the knowledge of nature, came from her father: “As a clairvoyant, Minty believed that she inherited this second sense from her father, Ben. […] Listening to Ben’s stories, predictions and sharing his faith convinced Minty that an omniscient force protected her” (Margaret Washington, Let The Circle Be Unbroken: The World Of Araminta (“Minty”) Ross Or The Making Of Harriet Tubman). But it was her mother, in particular, who showed what it meant to be a fiercely protective woman when it came to family. When Tubman returned to free her family, including her elderly parents, she was acting on the values she was raised with:

“Rit struggled to keep her family together as slavery threatened to tear it apart. Edward Brodess sold three of her daughters (Linah, Mariah Ritty, and Soph), separating them from the family forever. When a trader from Georgia approached Brodess about buying Rit’s youngest son, Moses, she hid him for a month, aided by other enslaved people and freedmen in the community. At one point she confronted her owner about the sale. Finally, Brodess and ‘the Georgia man’ came toward the slave quarters to seize the child, where Rit told them, ‘You are after my son; but the first man that comes into my house, I will split his head open.’ Brodess backed away and abandoned the sale. Tubman’s biographers agree that stories told about this event within the family influenced her belief in the possibilities of resistance.” (Harriet Tubman, Wikipedia)

Whatever the cause, a strong moral sense developed in Tubman. Around the age of twelve or fifteen, there was an incident where she refused to help an overseer catch and tie up a runaway slave. Instead, she stood in front of the door and blocked his way. He threw an iron weight after the escapee, but it came up short when it hit her in the head, knocking her unconscious. She later said that it “broke my skull” and, though her master wanted to send her back to work, it took her a long time to recover. “The teenager remained in a coma for weeks,” writes M.W. Taylor, “lying on a bed of rags in the corner of her family’s windowless wooden cabin. Not until the following spring was she able to get up and walk unaided” (Harriet Tubman: Antislavery Activist, p. 16). Kate C. Larson says that, “It took months for her mother to nurse her back to health” (Harriet Ross Tubman).

Ever after, she had seizures and trance-like states (“spells”, “sleeping fits”, or “a sort of stupor or lethargy at times”), premonitions and prophetic visions (“vivid dreams”), and out-of-body and other shamanic-like experiences — possibly caused by temporal lobe epilepsy, narcolepsy, cataplexy, or hypersomnia. She claimed to have heard the voice of God that guided and protected her, that He “spoke directly to my soul”. She “prayed all the time” and “was always talking to the Lord”: “When I went to the horse trough to wash my face, and took up the water in my hands, I said, ‘Oh Lord, wash me, make me clean.’ When I took up the towel to wipe my face and hands, I cried, ‘Oh Lord, for Jesus’ sake, wipe away all my sins!’ ” (Sarah H. Bradford, Harriet, p. 11).

“During these hallucinatory states,” writes Gordon S. Johnson Jr., “she would also hear voices, screams, music, and rushing water, and feel as though her skin was on fire, while still aware of what was going on around her. The attacks could occur suddenly, without warning, even in the middle of a conversation. She would wake up and pick up the conversation where it left off a half hour later. In addition, Tubman would have terrible headaches, and would become more religious after the injury” (Harriet Tubman Suffered a TBI Early In Life).

While recuperating, she prayed for her master’s soul, that he might be saved and become a Christian. Her master’s behavior didn’t improve. In her stupor, no amount of whipping would arouse her. So he tried to sell her, but no one wanted to buy an injured and incapacitated slave, even though prior to the accident she had been hardworking and was able to do the work of a full-grown man. She didn’t want to be sold and separated from her family. One day she prayed that, if her master couldn’t be saved, the Lord should kill him and take him away. Shortly after, he did die and, with overwhelming guilt, she felt her prayer had been the cause.

Tubman’s experiences may have been shaped by African traditions, as there were many first generation slaves around. She would have been close to her immediate and extended family living in the area, as described by Professor Larson: “Harriet Tubman’s grandmother, Modesty, lived on Pattison’s property for an undetermined number of years after Rit left with Mary and moved to the Thompson plantation. Though the Thompson plantation sat about 6 miles to the west of the Pattison plantation and their neighbors along the Little Blackwater River near the bridge, their interactions were likely frequent and essential to maintaining social, political, and economic wellbeing” (Harriet Tubman Underground Railroad National Monument: Historic Resource Study).

An important familial link, as discussed above, was their shared religious inheritance. “Methodism was one source of strength, blending smoothly with cultural and religious traditions that survived the middle passage from Africa,” wrote Professor Larson. “First generation Africans, like her grandmother Modesty, embodied a living African connection and memory (Bradford, Scenes in the Life of Harriet Tubman). Tubman’s religious fervor and trust in God to protect and guide her evolved from a fusion of these traditions.” Tubman remained close to family living on nearby plantations, such as being hired out to do logging work with her father and quite likely hearing the same sermons, maybe sometimes clandestinely meeting in the “hidden church” of informal religious gatherings.

Her first biographer, Franklin Sanborn, said that she was “one degree removed from the wilds of Africa, her grandfather being an imported African of a chieftain family” and that, as “the grand-daughter of a slave imported from Africa,” she “has not a drop of white blood in her veins” (“The Late Araminta Davis: Better Known as ‘Moses’ or ‘Harriet Tubman’.” Franklin B. Sanborn Papers. Box 1, Folder 5. American Antiquarian Society). The latter claim of pure African ancestry has been disputed and was contradicted by other accounts, but at least part of her family was of recent African ancestry as was common in that era, making her a second generation American in at least one line. With a living memory of the Old World, Tubman’s maternal grandmother Modesty Green would have been treated as what is called a griot, an elder who is a teacher, healer, and counselor; a keeper of knowledge, wisdom, and customs. She would have remembered the old world and had learned much about how to live in the new one, helping to shape the creole culture into which Tubman was born.

Modesty might have come from the Ashanti tribe of West Africa, specifically Ghana. She was sold as a slave sometime before 1785, the year Tubman’s mother Rittia (Rit, Ritty) Green was born. The Ashanti ethnicity was common in the region, writes Ann Malaspina: “During the eighteenth century, more than one million slaves were bought by British, Danish, and Dutch slave traders and shipped to the Americas from the Ashanti Empire on West Africa’s Gold Coast, a rich trading region. Many Ashanti slaves were sold to buyers in Maryland” (Harriet Tubman, p. 10). The Ashanti had a proud reputation and the ethnic culture made its presence known, such as the “Asante proverbs that Harriet picked up as a young girl (‘Don’t test the depth of a river with both feet’)” (Catherine Clinton, Harriet Tubman). Along with the Ashanti, blacks of Igbo descent were numerous in the Tidewater region of Maryland and Virginia (Igbo Americans, Wikipedia). These cultures, along with the Kongo people, were known to be proud and loyal. Also, West Africa had a tradition of respect for women — as property owners and leaders, and sometimes as warriors.

It’s the reason the Tidewater plantation owners preferred them as slaves. The preference in the Deep South was different because down there plantations were large commercial operations with typically absentee owners, an aristocracy that spent most of its time in Charleston, England, or elsewhere. Tidewater slaveholders had smaller plantations and were less prosperous. This meant they and their families lived close to slaves and, in some cases, would have worked with them. These Tidewater aristocrats were more likely to use the paternalistic rhetoric that identified slaves as part of the extended family, as often was literally the case from generations of close relations, with many of the plantation owner’s mulatto children, grandchildren, cousins, etc. running around. Cultures like the Ashanti and Igbo, in being strongly devoted to their families and communities, could be manipulated to keep slaves from running away. The downside to this communal solidarity is that these ethnic groups were known to be disobedient and cause a lot of trouble, including some of the greatest slave rebellions.

Tubman is an exemplar of this Tidewater black culture. According to her own statements recorded by Frank C. Drake: “the old mammies to whom she told [her] dreams were wont to nod knowingly and say, ‘I reckon youse one o’ dem ‘Shantees’, chile.’ For they knew the tradition of the unconquerable Ashantee blood, which in a slave made him a thorn in the side of the planter or cane grower whose property he became, so that few of that race were in bondage” (“The Moses of Her People. Amazing Life work of Harriet Tubman,” New York Herald, New York, Sept. 22, 1907). The claim about her grandmother was confirmed by a piece from the year before Tubman’s death, written by Ann Fitzhugh Miller (granddaughter of Tubman’s friend Gerrit Smith), reporting that Tubman believed her maternal grandmother had been “brought in a slave ship from Africa” (“Harriet Tubman,” American Review, August 1912, p. 420).

Professor Kate C. Larson concludes that, “It has been generally assumed at least one if not more of Tubman’s grandparents came directly from Africa” (Harriet Tubman Underground Railroad National Monument: Historic Resource Study). This is the reason for speculating about a more direct African influence or, at the very least, it shows how important an African identity was to Tubman’s sense of faith and spirituality. “Like many enslaved people, her belief system fused Christian and African beliefs,” Robert Gudmestad suggests. “Her belief that there was no separation between the physical and spiritual worlds was a direct result of African religious practices. Tubman literally believed that she moved between a physical existence and a spiritual experience where she sometimes flew over the land.”

* * *

Harriet Tubman’s Special Relationship with God and Archaic Authorization

Whatever was the original source and true nature of Harriet Tubman’s abilities, they served her well in freeing slaves and saving her from her pursuers. She always trusted her voices and visions, and would change her course of action in an instant, such as the time God told her to not continue down a road and so, without hesitation, she led her fellow fugitives across the rushing waters of an icy stream, but the “several stout men” in her care “refused to follow til they saw her safe on the other side”. Sarah Bradford goes on to say that, “The strange part of the story we found to be, that the masters of these men had put up the previous day, at the railroad station near where she left, an advertisement for them, offering a large reward for their apprehension; but they made a safe exit” (p. 45). Commenting on this incident, McGowan and Kashatus note, “Similar instances occurred on her rescue missions whenever Harriet was forced to make an important decision” (Harriet Tubman: A Biography, p. 62).

This divine guidance probably made her behavior erratic and unpredictable, always one step ahead (or one step to the side) of the slave-catchers — maybe not unlike the Trickster stories she likely heard growing up, as part of the folklore tradition in African-American communities or possibly picked up from Native Americans who still lived in the area. Maybe there is a reason both Trickster stories and voice-hearing are often found in oral cultures. The Trickster, as an archetype similar to salvific figures, exists between the divine and human — Jesus often played the role of Trickster. Looking more closely at this mentality might also tell us something about the bicameral mind.

Her visions and voice-hearing were also a comfort and assurance to her; and, as some suggested, this gave her “command over others’ minds” (Edna Cheney, “Moses”, The Freedmen’s Record, p. 35) — that is to say, when around her, people paid attention and did what they were told. She had the power of charisma and persuasion, and failing that, she had a gun that she was not afraid to use to good effect. She heard God’s voice in conviction and so she spoke with conviction. One was wise to not doubt her and, when leading slaves to freedom, she did not tolerate anyone challenging her authority. But it was in moments of solitude that she most strongly felt the divine. Based on interviews with Tubman in 1865, Edna Cheney conveyed it in the following way:

“When going on these journeys she often lay alone in the forests all night. Her whole soul was filled with awe of the mysterious Unseen Presence, which thrilled her with such depths of emotion, that all other care and fear vanished. Then she seemed to speak with her Maker “as a man talketh with his friend;” her child-like petitions had direct answers, and beautiful visions lifted her up above all doubt and anxiety into serene trust and faith. No man can be a hero without this faith in some form; the sense that he walks not in his own strength, but leaning on an almighty arm. Call it fate, destiny, what you will, Moses of old, Moses of to-day, believed it to be Almighty God” (p. 36).

Friends and co-conspirators described Tubman as having lacked the gnawing anxiety and doubt that, according to Julian Jaynes, has marked egoic consciousness since the collapse of Bronze Age civilization. “Great fears were entertained for her safety,” according to William Still, an African American abolitionist who personally knew her, “but she seemed wholly devoid of personal fear. The idea of being captured by slave-hunters or slave-holders, seemed never to enter her mind.” That kind of absolute courage and conviction, based on trust of voices and visions, is not common in the modern mind. Her example inspired and impressed many.

Thomas Garrett, a close confidant, said that, “I never met with any person, of any color, who had more confidence in the voice of God, as spoken direct to her soul. She has frequently told me that she talked with God, and he talked with her every day of her life, and she has declared to me that she felt no more fear of being arrested by her former master, or any other person, when in his immediate neighborhood, than she did in the State of New York, or Canada, for she said she never ventured only where God sent her, and her faith in a Supreme Power truly was great” (letter, 1868). As an aside, there is an interesting detail about her relationship with God — it was told by Samuel Hopkins Adams, grandson of Tubman’s friend and benefactor Samuel Miles Hopkins (brother of Tubman’s biographer Sarah Bradford): “Her relations with the Deity were personal, even intimate, though respectful on her part. He always addressed her as Araminta, which was her christened name” (“Slave in the Family”, Grandfather Stories, pp. 277-278; quoted by Jean M. Humez on p. 355 of Harriet Tubman: The Life and the Life Stories).

In summarizing her faith, Milton C. Sernett concluded that, “Tubman did not distinguish between seer and saint. She seems to have believed that her trust in the Lord enabled her to meet all of life’s exigencies with a confident foreknowledge of how things would turn out, a habit others found impressive, or uncanny, as the case may be” (Harriet Tubman: Myth, Memory, and History, p. 145). That captures it exactly. This supreme confidence did not come from herself. At one moment of uncertainty, faced with a decision, she demurred: “The Lord told me to do this. I said, ‘Oh Lord, I can’t—don’t ask me—take somebody else.’” God then spoke to her: “It’s you I want, Harriet Tubman” (Catherine Clinton, Harriet Tubman: The Road to Freedom).

Anyone familiar with Julian Jaynes’ theory of the bicameral mind would perk up at this discussion of voice-hearing, specifically of commanding voices with the undeniable and infallible power of archaic authorization. Besides this, he spoke of three other necessary components of the general bicameral paradigm, as relevant today as they were during the Bronze Age (The Origin of Consciousness in the Breakdown of the Bicameral Mind, p. 324):

  • “The collective cognitive imperative, or belief system, a culturally agreed-on expectancy or prescription which defines the particular form of a phenomenon and the roles to be acted out within that form”
  • “an induction or formally ritualized procedure whose function is the narrowing of consciousness by focusing attention on a small range of preoccupations”
  • “the trance itself, a response to both the preceding, characterized by a lessening of consciousness or its loss, the diminishing of the analog or its loss, resulting in a role that is accepted, tolerated, or encouraged by the group”

The collective cognitive imperative is central to what we are exploring here. Tubman grew up in a culture where such spiritual, paranormal, and shamanic experiences were still part of a living tradition, including traces of traditional African religion. She lacked doubt about this greater reality because almost everyone around her shared this sense of faith. We are social creatures, and such shared culture has a powerful effect upon the human mind. But at that point in early modernity when Tubman grew up, most of American society had lost the practices of induction and hence the ability to enter trances.

The Evangelical church, however, has long promoted trance experiences and trained people how to talk to God and listen for his voice (still does, in some cases: Tanya Luhrmann, When God Talks Back). Because of her brain condition, Tubman didn’t necessarily require induction, although her ritual of constant prayer probably helped. She went into trance apparently without having to try, one might say against her will. There is also another important contributing factor. Voice-hearing has historically been most common among non-literate, especially preliterate, societies — that is because the written word alters the human mind, as argued by many besides Jaynes: Marshall McLuhan, Walter Ong, etc. Such illiteracy described the American slave population, since it was against the law for slaves to learn to read and write.

* * *

Harriet Tubman’s Illiteracy and Storytelling Talent

This state of illiteracy included Tubman. During the Civil War, she spoke of a desire to become literate so as to “write her own life” (Cheney, p. 38), but there is no evidence she ever learned to write. “The blow to the head Tubman received at about thirteen may have been the root cause of her illiteracy. According to Cheney’s sketch, ‘The trouble in her head prevents her from applying closely to a book’” (Milton C. Sernett, Harriet Tubman: Myth, Memory, and History, p. 105). She remained her whole life fully immersed in an oral mindset. This was demonstrated by her heavy use of figurative language with concrete imagery, as when describing a Civil War battle — recorded by visiting historian Albert Bushnell Hart:

“And then we saw the lightning, and that was the guns; and then we heard the thunder, and that was the big guns; and then we heard the rain falling, and that was the drops of blood falling; and when we came to get in the crops, it was dead men that we reaped” (Slavery and Abolition, p. 209). Also, consider how she spoke of her personal experiences: “She loves to describe her visions, which are very real to her; but she must tell them word for word as they lie in her untutored mind, with endless repetitions and details, she cannot condense them, whatever be your haste. She has great dramatic power; the scene rises before you as she saw it, and her voice and language change with her different actors” (Cheney, pp. 36-37).

Elaborating on her storytelling talent, Jean M. Humez writes: “One of Earl Conrad’s informants who as a child had known Tubman in her old age reported: “there never was any variation in the stories she told, whether to me or to any other” (Tatlock, 1939a). It is characteristic of the folklore performer trained in an oral culture to tell a story in precisely the right way each time. This is because the story itself is often regarded as a form of knowledge that will educate the young and be passed down through the generations. The storyteller must not weaken the story’s integrity with a poor performance” (Harriet Tubman: The Life and the Life Stories, p. 135).

This was also heard in how Tubman drew upon the down-to-earth style of old school religion: “Instead of the classical Greek “tricks of oratory” to which the college-educated Higginson refers, Tubman drew upon homelier sources of eloquence, such as scriptures she would have heard preached in the South. She frequently employed a teaching technique made familiar in the New Testament Gospels—the “parable” or narrative metaphor—to make her lessons persuasive and memorable” (Jean M. Humez, Harriet Tubman: The Life and the Life Stories, p. 135). She knew of Jesus’ message through oral tellings by preachers, and that was fitting, since Jesus too taught most effectively in the spoken word.

She was masterful. Even before a crowd of respectable whites, such as at abolitionist meetings, she could captivate an audience and move them to great emotion. Having witnessed a performance of Tubman’s oft-repeated story of former slave Joe’s arrival in Canada along with a rendition of the song he sang in joyous praise, Charlotte Forten recorded the impact it had on those present: “How exciting it was to hear her tell the story. And to hear the very scraps of jubilant hymns that he sang. She said the ladies crowded around them, and some laughed and some cried. My own eyes were full as I listened to her” (Charlotte Forten, journal entry, Saturday, January 31, 1862).

All of these ways of speaking are typical of those born in oral societies. As such, her illiteracy might have been key. “She is a rare instance,” as told in The Freedmen’s Record, “in the midst of high civilization and intellectual culture, of a being of great native powers, working powerfully, and to beneficent ends, entirely unaided by school or books” (Cheney, p. 34). Maybe the two factors are closely linked. Even in the ancient world, some of the most famous and respected oracles were given by the uneducated and illiterate, often women. Tubman did have the oracular about her, as she occasionally prophesied outcomes and coming events.

We mainly know of Tubman through the stories she told and retold of herself and her achievements, stories that surely were important in gaining support and raising funds in those early years when she needed provisions to make her trips to the South. She came from a storytelling tradition and, obviously, she knew how to entertain and persuade, to make real the plight of the still enslaved and the dangers it took to gain their freedom. She drew in her audience, as if they were there with bloodhounds tracking them, with their lives hanging in the balance of a single wrong decision or unfortunate turn of events.

One of her greatest talents was weaving song into her stories, but that was also part of oral culture. The slave’s life was filled with song, from morning to night. They sang in church and while at work, at births and burials. These songs were often stories, many of them taken from or inspired by the religion that was so much a part of their daily experience. Song itself was a form of language: “Tubman used spirituals to signal her arrival or as a secret code to tell of her plans. She also used spirituals to reassure those she was leading of their safety and to lift their spirits during the long journey to freedom” (M.W. Taylor, Harriet Tubman: Antislavery Activist, p. 18). She also used the songs of birds and owls to communicate, something she may have learned from the African or Native American tradition.

Song defined Tubman, as much as did her spirituality. “Religious songs,” Jean M. Humez explains, “embellished Tubman’s oral storytelling performances and were frequently central plot elements in her most popular Underground Railroad stories. There was the story of teasing the thick-witted “master” the night before her escape by using a familiar Methodist song, “I’m Bound for the Promised Land,” to communicate to her family her intention to run away. Singing was also integral to her much-told story about coded communication with fugitives she had hidden in the woods. “Go Down, Moses” meant “stay hidden,” while a “Methodist air,” “Hail, oh hail, ye happy spirits,” meant “all clear” (Bradford, 1869)” (Harriet Tubman: The Life and the Life Stories, p. 136).

Humez goes on to say that, “Though she was able to capture and reproduce the lyrics for her readers, Bradford was evidently bewildered by Tubman’s musical performances in much the same way Cheney was by her spiritual testimony: “The air sung to these words was so wild, so full of plaintive minor strains, and unexpected quavers, that I would defy any white person to learn it, and often as I heard it, it was to me a constant surprise” (Bradford, 1886, 35-36).” Her performances used a full range of expression, including through her movement. She would wave her arms and clap her hands, sway and stamp her feet, dance and gesture — according to the details of what she spoke and the rhythm of what she sang (Humez, p. 137). Orality is an embodied way of communicating.

* * *

Harriet Tubman’s Voice-Hearing and the Power of Oral Culture

Tubman may have been more talented and charismatic than most, but one suspects that such a commanding presence of speech and rhetorical persuasion was far more common among the enslaved, who were raised in an oral culture where language was one of the few sources of power in defense against those who wielded physical violence and political force. Survival required language that was coded and veiled, symbolic and metaphorical, whether in conversation or song, in order to communicate without stating something directly for fear of being overheard.

Her display of orality would have impressed many whites simply because literacy and the literary mind had by that point become the norm among the well-off white abolitionists who came to hear her. Generations had passed since orality had been prevalent in mainstream American society, especially among the emerging liberal class. The traditional culture of the ancien regime had been eroding since the colonial era. There is a power in oral cultures that the modern mind has forgotten, but there were those, like Tubman, who carried the last traces of oral culture into the 20th century; she finally died in her early 90s, in 1913.

The bewilderment of whites, slave-catchers and abolitionists alike, at Tubman’s prowess makes one think of another example of the power of oral culture. The Mongol hordes, as they were perceived, acted in a way that was incomprehensible to the literate ruling elite of European feudalism. Genghis Khan established a mnemonic system among his illiterate cavalry that allowed messages to be spread quickly and accurately. As all Mongols rode horses and carried their food with them, they were able to act collectively like a swarm and so could easily shift strategy in the middle of a battle. Oral culture had less rigid hierarchy. It was also highly religious and based in a shamanic tradition not unlike that of Africa. Genghis Khan regularly prayed to God, fasting for days until getting a clear message before he would leave on a military campaign. In similar fashion, Thomas Garrett said of Tubman: “She is a firm believer in spiritual manifestations […] she never goes on her missions of mercy without his (God’s) consent” (letter to Eliza Wigham, Dec. 27, 1856).

One imagines that, as with that Mongol leader, Tubman was so successful because she wielded archaic authorization. That was the underlying force of personality and persuasion that made her way of speaking and acting so compelling, for the voice of God spoke through her. It was a much greater way of being in the world, a porous self that extended much further and that could reach into the hearts and minds of others, apparently not limited to humans. Her “contemporaries noted that Tubman had a strange power over all animals—another indication of psychic ability—and insisted that she never feared the bloodhounds who dogged her trail when she became an Underground Railroad agent” (James A. McGowan & William C. Kashatus, Harriet Tubman: A Biography, pp. 10-11). Psychic ability, or simply a rare example of a well-functioning bicameral mind in the modern era?

Some people did perceive her as being psychic or otherwise having an uncanny perception, an ability to know things it seems she shouldn’t be able to know. It depends on one’s psychological interpretation and theological persuasion. Her compatriot Thomas Garrett was also strongly religious in his commitment to abolitionism. “In fact,” state McGowan and Kashatus, “Garrett compared Harriet’s psychic ability to hear “the voice of God as spoken direct to her soul” to the Quakers’ concept of an Inner Light, or a divine presence in each human being that allows them to do God’s will on earth. Because of their common emphasis on a mystical experience and a shared religious perspective, Tubman and the Quakers developed a mutual trust” (Harriet Tubman: A Biography, p. 62). A particular incident helps explain Garrett’s appraisal, from the same book (pp. 59-60):

“One late afternoon in mid-October 1856, Harriet arrived in Wilmington, Delaware, in need of funding for a rescue mission to the Eastern Shore. She went immediately to the office of Thomas Garrett, a white Quaker station master who also operated a hardware business in the town. “God sent me to you, Thomas,” said Harriet, dismissing the formality of a simple greeting. “He tells me you have money for me.” Amused by the request, Garrett jokingly asked: “Has God ever deceived thee?” “No,” she snapped. “I have always been liberal with thee, Harriet, and wish to be of assistance,” said the Quaker station master, stringing her along. “But I am not rich and cannot afford to give thee much.” Undeterred by the response, Harriet shot back: “God told me you’ve got money for me, and God never fools me!” Realizing that she was getting upset, Garrett cut to the chase: “Well, then, how much does thee need?” After reflecting a moment, Tubman said, “About 23 dollars.”

“The elderly Quaker shook his head in disbelief. Harriet’s request was almost exactly the amount he had received from an antislavery society in Scotland for her specific use. He went to his cash box, retrieved the donation, and handed it to his visitor. Smiling at her benefactor, Tubman took the cash, turned abruptly and marched out of the office. Astonished by the incident, Garrett later confided to another abolitionist that “there was something remarkable” about Harriet. “Whether it [was] clairvoyance or the divine impression on her mind, I cannot tell,” he admitted. “But I am certain she has a guide within herself other than the written word, for she never had any education.”1 By most accounts, Tubman’s behavior can be described as self-righteous, if not extremely presumptuous. But she viewed herself as being chosen by God for the special duty of a liberator. In fact, she admitted that she “felt like Moses,” the Old Testament prophet, because “the Lord told me to go down South and bring up my brothers and sisters.” When she expressed doubt about her abilities and suggested that the Lord “take somebody else,” He replied: “It’s you I want, Harriet Tubman.”2 With such a divine commission, Tubman was confident that her visions and actions—no matter how rude by 19th-century society’s standards—were condoned by the Almighty. Thomas Garrett understood that.”

There is no doubt she had an instinctive understanding that was built on an impressive awareness, a keen presence of mind — call it psychic or bicameral. With our rigid egoic boundaries and schizoid mentality, we inhabitants of this modern hyper-individualistic world have much to learn about the deeper realms of the bundled mind, of the multiplicity of self. We have made ourselves alien to our own human and animal nature, and we are the lesser for it. The post-bicameral loss of not only God’s voice but of a more expansive way of being is still felt in a nostalgic longing that continues to rule over us, ever leading to backlashes of the reactionary mind. Even with possible brain damage, Tubman was nowhere near as mentally crippled as we are with our prized ego-consciousness that shuts out all other voices and presences.

In the Western world, it would be hard to find such a fine specimen of visionary voice-hearing. Harriet Tubman had a genius about her, both genius in the modern sense of brilliance and genius in the ancient sense of a guiding spirit. If she were around today, she would likely be medicated and institutionalized or maybe imprisoned, as a threat to sane and civil society (Bruce Levine, “Sublime Madness”: Anarchists, Psychiatric Survivors, Emma Goldman & Harriet Tubman). Yet there are still other societies, including developed countries, in the world where this is not the case.

Tanya Luhrmann, as inspired by Julian Jaynes, went into anthropology, where she researches voice-hearing (her work on evangelicalism is briefly noted above). One study she did compared the experiences of voice-hearers in the United States, India, and Ghana (Differences in voice-hearing experiences of people with psychosis in the U.S.A., India and Ghana: interview-based study). Unlike here in this country, voice-hearers in certain non-Western cultures are not treated as mentally ill and, unsurprisingly, neither do they experience cruel and persecutory voices — quite the opposite: their voices tend to be kind, affirming, and helpful, as was the case with Tubman.

“In the case of voice hearing, culture may also play a role in helping people cope.  One study conducted by Luhrmann, the anthropologist, found that compared to their American counterparts, voice-hearing people diagnosed with schizophrenia in more collectivist cultures were more likely to perceive their voices as helpful and friendly, sometimes even resembling members of their friends and family. She adds that people who meet criteria for schizophrenia in India have better outcomes than their U.S. counterparts. She suspects this is because of “the negative salience” a diagnosis of schizophrenia holds in the U.S., as well as the greater rates of homelessness among people with schizophrenia in America” (Joseph Frankel, Psychics Who Hear Voices Could Be On to Something).

One suspects that the Ashanti and related African cultures that helped shape black traditions in Tubman’s Maryland are continuous with the culture still existing in Ghana to this day. After all, the Ashanti Empire, founded in the early colonial era (1701), continued its rule well into the twentieth century (1957). If it’s true that her grandmother Modesty was Ashanti, that would go a long way in explaining the cultural background to Tubman’s voice-hearing. It’s been speculated her father was the child of two Africans, and it was directly from him that she claimed to have inherited her peculiar talents. It’s possible that elements of the bicameral mind survived later in those West African societies and from there were carried across the Middle Passage.

* * *

The Friendship and Freedom of the Living God

It’s important to think about the bicameral mind by looking at real-world examples of voice-hearing. It might teach us something about what it means to be in relationship with a living God — a living world, a living experience of the greater mind, the bundled self (no matter one’s beliefs). Many Christians talk about such things, but few take it seriously, much less experience it or seek it out. That was what drew the Quakers to Tubman and others like her who were influenced by the African tradition of a living God. It wasn’t only a commonality of politics, in fighting for abolitionism and such. Rather, the politics was an expression of that particular kind of spiritual and epistemological experience.

To personally know God — or, if you prefer, to directly know concrete, lived reality — without the intervention of priest or text or the equivalent can create immense power through authorization. It is an ability to act with confidence, rather than bowing down to the external authority of hierarchical institutions, be it church clergy or plantation aristocracy. But it also avoids the other extreme, that of getting lost in the abstractions of egoic consciousness that drain psychic reserves and make human will impotent. As Harriet Tubman proved, this other way of being can be a source of empowerment and liberation.

What made this possible is not only that she was illiterate but that she was unchurched as well. In their own way, Quakers traditionally maintained a practice of being unchurched, avoiding certain formal church institutions such as the ministerial profession. Slaves, on the other hand, were often forced to be unchurched in not being allowed to participate in formal religion. This would have helped maintain traditional African spiritual practice and experience. Interestingly, as J.E. Kennedy reports, one set of data found that “belief in the paranormal was positively related to religious faith but negatively related to religious participation” (The Polarization of Psi Beliefs; as discussed in NDE: Spirituality vs Religiosity). It’s ironic that formal religion (organized, institutionalized) and literacy, specifically in a text-based religion, have the powerful effect of disconnecting people from the experience of God. Yet experience of God can break the spell of that mind virus.

The other thing is that, like African religion, the Quaker emphasis was on the communal. This might not seem obvious, given the Quaker belief in the individual’s relationship to God. That is where Tubman’s example is helpful. She too had an individual relationship to God, but her identity was also tied closely to kinship, community, and ancestry. We need to think more carefully about what is meant when we speak of individuality. One can gain one’s own private liberty by freeing oneself from shackled enslavement, that is to say changing one’s status from owned by another to owned by oneself (i.e., owned by the ego-self, in some ways an even harsher taskmaster). Freedom, however, is something else entirely. The etymology of ‘freedom’ is the same as ‘friend’. To be free is to be among friends, to be a member of a free society — one is reminded that, to Quakers and West Africans alike, there was an inclination to relate to God as a friend. Considering this simple but profound understanding, it wasn’t enough for Tubman to escape her oppressive bondage, if she left behind everyone she loved.

Often she repeated her moral claim to either liberty or death, as if the two were of equivalent value; whereas freedom is about life, and the essence of life is shared, for freedom is always about connection and relationship, about solidarity and belonging. She couldn’t be free alone and, under the will of something greater than her, she returned South to free her kith and kin. The year Harriet Tubman first sought freedom, 1849, was the same year as the birth of Emma Lazarus, a poet who would write some of the best-known words on slavery and oppression, including the simple statement that, “Until we are all free, we are none of us free.” About a century later, this was rephrased by Martin Luther King Jr. during the Civil Rights movement when he said, “No one is free until we are all free.” One could trace this insight back to the ancient world, as when Jesus said, “Truly I tell you, whatever you did for one of the least of these brothers and sisters of mine, you did for me.” That is freedom.

A living God lives among a living generation of people, a living community. “For where two or three gather in my name,” as Jesus also taught, “there am I with them.” Quakers had a tradition of living constitutionalism, something now associated with liberalism but originating in a profound sense of the divine (Where Liberty and Freedom Converge). To the Quaker worldview, a constitution is a living agreement and expression of the Divine, a covenant between God and a specific people; this is related to why Quakers denied any natural law that would usurp the authorization of this divine presence. A constitution is not a piece of paper nor the words upon it. Nor can a constitution be imposed upon other people outside of that community of souls. So, neither slaves nor following generations are beholden to a constitution enacted by someone else. This is why Thomas Jefferson assumed later Americans would forever seek out new constitutions to express their democratic voice as a people. But those who understood this best were Quakers; or those, like Thomas Paine, who were early on influenced by the Quaker faith.

Consider John Dickinson, who was raised as a Quaker and, after inheriting slaves, freed them. He is the author of the first draft of America’s first constitution, the Articles of Confederation, which was inspired by Quaker constitutionalism. The Articles of Confederation was a living document, in that its only power was the authority of every state agreeing to it with total consensus, with no change allowed without further consensus. The second constitution, simply known as the United States Constitution and unconstitutionally established according to the first constitution (The Vague and Ambiguous US Constitution), was designed to be a dead letter, and it has become famous for enshrining the institution of slavery. Rather than expressing a message of freedom, it was a new system of centralized power and authority. The deity invoked under this oppression is a dead god, a god of death. No one hears the voice of this false god, this demiurge.

Such a false idol can make no moral claim over a free people. As such, a free people assert their freedom by the simplest act of walking away, as did Harriet Tubman when she followed the drinking gourd pointing to the North Star, and as she repeated many times in guiding her people to what was, to them, the Promised Land. What guided her was the living voice of the living God. They had their own divine covenant that took precedence over any paper scribbled upon by a human hand.

* * *

Harriet Tubman, an Unsung Naturalist, Used Owl Calls as a Signal on the Underground Railroad
by Allison Keyes, Audubon Magazine

“It was in those timber fields where she learned the skills necessary to be a successful conductor on the Underground Railroad,” Crenshaw explains, “including how to read the landscape, how to be comfortable in the woods, how to navigate and use the sounds that were natural in Dorchester County at the time.”

Underground Railroad Secret Codes
from Harriet Tubman Historical Society

Supporters of the Underground Railroad used words railroad conductors employed every day to create their own code as a secret language in order to help slaves escape. Railroad language was chosen because the railroad was an emerging form of transportation and its communication language was not widespread. Code words would be used in letters to “agents” so that if the letters were intercepted the people involved could not be caught. Underground Railroad code was also used in songs sung by slaves to communicate with each other without their masters being aware.

Myths & Facts About Harriet Tubman
from National Park Service

Tubman sang two songs while operating her rescue missions. Both are listed in Sarah Bradford’s biography Scenes in the Life of Harriet Tubman: “Go Down Moses” and “Bound For the Promised Land.” Tubman said she changed the tempo of the songs to indicate whether it was safe to come out or not.

Songs of the Underground Railroad
from Harriet Tubman Historical Society

Songs were used in everyday life by African slaves. Singing was a tradition brought from Africa by the first slaves; sometimes their songs are called spirituals. Singing served many purposes, such as providing a repetitive rhythm for repetitive manual work, and offering inspiration and motivation. Singing was also used to express their values and solidarity with each other and during celebrations. Songs were used as tools to remember and communicate since the majority of slaves could not read.

Harriet Tubman and other slaves used songs as a strategy to communicate with slaves in their struggle for freedom. Coded songs contained words giving directions on how to escape (known as signal songs) or where to meet (known as map songs).

Songs used Biblical references and analogies of Biblical people, places and stories, comparing them to their own history of slavery. For example, “being bound for the land of Canaan” for a white person could mean ready to die and go to heaven; but to a slave it meant ready to go to Canada.

Scenes in the Life of Harriet Tubman
by Sarah Hopkins Bradford
pp. 25-27

After nightfall, the sound of a hymn sung at a distance comes upon the ears of the concealed and famished fugitives in the woods, and they know that their deliverer is at hand. They listen eagerly for the words she sings, for by them they are to be warned of danger, or informed of safety. Nearer and nearer comes the unseen singer, and the words are wafted to their ears:

Hail, oh hail ye happy spirits,
Death no more shall make you fear,
No grief nor sorrow, pain nor anger (anguish)
Shall no more distress you there.

Around him are ten thousan’ angels,
Always ready to ‘bey comman’.
Dey are always hobring round you,
Till you reach the hebbenly lan’.

Jesus, Jesus will go wid you;
He will lead you to his throne;
He who died has gone before you,
Trod de wine-press all alone.

He whose thunders shake creation;
He who bids the planets roll;
He who rides upon the temple, (tempest)
An’ his scepter sways de whole.

Dark and thorny is de desert,
Through de pilgrim makes his ways,
Yet beyon’ dis vale of sorrow,
Lies de fiel’s of endless days.

I give these words exactly as Harriet sang them to me to a sweet and simple Methodist air. “De first time I go by singing dis hymn, dey don’t come out to me,” she said, “till I listen if de coast is clar; den when I go back and sing it again, dey come out. But if I sing:

Moses go down in Egypt,
Till ole Pharo’ let me go;
Hadn’t been for Adam’s fall,
Shouldn’t hab to died at all,

den dey don’t come out, for dere’s danger in de way.”

Let The Circle Be Unbroken: The World Of Araminta (“Minty”) Ross Or The Making Of Harriet Tubman
by Margaret Washington

I. Building Communities
C. It Takes a Village to Raise a Child.

Enslaved African Americans came from a heritage that embraced concepts of solidarity in a descending order from the larger ethnic group, to the communal village, to the extended family to the nuclear family. Individualism (as opposed to individuality) was considered selfish and antithetical to the broader interests of a unit. Whether societies were matrilineal or patrilineal, nearly all were patriarchal (power rested with men). Nonetheless, the glue that bound the communal circle was the woman, considered the life giving force, the bearer of culture, essence of aesthetic beauty and key to a community’s longevity. Mothers, grandmothers, aunts, sisters etc. had oversight of children until puberty, when male and female rites of passage prepared them separately for their gendered communal roles. West African women were spiritually strong, morally respected, valued for their economic propensity, important in governance and in some cultures (Ashanti, Kongo, Ibo) powerful warriors. However devalued and exploited in America, Modesty, Rit and Minty exemplified how enslaved women resisted a sense of futility or fatalism and refashioned African attributes of beauty, dignity, self-worth and ethics. Enslaved women combed the waterways, forests and woods to obtain roots, herbs, leaves, sap, barks and other medicinal products for healing, amulets and even conjuration. Rit certainly used such remedies to nurse Minty back to health after extreme exhaustion, illnesses, beatings and her near fatal blow on the head. Rit learned these remedies and poultices from her mother Modesty and Harriet Tubman used them on the Underground Railroad. Their example reveals the significance of women to the community and that despite the assaults on the black family; it remained an institution, which even separation could not sever. […]

II ANCHORING THE SPIRIT
A. The Hidden Church: An African-Christian Synthesis.

If community was the base of African and African American life and culture, spirituality was the superstructure. Certainly enslaved people ultimately embraced Christianity. But for generations Southern whites feared exposing blacks to Christianity. The Bible’s Old Testament militant nationalism and New Testament’s spiritual  egalitarianism were not lost on African Americans, a few of whom were literate and the majority of whom felt that baptism was one kind of freedom.

Like most enslaved children, young Minty grew up outside of a church. However, since Ben Ross’s owner Anthony Thompson Sr., was a practicing Methodist, Minty’s family heard Christian sermons. But Edward Brodess was not devout and when he separated the Ross family, little Minty was hired out and did not receive white religious benevolence. But a tradition of black religion and spirituality existed independent of whites. In African culture, sacred worship embedded every aspect of life (rites of passage, marriage, funerals, child birth, etc.). Divine reverence was not confined to a building, a single ceremony or a specific day of the week. Spirituality was pervasive, expressive, emotional and evocative. Although the religious culture developed in America had African roots, the ravages of bondage created more social-spiritual convergences. In Minty’s world, spirituality was wrapped in temporal concerns affecting the individual, the family and the community. Worship was praising, praying, lamenting, hoping and drawing strength from each other. Long before Minty’s birth, Africans in America had created a “hidden church” where enslaved people gathered clandestinely (the woods, in cabins, in boats, in white people’s kitchens and even in the fields). In the hidden church they recounted religious and secular experiences; gave testimonies and created a space where women such as Rit could express the pain of having children sold or of trying to bring Minty back to life after her head was bashed in. In the hidden church, enslaved people created subversive songs, prayed for spiritual salvation, heavenly retribution and freedom.

Africans traveling the Maafa brought an ethos that merged the sacred and secular worlds. Enslaved African Americans embraced Christianity but also selectively adapted it to previous traditions and to their historical circumstances. Above all, they rejected incongruous white teachings meant to relegate blacks to perpetual slavery. Rather than being converted to Christianity as taught by whites, enslaved people converted Christianity to their own needs. Moreover, some significant African and Christian traditions had noteworthy commonalities.

Africans, like Christians believed in one God (Nzambi among the Bantu, Onyame among the Akan-Ashanti for example) who was the apex of all existence just as humanity was the center of earthly life. While gendered concepts of the African Supreme Being varied, like Jehovah, Africans’ God was revered, all-powerful and approachable. However, unlike Jehovah, the African Supreme Being was not feared, jealous nor wrathful. Other spirits exist in the African pantheon, like saints in Catholicism. But there was only one God. Hence, when whites spoke of a Supreme God, Africans understood. Harriet Tubman’s God was an all-powerful friend. According to Thomas Garrett, her close friend and a beloved Quaker Underground Railroad Conductor, Harriet spoke to God every day of her life. “I never knew anyone so confident of her faith,” said Garrett. (Letter in Bradford)

Africans, like Christians, believed in a soul, sometimes called the “heart” or “voice.” The soul was responsible for human behavior in life and was one’s spiritual existence after death. Some ethnicities had complicated concepts of the soul; others simply recognized the soul as the “little me in the big me” which lived on. Africans believed in honoring this life after death, especially as part of the kinship spiritual connection (ancestor reverence), which brought protection to the living. The curse of the dead was much dreaded in Africa and in America. Hence the importance of burial and funeral rites throughout the Diaspora, even today. A woman such as Harriet Tubman who embraced Christianity, also blended a spiritual syncretism that constructed a concept of the soul around moral ethics and faith imparted through the word of God, “as spoken to her soul” according to her friend Garrett. “She is a firm believer in spiritual manifestations . . . she never goes on her missions of mercy without his (God’s) consent.” (Garrett to Eliza Wigham, in McGowan, 135)

Water was a life giving force in African culture and the spirit world was under water. Throughout the African Diaspora, water represented divine transformations—birth, death, baptism and rebirth. For many enslaved people, accepting Christianity carried implications reminiscent of older traditions that surpassed what whites intended. In African cultures, an initiate received a “sacred bath” following a special protracted rite of passage symbolizing acceptance and integration into the community. Similarly, with Christianity enslaved people sought salvation through isolation, prayer, meditation, and communication with God through visions and signs from the natural environment. Baptism by total immersion represented final acceptance into the “ark of safety.” Although Methodists baptized by sprinkling, enslaved people insisted on going “down under” the water. They also equated spiritual transformation with secular change. Such thinking was Christian because the New Testament upheld spiritual egalitarianism. It was also African: One traveled briefly into the watery world of the ancestors as an uncivil “little spirit of the bush” full of individualistic anti-communal tendencies. One emerged from the water as a citizen of the community able to partake of all rights and privileges. The change was both divine and temporal; it was fervent, overwhelming and thoroughgoing. Canals, marshes, swamps and rivers surrounded African descended people on the Eastern Shore. Here they labored as slaves. Here they were baptized and hence constantly reminded of water’s spiritual and liberating significance.

Minty’s Christian conversion experience probably happened while working for the Stewarts in Caroline County. Whether because of that experience or her blow on the head, Minty insisted she spoke to God, had trances and saw visions that foretold future events. As a clairvoyant, Minty believed that she inherited this second sense from her father, Ben. Africans and African Americans believed that a clairvoyant person was born with a “caul” or “veil,” a portion of the birth membrane that remained on the head. They were seers and visionaries who communicated with the supernatural world and were under a special spiritual dispensation. Visions sometimes came while Minty worked, were accompanied by music and articulated in a different language. Minty also claimed exceptional power. When Edward Brodess sent slave traders to Ben’s cabin to inspect Minty, she prayed for God to cleanse Brodess’s heart and make him a good man or kill him. Brodess’ death convinced Minty that she had “prayed him to death.”1 Since his death put her in imminent danger of sale, Minty knew it was a sign from God to flee.

Northerners called Ben “a full-blooded Negro.” His parents were probably African born and told him the old Maafa adage that he passed on to Minty: some Africans could fly. Indeed, captured Ibo people committed suicide believing that their spirits flew back to Africa.2 Similarly, as Minty envisioned her escape, “She used to dream of flying over fields and towns, and rivers and mountings, looking down upon them ‘like a bird.’” When it appeared as if her strength would give out and she could not cross the river, “there would be ladies all dressed in white over there, and they would put out their arms and pull me across.” Listening to Ben’s stories, predictions and sharing his faith convinced Minty that an omniscient force protected her. In visions, she became a disembodied spirit observing earthly and heavenly scenes. Harriet Tubman told friends that God “called” her to activism against her wishes. She begged God to “get someone else” but to no avail. Since God called her, she depended on God to guide her away from danger.

Balance of Egalitarianism and Hierarchy

David Graeber, an anthropologist, and David Wengrow, an archaeologist, have a theory that hunter-gatherer societies cycled between egalitarianism and hierarchy. That is to say, hierarchies were temporary and often seasonal. There was no permanent leadership or ruling caste, as seen in the fluid social order of still-surviving hunter-gatherers. This carried over into the early settlements, which were initially transitory meeting places, likely for feasts and festivals.

There are two questions that need to be answered. First, why did humans permanently settle down? Second, why did civilization get stuck in hierarchy? These questions have to be answered separately. For millennia into civilization, the egalitarian impulse persisted within many permanent settlements. There was no linear development from egalitarianism to hierarchy, no fall from the Garden of Eden.

Julian Jaynes, in his theorizing about the bicameral mind, offered a possible explanation. A contributing factor for permanent settlements may have been that the speaking idols had to be kept in a single location, with agriculture developing as a later result. Then, as societies became more populous, complex, and expansive, hierarchies (as with moralizing gods) became more important to compensate for the communal limits of a voice-hearing social order.

That kind of hierarchy, though, was a much later development, especially in its extreme forms not seen until the Axial Age empires. The earlier bicameral societies had a more communal identity. That would’ve been true on the level of experience, as even the voices people heard were shared. There wasn’t an internal self separate from the communal identity and so no conflict between the individual member and larger society. One either fully belonged to and was immersed in that culture or not.

Large, complex hierarchies weren’t needed. Bicameralism began in small settlements that lacked police, court systems, standing armies, etc. — all the traits of the oppressively authoritarian hierarchies that would come later, along with such late developments as the simultaneous appearance of sexual moralizing and pornographic art. It wasn’t the threat of violent force by centralized authority and concentrated power that created and maintained the bicameral order but, as still seen with isolated indigenous tribes, shared identity and experience.

An example is that of the early Egyptians. They were capable of impressive technological feats and yet they didn’t even have basic infrastructure like bridges. It appears they initially were a loose association of farmers organized around the bicameral culture of archaic authorization who, in the off-season, built pyramids without coercion. Slavery was not required for this, as there is no evidence of forced labor.

In so many ways, this is alien to the conventional understanding of civilization. It is so radically strange that to many it seems impossible, especially when it gets described as ‘egalitarian’, a framing that places it within modern ideas. Mention primitive ‘communism’ or ‘anarchism’ and you’ll really lose most people. Nonetheless, however one wants to describe and label it, this is what the evidence points toward.

Here is another related thought. How societies went from bicameral mind to consciousness is well-trodden territory. But what about how bicameralism emerged from animism? They share enough similarities that I’ve referred to them as the animistic-bicameral complex. The bicameral mind seems like a variant or extension of the voice-hearing in animism.

Among hunter-gatherers, it was often costume and masks through which gods, spirits, and ancestors spoke. Any individual potentially could become the vessel of possession because, in the animistic view, all the world is alive with voices. So, how did this animistic voice-hearing become narrowed down to idol worship of corpses and statues?

I ask this because this is central to the question of why humans created permanent settlements. A god-king’s voice of authorization was so powerful that it persisted beyond his death. The corpse was turned into a mummy, as his voice was a living memory that kept speaking, and so god-houses were built. But how did the fluid practice of voice-hearing in animism become centralized in a god-king?

Did this begin with the rise of shamanism? Some hunter-gatherers don’t have shamans. But once the role of shaman becomes a permanent authority figure mediating with other realms, it’s not a large leap from a shaman-king to a god-king who could be fully deified in death. In that case, how did shamanism act as a transitional proto-bicameralism? In this, we might begin to discern the hitch upon which permanent hierarchy eventually got stuck.

I might point out that there is much disagreement in this area of scholarship, as expected. The position of Graeber and Wengrow is highly contested, even among those offering alternative interpretations of the evidence; see Peter Turchin (An Anarchist View of Human Social Evolution & A Feminist Perspective on Human Social Evolution) and Camilla Power (Gender egalitarianism made us human: patriarchy was too little, too late & Gender egalitarianism made us human: A response to David Graeber & David Wengrow’s ‘How to change the course of human history’).

But I don’t see the disagreements as being significant for the purposes here. Here is a basic point that Turchin explains: “The reason we say that foragers were fiercely egalitarian is because they practiced reverse dominance hierarchy” (from the first link directly above). That goes straight to the original argument. Many other primates have social hierarchy, although not all. Some of the difference appears to be cultural: early in their evolution, humans seem to have developed cultural methods of enforcing egalitarianism. This cultural pattern has existed long enough to have fundamentally altered human nature.

According to Graeber and Wengrow, these egalitarian habits weren’t lost easily, even as society became larger and more complex. Modern authoritarian hierarchies represent a late development, a fraction of a percentage of human existence. They are far outside the human norm. In social science experiments, we see how the egalitarian impulse persists. Consider two examples. In one study, children naturally helped those in need, until someone paid them money to do so, shifting their motivation from intrinsic to extrinsic. Another study showed how most people, both children and adults, will choose to punish wrongdoers even at personal cost.

This in-built egalitarianism is an old habit that doesn’t die easily, no matter how it is suppressed or perverted by systems of authoritarian power. It is the psychological basis of a culture of trust that permanent hierarchies take advantage of through manipulation of human nature. The egalitarian impulse gets redirected into undermining egalitarianism. This is why modern societies are so unstable, as compared to the ancient societies that lasted for millennia.

That said, there is nothing wrong with genuine authority, expertise, and leadership — as seen even in the most radically egalitarian societies like the Piraha. Hierarchies are also part of our natural repertoire and are only problematic when they fall out of balance with egalitarianism and so become entrenched. One way or another, human societies cycle between hierarchy and egalitarianism, whether the cycle turns on a regular basis or necessitates collapse. That is the point Walter Scheidel makes in his book, The Great Leveler. High inequality destabilizes society and always brings its own downfall.

We need to relearn that balance, if we hope to avoid mass disaster. Egalitarianism is not a utopian ideal. It’s simply the other side of human nature that gets forgotten.

* * *

Archaeology, anarchy, hierarchy, and the growth of inequality
by Andre Costopoulos

In some ways, I agree with both Graeber and Wengrow, and with Turchin. Models of the growth of social inequality have indeed emphasized a one dimensional march, sometimes inevitable, from virtual equality and autonomy to strong inequality and centralization. I agree with Graeber and Wengrow that this is a mistaken view. Except I think humans have moved from strong inequality, to somewhat managed inequality, to strong inequality again.

The rise and fall of equality

Hierarchy, dominance, power, influence, politics, and violence are hallmarks not only of human social organization, but of that of our primate cousins. They are widespread among mammals. Inequality runs deep in our lineage, and our earliest identifiable human ancestors must have inherited it. But an amazing thing happened among Pleistocene humans. They developed strong social leveling mechanisms, which actively reduced inequality. Some of those mechanisms are still at work in our societies today: Ridicule at the expense of self-aggrandizers, carnival inversion as a reminder of the vulnerability of the powerful, ostracism of the controlling, or just walking away from conflict, for example.

Understanding the growth of equality in Pleistocene human communities is the big untackled project of Paleolithic archaeology, mostly because we assume they started from a state of egalitarianism and either degenerated or progressed from there, depending on your lens. Our broader evolutionary context argues they didn’t.

During the Holocene, under increasing sedentism and dependence on spatially bounded resources such as agricultural fields that represent significant energy investments, these mechanisms gradually failed to dampen the pressures for increasing centralization of power. However, even at the height of the Pleistocene egalitarian adaptation, there were elites if, using Turchin’s figure of the top one or two percent, we consider that the one or two most influential members in a network of a hundred are its elite. All the social leveling in the world could not contain influence. Influence, in the end, if wielded effectively, is power.

Ancient ‘megasites’ may reshape the history of the first cities
by Bruce Bower

No signs of a centralized government, a ruling dynasty, or wealth or social class disparities appear in the ancient settlement, the researchers say. Houses were largely alike in size and design. Excavations yielded few prestige goods, such as copper items and shell ornaments. Many examples of painted pottery and clay figurines typical of Trypillia culture turned up, and more than 6,300 animal bones unearthed at the site suggest residents ate a lot of beef and lamb. Those clues suggest daily life was much the same across Nebelivka’s various neighborhoods and quarters. […]

Though some of these sprawling sites had social inequality, egalitarian cities like Nebelivka were probably more widespread several thousand years ago than has typically been assumed, says archaeologist David Wengrow of University College London. Ancient ceremonial centers in China and Peru, for instance, were cities with sophisticated infrastructures that existed before any hints of bureaucratic control, he argues. Wengrow and anthropologist David Graeber of the London School of Economics and Political Science also made that argument in a 2018 essay in Eurozine, an online cultural magazine.

Councils of social equals governed many of the world’s earliest cities, including Trypillia megasites, Wengrow contends. Egalitarian rule may even have characterized Mesopotamian cities for their first few hundred years, a period that lacks archaeological evidence of royal burials, armies or large bureaucracies typical of early states, he suggests.

How to change the course of human history
by David Graeber and David Wengrow

Overwhelming evidence from archaeology, anthropology, and kindred disciplines is beginning to give us a fairly clear idea of what the last 40,000 years of human history really looked like, and in almost no way does it resemble the conventional narrative. Our species did not, in fact, spend most of its history in tiny bands; agriculture did not mark an irreversible threshold in social evolution; the first cities were often robustly egalitarian. Still, even as researchers have gradually come to a consensus on such questions, they remain strangely reluctant to announce their findings to the public – or even scholars in other disciplines – let alone reflect on the larger political implications. As a result, those writers who are reflecting on the ‘big questions’ of human history – Jared Diamond, Francis Fukuyama, Ian Morris, and others – still take Rousseau’s question (‘what is the origin of social inequality?’) as their starting point, and assume the larger story will begin with some kind of fall from primordial innocence.

Simply framing the question this way means making a series of assumptions, that 1. there is a thing called ‘inequality,’ 2. that it is a problem, and 3. that there was a time it did not exist. Since the financial crash of 2008, of course, and the upheavals that followed, the ‘problem of social inequality’ has been at the centre of political debate. There seems to be a consensus, among the intellectual and political classes, that levels of social inequality have spiralled out of control, and that most of the world’s problems result from this, in one way or another. Pointing this out is seen as a challenge to global power structures, but compare this to the way similar issues might have been discussed a generation earlier. Unlike terms such as ‘capital’ or ‘class power’, the word ‘equality’ is practically designed to lead to half-measures and compromise. One can imagine overthrowing capitalism or breaking the power of the state, but it’s very difficult to imagine eliminating ‘inequality’. In fact, it’s not obvious what doing so would even mean, since people are not all the same and nobody would particularly want them to be.

‘Inequality’ is a way of framing social problems appropriate to technocratic reformers, the kind of people who assume from the outset that any real vision of social transformation has long since been taken off the political table. It allows one to tinker with the numbers, argue about Gini coefficients and thresholds of dysfunction, readjust tax regimes or social welfare mechanisms, even shock the public with figures showing just how bad things have become (‘can you imagine? 0.1% of the world’s population controls over 50% of the wealth!’), all without addressing any of the factors that people actually object to about such ‘unequal’ social arrangements: for instance, that some manage to turn their wealth into power over others; or that other people end up being told their needs are not important, and their lives have no intrinsic worth. The latter, we are supposed to believe, is just the inevitable effect of inequality, and inequality, the inevitable result of living in any large, complex, urban, technologically sophisticated society. That is the real political message conveyed by endless invocations of an imaginary age of innocence, before the invention of inequality: that if we want to get rid of such problems entirely, we’d have to somehow get rid of 99.9% of the Earth’s population and go back to being tiny bands of foragers again. Otherwise, the best we can hope for is to adjust the size of the boot that will be stomping on our faces, forever, or perhaps to wrangle a bit more wiggle room in which some of us can at least temporarily duck out of its way.

Mainstream social science now seems mobilized to reinforce this sense of hopelessness.

Rethinking cities, from the ground up
by David Wengrow

Settlements inhabited by tens of thousands of people make their first appearance in human history around 6,000 years ago. In the earliest examples on each continent, we find the seedbed of our modern cities; but as those examples multiply, and our understanding grows, the possibility of fitting them all into some neat evolutionary scheme diminishes. It is not just that some early cities lack the expected features of class divisions, wealth monopolies, and hierarchies of administration. The emerging picture suggests not just variability, but conscious experimentation in urban form, from the very point of inception. Intriguingly, much of this evidence runs counter to the idea that cities marked a ‘great divide’ between rich and poor, shaped by the interests of governing elites.

In fact, surprisingly few early cities show signs of authoritarian rule. There is no evidence for the existence of monarchy in the first urban centres of the Middle East or South Asia, which date back to the fourth and early third millennia BCE; and even after the inception of kingship in Mesopotamia, written sources tell us that power in cities remained in the hands of self-governing councils and popular assemblies. In other parts of Eurasia we find persuasive evidence for collective strategies, which promoted egalitarian relations in key aspects of urban life, right from the beginning. At Mohenjo-daro, a city of perhaps 40,000 residents, founded on the banks of the Indus around 2600 BCE, material wealth was decoupled from religious and political authority, and much of the population lived in high quality housing. In Ukraine, a thousand years earlier, prehistoric settlements already existed on a similar scale, but with no associated evidence of monumental buildings, central administration, or marked differences of wealth. Instead we find circular arrangements of houses, each with its attached garden, forming neighbourhoods around assembly halls; an urban pattern of life, built and maintained from the bottom-up, which lasted in this form for over eight centuries.⁶

A similar picture of experimentation is emerging from the archaeology of the Americas. In the Valley of Mexico, despite decades of active searching, no evidence for monarchy has been found among the remains of Teotihuacan, which had its magnificent heyday around 400 CE. After an early phase of monumental construction, which raised up the Pyramids of the Sun and Moon, most of the city’s resources were channelled into a prodigious programme of public housing, providing multi-family apartments for its residents. Laid out on a uniform grid, these stone-built villas — with their finely plastered floors and walls, integral drainage facilities, and central courtyards — were available to citizens regardless of wealth, status, or ethnicity. Archaeologists at first considered them to be palaces, until they realised virtually the entire population of the city (all 100,000 of them) were living in such ‘palatial’ conditions.⁷

A millennium later, when Europeans first came to Mesoamerica, they found an urban civilisation of striking diversity. Kingship was ubiquitous in cities, but moderated by the power of urban wards known as calpolli, which took turns to fulfil the obligations of municipal government, distributing the highest offices among a broad sector of the altepetl (or city-state). Some cities veered towards absolutism, but others experimented with collective governance. Tlaxcalan, in the Valley of Puebla, went impressively far in the latter direction. On arrival, Cortés described a commercial arcadia, where the ‘order of government so far observed among the people resembles very much the republics of Venice, Genoa, and Pisa for there is no supreme overlord.’ Archaeology confirms the existence here of an indigenous republic, where the most imposing structures were not palaces or pyramid-temples, but the residences of ordinary citizens, constructed around district plazas to uniformly high standards, and raised up on grand earthen terraces.⁸

Contemporary archaeology shows that the ecology of early cities was also far more diverse, and less centralised than once believed. Small-scale gardening and animal keeping were often central to their economies, as were the resources of rivers and seas, and indeed the ongoing hunting and collecting of wild seasonal foods in forests or in marshes, depending on where in the world we happen to be.⁹ What we are gradually learning about history’s first city-dwellers is that they did not always leave a harsh footprint on the environment, or on each other; and there is a contemporary message here too. When today’s urbanites take to the streets, calling for the establishment of citizens’ assemblies to tackle issues of climate change, they are not going against the grain of history or social evolution, but with its flow. They are asking us to reclaim something of the spark of political creativity that first gave life to cities, in the hope of discerning a sustainable future for the planet we all share.

Farewell to the ‘Childhood of Man’
by Gyrus

[Robert] Lowie made similar arguments to [Pierre] Clastres, about conscious knowledge of hierarchies among hunter-gatherers. However, for reasons related to his concentration on Amazonian Indians, Clastres missed a crucial point in Lowie’s work. Lowie highlighted the fact that among many foragers, such as the Eskimos in the Arctic, egalitarianism and hierarchy exist within the same society at once, cycling from one to another through seasonal social gatherings and dispersals. Based on social responses to seasonal variations in the weather, and patterns in the migration of hunted animals, not to mention the very human urge to sometimes hang out with a lot of people and sometimes to get the hell away from them, foraging societies often create and then dismantle hierarchical arrangements on a year-by-year basis.

There seems to have been some confusion about exactly what the pattern was. Does hierarchy arise during gatherings? This would tally with sociologist Émile Durkheim’s famous idea that ‘the gods’ were a kind of primitive hypothesis personifying the emergent forces that social complexity brought about. People sensed the dynamics changing as they lived more closely in greater numbers, and attributed these new ‘transcendent’ dynamics to organised supernatural forces that bound society together. Religion and cosmology thus function as naive mystifications of social forces. Graeber detailed ethnographic examples where some kind of ‘police force’ arises during tribal gatherings, enforcing the etiquette and social expectations of the event, its members returning to being everyday people when it’s all over.

But sometimes, the gatherings are occasions for the subversion of social order — as is well known in civilised festivals such as the Roman Saturnalia. Thus, the evidence seemed to be confusing, and the idea of seasonal variations in social order was neglected. After the ’60s, the dominant view became that ‘simple’ egalitarian hunter-gatherers were superseded by ‘complex’ hierarchical hunter-gatherers as a prelude to farming and civilisation.

Graeber and Wengrow argue that the evidence isn’t confusing: it’s simply that hunter-gatherers are far more politically sophisticated and experimental than we’ve realised. Many different variations, and variations on variations, have been tried over the vast spans of time that hunter-gatherers have existed (over 200,000 years, compared to the 12,000 or so years we know agriculture has been around). Clastres was right: people were never naive, and resistance to the formation of hierarchies is a significant part of our heritage. However, seasonal variations in social structures mean that hierarchies may never have been a ghostly object of resistance. They have probably been at least a temporary factor throughout our long history.¹ Sometimes they functioned, in this temporary guise, to facilitate socially positive events — though experience of their oppressive possibilities usually encouraged societies to keep them in check, and prevent them from becoming fixed.

How does this analysis change our sense of the human story? In its simplest form, it moves the debate from ‘how and when did hierarchy arise?’ to ‘how and when did we get stuck in the hierarchical mode?’. But this is merely the first stage in what Graeber and Wengrow promise is a larger project, which will include analysis of the persistence of egalitarianism among early civilisations, usually considered to be ‘after the fall’ into hierarchy.

 

Alienation and Soul Blindness

There is the view that consciousness* is a superficial overlay, that the animistic-bicameral mind is our fundamental nature and continues to operate within consciousness. In not recognizing this, we’ve become alienated from ourselves and from the world we are inseparable from. We don’t recognize that the egoic voice is but one of many voices, and so we’ve lost appreciation for what it means to hear voices, including the internalized egoic voice that we’ve become identified with in submission to its demiurgic authorization. This could be referred to as soul blindness, maybe related to soul loss — basically, a lack of psychological integration and coherency. Is this an inevitability within consciousness? Maybe not. What if a deeper appreciation of voice-hearing was developed within consciousness? What would emerge from consciousness coming to terms with its animistic-bicameral foundation? Would it still be consciousness or something else entirely?

* This is in reference to Julian Jaynes’ use of ‘consciousness’, which refers to the ego mind with its introspective and internal space built upon metaphor and narratization. Such consciousness, as the social construction of a particular kind of culture, is not mere perceptual awareness or biological reactivity.

* * *

Is It as Impossible to Build Jerusalem as It is to Escape Babylon? (Part Two)
by Peter Harrison

Marx identified the concept of alienation as being a separation, or estrangement, from one’s labour. And for Marx the consistent ability to labour, to work purposefully and consciously, as opposed to instinctively, towards a pre-imagined goal, was the trait that distinguished humans from other animals. This means also that humans are able to be persuaded to work creatively, with vigour and passion, for the goals of others, or for some higher goal than the maintenance of daily survival, as long as they are able to see some tiny benefit for themselves, which might be service to a higher cause, or even just simple survival, since working for the goal of others may be the only means of obtaining food. So, Marx’s definition of alienation was more specific than an ‘existential’ definition because it specified labour as the defining human characteristic. But he was also aware that the general conditions of capitalism made this alienation more acute and that this escalated estrangement of humans from immediately meaningful daily activity led to a sense of being a stranger in one’s own world, and not only for the working class. This estrangement (I want to write étranger-ment, to reference Camus, but this is not a word) afflicted all classes, even those classes that seemed to benefit from class society, since capitalism had, even by his own time, gained an autonomy of its own. Life is as meaningless [or better: as anti-human] for a cleaner as it is for the head of a large corporation. This is why Marx stated that all people under capitalism were proletarian.

When I discovered the idea of soul blindness in Eduardo Kohn’s book, How Forests Think, I was struck by it as another useful way of understanding the idea of alienation. The concept of soul blindness, as used by the Runa people described by Kohn, seems to me to be related to the widespread Indigenous view of the recently deceased as aimless and dangerous beings who must be treated with great care and respect after their passing to prevent them wreaking havoc on the living. In Kohn’s interpretation, to be soul blind is to have reached the ‘terminus of selfhood,’ and this terminus can be reached while still alive, when one loses one’s sense of self through illness or despair, or even when one just drifts off into an unfocussed daze, or, more profoundly, sinks into an indifference similar to — to reference Camus again — that described by the character Meursault, in L’Etranger.

There are some accounts of Indigenous people first encountering white people in which the white people are initially seen as ghosts; one is recorded by Lévi-Strauss for Vanuatu. Another is embedded in the popular Aboriginal history of the area I live in. On first contact the white people are immediately considered to be some kind of ghost because of their white skin. This may have something to do with the practice of preserving the bodies of the dead, which involves scraping off the top layer of skin and, apparently, makes the body white. This practice is described by the anthropologist Atholl Chase in his reminiscences of Cape York. But for me there is more to the defining of the white intruders as ghosts than their white skin. These foreigners also act as if they are soul blind. They are like machines, working for a cause that is external to them. For the Indigenous people these strangers do not seem to have soul: they are unpredictable; dangerous; they don’t know who they are.

But it is the anthropologist Eduardo Viveiros de Castro who, I think, connects most clearly to the work of James Hillman on the notion of the soul. Hillman uses the term soul, but he does not mean a Christian soul, nor ultimately does he mean the mind. For him the soul is a form of mediation between events and the subject and, in this sense, it might be similar to Bourdieu’s conception of ‘disposition.’ For Viveiros de Castro, ‘A perspective is not a representation because representations are a property of the mind or spirit, whereas the point of view is located in the body.’ Thus, Amerindian philosophy, which Viveiros de Castro is here describing, perhaps prefigures Hillman’s notion that ‘soul’ is ‘a perspective rather than a substance, a viewpoint towards things rather than a thing itself.’

Islamic Voice-Hearing

Islam, what kind of religion is it? The worship of a missing god, that is how we earlier described it. Some might consider that unfair and dismissive toward one of the world’s largest religions, but it is true to some extent of all post-bicameral religions. The difference is that Islam is among the most post-bicameral of the world religions, if only in simple temporal terms.

The bicameral societies, according to Julian Jaynes, ended with the widespread collapse of the late Bronze Age empires and their trade networks. That happened around 1177 BCE, as the result of natural disasters and attacks by the mysterious Sea People, the latter maybe having formed out of the refugees from the former. The Bronze Age continued for many centuries in various places: until 700 BCE in Great Britain, Central Europe and China; 600 BCE in Northern Europe; 500 BCE in Korea and Ireland; and centuries beyond that in places like Japan.

But the Bronze Age Empires never returned. In that late lingering Bronze Age, a dark age took hold and put all of civilization onto a new footing. This was the era when, across numerous cultures, there were the endless laments about the gods, spirits, and ancestors having gone silent, having abandoned humanity. Entire cultural worldviews and psychological ways of being were utterly demolished or else irreparably diminished. This created an intense sense of loss, longing, and nostalgia that has never left humanity since.

Out of the ashes, while the Bronze Age was still holding on, the Axial Age arose around 900 BCE and continued until 200 BCE. New cultures were formed and new empires built. The result is what Jaynes described as ‘consciousness’ or what one can think of as introspective mental space, an inner world of egoic identity where the individual is separate from community and world. Consciousness and the formalized religions that accompanied it were a replacement for the loss of a world alive with voices.

By the time Rabbinic Judaism, Gnosticism, and Christianity came around, the Axial Age was already being looked back upon as a Golden Age and, other than through a few surviving myths, the Bronze Age before that was barely remembered at all. It would be nearly another 600 years after that first-century monotheistic revival before Muhammad would have his visions of the angel Gabriel visiting him to speak on behalf of God. Islam is thus both post-bicameral and post-axial, to a far greater degree than the religions that preceded it.

Muslims consider Muhammad to be the last prophet, and even he didn’t get to hear God directly, for the message had to come through an angel. The voice of God had long ago grown so faint that people had come to rely on oracles, channelings, and such. These rather late revelations by way of Gabriel were but a barely audible echo of the archaic bicameral voices. It is perhaps understandable that, as with some oracles before him, Muhammad would declare God would never speak again. So, Islam, unlike the other monotheistic religions, fully embraces God’s absence from the world.

Actually, that is not quite right. Based on the Koran, God will never speak again until the Final Judgment. Then all will hear God again when he weighs your sins and decides the fate of your immortal soul. Here is the interesting part. The witnesses God shall call upon in each person’s case will be all the bicameral voices brought back out of silence. The animals and plants will witness for or against you, as will the earth and rocks and wind. Even your own resurrected body parts will come alive again with voices to speak of what you did. Body parts speaking is something familiar to those who read Jaynesian scholarship.

Until then, God and all the voices of the world will remain mute witnesses, watching your every move and taking notes. They see all, hear all, notice all — every time you masturbate or pick your nose, every time you have a cruel or impure thought, every time you don’t follow one of the large number of divine commandments, laws, and rules spelled out in the Koran. The entire world is spying upon you and will report back to God, at the end of time. The silent world only appears to be dumb and unconscious. God is biding his time, gathering a file on you like a cosmic FBI.

This could feel paralyzing, but in another way it offers total freedom from self, total freedom through complete submission. Jaynesian consciousness is a heavy load and that was becoming increasingly apparent over time, especially in the centuries following the Axial Age. The zealous idealism of the Axial Age prophets was growing dull and tiresome. By the time that Muhammad showed up, almost two millennia had passed since the bicameral mind descended into darkness. The new consciousness was sold as something amazing, but it hadn’t fully lived up to its promises. Instead, ever more brutal regimes came into power and a sense of anxiety was overtaking society.

Muhammad had an answer, and the people of that region were obviously hungry for someone to provide one. Once he had formed his large army, his military campaign barely experienced any resistance. And within his own lifetime, most of the Arabian peninsula was converted to Islam. The silence of the gods had weakened society, but Muhammad offered an explanation for why the divine could no longer be experienced. He helped normalize what had once felt like a tragedy. He told them that they didn’t need to hear God because God had already revealed all knowledge to the prophets, including himself of course. No one had to worry, just follow orders and comply with commands.

All the tiresome complications of thought were unnecessary. God had already thought out everything for humans. The Koran as the final and complete holy text would entirely and permanently replace the bicameral voices, ever receding into the shadows of the psyche. But don’t worry, all those voices are still there, waiting to speak. The only voice the individual needed to listen to was that of the person directly above them in the religious hierarchy, be it one’s father or an imam or whoever else held greater official authority, in a line of command that goes back to the prophets and through the angels to God Himself. Everything is in the Koran, and the learned priestly class would explain it all and translate it into proper theocratic governance.

Muhammad came with a different message than anyone before him. The Jewish prophets and Jesus, as with many Pagans, would speak of God as Father and humanity as His children. Early Christians took this as a challenge to a slave-based society, borrowing from the Stoics the idea that even a slave was free in his soul. Muhammad, instead, was offering another variety of freedom. We humans, rather than children of God, are slaves of God. The entire Islamic religion is predicated upon divine slavery, absolute submission. This is freedom from the harsh taskmaster of egoic individuality, a wannabe demiurge. Unlike Jesus, Muhammad formulated a totalitarian theocracy, a totalizing system. Nothing is left to question or interpretation, at least in theory or, rather, in belief.

This goes back to how, with the loss of the bicameral mind and social order, something took its place. It was a different kind of authoritarianism — rigid and hierarchical, centralized and concentrated, despotic and violent. Authoritarianism of this variety didn’t emerge until the late Bronze Age when the bicameral societies were becoming too large and complex, overstrained and unstable. Suddenly, as if to presage the coming collapse, there was the appearance of written laws, harsh punishment, and cruel torture — none of which ever existed before, according to historical records and archaeological finds. As the world shifted into post-bicameralism, this authoritarianism became ever more extreme (e.g., Roman Empire).

This was always the other side of the rise of individuality, of Jaynesian consciousness. The greater the potential freedom the individual possesses, the more oppressive social control is required, as the communal bonds and social norms of the bicameral mind increasingly lost their hold as the organic means of maintaining order. Muhammad must have shown up at the precise moment of crisis in this change. After the Roman Empire’s system of slavery, Europe came up with feudalism to re-create some of what had disappeared. But apparently a different kind of solution was required in the Arab world.

Maybe this offsets the draining of psychic energy that comes with consciousness. Jaynes speculated that, like the schizophrenic, bicameral humans had immense energy and stamina, which allowed them to accomplish near-miraculous feats such as building the pyramids with small populations and very little technology or infrastructure. Suppression of the extremes of individualism through emphasizing absolute subordination is maybe a way of keeping in check the energy loss of maintaining egoic consciousness. In the West, we eventually overcame this weakness by using massive doses of stimulants to overpower the otherwise debilitating anxiety and to help shore up the egoic boundaries, but this has come at the cost of destroying our physical and mental health.

Time will tell which strategy is the most effective for long-term survival of specific societies. But I’m not sure I’d bet on the Western system, considering how unsustainable it appears to be and how easily it has become crippled by a minor disease epidemic like covid-19. Muhammad might simply have been trying to cobble together some semblance of a bicameral mind, in the face of divine silence. There is a good reason for trying to do that. Those bicameral societies lasted many millennia longer than has our post-bicameral civilization. It’s not clear that modern civilization, or at least Western civilization, will last beyond the end of this century. We underestimate the bicameral mind and the role it played during the single longest period of advancement in civilization.

* * *

Let us leave a small note of a more personal nature. In the previous post (linked above), we mentioned that our line of inquiry began with a conversation we had with a friend of ours who is a Muslim. He also happens to be schizophrenic, i.e., a voice-hearer. The last post was about how voice-hearing is understood within Islam. Since supposedly God no longer speaks to humans, nor do his angelic intermediaries, any voice a Muslim hears is automatically interpreted as not being of divine origins. That doesn’t necessarily make the voice evil, as it could be a jinn, which is a neutral entity in Islamic theology, although jinn can be dangerous. Then again, voice-hearing might also be caused by an evil magician, what I think is called a sihir.

Anyway, we had the opportunity to speak to this friend once again, as we are both in jobs that require us to continue working downtown while everything otherwise is locked down because of the covid-19 epidemic. In being isolated from family and other friends, we’ve been meeting with this Muslim friend on a daily basis. Just this morning, we went for a long walk together and chatted about life and religion. He had previously talked about his schizophrenia in passing, apparently unworried by the stigma of it. He is an easy person to talk to, quite direct and open about his thoughts and experiences. I asked him about voice-hearing and he explained that, prior to being medicated, he would continue to hear people speak to him after they were no longer present. And unsurprisingly, the voices were often negative.

Both his imam and his therapist told him to ignore the voices. Maybe that is a standard approach in traditionally monotheistic cultures. As we mentioned in the other post, he is from North Africa where Arabs are common. But another friend of ours lives in Ghana, in West Africa. The voice-hearing experiences of people in Ghana were compared with those of people in the United States in the research of Tanya M. Luhrmann, an anthropologist inspired by Julian Jaynes. She found that Ghanaians, with a tradition of voice-hearing (closer to bicameralism?), had a much more positive experience of the voices they heard. Americans, like our Islamic friend, did not tend to hear voices that were kind and helpful. This is probably the expectancy effect.

If you are raised to believe that voices are demonic or their Islamic equivalent of jinn, or that they come from witches and evil magicians, or if you simply have been told that voice-hearing means you’re insane, well, it’s not likely to lead to happy results when you do hear voices. I doubt it decreases the rate of voice-hearing, though. In spite of Islamic theology denying that God and angels speak to humans any longer, that isn’t likely to have any effect on voice-hearing itself. So, the repressed bicameral mind keeps throwing out these odd experiences, but in our post-bicameral age we have fewer resources for dealing constructively with those voices. Simply denying and ignoring them probably is less helpful.

That is the ultimate snag. The same voices that once were identified as godly or something similar are now taken as false, unreal, or dangerous. In a sense, God never stopped speaking. One could argue that we all are voice-hearers, but some of us now call the voice of God ‘conscience’ or whatever. Others, like Muslims, put great emphasis on this voice-hearing but have tried to gag a God who goes on talking. Imagine how many potential new prophets have been locked away in psychiatric wards or, much worse, killed or imprisoned as heretics. If God can’t be silenced, the prophets who hear him can be. The Old Testament even describes how the authorities forbade voice-hearing and demanded that voice-hearers be killed, even by their own parents.

The bicameral mind didn’t disappear naturally because it was inferior but because, in its potency, it was deemed dangerous to those who wanted to use brute power to enforce their own voices of authorization. The bicameral mind, once central to the social order, had become enemy number one. If people could talk to God directly, religion and its claims of authority would become irrelevant. That is how our Islamic friend, a devout religious practitioner, ended up being drugged up to get the voices to stop speaking.

Islam as Worship of a Missing God

A friend of ours is a Muslim and grew up in an Islamic country. As he talked about his religion, we realized how different it is from Christianity. There is no shared practice among Christians similar to praying five times a day. From early on, Christianity was filled with diverse groups and disagreements, and that has only increased over time (there are over 4,600 denominations of Christianity in the United States alone). My friend had a hard time appreciating that there is no agreed-upon authority, interpretation, or set of beliefs among all Christians.

Unlike Muhammad, Jesus never wrote anything, nor was anything written down about him until much later. Nor did he intend to start a new religion. He offered no rules, social norms, or instructions for how to organize a church, a religious society, or a government. He didn’t even preach family values, if anything the opposite — from a command to let the dead bury their dead to the proclamation of having come to turn family members against each other. The Gospels offer no practical advice about anything. Much of Jesus’ teachings, beyond a general message of love and compassion, are vague and enigmatic, often parables that have many possible meanings.

Now compare Jesus to the Islamic prophet. Muhammad is considered the last prophet, although he never claimed to have heard the voice of God directly, instead supposedly receiving the message secondhand through an angel. Still, according to Muslims, the Koran is the only complete holy text in existence — the final Word of God. That is also something that differs from Christianity. Jesus never asserted that God would become silent to all of humanity for eternity and that his worshippers would be condemned to a world without the God they longed for, in the way Allah never enters His own Creation.

Many Protestants and Anabaptists and those in similar groups believe that God continues to be revealed to people today, that the divine is known through direct experience, that the Bible as a holy text must be read as a personal relationship to God, not merely taken on the authority of blind faith. Some churches go so far as to teach people how to speak to and hear God (T.M. Luhrmann, When God Talks Back). Even within Catholicism, there have been further revelations of God since Jesus, from various mystics and saints acknowledged by the Vatican, but also from ordinary Catholics claiming God spoke to them without any great fear of heresy charges and excommunication.

It made me think about Julian Jaynes’ theory of modern consciousness. With the collapse of the Bronze Age civilizations, there was this sense of the gods having gone silent. Yet this was never an absolute experience, as some people continued to hear the gods. Even into the modern world, occasionally people still claim to hear various gods and sometimes even found new religions based on revelations. The Bahai, for example, consider Muhammad to be just one more prophet, with others having followed him. Hindus also have a living tradition of divine revelation that is equivalent to that of prophets. Only Islam, as far as I know, claims all prophecy and revelation to be ended for all time.

I was thinking about the sense of loss and loneliness people felt when bicameral societies came to an end. They were thrown into an increasingly isolated individualism. Religion as we know it was designed to accommodate this, in order to give a sense of order, meaning, and authority that had gone missing. But Islam takes this to an extreme. After Muhammad, no human supposedly would ever again personally hear, see, or experience the divine in any way (excluding mystical traditions like Sufism). For all intents and purposes, Allah has entirely receded from the world. The only sign of his existence that he left behind was a book of instructions. We must submit and comply or be punished in the afterlife, a world separate from this one.

That seems so utterly depressing and dreary to me. I was raised Christian, at the far other extreme of Protestantism. My family attended the Unity Church, which emphasizes direct experience of God to such a degree that the Bible itself was mostly ignored and almost irrelevant — why turn to mere words on paper when you can go straight to the source? Rather than being denied and condemned, a claim to have heard God speak would have been taken seriously. I’m no longer religious, but the nearly deist idea of a god that is distant and silent seems so alien and unappealing to me. Yet maybe that makes Islam well designed for the modern world, as it offers a strong response to atheism.

If you don’t have any experience of God, this is considered normal and expected in Islam, not something to be worried about, not something to challenge one’s faith as is common in Christianity (NDE: Spirituality vs Religiosity); and it avoids the riskiness and confusion of voice-hearing (Libby Anne, Voices in Your Head: Evangelicals and the Voice of God). One’s ignorance of the divine demonstrates one’s individual inadequacy and, as argued by religious authority, is all the more reason to submit to religious authority. The Islamic relation between God and humanity is one-way, except to some extent by way of inspiration and dreams; Allah himself never directly enters his Creation and so never directly interacts with humans, not even with prophets. Is that why constant prayer is necessary for Muslims, to offset God’s silence and vacancy? Worship of a missing God seems perfectly suited for the modern world.

Muslims are left with looking for traces of God in the Koran like ants crawling around in a footprint while trying to comprehend what made it and what it wants them to do. So, some of the ants claim to be part of a direct lineage of ants that goes back to an original ant that, according to tradition, was stepped upon by what passed by. These well-respected ants then explain to all the other ants what is meant by all the bumps and grooves in the dried mud. In worship, the ants pray toward the footprint and regularly gather to circle around it. This gives their life some sense of meaning and purpose and, besides, it maintains the social order.

That is what is needed in a world where the bicameral voices of archaic authorization no longer speak, no longer are heard. Something has to fill the silence, as the loneliness it creates is unbearable. Islam has a nifty trick, embracing the emptiness and further aggravating the overwhelming anxiety even as it offers the salve for the soul. Muslims take the silence of God as proof of God, as a promise of something more. This otherworldly being, Allah, tells humans who don’t feel at home in this world that their real home is elsewhere, to which they will return if they do what they are told. Other religions do something similar, but Islam takes this to another level — arguably, the highest or most extreme form of monotheism, so far. The loss of the bicameral mind could not be pushed much further, one suspects, without being pushed into an abyss.

Islam is a truly modern religion. Right up there with capitalism and scientism.

* * *

Further discussion about this can be found on the Facebook page “Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind”.

 

To Empathize is to Understand

What is empathy as a cognitive ability? And what is empathy as an expansion of identity, as part of awareness of self and other?

There is a basic level of empathy that appears to be common across numerous species. Tortoises, when seeing another on its back, will help flip it over. There are examples of animals helping or cooperating with those from an entirely different species. Such behavior has been repeatedly demonstrated in laboratories as well. These involve fairly advanced expressions of empathy. In some cases, one might interpret it as indicating at least rudimentary theory of mind, the understanding that others have their own experience, perspective, and motivations. But obviously human theory of mind can be much more complex.

One explanation of greater empathy has to do with identity. Empathy, in a way, is simply a matter of what is included within one’s personal experience (Do To Yourself As You Would Do For Others). To extend identity is to extend empathy to another individual or a group (or anything else that can be brought within the sphere of the self). For humans, this can mean learning to include one’s future self, to empathize with experience one has not yet had, the person one has not yet become. The future self is fundamentally no different from another person.

Without cognitive empathy, affective empathy is limited to immediate experience. It’s the ability to feel what another feels. But lacking cognitive empathy, as happens in the most severe autism, theory of mind cannot be developed and so there is no way to identify, locate, and understand that feeling. One can only emotionally react, not being able to differentiate one’s own emotion from that of another. In that case, there would be pure emotion, and yet no recognition of the other. Cognitive empathy is necessary to get beyond affective reactivity, which is not all that different from the biological reactivity of a slug.

It’s interesting that some species (primates, rats, dolphins, etc.) might have more cognitive empathy and theory of mind than some people at the extreme end of severe autism; it is not necessarily an issue of intelligence. On the other hand, those who are high functioning on the autistic spectrum, if intervention happens early enough, can be taught theory of mind, although it is challenging for them. This kind of empathy is considered a hallmark of humanity, a defining feature, which is why its impairment leads to problems of social behavior for those with autism spectrum disorder.

Someone entirely lacking in theory of mind would be extremely difficult to communicate and interact with beyond the most basic level, as is seen in the severest cases of autism and other extreme developmental conditions. Helen Keller asserts she had no conscious identity, no theory of her own mind or that of others, until she learned language.* Prior to her awakening, she was aggressive and violent in reacting to a world she couldn’t understand, articulate, or think about. That fits in with the speculations of Julian Jaynes. What he calls ‘consciousness’ is the addition of abstract thought by way of metaphorical language, as built upon concrete experience and raw affect. Keller discusses how her experience went from the concreteness of touch to the abstraction of language. In becoming aware of the world, she became aware of herself.

Without normal development of language, the human mind is crippled: “The ‘black silence’ of the deaf, blind and mute is similar in many respects to the situation of acutely autistic children where there are associated difficulties with language and the children seem to lack what has been called ‘a theory of mind’” (Robin Allott, Helen Keller: Language and Consciousness). Even so, there is more to empathy than language, and that might be true as well for some aspects or kinds of cognitive empathy. Language is not the only form of communication.

Rats are a great example for comparison with humans. We think of them as pests, as psychologically inferior. But anyone who has kept rats knows how intelligent and social they are. They are friendlier and more interactive than the typical cat. And research has shown how cognitively advanced they are in learning. Rats do have the typical empathy of concern for others. For example, they won’t hurt another rat in exchange for a reward and, given the choice, they would rather go hungry. But it goes beyond that.

It’s also shown that “rats are more likely and quicker to help a drowning rat when they themselves have experienced being drenched, suggesting that they understand how the drowning rat feels” (Kristin Andrews, Rats are us). And “rats who had been shocked themselves were less likely to allow other rats to be shocked, having been through the discomfort themselves.” They can also learn to play hide-and-seek, which necessitates taking on the perspective of others. As Ed Yong asks in The Game That Made Rats Jump for Joy, “In switching roles, for example, are they taking on the perspective of their human partners, showing what researchers call ‘theory of mind’?”

That is much more than mere affective empathy. This seems to involve active sympathy and genuine emotional understanding, that is to say cognitive empathy and theory of mind. If they are capable of both affective and cognitive empathy, however limited, and if Jaynesian consciousness partly consists of empathy imaginatively extended in space and time, then a case could be made that rats have more going on than simple perceptual awareness and biological reactivity. They are empathically and imaginatively engaging with others in the world around them. Does this mean they are creating and maintaining a mental model of others? Kristin Andrews details the extensive abilities of rats:

“We now know that rats don’t live merely in the present, but are capable of reliving memories of past experiences and mentally planning ahead the navigation route they will later follow. They reciprocally trade different kinds of goods with each other – and understand not only when they owe a favour to another rat, but also that the favour can be paid back in a different currency. When they make a wrong choice, they display something that appears very close to regret. Despite having brains that are much simpler than humans’, there are some learning tasks in which they’ll likely outperform you. Rats can be taught cognitively demanding skills, such as driving a vehicle to reach a desired goal, playing hide-and-seek with a human, and using the appropriate tool to access out-of-reach food.”

To imagine the future for purposes of thinking in advance and planning actions, that is quite advanced cognitive behavior. Julian Jaynes argued that was the purpose of humans developing a new kind of consciousness, as the imagined metaphorical space that is narratized allows for the consideration of alternatives, something he speculates was lacking in humans prior to the Axial Age when behavior supposedly was more formulaic and predetermined according to norms, idioms, etc. Yet rats can navigate a path they’ve never taken before with novel beginning and ending locations, which would require taking into account multiple options. What theoretically makes Jaynesian consciousness unique?

Jaynes argues that it’s the metaphorical inner space that is the special quality that created the conditions for the Axial Age and all that followed from it, the flourishing of complex innovations and inventions, the ever greater extremes of abstraction seen in philosophy, math and science. We have so strongly developed this post-bicameral mind that we barely can imagine anything else. But we know that other societies have very different kinds of mentalities, such as the extended and fluid minds of animistic cultures. What exactly is the difference?

Australian Aborigines give a hint of something between the two kinds of mind. In some ways, their mnemonic systems represent a more complex cognitive ability than we are capable of with our Jaynesian consciousness. Instead of an imagined inner space, the Songlines are vast systems of experience and knowledge, culture and identity overlaid upon immense landscapes. These mappings of externalized cognitive space can be used to guide the individual across distant territories the individual has never seen before and help them to identify and use the materials (plants, stones, etc.) at a location no one in their tribe has visited for generations. Does this externalized mind have less potential for advanced abilities? Upon Western contact, Aborigines had farming and ranching, kept crop surpluses in granaries, and practiced water and land management.

It’s not hard to imagine civilization having developed along entirely different lines based on divergent mentalities and worldviews. Our modern egoic consciousness was not an inevitability and it likely is far from offering the most optimal functioning. We might already be hitting a dead end with our present interiorized mind-space. Maybe it’s our lack of empathy in understanding the minds of other humans and other species that is an in-built limitation to the post-bicameral world of Jaynesian consciousness. And so maybe we have much to learn from entirely other perspectives and experiences, even from rats.

* * *

* Helen Keller, from Light in My Darkness:

I had no concepts whatever of nature or mind or death or God. I literally thought with my body. Without a single exception my memories of that time are tactile. . . . But there is not one spark of emotion or rational thought in these distinct yet corporeal memories. I was like an unconscious clod of earth. There was nothing in me except the instinct to eat and drink and sleep. My days were a blank without past, present, or future, without hope or anticipation, without interest or joy. Then suddenly, I knew not how or where or when, my brain felt the impact of another mind, and I awoke to language, to knowledge, to love, to the usual concepts of nature, good, and evil. I was actually lifted from nothingness to human life.

And from The Story of My Life:

As the cool stream gushed over one hand she spelled into the other the word water, first slowly, then rapidly. I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten — a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that ‘w-a-t-e-r’ meant the wonderful cool something that was flowing over my hand. That living word awakened my soul, gave it light, hope, joy, set it free! There were barriers still, it is true, but barriers that could in time be swept away.

And from The World I Live In:

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith. […]

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me. Thus it was not the sense of touch that brought me knowledge. It was the awakening of my soul that first rendered my senses their value, their cognizance of objects, names, qualities, and properties. Thought made me conscious of love, joy, and all the emotions. I was eager to know, then to understand, afterward to reflect on what I knew and understood, and the blind impetus, which had before driven me hither and thither at the dictates of my sensations, vanished forever.

I cannot represent more clearly than any one else the gradual and subtle changes from first impressions to abstract ideas. But I know that my physical ideas, that is, ideas derived from material objects, appear to me first an idea similar to those of touch. Instantly they pass into intellectual meanings. Afterward the meaning finds expression in what is called “inner speech.”  […]

As my experiences broadened and deepened, the indeterminate, poetic feelings of childhood began to fix themselves in definite thoughts. Nature—the world I could touch—was folded and filled with myself. I am inclined to believe those philosophers who declare that we know nothing but our own feelings and ideas. With a little ingenious reasoning one may see in the material world simply a mirror, an image of permanent mental sensations. In either sphere self-knowledge is the condition and the limit of our consciousness. That is why, perhaps, many people know so little about what is beyond their short range of experience. They look within themselves—and find nothing! Therefore they conclude that there is nothing outside themselves, either.

However that may be, I came later to look for an image of my emotions and sensations in others. I had to learn the outward signs of inward feelings. The start of fear, the suppressed, controlled tensity of pain, the beat of happy muscles in others, had to be perceived and compared with my own experiences before I could trace them back to the intangible soul of another. Groping, uncertain, I at last found my identity, and after seeing my thoughts and feelings repeated in others, I gradually constructed my world of men and of God. As I read and study, I find that this is what the rest of the race has done. Man looks within himself and in time finds the measure and the meaning of the universe.

* * *

As an example of how language relates to emotions:

The ‘untranslatable’ emotions you never knew you had
by David Robson

But studying these terms will not just be of scientific interest; Lomas suspects that familiarising ourselves with the words might actually change the way we feel ourselves, by drawing our attention to fleeting sensations we had long ignored.

“In our stream of consciousness – that wash of different sensations, feelings and emotions – there’s so much to process that a lot passes us by,” Lomas says. “The feelings we have learned to recognise and label are the ones we notice – but there’s a lot more that we may not be aware of. And so I think if we are given these new words, they can help us articulate whole areas of experience we’ve only dimly noticed.”

As evidence, Lomas points to the work of Lisa Feldman Barrett at Northeastern University, who has shown that our abilities to identify and label our emotions can have far-reaching effects.

Her research was inspired by the observation that certain people use different emotion words interchangeably, while others are highly precise in their descriptions. “Some people use words like anxious, afraid, angry, disgusted to refer to a general affective state of feeling bad,” she explains. “For them, they are synonyms, whereas for other people they are distinctive feelings with distinctive actions associated with them.”

This is called “emotion granularity” and she usually measures this by asking the participants to rate their feelings on each day over the period of a few weeks, before she calculates the variation and nuances within their reports: whether the same old terms always coincide, for instance.

Importantly, she has found that this then determines how well we cope with life. If you are better able to pin down whether you are feeling despair or anxiety, for instance, you might be better able to decide how to remedy those feelings: whether to talk to a friend, or watch a funny film. Or being able to identify your hope in the face of disappointment might help you to look for new solutions to your problem.

In this way, emotion vocabulary is a bit like a directory, allowing you to call up a greater number of strategies to cope with life. Sure enough, people who score highly on emotion granularity are better able to recover more quickly from stress and are less likely to drink alcohol as a way of recovering from bad news. It can even improve your academic success. Marc Brackett at Yale University has found that teaching 10 and 11-year-old children a richer emotional vocabulary improved their end-of-year grades, and promoted better behaviour in the classroom. “The more granular our experience of emotion is, the more capable we are to make sense of our inner lives,” he says.

Both Brackett and Barrett agree that Lomas’s “positive lexicography” could be a good prompt to start identifying the subtler contours of our emotional landscape. “I think it is useful – you can think of the words and the concepts they are associated with as tools for living,” says Barrett. They might even inspire us to try new experiences, or appreciate old ones in a new light.
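To make the idea of measuring granularity a little more concrete, here is a minimal sketch in Python. The data and the index are hypothetical illustrations of my own, not Barrett’s published procedure (which, as I understand it, relies on intraclass correlations across same-valence terms); the sketch simply assumes daily ratings of four negative-emotion words and uses an average pairwise correlation instead. The logic is the same either way: if someone uses ‘anxious’ and ‘angry’ interchangeably, their ratings rise and fall together, the correlation between the terms is high, and granularity is low.

import numpy as np

# Hypothetical daily self-reports: rows = days, columns = same-valence
# emotion terms (e.g., anxious, afraid, angry, disgusted), each rated 0-4.
ratings = np.array([
    [3, 3, 2, 3],
    [1, 2, 1, 1],
    [4, 3, 4, 4],
    [0, 1, 0, 0],
    [2, 2, 3, 2],
], dtype=float)

# Correlate each emotion term with every other term across days.
corr = np.corrcoef(ratings.T)                       # 4 x 4 term-by-term matrix
pairs = corr[np.triu_indices(corr.shape[0], k=1)]   # unique off-diagonal pairs
avg_corr = pairs.mean()

# High average correlation = terms used interchangeably = low granularity,
# so one simple (hypothetical) index is the complement of that average.
granularity = 1.0 - avg_corr
print(f"average inter-term correlation: {avg_corr:.2f}")
print(f"granularity index: {granularity:.2f}")

Run on the sample data above, the terms track each other closely, so the index comes out low, which is the pattern Barrett associates with people who use emotion words as rough synonyms for feeling bad.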

* * *

And related to all of this is hypocognition, overlapping with linguistic relativity — in how language and concepts determine our experience, identity, and sense of reality — constraining and framing and predetermining what we are even capable of perceiving, thinking about, and expressing:

Hypocognition is a censorship tool that mutes what we can feel
by Kaidi Wu

It is a strange feeling, stumbling upon an experience that we wish we had the apt words to describe, a precise language to capture. When we don’t, we are in a state of hypocognition, which means we lack the linguistic or cognitive representation of a concept to describe ideas or interpret experiences. The term was introduced to behavioural science by the American anthropologist Robert Levy, who in 1973 documented a peculiar observation: Tahitians expressed no grief when they suffered the loss of a loved one. They fell sick. They sensed strangeness. Yet, they could not articulate grief, because they had no concept of grief in the first place. Tahitians, in their reckoning of love and loss, and their wrestling with death and darkness, suffered not from grief but a hypocognition of grief. […]

But the darkest form of hypocognition is one born out of motivated, purposeful intentions. A frequently overlooked part of Levy’s treatise on Tahitians is why they suffered from a hypocognition of grief. As it turns out, Tahitians did have a private inkling of grief. However, the community deliberately kept the public knowledge of the emotion hypocognitive to suppress its expression. Hypocognition was used as a form of social control, a wily tactic to expressly dispel unwanted concepts by never elaborating on them. After all, how can you feel something that doesn’t exist in the first place?

Intentional hypocognition can serve as a powerful means of information control. In 2010, the Chinese rebel writer Han Han told CNN that any of his writings containing the words ‘government’ or ‘communist’ would be censored by the Chinese internet police. Ironically, these censorship efforts also muffled an abundance of praise from pro-leadership blogs. An effusive commendation such as ‘Long live the government!’ would be censored too, for the mere mention of ‘government’.

A closer look reveals the furtive workings of hypocognition. Rather than rebuking negative remarks and rewarding praises, the government blocks access to any related discussion altogether, rendering any conceptual understanding of politically sensitive information impoverished in the public consciousness. ‘They don’t want people discussing events. They simply pretend nothing happened… That’s their goal,’ Han Han said. Regulating what is said is more difficult than ensuring nothing is said. The peril of silence is not a suffocation of ideas. It is to engender a state of blithe apathy in which no idea is formed.

Do To Yourself As You Would Do For Others

“…our impulse control is less based on an order from our executive command center, or frontal cortex, and more correlated with the empathic part of our brain. In other words, when we exercise self-control, we take on the perspective of our future self and empathize with that self’s perspectives, feelings, and motivations.”
~ Alexander Soutschek

Self-control is rooted in self-awareness. Julian Jaynes and Brian McVeigh, in one of their talks, brought up the idea that “mind space” has increased over time: “The more things we think about, the more distinctions we make in our consciousness between A and B, and so on, the more mind-space there is” (Discussions with Julian Jaynes, ed. by Brian J. McVeigh, p. 40). The first expansion was the creation of introspective consciousness itself. Narratization allowed that consciousness to also extend across time, to imagine possibilities, play out scenarios, and consider consequences. Empathy, as we experience it, might be a side effect of this expansion, as consciousness includes more and more within it, including empathy with our imagined future self. So, think of self-control as being kind to yourself, to your full temporal self, not only your immediate self.

This would relate to the suggestion that humans learn theory of mind, the basis of cognitive empathy, first by observing others and only later applying it to themselves. That is to say, the first expansion of mental space, as consciousness, takes root within relationship to others. It’s in realizing that there might be inner experience within someone else that we come to claim inner space in our own experience. So, our very ability to understand ourselves is dependent on empathy with others. This was a central purpose of the religions that arose in the Axial Age, the traditions that continue into the modern world* (Tahere Salehi, The Effect of Training Self-Control and Empathy According to Spirituality on Self-Control and Empathy of Preschool Female Students in Shiraz City). The prophets that emerged during that era taught love, compassion, and introspection, not only as an otherworldly moral dictum but also as a means of maintaining group coherence and the common good. The breakdown of what Jaynes called the bicameral mind was traumatic, and a new empathic mind was needed to replace it, if only to maintain social order.

Social order has become a self-conscious obsession ever since, as Jaynesian consciousness, in its tendency toward rigidity, has inherent weaknesses. Social disconnection is a crippling of the mind because the human psyche is inherently social. Imagining our future selves is a relationship with a more expansive sense of self; it operates by the same mechanism as relating to any other person. This goes back to Johann Hari’s idea, based on Bruce K. Alexander’s Rat Park research, that the addict is the ultimate individual. In this context, the ultimate individual lacking self-control is not only disconnected from other people but also disconnected from themselves. Addiction is isolating, and isolation promotes addiction. Based on this understanding, I’ve proposed that egoic consciousness is inherently addictive and that post-axial society is dependent on addiction for social control.

But this psychological pattern is seen far beyond addiction. It fits our personal experience of self. When we were severely depressed, we couldn’t imagine or care about the future. That definitely inhibited self-control and led to more impulsive behavior, as we were stuck in a present-oriented psychological survival mode. Then again, the only reason self-control is useful at all is that, during and following the Axial Age, humans ever more lost the capacity of being part of a communal identity, the very condition that had made communal control possible through the externally perceived commands of archaic authorization, heard as voices. Having lost that communal identity (extended mind/self), we also lost communal empathy, something that sounds strange or unappealing to the modern mind. This denial of our social nature casts the shadow of authoritarianism, an oppressive and often violent enforcement of top-down control.

By the way, this isn’t merely about psychology. Lead toxicity causes higher rates of impulsivity and aggression. That is not personal moral failure but brain damage from poisoning. Sure, teaching brain-damaged kids and adults to have more empathy might help them overcome their disability. But if we are to develop an empathic society, we should have enough empathy not to wantonly harm the brains of others with lead toxicity and other causes of stunted development (malnutrition, stress, ACEs, etc.), just because they are poor or minorities and can’t fight back. Maybe we first need to teach politicians and business leaders basic empathy, in overcoming the present dominance of psychopathic traits, so that they can learn the self-control not to harm others.

The part of the brain involving cognitive empathy and theory of mind is generally involved with selflessness and pro-social behavior. To stick with brain development and neurocognitive functioning, let’s look at diet. Weston A. Price, in studying traditional populations that maintained healthy diets, observed what he called moral health: people seemed kinder, more helpful, and happier — they got along well. A strong social fabric and culture of trust is not an abstraction but is built into general measures of health; in the case of Price’s work, it had to do with nutrient-dense animal foods containing fat-soluble vitamins. As the standard American diet has worsened, so has mental health. Oddly, that is a reason for hope: what a worsening diet has damaged, a better diet might repair. In an early study on the ketogenic diet as applied to childhood diabetes, the researchers made a side observation that not only did the diabetes symptoms improve but so did behavior. I’ve theorized about how a high-carb diet might be one of the factors that sustains the addictive and egoic self.

Narrow rigidity of the mind, as seen in the extremes of egoic consciousness, has come to be accepted as a social norm and even a social ideal. It is the social Darwinian worldview that has contributed to the rise of both competitive capitalism and the Dark Triad (psychopathy, narcissism, and Machiavellianism), and unsurprisingly it has led to a society that lacks awareness and appreciation of the harm caused to future generations (Scott Barry Kaufman, The Dark Triad and Impulsivity). Rather than being normalized, maybe this dysfunction should be seen as a sickness, not only a soul sickness but a literal sickness of the body-mind, one that can be scientifically observed and measured, not to mention medically and socially treated. We need to thin the boundaries of the mind so as to expand our sense of self. Research shows that those with such thinner boundaries not only identify more with their future selves but also with their past selves, maintaining a connection to what it felt like to be a child. We need to care for ourselves and others in the way we would protect a child.

* * *

* In their article “Alone and aggressive”, A. William Crescioni and Roy F. Baumeister included the loss of meaning, perhaps associated with the loss of empathy, specifically in understanding the meaning of others (e.g., the intention ‘behind’ words, gestures, and actions). Meaning traditionally has been the purview of religion. And I’d suggest it is no coincidence that the obsession with meaning arose in the Axial Age, right when words were invented for ‘religion’ as a formal institution separate from the rest of society. As Julian Jaynes argues, this was probably in response to the sense of nostalgia and longing that followed the silence of the gods, spirits, and ancestors.

A different kind of social connection had to be taught, but this post-bicameral culture wasn’t and still isn’t as effective in re-creating the strong social bonds of archaic humanity. Periods of moral crisis, in fear of societal breakdown, have repeated ever since, like a wound that never healed. I’ve previously written about social rejection and aggressive behavior in relation to this (12 Rules for Potential School Shooters), where, regarding school shooters, I explained:

Whatever they identify or don’t identify as, many and maybe most school shooters were raised Christian, and one wonders if that plays a role in their often expressing a loss of meaning, an existential crisis, etc. Birgit Pfeifer and Ruard R. Ganzevoort focus on the religious-like concerns that obsess so many school shooters and note that many of them had religious backgrounds:

“Traditionally, religion offers answers to existential concerns. Interestingly, school shootings have occurred more frequently in areas with a strong conservative religious population (Arcus 2002). Michael Carneal (Heath High School shooting, 1997, Kentucky) came from a family of devoted members of the Lutheran Church. Mitchell Johnson (Westside Middle School shooting, 1998, Arkansas) sang in the Central Baptist Church youth choir (Newman et al. 2004). Dylan Klebold (Columbine shooting, 1999, Colorado) attended confirmation classes in accordance with Lutheran tradition. However, not all school shooters have a Christian background. Some of them declare themselves atheists…” (The Implicit Religion of School Shootings).

Princeton sociologist Katherine Newman, in studying school shootings, has noted that, “School rampage shootings tend to happen in small, isolated or rural communities. There isn’t a very direct connection between where violence typically happens, especially gun violence in the United States, and where rampage shootings happen” (Common traits of all school shooters in the U.S. since 1970).

It is quite significant that these American mass atrocities are concentrated in “small, isolated or rural communities” that are “frequently in areas with a strong conservative religious population”. That might more precisely indicate who these school shooters are and what they are reacting to. Also, one might note that rural areas in general, and the South specifically, have high rates of gun-related deaths, although many of them are listed as ‘accidental’; that is to say, most rural shootings involve people who know each other, which is also true of school shootings.

* * *

Brain stimulation reveals crucial role of overcoming self-centeredness in self-control
by Alexander Soutschek, Christian C. Ruff, Tina Strombach, Tobias Kalenscher and Philippe N. Tobler

Empathic Self-Control
by David Shoemaker

People with a high degree of self-control typically enjoy better interpersonal relationships, greater social adjustment, and more happiness than those with a low degree of self-control. They also tend to have a high degree of empathy. Further, those with low self-control also tend to have low empathy. But what possible connection could there be between self-control and empathy, given that how one regulates oneself seems to have no bearing on how one views others? Nevertheless, this paper aims to argue for a very tight relation between self-control and empathy, namely, that empathy is in fact one type of self-control. The argument proceeds by exploring two familiar types of self-control, self-control over actions and attitudes, the objects for which we are also responsible. Call the former volitional self-control and the latter rational self-control. But we also seem to be responsible for—and have a certain type of control and self-control over—a range of perceptual states, namely, those in which we come to see from another person’s perspective how she views her valuable ends and what her emotional responses are to their thwarting or flourishing. This type of empathic self-control is a previously-unexplored feature of our interpersonal lives. In addition, once we see that the type of empathy exercised is also exercised when casting ourselves into the shoes of our future selves, we will realize how intra-personal empathy better enables both volitional and rational self-control.

Science Says When Self-Control Is Hard, Try Empathizing With Your Future Self
by Lindsay Shaffer

Soutschek’s study also reveals what happens when we fail to exercise the empathic part of our brain. When Soutschek interrupted the empathic center of the brain in 43 study volunteers, they were more likely to take a small amount of cash immediately over a larger amount in the future. They were also less inclined to share the money with a partner. Soutschek’s study showed that the more people are stuck inside their own perspective, even just from having the empathic part of their brain disrupted, the more likely they are to behave selfishly and impulsively.

Self-Control Is Just Empathy With Your Future Self
by Ed Yong

This tells us that impulsivity and selfishness are just two halves of the same coin, as are their opposites restraint and empathy. Perhaps this is why people who show dark traits like psychopathy and sadism score low on empathy but high on impulsivity. Perhaps it’s why impulsivity correlates with slips among recovering addicts, while empathy correlates with longer bouts of abstinence. These qualities represent our successes and failures at escaping our own egocentric bubbles, and understanding the lives of others—even when those others wear our own older faces.

New Studies in Self Control: Treat Yourself Like You’d Treat Others
from Peak

A new study recently shifted the focus to a different mechanism of self control. Alexander Soutschek and colleagues from the University of Zurich believe self-control may be related to our ability to evaluate our future wants and needs.

The scientists suggest that this takes place in an area of the brain called the rTPJ, which has long been linked to selflessness and empathy for others. It’s an important part of our ability to “take perspectives”, helping us step into the shoes of a friend.

The scientists hypothesized that perhaps the rTPJ treats our “future self” the same way it treats any other person. If it helps us step into our friend’s shoes, maybe we can do the same thing for ourselves. For example, if we’re deciding whether to indulge in another pint of beer at a bar, maybe our ability to hold off is related to our ability to imagine tomorrow morning’s hangover. As science writer Ed Yong explains, “Think of self-control as a kind of temporal selflessness. It’s Present You taking a hit to help out Future You.”

Empathy for Your Future Self
by Reed Rawlings

Further Research on the TPJ

The results of Soutschek’s team were similar to past work on empathy, the future self, and the TPJ. It’s believed a better-connected rTPJ increases the likelihood of prosocial behaviors, which relates to skills of executive function. Individuals who exhibit lower empathy score higher for impulsivity – the opposite of self-control.

Keeping our future selves in mind may even keep our savings in check. In this research, Stanford University researchers tested “future self-continuity”. They wanted to explore how individuals related to their future selves. Participants were asked to identify how they felt about the overlap between their current and future selves, using Venn diagrams that depicted varying degrees of overlap between the two.

If they saw themselves as separate, they were more likely to choose immediate rewards. A greater overlap increased the likelihood of selecting delayed rewards. In their final study, they assessed individuals from the San Francisco Bay area. The researchers found a correlation between wealth and an overlap between selves.

While the above research is promising, it doesn’t paint a full picture. Empathy seems useful, but making a sacrifice for our future self requires that we understand the reason behind it. It’s the sacrifice that is especially crucial – positive gains demand negative trade-offs.

That’s where altruism, our willingness to give to others, comes in.

Why Do We Sacrifice?

Research from the University of Zurich examined some of altruism’s driving factors. Their work came up with two correlations. First, the larger your rTPJ, the more likely you are to behave altruistically. Second, concerns of fairness affect how we give.

In this experiment, individuals were more generous if their choice would decrease inequality. When inequality would increase, participants were less likely to give.

This is an understandable human maxim. We have little reason to give to an individual who has more than we do. It feels completely unfair to do so. However, we’re raised to believe that helping those in need is objectively good. Helping ourselves should fall under the same belief.

Empathy and altruism, when focused on our own well-being, are intimately linked. To give selflessly, we need to have a genuine concern for another’s well-being. In this case, the ‘other’ is our future self. Thankfully, with a bit of reflection, each of us can gain a unique insight into our own lives.

Alone and aggressive: Social exclusion impairs self-control and empathy and increases hostile cognition and aggression.
by A. William Crescioni and Roy F. Baumeister
from Bullying, Rejection, and Peer Victimization ed. by Monica J. Harris
pp. 260-271 (full text)

Social Rejection and Emotional Numbing

Initial studies provided solid evidence for a causal relationship between rejection and aggression. The mechanism driving this relationship remained unclear, however. Emotional distress was perhaps the most plausible mediator. Anxiety has been shown to play a role in both social rejection (Baumeister & Tice, 1990) and ostracism (Williams et al., 2000). Emotional distress, however, was not present in these experiments by Twenge et al. (2001). Only one significant mood effect was found, and even this effect deviated from expectations. The sole difference in mood between rejected and accepted participants was a slight decrease in positive affect. Rejected participants did not show any increase in negative affect; rather, they showed a flattening of affect, in particular a decrease in positive affect. This mood difference did not constitute a mediator of the link between rejection and aggression. It did, however, point toward a new line of thinking. It was possible that rejection would lead to emotional numbing rather than causing emotional distress. The flattening of affect seen in the previous set of studies would be consistent with a state of cognitive deconstruction. This state is characterized by an absence of emotion, an altered sense of time, a fixation on the present, a lack of meaningful thought, and a general sense of lethargy (Baumeister, 1990). […]

Rejection and Self-Regulation

Although the emotional numbness and decrease in empathy experienced by rejected individuals play an important role in the link between social rejection and aggression, these effects do not constitute a complete explanation of why rejection leads to aggression. The diminished prosocial motivations experienced by those lacking in empathy can open the door to aggressive behavior, but having less of a desire to do good and having more of a desire to do harm are not necessarily equivalent. A loss of empathy, paired with the numbing effects of rejection, could lead individuals to shy away from those who had rejected them rather than lashing out. Emotional numbness, however, is not the only consequence of social rejection.

In addition to its emotional consequences, social rejection has adverse effects on a variety of cognitive abilities. Social rejection has been shown to decrease intelligent thought (Baumeister, Twenge, & Nuss, 2002) and meaningful thought (Twenge et al., 2002). But another category of cognitive response is self-regulation. Studies have demonstrated that self-regulation depends upon a finite resource and that acts of self-regulation can impair subsequent attempts to exercise self-control (Baumeister, Bratslavsky, Muraven, & Tice, 1998). Self-regulation has been shown to be an important tool for controlling aggressive impulses. Stucke and Baumeister (2006) found that targets whose ability to self-regulate had been depleted were more likely to respond aggressively to insulting provocation. DeWall, Baumeister, Stillman, and Gailliot (2007) found that diminished self-regulatory resources led to an increase in aggression only in response to provocation; unprovoked participants showed no increase in aggressive behavior. Recall that in earlier work (Twenge et al., 2002) rejected individuals became more aggressive only when the target of their aggression was perceived as having insulted or provoked them. This aggression could have been the result of the diminished ability of rejected participants to regulate their aggressive urges. […]

These results clearly demonstrate that social rejection has a detrimental effect on self-regulation, but they do not explain why this is so and, indeed, the decrement in self-regulation would appear to be counterproductive for rejected individuals. Gaining social acceptance often involves regulating impulses in order to create positive impressions on others (Vohs, Baumeister, & Ciarocco, 2005). Rejected individuals should therefore show an increase in self-regulatory effort if they wish to create new connections or prevent further rejection. The observed drop in self-regulation therefore seems maladaptive. The explanation for this finding lies in rejection’s effect on self-awareness.

Self-awareness is an important prerequisite of conscious self-control (Carver & Scheier, 1981). Twenge et al. (2002) found that, when given the option, participants who had experienced rejection earlier in the study were more likely to sit facing away from rather than toward a mirror. Having participants face a mirror is a common technique for inducing self-awareness (Carver & Scheier, 1981), so participants’ unwillingness to do so following rejection provides evidence of a desire to avoid self-awareness. A drop in self-awareness is part of the suite of effects that comprises a state of cognitive deconstruction. Just as emotional numbness protects rejected individuals from the emotional distress of rejection, a drop in self-awareness would shield against awareness of personal flaws and shortcomings that could have led to that rejection. The benefit of this self-ignorance is that further distress over one’s inadequacies is mitigated. Unfortunately, this protection carries the cost of decreased self-regulation. Because self-regulation is important for positive self-presentation (Vohs et al., 2005), this drop in self-awareness could ironically lead to further rejection. […]

These data suggest that social rejection does not decrease the absolute ability of victims to self-regulate but rather decreases their willingness to exert the effort necessary to do so. Increased lethargy, another aspect of cognitive deconstruction, is consistent with this decrease in self-regulatory effort. Twenge et al. (2002) found that social rejection led participants to give shorter and less detailed explanations of proverbs. Because fully explaining the proverbs would require an effortful response, this shortening and simplification of responses is evidence of increased lethargy amongst rejected participants. This lethargy is not binding, however. When given sufficient incentive, rejected participants were able to match the self-regulatory performance of participants in other conditions. Inducing self-awareness also allowed rejected individuals to self-regulate as effectively as other participants. In the absence of such stimulation, however, rejected individuals showed a decrement in self-regulatory ability that constitutes an important contribution to explaining the link between rejection and aggression. […]

Rejection and Meaningfulness

Twenge et al. (2002) found that social rejection led to a decrease in meaningful thought among participants, as well as an increased likelihood to endorse the statement, “Life is meaningless.” Williams (2002) has also suggested that social rejection ought to be associated with a perception of decreased meaning in life. Given the fundamental nature of the need to belong, it makes sense that defining life as meaningful would be at least in part contingent on the fulfillment of social needs. A recent line of work has looked explicitly at the effect of social rejection on the perception of meaning in life. Perceiving meaning in life has been shown to have an inverse relationship with hostility, aggression, and antisocial attitude (Mascaro, Morey, & Rosen, 2004). As such, any decrease in meaning associated with social rejection would constitute an important feature of the explanation of the aggressive behavior of rejected individuals.

The God of the Left Hemisphere:
Blake, Bolte Taylor and the Myth of Creation
by Roderick Tweedy

The left hemisphere is competitive… the will to power…is the agenda of the left hemisphere. It arose not to communicate with the world but to manipulate it. This inability to communicate or co-operate poses great difficulties for any project of reintegration or union. Its tendency would be to feed off the right hemisphere, to simply use and gain power over it too. Left hemisphere superiority is based, not on a leap forward by the left hemisphere, but on a ‘deliberate’ handicapping of the right. There is perhaps as much chance of persuading the head of a multinational to stop pursuing an agenda of self-interest and ruthless manipulation as there is of persuading the Urizenic program of the brain which controls him of “resubmitting” itself to the right hemisphere’s values and awareness.

The story of the Western world being one of increasing left-hemispheric domination, we would not expect insight to be the key note. Instead we would expect a sort of insouciant optimism, the sleepwalker whistling a happy tune as he ambles towards the abyss.

The left, rational, brain, it might be safe to conclude, has no idea how serious the problem is, that is to say, how psychopathic it has become. Of course, it doesn’t care that it doesn’t care. “The idiot Reasoner laughs at the Man of Imagination/And from laughter proceeds to murder by undervaluing calumny”, noted Blake in a comment that is only remarkable for the fact that it has taken two hundred years to understand.

The apparently “conscious” rational self, the driving program and personality of the left brain, turns out to be deeply unconscious, a pathological sleepwalker blithely poisoning its own environment whilst tenaciously clinging onto the delusion of its own rightness. This unfortunate mixture, of arrogance and ignorance, defines contemporary psychology. The left hemisphere not only cannot see that there is a problem, it cannot see that it is itself the problem.