Balance of Egalitarianism and Hierarchy

David Graeber, an anthropologist, and David Wengrow, an archaeologist, have a theory that hunter-gatherer societies cycled between egalitarianism and hierarchy. That is to say, hierarchies were temporary and often seasonal. There was no permanent leadership or ruling caste, as seen in the fluid social order of still-surviving hunter-gatherers. This carried over into the early settlements, which were initially transitory meeting places, likely for feasts and festivals.

There are two questions that need to be answered. First, why did humans permanently settle down? Second, why did civilization get stuck in hierarchy? These questions have to be answered separately. For millennia into civilization, the egalitarian impulse persisted within many permanent settlements. There was no linear development from egalitarianism to hierarchy, no fall from the Garden of Eden.

Julian Jaynes, in his theorizing about the bicameral mind, offered a possible explanation. A contributing factor for permanent settlements may have been that the speaking idols had to be kept in a single location, with agriculture developing as a later result. Then, as societies became more populous, complex, and expansive, hierarchies (as with moralizing gods) became more important to compensate for the communal limits of a voice-hearing social order.

That kind of hierarchy, though, was a much later development, especially in its extreme forms not seen until the Axial Age empires. The earlier bicameral societies had a more communal identity. That would’ve been true on the level of experience, as even the voices people heard were shared. There wasn’t an internal self separate from the communal identity and so no conflict between the individual member and larger society. One either fully belonged to and was immersed in that culture or not.

Large, complex hierarchies weren’t needed. Bicameralism began in small settlements that lacked police, court systems, standing armies, etc — all the traits of an oppressively authoritarian hierarchy that would later be seen, such as the simultaneous appearance of sexual moralizing and pornographic art. It wasn’t the threat of violent force by centralized authority and concentrated power that created and maintained the bicameral order but, as still seen with isolated indigenous tribes, shared identity and experience.

An example of this is the early Egyptians. They were capable of impressive technological feats, and yet they didn’t even have basic infrastructure like bridges. It appears they were initially a loose association of farmers organized around the bicameral culture of archaic authorization and, in the off-season, they built pyramids without coercion. Slavery was not required for this, as there is no evidence of forced labor.

In so many ways, this is alien to the conventional understanding of civilization. It is so radically strange that to many it seems impossible, especially when it gets described as ‘egalitarian’, placing it in a framework of modern ideas. Mention primitive ‘communism’ or ‘anarchism’ and you’ll really lose most people. Nonetheless, however one wants to describe and label it, this is what the evidence points toward.

Here is another related thought. How societies went from bicameral mind to consciousness is well-trodden territory. But what about how bicameralism emerged from animism? They share enough similarities that I’ve referred to them as the animistic-bicameral complex. The bicameral mind seems like a variant or extension of the voice-hearing in animism.

Among hunter-gatherers, it was often costume and masks through which gods, spirits, and ancestors spoke. Any individual potentially could become the vessel of possession because, in the animistic view, all the world is alive with voices. So, how did this animistic voice-hearing become narrowed down to idol worship of corpses and statues?

I ask this because this is central to the question of why humans created permanent settlements. A god-king’s voice of authorization was so powerful that it persisted beyond his death. The corpse was turned into a mummy, as his voice was a living memory that kept speaking, and so god-houses were built. But how did the fluid practice of voice-hearing in animism become centralized in a god-king?

Did this begin with the rise of shamanism? Some hunter-gatherers don’t have shamans. But once the role of shaman becomes a permanent authority figure mediating with other realms, it’s not a large leap from a shaman-king to a god-king who could be fully deified in death. In that case, how did shamanism act as a transitional proto-bicameralism? In this, we might begin to discern the hitch upon which permanent hierarchy eventually got stuck.

I might point out that there is much disagreement in this area of scholarship, as expected. The position of Graeber and Wengrow is highly contested, even among those offering alternative interpretations of the evidence; see Peter Turchin (An Anarchist View of Human Social Evolution & A Feminist Perspective on Human Social Evolution) and Camilla Power (Gender egalitarianism made us human: patriarchy was too little, too late & Gender egalitarianism made us human: A response to David Graeber & David Wengrow’s ‘How to change the course of human history’).

But I don’t see the disagreements as being significant for the purposes here. Here is a basic point that Turchin explains: “The reason we say that foragers were fiercely egalitarian is because they practiced reverse dominance hierarchy” (from the first link directly above). That seems to go straight to the original argument. Many other primates have social hierarchy, although not all. Some of the difference appears to be cultural, in that humans early in evolution appear to have developed cultural methods of enforcing egalitarianism. This cultural pattern has existed long enough to have fundamentally altered human nature.

According to Graeber and Wengrow, these egalitarian habits weren’t lost easily, even as society became larger and more complex. Modern authoritarian hierarchies represent a late development, a fraction of a percentage of human existence. They are far outside the human norm. In social science experiments, we see how the egalitarian impulse persists. Consider two examples. Children will naturally help those in need, until someone pays them money to do so, shifting from intrinsic motivation to extrinsic. The other study showed how most people, both children and adults, will choose to punish wrongdoers even at personal cost.

This in-built egalitarianism is an old habit that doesn’t die easily, no matter how it is suppressed or perverted by systems of authoritarian power. It is the psychological basis of a culture of trust that permanent hierarchies take advantage of through manipulation of human nature. The egalitarian impulse gets redirected into undermining egalitarianism. This is why modern societies are so unstable, as compared to the ancient societies that lasted for millennia.

That said, there is nothing wrong with genuine authority, expertise, and leadership — as seen even in the most radically egalitarian societies like the Piraha. Hierarchies are also part of our natural repertoire and only problematic when they fall out of balance with egalitarianism and so become entrenched. One way or another, human societies cycle between hierarchy and egalitarianism, whether it cycles on a regular basis or necessitates collapse. That is the point Walter Scheidel makes in his book, The Great Leveler. High inequality destabilizes society and always brings its own downfall.

We need to relearn that balance, if we hope to avoid mass disaster. Egalitarianism is not a utopian ideal. It’s simply the other side of human nature that gets forgotten.

* * *

Archaeology, anarchy, hierarchy, and the growth of inequality
by Andre Costopoulos

In some ways, I agree with both Graeber and Wengrow, and with Turchin. Models of the growth of social inequality have indeed emphasized a one-dimensional march, sometimes inevitable, from virtual equality and autonomy to strong inequality and centralization. I agree with Graeber and Wengrow that this is a mistaken view. Except I think humans have moved from strong inequality, to somewhat managed inequality, to strong inequality again.

The rise and fall of equality

Hierarchy, dominance, power, influence, politics, and violence are hallmarks not only of human social organization, but of that of our primate cousins. They are widespread among mammals. Inequality runs deep in our lineage, and our earliest identifiable human ancestors must have inherited it. But an amazing thing happened among Pleistocene humans. They developed strong social leveling mechanisms, which actively reduced inequality. Some of those mechanisms are still at work in our societies today: Ridicule at the expense of self-aggrandizers, carnival inversion as a reminder of the vulnerability of the powerful, ostracism of the controlling, or just walking away from conflict, for example.

Understanding the growth of equality in Pleistocene human communities is the big untackled project of Paleolithic archaeology, mostly because we assume they started from a state of egalitarianism and either degenerated or progressed from there, depending on your lens. Our broader evolutionary context argues they didn’t.

During the Holocene, under increasing sedentism and dependence on spatially bounded resources such as agricultural fields that represent significant energy investments, these mechanisms gradually failed to dampen the pressures for increasing centralization of power. However, even at the height of the Pleistocene egalitarian adaptation, there were elites if, using Turchin’s figure of the top one or two percent, we consider that the one or two most influential members in a network of a hundred are its elite. All the social leveling in the world could not contain influence. Influence, in the end, if wielded effectively, is power.

Ancient ‘megasites’ may reshape the history of the first cities
by Bruce Bower

No signs of a centralized government, a ruling dynasty, or wealth or social class disparities appear in the ancient settlement, the researchers say. Houses were largely alike in size and design. Excavations yielded few prestige goods, such as copper items and shell ornaments. Many examples of painted pottery and clay figurines typical of Trypillia culture turned up, and more than 6,300 animal bones unearthed at the site suggest residents ate a lot of beef and lamb. Those clues suggest daily life was much the same across Nebelivka’s various neighborhoods and quarters. […]

Though some of these sprawling sites had social inequality, egalitarian cities like Nebelivka were probably more widespread several thousand years ago than has typically been assumed, says archaeologist David Wengrow of University College London. Ancient ceremonial centers in China and Peru, for instance, were cities with sophisticated infrastructures that existed before any hints of bureaucratic control, he argues. Wengrow and anthropologist David Graeber of the London School of Economics and Political Science also made that argument in a 2018 essay in Eurozine, an online cultural magazine.

Councils of social equals governed many of the world’s earliest cities, including Trypillia megasites, Wengrow contends. Egalitarian rule may even have characterized Mesopotamian cities for their first few hundred years, a period that lacks archaeological evidence of royal burials, armies or large bureaucracies typical of early states, he suggests.

How to change the course of human history
by David Graeber and David Wengrow

Overwhelming evidence from archaeology, anthropology, and kindred disciplines is beginning to give us a fairly clear idea of what the last 40,000 years of human history really looked like, and in almost no way does it resemble the conventional narrative. Our species did not, in fact, spend most of its history in tiny bands; agriculture did not mark an irreversible threshold in social evolution; the first cities were often robustly egalitarian. Still, even as researchers have gradually come to a consensus on such questions, they remain strangely reluctant to announce their findings to the public – or even scholars in other disciplines – let alone reflect on the larger political implications. As a result, those writers who are reflecting on the ‘big questions’ of human history – Jared Diamond, Francis Fukuyama, Ian Morris, and others – still take Rousseau’s question (‘what is the origin of social inequality?’) as their starting point, and assume the larger story will begin with some kind of fall from primordial innocence.

Simply framing the question this way means making a series of assumptions, that 1. there is a thing called ‘inequality,’ 2. that it is a problem, and 3. that there was a time it did not exist. Since the financial crash of 2008, of course, and the upheavals that followed, the ‘problem of social inequality’ has been at the centre of political debate. There seems to be a consensus, among the intellectual and political classes, that levels of social inequality have spiralled out of control, and that most of the world’s problems result from this, in one way or another. Pointing this out is seen as a challenge to global power structures, but compare this to the way similar issues might have been discussed a generation earlier. Unlike terms such as ‘capital’ or ‘class power’, the word ‘equality’ is practically designed to lead to half-measures and compromise. One can imagine overthrowing capitalism or breaking the power of the state, but it’s very difficult to imagine eliminating ‘inequality’. In fact, it’s not obvious what doing so would even mean, since people are not all the same and nobody would particularly want them to be.

‘Inequality’ is a way of framing social problems appropriate to technocratic reformers, the kind of people who assume from the outset that any real vision of social transformation has long since been taken off the political table. It allows one to tinker with the numbers, argue about Gini coefficients and thresholds of dysfunction, readjust tax regimes or social welfare mechanisms, even shock the public with figures showing just how bad things have become (‘can you imagine? 0.1% of the world’s population controls over 50% of the wealth!’), all without addressing any of the factors that people actually object to about such ‘unequal’ social arrangements: for instance, that some manage to turn their wealth into power over others; or that other people end up being told their needs are not important, and their lives have no intrinsic worth. The latter, we are supposed to believe, is just the inevitable effect of inequality, and inequality, the inevitable result of living in any large, complex, urban, technologically sophisticated society. That is the real political message conveyed by endless invocations of an imaginary age of innocence, before the invention of inequality: that if we want to get rid of such problems entirely, we’d have to somehow get rid of 99.9% of the Earth’s population and go back to being tiny bands of foragers again. Otherwise, the best we can hope for is to adjust the size of the boot that will be stomping on our faces, forever, or perhaps to wrangle a bit more wiggle room in which some of us can at least temporarily duck out of its way.

Mainstream social science now seems mobilized to reinforce this sense of hopelessness.

Rethinking cities, from the ground up
by David Wengrow

Settlements inhabited by tens of thousands of people make their first appearance in human history around 6,000 years ago. In the earliest examples on each continent, we find the seedbed of our modern cities; but as those examples multiply, and our understanding grows, the possibility of fitting them all into some neat evolutionary scheme diminishes. It is not just that some early cities lack the expected features of class divisions, wealth monopolies, and hierarchies of administration. The emerging picture suggests not just variability, but conscious experimentation in urban form, from the very point of inception. Intriguingly, much of this evidence runs counter to the idea that cities marked a ‘great divide’ between rich and poor, shaped by the interests of governing elites.

In fact, surprisingly few early cities show signs of authoritarian rule. There is no evidence for the existence of monarchy in the first urban centres of the Middle East or South Asia, which date back to the fourth and early third millennia BCE; and even after the inception of kingship in Mesopotamia, written sources tell us that power in cities remained in the hands of self-governing councils and popular assemblies. In other parts of Eurasia we find persuasive evidence for collective strategies, which promoted egalitarian relations in key aspects of urban life, right from the beginning. At Mohenjo-daro, a city of perhaps 40,000 residents, founded on the banks of the Indus around 2600 BCE, material wealth was decoupled from religious and political authority, and much of the population lived in high quality housing. In Ukraine, a thousand years earlier, prehistoric settlements already existed on a similar scale, but with no associated evidence of monumental buildings, central administration, or marked differences of wealth. Instead we find circular arrangements of houses, each with its attached garden, forming neighbourhoods around assembly halls; an urban pattern of life, built and maintained from the bottom-up, which lasted in this form for over eight centuries.⁶

A similar picture of experimentation is emerging from the archaeology of the Americas. In the Valley of Mexico, despite decades of active searching, no evidence for monarchy has been found among the remains of Teotihuacan, which had its magnificent heyday around 400 CE. After an early phase of monumental construction, which raised up the Pyramids of the Sun and Moon, most of the city’s resources were channelled into a prodigious programme of public housing, providing multi-family apartments for its residents. Laid out on a uniform grid, these stone-built villas — with their finely plastered floors and walls, integral drainage facilities, and central courtyards — were available to citizens regardless of wealth, status, or ethnicity. Archaeologists at first considered them to be palaces, until they realised virtually the entire population of the city (all 100,000 of them) were living in such ‘palatial’ conditions.⁷

A millennium later, when Europeans first came to Mesoamerica, they found an urban civilisation of striking diversity. Kingship was ubiquitous in cities, but moderated by the power of urban wards known as calpolli, which took turns to fulfil the obligations of municipal government, distributing the highest offices among a broad sector of the altepetl (or city-state). Some cities veered towards absolutism, but others experimented with collective governance. Tlaxcalan, in the Valley of Puebla, went impressively far in the latter direction. On arrival, Cortés described a commercial arcadia, where the ‘order of government so far observed among the people resembles very much the republics of Venice, Genoa, and Pisa for there is no supreme overlord.’ Archaeology confirms the existence here of an indigenous republic, where the most imposing structures were not palaces or pyramid-temples, but the residences of ordinary citizens, constructed around district plazas to uniformly high standards, and raised up on grand earthen terraces.⁸

Contemporary archaeology shows that the ecology of early cities was also far more diverse, and less centralised than once believed. Small-scale gardening and animal keeping were often central to their economies, as were the resources of rivers and seas, and indeed the ongoing hunting and collecting of wild seasonal foods in forests or in marshes, depending on where in the world we happen to be.⁹ What we are gradually learning about history’s first city-dwellers is that they did not always leave a harsh footprint on the environment, or on each other; and there is a contemporary message here too. When today’s urbanites take to the streets, calling for the establishment of citizens’ assemblies to tackle issues of climate change, they are not going against the grain of history or social evolution, but with its flow. They are asking us to reclaim something of the spark of political creativity that first gave life to cities, in the hope of discerning a sustainable future for the planet we all share.

Farewell to the ‘Childhood of Man’
by Gyrus

[Robert] Lowie made similar arguments to [Pierre] Clastres, about conscious knowledge of hierarchies among hunter-gatherers. However, for reasons related to his concentration on Amazonian Indians, Clastres missed a crucial point in Lowie’s work. Lowie highlighted the fact that among many foragers, such as the Eskimos in the Arctic, egalitarianism and hierarchy exist within the same society at once, cycling from one to another through seasonal social gatherings and dispersals. Based on social responses to seasonal variations in the weather, and patterns in the migration of hunted animals, not to mention the very human urge to sometimes hang out with a lot of people and sometimes to get the hell away from them, foraging societies often create and then dismantle hierarchical arrangements on a year-by-year basis.

There seems to have been some confusion about exactly what the pattern was. Does hierarchy arise during gatherings? This would tally with sociologist Émile Durkheim’s famous idea that ‘the gods’ were a kind of primitive hypothesis personifying the emergent forces that social complexity brought about. People sensed the dynamics changing as they lived more closely in greater numbers, and attributed these new ‘transcendent’ dynamics to organised supernatural forces that bound society together. Religion and cosmology thus function as naive mystifications of social forces. Graeber detailed ethnographic examples where some kind of ‘police force’ arises during tribal gatherings, enforcing the etiquette and social expectations of the event, but returning to being everyday people when it’s all over.

But sometimes, the gatherings are occasions for the subversion of social order — as is well known in civilised festivals such as the Roman Saturnalia. Thus, the evidence seemed to be confusing, and the idea of seasonal variations in social order was neglected. After the ’60s, the dominant view became that ‘simple’ egalitarian hunter-gatherers were superseded by ‘complex’ hierarchical hunter-gatherers as a prelude to farming and civilisation.

Graeber and Wengrow argue that the evidence isn’t confusing: it’s simply that hunter-gatherers are far more politically sophisticated and experimental than we’ve realised. Many different variations, and variations on variations, have been tried over the vast spans of time that hunter-gatherers have existed (over 200,000 years, compared to the 12,000 or so years we know agriculture has been around). Clastres was right: people were never naive, and resistance to the formation of hierarchies is a significant part of our heritage. However, seasonal variations in social structures mean that hierarchies may never have been a ghostly object of resistance. They have probably been at least a temporary factor throughout our long history.¹ Sometimes they functioned, in this temporary guise, to facilitate socially positive events — though experience of their oppressive possibilities usually encouraged societies to keep them in check, and prevent them from becoming fixed.

How does this analysis change our sense of the human story? In its simplest form, it moves the debate from ‘how and when did hierarchy arise?’ to ‘how and when did we get stuck in the hierarchical mode?’. But this is merely the first stage in what Graeber and Wengrow promise is a larger project, which will include analysis of the persistence of egalitarianism among early civilisations, usually considered to be ‘after the fall’ into hierarchy.

 

Alienation and Soul Blindness

There is the view that consciousness* is a superficial overlay, that the animistic-bicameral mind is our fundamental nature and continues to operate within consciousness. In not recognizing this, we’ve become alienated from ourselves and from the world we are inseparable from. We don’t recognize that the egoic voice is but one of many voices, and so we’ve lost appreciation for what it means to hear voices, including the internalized egoic voice that we’ve become identified with in submission to its demiurgic authorization. This could be referred to as soul blindness, maybe related to soul loss — basically, a lack of psychological integration and coherency. Is this an inevitability within consciousness? Maybe not. What if a deeper appreciation of voice-hearing was developed within consciousness? What would emerge from consciousness coming to terms with its animistic-bicameral foundation? Would it still be consciousness or something else entirely?

* This is in reference to Julian Jaynes’s use of ‘consciousness’, which refers to the ego mind with its introspective and internal space built upon metaphor and narratization. Such consciousness, as a social construction of a particular kind of culture, is not mere perceptual awareness or biological reactivity.

* * *

Is It as Impossible to Build Jerusalem as It is to Escape Babylon? (Part Two)
by Peter Harrison

Marx identified the concept of alienation as being a separation, or estrangement, from one’s labour. And for Marx the consistent ability to labour, to work purposefully and consciously, as opposed to instinctively, towards a pre-imagined goal, was the trait that distinguished humans from other animals. This means also that humans are able to be persuaded to work creatively, with vigour and passion, for the goals of others, or for some higher goal than the maintenance of daily survival. As long as they are able to see some tiny benefit for themselves, which might be service to a higher cause, or even just simple survival, since working for the goal of others may be the only means of obtaining food. So, Marx’s definition of alienation was more specific than an ‘existential’ definition because it specified labour as the defining human characteristic. But he was also aware that the general conditions of capitalism made this alienation more acute and that this escalated estrangement of humans from immediately meaningful daily activity led to a sense of being a stranger in one’s own world, and not only for the working class. This estrangement (I want to write étranger-ment, to reference Camus, but this is not a word) afflicted all classes, even those classes that seemed to benefit from class society, since capitalism had, even by his own time, gained an autonomy of its own. Life is as meaningless [or better: as anti-human] for a cleaner as it is for the head of a large corporation. This is why Marx stated that all people under capitalism were proletarian.

When I discovered the idea of soul blindness in Eduardo Kohn’s book, How Forests Think, I was struck by it as another useful way of understanding the idea of alienation. The concept of soul blindness, as used by the Runa people described by Kohn, seems to me to be related to the widespread Indigenous view of the recently deceased as aimless and dangerous beings who must be treated with great care and respect after their passing to prevent them wreaking havoc on the living. In Kohn’s interpretation, to be soul blind is to have reached the ‘terminus of selfhood,’ and this terminus can be reached while still alive, when one loses one’s sense of self through illness or despair, or even when one just drifts off into an unfocussed daze, or, more profoundly, sinks into an indifference similar to — to reference Camus again — that described by the character Meursault, in L’Etranger.

There are some accounts of Indigenous people first encountering white people in which the white people are initially seen as ghosts; one is recorded by Lévi-Strauss for Vanuatu. Another is embedded in the popular Aboriginal history of the area I live in. On first contact the white people are immediately considered to be some kind of ghost because of their white skin. This may have something to do with the practice of preserving the bodies of the dead. This involves scraping off the top layer of skin which, apparently, makes the body white. This practice is described by the anthropologist Atholl Chase, in his reminiscences of Cape York. But for me there is more to the defining of the white intruders as ghosts because of their white skin. These foreigners also act as if they are soul blind. They are like machines, working for a cause that is external to them. For the Indigenous people these strangers do not seem to have soul: they are unpredictable; dangerous; they don’t know who they are.

But it is the anthropologist Eduardo Viveiros de Castro who, I think, connects most clearly to the work of James Hillman on the notion of the soul. James Hillman uses the term soul but he does not mean a Christian soul and he is not ultimately meaning the mind. For him the soul is a form of mediation between events and the subject and, in this sense, it might be similar to Bourdieu’s conception of ‘disposition.’ For Viveiros de Castro, ‘A perspective is not a representation because representations are a property of the mind or spirit, whereas the point of view is located in the body.’ Thus, Amerindian philosophy, which Viveiros de Castro is here describing, perhaps prefigures Hillman’s notion that ‘soul’ is ‘a perspective rather than a substance, a viewpoint towards things rather than a thing itself.’

Islamic Voice-Hearing

Islam, what kind of religion is it? Islam is the worship of a missing god, that is how we earlier described it. Some might consider that as unfair and dismissive to one of the world’s largest religions, but this is true to some extent for all post-bicameral religions. The difference is that Islam is among the most post-bicameral of the world religions. This is true simply in temporal terms.

The bicameral societies, according to Julian Jaynes, ended with the widespread collapse of the late Bronze Age empires and their trade networks. That happened around 1177 BCE, as the result of natural disasters and attacks by the mysterious Sea People, the latter maybe having formed out of the refugees from the former. The Bronze Age continued for many centuries in various places: 700 BCE in Great Britain, Central Europe and China; 600 BCE in Northern Europe; 500 BCE in Korea and Ireland; and centuries beyond that in places like Japan.

But the Bronze Age Empires never returned. In that late lingering Bronze Age, a dark age took hold and put all of civilization onto a new footing. This was the era when, across numerous cultures, there were the endless laments about the gods, spirits, and ancestors having gone silent, having abandoned humanity. Entire cultural worldviews and psychological ways of being were utterly demolished or else irreparably diminished. This created an intense sense of loss, longing, and nostalgia that has never left humanity since.

Out of the ashes, while the Bronze Age was still holding on, the Axial Age arose around 900 BCE and continued until 200 BCE. New cultures were formed and new empires built. The result is what Jaynes described as ‘consciousness’ or what one can think of as introspective mental space, an inner world of egoic identity where the individual is separate from community and world. Consciousness and the formalized religions that accompanied it were a replacement for the loss of a world alive with voices.

By the time Rabbinic Judaism, Gnosticism, and Christianity came around, the Axial Age was already being looked back upon as a Golden Age and, other than through a few surviving myths, the Bronze Age before that was barely remembered at all. It would be nearly another 600 years after that first century monotheistic revival when Muhammad would have his visions of the angel Gabriel visiting him to speak on behalf of God. Islam is both post-bicameral and post-axial, to a far greater degree.

Muslims consider Muhammad to be the last prophet, and even he didn’t get to hear God directly, for it had to come through an angel. The voice of God had long ago grown so faint that people had come to rely on oracles, channelings, and such. These rather late revelations by way of Gabriel were but a barely audible echo of the archaic bicameral voices. It is perhaps understandable that, as with some oracles before him, Muhammad would declare God would never speak again. So, Islam, unlike the other monotheistic religions, fully embraces God’s absence from the world.

Actually, that is not quite right. Based on the Koran, God will never speak again until the Final Judgment. Then all will hear God again when he weighs your sins and decides the fate of your immortal soul. Here is the interesting part. The witnesses God shall call upon in each person’s case will be all the bicameral voices brought back out of silence. The animals and plants will witness for or against you, as will the earth and rocks and wind. Even your own resurrected body parts will come alive again with voices to speak of what you did. Body parts speaking is something familiar to those who read Jaynesian scholarship.

Until then, God and all the voices of the world will remain mute witnesses, watching your every move and taking notes. They see all, hear all, notice all — every time you masturbate or pick your nose, every time you have a cruel or impure thought, every time you don’t follow one of the large number of divine commandments, laws, and rules spelled out in the Koran. The entire world is spying upon you and will report back to God, at the end of time. The silent world only appears to be dumb and unconscious. God is biding his time, gathering a file on you like a cosmic FBI.

This could feel paralyzing, but in another way it offers total freedom from self, total freedom through complete submission. Jaynesian consciousness is a heavy load and that was becoming increasingly apparent over time, especially in the centuries following the Axial Age. The zealous idealism of the Axial Age prophets was growing dull and tiresome. By the time that Muhammad showed up, almost two millennia had passed since the bicameral mind descended into darkness. The new consciousness was sold as something amazing, but it hadn’t fully lived up to its promises. Instead, ever more brutal regimes came into power and a sense of anxiety was overtaking society.

Muhammad had an answer and the people of that region were obviously hungry for someone to provide an answer. After forming his large army, his military campaign barely experienced any resistance. And in a short period of time while he was still alive, most of the Arabian peninsula was converted to Islam. The silence of the gods had weakened society, but Muhammad offered an explanation for why the divine could no longer be experienced. He helped normalize what had once felt like a tragedy. He told them that they didn’t need to hear God because God had already revealed all knowledge to the prophets, including himself of course. No one had to worry, just follow orders and comply with commands.

All the tiresome complications of thought were unnecessary. God had already thought out everything for humans. The Koran as the final and complete holy text would entirely and permanently replace the bicameral voices, ever receding into the shadows of the psyche. But don’t worry, all those voices are still there, waiting to speak. The only voice that the individual needed to listen to was that of the person directly above them in the religious hierarchy, be it one’s father or an imam or whoever else held greater official authority, in a line of command that goes back to the prophets and through the angels to God Himself. Everything is in the Koran, and the learned priestly class would explain it all and translate it into proper theocratic governance.

Muhammad came with a different message than anyone before. The Jewish prophets and Jesus, as with many Pagans, would speak of God as Father and humanity as His children. Early Christians took this as a challenge to a slave-based society, borrowing from the Stoics the idea that even a slave was free in his soul. Muhammad, instead, was offering another variety of freedom. We humans, rather than children of God, are slaves of God. The entire Islamic religion is predicated upon divine slavery, absolute submission. This is freedom from the harsh taskmaster of egoic individuality, a wannabe demiurge. Unlike Jesus, Muhammad formulated a totalitarian theocracy, a totalizing system. Nothing is left to question or interpretation — in theory, at least, or rather in belief.

This goes back to how, with the loss of the bicameral mind and social order, something took its place. It was a different kind of authoritarianism — rigid and hierarchical, centralized and concentrated, despotic and violent. Authoritarianism of this variety didn’t emerge until the late Bronze Age when the bicameral societies were becoming too large and complex, overstrained and unstable. Suddenly, as if to presage the coming collapse, there was the appearance of written laws, harsh punishment, and cruel torture — none of which ever existed before, according to historical records and archaeological finds. As the world shifted into post-bicameralism, this authoritarianism became ever more extreme (e.g., Roman Empire).

This was always the other side of the rise of individuality, of Jaynesian consciousness. The greater the potential freedom the individual possesses, the more that oppressive social control is required, as the communal bonds and social norms of the bicameral mind increasingly lost their hold to organically maintain order. Muhammad must have shown up at the precise moment of crisis in this change. After the Roman Empire’s system of slavery, Europe came up with feudalism to re-create some of what had disappeared. But apparently a different kind of solution was required in the Arab world.

Maybe this offsets the draining of psychic energy that comes with consciousness. Jaynes speculated that, like the schizophrenic, bicameral humans had immense energy and stamina, which allowed them to accomplish near-miraculous feats such as building the pyramids with small populations and very little technology or infrastructure. Suppression of the extremes of individualism through emphasizing absolute subordination is maybe a way of keeping in check the energy loss of maintaining egoic consciousness. In the West, we eventually overcame this weakness by using massive doses of stimulants to overpower the otherwise debilitating anxiety and to help shore up the egoic boundaries, but this has come at the cost of destroying our physical and mental health.

Time will tell which strategy is the most effective for long-term survival of specific societies. But I’m not sure I’d bet on the Western system, considering how unsustainable it appears to be and how easily it has become crippled by a minor disease epidemic like covid-19. Muhammad might simply have been trying to cobble together some semblance of a bicameral mind, in the face of divine silence. There is a good reason for trying to do that. Those bicameral societies lasted many millennia longer than has our post-bicameral civilization. It’s not clear that modern civilization or at least Western civilization will last beyond the end of this century. We underestimate the bicameral mind and the importance it played during the single longest period of advancement of civilization.

* * *

Let us leave a small note of a more personal nature. In the previous post (linked above), we mentioned that our line of inquiry began with a conversation we had with a friend of ours who is a Muslim. He also happens to be schizophrenic, i.e., a voice-hearer. The last post was about how voice-hearing is understood within Islam. Since supposedly God no longer speaks to humans, nor do his angelic intermediaries, any voice a Muslim hears is automatically interpreted as not being of divine origin. It doesn’t necessarily make the voice evil, as it could be a jinn, which is a neutral entity in Islamic theology, although jinn can be dangerous. Then again, voice-hearing might also be caused by an evil magician, what I think is called a sihir.

Anyway, we had the opportunity to speak to this friend once again, as we are both in jobs that require us to continue working downtown amidst everything otherwise being locked down because of the covid-19 epidemic. In being isolated from family and other friends, we’ve been meeting with this Muslim friend on a daily basis. Just this morning, we went for a long walk together and chatted about life and religion. He had previously talked about his schizophrenia in passing, apparently unworried by the stigma of it. He is an easy person to talk to, quite direct and open about his thoughts and experiences. I asked him about voice-hearing and he explained that, prior to being medicated, he would continue to hear people speak to him after they were no longer present. And unsurprisingly, the voices were often negative.

Both his imam and his therapist told him to ignore the voices. Maybe that is a standard approach in traditionally monotheistic cultures. As we mentioned in the other post, he is from North Africa, where Arabs are common. But another friend of ours lives in Ghana, in West Africa. In the research of Tanya M. Luhrmann, an anthropologist inspired by Julian Jaynes, the voice-hearing experiences of people in Ghana were compared to those of people in the United States. She found that Ghanaians, with a tradition of voice-hearing (closer to bicameralism?), had a much more positive experience of the voices they heard. Americans, like our Muslim friend, did not tend to hear voices that were kind and helpful. This is probably the expectancy effect.

If you are raised to believe that voices are demonic or their Islamic equivalent of jinn or are from witches and evil magicians, or if you simply have been told voice-hearing means you’re insane, well, it’s not likely to lead to happy results when you do hear voices. I doubt it decreases the rate of voice-hearing, though. In spite of Islamic theology denying that God and angels speak to humans any longer, that isn’t likely to have any effect on voice-hearing itself. So, the repressed bicameral mind keeps throwing out these odd experiences, but in our post-bicameral age we have fewer resources for dealing constructively with those voices. Simply denying and ignoring them probably is less helpful.

That is the ultimate snag. The same voices that once were identified as godly or something similar are now taken as false, unreal, or dangerous. In a sense, God never stopped speaking. One could argue that we all are voice-hearers, but some of us now call the voice of God ‘conscience’ or whatever. Others, like Muslims, put great emphasis on this voice-hearing but have tried to gag a God who goes on talking. Imagine how many potential new prophets have been locked away in psychiatric wards or, much worse, killed or imprisoned as heretics. If God can’t be silenced, the prophets who hear him can be. The Old Testament even describes how the authorities forbade voice-hearing and demanded that voice-hearers be killed, even by their own parents.

The bicameral mind didn’t disappear naturally because it was inferior but because, in its potency, it was deemed dangerous to those who wanted to use brute power to enforce their own voices of authorization. The bicameral mind, once central to the social order, had become enemy number one. If people could talk to God directly, religion and its claims of authority would become irrelevant. That is how our Islamic friend, a devout religious practitioner, ended up being drugged up to get the voices to stop speaking.

Islam as Worship of a Missing God

A friend of ours is a Muslim and grew up in an Islamic country. As he talked about his religion, we realized how different it is from Christianity. There is no shared practice among Christians similar to Muslims praying five times a day. From early on, Christianity was filled with diverse groups and disagreements, and that has only increased over time (there are over 4,600 denominations of Christianity in the United States alone). Our friend had a hard time appreciating that there is no agreed-upon authority, interpretation, or set of beliefs among all Christians.

Unlike Muhammad, Jesus never wrote anything, nor was anything written down about him until much later. Nor did he intend to start a new religion. He offered no rules, social norms, instructions, etc., for how to organize a church, a religious society, or a government. He didn’t even preach family values — if anything, the opposite — from a command to let the dead bury themselves to the proclamation of having come to turn family members against each other. The Gospels offer no practical advice about anything. Much of Jesus’ teachings, beyond a general message of love and compassion, are vague and enigmatic, often parables that have many possible meanings.

Now compare Jesus to the Islamic prophet. Muhammad is considered the last prophet, although he never claimed to have heard the voice of God and instead supposedly received the message secondhand through an angel. Still, according to Muslims, the Koran is the only complete holy text in existence — the final Word of God. That is also something that differs from Christianity. Jesus never asserted that God would become silent to all of humanity for eternity and that his worshippers would be condemned to a world without the God they longed for, in the way Allah never enters His own Creation.

Many Protestants and Anabaptists and those in similar groups believe that God continues to be revealed to people today, that the divine is known through direct experience, that the Bible as a holy text must be read as a personal relationship to God, not merely taken on the authority of blind faith. Some churches go so far as to teach people how to speak to and hear God (T.M. Luhrmann, When God Talks Back). Even within Catholicism, there have been further revelations of God since Jesus, from various mystics and saints acknowledged by the Vatican, but also from ordinary Catholics claiming God spoke to them without any great fear of heresy charges and excommunication.

It made me think about Julian Jaynes’ theory of modern consciousness. With the collapse of the Bronze Age civilizations, there was this sense of the gods having gone silent. Yet this was never an absolute experience, as some people continued to hear the gods. Even into the modern world, people occasionally still claim to hear various gods and sometimes even found new religions based on revelations. The Baha’i, for example, consider Muhammad to be just one more prophet, with others having followed him. Hindus also have a living tradition of divine revelation that is equivalent to that of prophets. Only Islam, as far as I know, claims all prophecy and revelation to be ended for all time.

I was thinking about the sense of loss and loneliness people felt when bicameral societies came to an end. They were thrown onto an increasingly isolated individualism. Religion as we know it was designed to accommodate this, in order to give a sense of order, meaning, and authority that had gone missing. But Islam takes this to an extreme. After Muhammad, no human supposedly would ever again personally hear, see, or experience the divine in any way (excluding mystical traditions like Sufism). For all intents and purposes, Allah has entirely receded from the world. The only sign of his existence that he left behind was a book of instructions. We must submit and comply or be punished in the afterlife, a world separate from this one.

That seems so utterly depressing and dreary to me. I was raised Christian, and at the far other extreme of Protestantism. My family attended the Unity Church, which emphasizes direct experience of God to such a degree that the Bible itself was mostly ignored and almost irrelevant — why turn to mere words on paper when you can go straight to the source? Rather than being denied and condemned, a claim to have heard God speak would have been taken seriously. I’m no longer religious, but the nearly deist idea of a god that is distant and silent seems so alien and unappealing to me. Yet maybe that makes Islam well designed for the modern world, as it offers a strong response to atheism.

If you don’t have any experience of God, this is considered normal and expected in Islam, not something to be worried about, not something to challenge one’s faith as is common in Christianity (NDE: Spirituality vs Religiosity); and it avoids the riskiness and confusion of voice-hearing (Libby Anne, Voices in Your Head: Evangelicals and the Voice of God). One’s ignorance of the divine demonstrates one’s individual inadequacy and, as argued by religious authority, is all the more reason to submit to religious authority. The Islamic relation between God and humanity is one-way, except to some extent by way of inspiration and dreams; Allah himself never directly enters his Creation and so never directly interacts with humans, not even with prophets. Is that why constant prayer is necessary for Muslims, to offset God’s silence and vacancy? Worship of a missing God seems perfectly suited for the modern world.

Muslims are left with looking for traces of God in the Koran like ants crawling around in a footprint while trying to comprehend what made it and what it wants them to do. So, some of the ants claim to be part of a direct lineage of ants that goes back to an original ant that, according to tradition, was stepped upon by what passed by. These well-respected ants then explain to all the other ants what is meant by all the bumps and grooves in the dried mud. In worship, the ants pray toward the footprint and regularly gather to circle around it. This gives their life some sense of meaning and purpose and, besides, it maintains the social order.

That is what is needed in a world where the bicameral voices of archaic authorization no longer speak, no longer are heard. Something has to fill the silence, as the loneliness it creates is unbearable. Islam has a nifty trick: it embraces the emptiness, aggravating the overwhelming anxiety even as it offers the salve for the soul. Muslims take the silence of God as proof of God, as a promise of something more. This otherworldly being, Allah, tells humans who don’t feel at home in this world that their real home is elsewhere, to which they will return if they do what they are told. Other religions do something similar, but Islam takes this to another level — arguably, the highest or most extreme form of monotheism, so far. The loss of the bicameral mind could not be pushed much further, one suspects, without being pushed into an abyss.

Islam is a truly modern religion. Right up there with capitalism and scientism.

* * *

Further discussion about this can be found on the Facebook page “Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind”.

To Empathize is to Understand

What is empathy as a cognitive ability? And what is empathy as an expansion of identity, as part of awareness of self and other?

There is a basic level of empathy that appears to be common across numerous species. A tortoise, on seeing another stuck on its back, will help flip it over. There are examples of animals helping or cooperating with those from an entirely different species. Such behavior has been repeatedly demonstrated in laboratories as well. These involve fairly advanced expressions of empathy. In some cases, one might interpret it as indicating at least rudimentary theory of mind, the understanding that others have their own experience, perspective, and motivations. But obviously human theory of mind can be much more complex.

One explanation of greater empathy has to do with identity. Empathy in a way is simply a matter of what is included within one’s personal experience (Do To Yourself As You Would Do For Others). To extend identity is to extend empathy to another individual or a group (or anything else that can be brought within the sphere of the self). For humans, this can mean learning to include one’s future self, to empathize with experience one has not yet had, the person one has not yet become. The future self is fundamentally no different than another person.

Without cognitive empathy, affective empathy is limited to immediate experience. It’s the ability to feel what another feels. But lacking cognitive empathy, as happens in the most severe autism, theory of mind cannot be developed and so there is no way to identify, locate, and understand that feeling. One can only emotionally react, not being able to differentiate one’s own emotion from that of another. In that case, there would be pure emotion, and yet no recognition of the other. Cognitive empathy is necessary to get beyond affective reactivity, not all that different from the biological reactivity of a slug.

It’s interesting that some species (primates, rats, dolphins, etc.) might have more cognitive empathy and theory of mind than some people at the extreme end of severe autism, this not necessarily being an issue of intelligence. On the other hand, those high functioning on the autistic spectrum, if intervention happens early enough, can be taught theory of mind, although it is challenging for them. This kind of empathy is considered a hallmark of humanity, a defining feature, which is why its absence leads to problems of social behavior for those with autism spectrum disorder.

Someone entirely lacking in theory of mind would be extremely difficult to communicate and interact with beyond the most basic level, as is seen in the severest cases of autism and other extreme developmental conditions. Helen Keller asserts she had no conscious identity, no theory of her own mind or that of others, until she learned language.* Prior to her awakening, she was aggressive and violent in reacting to a world she couldn’t understand, articulate, or think about. That fits in with the speculations of Julian Jaynes. What he calls ‘consciousness’ is the addition of abstract thought by way of metaphorical language, as built upon concrete experience and raw affect. Keller discusses how her experience went from the concreteness of touch to the abstraction of language. In becoming aware of the world, she became aware of herself.

Without normal development of language, the human mind is crippled: “The ‘black silence’ of the deaf, blind and mute is similar in many respects to the situation of acutely autistic children where there are associated difficulties with language and the children seem to lack what has been called ‘a theory of mind’” (Robin Allott, Helen Keller: Language and Consciousness). Even so, there is more to empathy than language, and that might be true as well for some aspects or kinds of cognitive empathy. Language is not the only form of communication.

Rats are a great example for comparison with humans. We think of them as pests, as psychologically inferior. But anyone who has kept rats knows how intelligent and social they are. They are friendlier and more interactive than the typical cat. And research has shown how cognitively advanced they are in learning. Rats do have the typical empathy of concern for others. For example, they won’t hurt another rat in exchange for a reward and, given the choice, would rather go hungry. But it goes beyond that.

It’s also shown that “rats are more likely and quicker to help a drowning rat when they themselves have experienced being drenched, suggesting that they understand how the drowning rat feels” (Kristin Andrews, Rats are us). And “rats who had been shocked themselves were less likely to allow other rats to be shocked, having been through the discomfort themselves.” They can also learn to play hide-and-seek, which necessitates taking on the perspective of others. As Ed Yong asks in The Game That Made Rats Jump for Joy, “In switching roles, for example, are they taking on the perspective of their human partners, showing what researchers call “theory of mind”?”

That is much more than mere affective empathy. This seems to involve active sympathy and genuine emotional understanding, that is to say cognitive empathy and theory of mind. If they are capable of both affective and cognitive empathy, however limited, and if Jaynesian consciousness partly consists of empathy imaginatively extended in space and time, then a case could be made that rats have more going on than simple perceptual awareness and biological reactivity. They are empathically and imaginatively engaging with others in the world around them. Does this mean they are creating and maintaining a mental model of others? Kristin Andrews details the extensive abilities of rats:

“We now know that rats don’t live merely in the present, but are capable of reliving memories of past experiences and mentally planning ahead the navigation route they will later follow. They reciprocally trade different kinds of goods with each other – and understand not only when they owe a favour to another rat, but also that the favour can be paid back in a different currency. When they make a wrong choice, they display something that appears very close to regret. Despite having brains that are much simpler than humans’, there are some learning tasks in which they’ll likely outperform you. Rats can be taught cognitively demanding skills, such as driving a vehicle to reach a desired goal, playing hide-and-seek with a human, and using the appropriate tool to access out-of-reach food.”

To imagine the future for purposes of thinking in advance and planning actions, that is quite advanced cognitive behavior. Julian Jaynes argued that was the purpose of humans developing a new kind of consciousness, as the imagined metaphorical space that is narratized allows for the consideration of alternatives, something he speculates was lacking in humans prior to the Axial Age when behavior supposedly was more formulaic and predetermined according to norms, idioms, etc. Yet rats can navigate a path they’ve never taken before with novel beginning and ending locations, which would require taking into account multiple options. What theoretically makes Jaynesian consciousness unique?

Jaynes argues that it’s the metaphorical inner space that is the special quality that created the conditions for the Axial Age and all that followed from it, the flourishing of complex innovations and inventions, the ever greater extremes of abstraction seen in philosophy, math and science. We have so strongly developed this post-bicameral mind that we barely can imagine anything else. But we know that other societies have very different kinds of mentalities, such as the extended and fluid minds of animistic cultures. What exactly is the difference?

Australian Aborigines hint at something between the two kinds of mind. In some ways, their mnemonic systems represent more complex cognitive ability than we are capable of with our Jaynesian consciousness. Instead of an imagined inner space, the Songlines are vast systems of experience and knowledge, culture and identity overlaid upon immense landscapes. These mappings of externalized cognitive space can be used to guide individuals across distant territories they have never seen before and help them to identify and use the materials (plants, stones, etc.) at a location no one in their tribe has visited for generations. Does this externalized mind have less potential for advanced abilities? Upon Western contact, Aborigines had farming and ranching, kept crop surpluses in granaries, and used water and land management.

It’s not hard to imagine civilization having developed along entirely different lines based on divergent mentalities and worldviews. Our modern egoic consciousness was not an inevitability and it likely is far from offering the most optimal functioning. We might already be hitting a dead end with our present interiorized mind-space. Maybe it’s our lack of empathy in understanding the minds of other humans and other species that is an in-built limitation to the post-bicameral world of Jaynesian consciousness. And so maybe we have much to learn from entirely other perspectives and experiences, even from rats.

* * *

* Helen Keller, from Light in My Darkness:

I had no concepts whatever of nature or mind or death or God. I literally thought with my body. Without a single exception my memories of that time are tactile. . . . But there is not one spark of emotion or rational thought in these distinct yet corporeal memories. I was like an unconscious clod of earth. There was nothing in me except the instinct to eat and drink and sleep. My days were a blank without past, present, or future, without hope or anticipation, without interest or joy. Then suddenly, I knew not how or where or when, my brain felt the impact of another mind, and I awoke to language, to knowledge, to love, to the usual concepts of nature, good, and evil. I was actually lifted from nothingness to human life.

And from The Story of My Life:

As the cool stream gushed over one hand she spelled into the other the word water, first slowly, then rapidly. I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten–-a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that ‘w-a-t-e-r’ meant the wonderful cool something that was flowing over my hand. That living word awakened my soul, gave it light, hope, joy, set it free! There were barriers still, it is true, but barriers that could in time be swept away.

And from The World I Live In:

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith. […]

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me. Thus it was not the sense of touch that brought me knowledge. It was the awakening of my soul that first rendered my senses their value, their cognizance of objects, names, qualities, and properties. Thought made me conscious of love, joy, and all the emotions. I was eager to know, then to understand, afterward to reflect on what I knew and understood, and the blind impetus, which had before driven me hither and thither at the dictates of my sensations, vanished forever. […]

I cannot represent more clearly than any one else the gradual and subtle changes from first impressions to abstract ideas. But I know that my physical ideas, that is, ideas derived from material objects, appear to me first an idea similar to those of touch. Instantly they pass into intellectual meanings. Afterward the meaning finds expression in what is called “inner speech.”  […]

As my experiences broadened and deepened, the indeterminate, poetic feelings of childhood began to fix themselves in definite thoughts. Nature—the world I could touch—was folded and filled with myself. I am inclined to believe those philosophers who declare that we know nothing but our own feelings and ideas. With a little ingenious reasoning one may see in the material world simply a mirror, an image of permanent mental sensations. In either sphere self-knowledge is the condition and the limit of our consciousness. That is why, perhaps, many people know so little about what is beyond their short range of experience. They look within themselves—and find nothing! Therefore they conclude that there is nothing outside themselves, either.

However that may be, I came later to look for an image of my emotions and sensations in others. I had to learn the outward signs of inward feelings. The start of fear, the suppressed, controlled tensity of pain, the beat of happy muscles in others, had to be perceived and compared with my own experiences before I could trace them back to the intangible soul of another. Groping, uncertain, I at last found my identity, and after seeing my thoughts and feelings repeated in others, I gradually constructed my world of men and of God. As I read and study, I find that this is what the rest of the race has done. Man looks within himself and in time finds the measure and the meaning of the universe.

* * *

As an example of how language relates to emotions:

The ‘untranslatable’ emotions you never knew you had
by David Robson

But studying these terms will not just be of scientific interest; Lomas suspects that familiarising ourselves with the words might actually change the way we feel ourselves, by drawing our attention to fleeting sensations we had long ignored.

“In our stream of consciousness – that wash of different sensations feelings and emotions – there’s so much to process that a lot passes us by,” Lomas says. “The feelings we have learned to recognise and label are the ones we notice – but there’s a lot more that we may not be aware of. And so I think if we are given these new words, they can help us articulate whole areas of experience we’ve only dimly noticed.”

As evidence, Lomas points to the work of Lisa Feldman Barrett at Northeastern University, who has shown that our abilities to identify and label our emotions can have far-reaching effects.

Her research was inspired by the observation that certain people use different emotion words interchangeably, while others are highly precise in their descriptions. “Some people use words like anxious, afraid, angry, disgusted to refer to a general affective state of feeling bad,” she explains. “For them, they are synonyms, whereas for other people they are distinctive feelings with distinctive actions associated with them.”

This is called “emotion granularity” and she usually measures this by asking the participants to rate their feelings on each day over the period of a few weeks, before she calculates the variation and nuances within their reports: whether the same old terms always coincide, for instance.

Importantly, she has found that this then determines how well we cope with life. If you are better able to pin down whether you are feeling despair or anxiety, for instance, you might be better able to decide how to remedy those feelings: whether to talk to a friend, or watch a funny film. Or being able to identify your hope in the face of disappointment might help you to look for new solutions to your problem.

In this way, emotion vocabulary is a bit like a directory, allowing you to call up a greater number of strategies to cope with life. Sure enough, people who score highly on emotion granularity are better able to recover more quickly from stress and are less likely to drink alcohol as a way of recovering from bad news. It can even improve your academic success. Marc Brackett at Yale University has found that teaching 10 and 11-year-old children a richer emotional vocabulary improved their end-of-year grades, and promoted better behaviour in the classroom. “The more granular our experience of emotion is, the more capable we are to make sense of our inner lives,” he says.

Both Brackett and Barrett agree that Lomas’s “positive lexicography” could be a good prompt to start identifying the subtler contours of our emotional landscape. “I think it is useful – you can think of the words and the concepts they are associated with as tools for living,” says Barrett. They might even inspire us to try new experiences, or appreciate old ones in a new light.

* * *

And related to all of this is hypocognition, which overlaps with linguistic relativity: language and concepts determine our experience, identity, and sense of reality, constraining, framing, and predetermining what we are even capable of perceiving, thinking about, and expressing:

Hypocognition is a censorship tool that mutes what we can feel
by Kaidi Wu

It is a strange feeling, stumbling upon an experience that we wish we had the apt words to describe, a precise language to capture. When we don’t, we are in a state of hypocognition, which means we lack the linguistic or cognitive representation of a concept to describe ideas or interpret experiences. The term was introduced to behavioural science by the American anthropologist Robert Levy, who in 1973 documented a peculiar observation: Tahitians expressed no grief when they suffered the loss of a loved one. They fell sick. They sensed strangeness. Yet, they could not articulate grief, because they had no concept of grief in the first place. Tahitians, in their reckoning of love and loss, and their wrestling with death and darkness, suffered not from grief but a hypocognition of grief. […]

But the darkest form of hypocognition is one born out of motivated, purposeful intentions. A frequently overlooked part of Levy’s treatise on Tahitians is why they suffered from a hypocognition of grief. As it turns out, Tahitians did have a private inkling of grief. However, the community deliberately kept the public knowledge of the emotion hypocognitive to suppress its expression. Hypocognition was used as a form of social control, a wily tactic to expressly dispel unwanted concepts by never elaborating on them. After all, how can you feel something that doesn’t exist in the first place?

Intentional hypocognition can serve as a powerful means of information control. In 2010, the Chinese rebel writer Han Han told CNN that any of his writings containing the words ‘government’ or ‘communist’ would be censored by the Chinese internet police. Ironically, these censorship efforts also muffled an abundance of praise from pro-leadership blogs. An effusive commendation such as ‘Long live the government!’ would be censored too, for the mere mention of ‘government’.

A closer look reveals the furtive workings of hypocognition. Rather than rebuking negative remarks and rewarding praises, the government blocks access to any related discussion altogether, rendering any conceptual understanding of politically sensitive information impoverished in the public consciousness. ‘They don’t want people discussing events. They simply pretend nothing happened… That’s their goal,’ Han Han said. Regulating what is said is more difficult than ensuring nothing is said. The peril of silence is not a suffocation of ideas. It is to engender a state of blithe apathy in which no idea is formed.

Do To Yourself As You Would Do For Others

“…our impulse control is less based on an order from our executive command center, or frontal cortex, and more correlated with the empathic part of our brain. In other words, when we exercise self-control, we take on the perspective of our future self and empathize with that self’s perspectives, feelings, and motivations.”
~ Alexander Soutschek

Self-control is rooted in self-awareness. Julian Jaynes and Brian McVeigh, in one of their talks, brought up the idea that “mind space” has increased over time: “The more things we think about, the more distinctions we make in our consciousness between A and B, and so on, the more mind-space there is” (Discussions with Julian Jaynes, ed. by Brian J. McVeigh, p. 40). The first expansion was the creation of introspective consciousness itself. Narratization then allowed that consciousness to extend across time, to imagine possibilities, play out scenarios, and consider consequences. Empathy, as we experience it, might be a side effect of this process, as consciousness includes more and more within it, including empathy with our imagined future self. So, think of self-control as being kind to yourself, to your full temporal self, not only your immediate self.

This would relate to the suggestion that humans learn theory of mind, the basis of cognitive empathy, first by observing others and only later apply it to themselves. That is to say, the first expansion of mental space happens as consciousness takes root within our relationships to others. It’s in realizing that there might be inner experience within someone else that we come to claim inner space in our own experience. So, our very ability to understand ourselves is dependent on empathy with others. This was a central purpose of the religions that arose in the Axial Age, the traditions that continue into the modern world* (Tahere Salehi, The Effect of Training Self-Control and Empathy According to Spirituality on Self-Control and Empathy Preschool Female Students in Shiraz City). The prophets that emerged during that era taught love, compassion, and introspection, not only as an otherworldly moral dictum but also as a means of maintaining group coherence and the common good. The breakdown of what Jaynes called the bicameral mind was traumatic, and a new empathic mind was needed to replace it, if only to maintain social order.

Social order has become a self-conscious obsession ever since, as Jaynesian consciousness in its tendency toward rigidity has inherent weaknesses. Social disconnection is a crippling of the mind because the human psyche is inherently social. Imagining our future selves is a relationship with a more expansive sense of self. It’s the same mechanism as relating to any other person. This goes back to Johann Hari’s idea, based on Bruce K. Alexander’s rat park research, that the addict is the ultimate individual. In this context, this ultimate individual lacking self-control is not only disconnected from other people but also disconnected from themselves. Addiction is isolating and isolation promotes addiction. Based on this understanding, I’ve proposed that egoic consciousness is inherently addictive and that post-axial society is dependent on addiction for social control.

But this psychological pattern is seen far beyond addiction. It fits our personal experience of self. When we were severely depressed, we couldn’t imagine or care about the future. That definitely inhibited self-control and led to more impulsive behavior, as we were stuck in a present-oriented psychological survival mode. Then again, the only reason self-control is useful at all is because, during and following the Axial Age, humans increasingly lost the capacity for a communal identity (extended mind/self) that had created the conditions of communal control: the externally perceived commands of archaic authorization through voice-hearing. With that communal identity went communal empathy, something that sounds strange or unappealing to the modern mind. In denying our social nature, we cast the shadow of authoritarianism, an oppressive and often violent enforcement of top-down control.

By the way, this isn’t merely about psychology. Lead toxicity causes higher rates of impulsivity and aggression. That is not personal moral failure but brain damage from poisoning. Sure, teaching brain-damaged kids and adults to have more empathy might help them overcome their disability. But if we are to develop an empathic society, we should learn to have enough empathy not to wantonly harm the brains of others with lead toxicity and other causes of stunted development (malnutrition, stress, ACEs, etc.), just because they are poor or minorities and can’t fight back. Maybe we first need to teach politicians and business leaders basic empathy, in overcoming the present dominance of psychopathic traits, so that they can learn the self-control not to harm others.

The part of the brain involving cognitive empathy and theory of mind is generally involved with selflessness and pro-social behavior. To stick with brain development and neurocognitive functioning, let’s look at diet. Weston A. Price, in studying traditional populations that maintained healthy diets, observed what he called moral health, in that people seemed kinder, more helpful, and happier; they got along well. A strong social fabric and culture of trust are not abstractions but are built into general measures of health, in the case of Price’s work having to do with nutrient-dense animal foods containing fat-soluble vitamins. As the standard American diet has worsened, so has mental health. That is also a reason for hope: if diet contributes to the harm, dietary change could contribute to the healing. In an early study of the ketogenic diet as applied to childhood diabetes, the researchers made a side observation that not only did the diabetes symptoms improve but so did behavior. I’ve theorized about how a high-carb diet might be one of the factors that sustains the addictive and egoic self.

Narrow rigidity of the mind, as seen in the extremes of egoic consciousness, has come to be accepted as a social norm and even a social ideal. It is the social Darwinian worldview that has contributed to the rise of both competitive capitalism and the Dark Triad (psychopathy, narcissism, and Machiavellianism), and unsurprisingly it has led to a society that lacks awareness and appreciation of the harm caused to future generations (Scott Barry Kaufman, The Dark Triad and Impulsivity). Rather than being normalized, maybe this dysfunction should be seen as a sickness, not only a soul sickness but a literal sickness of the body-mind that can be scientifically observed and measured, not to mention medically and socially treated. We need to thin the boundaries of the mind so as to expand our sense of self. Research shows that those with thinner boundaries not only have a greater sense of identification with their future selves but also with their past selves, maintaining a connection to what it felt like to be a child. We need to care for ourselves and others in the way we would protect a child.

* * *

* In their article “Alone and aggressive”, A. William Crescioni and Roy F. Baumeister included the loss of meaning, which may be associated with the loss of empathy, specifically in understanding the meaning of others (e.g., the intention ‘behind’ words, gestures, and actions). Meaning traditionally has been the purview of religion. And I’d suggest that it is no coincidence that the obsession with meaning arose in the Axial Age, right when words were invented for ‘religion’ as a formal institution separate from the rest of society. As Julian Jaynes argues, this was probably in response to the sense of nostalgia and longing that followed the silence of the gods, spirits, and ancestors.

A different kind of social connection had to be taught, but this post-bicameral culture wasn’t and still isn’t as effective in re-creating the strong social bonds of archaic humanity. Periods of moral crisis in fear of societal breakdown have repeated ever since, like a wound that never healed. I’ve previously written about social rejection and aggressive behavior in relation to this (12 Rules for Potential School Shooters), where I explained, regarding school shooters:

Whatever they identify or don’t identify as, many and maybe most school shooters were raised Christian and one wonders if that plays a role in their often expressing a loss of meaning, an existential crisis, etc. Birgit Pfeifer and Ruard R. Ganzevoort focus on the religious-like concerns that obsess so many school shooters and note that many of them had religious backgrounds:

“Traditionally, religion offers answers to existential concerns. Interestingly, school shootings have occurred more frequently in areas with a strong conservative religious population (Arcus 2002). Michael Carneal (Heath High School shooting, 1997, Kentucky) came from a family of devoted members of the Lutheran Church. Mitchell Johnson (Westside Middle School shooting, 1998, Arkansas) sang in the Central Baptist Church youth choir (Newman et al. 2004). Dylan Klebold (Columbine shooting, 1999, Colorado) attended confirmation classes in accordance with Lutheran tradition. However, not all school shooters have a Christian background. Some of them declare themselves atheists…” (The Implicit Religion of School Shootings).

Princeton sociologist Katherine Newman, in studying school shootings, has noted that, “School rampage shootings tend to happen in small, isolated or rural communities. There isn’t a very direct connection between where violence typically happens, especially gun violence in the United States, and where rampage shootings happen” (Common traits of all school shooters in the U.S. since 1970).

It is quite significant that these American mass atrocities are concentrated in “small, isolated or rural communities” that are “frequently in areas with a strong conservative religious population”. That might more precisely indicate who these school shooters are and what they are reacting to. Also, one might note that rural areas in general, and the South specifically, do have high rates of gun-related deaths, although many of them are listed as ‘accidental’. That is to say, most rural shootings involve people who know each other, which is also true of school shootings.

* * *

Brain stimulation reveals crucial role of overcoming self-centeredness in self-control
by Alexander Soutschek, Christian C. Ruff, Tina Strombach, Tobias Kalenscher and Philippe N. Tobler

Empathic Self-Control
by David Shoemaker

People with a high degree of self-control typically enjoy better interpersonal relationships, greater social adjustment, and more happiness than those with a low degree of self-control. They also tend to have a high degree of empathy. Further, those with low self-control also tend to have low empathy. But what possible connection could there be between self-control and empathy, given that how one regulates oneself seems to have no bearing on how one views others. Nevertheless, this paper aims to argue for a very tight relation between self-control and empathy, namely, that empathy is in fact one type of self-control. The argument proceeds by exploring two familiar types of self-control, self-control over actions and attitudes, the objects for which we are also responsible. Call the former volitional self-control and the latter rational self-control. But we also seem to be responsible for—and have a certain type of control and self-control over—a range of perceptual states, namely, those in which we come to see from another person’s perspective how she views her valuable ends and what her emotional responses are to their thwarting or flourishing. This type of empathic self-control is a previously-unexplored feature of our interpersonal lives. In addition, once we see that the type of empathy exercised is also exercised when casting ourselves into the shoes of our future selves, we will realize how intra-personal empathy better enables both volitional and rational self-control.

Science Says When Self-Control Is Hard, Try Empathizing With Your Future Self
by Lindsay Shaffer

Soutschek’s study also reveals what happens when we fail to exercise the empathic part of our brain. When Soutschek interrupted the empathic center of the brain in 43 study volunteers, they were more likely to take a small amount of cash immediately over a larger amount in the future. They were also less inclined to share the money with a partner. Soutschek’s study showed that the more people are stuck inside their own perspective, even just from having the empathic part of their brain disrupted, the more likely they are to behave selfishly and impulsively.

Self-Control Is Just Empathy With Your Future Self
by Ed Yong

This tells us that impulsivity and selfishness are just two halves of the same coin, as are their opposites restraint and empathy. Perhaps this is why people who show dark traits like psychopathy and sadism score low on empathy but high on impulsivity. Perhaps it’s why impulsivity correlates with slips among recovering addicts, while empathy correlates with longer bouts of abstinence. These qualities represent our successes and failures at escaping our own egocentric bubbles, and understanding the lives of others—even when those others wear our own older faces.

New Studies in Self Control: Treat Yourself Like You’d Treat Others
from Peak

A new study recently shifted the focus to a different mechanism of self control. Alexander Soutschek and colleagues from the University of Zurich believe self-control may be related to our ability to evaluate our future wants and needs.

The scientists suggest that this takes place in an area of the brain called the rTPJ, which has long been linked to selflessness and empathy for others. It’s an important part of our ability to “take perspectives” and help us step into the shoes of a friend.

The scientists hypothesized that perhaps the rTPJ treats our “future self” the same way it treats any other person. If it helps us step into our friend’s shoes, maybe we can do the same thing for ourselves. For example, if we’re deciding whether to indulge in another pint of beer at a bar, maybe our ability to hold off is related to our ability to imagine tomorrow morning’s hangover. As science writer Ed Yong explains, “Think of self-control as a kind of temporal selflessness. It’s Present You taking a hit to help out Future You.”

Empathy for Your Future Self
by Reed Rawlings

Further Research on the TPJ

The results of Soutschek’s team were similar to past work on empathy, the future self, and the TPJ. It’s believed a better-connected rTPJ increases the likelihood of prosocial behaviors, which relates to skills of executive function. Individuals who exhibit lower empathy score higher for impulsivity – the opposite of self-control.

Keeping our future selves in mind may even keep our savings in check. In this research, Stanford University researchers tested “future self-continuity”. They wanted to explore how individuals related to their future selves. Participants were asked to identify how they felt about the overlap between their current and future selves, using Venn diagrams for this exercise.

If they saw themselves as separate, they were more likely to choose immediate rewards. A greater overlap increased the likelihood of selecting delayed rewards. In their final study, they assessed individuals from the San Francisco Bay area. The researchers found a correlation between wealth and an overlap between selves.

While the above research is promising, it doesn’t paint a full picture. Empathy seems useful, but making a sacrifice for our future-self requires that we understand the reason behind it. It’s the sacrifice that is especially crucial – positive gains demand negative trade-offs.

That’s where altruism, our willingness to give to others, comes in.

Why Do We Sacrifice?

Research from the University of Zurich examined some of altruism’s driving factors. Their work came up with two correlations. First, the larger your rTPJ, the more likely you are to behave altruistically. Second, concerns of fairness affect how we give.

In this experiment, individuals were more generous if their choice would decrease inequality. When inequality would increase, participants were less likely to give.

This is an understandable human maxim. We have little reason to give to an individual who has more than we do. It feels completely unfair to do so. However, we’re raised to believe that helping those in need is objectively good. Helping ourselves should fall under the same belief.

Empathy and altruism, when focused on our own well-being, are intimately linked. To give selflessly, we need to have a genuine concern for another’s well-being. In this case, the ‘other’ is our future self. Thankfully, with a bit of reflection, each of us can gain a unique insight into our own lives.

Alone and aggressive: Social exclusion impairs self-control and empathy and increases hostile cognition and aggression.
by A. William Crescioni and Roy F. Baumeister
from Bullying, Rejection, and Peer Victimization ed. by Monic J. Harris
pp. 260-271 (full text)

Social Rejection and Emotional Numbing

Initial studies provided solid evidence for a causal relationship between rejection and aggression. The mechanism driving this relationship remained unclear, however. Emotional distress was perhaps the most plausible mediator. Anxiety has been shown to play a role in both social rejection (Baumeister & Tice, 1990) and ostracism (Williams et al., 2000). Emotional distress, however, was not present in these experiments by Twenge et al. (2001). Only one significant mood effect was found, and even this effect deviated from expectations. The sole difference in mood between rejected and accepted participants was a slight decrease in positive affect. Rejected participants did not show any increase in negative affect; rather, they showed a flattening of affect, in particular a decrease in positive affect. This mood difference did not constitute a mediator of the link between rejection and aggression. It did, however, point toward a new line of thinking. It was possible that rejection would lead to emotional numbing rather than causing emotional distress. The flattening of affect seen in the previous set of studies would be consistent with a state of cognitive deconstruction. This state is characterized by an absence of emotion, an altered sense of time, a fixation on the present, a lack of meaningful thought, and a general sense of lethargy (Baumeister, 1990). […]

Rejection and Self-Regulation

Although the emotional numbness and decrease in empathy experienced by rejected individuals play an important role in the link between social rejection and aggression, these effects do not constitute a complete explanation of why rejection leads to aggression. The diminished prosocial motivations experienced by those lacking in empathy can open the door to aggressive behavior, but having less of a desire to do good and having more of a desire to do harm are not necessarily equivalent. A loss of empathy, paired with the numbing effects of rejection, could lead individuals to shy away from those who had rejected them rather than lashing out. Emotional numbness, however, is not the only consequence of social rejection.

In addition to its emotional consequences, social rejection has adverse effects on a variety of cognitive abilities. Social rejection has been shown to decrease intelligent thought (Baumeister, Twenge, & Nuss, 2002) and meaningful thought (Twenge et al., 2002). But another category of cognitive response is self-regulation. Studies have demonstrated that self-regulation depends upon a finite resource and that acts of self-regulation can impair subsequent attempts to exercise self-control (Baumeister, Bratslavsky, Muraven, & Tice, 1998). Self-regulation has been shown to be an important tool for controlling aggressive impulses. Stucke and Baumeister (2006) found that targets whose ability to self-regulate had been depleted were more likely to respond aggressively to insulting provocation. DeWall, Baumeister, Stillman, and Galliot (2007) found that diminished self-regulatory resources led to an increase in aggression only in response to provocation; unprovoked participants showed no increase in aggressive behavior. Recall that in earlier work (Twenge et al., 2002) rejected individuals became more aggressive only when the target of their aggression was perceived as having insulted or provoked them. This aggression could have been the result of the diminished ability of rejected participants to regulate their aggressive urges. […]

These results clearly demonstrate that social rejection has a detrimental effect on self-regulation, but they do not explain why this is so and, indeed, the decrement in self-regulation would appear to be counterproductive for rejected individuals. Gaining social acceptance often involves regulating impulses in order to create positive impressions on others (Vohs, Baumeister, & Ciarocco, 2005). Rejected individuals should therefore show an increase in self-regulatory effort if they wish to create new connections or prevent further rejection. The observed drop in self-regulation therefore seems maladaptive. The explanation for this finding lies in rejection’s effect on self-awareness.

Self-awareness is an important prerequisite of conscious self-control (Carver & Scheier, 1981). Twenge et al. (2002) found that, when given the option, participants who had experienced rejection earlier in the study were more likely to sit facing away from rather than toward a mirror. Having participants face a mirror is a common technique for inducing self-awareness (Carver & Scheier, 1981), so participants’ unwillingness to do so following rejection provides evidence of a desire to avoid self-awareness. A drop in self-awareness is part of the suite of effects that comprises a state of cognitive deconstruction. Just as emotional numbness protects rejected individuals from the emotional distress of rejection, a drop in self-awareness would shield against awareness of personal flaws and shortcomings that could have led to that rejection. The benefit of this self-ignorance is that further distress over one’s inadequacies is mitigated. Unfortunately, this protection carries the cost of decreased self-regulation. Because self-regulation is important for positive self-presentation (Vohs et al., 2005), this drop in self-awareness could ironically lead to further rejection. […]

These data suggest that social rejection does not decrease the absolute ability of victims to self-regulate but rather decreases their willingness to exert the effort necessary to do so. Increased lethargy, another aspect of cognitive deconstruction, is consistent with this decrease in self-regulatory effort. Twenge et al. (2002) found that social rejection led participants to give shorter and less detailed explanations of proverbs. Because fully explaining the proverbs would require an effortful response, this shortening and simplification of responses is evidence of increased lethargy amongst rejected participants. This lethargy is not binding, however. When given sufficient incentive, rejected participants were able to match the self-regulatory performance of participants in other conditions. Inducing self-awareness also allowed rejected individuals to self-regulate as effectively as other participants. In the absence of such stimulation, however, rejected individuals showed a decrement in self-regulatory ability that constitutes an important contribution to explaining the link between rejection and aggression. […]

Rejection and Meaningfulness

Twenge et al. (2002) found that social rejection led to a decrease in meaningful thought among participants, as well as an increased likelihood to endorse the statement, “Life is meaningless.” Williams (2002) has also suggested that social rejection ought to be associated with a perception of decreased meaning in life. Given the fundamental nature of the need to belong, it makes sense that defining life as meaningful would be at least in part contingent on the fulfillment of social needs. A recent line of work has looked explicitly at the effect of social rejection on the perception of meaning in life. Perceiving meaning in life has been shown to have an inverse relationship with hostility, aggression, and antisocial attitude (Mascaro, Morey, & Rosen, 2004). As such, any decrease in meaning associated with social rejection would constitute an important feature of the explanation of the aggressive behavior of rejected individuals.

The God of the Left Hemisphere:
Blake, Bolte Taylor and the Myth of Creation
by Roderick Tweedy

The left hemisphere is competitive… the will to power…is the agenda of the left hemisphere. It arose not to communicate with the world but to manipulate it. This inability to communicate or co-operate poses great difficulties for any project of reintegration or union. Its tendency would be to feed off the right hemisphere, to simply use and gain power over it too. Left hemisphere superiority is based, not on a leap forward by the left hemisphere, but on a ‘deliberate’ handicapping of the right. There is perhaps as much chance of persuading the head of a multinational to stop pursuing an agenda of self-interest and ruthless manipulation as there is of persuading the Urizenic program of the brain which controls him of “resubmitting” itself to the right hemisphere’s values and awareness.

The story of the Western world being one of increasing left-hemispheric domination, we would not expect insight to be the key note. Instead we would expect a sort of insouciant optimism, the sleepwalker whistling a happy tune as he ambles towards the abyss.

The left, rational, brain, it might be safe to conclude, has no idea how serious the problem is, that is to say, how psychopathic it has become. Of course, it doesn’t care that it doesn’t care. “The idiot Reasoner laughs at the Man of Imagination/And from laughter proceeds to murder by undervaluing calumny”, noted Blake in a comment that is only remarkable for the fact that it has taken two hundred years to understand.

The apparently “conscious” rational self, the driving program and personality of the left brain, turns out to be deeply unconscious, a pathological sleepwalker blithely poisoning its own environment whilst tenaciously clinging onto the delusion of its own rightness. This unfortunate mixture, of arrogance and ignorance, defines contemporary psychology. The left hemisphere not only cannot see that there is a problem, it cannot see that it is itself the problem.

Battle of Voices of Authorization in the World and in Ourselves

New Feelings: Podcast Passivity
by Suzannah Showler

My concern is that on some level, I’m prone to mistake any voice that pours so convincingly into my brain for my own. And maybe it’s not even a mistake, per se, so much as a calculated strategy on the part of my ego to maintain its primacy, targeting and claiming any foreign object that would stray so far into the inner-sanctum of my consciousness. Whether the medium is insidious, my mind a greedy assimilation machine, or both, it seems that at least some of the time, podcasts don’t just drown out my inner-monologue — they actually overwrite it. When I listen to a podcast, I think some part of me believes I’m only hearing myself think.

Twentieth-century critics worried about this, too. Writing sometime around the late 1930s, Theodor Adorno theorized that a solitary listener under the influence of radio is vulnerable to persuasion by an anonymous authority. He writes: “The deeper this [radio] voice is involved within his own privacy, the more it appears to pour out of the cells of his more intimate life; the more he gets the impression that his own cupboard, his own photography, his own bedroom speaks to him in a personal way, devoid of the intermediary stage of the printed words; the more perfectly he is ready to accept wholesale whatever he hears. It is just this privacy which fosters the authority of the radio voice and helps to hide it by making it no longer appear to come from outside.”

I’ll admit that I have occasionally been gripped by false memories as a result of podcasts — been briefly sure that I’d seen a TV show I’d never watched, or convinced that it was a friend, not a professional producer, who told me some great anecdote. But on the whole, my concern is less that I am being brainwashed and more that I’m indulging in something deeply avoidant: filling my head with ideas without actually having to do the messy, repetitive, boring, or anxious work of making meaning for myself. It’s like downloading a prefabbed stream of consciousness and then insisting it’s DIY. The effect is twofold: a podcast distracts me from the tedium of being alone with myself, while also convincingly building a rich, highly-produced version of my inner life. Of course that’s addictive — it’s one of the most effective answers to loneliness and self-importance I can imagine.

Being Your Selves: Identity R&D on alt Twitter
by Aaron Z. Lewis

Digital masks are making the static and immortal soul of the Renaissance seem increasingly out of touch. In an environment of info overload, it’s easy to lose track of where “my” ideas come from. My brain is filled with free-floating thoughts that are totally untethered from the humans who came up with them. I speak and think in memes — a language that’s more like the anonymous manuscript culture of medieval times than the individualist Renaissance era. Everything is a remix, including our identities. We wear our brains outside of our skulls and our nerves outside our skin. We walk around with other people’s voices in our heads. The self is in the network rather than a node.

The ability to play multiple characters online means that the project of crafting your identity now extends far beyond your physical body. In his later years, McLuhan predicted that this newfound ability would lead to a society-wide identity crisis:

The instant nature of electric-information movement is decentralizing — rather than enlarging — the family of man into a new state of multitudinous tribal existences. Particularly in countries where literate values are deeply institutionalized, this is a highly traumatic process, since the clash of old segmented visual culture and the new integral electronic culture creates a crisis of identity, a vacuum of the self, which generates tremendous violence — violence that is simply an identity quest, private or corporate, social or commercial.

As I survey the cultural landscape of 2020, it seems that McLuhan’s predictions have unfortunately come true. More than ever before, people are exposed to a daily onslaught of world views and belief systems that threaten their identities. Social media has become the battlefield for a modern-day Hobbesian war of all-against-all. And this conflict has leaked into the allegedly “offline” world.

“Individuation is not the culmination of the person; it is the end of the person.”

Julian Jaynes and the Jaynesian scholars have made a compelling argument about where egoic consciousness originated and how it formed. But in all the Jaynesian literature, I don’t recall anyone suggesting how to undo egoic consciousness, much less suggesting we should attempt annihilation of the demiurgic ego.

That latter project is what preoccupied Carl Jung, and it is what Peter Kingsley has often written about. They suggest it is not only possible but inevitable. In a sense, the ego is already dead and we are already in the underworld. We are corpses and our only task is to grieve.

The Cry of Merlin: Carl Jung and the Insanity of Reason
Gregory Shaw on Peter Kingsley

Kingsley explains that Jung emulated these magicians, and his journey through the Underworld followed the path of Pythagoras, Parmenides and Empedocles. Jung translated the terminology of the ancients into “scientific” terms, calling the initiation he realized in the abyss “individuation.” For Jungians today, individuation is the culmination of psychic development, as if it were our collective birthright. Yet Kingsley points out that this notion of individuation is a domestication, commodification, and utter distortion of what Jung experienced. Individuation is not the culmination of the person; it is the end of the person. It is the agonizing struggle of becoming a god and a person simultaneously, of living in contradictory worlds, eternity and time.

Kingsley reveals that although individuation is the quintessential myth of Jung’s psychology, it is almost never experienced because no one can bear it. Individuation is the surrendering of the personal to the impersonal, and precisely what Jung experienced it to be, the death of his personality. Jung explains that individuation is a total mystery; the mystery of the Grail that holds the essence of God. According to Henry Corbin, Jung saw “true individuation as becoming God or God’s secret.” Put simply, individuation is deification. To his credit, over twenty years ago Richard Noll argued this point and wrote that Jung experienced deification in the form of the lion-headed Mithras (Leontocephalus), but Kingsley gives the context for deification that Noll does not, and the context is crucial. He shows that Jung’s deification was not an “ego trip” that gave rise to “a religious cult with [Jung] as the totem,” Noll’s assumption; nor was it a “colossal narcissism,” as Ernest Jones suggested, but precisely the opposite. Individuation cuts to the very core of self-consciousness; it is the annihilation of the ego, not its inflation. […]

What is fundamentally important about Catafalque is that Kingsley demonstrates convincingly that Jung recovered the shamanic path exemplified by Pythagoras, Parmenides, and Socrates. Jung tried to save us from the “insanity of reason” by descending to the underworld, serving the archetypes, and disavowing the impiety of “the Greeks” who reduce the sacred to rationalizations. There is much in Catafalque I have not addressed; perhaps the most important is Kingsley’s discussion of the Hebrew prophets who raged against a godless world. Kingsley here appropriately includes Allen Ginsberg’s Howl, which draws from the rhythms of these prophets to wail against the “insanity of America,” its mechanized thinking, suffocating architecture, and the robotic efficiency that is the child of Reason. This almost verbatim mirrors the words of Jung who, after visiting New York, says “suppose an age when the machine gets on top of us …. After a while, when we have invested all our energy in rational forms, they will strangle us… They are the dragons now, they became a sort of nightmare.”

Kingsley ends Catafalque with depressing prophecies about the end of western civilization, both from Jung and from Kingsley himself. The great wave that was our civilization has spent itself. We are in the undertow now, and we don’t even realize it. To read these chapters is to feel as if one is already a corpse. And Kingsley presents this so bluntly, with so much conviction, it is, frankly, disturbing. And even though Kingsley writes that “Quite literally, our western world has come to an end,” I don’t quite believe him. When speaking about Jung giving psychological advice, Kingsley says “make sure you have enough mētis or alertness not to believe him,” and I don’t believe Kingsley’s final message either. Kingsley’s message of doom is both true and false. The entire book has been telling us that we are already dead, that we are already in the underworld, but, of course, we just don’t understand it. So, then he offers us a very physical and literal picture of our end, laced with nuclear fallout and images of contamination. And he forthrightly says the purpose of his work is “to provide a catafalque for the western world.” It is, he says, time to grieve, and I think he is right. We need to grieve for the emptiness of our world, for our dead souls, our empty lives, but this grief is also the only medicine that can revive the collective corpse that we have become. Kingsley is doing his best to show us, without any false hope, the decaying corpse that we are. It is only through our unwavering acceptance, grieving and weeping for this, that we can be healed. In Jung’s terms, only the death of the personal can allow for birth into the impersonal. Into what…? We cannot know. We never will. It is not for our insatiable minds.

The Link Between Individualism and Collectivism

Individualism and collectivism. Autonomy and authoritarianism. These are opposites, right? Maybe not.

Julian Jaynes argued that humans, in the earliest small city-states, lived in a state he called the bicameral mind. It was a shared sense of identity where ‘thoughts’ were more publicly experienced as voices that were culturally inherited across generations. He observed that the rise of egoic consciousness as the isolated and independent self was simultaneous with a shift in culture and social order.

What was seen was a new kind of authoritarianism: much more brutally oppressive, much more centralized, hierarchical, and systematic. As the communal societies of the bicameral mind entered their end phase heading toward the collapse of the Bronze Age, there was the emergence of written laws, court systems, and standing armies. Criminals, enemy soldiers, and captives were treated much more harshly, with mass killings on a scale never before seen. Social order was no longer an organic community but required top-down enforcement.

One piece of evidence for this new mentality was the sudden appearance of pornographic imagery. For thousands of years, humans had created art, but never art overtly sexual in nature. Then humans apparently became self-conscious of sexuality and also became obsessed with it. This was also a time when written laws and norms about sexuality became common. With sexual prurience came demands of sexual purity.

Repression was the other side of rigid egoic consciousness, as to maintain social control the new individualized self had to be controlled by society. The organic sense of communal identity could no longer be taken for granted and relied upon. The individual was cut off from the moral force of voice-hearing and so moral transgression as sin became an issue. This was the ‘Fall of Man’.

What is at stake is not merely an understanding of the past. We are defined by this past, for it lives on within us. We are the heirs of millennia of psycho-cultural transformation. But our historical amnesia and our splintered consciousness leave us adrift amid forces that we don’t understand or recognize. We are confused why, as we move toward greater individualism, we feel anxious about the looming threat of ever worse authoritarianism. There is a link between the two that is built into Jaynesian consciousness. But this is not fatalism, as if we are doomed to be ripped apart by diametric forces.

If we accept our situation and face the dilemma, we might be able to seek a point of balance. This is seen in Scandinavian countries, where it is precisely a strong collective identity, a culture of trust, and social democracy, even some democratic socialism, that make possible a more stable and less fearful sense of genuine individuality (Anu Partanen, The Nordic Theory of Everything, and her “Nordic theory of love”). What is counter-intuitive to the American sensibility — or rather American madness — is that this doesn’t require greater legal regulation; for example, there is less red tape in starting a business in Scandinavia than in the United States.

A book worth reading is Timothy Carney’s Alienated America. The author comes from the political right, but he is not a radical right-winger. His emphasis is on social conservatism, although the points he is making are dependent on the liberal viewpoint of social science. Look past some of the conservative biases of interpretation and there is much here that liberals, progressives, and even left-wingers could agree with.

He falls into the anti-government rhetoric of pseudo-libertarianism, which blinds him to how Scandinavian countries can have big governments that rely more on a culture of trust, rather than regulations, to enforce social norms. What Scandinavians would likely find odd is this American right-wing belief that government is separate from society, even when the existence of society isn’t outright denied, as Margaret Thatcher did.

It’s because of this confusion that his other insights are all the more impressive. He is struggling against his own ideological chains. It shows how, even as the rhetoric maintains power over the mind, certain truths are beginning to shine through the weakening points of ideological fracture.

Even so, he ultimately fails to escape the gravity of right-wing ideological realism, coming to the opposite conclusion of Anu Partanen, who understands that it is precisely the individual’s relationship to the state that allows for individual freedom. Carney, instead, wants to throw out both ‘collectivism’ and ‘hyper-individualism’. He expresses the still potent longing for the bicameral mind and its archaic authorization to compel social order.

What he misses is that this longing itself is part of the post-bicameral trap of Jaynesian consciousness, as the more one seeks to escape the dynamic the more tightly wound one becomes within its vice grip. It is only in holding lightly one’s place within the dynamic that one can steer a pathway through the narrow gap between the distorted extremes of false polarization and forced choice. This is exaggerated specifically by high inequality, not only of wealth but more importantly of resources and opportunities, power and privilege.

High inequality is correlated with mental illness, conflict, aggressive behavior, status anxiety, social breakdown, loss of social trust, political corruption, crony capitalism, etc. Collectivism and individualism may only express as authoritarianism and hyper-individualism under high inequality conditions. For some reason, many conservatives and right-wingers not only seem blind to the harm of inequality but, if anything, embrace it as a moral good expressing a social Darwinian vision of capitalist realism that must not be questioned.

Carney points to the better social and economic outcomes of Scandinavian countries. But he can’t quite comprehend why such a collectivist society doesn’t have the problems he ascribes to collectivism. He comes so close to such an important truth, only to veer back again into the safety of right-wing ideology. Still, just the fact that, as a social conservative concerned for the public good, he feels morally compelled to acknowledge the kinds of things left-wingers have been talking about for generations shows that maybe we are finally coming to a point of reckoning.

Also, it is more than relevant that this is treading into the territory of Jaynesian thought, although the author has no clue how deep and dark are the woods once he leaves the well-beaten path. Even the briefest of forays shows how much has been left unexplored.

* * *

Alienated America:
Why Some Places Thrive While Others Collapse
by Timothy P. Carney

Two Sides of the Same Coin

“Collectivism and atomism are not opposite ends of the political spectrum,” Yuval Levin wrote in Fractured Republic, “but rather two sides of one coin. They are closely related tendencies, and they often coexist and reinforce one another—each making the other possible.” 32

“The Life of Julia” is clearly a story of atomization, but it is one made possible by the story of centralization: The growth of the central state in this story makes irrelevant—and actually difficult—the existence of any other organizations. Julia doesn’t need to belong to anything because central government, “the one thing we all belong to” (the Democratic Party’s mantra in that election), 33 took care of her needs.

This is the tendency of a large central state: When you strengthen the vertical bonds between the state and the individual, you tend to weaken the horizontal bonds between individuals. What’s left is a whole that by some measures is more cohesive, but individuals who are individually all less connected to one another.

Tocqueville foresaw this, thanks to the egalitarianism built into our democracy: “As in centuries of equality no one is obliged to lend his force to those like him and no one has the right to expect great support from those like him, each is at once independent and weak.

“His independence fills him with confidence and pride among his equals, and his debility makes him feel, from time to time, the need of the outside help that he cannot expect from any of them, since they are all impotent and cold.”

Tocqueville concludes, “In this extremity he naturally turns his regard to the immense being that rises alone in the midst of universal debasement.” 34

The centralizing state is the first step in this. The atomized individual is the end result: There’s a government agency to feed the hungry. Why should I do that? A progressive social philosophy, aimed at liberating individuals by means of a central state that provides their basic needs, can actually lead to a hyper-individualism.

According to some lines of thought, if you tell a man he has an individual duty to his actual neighbor, you are enslaving that man. It’s better, this viewpoint holds, to have the state carry out our collective duty to all men, and so no individual has to call on any other individual for what he needs. You’re freed of both debt to your neighbor (the state is taking care of it) and need (the state is taking care of it).

When Bernie Sanders says he doesn’t believe in charity, and his partymates say “government is the name for the things we do together,” the latter can sound almost like an aspiration — that the common things, and our duties to others, ought to be subsumed into government. The impersonality is part of the appeal, because everyone alike is receiving aid from the nameless bureaucrats and is thus spared the indignity of asking or relying on neighbors or colleagues or coparishioners for help.

And when we see the state crowding out charity and pushing religious organizations back into the corner, it’s easy to see how a more ambitious state leaves little oxygen for the middle institutions, thus suffocating everything between the state and the individual.

In these ways, collectivism begets atomization.

Christopher Lasch, the leftist philosopher, put it in the terms of narcissism. Paternalism, and the transfer of responsibility from the individual to a bureaucracy of experts, fosters a narcissism among individuals, Lasch argued. 35 Children are inherently narcissistic, and a society that deprives adults of responsibility will keep them more childlike, and thus more self-obsessed.

It’s also true that hyper-individualism begets collectivism. Hyper-individualism doesn’t work as a way of life. Man is a political animal and is meant for society. He needs durable bonds to others, such as those formed in institutions like a parish, a sports club, or a school community. Families need these bonds to other families as well, regardless of what Pa in Little House on the Prairie seemed to think at times.

The little platoons of community provide role models, advice, and a safety net, and everyone needs these things. An individual who doesn’t join these organizations soon finds himself deeply in need. The more people in need who aren’t cared for by their community, the more demand there is for a large central state to provide the safety net, the guidance, and the hand-holding.

Social scientists have repeatedly come across a finding along these lines. “[G]overnment regulation is strongly negatively correlated with measures of trust,” four economists wrote in MIT’s Quarterly Journal of Economics. The study relied on an international survey in which people were asked, “Generally speaking, would you say that most people can be trusted or that you need to be very careful in dealing with people?” The authors also looked at answers to the question “Do you have a lot of confidence, quite a lot of confidence, not very much confidence, no confidence at all in the following: Major companies? Civil servants?”

They found, among other examples:

High-trusting countries such as Nordic and Anglo-Saxon countries impose very few controls on opening a business, whereas low-trusting countries, typically Mediterranean, Latin-American, and African countries, impose heavy regulations. 36

The causality here goes both ways. In less trusting societies, people demand more regulation, and in more regulated societies, people trust each other less. This is the analogy of the Industrial Revolution’s vicious circle between Big Business and Big Labor: The less trust in humanity there is, the more rules crop up. And the more rules, the less people treat one another like humans, and so on.

Centralization of the state weakens the ties between individuals, leaving individuals more isolated, and that isolation yields more centralization.

The MIT paper, using economist-speak, concludes there are “two equilibria” here. That is, a society is headed toward a state of either total regulation and low trust, or low regulation and high trust. While both destinations might fit the definition of equilibrium, the one where regulation replaces interpersonal trust is not a fitting environment for human happiness.

On a deeper level, without a community that exists on a human level—somewhere where everyone knows your name, to borrow a phrase—a human can’t be fully human. To bring back the language of Aristotle for a moment, we actualize our potential only inside a human-scaled community.

And if you want to know what happens to individuals left without a community in which to live most fully as human, where men and women are abandoned, left without small communities in which to flourish, we should visit Trump Country.

Jaynesian Linguistic Relativity

  • “All of these concrete metaphors increase enormously our powers of perception of the world about us and our understanding of it, and literally create new objects. Indeed, language is an organ of perception, not simply a means of communication.”
  • “The lexicon of language, then, is a finite set of terms that by metaphor is able to stretch out over an infinite set of circumstances, even to creating new circumstances thereby.”
  • “The bicameral mind with its controlling gods was evolved as a final stage of the evolution of language. And in this development lies the origin of civilization.”
  • “For if consciousness is based on language, then it follows that it is of much more recent origin than has been heretofore supposed. Consciousness comes after language! The implications of such a position are extremely serious.”
  • “But there’s no doubt about it, Whorfian hypothesis is true for some of the more abstract concepts we have. Certainly, in that sense, I would certainly be a Whorfian. But I don’t think Whorf went far enough.”
    ~Julian Jaynes

Julian Jaynes, in The Origin of Consciousness in the Breakdown of the Bicameral Mind, makes statements that clearly express a view of linguistic relativity, also known as the Sapir-Whorf hypothesis or Whorfian hypothesis, whether or not he held the related strong form, linguistic determinism (though the above quotes do indicate the strong form). Edward Sapir and Benjamin Lee Whorf, by the way, weren’t necessarily arguing for the determinism that was later ascribed to them, or at least to Whorf (Straw Men in the Linguistic Imaginary). Yet Jaynes’ writings never directly refer to this other field of study or the main thinkers involved, even though it is one of the fields closest to his own hypothesis on language and metaphor in relation to perception, cognition, and behavior. It’s also rare to see this connection come up in the writings of any Jaynesian scholars. There apparently isn’t even a single mention, even in passing, in the discussion forum at the official site of the Julian Jaynes Society (no search results were found for: Edward Sapir, Benjamin Lee Whorf, Sapir-Whorf, Whorfian, Whorfianism, linguistic relativity, linguistic relativism, or linguistic determinism), although I found a few writings elsewhere that touch upon this area of overlap (see end of post). Besides myself, someone finally linked to an article about linguistic relativity in the Facebook group dedicated to his book (also see below).

Limiting ourselves to published work, the one and only significant exception I’ve found is a passing mention from Brian J. McVeigh in his book The “Other” Psychology of Julian Jaynes: “Also, since no simple causal relation between language and interiorized mentation exists, an examination of how a lexicon shapes psychology is not necessarily a Sapir-Whorfian application of linguistic theory.” But since Sapir and Whorf didn’t claim a simple causal relation, this leads me to suspect that McVeigh isn’t overly familiar with their scholarship or widely read in the more recent research. But if I’m misunderstanding him and he has written more fully elsewhere about this, I’d love to read it (I own some of his books, enjoy and highly respect his work, and might consider him the leading Jaynesian scholar). When I brought this up in a Julian Jaynes Facebook group, Paul Otteson responded that “my take on linguistic relativism and determinism is that they are obvious.” But obviously, it isn’t obvious to many others, including some Jaynesian scholars who are academic experts on linguistic analysis of texts and culture, as is the case with McVeigh. “For many of us,” Jeremy Lent wrote in The Patterning Instinct, “the idea that the language we speak affects how we think might seem self-evident, hardly requiring a great deal of scientific proof. However, for decades, the orthodoxy of academia has held categorically that the language a person speaks has no effect on the way they think. To suggest otherwise could land a linguist in such trouble that she risked her career. How did mainstream academic thinking get itself in such a straitjacket?” (quoted in Straw Men in the Linguistic Imaginary).

Jaynes focused heavily on how metaphors shape an experience of interiorized and narratized space, i.e., a specific way of perceiving space and time in relation to identity. More than relevant is the fact that, in linguistic relativity research, how language shapes spatial and temporal perception has also been a key area of study. Linguistic relativity has gained compelling evidence in recent decades. And several great books have been written exploring and summarizing the evidence: Vyvyan Evans’s The Language Myth, Guy Deutscher’s Through the Language Glass, Benjamin K. Bergen’s Louder Than Words, Aneta Pavlenko’s The Bilingual Mind, Jeremy Lent’s The Patterning Instinct, Caleb Everett’s Linguistic Relativity and Numbers and the Making of Us (maybe include Daniel L. Everett’s Dark Matter of the Mind, Language: The Cultural Tool, and How Language Began). This would be a fruitful area for Jaynesian thought, not to mention it would help it break out into wider scholarly interest. The near silence is surprising because of the natural affinity between the two groups of thinkers. (Maybe I’m missing something. Does anyone know of a Jaynesian scholar exploring linguistic relativity, a linguistic relativity scholar studying Jaynesianism, or any similar crossover?)

What makes it odd to me is that Jaynes was clearly influenced by linguistic relativity, if not directly then indirectly. Franz Boas’ theories on language and culture shaped the linguistic relativists along with the thinkers read by Jaynes, specifically Ruth Benedict. Jaynes was caught up in a web of influences that brought him into the sphere of linguistic relativity and related anthropological thought, along with philology, much of it going back to Boas: “Julian Jaynes had written about the comparison of shame and guilt cultures. He was influenced in this by E. R. Dodds (and Bruno Snell). Dodds in turn based some of his own thinking about the Greeks on the work of Ruth Benedict, who originated the shame and guilt culture comparison in her writings on Japan and the United States. Benedict, like Margaret Mead, had been taught by Franz Boas. Boas developed some of the early anthropological thinking that saw societies as distinct cultures” (My Preoccupied Mind: Blogging and Research).

Among these thinkers, there is an interesting Jungian influence as well: “Boas founded a school of thought about the primacy of culture, the first major challenge to race realism and eugenics. He gave the anthropology field new direction and inspired a generation of anthropologists. This was the same era during which Jung was formulating his own views. As with Jung before him, Jaynes drew upon the work of anthropologists. Both also influenced anthropologists, but Jung’s influence of course came earlier. Even though some of these early anthropologists were wary of Jungian ideas, such as archetypes and the collective unconscious, they saw personality typology as a revolutionary framework (those influenced also included the likes of Edward Sapir and Benjamin Lee Whorf, both in the intellectual lineage of Boas, who maybe was the source of introducing linguistic relativity into American thought). Through personality types, it was possible to begin understanding what fundamentally made one mind different from another, a necessary factor in distinguishing one culture from another” (The Psychology and Anthropology of Consciousness). The following is from Jung and the Making of Modern Psychology, by Sonu Shamdasani (Kindle Locations 4706-4718):

“The impact of Jung’s typology on Ruth Benedict may be found in her concept of Apollonian and Dionysian culture patterns which she first put forward in 1928 in “Psychological Types in the cultures of the Southwest,” and subsequently elaborated in Patterns of Culture. Mead recalled that their conversations on this topic had in part been shaped by Sapir and Goldenweiser’s discussion of Jung’s typology in Toronto in 1924 as well as by Seligman’s article cited above (1959, 207). In Patterns of Culture, Benedict discussed Wilhelm Worringer’s typification of empathy and abstraction, Oswald Spengler’s of the Apollonian and the Faustian and Friedrich Nietzsche’s of the Apollonian and the Dionysian. Conspicuously, she failed to cite Jung explicitly, though while criticizing Spengler, she noted that “It is quite as convincing to characterize our cultural type as thoroughly extravert … as it is to characterize it as Faustian” (1934, 54-55). One gets the impression that Benedict was attempting to distance herself from Jung, despite drawing some inspiration from his Psychological Types.

“In her autobiography, Mead recalls that in the period that led up to her Sex and Temperament, she had a great deal of discussion with Gregory Bateson concerning the possibility that aside from sex difference, there were other types of innate differences which “cut across sex lines” (1973, 216). She stated that: “In my own thinking I drew on the work of Jung, especially his fourfold scheme for grouping human beings as psychological types, each related to the others in a complementary way” (217). Yet in her published work, Mead omitted to cite Jung’s work. A possible explanation for the absence of citation of Jung by Benedict and Mead, despite the influence of his typological model, was that they were developing diametrically opposed concepts of culture and its relation to the personality to Jung’s. Ironically, it is arguably through such indirect and half-acknowledged conduits that Jung’s work came to have its greatest impact upon modern anthropology and concepts of culture. This short account of some anthropological responses to Jung may serve to indicate that when Jung’s work was engaged with by the academic community, it was taken to quite different destinations, and underwent a sea change.”

As part of the intellectual world that shaped Jaynes’ thought, this Jungian line of influence feeds into the Boasian line of influence. But interestingly, in the Jaynesian sphere, the Jungian side of things is the least obvious component. Certainly, Jaynes didn’t see the connection, despite Jung’s Jaynesian-like comments about consciousness long before Jaynes wrote about it in 1976. Jung, writing in 1960, stated that, “There is in my opinion no tenable argument against the hypothesis that psychic functions which today seem conscious to us were once unconscious and yet worked as if they were conscious” (On the Nature of the Psyche; see post). And four years later he wrote that, “Consciousness is a very recent acquisition of nature” (Man and His Symbols; see post). In distancing himself from Jung, Jaynes was somewhat critical, though not dismissive: “Jung had many insights indeed, but the idea of the collective unconscious and of the archetypes has always seemed to me to be based on the inheritance of acquired characteristics, a notion not accepted by biologists or psychologists today” (quoted by Philip Ardery in “Ramifications of Julian Jaynes’s theory of consciousness for traditional general semantics”). His criticism was inaccurate, though, since Jung’s actual position was that, “It is not, therefore, a question of inherited ideas but of inherited possibilities of ideas” (What is the Blank Slate of the Mind?). So, in actuality, Jaynes’ view on this point appears to be right in line with that of Jung. This further emphasizes the unacknowledged Jungian influence.

I never see this kind of thing come up in Jaynesian scholarship. It makes me wonder how many Jaynesian scholars recognize the intellectual debt they owe to Boas and his students, including Sapir and Whorf. More than a half century before Jaynes published his book, a new way of thinking was paving the way. Jaynes didn’t come out of nowhere. Then again, neither did Boas. There are earlier linguistic philosophers such as Wilhelm von Humboldt — from On Language (1836): “Via the latter, qua character of a speech-sound, a pervasive analogy necessarily prevails in the same language; and since a like subjectivity also affects language in the same notion, there resides in every language a characteristic world-view. As the individual sound stands between man and the object, so the entire language steps in between him and the nature that operates, both inwardly and outwardly, upon him. He surrounds himself with a world of sounds, so as to take up and process within himself the world of objects. These expressions in no way outstrip the measure of the simple truth. Man lives primarily with objects, indeed, since feeling and acting in him depend on his presentations, he actually does so exclusively, as language presents them to him. By the same act whereby he spins language out of himself, he spins himself into it, and every language draws about the people that possesses it a circle whence it is possible to exit only by stepping over at once into the circle of another one. To learn a foreign language should therefore be to acquire a new standpoint in the world-view hitherto possessed, and in fact to a certain extent is so, since every language contains the whole conceptual fabric and mode of presentation of a portion of mankind.” The development of thought over time is always fascinating. But schools of thought too easily become narrow and insular over time, forgetting their own roots and becoming isolated from related areas of study. 
The Boasian lineage and Jaynesian theory have ever since been developing separately but in parallel. Maybe it’s time for them to merge back together or, at the very least, cross-pollinate.

To be fair, linguistic relativity has come up ever so slightly elsewhere in Jaynesian scholarship. As a suggestion, Marcel Kuijsten pointed to “John Limber’s chapter “Language and Consciousness” in Reflections on the Dawn of Consciousness”. I looked at that Limber piece. He does discuss this broad area of study involving language, thought, and consciousness. But as far as I can tell (based on doing an ebook search for relevant terms), he nowhere discusses Boas, Sapir, or Whorf. At best, he makes an indirect and brief mention of “pre-Whorfian advocates” without even bothering to mention, much less detail, Whorfian advocates or where they came from and how there is a line of influence from Boas to Jaynes. It’s an even more passing comment than McVeigh’s. It is found in note 82: “For reviews of non-Jaynesian ideas on inner speech and consciousness, see Sokolov (1972), Kucaj (1982), Dennett (1991), Nørretranders (1998), and Morin (2005). Vygotsky, of course, was somewhat of a Marxist and probably took something from Marx’s (1859) often cited “It is not the consciousness of men that determines their being, but, on the contrary, their social being that determines their consciousness.” Vygotsky was also influenced by various pre-Whorfian advocates of linguistic relativity. I say “Vygotsky as inspiration” because I have not as yet found much of substance in any of his writings on consciousness beyond that of the Marx quote above. (Several of his papers are available online at http://www.marxists.org.)” So, apparently in the entire Jaynesian literature and commentary, there are only two minuscule acknowledgements that linguistic relativists exist at all (nor much reference to similar thinkers like Marxist Lev Vygotsky; or consider Marx’s theory of species-being; also note the omission of Alfred Korzybski’s General Semantics).
Considering that Jaynes was making an argument for linguistic relativity, and possibly going so far as linguistic determinism, whether or not he knew it and thought about it that way, this oversight really gets me thinking.

That was where my thought ended, until serendipity brought forth a third example. It is in a passage from one of McVeigh’s more recent books, Discussions with Julian Jaynes (2016). In the June 5, 1991 session of their talks, fifteen years after the publication of his book, Jaynes spoke to McVeigh about this:
McVeigh: “The first thing I want to ask you about is language. Because in our book, language plays an important role, specifically metaphors. And what would you say to those who would accuse you of being too Whorfian? Or how would you handle the charge that you’re saying it is language that determines thought in your book? Or would you agree with the statement, “As consciousness developed, language changed to reflect this transformation?” So, in other words, how do you handle this [type of] old question in linguistics, “Which comes first, the chicken or the egg?””
Jaynes: “Well, you see Whorf applies to some things and doesn’t apply to others, and it’s being carried to a caricature state when somebody, let’s say, shows [a people perceives colors] and they don’t have words for colors. That’s supposed to disprove Whorf. That’s absolutely ridiculous. Because after all, animals, fish have very good color vision. But there’s no doubt about it, Whorfian hypothesis is true for some of the more abstract concepts we have. Certainly, in that sense, I would certainly be a Whorfian. But I don’t think Whorf went far enough. That’s what I used to say. I’m trying to think of the way I would exactly say it. I don’t know. For example, his discussion of time I think is very appropriate. Indeed, there wouldn’t be such a thing as time without consciousness. No concept of it.”
Jaynes bluntly stated, “I would certainly be a Whorfian.” He said this in response to a direct question McVeigh asked him about being accused of being a Whorfian. There was no dancing around it. Jaynes apparently thought it was obvious enough to not require further explanation. That makes it all the more odd that McVeigh, a Jaynesian scholar who has spent his career studying language, has never since pointed out this intriguing detail. After all, if Jaynes was a Whorfian by his own admission and McVeigh is a Jaynesian scholar, then doesn’t it automatically follow that McVeigh in studying Jaynesianism is studying Whorfianism?

That still leaves plenty of room for interpretation. It’s not clear what Jaynes’ full position on the Sapir-Whorf hypothesis was. Remarkably, he not only identified as a Whorfian; he then suggested that he went beyond Whorf. I don’t know what that means, but it does get one wondering. Whorf wasn’t offering any coherent and overarching explanatory theory in the way that Jaynes did. Rather, the Sapir-Whorf hypothesis is more basic in simply suggesting language can influence and maybe sometimes determine thought, perception, and behavior. That is more of a general framework of research that potentially could apply to a wide variety of theories. I’d argue it not only partly but entirely applies to Jaynes’ theory as well — as neither Sapir nor Whorf, as far as I know, were making any assertions for or against the role of language in the formation of consciousness. Certainly, Jaynesian consciousness or the bicameral mind before it would not be precluded according to the Sapir-Whorf linguistic paradigm. Specifically in identifying as Whorfian, Jaynes agrees that, “Whorfian hypothesis is true for some of the more abstract concepts we have.” What does he mean by ‘abstract’ in this context? I don’t recall any of the scholarly and popular texts on linguistic relativity ever describing the power of language as limited to abstractions. Then again, neither did Jaynes directly state it is limited in this fashion, even as he did not elaborate on any other applications. However, McVeigh interpreted his words as implying such a limitation — from the introduction of the book, McVeigh wrote that, “he argues that the relation between words and concepts is not one of simple causation and that the Whorfian hypothesis only works for certain abstract notions. In other words, the relation between language and conscious interiority is subtle and complex.” Well, I’m no expert on the writings of Whorf, but my sense is that Whorf would not necessarily disagree with that assessment.
One of the best sources of evidence for such subtlety and complexity might be linguistic relativity, a growing field of research. The area of overlap remains terra incognita. I’m not sure anyone knows the details of how linguistic relativity might apply to Jaynesian consciousness as metaphorical mindspace, nor how it might apply the other way around.

* * *

Though reworked a bit, I wrote much of the above about a year ago in the Facebook group Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind. And I just now shared a variation of my thoughts in another post to the same group. This link between the Jaynesian and the Whorfian (along with the Boasian, Marxian, Jungian, etc.) has been on my mind for a while, but it was hard to write about as few others have written about it. There is a fairly large literature of Jaynesian scholarship and an even more vast literature of linguistic relativity research. Yet even passing references to both together are rare. Below are the few examples I could find on the entire world wide web.

Language and thought: A Jaynesian Perspective
by Rachel Williams, Minds and Brains

The Future of Philosophy of Mind
by Rachel Williams, Minds and Brains

Recursion, Linguistic Evolution, Consciousness, the Sapir-Whorf Hypothesis, and I.Q.
by Gary Williams, New Amsterdam Paleoconservative

Rhapsody on Blue
by Chad Hill, the HipCrime Vocab
(a regular commenter on the Facebook group)

Why ancient civilizations couldn’t see the color blue
posted by J Nickolas FitzGerald, Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind Facebook group

* * *

Out of curiosity, I did some less extensive searches, in relation to Julian Jaynes, for some other thinkers, specifically Lev Vygotsky and Alfred Korzybski. The latter only showed up to a significant degree in a single scholarly article on Jaynes’ work (Philip Ardery, Ramifications of Julian Jaynes’s Theory of Consciousness for Traditional General Semantics), although Charles Eisenstein does mention the two thinkers in the same passage of his book The Ascent of Humanity but without making any direct connection or comparison. Greater relevance is found with Vygotsky and indeed he does come up more often, including several times on the official Julian Jaynes Society website and also in two of the collections of Jaynesian scholarship.

Two of the mentions of Vygotsky on the website are Books Related to Jaynes’s Bicameral Mind Theory and Supplementary Material (for Reflections on the Dawn of Consciousness), with the third offering some slight commentary — Marcel Kuijsten’s Critique 13, from Critiques and Responses: Part 2, where he writes: “For the vast differences between consciousness as described by Jaynes, Dennett, Carruthers, Vygotsky, and others – which is linguistically based and uniquely human – vs. non-linguistic animal cognition, see Peter Carruthers, Language, Thought and Consciousness, Jose Luis Bermudez, Ch. 9, “The Limits of Thinking Without Words,” in Thinking without Words, Lev Vygotsky, Thought and Language, Daniel Dennett, Kinds of Minds, etc.” In the introduction to The Julian Jaynes Collection, Marcel Kuijsten discusses Jaynes’ first hypothesis that consciousness is based on language. Vygotsky is mentioned in passing while explaining the views of another scholar:

“The debate over the importance of language for consciousness has a long history and has seen renewed interest in recent years. While many theorists continue to assume that infants are born conscious (confusing consciousness with sense perception), the work of child psychologist Philip Zelazo strongly supports Jaynes’s argument that consciousness develops in children over time through the acquisition of language. Building on the work of the early twentieth century Russian psychologists Lev Vygotsky and Alexander Luria and the Swiss psychologist Jean Piaget, Zelazo and his colleagues propose a model for the development of consciousness in children that highlights the importance of the interaction between thought and language. Zelazo describes “four major age-related increases” in consciousness in children and corresponding increases in children’s ability to spatialize time. Zelazo’s fourth stage, reflective consciousness, corresponds roughly to Jaynes’s definition of consciousness, whereas Zelazo’s first stage, minimal consciousness, describes what Jaynes would term reactivity or basic sense perception.”

A slightly fuller, if brief, comment on Vygotsky is found in The “Other” Psychology of Julian Jaynes. The author, Brian J. McVeigh, writes that, “An important intellectual descendant of Völkerpsychologie took root in the Soviet Union with the work of the cultural-historical approach of Lev Vygotsky (1896-1934) (1998), Alexander Luria (1902-77) (1976), and Aleksei Leontiev (1903-79) (1978, 2005 [1940]). Vygotsky and Luria (1993 [1930]) emphasized the inherently social nature of mind, language, and thought. Higher mental processes are complex and self-regulating, social in origin, mediated, and “conscious and voluntary in their mode of functioning” (cited in Meshcheriakov 2000, 43; see Wertsch 1985, 1991).”

Interestingly, Rachel Williams, in the above linked post The Future of Philosophy of Mind, also brings up Vygotsky. “Julian Jaynes has already cleared the underbrush to prepare the way for social-linguistic constructivism,” she explains. “And not your Grandpa’s neutered Sapir-Whorf hypothesis either. I’m talking about the linguistic construction of consciousness and higher-order thought itself. In other words, Vygotsky, not Whorf.” So, she obviously thinks Vygotsky is of utmost importance. I must admit that I’m actually not all that familiar with Vygotsky, but I am familiar with how influential he has been on the thought of others. I have greater interest in Korzybski by way of my appreciation for William S. Burroughs’ views of “word virus” and “Control”.

* * *

It should be mentioned that Jaynesian scholarship, in general, is immense in scope. Look at any of the books put out on the topic and you’ll be impressed. Those like Kuijsten and McVeigh are familiar and conversant with a wide variety of scholars and texts. But for whatever reason, certain thinkers haven’t shown up much on their intellectual radars. As for the likes of Vygotsky and Korzybski, I’m less surprised that they don’t appear as often in Jaynesian scholarship. Though influential, knowledge of them is limited and I don’t generally see them come up in consciousness studies more broadly. Sapir and Whorf, on the other hand, have had a much larger impact and, over time, their influence has continuously grown. Linguistic relativity has gained a respectability that Jaynesian scholarship still lacks.

I sometimes suspect that Jaynesian scholars, as black sheep in the academic world, are still too worried about respectability. Few serious intellectuals took Jaynes seriously, and that is still the case. That also used to be true of Sapir and Whorf, but it has changed: linguistic relativity, with improved research, has recovered the higher status it held earlier last century. Jaynesian scholarship, by contrast, was never respectable. I suspect that is why linguistic relativity got so easily ignored or dismissed. Jaynesian scholars might’ve been worried about aligning their own theories with another field of study that was, for a generation of scholars, heavily criticized and considered taboo. The lingering stigma of ‘strong’ Whorfianism as linguistic determinism, the notion that we aren’t entirely isolated, autonomous, self-determined free agents, is still not acceptable in mainstream thought in this hyper-individualistic society. But one would think Jaynesian scholars would be sympathetic, as the same charge of heresy is lodged against them.

Whatever motivated Jaynesian scholars in the past, it is definitely long past time to change tack. Linguistic relativity is an area of real-world research that offers falsifiable tests which could potentially demonstrate the validity of Jaynes’ theory. Simply for practical reasons, those wishing to promote Jaynes’ work might be wise to piggyback on these obvious connections into more mainstream thought, such as mining the work of the popular Daniel Everett and his son Caleb Everett. That would draw Jaynesian scholarship into one of the main battles in all of linguistics: the debate between Daniel Everett and Noam Chomsky about recursion. There is a great opening for bringing attention to Jaynes — discuss why recursion is relevant to consciousness studies in general and Jaynesian consciousness in particular. Or better yet, show the commonalities between Jaynes and Jung, considering Jung is one of the most popular thinkers in the Western world. And as I’ve argued in great detail, such larger context has everything to do with the cultural and cognitive differences demonstrated by linguistic relativity.

In general, Jaynesian studies has been trapped in an intellectual backwater. There has yet to be a writer to popularize Jaynes’ views as they apply to the larger world and present society, from politics to culture, from the economy to environmentalism, from media to entertainment. Even among intellectuals and academics, it remains largely unknown and even less understood. This is beginning to change, though. HBO’s Westworld did more than anything to bring Jaynes’ ideas to a larger audience that otherwise would never come across such strange insights into human nature. Placing this radical theory within a science fiction narrative makes it less daunting and threatening to status quo thought. There is nothing like a story to slip a meme past the psychological defenses. Now that a seed has been planted, may it grow in the public mind.

Let me add that my pointed jabs at the Jaynesian world come from a place of love. Jaynes is one of the main inspirations of my thought. And I enjoy reading Jaynesian scholarship more than almost any other field. I just want to see it expand, to become even more impressive. Besides, I’ve never been one for respectability, whether in politics or intellectual pursuits. Still, I couldn’t help but feel kind of bad about writing this post. It could be perceived as if all I was doing was complaining. And I realize that my sense of respect for Jaynesian scholars might be less than obvious to someone casually reading it (I tried to remedy that by clarifying my position in the main text above). I didn’t intend it as an attack on those scholars I have learned so much from. But I felt a need to communicate something, even if all I accomplished for the moment was making an observation.

It’s true that, instead of complaining about the omission of linguistic relativity, I could make a positive contribution by simply writing about how linguistic relativity applies to Jaynesian scholarship. If others haven’t shown the connections, the evidence and the examples, well then maybe I should. And I probably will, eventually. But it might take a while before I get around to that project. When I do, it could be a partial continuation of or tangent from my ongoing theorizing about symbolic conflation and such — that is a tough nut I’ve been trying to crack for years. Still, the omission of linguistic relativity itself somehow seemed significant in my mind. I’m not sure why. This post is basically a way of setting forth a problem to be solved. The significance is that linguistic relativity would offer real-world examples of how Jaynesian views of consciousness, authorization, narratization, etc. might apply to our everyday experience. It would help explain why such complex analysis, intellectually brilliant as it is, is relevant at all to our actual lives.