Psychology in Religion or as a Religion

There is a strong connection between Islamic doctrine and, as Julian Jaynes wrote about, the post-bicameral experience of the lost divine, of God/gods gone silent. As a much later religious development, Islam took this sense of loss to a further extreme in the theological claim that neither God nor the angels any longer speak to humans (Islam as Worship of a Missing God; & Islamic Voice-Hearing), and that silence will continue until the end of time.

The divine supposedly can only be known indirectly, by way of dreams and other means. This also makes Islam a much more text-based religion: since Muhammad’s visions were written down, there has been total divine silence. So, there is a greater focus on the power of language and textual analysis, as the only hope we have of sensing the voice of God in life is by reading the words of prophets who did hear God or, in the case of Muhammad, heard the archangel Gabriel speak on behalf of God.

In a way, this makes Islam a more modern religion, much further distant from bicameral voice-hearing. It was founded, after all, more than a half millennium following the earlier monotheistic revival in the post-axial era of the first century. So, Islam could be seen as an attempt to come to terms with a world ever more dominated by Jaynesian consciousness.

Evidence of this can be seen in Islamic psychology, ilm al-nafs. In the West, psychology developed more separately from and independently of religion, specifically Christianity and Judaism. But in Islam, psychological study and mental health developed early on and became central to the religion itself. That is a telling difference, or so it seems to me.

Here is a possible explanation. Unlike in the other monotheistic religions, the divine mind and voice in Islam is so distant as to have no immediate contact with the human world. This forces humans to study their own minds more carefully, including their dreams, to sense the influence of the divine, like reading the currents of the ocean by watching the ripples on the surface. This makes psychology potentially all the more important to Islam.

The West, instead, has largely replaced religion with psychology. This was necessary as religion had not as fully adapted itself to the new psychological mindset that emerged from Jaynesian consciousness. This leaves an uneasy relationship between religion and psychology for Western culture, something that is maybe less of an issue within Islam.

Islam has a more complicated and nuanced relationship to voice-hearing, which may require a more psychological approach. The Islamic individual has a greater responsibility, as part of religious practice, in determining the sources of the voices they hear.

The Islamic tradition sees religion and psychology as inseparable. The psychologist Carl Jung, who developed a mutual respect with the scholar of Islam Henry Corbin, agreed with that view in stating to Sigmund Freud that “religion can only be replaced by religion” (quoted in Peter Kingsley’s Catafalque). Jung argued that, “We must read the Bible or we shall not understand psychology. Our psychology, our whole lives, our language and imagery, are built upon the Bible.”

There is no way to remove religion from psychology. And all that we’ve accomplished in the modern West is to turn psychology into its own religion.

Balance of Egalitarianism and Hierarchy

David Graeber, an anthropologist, and David Wengrow, an archaeologist, have a theory that hunter-gatherer societies cycled between egalitarianism and hierarchy. That is to say, hierarchies were temporary and often seasonal. There was no permanent leadership or ruling caste, as seen in the fluid social order of still-surviving hunter-gatherers. This carried over into the early settlements, which were initially transitory meeting places, likely for feasts and festivals.

There are two questions that need to be answered. First, why did humans permanently settle down? Second, why did civilization get stuck in hierarchy? These questions have to be answered separately. For millennia into civilization, the egalitarian impulse persisted within many permanent settlements. There was no linear development from egalitarianism to hierarchy, no fall from the Garden of Eden.

Julian Jaynes, in his theorizing about the bicameral mind, offered a possible explanation. A contributing factor to permanent settlement may have been that the speaking idols had to be kept in a single location, with agriculture developing later as a result. Then, as societies became more populous, complex, and expansive, hierarchies (as with moralizing gods) became more important to compensate for the communal limits of a voice-hearing social order.

That kind of hierarchy, though, was a much later development, especially in its extreme forms not seen until the Axial Age empires. The earlier bicameral societies had a more communal identity. That would’ve been true on the level of experience, as even the voices people heard were shared. There wasn’t an internal self separate from the communal identity and so no conflict between the individual member and larger society. One either fully belonged to and was immersed in that culture or not.

Large, complex hierarchies weren’t needed. Bicameralism began in small settlements that lacked police, court systems, standing armies, etc. — all the traits of the oppressively authoritarian hierarchies that would appear later, along with the simultaneous appearance of sexual moralizing and pornographic art. It wasn’t the threat of violent force by centralized authority and concentrated power that created and maintained the bicameral order but, as still seen among isolated indigenous tribes, shared identity and experience.

An example of this is the early Egyptians. They were capable of impressive technological feats, and yet they didn’t even have basic infrastructure like bridges. It appears they were initially a loose association of farmers organized around the bicameral culture of archaic authorization and that, in the off-season, they built pyramids without coercion. Slavery was not required for this, as there is no evidence of forced labor.

In so many ways, this is alien to the conventional understanding of civilization. It is so radically strange that to many it seems impossible, especially when it gets described as ‘egalitarian,’ which places it in a framework of modern ideas. Mention primitive ‘communism’ or ‘anarchism’ and you’ll really lose most people. Nonetheless, however one wants to describe and label it, this is what the evidence points toward.

Here is another related thought. How societies went from bicameral mind to consciousness is well-trodden territory. But what about how bicameralism emerged from animism? They share enough similarities that I’ve referred to them as the animistic-bicameral complex. The bicameral mind seems like a variant or extension of the voice-hearing in animism.

Among hunter-gatherers, it was often costume and masks through which gods, spirits, and ancestors spoke. Any individual potentially could become the vessel of possession because, in the animistic view, all the world is alive with voices. So, how did this animistic voice-hearing become narrowed down to idol worship of corpses and statues?

I ask this because this is central to the question of why humans created permanent settlements. A god-king’s voice of authorization was so powerful that it persisted beyond his death. The corpse was turned into a mummy, as his voice was a living memory that kept speaking, and so god-houses were built. But how did the fluid practice of voice-hearing in animism become centralized in a god-king?

Did this begin with the rise of shamanism? Some hunter-gatherers don’t have shamans. But once the role of shaman becomes a permanent authority figure mediating with other realms, it’s not a large leap from a shaman-king to a god-king who could be fully deified in death. In that case, how did shamanism act as a transitional proto-bicameralism? In this, we might begin to discern the hitch upon which permanent hierarchy eventually got stuck.

I might point out that there is much disagreement in this area of scholarship, as expected. The position of Graeber and Wengrow is highly contested, even among those offering alternative interpretations of the evidence; see Peter Turchin (An Anarchist View of Human Social Evolution & A Feminist Perspective on Human Social Evolution) and Camilla Power (Gender egalitarianism made us human: patriarchy was too little, too late & Gender egalitarianism made us human: A response to David Graeber & David Wengrow’s ‘How to change the course of human history’).

But I don’t see the disagreements as being significant for the purposes here. Here is a basic point that Turchin explains: “The reason we say that foragers were fiercely egalitarian is because they practiced reverse dominance hierarchy” (from the first link directly above). That seems to go straight to the original argument. Many other primates have social hierarchy, although not all. Some of the difference is cultural, in that early in their evolution humans appear to have developed cultural methods of enforcing egalitarianism. This cultural pattern has existed long enough to have fundamentally altered human nature.

According to Graeber and Wengrow, these egalitarian habits weren’t lost easily, even as society became larger and more complex. Modern authoritarian hierarchies represent a late development, a fraction of a percent of human existence. They are far outside the human norm. In social science experiments, we see how the egalitarian impulse persists. Consider two examples. In one study, children naturally helped those in need, until someone paid them money to do so, shifting their motivation from intrinsic to extrinsic. The other study showed that most people, both children and adults, will choose to punish wrongdoers even at personal cost.

This in-built egalitarianism is an old habit that doesn’t die easily, no matter how it is suppressed or perverted by systems of authoritarian power. It is the psychological basis of a culture of trust that permanent hierarchies take advantage of through manipulation of human nature: the egalitarian impulse gets redirected into undermining egalitarianism. This is why modern societies are so unstable, as compared to the ancient societies that lasted for millennia.

That said, there is nothing wrong with genuine authority, expertise, and leadership — as seen even in the most radically egalitarian societies like the Piraha. Hierarchies are also part of our natural repertoire and only problematic when they fall out of balance with egalitarianism and so become entrenched. One way or another, human societies cycle between hierarchy and egalitarianism, whether it cycles on a regular basis or necessitates collapse. That is the point Walter Scheidel makes in his book, The Great Leveler. High inequality destabilizes society and always brings its own downfall.

We need to relearn that balance, if we hope to avoid mass disaster. Egalitarianism is not a utopian ideal. It’s simply the other side of human nature that gets forgotten.

* * *

Archaeology, anarchy, hierarchy, and the growth of inequality
by Andre Costopoulos

In some ways, I agree with both Graeber and Wengrow, and with Turchin. Models of the growth of social inequality have indeed emphasized a one dimensional march, sometimes inevitable, from virtual equality and autonomy to strong inequality and centralization. I agree with Graeber and Wengrow that this is a mistaken view. Except I think humans have moved from strong inequality, to somewhat managed inequality, to strong inequality again.

The rise and fall of equality

Hierarchy, dominance, power, influence, politics, and violence are hallmarks not only of human social organization, but of that of our primate cousins. They are widespread among mammals. Inequality runs deep in our lineage, and our earliest identifiable human ancestors must have inherited it. But an amazing thing happened among Pleistocene humans. They developed strong social leveling mechanisms, which actively reduced inequality. Some of those mechanisms are still at work in our societies today: Ridicule at the expense of self-aggrandizers, carnival inversion as a reminder of the vulnerability of the powerful, ostracism of the controlling, or just walking away from conflict, for example.

Understanding the growth of equality in Pleistocene human communities is the big untackled project of Paleolithic archaeology, mostly because we assume they started from a state of egalitarianism and either degenerated or progressed from there, depending on your lens. Our broader evolutionary context argues they didn’t.

During the Holocene, under increasing sedentism and dependence on spatially bounded resources such as agricultural fields that represent significant energy investments, these mechanisms gradually failed to dampen the pressures for increasing centralization of power. However, even at the height of the Pleistocene egalitarian adaptation, there were elites if, using Turchin’s figure of the top one or two percent, we consider that the one or two most influential members in a network of a hundred are its elite. All the social leveling in the world could not contain influence. Influence, in the end, if wielded effectively, is power.

Ancient ‘megasites’ may reshape the history of the first cities
by Bruce Bower

No signs of a centralized government, a ruling dynasty, or wealth or social class disparities appear in the ancient settlement, the researchers say. Houses were largely alike in size and design. Excavations yielded few prestige goods, such as copper items and shell ornaments. Many examples of painted pottery and clay figurines typical of Trypillia culture turned up, and more than 6,300 animal bones unearthed at the site suggest residents ate a lot of beef and lamb. Those clues suggest daily life was much the same across Nebelivka’s various neighborhoods and quarters. […]

Though some of these sprawling sites had social inequality, egalitarian cities like Nebelivka were probably more widespread several thousand years ago than has typically been assumed, says archaeologist David Wengrow of University College London. Ancient ceremonial centers in China and Peru, for instance, were cities with sophisticated infrastructures that existed before any hints of bureaucratic control, he argues. Wengrow and anthropologist David Graeber of the London School of Economics and Political Science also made that argument in a 2018 essay in Eurozine, an online cultural magazine.

Councils of social equals governed many of the world’s earliest cities, including Trypillia megasites, Wengrow contends. Egalitarian rule may even have characterized Mesopotamian cities for their first few hundred years, a period that lacks archaeological evidence of royal burials, armies or large bureaucracies typical of early states, he suggests.

How to change the course of human history
by David Graeber and David Wengrow

Overwhelming evidence from archaeology, anthropology, and kindred disciplines is beginning to give us a fairly clear idea of what the last 40,000 years of human history really looked like, and in almost no way does it resemble the conventional narrative. Our species did not, in fact, spend most of its history in tiny bands; agriculture did not mark an irreversible threshold in social evolution; the first cities were often robustly egalitarian. Still, even as researchers have gradually come to a consensus on such questions, they remain strangely reluctant to announce their findings to the public – or even scholars in other disciplines – let alone reflect on the larger political implications. As a result, those writers who are reflecting on the ‘big questions’ of human history – Jared Diamond, Francis Fukuyama, Ian Morris, and others – still take Rousseau’s question (‘what is the origin of social inequality?’) as their starting point, and assume the larger story will begin with some kind of fall from primordial innocence.

Simply framing the question this way means making a series of assumptions, that 1. there is a thing called ‘inequality,’ 2. that it is a problem, and 3. that there was a time it did not exist. Since the financial crash of 2008, of course, and the upheavals that followed, the ‘problem of social inequality’ has been at the centre of political debate. There seems to be a consensus, among the intellectual and political classes, that levels of social inequality have spiralled out of control, and that most of the world’s problems result from this, in one way or another. Pointing this out is seen as a challenge to global power structures, but compare this to the way similar issues might have been discussed a generation earlier. Unlike terms such as ‘capital’ or ‘class power’, the word ‘equality’ is practically designed to lead to half-measures and compromise. One can imagine overthrowing capitalism or breaking the power of the state, but it’s very difficult to imagine eliminating ‘inequality’. In fact, it’s not obvious what doing so would even mean, since people are not all the same and nobody would particularly want them to be.

‘Inequality’ is a way of framing social problems appropriate to technocratic reformers, the kind of people who assume from the outset that any real vision of social transformation has long since been taken off the political table. It allows one to tinker with the numbers, argue about Gini coefficients and thresholds of dysfunction, readjust tax regimes or social welfare mechanisms, even shock the public with figures showing just how bad things have become (‘can you imagine? 0.1% of the world’s population controls over 50% of the wealth!’), all without addressing any of the factors that people actually object to about such ‘unequal’ social arrangements: for instance, that some manage to turn their wealth into power over others; or that other people end up being told their needs are not important, and their lives have no intrinsic worth. The latter, we are supposed to believe, is just the inevitable effect of inequality, and inequality, the inevitable result of living in any large, complex, urban, technologically sophisticated society. That is the real political message conveyed by endless invocations of an imaginary age of innocence, before the invention of inequality: that if we want to get rid of such problems entirely, we’d have to somehow get rid of 99.9% of the Earth’s population and go back to being tiny bands of foragers again. Otherwise, the best we can hope for is to adjust the size of the boot that will be stomping on our faces, forever, or perhaps to wrangle a bit more wiggle room in which some of us can at least temporarily duck out of its way.

Mainstream social science now seems mobilized to reinforce this sense of hopelessness.

Rethinking cities, from the ground up
by David Wengrow

Settlements inhabited by tens of thousands of people make their first appearance in human history around 6,000 years ago. In the earliest examples on each continent, we find the seedbed of our modern cities; but as those examples multiply, and our understanding grows, the possibility of fitting them all into some neat evolutionary scheme diminishes. It is not just that some early cities lack the expected features of class divisions, wealth monopolies, and hierarchies of administration. The emerging picture suggests not just variability, but conscious experimentation in urban form, from the very point of inception. Intriguingly, much of this evidence runs counter to the idea that cities marked a ‘great divide’ between rich and poor, shaped by the interests of governing elites.

In fact, surprisingly few early cities show signs of authoritarian rule. There is no evidence for the existence of monarchy in the first urban centres of the Middle East or South Asia, which date back to the fourth and early third millennia BCE; and even after the inception of kingship in Mesopotamia, written sources tell us that power in cities remained in the hands of self-governing councils and popular assemblies. In other parts of Eurasia we find persuasive evidence for collective strategies, which promoted egalitarian relations in key aspects of urban life, right from the beginning. At Mohenjo-daro, a city of perhaps 40,000 residents, founded on the banks of the Indus around 2600 BCE, material wealth was decoupled from religious and political authority, and much of the population lived in high quality housing. In Ukraine, a thousand years earlier, prehistoric settlements already existed on a similar scale, but with no associated evidence of monumental buildings, central administration, or marked differences of wealth. Instead we find circular arrangements of houses, each with its attached garden, forming neighbourhoods around assembly halls; an urban pattern of life, built and maintained from the bottom-up, which lasted in this form for over eight centuries.⁶

A similar picture of experimentation is emerging from the archaeology of the Americas. In the Valley of Mexico, despite decades of active searching, no evidence for monarchy has been found among the remains of Teotihuacan, which had its magnificent heyday around 400 CE. After an early phase of monumental construction, which raised up the Pyramids of the Sun and Moon, most of the city’s resources were channelled into a prodigious programme of public housing, providing multi-family apartments for its residents. Laid out on a uniform grid, these stone-built villas — with their finely plastered floors and walls, integral drainage facilities, and central courtyards — were available to citizens regardless of wealth, status, or ethnicity. Archaeologists at first considered them to be palaces, until they realised virtually the entire population of the city (all 100,000 of them) were living in such ‘palatial’ conditions.⁷

A millennium later, when Europeans first came to Mesoamerica, they found an urban civilisation of striking diversity. Kingship was ubiquitous in cities, but moderated by the power of urban wards known as calpolli, which took turns to fulfil the obligations of municipal government, distributing the highest offices among a broad sector of the altepetl (or city-state). Some cities veered towards absolutism, but others experimented with collective governance. Tlaxcalan, in the Valley of Puebla, went impressively far in the latter direction. On arrival, Cortés described a commercial arcadia, where the ‘order of government so far observed among the people resembles very much the republics of Venice, Genoa, and Pisa for there is no supreme overlord.’ Archaeology confirms the existence here of an indigenous republic, where the most imposing structures were not palaces or pyramid-temples, but the residences of ordinary citizens, constructed around district plazas to uniformly high standards, and raised up on grand earthen terraces.⁸

Contemporary archaeology shows that the ecology of early cities was also far more diverse, and less centralised than once believed. Small-scale gardening and animal keeping were often central to their economies, as were the resources of rivers and seas, and indeed the ongoing hunting and collecting of wild seasonal foods in forests or in marshes, depending on where in the world we happen to be.⁹ What we are gradually learning about history’s first city-dwellers is that they did not always leave a harsh footprint on the environment, or on each other; and there is a contemporary message here too. When today’s urbanites take to the streets, calling for the establishment of citizens’ assemblies to tackle issues of climate change, they are not going against the grain of history or social evolution, but with its flow. They are asking us to reclaim something of the spark of political creativity that first gave life to cities, in the hope of discerning a sustainable future for the planet we all share.

Farewell to the ‘Childhood of Man’
by Gyrus

[Robert] Lowie made similar arguments to [Pierre] Clastres, about conscious knowledge of hierarchies among hunter-gatherers. However, for reasons related to his concentration on Amazonian Indians, Clastres missed a crucial point in Lowie’s work. Lowie highlighted the fact that among many foragers, such as the Eskimos in the Arctic, egalitarianism and hierarchy exist within the same society at once, cycling from one to another through seasonal social gatherings and dispersals. Based on social responses to seasonal variations in the weather, and patterns in the migration of hunted animals, not to mention the very human urge to sometimes hang out with a lot of people and sometimes to get the hell away from them, foraging societies often create and then dismantle hierarchical arrangements on a year-by-year basis.

There seems to have been some confusion about exactly what the pattern was. Does hierarchy arise during gatherings? This would tally with sociologist Émile Durkheim’s famous idea that ‘the gods’ were a kind of primitive hypothesis personifying the emergent forces that social complexity brought about. People sensed the dynamics changing as they lived more closely in greater numbers, and attributed these new ‘transcendent’ dynamics to organised supernatural forces that bound society together. Religion and cosmology thus function as naive mystifications of social forces. Graeber detailed ethnographic examples where some kind of ‘police force’ arises during tribal gatherings, enforcing the etiquette and social expectations of the event, but returning to being everyday people when it’s all over.

But sometimes, the gatherings are occasions for the subversion of social order — as is well known in civilised festivals such as the Roman Saturnalia. Thus, the evidence seemed to be confusing, and the idea of seasonal variations in social order was neglected. After the ’60s, the dominant view became that ‘simple’ egalitarian hunter-gatherers were superseded by ‘complex’ hierarchical hunter-gatherers as a prelude to farming and civilisation.

Graeber and Wengrow argue that the evidence isn’t confusing: it’s simply that hunter-gatherers are far more politically sophisticated and experimental than we’ve realised. Many different variations, and variations on variations, have been tried over the vast spans of time that hunter-gatherers have existed (over 200,000 years, compared to the 12,000 or so years we know agriculture has been around). Clastres was right: people were never naive, and resistance to the formation of hierarchies is a significant part of our heritage. However, seasonal variations in social structures mean that hierarchies may never have been a ghostly object of resistance. They have probably been at least a temporary factor throughout our long history.1 Sometimes they functioned, in this temporary guise, to facilitate socially positive events — though experience of their oppressive possibilities usually encouraged societies to keep them in check, and prevent them from becoming fixed.

How does this analysis change our sense of the human story? In its simplest form, it moves the debate from ‘how and when did hierarchy arise?’ to ‘how and when did we get stuck in the hierarchical mode?’. But this is merely the first stage in what Graeber and Wengrow promise is a larger project, which will include analysis of the persistence of egalitarianism among early civilisations, usually considered to be ‘after the fall’ into hierarchy.

 

Alienation and Soul Blindness

There is the view that consciousness* is a superficial overlay, that the animistic-bicameral mind is our fundamental nature and continues to operate within consciousness. In not recognizing this, we’ve become alienated from ourselves and from the world we are inseparable from. We don’t recognize that the egoic voice is but one of many voices, and so we’ve lost appreciation for what it means to hear voices, including the internalized egoic voice that we’ve become identified with in submission to its demiurgic authorization. This could be referred to as soul blindness, maybe related to soul loss — basically, a lack of psychological integration and coherence. Is this an inevitability within consciousness? Maybe not. What if a deeper appreciation of voice-hearing were developed within consciousness? What would emerge from consciousness coming to terms with its animistic-bicameral foundation? Would it still be consciousness or something else entirely?

* This is in reference to Julian Jaynes’s use of ‘consciousness’, which refers to the ego mind with its introspective and internal space built upon metaphor and narratization. Such consciousness, as a social construction of a particular kind of culture, is not mere perceptual awareness or biological reactivity.

* * *

Is It as Impossible to Build Jerusalem as It is to Escape Babylon? (Part Two)
by Peter Harrison

Marx identified the concept of alienation as being a separation, or estrangement, from one’s labour. And for Marx the consistent ability to labour, to work purposefully and consciously, as opposed to instinctively, towards a pre-imagined goal, was the trait that distinguished humans from other animals. This means also that humans are able to be persuaded to work creatively, with vigour and passion, for the goals of others, or for some higher goal than the maintenance of daily survival. As long as they are able to see some tiny benefit for themselves, which might be service to a higher cause, or even just simple survival, since working for the goal of others may be the only means of obtaining food. So, Marx’s definition of alienation was more specific than an ‘existential’ definition because it specified labour as the defining human characteristic. But he was also aware that the general conditions of capitalism made this alienation more acute and that this escalated estrangement of humans from immediately meaningful daily activity led to a sense of being a stranger in one’s own world, and not only for the working class. This estrangement (I want to write étranger-ment, to reference Camus, but this is not a word) afflicted all classes, even those classes that seemed to benefit from class society, since capitalism had, even by his own time, gained an autonomy of its own. Life is as meaningless [or better: as anti-human] for a cleaner as it is for the head of a large corporation. This is why Marx stated that all people under capitalism were proletarian.

When I discovered the idea of soul blindness in Eduardo Kohn’s book, How Forests Think, I was struck by it as another useful way of understanding the idea of alienation. The concept of soul blindness, as used by the Runa people described by Kohn, seems to me to be related to the widespread Indigenous view of the recently deceased as aimless and dangerous beings who must be treated with great care and respect after their passing to prevent them wreaking havoc on the living. In Kohn’s interpretation, to be soul blind is to have reached the ‘terminus of selfhood,’ and this terminus can be reached while still alive, when one loses one’s sense of self through illness or despair, or even when one just drifts off into an unfocussed daze, or, more profoundly, sinks into an indifference similar to — to reference Camus again — that described by the character Meursault, in L’Etranger.

There are some accounts of Indigenous people first encountering white people in which the white people are initially seen as ghosts; one is recorded by Lévi-Strauss for Vanuatu. Another is embedded in the popular Aboriginal history of the area I live in. On first contact the white people are immediately considered to be some kind of ghost because of their white skin. This may have something to do with the practice of preserving the bodies of the dead. This involves scraping off the top layer of skin which, apparently, makes the body white. This practice is described by the anthropologist, Atholl Chase, in his reminiscences of Cape York. But for me there is more to the defining of the white intruders as ghosts because of their white skin. These foreigners also act as if they are soul blind. They are like machines, working for a cause that is external to them. For the Indigenous people these strangers do not seem to have soul: they are unpredictable; dangerous; they don’t know who they are.

But it is the anthropologist Eduardo Viveiros de Castro who, I think, connects most clearly to the work of James Hillman on the notion of the soul. James Hillman uses the term soul but he does not mean a Christian soul and he is not ultimately meaning the mind. For him the soul is a form of mediation between events and the subject and, in this sense, it might be similar to Bourdieu’s conception of ‘disposition.’ For Viveiros de Castro, ‘A perspective is not a representation because representations are a property of the mind or spirit, whereas the point of view is located in the body.’ Thus, Amerindian philosophy, which Viveiros de Castro is here describing, perhaps prefigures Hillman’s notion that ‘soul’ is ‘a perspective rather than a substance, a viewpoint towards things rather than a thing itself.’

To Empathize is to Understand

What is empathy as a cognitive ability? And what is empathy as an expansion of identity, as part of awareness of self and other?

There is a basic level of empathy that appears to be common across numerous species. Tortoises, when they see another tortoise stuck on its back, will help flip it over. There are examples of animals helping or cooperating with members of an entirely different species. Such behavior has been repeatedly demonstrated in laboratories as well. These are fairly advanced expressions of empathy. In some cases, one might interpret them as indicating at least a rudimentary theory of mind, the understanding that others have their own experience, perspective, and motivations. But obviously human theory of mind can be much more complex.

One explanation of greater empathy has to do with identity. Empathy, in a way, is simply a matter of what is included within one’s personal experience (Do To Yourself As You Would Do For Others). To extend identity is to extend empathy to another individual or a group (or anything else that can be brought within the sphere of the self). For humans, this can mean learning to include one’s future self, to empathize with experience one has not yet had, the person one has not yet become. The future self is fundamentally no different from another person.

Without cognitive empathy, affective empathy is limited to immediate experience. It’s the ability to feel what another feels. But lacking cognitive empathy, as happens in the most severe autism, theory of mind cannot be developed and so there is no way to identify, locate, and understand that feeling. One can only emotionally react, without being able to differentiate one’s own emotion from that of another. In that case, there would be pure emotion, and yet no recognition of the other. Cognitive empathy is necessary to get beyond affective reactivity, which is not all that different from the biological reactivity of a slug.

It’s interesting that some species (primates, rats, dolphins, etc.) might have more cognitive empathy and theory of mind than some people at the extreme end of severe autism, which is not necessarily an issue of intelligence. On the other hand, those who are high functioning on the autistic spectrum, if intervention happens early enough, can be taught theory of mind, although it is challenging for them. This kind of empathy is considered a hallmark of humanity, a defining feature, which is why its lack leads to problems of social behavior for those with autism spectrum disorder.

Someone entirely lacking in theory of mind would be extremely difficult to communicate and interact with beyond the most basic level, as is seen in the severest cases of autism and other extreme developmental conditions. Helen Keller asserts she had no conscious identity, no theory of her own mind or that of others, until she learned language.* Prior to her awakening, she was aggressive and violent in reacting to a world she couldn’t understand, articulate, or think about. That fits in with the speculations of Julian Jaynes. What he calls ‘consciousness’ is the addition of abstract thought by way of metaphorical language, as built upon concrete experience and raw affect. Keller discusses how her experience went from the concreteness of touch to the abstraction of language. In becoming aware of the world, she became aware of herself.

Without normal development of language, the human mind is crippled: “The ‘black silence’ of the deaf, blind and mute is similar in many respects to the situation of acutely autistic children where there are associated difficulties with language and the children seem to lack what has been called ‘a theory of mind’” (Robin Allott, Helen Keller: Language and Consciousness). Even so, there is more to empathy than language, and that might be true as well for some aspects or kinds of cognitive empathy. Language is not the only form of communication.

Rats are a great example for comparison with humans. We think of them as pests, as psychologically inferior. But anyone who has kept rats knows how intelligent and social they are. They are friendlier and more interactive than the typical cat. And research has shown how cognitively advanced they are in learning. Rats do have the typical empathic concern for others. For example, they won’t hurt another rat in exchange for a reward and, given the choice, would rather go hungry. But it goes beyond that.

It’s also shown that “rats are more likely and quicker to help a drowning rat when they themselves have experienced being drenched, suggesting that they understand how the drowning rat feels” (Kristin Andrews, Rats are us). And “rats who had been shocked themselves were less likely to allow other rats to be shocked, having been through the discomfort themselves.” They can also learn to play hide-and-seek, which necessitates taking on the perspective of others. As Ed Yong asks in The Game That Made Rats Jump for Joy, “In switching roles, for example, are they taking on the perspective of their human partners, showing what researchers call “theory of mind”?”

That is much more than mere affective empathy. This seems to involve active sympathy and genuine emotional understanding, that is to say cognitive empathy and theory of mind. If they are capable of both affective and cognitive empathy, however limited, and if Jaynesian consciousness partly consists of empathy imaginatively extended in space and time, then a case could be made that rats have more going on than simple perceptual awareness and biological reactivity. They are empathically and imaginatively engaging with others in the world around them. Does this mean they are creating and maintaining a mental model of others? Kristin Andrews details the extensive abilities of rats:

“We now know that rats don’t live merely in the present, but are capable of reliving memories of past experiences and mentally planning ahead the navigation route they will later follow. They reciprocally trade different kinds of goods with each other – and understand not only when they owe a favour to another rat, but also that the favour can be paid back in a different currency. When they make a wrong choice, they display something that appears very close to regret. Despite having brains that are much simpler than humans’, there are some learning tasks in which they’ll likely outperform you. Rats can be taught cognitively demanding skills, such as driving a vehicle to reach a desired goal, playing hide-and-seek with a human, and using the appropriate tool to access out-of-reach food.”

To imagine the future for the purpose of thinking in advance and planning actions is quite advanced cognitive behavior. Julian Jaynes argued that this was the purpose of humans developing a new kind of consciousness: the imagined metaphorical space that is narratized allows for the consideration of alternatives, something he speculates was lacking in humans prior to the Axial Age, when behavior supposedly was more formulaic and predetermined according to norms, idioms, etc. Yet rats can navigate a path they’ve never taken before between novel beginning and ending locations, which would require taking into account multiple options. What theoretically makes Jaynesian consciousness unique?

Jaynes argues that it’s the metaphorical inner space that is the special quality that created the conditions for the Axial Age and all that followed from it: the flourishing of complex innovations and inventions, the ever greater extremes of abstraction seen in philosophy, math, and science. We have so strongly developed this post-bicameral mind that we can barely imagine anything else. But we know that other societies have very different kinds of mentalities, such as the extended and fluid minds of animistic cultures. What exactly is the difference?

Australian Aborigines give a hint of something between the two kinds of mind. In some ways, their mnemonic systems represent a more complex cognitive ability than we are capable of with our Jaynesian consciousness. Instead of an imagined inner space, the Songlines are vast systems of experience and knowledge, culture and identity, overlaid upon immense landscapes. These mappings of externalized cognitive space can guide an individual across distant territories they have never seen before and help them to identify and use the materials (plants, stones, etc.) at a location no one in their tribe has visited for generations. Does this externalized mind have less potential for advanced abilities? Upon Western contact, Aborigines had farming and ranching, kept crop surpluses in granaries, and practiced water and land management.

It’s not hard to imagine civilization having developed along entirely different lines based on divergent mentalities and worldviews. Our modern egoic consciousness was not an inevitability and it likely is far from offering the most optimal functioning. We might already be hitting a dead end with our present interiorized mind-space. Maybe it’s our lack of empathy in understanding the minds of other humans and other species that is an in-built limitation to the post-bicameral world of Jaynesian consciousness. And so maybe we have much to learn from entirely other perspectives and experiences, even from rats.

* * *

* Helen Keller, from Light in My Darkness:

I had no concepts whatever of nature or mind or death or God. I literally thought with my body. Without a single exception my memories of that time are tactile. . . . But there is not one spark of emotion or rational thought in these distinct yet corporeal memories. I was like an unconscious clod of earth. There was nothing in me except the instinct to eat and drink and sleep. My days were a blank without past, present, or future, without hope or anticipation, without interest or joy. Then suddenly, I knew not how or where or when, my brain felt the impact of another mind, and I awoke to language, to knowledge, to love, to the usual concepts of nature, good, and evil. I was actually lifted from nothingness to human life.

And from The Story of My Life:

As the cool stream gushed over one hand she spelled into the other the word water, first slowly, then rapidly. I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten–-a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that ‘w-a-t-e-r’ meant the wonderful cool something that was flowing over my hand. That living word awakened my soul, gave it light, hope, joy, set it free! There were barriers still, it is true, but barriers that could in time be swept away.

And from The World I Live In:

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith. […]

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me. Thus it was not the sense of touch that brought me knowledge. It was the awakening of my soul that first rendered my senses their value, their cognizance of objects, names, qualities, and properties. Thought made me conscious of love, joy, and all the emotions. I was eager to know, then to understand, afterward to reflect on what I knew and understood, and the blind impetus, which had before driven me hither and thither at the dictates of my sensations, vanished forever.

I cannot represent more clearly than any one else the gradual and subtle changes from first impressions to abstract ideas. But I know that my physical ideas, that is, ideas derived from material objects, appear to me first an idea similar to those of touch. Instantly they pass into intellectual meanings. Afterward the meaning finds expression in what is called “inner speech.”  […]

As my experiences broadened and deepened, the indeterminate, poetic feelings of childhood began to fix themselves in definite thoughts. Nature—the world I could touch—was folded and filled with myself. I am inclined to believe those philosophers who declare that we know nothing but our own feelings and ideas. With a little ingenious reasoning one may see in the material world simply a mirror, an image of permanent mental sensations. In either sphere self-knowledge is the condition and the limit of our consciousness. That is why, perhaps, many people know so little about what is beyond their short range of experience. They look within themselves—and find nothing! Therefore they conclude that there is nothing outside themselves, either.

However that may be, I came later to look for an image of my emotions and sensations in others. I had to learn the outward signs of inward feelings. The start of fear, the suppressed, controlled tensity of pain, the beat of happy muscles in others, had to be perceived and compared with my own experiences before I could trace them back to the intangible soul of another. Groping, uncertain, I at last found my identity, and after seeing my thoughts and feelings repeated in others, I gradually constructed my world of men and of God. As I read and study, I find that this is what the rest of the race has done. Man looks within himself and in time finds the measure and the meaning of the universe.

* * *

As an example of how language relates to emotions:

The ‘untranslatable’ emotions you never knew you had
by David Robson

But studying these terms will not just be of scientific interest; Lomas suspects that familiarising ourselves with the words might actually change the way we feel ourselves, by drawing our attention to fleeting sensations we had long ignored.

“In our stream of consciousness – that wash of different sensations, feelings and emotions – there’s so much to process that a lot passes us by,” Lomas says. “The feelings we have learned to recognise and label are the ones we notice – but there’s a lot more that we may not be aware of. And so I think if we are given these new words, they can help us articulate whole areas of experience we’ve only dimly noticed.”

As evidence, Lomas points to the work of Lisa Feldman Barrett at Northeastern University, who has shown that our abilities to identify and label our emotions can have far-reaching effects.

Her research was inspired by the observation that certain people use different emotion words interchangeably, while others are highly precise in their descriptions. “Some people use words like anxious, afraid, angry, disgusted to refer to a general affective state of feeling bad,” she explains. “For them, they are synonyms, whereas for other people they are distinctive feelings with distinctive actions associated with them.”

This is called “emotion granularity” and she usually measures this by asking the participants to rate their feelings on each day over the period of a few weeks, before she calculates the variation and nuances within their reports: whether the same old terms always coincide, for instance.

Importantly, she has found that this then determines how well we cope with life. If you are better able to pin down whether you are feeling despair or anxiety, for instance, you might be better able to decide how to remedy those feelings: whether to talk to a friend, or watch a funny film. Or being able to identify your hope in the face of disappointment might help you to look for new solutions to your problem.

In this way, emotion vocabulary is a bit like a directory, allowing you to call up a greater number of strategies to cope with life. Sure enough, people who score highly on emotion granularity are better able to recover more quickly from stress and are less likely to drink alcohol as a way of recovering from bad news. It can even improve your academic success. Marc Brackett at Yale University has found that teaching 10 and 11-year-old children a richer emotional vocabulary improved their end-of-year grades, and promoted better behaviour in the classroom. “The more granular our experience of emotion is, the more capable we are to make sense of our inner lives,” he says.

Both Brackett and Barrett agree that Lomas’s “positive lexicography” could be a good prompt to start identifying the subtler contours of our emotional landscape. “I think it is useful – you can think of the words and the concepts they are associated with as tools for living,” says Barrett. They might even inspire us to try new experiences, or appreciate old ones in a new light.

* * *

And related to all of this is hypocognition, overlapping with linguistic relativity — in how language and concepts determine our experience, identity, and sense of reality — constraining and framing and predetermining what we are even capable of perceiving, thinking about, and expressing:

Hypocognition is a censorship tool that mutes what we can feel
by Kaidi Wu

It is a strange feeling, stumbling upon an experience that we wish we had the apt words to describe, a precise language to capture. When we don’t, we are in a state of hypocognition, which means we lack the linguistic or cognitive representation of a concept to describe ideas or interpret experiences. The term was introduced to behavioural science by the American anthropologist Robert Levy, who in 1973 documented a peculiar observation: Tahitians expressed no grief when they suffered the loss of a loved one. They fell sick. They sensed strangeness. Yet, they could not articulate grief, because they had no concept of grief in the first place. Tahitians, in their reckoning of love and loss, and their wrestling with death and darkness, suffered not from grief but a hypocognition of grief. […]

But the darkest form of hypocognition is one born out of motivated, purposeful intentions. A frequently overlooked part of Levy’s treatise on Tahitians is why they suffered from a hypocognition of grief. As it turns out, Tahitians did have a private inkling of grief. However, the community deliberately kept the public knowledge of the emotion hypocognitive to suppress its expression. Hypocognition was used as a form of social control, a wily tactic to expressly dispel unwanted concepts by never elaborating on them. After all, how can you feel something that doesn’t exist in the first place?

Intentional hypocognition can serve as a powerful means of information control. In 2010, the Chinese rebel writer Han Han told CNN that any of his writings containing the words ‘government’ or ‘communist’ would be censored by the Chinese internet police. Ironically, these censorship efforts also muffled an abundance of praise from pro-leadership blogs. An effusive commendation such as ‘Long live the government!’ would be censored too, for the mere mention of ‘government’.

A closer look reveals the furtive workings of hypocognition. Rather than rebuking negative remarks and rewarding praises, the government blocks access to any related discussion altogether, rendering any conceptual understanding of politically sensitive information impoverished in the public consciousness. ‘They don’t want people discussing events. They simply pretend nothing happened… That’s their goal,’ Han Han said. Regulating what is said is more difficult than ensuring nothing is said. The peril of silence is not a suffocation of ideas. It is to engender a state of blithe apathy in which no idea is formed.

Do To Yourself As You Would Do For Others

“…our impulse control is less based on an order from our executive command center, or frontal cortex, and more correlated with the empathic part of our brain. In other words, when we exercise self-control, we take on the perspective of our future self and empathize with that self’s perspectives, feelings, and motivations.”
~ Alexander Soutschek

Self-control is rooted in self-awareness. Julian Jaynes and Brian McVeigh, in one of their talks, brought up the idea that “mind space” has increased over time: “The more things we think about, the more distinctions we make in our consciousness between A and B, and so on, the more mind-space there is” (Discussions with Julian Jaynes, ed. by Brian J. McVeigh, p. 40). The first expansion was the creation of introspective consciousness itself. Narratization allowed that consciousness to also extend across time, to imagine possibilities, play out scenarios, and consider consequences. Empathy, as we experience it, might be a side effect of this, as consciousness includes more and more within it, including empathy with our imagined future self. So, think of self-control as being kind to yourself, to your full temporal self, not only your immediate self.

This would relate to the suggestion that humans learn theory of mind, the basis of cognitive empathy, first by observing others and only later apply it to themselves. That is to say, the first expansion of mental space as consciousness takes root within relationship to others. It’s in realizing that there might be inner experience within someone else that we come to claim an inner space in our own experience. So, our very ability to understand ourselves is dependent on empathy with others. This was a central purpose of the religions that arose in the Axial Age, the traditions that continue into the modern world* (Tahere Salehi, The Effect of Training Self-Control and Empathy According to Spirituality on Self-Control and Empathy Preschool Female Students in Shiraz City). The prophets that emerged during that era taught love, compassion, and introspection, not only as an otherworldly moral dictum but also as a way of maintaining group coherence and the common good. The breakdown of what Jaynes called the bicameral mind was traumatic, and a new empathic mind was needed to replace it, if only to maintain social order.

Social order has become a self-conscious obsession ever since, as Jaynesian consciousness in its tendency toward rigidity has inherent weaknesses. Social disconnection is a crippling of the mind because the human psyche is inherently social. Imagining our future selves is a relationship with a more expansive sense of self. It’s the same mechanism as relating to any other person. This goes back to Johann Hari’s idea, based on Bruce K. Alexander’s rat park research, that the addict is the ultimate individual. In this context, this ultimate individual lacking self-control is not only disconnected from other people but also disconnected from themselves. Addiction is isolating and isolation promotes addiction. Based on this understanding, I’ve proposed that egoic consciousness is inherently addictive and that post-axial society is dependent on addiction for social control.

But this psychological pattern is seen far beyond addiction. It fits our personal experience of self. When we were severely depressed, we couldn’t imagine or care about the future. That definitely inhibited self-control and led to more impulsive behavior, as we operated in a present-oriented psychological survival mode. Then again, the only reason self-control is useful at all is because, during and following the Axial Age, humans ever more lost the capacity of being part of a communal identity that created the conditions of communal control, the externally perceived commands of archaic authorization through voice-hearing. We’ve increasingly lost the capacity for a communal identity (extended mind/self) and hence for a communal empathy, something that sounds strange or unappealing to the modern mind. This denial of our social nature casts the shadow of authoritarianism, an oppressive and often violent enforcement of top-down control.

By the way, this isn’t merely about psychology. Lead toxicity causes higher rates of impulsivity and aggression. That is not personal moral failure but brain damage from poisoning. Sure, teaching brain-damaged kids and adults to have more empathy might help them overcome their disability. But if we are to develop an empathic society, we should learn to have enough empathy not to wantonly harm the brains of others with lead toxicity and other causes of stunted development (malnutrition, stress, ACEs, etc.), just because they are poor or minorities and can’t fight back. Maybe we need to first teach politicians and business leaders basic empathy, in overcoming the present dominance of psychopathic traits, so that they can learn the self-control not to harm others.

The part of the brain involving cognitive empathy and theory of mind is generally involved with selflessness and pro-social behavior. To stick with brain development and neurocognitive functioning, let’s look at diet. Weston A. Price, in studying traditional populations that maintained healthy diets, observed what he called moral health in that people seemed kinder, more helpful, and happier — they got along well. A strong social fabric and a culture of trust are not abstractions but are built into general measures of health, in the case of Price’s work, having to do with nutrient-dense animal foods containing fat-soluble vitamins. As the standard American diet has worsened, so has mental health. That is, oddly enough, a reason for hope: if diet can degrade mental health, improving diet might help restore it. In an early study on the ketogenic diet as applied to childhood diabetes, the researchers made a side observation that not only did the diabetes symptoms improve but so did behavior. I’ve theorized about how a high-carb diet might be one of the factors that sustains the addictive and egoic self.

Narrow rigidity of the mind, as seen in the extremes of egoic consciousness, has come to be accepted as a social norm and even a social ideal. It is the social Darwinian worldview that has contributed to the rise of both competitive capitalism and the Dark Triad (psychopathy, narcissism, and Machiavellianism), and unsurprisingly it has led to a society that lacks awareness and appreciation of the harm caused to future generations (Scott Barry Kaufman, The Dark Triad and Impulsivity). Rather than being normalized, maybe this dysfunction should be seen as a sickness, not only a soul sickness but a literal sickness of the body-mind that can be scientifically observed and measured, not to mention medically and socially treated. We need to thin the boundaries of the mind so as to expand our sense of self. Research shows that those with such thinner boundaries not only have a greater sense of identification with their future selves but also with their past selves, maintaining a connection to what it felt like to be a child. We need to care for ourselves and others in the way we would protect a child.

* * *

* In their article “Alone and aggressive”, A. William Crescioni and Roy F. Baumeister included the loss of meaning among the effects of social rejection. That loss is perhaps associated with the loss of empathy, specifically in understanding the meaning of others (e.g., the intention ‘behind’ words, gestures, and actions). Meaning traditionally has been the purview of religion. And I’d suggest that it is not a coincidence that the obsession with meaning arose in the Axial Age, right when words were invented for ‘religion’ as a formal institution separate from the rest of society. As Julian Jaynes argues, this was probably in response to the sense of nostalgia and longing that followed the silence of the gods, spirits, and ancestors.

A different kind of social connection had to be taught, but this post-bicameral culture wasn’t and still isn’t as effective in re-creating the strong social bonds of archaic humanity. Periods of moral crisis in fear of societal breakdown have repeated ever since, like a wound that was never healed. I’ve previously written about social rejection and aggressive behavior in relation to this (12 Rules for Potential School Shooters) — about school shooters, I explained:

Whatever they identify or don’t identify as, many and maybe most school shooters were raised Christian and one wonders if that plays a role in their often expressing a loss of meaning, an existential crisis, etc. Birgit Pfeifer and Ruard R. Ganzevoort focus on the religious-like concerns that obsess so many school shooters and note that many of them had religious backgrounds:

“Traditionally, religion offers answers to existential concerns. Interestingly, school shootings have occurred more frequently in areas with a strong conservative religious population (Arcus 2002). Michael Carneal (Heath High School shooting, 1997, Kentucky) came from a family of devoted members of the Lutheran Church. Mitchell Johnson (Westside Middle School shooting, 1998, Arkansas) sang in the Central Baptist Church youth choir (Newman et al. 2004). Dylan Klebold (Columbine shooting, 1999, Colorado) attended confirmation classes in accordance with Lutheran tradition. However, not all school shooters have a Christian background. Some of them declare themselves atheists…” (The Implicit Religion of School Shootings).

Princeton sociologist Katherine Newman, in studying school shootings, has noted that, “School rampage shootings tend to happen in small, isolated or rural communities. There isn’t a very direct connection between where violence typically happens, especially gun violence in the United States, and where rampage shootings happen” (Common traits of all school shooters in the U.S. since 1970).

It is quite significant that these American mass atrocities are concentrated in “small, isolated or rural communities” that are “frequently in areas with a strong conservative religious population”. That might more precisely indicate who these school shooters are and what they are reacting to. Also, one might note that rural areas in general, and the South in particular, do have high rates of gun-related deaths, although many of them are listed as ‘accidental’, which is to say most rural shootings involve people who know each other; the same is true of school shootings.

* * *

Brain stimulation reveals crucial role of overcoming self-centeredness in self-control
by Alexander Soutschek, Christian C. Ruff, Tina Strombach, Tobias Kalenscher and Philippe N. Tobler

Empathic Self-Control
by David Shoemaker

People with a high degree of self-control typically enjoy better interpersonal relationships, greater social adjustment, and more happiness than those with a low degree of self-control. They also tend to have a high degree of empathy. Further, those with low self-control also tend to have low empathy. But what possible connection could there be between self-control and empathy, given that how one regulates oneself seems to have no bearing on how one views others. Nevertheless, this paper aims to argue for a very tight relation between self-control and empathy, namely, that empathy is in fact one type of self-control. The argument proceeds by exploring two familiar types of self-control, self-control over actions and attitudes, the objects for which we are also responsible. Call the former volitional self-control and the latter rational self-control. But we also seem to be responsible for—and have a certain type of control and self-control over—a range of perceptual states, namely, those in which we come to see from another person’s perspective how she views her valuable ends and what her emotional responses are to their thwarting or flourishing. This type of empathic self-control is a previously-unexplored feature of our interpersonal lives. In addition, once we see that the type of empathy exercised is also exercised when casting ourselves into the shoes of our future selves, we will realize how intra-personal empathy better enables both volitional and rational self-control.

Science Says When Self-Control Is Hard, Try Empathizing With Your Future Self
by Lindsay Shaffer

Soutschek’s study also reveals what happens when we fail to exercise the empathic part of our brain. When Soutschek interrupted the empathic center of the brain in 43 study volunteers, they were more likely to take a small amount of cash immediately over a larger amount in the future. They were also less inclined to share the money with a partner. Soutschek’s study showed that the more people are stuck inside their own perspective, even just from having the empathic part of their brain disrupted, the more likely they are to behave selfishly and impulsively.

Self-Control Is Just Empathy With Your Future Self
by Ed Yong

This tells us that impulsivity and selfishness are just two halves of the same coin, as are their opposites restraint and empathy. Perhaps this is why people who show dark traits like psychopathy and sadism score low on empathy but high on impulsivity. Perhaps it’s why impulsivity correlates with slips among recovering addicts, while empathy correlates with longer bouts of abstinence. These qualities represent our successes and failures at escaping our own egocentric bubbles, and understanding the lives of others—even when those others wear our own older faces.

New Studies in Self Control: Treat Yourself Like You’d Treat Others
from Peak

A new study recently shifted the focus to a different mechanism of self control. Alexander Soutschek and colleagues from the University of Zurich believe self-control may be related to our ability to evaluate our future wants and needs.

The scientists suggest that this takes place in an area of the brain called the rTPJ, which has long been linked to selflessness and empathy for others. It’s an important part of our ability to “take perspectives” and help us step into the shoes of a friend.

The scientists hypothesized that perhaps the rTPJ treats our “future self” the same way it treats any other person. If it helps us step into our friend’s shoes, maybe we can do the same thing for ourselves. For example, if we’re deciding whether to indulge in another pint of beer at a bar, maybe our ability to hold off is related to our ability to imagine tomorrow morning’s hangover. As science writer Ed Yong explains, “Think of self-control as a kind of temporal selflessness. It’s Present You taking a hit to help out Future You.”

Empathy for Your Future Self
by Reed Rawlings

Further Research on the TPJ

The results of Soutschek’s team were similar to past work on empathy, the future self, and the TPJ. It’s believed a better-connected rTPJ increases the likelihood of prosocial behaviors, which relates to skills of executive function. Individuals who exhibit lower empathy score higher for impulsivity – the opposite of self-control.

Keeping our future selves in mind may even keep our savings in check. In this research, Stanford University researchers tested “future self-continuity”. They wanted to explore how individuals related to their future selves. Participants were asked to identify how they felt about the overlap between their current and future selves, using Venn diagrams for this exercise.

If they saw themselves as separate, they were more likely to choose immediate rewards. A greater overlap increased the likelihood of selecting delayed rewards. In their final study, they assessed individuals from the San Francisco Bay area. The researchers found a correlation between wealth and an overlap between selves.

While the above research is promising, it doesn’t paint a full picture. Empathy seems useful, but making a sacrifice for our future-self requires that we understand the reason behind it. It’s the sacrifice that is especially crucial – positive gains demand negative trade-offs.

That’s where altruism, our willingness to give to others, comes in.

Why Do We Sacrifice?

Research from the University of Zurich examined some of altruism’s driving factors. Their work came up with two correlations. First, the larger your rTPJ, the more likely you are to behave altruistically. Second, concerns about fairness affect how we give.

In this experiment, individuals were more generous if their choice would decrease inequality. When inequality would increase, participants were less likely to give.

This is an understandable human maxim. We have little reason to give to an individual who has more than we do. It feels completely unfair to do so. However, we’re raised to believe that helping those in need is objectively good. Helping ourselves should fall under the same belief.

Empathy and altruism, when focused on our own well-being, are intimately linked. To give selflessly, we need to have a genuine concern for another’s well-being. In this case, the ‘other’ is our future self. Thankfully, with a bit of reflection, each of us can gain a unique insight into our own lives.

Alone and aggressive: Social exclusion impairs self-control and empathy and increases hostile cognition and aggression.
by A. William Crescioni and Roy F. Baumeister
from Bullying, Rejection, and Peer Victimization, ed. by Monica J. Harris
pp. 260-271 (full text)

Social Rejection and Emotional Numbing

Initial studies provided solid evidence for a causal relationship between rejection and aggression. The mechanism driving this relationship remained unclear, however. Emotional distress was perhaps the most plausible mediator. Anxiety has been shown to play a role in both social rejection (Baumeister & Tice, 1990) and ostracism (Williams et al., 2000). Emotional distress, however, was not present in these experiments by Twenge et al. (2001). Only one significant mood effect was found, and even this effect deviated from expectations. The sole difference in mood between rejected and accepted participants was a slight decrease in positive affect. Rejected participants did not show any increase in negative affect; rather, they showed a flattening of affect, in particular a decrease in positive affect. This mood difference did not constitute a mediator of the link between rejection and aggression. It did, however, point toward a new line of thinking. It was possible that rejection would lead to emotional numbing rather than causing emotional distress. The flattening of affect seen in the previous set of studies would be consistent with a state of cognitive deconstruction. This state is characterized by an absence of emotion, an altered sense of time, a fixation on the present, a lack of meaningful thought, and a general sense of lethargy (Baumeister, 1990). […]

Rejection and Self-Regulation

Although the emotional numbness and decrease in empathy experienced by rejected individuals play an important role in the link between social rejection and aggression, these effects do not constitute a complete explanation of why rejection leads to aggression. The diminished prosocial motivations experienced by those lacking in empathy can open the door to aggressive behavior, but having less of a desire to do good and having more of a desire to do harm are not necessarily equivalent. A loss of empathy, paired with the numbing effects of rejection, could lead individuals to shy away from those who had rejected them rather than lashing out. Emotional numbness, however, is not the only consequence of social rejection.

In addition to its emotional consequences, social rejection has adverse effects on a variety of cognitive abilities. Social rejection has been shown to decrease intelligent (Baumeister, Twenge, & Nuss, 2002) and meaningful thought (Twenge et al., 2002). But another category of cognitive response is self-regulation. Studies have demonstrated that self-regulation depends upon a finite resource and that acts of self-regulation can impair subsequent attempts to exercise self-control (Baumeister, Bratslavsky, Muraven, & Tice, 1998). Self-regulation has been shown to be an important tool for controlling aggressive impulses. Stucke and Baumeister (2006) found that targets whose ability to self-regulate had been depleted were more likely to respond aggressively to insulting provocation. DeWall, Baumeister, Stillman, and Galliot (2007) found that diminished self-regulatory resources led to an increase in aggression only in response to provocation; unprovoked participants showed no increase in aggressive behavior. Recall that in earlier work (Twenge et al., 2002) rejected individuals became more aggressive only when the target of their aggression was perceived as having insulted or provoked them. This aggression could have been the result of the diminished ability of rejected participants to regulate their aggressive urges. […]

These results clearly demonstrate that social rejection has a detrimental effect on self-regulation, but they do not explain why this is so and, indeed, the decrement in self-regulation would appear to be counterproductive for rejected individuals. Gaining social acceptance often involves regulating impulses in order to create positive impressions on others (Vohs, Baumeister, & Ciarocco, 2005). Rejected individuals should therefore show an increase in self-regulatory effort if they wish to create new connections or prevent further rejection. The observed drop in self-regulation therefore seems maladaptive. The explanation for this finding lies in rejection’s effect on self-awareness.

Self-awareness is an important prerequisite of conscious self-control (Carver & Scheier, 1981). Twenge et al. (2002) found that, when given the option, participants who had experienced rejection earlier in the study were more likely to sit facing away from rather than toward a mirror. Having participants face a mirror is a common technique for inducing self-awareness (Carver & Scheier, 1981), so participants’ unwillingness to do so following rejection provides evidence of a desire to avoid self-awareness. A drop in self-awareness is part of the suite of effects that comprises a state of cognitive deconstruction. Just as emotional numbness protects rejected individuals from the emotional distress of rejection, a drop in self-awareness would shield against awareness of personal flaws and shortcomings that could have led to that rejection. The benefit of this self-ignorance is that further distress over one’s inadequacies is mitigated. Unfortunately, this protection carries the cost of decreased self-regulation. Because self-regulation is important for positive self-presentation (Vohs et al., 2005), this drop in self-awareness could ironically lead to further rejection. […]

These data suggest that social rejection does not decrease the absolute ability of victims to self-regulate but rather decreases their willingness to exert the effort necessary to do so. Increased lethargy, another aspect of cognitive deconstruction, is consistent with this decrease in self-regulatory effort. Twenge et al. (2002) found that social rejection led participants to give shorter and less detailed explanations of proverbs. Because fully explaining the proverbs would require an effortful response, this shortening and simplification of responses is evidence of increased lethargy amongst rejected participants. This lethargy is not binding, however. When given sufficient incentive, rejected participants were able to match the self-regulatory performance of participants in other conditions. Inducing self-awareness also allowed rejected individuals to self-regulate as effectively as other participants. In the absence of such stimulation, however, rejected individuals showed a decrement in self-regulatory ability that constitutes an important contribution to explaining the link between rejection and aggression. […]

Rejection and Meaningfulness

Twenge et al. (2002) found that social rejection led to a decrease in meaningful thought among participants, as well as an increased likelihood to endorse the statement, “Life is meaningless.” Williams (2002) has also suggested that social rejection ought to be associated with a perception of decreased meaning in life. Given the fundamental nature of the need to belong, it makes sense that defining life as meaningful would be at least in part contingent on the fulfillment of social needs. A recent line of work has looked explicitly at the effect of social rejection on the perception of meaning in life. Perceiving meaning in life has been shown to have an inverse relationship with hostility, aggression, and antisocial attitude (Mascaro, Morey, & Rosen, 2004). As such, any decrease in meaning associated with social rejection would constitute an important feature of the explanation of the aggressive behavior of rejected individuals.

The God of the Left Hemisphere:
Blake, Bolte Taylor and the Myth of Creation
by Roderick Tweedy

The left hemisphere is competitive… the will to power…is the agenda of the left hemisphere. It arose not to communicate with the world but to manipulate it. This inability to communicate or co-operate poses great difficulties for any project of reintegration or union. Its tendency would be to feed off the right hemisphere, to simply use and gain power over it too. Left hemisphere superiority is based, not on a leap forward by the left hemisphere, but on a ‘deliberate’ handicapping of the right. There is perhaps as much chance of persuading the head of a multinational to stop pursuing an agenda of self-interest and ruthless manipulation as there is of persuading the Urizenic program of the brain which controls him of “resubmitting” itself to the right hemisphere’s values and awareness.

The story of the Western world being one of increasing left-hemispheric domination, we would not expect insight to be the key note. Instead we would expect a sort of insouciant optimism, the sleepwalker whistling a happy tune as he ambles towards the abyss.

The left, rational, brain, it might be safe to conclude, has no idea how serious the problem is, that is to say, how psychopathic it has become. Of course, it doesn’t care that it doesn’t care. “The idiot Reasoner laughs at the Man of Imagination/And from laughter proceeds to murder by undervaluing calumny”, noted Blake in a comment that is only remarkable for the fact that it has taken two hundred years to understand.

The apparently “conscious” rational self, the driving program and personality of the left brain, turns out to be deeply unconscious, a pathological sleepwalker blithely poisoning its own environment whilst tenaciously clinging onto the delusion of its own rightness. This unfortunate mixture, of arrogance and ignorance, defines contemporary psychology. The left hemisphere not only cannot see that there is a problem, it cannot see that it is itself the problem.

The Yak Horns of Technology

A rustic came to a lama and asked him to teach him meditation. And the monk, realizing the mental aptitude of the enthusiast, told him to sit in a quiet place and meditate on a yak. The simpleton did as he was directed. After some time, when the monk came back to find out his progress, he asked him to come out from the secluded apartment. The rustic said, “How can I come out? The door is too small. These horns of mine do not allow me to get out.”

M.K. Spencer, relating a story told by Alexandra David-Néel

Working in public service offers plentiful opportunities for observing humans. My job is as a parking ramp cashier, and the scenario forces drivers into specific options. There are multiple exit lanes, each with signs and machines, some with cashiers. One amusing pattern is how, once an individual enters a lane, others will often pile up behind them in a long line even though the other lanes are empty. It’s mindless herd mentality and normal human behavior. We are social animals, after all. Following others and doing what they do is a mental shorthand. It works most of the time.

There is another example that is even more amusing and odd. It is also different because it is less universal, involving a specific demographic, mostly young people. Some of the lanes have self-pay stations and there are sometimes problems, as often from user error as from technological failure. There is a ‘help’ button a customer can push to get immediate assistance, but many customers back up and go to a lane with a cashier. The problem is they usually forget to get their ticket back from the machine by hitting the ‘cancel’ button. So, they show up at my window without a ticket. I tell them they need to go get their ticket because otherwise they’ll be charged for a lost ticket.

This gets their attention, and this is where it gets interesting. Older people might get irritable at the inconvenience, but they’ll usually get out of their car and walk over to the other lane to retrieve their ticket. Nothing complicated, just common sense, right? Well, let’s introduce into the equation someone in their late teens or early twenties, which at this point means those in Generation Z. Then the response is typically far different.

Upon hearing my explanation of the situation, the young person often looks at me with befuddlement and will tell me they don’t know how to get their ticket because a car pulled behind them. They try to figure out how to drive their car back over… and never doubt that they will try, no matter how much traffic is backed up behind them. If I don’t tell them to get out of their car and walk over, they might struggle for minutes or longer in a state of incomprehension. I usually help them out, but not always. I sometimes leave it as an experiment to see how long it will take them to realize they can get out of their car and simply walk over there.

Kids these days, I tell ya. I’m not without sympathy. It’s not their fault since it is how they’ve been raised, surrounded by and immersed in technology. It’s hard for them to think how to act without technology, to think outside of it. Of course, this makes them very adept in using technology, but sometimes technology is plain unhelpful. Sometimes, you have to get out of your car or get out of whatever other device your mind is trapped within. Those yak horns are only in your imagination.

“Individuation is not the culmination of the person; it is the end of the person.”

Julian Jaynes and the Jaynesian scholars have made a compelling argument about where egoic consciousness originated and how it formed. But in all the Jaynesian literature, I don’t recall anyone suggesting how to undo egoic consciousness, much less suggesting we should attempt annihilation of the demiurgic ego.

That latter project is what preoccupied Carl Jung, and it is what Peter Kingsley has often written about. They suggest it is not only possible but inevitable. In a sense, the ego is already dead and we are already in the underworld. We are corpses and our only task is to grieve.

The Cry of Merlin: Carl Jung and the Insanity of Reason
Gregory Shaw on Peter Kingsley

Kingsley explains that Jung emulated these magicians, and his journey through the Underworld followed the path of Pythagoras, Parmenides and Empedocles. Jung translated the terminology of the ancients into “scientific” terms, calling the initiation he realized in the abyss “individuation.” For Jungians today, individuation is the culmination of psychic development, as if it were our collective birthright. Yet Kingsley points out that this notion of individuation is a domestication, commodification, and utter distortion of what Jung experienced. Individuation is not the culmination of the person; it is the end of the person. It is the agonizing struggle of becoming a god and a person simultaneously, of living in contradictory worlds, eternity and time.

Kingsley reveals that although individuation is the quintessential myth of Jung’s psychology, it is almost never experienced because no one can bear it. Individuation is the surrendering of the personal to the impersonal, and precisely what Jung experienced it to be, the death of his personality. Jung explains that individuation is a total mystery; the mystery of the Grail that holds the essence of God. According to Henry Corbin, Jung saw “true individuation as becoming God or God’s secret.” Put simply, individuation is deification. To his credit, over twenty years ago Richard Noll argued this point and wrote that Jung experienced deification in the form of the lion-headed Mithras (Leontocephalus), but Kingsley gives the context for deification that Noll does not, and the context is crucial. He shows that Jung’s deification was not an “ego trip” that gave rise to “a religious cult with [Jung] as the totem,” Noll’s assumption; nor was it a “colossal narcissism,” as Ernest Jones suggested, but precisely the opposite. Individuation cuts to the very core of self-consciousness; it is the annihilation of the ego, not its inflation. […]

What is fundamentally important about Catafalque is that Kingsley demonstrates convincingly that Jung recovered the shamanic path exemplified by Pythagoras, Parmenides, and Socrates. Jung tried to save us from the “insanity of reason” by descending to the underworld, serving the archetypes, and disavowing the impiety of “the Greeks” who reduce the sacred to rationalizations. There is much in Catafalque I have not addressed, perhaps the most important is Kingsley’s discussion of the Hebrew prophets who raged against a godless world. Kingsley here appropriately includes Allen Ginsberg’s Howl, that draws from the rhythms of these prophets to wail against the “insanity of America,” its mechanized thinking, suffocating architecture, and the robotic efficiency that is the child of Reason. This almost verbatim mirrors the words of Jung who, after visiting New York, says “suppose an age when the machine gets on top of us …. After a while, when we have invested all our energy in rational forms, they will strangle us…They are the dragons now, they became a sort of nightmare.”

Kingsley ends Catafalque with depressing prophecies about the end of western civilization, both from Jung and from Kingsley himself. The great wave that was our civilization has spent itself. We are in the undertow now, and we don’t even realize it. To read these chapters is to feel as if one is already a corpse. And Kingsley presents this so bluntly, with so much conviction, it is, frankly, disturbing. And even though Kingsley writes that “Quite literally, our western world has come to an end,” I don’t quite believe him. When speaking about Jung giving psychological advice, Kingsley says “make sure you have enough mētis or alertness not to believe him,” and I don’t believe Kingsley’s final message either. Kingsley’s message of doom is both true and false. The entire book has been telling us that we are already dead, that we are already in the underworld, but, of course, we just don’t understand it. So, then he offers us a very physical and literal picture of our end, laced with nuclear fallout and images of contamination. And he forthrightly says the purpose of his work is “to provide a catafalque for the western world.” It is, he says, time to grieve, and I think he is right. We need to grieve for the emptiness of our world, for our dead souls, our empty lives, but this grief is also the only medicine that can revive the collective corpse that we have become. Kingsley is doing his best to show us, without any false hope, the decaying corpse that we are. It is only through our unwavering acceptance, grieving and weeping for this, that we can be healed. In Jung’s terms, only the death of the personal can allow for birth into the impersonal. Into what…? We cannot know. We never will. It is not for our insatiable minds.

Sugar is an Addictive Drug

Sugar is addictive. That is not a metaphor. It is literally an addictive drug, a gateway drug. Sugar is the first drug that most humans ever experience.

For many Americans, the addictive nature of it begins shaping the brain in infancy, as sweeteners are put into formula. And if you didn’t get formula, I bet you didn’t make it past toddlerhood without getting regularly dosed with sugar: sweet baby food, candy, cake, etc.

Addiction is trained into us during the most critical years of physiological development. What we eat in the first few years, as research shows, determines what tastes good to us for the rest of our lives. We are hooked.

(I’ve previously written on food addiction: The Agricultural Mind; & Diets and Systems.)

* * *

WHAT IS FOOD ADDICTION?
By H. Theresa Wright, MS, RD, LDN and Joan Ifland, PhD

The addictive properties of sugar are perhaps the most studied.[6] Rats will choose sugar, high fructose corn syrup, and saccharin over cocaine and heroin. Rats have shown a withdrawal syndrome similar to that of morphine [7]. Sugar activates the dopamine pathway. [8] Food addiction recovery groups often recommend abstinence from sugar and sweeteners. [8]

Experts Agree: Sugar Might Be as Addictive as Cocaine
by Anna Schaefer and Kareem Yasin

Indeed, research on rats from Connecticut College has shown that Oreo cookies activate more neurons in the brain’s pleasure center than cocaine does (and just like humans, the rats would eat the filling first). And a 2008 Princeton study found that, under certain circumstances, not only could rats become dependent on sugar, but this dependency correlated with several aspects of addiction, including craving, binging, and withdrawal.

The case for treating sugar like a dangerous drug

German Lopez: Walk me through the argument for treating sugar like a controlled substance.

Robert Lustig: The definition of addicted is that you know it’s bad for you and you can’t stop anyway, like heroin, cocaine, alcohol, and nicotine. You know it’s bad for you. You know it will kill you. But you can’t stop anyway, because the biochemical drive to consume is greater than any cognitive ability to restrain oneself.

There are two phenomena attached to addiction: one’s called tolerance, the other is withdrawal. It turns out sugar does both of those as well.

If a substance is abused and addictive and it contributes to societal problems, that’s criteria for regulation.

GL: Is that really grounds for considering it a controlled substance, though?

RL: There are four things that have to be met in order to consider a substance worthy of regulation. Number one: ubiquity — you can’t get rid of it, it’s everywhere. Number two: toxicity — it has to hurt you. Number three: abuse. Number four: externalities, which means it has a negative impact on society.

Sugar meets all four criteria, hands down. One, it’s ubiquitous — it’s everywhere, and it’s cheap. Two, as I mentioned, we have a dose threshold, and we are above it. Three, if it’s addictive, it’s abused. Four, how does your sugar consumption hurt me? Well, my employer has to pay $2,750 per employee for obesity management and medicine, whether I’m obese or not.

GL: The thing that led me to look into your paper is that I wrote an article a couple weeks back about how the three most dangerous drugs in the country are legal: tobacco, alcohol, and prescription painkillers. And a few people mentioned that I forgot sugar. That idea really interested me.

RL: Yeah, that’s right. The Wall Street Journal asked Americans what are the most dangerous of four substances in America: tobacco, 49 percent; alcohol, 24 percent; sugar, 15 percent; and then marijuana, 8 percent. Sugar was doubly worrisome to Americans than marijuana was. How about that?

GL: One potential hurdle is that controlled substances are typically seen as drugs. Do you consider sugar a drug?

RL: Of course it’s a drug. It’s very simple: a drug is a substance that has effects on the body, and the effects have to be exclusive of calories.

So in order to qualify it as a drug, the negative effects of sugar have to be exclusive of its calories. Is 100 calories of sugar different from, say, 100 calories in broccoli? The answer is absolutely.

Can you name another substance of abuse for which the effect of the substance is more dangerous than the calories it harbors? Alcohol. Its calories are dangerous not because they’re calories; they’re dangerous because they’re part of alcohol. Sugar is the same.

Sugar is the alcohol of a child. You would never let a child drink a can of Budweiser, but you would never think twice about a can of Coke. Yet what it does to the liver, what it does to the arteries, what it does to the heart is all the same. And that’s why we have adolescents with type 2 diabetes.

 

There are some studies of rats that are completely addicted to cocaine. So they have this drip, cocaine just comes out, and so they’re consuming it all the time. This is the crazy part. As soon as they taste sugar, they don’t care about the cocaine anymore and all they care about is the sugar. That is how addictive sugar is. It’s so addictive that rats that are addicted to cocaine, which we all know is an addictive substance, would prefer the sugar over cocaine.

There is another study where rats are pulling a cord, and every time they pull the cord a little bit, a little drip of sugar water comes out. So they’re confined into this space and that is all they get. So then they learn to pull the cord so that they can get their drip of sugar. And over time the researchers open the door so that they have access to the outside. They even have access to family and they have access to all these other foods.

And guess what these rats do. They don’t care about anything else, but they just wait and wait and obsessively pull the cord to try to get sugar. This is how scary and addictive sugar is.

 

Fat Chance: Fructose 2.0 by Dr. Robert Lustig (Transcript)

So the question is, is fast food addictive? What do you think? Yes? No? Okay, so we actually looked at that question.

So everybody familiar with this book? Michael Moss put this out, “Salt, sugar, fat, how the giants hooked us”, right? This is wrong, this is a mistake. Because there is one thing not on the list. What’s missing? Caffeine.

Now we’ve got fast food! Okay, salt, sugar, fat and caffeine, right? So the question is, of these four which are addictive?

Let’s talk about salt. Is salt addictive? No, it’s not addictive. In humans the threshold is physiologically fixed, higher levels are attributable to preference but you can alter that preference, lots of people do especially when they have to go low salt for some reason. And we know because we take care of a disease in endocrinology called salt-losing congenital adrenal hyperplasia where their kidneys are losing salt non stop. But when we give them the salt retaining hormone that works in the kidney called aldosterone, their salt intake goes way down. And if they were addicted that wouldn’t happen.

So when we fix their physiology, their preference gets a lot better. So salt? Not addictive.

Now let’s take fat. Is fat addictive? What do you think? Nope, rodents binge but show no signs of dependence, and humans they always binge on high fat high carb or high sugar items, like pizza and ice cream, you don’t binge on high fat per se, otherwise the Atkins diet would have everybody addicted and they’ll tell you, you know they are losing weight, how could they lose weight if they are all addicted?

Energy density actually has a stronger association with obesity and metabolic syndrome than fat does.

So, fat? Not addictive.

So we are left with these two. Caffeine? Oh man, caffeine is addictive and if you take my Starbucks away from me I’ll kill you. Model drug of dependence, gateway drug in fact, dependence shown in children, adolescents, and adults; 30% of those who consume it meet the DSM criteria for dependence, and physiological addiction is well established with the headache, and the test performance, and everything else. Mega addictive.

But do you see anybody going out and regulating Starbucks or pizza or anything like that? Why? Because it’s not toxic. It’s addictive, but not toxic, unless you mix it with alcohol and then you’ve got something called Four Loko, and that we are banning, everybody got it?

So when it’s toxic and addictive we ban it or we regulate it. And so, caffeine and alcohol together, that’s a bad deal. But caffeine alone? Keep your hands off my Starbucks.

So caffeine? Yes, addictive.

Okay, that leaves this one. Sugar, is sugar addictive? What do you think? You know, we’ve known this for a long time, because, anybody know what this is? It’s called sweeties. This is a super concentrated sucrose, sugar solution, that you dip the pacifier in and you put in the newborn baby boy’s mouth before you do the circumcision, because it releases opioids and deadens the pain. And this has been known forever. Then you mix it with a little wine and then you got a really good cocktail, eh?

So is there really such a thing as sugar addiction, we have to look for similarities to other drugs of dependence like nicotine, morphine, amphetamine, cocaine. The one I think is most appropriate is alcohol, because after all alcohol and sugar are basically metabolized the same way, because after all where do you get alcohol from? Fermentation of sugar, it’s called wine, right? We do it every day, up in Sonoma. The big difference between alcohol and sugar is that for alcohol the yeast does the first step of metabolism called glycolysis; for sugar we do our own first step, but after that when the mitochondria see it, it doesn’t matter where it came from. And that’s the point, and that’s why they both cause the same diseases. And they do the same thing to the brain.

So the criteria for addiction in animals are bingeing, withdrawal, craving, and then there is one down here called cross-sensitization with other drugs of abuse. That means that if you expose an animal to one drug of abuse, like cocaine for 3 weeks, and addict them, and then you expose them to a second drug they’ve never seen before, like say amphetamine, they’re addicted to the amphetamine even though they’d never seen it before, because the dopamine receptors are already down-regulated, because they are the same dopamine receptors, everybody got it?

Okay, and so, does sugar do this? Absolutely. Q.E.D. slammed on, sugar is addictive in animals.

What about humans? Who saw this movie? Right? Did you like it? More or less?

I’ve a big problem with this movie, because if you watch the movie his doctor, Morgan’s doctor keeps saying: “You gotta get off this high fat diet, high fat diet, high fat diet, high fat diet, high fat diet” Not the high fat diet, it’s the high sugar diet, high sugar diet, that’s what caused all the problems.

So, can sugar be addictive? Watch.

“I was feeling bad” “In the car, feeling like…I was feeling really, really sick and unhappy…started eating, feel great…feel really good now… I feel so good as crazy… Ain’t that right baby? Yeah you’re right darling”

This was on day 18 of his 30-day sojourn at McDonald’s. He just described withdrawal, that’s withdrawal, and he needed another hit in order to feel good again. He just described withdrawal. He was a vegan, right? Because his girlfriend was a vegan chef and in 18 days he’s a sugar addict.

So, you tell me. So this is what we are dealing with. We are dealing with an industry that wants us to consume its product, well gee, every industry wants us to consume their product in some fashion or another, the question is what if it hurts you? What if it hurts you?

The Dark Mind of Robert David Steele

There is an area of social science research that speaks powerfully to the reactionary mind and why it is so hard to pin down. In a reactionary society such as ours, during this reactionary age of modernity, it can be hard to tell who is and who is not a reactionary. I suspect that all of us have a bit of reactionary in us, as potential that can become manifest when we let down our guard. One of the tricky parts is that reactionaries rarely identify as reactionaries, nor would they think of themselves that way. That is part of the nature of the reactionary mind, to appear as something else, even to the person possessed by it. To map out the terrain, it’s helpful to look to the Dark Triad — the potent mix of psychopathy, narcissism, and Machiavellianism. The third facet, less often discussed, is my focus here (Silvio Manno, The dangerous falsehoods fabricated by Machiavellian leaders afflict the world today).

Machiavellianism relates to suspicious paranoia that can express as belief in conspiracy theories. We tend to think of this tendency in negative terms, but let’s keep in mind that, “On the positive side, belief in conspiracy theories has been associated with openness to experience… and support for democratic principles” (Sutton & Douglas, see below). As it has been said, just because you’re paranoid doesn’t mean they aren’t out to get you. Maintaining an attitude of mistrust toward the threat of authoritarianism is a reasonable and moral response. Yet on the other hand, mistrust pushed to the extreme makes one vulnerable to the lures of the reactionary mind, fear turned in on itself and projected out onto others. A deficit of trustworthy sources of information, as happens under oppressive conditions, creates a vacuum that must be filled, and people do their best to make sense of the patterns they perceive. This is not a healthy situation. When a culture of trust is lacking, people perceive others as untrustworthy and they act accordingly. “Machiavellianism predicted participants’ agreement with conspiracy theories,” wrote Sutton and Douglas. “Also, participants’ personal willingness to conspire predicted the extent to which they endorsed the conspiracy theories. This mediated the relationship between Machiavellianism and endorsement of conspiracy theories.” This is how the dark triad comes to dominance, in the world and in the mind. It warps our sense of reality and creates warped individuals.

Just think of Trump and you have the idiot savant’s version of this phenomenon (heavy emphasis on the idiot part), although I’d advise careful awareness as it can express in a much more sophisticated manner (e.g., Karl Rove and his cynical manipulation of the “reality-based community”). Even so, let’s stick with this obvious example for the very reason that apparently it isn’t obvious to many. There are those who think of themselves as good people, shocking as it may seem, who genuinely believe and have faith in Trump (I’ve already analyzed the authoritarianism of Clinton Democrats and so I will ignore that for the time being). I know such people. Some of them are simply not all that thoughtful and so are easily manipulated by lies, melodrama, partisanship, and whatever other bullshit. I have a hard time being too harshly critical, as many of them really don’t understand anything about what is going on in the world. They are useful idiots to the social dominators aspiring to their authoritarian dreams, but they honestly don’t have a clue what they’re being used for. This makes them potentially dangerous, even if they are less of a direct threat. There is another class of Trump supporter, though, that is far more dangerous and concerning, not to mention bewildering.

Consider Robert David Steele, a military officer and supposedly a former (?) CIA spy who has since re-styled himself as a political reformer, open source advocate, and freedom fighter. Going by my initial take, he comes across as a right-wing nationalist and populist with a Cold War vibe about him, the weird mix of religious patriotism and pseudo-libertarianism, capitalist realism and regressive flirtations with progressive language… or something like that, although when he is criticizing corrupt power and advocating open source he can almost sound like a leftist at times. He was the 2012 Reform Party’s presidential nominee and he is more well known, across the political spectrum, for advocating electoral reform. Some of what he says sounds perfectly reasonable and respectable, but he also makes some truly bizarre statements. He has claimed that the world is ruled by Zionists, especially Hollywood, that Hillary Clinton wants to legalize bestiality and pedophilia, and that NASA is sending abducted children to be sex slaves on a Martian colony (Kyle Mantyla, Robert David Steele: Hillary Clinton Was ‘Going To Legalize Bestiality And Pedophilia’; Ben Collins, NASA Denies That It’s Running a Child Slave Colony on Mars; Wikispooks, Robert Steele: Mars child colony claims). In his Zionist fear-mongering, he has associated with the likes of Jeff Rense, David Icke, and David Duke — as dementedly and dangerously far right as you can get without falling off the edge of flat earth.

I’m familiar with right-wing paranoiacs and I’m not without sympathy. There is a soft place in my heart for conspiracy theories and my curiosity has led me into dark corners of humanity, but I must admit that Steele is an extreme example among extremes. More than a few people think that, if not outright incompetent, he is controlled opposition and a paid fake, a disinfo agent, a fraud, hustling a buck, or that something is not right about him, maybe even that ‘once CIA, always CIA’, while it’s also been said he sounds like Alex Jones — the latter is understandable since he has been interviewed by Jones (Richard Wooley, Donald Trump, Alex Jones and the illusion of knowledge). The same accusations are made against Alex Jones as well and they do ring true. Some wealthy interests are promoting Jones and probably Steele too, for whatever reason that might be — the alt-right is filled with shills, paid trolls, and a variety of mercenaries (Competing Media Manipulations; Grassroots or Astroturf?; Skepticism and Conspiracy; Hillsdale’s Imprimis: Neocon Propaganda; Victor Davis Hanson: Right-Wing Propagandist; Berkeley Scholar Doesn’t Admit He Is A Corporate Shill). I’m not sure it matters whether or not Steele, Jones, and similar types are true believers. Either way, they’re influential figures to keep your eyes on.

Steele has also done talks and interviews with The Guardian’s Nafeez Ahmed, RT’s Max Keiser, Coast to Coast AM’s Lisa Garr, and many others, including multiple appearances on BBC Radio. His writings can be found in a wide variety of publications, such as: Forbes, Huffington Post, Veterans Today, CounterPunch, openDemocracy, etc. Articles about him and his election reform campaign have appeared in the mainstream media as well. Bernie Sanders and Thom Hartmann wrote prefaces to one of his books, and Howard Bloom wrote a foreword to another one. The guy gets around and draws some significant figures into his orbit. He also has appeared alongside the leftist citizen-journalist Caitlin Johnstone. She has sought cross-ideological alliance with the ‘anti-establishment’ right which unfortunately, I’d argue, is inseparable from the alt-right despite her claims to the contrary. She received a lot of flack and now regrets allowing herself to get associated with him: “I made a very unwise appearance alongside the very shady Robert David Steele” (A Year Ago I Wrote About Cross-Ideological Collaboration. Here’s How It’s Been Going). She got played by Steele, as did former Congresswoman Cynthia McKinney, although the latter was already well on her way to discrediting herself with conspiracy theories and antisemitism (see her page on Rational Wiki and on Discover the Networks). McKinney is obviously drawn to Steele because of his own inclinations toward conspiracy theories and antisemitism; but what is Johnstone’s excuse? Her husband, Tim Foley, says “she adores” McKinney and that is precisely how she got mixed up with Steele in the first place (10 Facts About Caitlin Johnstone, From The Guy Who Knows Her Better Than Anyone). Such unwise decisions seem inevitable once entering the murky waters and miasmic fog where swamp creatures dwell.

Johnstone’s husband blames himself for letting that situation happen, as he encouraged her to go on the show: “Before we knew it there she was, with Steele talking about how “the alt-right and the alt-left” need to come together, a position Caitlin never held, but in too much of a mental fog to protest” (10 Facts About Caitlin Johnstone, From The Guy Who Knows Her Better Than Anyone). That doesn’t seem accurate. After the show, she had a positive appraisal of Steele: “Here’s Cynthia McKinney, PhD and Robert David Steele coming to my defense over the right-left collaboration against the deep state I keep talking about.” (Facebook, July 21, 2017). Those words express no desire to protest nor a delayed realization that there was a potential problem. “If you recall, this is around the same time,” writes Scott Creighton, “that swindler Robert David Steele was pushing for the same “unite” cause but at least he was honest when he said he was doing it in order to bring the alt-left into the Trump camp in order to ensure his victory in 2020. That fraud fell apart and eventually Caitlin realized what a cretin [Mike] Cernovich was and she too gave up on this effort” (How Caitlin Johnstone is Just Plain Wrong about “Conspiracy Theories”).

This is how right-wing reactionaries seek legitimacy, by co-opting the rhetoric of the political left (e.g., Glenn Beck writing a book about Thomas Paine) and, by disguising their true intentions, drawing in those who otherwise would be resistant and unpersuaded (e.g., Steve Bannon as the architect behind Donald Trump using New Deal Progressive rhetoric as campaign promises). This is a lesson I learned in dealing with the alt-right. I used to debate with race realists such as human biodiversity advocates, until I realized all that I was accomplishing was giving them legitimacy in treating their views as worthy of public debate. It was irrelevant that they presented themselves as rational and weren’t explicitly racist, even in their denying racist allegations with shows of sincerity, as their rhetoric was making racism more acceptable by spinning it in new ways. That is their talent, spreading bullshit. Reactionaries are brilliant in manipulating the left in this manner. This is what worries me about Steele, in how he is able to speak to the concerns of the political left and then use the support he gains to promote Trump’s truly sick agenda or rather to promote the agenda of the lords and masters of the swamp hidden behind Trump’s buffoonery.

There is good reason Johnstone came around to calling Steele ‘shady’. His response to the free speech of others is to threaten their free speech. The economist Michael Hudson, among others, has written about Steele’s use of frivolous lawsuits to shut down opponents (Robert David Steele’s ‘Feral’ Lawsuit Movement). In writing about this anti-democratic behavior (Robert David Steele: The Pinocchio Effect), he drew the ire of Steele himself who, in a comment from just a couple of days ago, wrote: “Thank you for this. I have copied it to my attorney with the suggestion that we add you to the roster of those to be called to testify about the conspiracy to defame me. The facts are the facts. I have two witnesses, both employed by NATO, who will testify to the truth of my claim. You are now part of my lawsuit against Jason Goodman, Patricia Negron, and Susan Lutzke. Congratulations.” Instead of countering with a fair-minded response and fact-based counterargument, he immediately went on the attack to silence someone who dared oppose him, which ironically substantiates the mindset portrayed in the article itself. It’s even more amusing in the context that, a little less than a decade ago, Steele specifically told people they should “listen to” Michael Hudson (No Labels “Non-Party” Equals “Four More Years” for Wall Street, Goldman Sachs, Grand Theft USA). This demonstrates lizard-brain levels of moral depravity, and the hypocrisy of it is beyond depressing. He is the guy presenting himself as a defender of an open society. Obviously, he isn’t to be trusted.

Yet I can’t help but feel sorry for the guy. In the way that Trump appears to be exhibiting early onset dementia, I wouldn’t be surprised if Steele is suffering from paranoid schizophrenia or some other mental illness. Then again, maybe that is a given in a society that is insane. People become Machiavellian because that is how a Machiavellian society shapes them, and most definitely Steele is so shaped at this point, after having spent his entire career in right-wing authoritarian institutions of power, the military and CIA. That is what first occurred to me when my progressive friend asked me to look into him. The kind of anti-Zionist language he uses goes far beyond criticisms of Israel as an authoritarian state, in the way the United States is also authoritarian. In his Machiavellian-minded support of President Trump, Steele wants to believe that Trump’s outward show of support for Machiavellian ‘Zionists’ is a deceptive ploy of Machiavellian genius: “The announced move of the US Embassy to Jerusalem – what one erudite British citizen labels a “diplomatic bon-bon” [7] – may have been part of a deeper strategy to finish Benjamin Netanyahu off while uniting the Arab tribes” (Is Zionism Over?). Ah, the tangled webs the paranoid mind weaves. His obsession with conspiracy theories about Zionists and pedophilia rings is typical of a certain kind of right-wing mindset, but I’m not sure that he was always this way.

My friend was inspired by his book, The Open Source Revolution, written back in 2012. That book does not deal in conspiracy theory, as far as I can tell, nor does it once mention Zionism, pedophilia, etc. Here is a taste of it: “The goal is to reject money and concentrated illicitly aggregated and largely phantom wealth in favor of community wealth defined by community knowledge, community sharing of information, and community definition of truth derived in transparency and authenticity, the latter being the ultimate arbiter of shared wealth. When we relate and share knowledge authentically, this places us in a state of grace, a state of “win-win” harmony with all others, and establishes trust among all” (from excerpt). Sounds nice, inspiring even. He mentions how he had originally believed in Barack Obama before realizing he was more of the same. That is what led to his writing an earlier book, Election 2008: Lipstick on the Pig. By the time 2012 rolled around, his identity as a patriotic, paternalistic, and progressive Democrat was clearly changing. In the book from that year, he wrote that,

“Understanding and accepting this sorry state of affairs has been part of my own personal and professional rejection of American exceptionalism and the rule by an elite. This shift in perspective recognizes the need for a new planet-wide consciousness based on an open information sharing and direct democracy. For many years I thought that our elected representatives had been corrupted by corporations and, more recently, by banks (or, I should say, the people who use these structures as veils for their own unethical accumulation of profit). I was in error. As we now know from numerous cases, the most blatant being that of former Congressman Randy Cunningham, it is more often elected representatives who have been shaking down banks and corporations in order to fund their own ambitions to remain in power and to profit at the expense of the people.”

Though he was not yet speaking in the overt language of the conspiratorial-minded, his words were beginning to express more of that worldview. Rather than it being a systemic problem of capitalism and corporatism, it is the fault of devious individuals who manipulate the system. The elite, rather than being an enlightened technocracy, are something darker: in this black-and-white dogmatism, those in positions of power are either good or evil with no gray area, no shade or tint, much less nuances of color. Before, it was the banks that were the problem, but with his shift of focus it’s a small step to embracing the alleged child-molesting Zionists as the real source of power behind the banks. He used to talk about peaceful reform, but, in recent years, he has taken on more of the dark vision of Christian fundamentalism, with hints of gnostic-like demonic archons and End Times longing. Nonetheless, I was curious and felt a desire to give Steele a fair hearing. So, I used a web search to look for results from before Trump’s presidential campaign, before Obama’s administration, and before the 9/11 terrorist attack. He didn’t sound all that crazy in the past and, the further back I looked, the more normal he sounded.

Even in 2012 when he started ranting about Zionists, his tone was relatively mild, while also giving voice to anti-authoritarianism and anti-colonialism, almost left-wing in ideology (The after effects of the Arab Spring, good or bad for Israel?). It’s true that Steele was on Alex Jones’ show as early as 2006, but keep in mind that Jones was far less crazy back then and far more coherent in his own criticisms of corrupt and abusive power (Kourosh Ziabari, Google following CIA’s path in confronting Iran). It can be easy to forget that, when you go back far enough, Jones had a significant following on the political left. It was a different world before both Trump lunacy syndrome and Obama derangement syndrome. It’s been a slow but steady decline for people like this. Decades ago, all that Steele was known for was his open source advocacy, his argument that secrecy was a bad way of doing anything, especially government. There was nothing controversial about this, other than its being controversial to secretive authoritarians.

He went from that to his present belief that there are NASA Martian colonies filled with child sex slaves. In both cases, he comes across as wholly earnest, for whatever that is worth. Still, earnest or not, there might be forces greater than him that are using and manipulating him for purposes he does not fathom. Seeing Machiavellianism in others opens one up to manipulation by Machiavellian social dominators. If there actually were demonic/Satanic forces as he believes, then one might suggest he is possessed by them. He has turned to the dark side, or rather his mind has become lost in dark places, but it’s an all too common, if extreme, example of spiritual sickness and soul loss. His fear-mongering about pedophiles ruling the world is not merely a matter of mental illness, for there are real-world consequences, such as Alex Jones spreading conspiracy theories about pedophilia (Pizzagate) until one of his listeners took him seriously enough to go out and shoot up a restaurant.

I have no desire to discredit the lifework of Robert David Steele. His earlier message of freedom for all remains valid, but as a spokesperson he is damaged goods and his writings are tainted. I gave an accounting of this to my aforementioned friend who inquired about him. My friend became convinced that he should no longer recommend him to others. It’s sad to see someone’s mental breakdown play out on the public stage. And even sadder is that the message itself loses credibility in the process and so public debate about democracy becomes muddied. That furthers the agenda of anti-democratic forces. If nothing else, we can learn from such cases, learn about the importance of intellectual self-defense and psychological self-care. It’s too easy for any of us, in reacting to reactionaries, to become reactionaries ourselves. We should be aware of how hatred and fear can consume the mind. We can only be ruled by the darkness outside of us when it has first come to rule inside of us. Maintaining a positive vision is most important as a candle to light our way, to see the passage ahead and to see the precipice we walk along. It’s a long way down to tumble, if we lose our footing.

* * *

Power, Politics, and Paranoia
ed. by Jan-Willem van Prooijen, Paul A. M. van Lange
“Examining the monological nature of conspiracy theories”
by Robbie M. Sutton and Karen M. Douglas

People generally want to explain socially significant events such as the deaths of celebrities and major international disasters (e.g., Leman and Cinnirella, 2007; Weiner, 1985), but lack direct access to definitive proof of the truth or otherwise of a conspiracy theory. Even the educated middle classes of functioning democracies need to rely on second, third, and nth-hand reportage and interpretation in media channels, since they lack direct access to the facts (Sutton, 2010). Writing from a political science perspective, Sunstein and Vermeule (2009) speculate that communities who lack even this information tend to be more susceptible to conspiracy theorizing. These communities include disadvantaged and marginalized groups, and citizens of highly authoritarian states. Such communities experience “a sharply limited number of (relevant) informational sources,” which leads them to experience “crippled epistemologies” in which they are forced to rely on unreliable sources (p. 204). As psychologists, we would suggest that lack of knowledge, however severe, forces members of the public to rely not only on indirect and unreliable sources but also on cognitive heuristics that allow workable, even if unreliable, inferences in the face of incomplete information. One such heuristic is projection: using beliefs about the self as a basis to evaluate claims about other people.

Specifically, we contend that the social-cognitive tool of projection can help people in these uncertain situations (Ames, 2004; Krueger, 2000; McCloskey, 1958). When people are unsure about what someone may or may not have done, they can use their own thoughts, feelings, motivations, or action tendencies as a source of information. That is, they can judge others by judging what they themselves think they would do. For example, people may be more likely to adopt the hypothesis that Princess Diana was assassinated if they believe that they, personally, would be willing to take part in this act if they were in the same situation. So, a person’s perception that “I would do it” informs their perception that “others did it.” Beliefs in conspiracy theories – even about completely unrelated events – may therefore be held together by people’s judgments of their own moral tendencies.

We tested the role of projection in two studies (Douglas and Sutton, 2011). In the first study, we asked participants to complete the scale for Machiavellianism – an individual differences variable associated with personal morality (Christie and Geis, 1970). Measuring Machiavellianism allowed us to test the prediction that the relationship between personal moral qualities and beliefs in conspiracy theories would be mediated by projection of those moral qualities onto others. We asked participants to rate their agreement with a range of conspiracy theories and measured their tendency to project by asking them, for each individual conspiracy theory, how willing they would have been to participate in the conspiracy themselves (e.g., “If you had been in the position of the US government, would you have ordered the attack on the Twin Towers on 9/11?”). As hypothesized, Machiavellianism predicted participants’ agreement with conspiracy theories. Also, participants’ personal willingness to conspire predicted the extent to which they endorsed the conspiracy theories. This mediated the relationship between Machiavellianism and endorsement of conspiracy theories.

In a second study, we experimentally manipulated participants’ feelings of personal morality. We reasoned that by recalling a time when they behaved in a moral and decent manner, people would perceive themselves as less likely to participate in conspiracies. As predicted, participants asked to remember a time when they helped someone in need were subsequently less willing to conspire than control participants. They also endorsed a range of conspiracy theories less strongly. This decline in conspiracy belief was mediated by a decrease in willingness to conspire. These two studies, taken together, suggest that conspiracy theories may be held together by projection. Beliefs may not support each other, but instead may be held together by believers’ perception of their own moral tendencies (Douglas and Sutton, 2011).
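To make the mediation logic in the excerpt above more concrete, here is a minimal sketch in Python on simulated data, loosely following a regression-based (Baron–Kenny-style) approach. It is purely illustrative: the variable names, sample size, and effect sizes are made up, and it is not the authors' code, data, or analysis.

# Illustrative sketch only (not the study's code or data): a regression-based
# mediation check on simulated scores, mirroring the logic described above,
# i.e. Machiavellianism -> willingness to conspire -> endorsement of conspiracy theories.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated standardized scores; the effect sizes here are arbitrary.
machiavellianism = rng.normal(size=n)
willingness = 0.5 * machiavellianism + rng.normal(scale=0.8, size=n)  # mediator
endorsement = 0.1 * machiavellianism + 0.4 * willingness + rng.normal(scale=0.8, size=n)

def ols_slopes(y, *predictors):
    # Ordinary least squares with an intercept; returns only the predictor coefficients.
    X = np.column_stack([np.ones(len(y)), *predictors])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1:]

c = ols_slopes(endorsement, machiavellianism)[0]                      # total effect
a = ols_slopes(willingness, machiavellianism)[0]                      # predictor -> mediator
b, c_prime = ols_slopes(endorsement, willingness, machiavellianism)   # mediator effect, direct effect

print(f"total effect c = {c:.2f}")
print(f"indirect effect a*b = {a * b:.2f}")
print(f"direct effect c' = {c_prime:.2f}")  # shrinks relative to c when mediation is present

A real analysis would also test the significance of the indirect effect (for example, by bootstrapping), but the point of the sketch is only that the direct path from the predictor to the outcome weakens once the mediator is included.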

“Consciousness is a very recent acquisition of nature…”

“There are historical reasons for this resistance to the idea of an unknown part of the human psyche. Consciousness is a very recent acquisition of nature, and it is still in an “experimental” state. It is frail, menaced by specific dangers, and easily injured. As anthropologists have noted, one of the most common mental derangements that occur among primitive people is what they call “the loss of a soul”—which means, as the name indicates, a noticeable disruption (or, more technically, a dissociation) of consciousness.

“Among such people, whose consciousness is at a different level of development from ours, the “soul” (or psyche) is not felt to be a unit. Many primitives assume that a man has a “bush soul” as well as his own, and that this bush soul is incarnate in a wild animal or a tree, with which the human individual has some kind of psychic identity. This is what the distinguished French ethnologist Lucien Lévy-Bruhl called a “mystical participation.” He later retracted this term under pressure of adverse criticism, but I believe that his critics were wrong. It is a well-known psychological fact that an individual may have such an unconscious identity with some other person or object.

“This identity takes a variety of forms among primitives. If the bush soul is that of an animal, the animal itself is considered as some sort of brother to the man. A man whose brother is a crocodile, for instance, is supposed to be safe when swimming a crocodile-infested river. If the bush soul is a tree, the tree is presumed to have something like parental authority over the individual concerned. In both cases an injury to the bush soul is interpreted as an injury to the man.

“In some tribes, it is assumed that a man has a number of souls; this belief expresses the feeling of some primitive individuals that they each consist of several linked but distinct units. This means that the individual’s psyche is far from being safely synthesized; on the contrary, it threatens to fragment only too easily under the onslaught of unchecked emotions.”

Carl Jung, Man and His Symbols
Part 1: Approaching the Unconscious
The importance of dreams