Individualism and Isolation

“When an Indian child has been brought up among us, taught our language and habituated to our customs, yet if he goes to see his relations and makes one Indian ramble with them, there is no persuading him ever to return. [But] when white persons of either sex have been taken prisoners young by the Indians, and lived a while among them, tho’ ransomed by their friends, and treated with all imaginable tenderness to prevail with them to stay among the English, yet in a short time they become disgusted with our manner of life, and the care and pains that are necessary to support it, and take the first good opportunity of escaping again into the woods, from whence there is no reclaiming them.”
~ Benjamin Franklin

“The Indians, their old masters, gave them their choice and, without requiring any consideration, told them that they had been long as free as themselves. They chose to remain, and the reasons they gave me would greatly surprise you: the most perfect freedom, the ease of living, the absence of those cares and corroding solicitudes which so often prevail with us… all these and many more motives which I have forgot made them prefer that life of which we entertain such dreadful opinions. It cannot be, therefore, so bad as we generally conceive it to be; there must be in their social bond something singularly captivating, and far superior to anything to be boasted of among us; for thousands of Europeans are Indians, and we have no examples of even one of those Aborigines having from choice become Europeans! There must be something more congenial to our native dispositions than the fictitious society in which we live, or else why should children, and even grown persons, become in a short time so invincibly attached to it? There must be something very bewitching in their manners, something very indelible and marked by the very hands of nature. For, take a young Indian lad, give him the best education you possibly can, load him with your bounty, with presents, nay with riches, yet he will secretly long for his native woods, which you would imagine he must have long since forgot, and on the first opportunity he can possibly find, you will see him voluntarily leave behind all you have given him and return with inexpressible joy to lie on the mats of his fathers…

“Let us say what will of them, of their inferior organs, of their want of bread, etc., they are as stout and well made as the Europeans. Without temples, without priests, without kings and without laws, they are in many instances superior to us, and the proofs of what I advance are that they live without care, sleep without inquietude, take life as it comes, bearing all its asperities with unparalleled patience, and die without any kind of apprehension for what they have done or for what they expect to meet with hereafter. What system of philosophy can give us so many necessary qualifications for happiness? They most certainly are much more closely connected to nature than we are; they are her immediate children…”
~ J. Hector St. John de Crèvecœur

Western, educated, industrialized, rich, and democratic. Such societies, as the acronym goes, are WEIRD. But what exactly makes them weird?

This occurred to me while reading Sebastian Junger’s Tribe. Much of what gets attributed to these WEIRD descriptors has been around for more than half a millennium, at least since European colonial imperialism began. From the moment Europeans settled in the Americas, a significant number of colonists chose to live among the natives because in many ways that lifestyle was a happier and healthier way of living: less stress and work, less poverty and inequality, not only an absence of arbitrary political power but also far more personal freedom, especially for women.

Today, when we bother to think much about the problems we face, we mostly blame them on the side effects of modernity. But colonial imperialism began when Europe was still under the sway of monarchies, state churches, and feudalism. There was nothing WEIRD about Western civilization at the time.

Those earlier Europeans hadn’t yet started to think of themselves in terms of a broad collective identity such as ‘Westerners’. They weren’t particularly well educated, not even the upper classes. Industrialization was centuries away. As for being rich, there was some wealth back then but it was limited to a few and even for those few it was rather unimpressive by modern standards. And Europeans back then were extremely anti-democratic.

Since European colonists were generally no more WEIRD than various native populations, we must look for other differences between them. Why did so many Europeans choose to live among the natives? Why did so many captured Europeans who were adopted into tribes refuse or resist being ‘saved’? And why did colonial governments have to create laws and enforce harsh punishments to stop people from ‘going native’?

European society, on both sides of the ocean, was severely oppressive and violent. That was particularly true in Virginia, which was built on the labor of indentured servants at a time when most were worked to death before getting the opportunity for release from bondage. They had plenty of reason to seek the good life among the natives. But life was less than pleasant in the other colonies as well, and a similar pattern repeated itself.

Thomas Morton went off with some men into the wilderness to start their own community, where they commingled with the natives. This set an intolerable example that threatened Puritan social control, and so the Puritans destroyed their community of the free. Roger Williams, a Puritan minister, took on the mission to convert the natives but found himself converted instead. He fled Puritan oppression because, as he put it, the natives were more civilized than the colonists. Much later, Thomas Paine, living near some still-free Indian tribes, observed that their communities demonstrated greater freedom, self-governance, and natural rights than the colonies. He hoped Americans could take that lesson to heart. Other founders, from John Adams to Thomas Jefferson, looked admiringly to the example of their Indian neighbors.

The point is that whatever is problematic about Western society has been that way for a long time. Modernity has worsened this condition of unhappiness and dysfunction. There is no doubt that becoming more WEIRD has made us ever more weird, but Westerners were plenty weird from early on before they became WEIRD. Maybe the turning point for the Western world was the loss of our own traditional cultures and tribal lifestyles, as the Roman model of authoritarianism spread across Europe and became dominant. We have yet to shake off these chains of the past and instead have forced them upon everyone else.

It is what some call the Wetiko, one of the most infectious and deadly of mind viruses. “The New World fell not to a sword but to a meme,” as Daniel Quinn stated it (Beyond Civilization, p. 50). But it is a mind virus that can only take hold after immunity is destroyed. As long as there were societies of the free, the contagion was contained because the sick could be healed. But the power of the contagion is that the rabidly infected feel an uncontrollable compulsion to attack and kill the uninfected, the very people who would offer healing. Then the remaining survivors become infected and spread it further. A plague of victimization until no one is left untouched, until there is nowhere else to escape. Once all alternatives are eliminated, once a demiurgic monoculture comes to power, we are trapped in what Philip K. Dick called the Black Iron Prison. Sickness becomes all we know.

The usefulness of taking note of contemporary WEIRD societies isn’t that WEIRDness is the disease but that it shows the full-blown set of symptoms of the disease. But the onset of the symptoms comes long after the infection, like a slow-growing malignant tumor in the brain. Still, symptoms are important, especially when there is a comparison to a healthy population. That is what the New World offered the European mind, a comparison. The earliest accounts of native societies in the Americas helped Europeans to diagnose their own disease and helped motivate them to begin looking for a cure, although the initial attempts were fumbling and inept. The first thing that some Europeans did was simply to imagine what a healthy community might look like. That is what Thomas More attempted to do five centuries ago with his book Utopia.

Maybe the key lies in social concerns. Utopian visions have always focused on the social aspect, typically describing how people would ideally live together in communities. That was also the focus of the founders when they sought out alternative possibilities for organizing society. The change with colonialism, as feudalism was breaking down, was the loss of belonging, of community and kinship. Modern individualism began not as an ideal but as a side effect of social breakdown, a condition of isolation and disconnection forced upon entire populations rather than freely chosen. And it was traumatizing to the Western psyche, and still is, as seen in the high rates of mental illness in WEIRD societies, especially in the hyper-individualistic United States. That trauma began before the defining factors of the WEIRD took hold. It was that trauma that made the WEIRD possible.

The colonists, upon meeting natives, discovered what had been lost. And for many colonists, that loss had happened within living memory. The hunger for what was lost was undeniable. To have seen traditional communities that still functioned would have been like taking a breath of fresh air after months spent in the stench of a ship’s hull. Not only did these native communities demonstrate what was recently lost but also what had been lost much earlier. Just as many Indian tribes had democratic practices, so had many European tribes prior to feudalism. But colonists had spent their entire lives being told democracy was impossible, that personal freedom was dangerous or even sinful.

The difference today is that none of this is within living memory for most of us, especially Americans, unless one was raised Amish or in some similar community. The closest a typical American comes to this experience is by joining the military during wartime. That is one of the few opportunities for a modern equivalent of a tribe, at least within WEIRD societies. And maybe a large part of the trauma soldiers struggle with isn’t merely the physical violence of war but the psychological violence of returning to a state of alienation, the loss of a bond that was closer than that of their own families.

Sebastian Junger notes that veterans who return to strong social support experience low rates of long-term PTSD, which parallels Johann Hari’s argument about addiction in Chasing the Scream and about depression in Lost Connections. Trauma, depression, addiction, etc. are consequences of isolation or are worsened by it. These responses are how humans cope under stressful and unnatural conditions.

* * *

Tribe: On Homecoming and Belonging
by Sebastian Junger
pp. 16-25

The question for Western society isn’t so much why tribal life might be so appealing—it seems obvious on the face of it—but why Western society is so unappealing. On a material level it is clearly more comfortable and protected from the hardships of the natural world. But as societies become more affluent they tend to require more, rather than less, time and commitment by the individual, and it’s possible that many people feel that affluence and safety simply aren’t a good trade for freedom. One study in the 1960s found that nomadic !Kung people of the Kalahari Desert needed to work as little as twelve hours a week in order to survive—roughly one-quarter the hours of the average urban executive at the time. “The ‘camp’ is an open aggregate of cooperating persons which changes in size and composition from day to day,” anthropologist Richard Lee noted with clear admiration in 1968. “The members move out each day to hunt and gather, and return in the evening to pool the collected foods in such a way that every person present receives an equitable share… Because of the strong emphasis on sharing, and the frequency of movement, surplus accumulation… is kept to a minimum.”

The Kalahari is one of the harshest environments in the world, and the !Kung were able to continue living a Stone-Age existence well into the 1970s precisely because no one else wanted to live there. The !Kung were so well adapted to their environment that during times of drought, nearby farmers and cattle herders abandoned their livelihoods to join them in the bush because foraging and hunting were a more reliable source of food. The relatively relaxed pace of !Kung life—even during times of adversity—challenged long-standing ideas that modern society created a surplus of leisure time. It created exactly the opposite: a desperate cycle of work, financial obligation, and more work. The !Kung had far fewer belongings than Westerners, but their lives were under much greater personal control. […]

First agriculture, and then industry, changed two fundamental things about the human experience. The accumulation of personal property allowed people to make more and more individualistic choices about their lives, and those choices unavoidably diminished group efforts toward a common good. And as society modernized, people found themselves able to live independently from any communal group. A person living in a modern city or a suburb can, for the first time in history, go through an entire day—or an entire life—mostly encountering complete strangers. They can be surrounded by others and yet feel deeply, dangerously alone.

The evidence that this is hard on us is overwhelming. Although happiness is notoriously subjective and difficult to measure, mental illness is not. Numerous cross-cultural studies have shown that modern society—despite its nearly miraculous advances in medicine, science, and technology—is afflicted with some of the highest rates of depression, schizophrenia, poor health, anxiety, and chronic loneliness in human history. As affluence and urbanization rise in a society, rates of depression and suicide tend to go up rather than down. Rather than buffering people from clinical depression, increased wealth in a society seems to foster it.

Suicide is difficult to study among unacculturated tribal peoples because the early explorers who first encountered them rarely conducted rigorous ethnographic research. That said, there is remarkably little evidence of depression-based suicide in tribal societies. Among the American Indians, for example, suicide was understood to apply in very narrow circumstances: in old age to avoid burdening the tribe, in the ritual paroxysms of grief following the death of a spouse, in a hopeless but heroic battle with an enemy, and in an attempt to avoid the agony of torture. Among tribes that were ravaged by smallpox, it was also understood that a person whose face had been hideously disfigured by lesions might kill themselves. According to The Ethics of Suicide: Historical Sources, early chroniclers of the American Indians couldn’t find any other examples of suicide that were rooted in psychological causes. Early sources report that the Bella Coola, the Ojibwa, the Montagnais, the Arapaho, the Plateau Yuma, the Southern Paiute, and the Zuni, among many others, experienced no suicide at all.

This stands in stark contrast to many modern societies, where the suicide rate is as high as 25 cases per 100,000 people. (In the United States, white middle-aged men currently have the highest rate at nearly 30 suicides per 100,000.) According to a global survey by the World Health Organization, people in wealthy countries suffer depression at as much as eight times the rate they do in poor countries, and people in countries with large income disparities—like the United States—run a much higher lifelong risk of developing severe mood disorders. A 2006 study comparing depression rates in Nigeria to depression rates in North America found that across the board, women in rural areas were less likely to get depressed than their urban counterparts. And urban North American women—the most affluent demographic of the study—were the most likely to experience depression.

The mechanism seems simple: poor people are forced to share their time and resources more than wealthy people are, and as a result they live in closer communities. Inter-reliant poverty comes with its own stresses—and certainly isn’t the American ideal—but it’s much closer to our evolutionary heritage than affluence. A wealthy person who has never had to rely on help and resources from his community is leading a privileged life that falls way outside more than a million years of human experience. Financial independence can lead to isolation, and isolation can put people at a greatly increased risk of depression and suicide. This might be a fair trade for a generally wealthier society—but a trade it is. […]

The alienating effects of wealth and modernity on the human experience start virtually at birth and never let up. Infants in hunter-gatherer societies are carried by their mothers as much as 90 percent of the time, which roughly corresponds to carrying rates among other primates. One can get an idea of how important this kind of touch is to primates from an infamous experiment conducted in the 1950s by a primatologist and psychologist named Harry Harlow. Baby rhesus monkeys were separated from their mothers and presented with the choice of two kinds of surrogates: a cuddly mother made out of terry cloth or an uninviting mother made out of wire mesh. The wire mesh mother, however, had a nipple that dispensed warm milk. The babies took their nourishment as quickly as possible and then rushed back to cling to the terry cloth mother, which had enough softness to provide the illusion of affection. Clearly, touch and closeness are vital to the health of baby primates—including humans.

In America during the 1970s, mothers maintained skin-to-skin contact with babies as little as 16 percent of the time, which is a level that traditional societies would probably consider a form of child abuse. Also unthinkable would be the modern practice of making young children sleep by themselves. In two American studies of middle-class families during the 1980s, 85 percent of young children slept alone in their own room—a figure that rose to 95 percent among families considered “well educated.” Northern European societies, including America, are the only ones in history to make very young children sleep alone in such numbers. The isolation is thought to make many children bond intensely with stuffed animals for reassurance. Only in Northern European societies do children go through the well-known developmental stage of bonding with stuffed animals; elsewhere, children get their sense of safety from the adults sleeping near them.

The point of making children sleep alone, according to Western psychologists, is to make them “self-soothing,” but that clearly runs contrary to our evolution. Humans are primates—we share 98 percent of our DNA with chimpanzees—and primates almost never leave infants unattended, because they would be extremely vulnerable to predators. Infants seem to know this instinctively, so being left alone in a dark room is terrifying to them. Compare the self-soothing approach to that of a traditional Mayan community in Guatemala: “Infants and children simply fall asleep when sleepy, do not wear specific sleep clothes or use traditional transitional objects, room share and cosleep with parents or siblings, and nurse on demand during the night.” Another study notes about Bali: “Babies are encouraged to acquire quickly the capacity to sleep under any circumstances, including situations of high stimulation, musical performances, and other noisy observances which reflect their more complete integration into adult social activities.”

As modern society reduced the role of community, it simultaneously elevated the role of authority. The two are uneasy companions, and as one goes up, the other tends to go down. In 2007, anthropologist Christopher Boehm published an analysis of 154 foraging societies that were deemed to be representative of our ancestral past, and one of their most common traits was the absence of major wealth disparities between individuals. Another was the absence of arbitrary authority. “Social life is politically egalitarian in that there is always a low tolerance by a group’s mature males for one of their number dominating, bossing, or denigrating the others,” Boehm observed. “The human conscience evolved in the Middle to Late Pleistocene as a result of… the hunting of large game. This required… cooperative band-level sharing of meat.”

Because tribal foragers are highly mobile and can easily shift between different communities, authority is almost impossible to impose on the unwilling. And even without that option, males who try to take control of the group—or of the food supply—are often countered by coalitions of other males. This is clearly an ancient and adaptive behavior that tends to keep groups together and equitably cared for. In his survey of ancestral-type societies, Boehm found that—in addition to murder and theft—one of the most commonly punished infractions was “failure to share.” Freeloading on the hard work of others and bullying were also high up on the list. Punishments included public ridicule, shunning, and, finally, “assassination of the culprit by the entire group.” […]

Most tribal and subsistence-level societies would inflict severe punishments on anyone who caused that kind of damage. Cowardice is another form of community betrayal, and most Indian tribes punished it with immediate death. (If that seems harsh, consider that the British military took “cowards” off the battlefield and executed them by firing squad as late as World War I.) It can be assumed that hunter-gatherers would treat their version of a welfare cheat or a dishonest banker as decisively as they would a coward. They may not kill him, but he would certainly be banished from the community. The fact that a group of people can cost American society several trillion dollars in losses—roughly one-quarter of that year’s gross domestic product—and not be tried for high crimes shows how completely de-tribalized the country has become.

Dishonest bankers and welfare or insurance cheats are the modern equivalent of tribe members who quietly steal more than their fair share of meat or other resources. That is very different from alpha males who bully others and openly steal resources. Among hunter-gatherers, bullying males are often faced down by coalitions of other senior males, but that rarely happens in modern society. For years, the United States Securities and Exchange Commission has been trying to force senior corporate executives to disclose the ratio of their pay to that of their median employees. During the 1960s, senior executives in America typically made around twenty dollars for every dollar earned by a rank-and-file worker. Since then, that figure has climbed to 300-to-1 among S&P 500 companies, and in some cases it goes far higher than that. The US Chamber of Commerce managed to block all attempts to force disclosure of corporate pay ratios until 2015, when a weakened version of the rule was finally passed by the SEC in a strict party-line vote of three Democrats in favor and two Republicans opposed.

In hunter-gatherer terms, these senior executives are claiming a disproportionate amount of food simply because they have the power to do so. A tribe like the !Kung would not permit that because it would represent a serious threat to group cohesion and survival, but that is not true for a wealthy country like the United States. There have been occasional demonstrations against economic disparity, like the Occupy Wall Street protest camp of 2011, but they were generally peaceful and ineffective. (The riots and demonstrations against racial discrimination that later took place in Ferguson, Missouri, and Baltimore, Maryland, led to changes in part because they attained a level of violence that threatened the civil order.) A deep and enduring economic crisis like the Great Depression of the 1930s, or a natural disaster that kills tens of thousands of people, might change America’s fundamental calculus about economic justice. Until then, the American public will probably continue to refrain from broadly challenging both male and female corporate leaders who compensate themselves far in excess of their value to society.

That is ironic, because the political origins of the United States lay in confronting precisely this kind of resource seizure by people in power. King George III of England caused the English colonies in America to rebel by trying to tax them without allowing them a voice in government. In this sense, democratic revolutions are just a formalized version of the sort of group action that coalitions of senior males have used throughout the ages to confront greed and abuse. Thomas Paine, one of the principal architects of American democracy, wrote a formal denunciation of civilization in a tract called Agrarian Justice: “Whether… civilization has most promoted or most injured the general happiness of man is a question that may be strongly contested,” he wrote in 1795. “[Both] the most affluent and the most miserable of the human race are to be found in the countries that are called civilized.”

When Paine wrote his tract, Shawnee and Delaware warriors were still attacking settlements just a few hundred miles from downtown Philadelphia. They held scores of white captives, many of whom had been adopted into the tribe and had no desire to return to colonial society. There is no way to know the effect on Paine’s thought process of living next door to a communal Stone-Age society, but it might have been crucial. Paine acknowledged that these tribes lacked the advantages of the arts and science and manufacturing, and yet they lived in a society where personal poverty was unknown and the natural rights of man were actively promoted.

In that sense, Paine claimed, the American Indian should serve as a model for how to eradicate poverty and bring natural rights back into civilized life.

* * *

Dark Triad Domination
Urban Weirdness
Social Disorder, Mental Disorder
The Unimagined: Capitalism and Crappiness
It’s All Your Fault, You Fat Loser!
How Universal Is The Mind?
Bias About Bias
“Beyond that, there is only awe.”



Lock Without a Key

In his recent book The “Other” Psychology of Julian Jaynes, Brian J. McVeigh brings the latest evidence to bear on the theory of the bicameral mind. His focus is on textual evidence, the use of what he calls mind words and their frequency. But before getting to the evidence itself, he clarifies a number of issues, specifically the ever-confounding topic of consciousness itself. Many have proclaimed Jaynes to have been wrong while having no clue what he was talking about, as he used consciousness in a particular way that he took great care to define… for anyone who bothers to actually read his book before dismissing it.

To avoid confusion, McVeigh refers to Jaynesian ‘consciousness’ as conscious interiority (others simply call it J-consciousness). McVeigh states that his purpose is to “distinguish it from perception, thinking, cognition, rational thought, and other concepts.” Basically, Jaynes wasn’t talking about any form of basic awareness and physiological reactivity (e.g., a slug recoiling from salt) nor complex neurocognitive skills (e.g., numerical accounting of trade goods and taxes), as the bicameral mind possessed these traits. Starting with Jaynes’ analysis, McVeigh, as he does in other books, gives a summary of the features of conscious interiority:

  • Spatialization of Psyche
  • Introception
  • Excerption
  • Self-narratization
  • Self-autonomy
  • Self-authorization
  • Concilience
  • Individuation
  • Self-reflexivity

These describe a historically contingent state of mind and social identity, a way of being in the world and experiencing reality, a social order and cultural lifestyle. These traits are peculiar to the civilizations of the past few millennia. But to be clear, they are no indication of superiority, as archaic civilizations were able to accomplish tasks we find phenomenal — such as envisioning and planning over multiple generations to carve stones that weigh hundreds of tons, move them over great distances, and place them into precise position in the process of building large complex structures, something we don’t know how to accomplish without the advances of modern mathematics, architecture, and technology.

There is nothing inevitable nor necessary about consciousness, and it is surely possible that civilization could have developed in alternative ways that might have been far more advanced than what we presently know. Consider that the largest city in the world during early European colonialism was in the Americas, where the bicameral mind may have held on for a longer period of time. If not for the decimation by disease (and earlier Bronze Age decimation by environmental catastrophe), bicameralism might have continued to dominate civilization and led to a different equivalent of the Axial Age with whatever would have followed from it. History is full of arbitrary junctures that end up tipping humanity one way or another, depending on which way the wind happens to be blowing on a particular day or how the entire weather pattern might shift over several centuries.

That said, in history as it did play out, we now have our specific egoic consciousness and it is hard for us to imagine anything else, even if in the grand scheme of things our mighty neurocognitive monoculture is a mere momentary blip in the passing of eons, maybe disappearing in the near future as quickly as it came. Our civilization lives in a constant state of precarious uncertainty. That is why we study the past, in order to grapple with the present, and maybe just maybe glimpse what is yet to come. We might as well spend our time in this game of attempted self-understanding, if only to amuse future generations who might happen upon our half-witted ponderings.

Let us consider one aspect of consciousness. Here is McVeigh’s quick summary of concilience (pp. 39-40): A “slightly ambiguous perceived object is made to conform to some previously learned schema.” Concilience (or assimilation) is “doing in mind-space what narratization does in our mind-time or spatialized time. It brings things together [as] conscious objects just as narratization brings episodes together as a story” (Jaynes 1976, 64-65). I hadn’t previously given this much thought. But for some reason it stood out to me in my perusal. Immediately coming to mind was Lewis Hyde’s metonymy, along with the nexus of metaphorical framing, embodied mind, and symbolic conflation. Related to concilience and narratization, McVeigh speaks of coception, which he defines as “how perceptions and introceptions coincide (such overlapping deludes us into assuming that interior experiences are sensory reflections of reality)” (p. 41).

I’m not sure to what degree I comprehend what McVeigh is getting at. But I grasp some hints of what it might mean. It resonates with my readings of Lewis Hyde’s Trickster Makes This World (see “Why are you thinking about this?”). Hyde defines metonymy as “substituting of one thing for another” (p. 169), “an unalterable fact about the body is linked to a place in the social order, and in both cases, to accept the link is to be caught in a kind of trap. Before anyone can be snared in this trap, an equation must be made between the body and the world (my skin color is my place as a Hispanic; menstruation is my place as a woman).”

Even more relevant is Hyde’s detailed exploration of the body and shame. I’d propose that this is a key factor in the rise of consciousness. Jaynes noted that shame and general obsession over nakedness, sexuality, etc. can’t be found in the earliest texts and art (maybe not entirely unrelated to how anthropologists have observed the depression that follows when tribal people first see themselves in a mirror and so see themselves as others see them, which leads to the entertaining thought of Bronze Age civilizations having collapsed as mirrors became widely used and the populations descended into mass despondency). Hyde says that, “The construction of the trap of shame begins with this metonymic trick, a kind of bait and switch in which one’s changeable social place is figured in terms of an unchangeable part of the body. Then by various means the trick is made to blend invisibly into the landscape. […] In short, to make the trap of shame we inscribe the body as a sign of wider worlds, then erase the artifice of that signification so that the content of shame becomes simply the way things are, as any fool can see.”

This word magik of hypnotic metonymy and concilience is how we fall into our collective trance. We mass hallucinate our moral imagination into social reality. And then we enforce the narrative of individual selfhood onto ourselves, passing it on as an intergenerational inheritance or curse. What if shame is the cornerstone of modern civilization built over the buried corpse of bicameral civilization? We construct the social order around us like a prison to which we forgot to design a key. Shame is the trap we set for ourselves, the trap we keep on triggering, the door locking behind us each time. And so shame is the mechanism of the lock that we refuse to look at, the mechanism that would release us.

It is only in consciousness that we can remain unconscious.

Powerful Conspiracies & Open Secrets

Why do so many Americans, including the well-educated, know so little about how power operates? A naivete dominates the mainstream mind about how the sociopolitical reality tunnel is mediated by corporate media, think tanks, perception management, etc.

The most powerful conspiracies often operate as open secrets. The conspirators, not that they think of themselves that way, don’t worry about exposure because the public is kept in a permanent state of apathetic ignorance. The alternative and foreign media that investigate and report on these conspiracies rarely reach the American audience. Nor do those in domestic corporate media and among the domestic political elite necessarily get much exposure to other views, as they are as trapped in echo chambers and media bubbles as anyone else. It’s a shared condition of mediated reality.

If mainstream media reporting and the political narrative are controlled, then public perception and opinion can be controlled as well. Truth doesn’t matter when truth has been made invisible and irrelevant to most of the targeted population. As long as the official story is repeated enough, few will question it, and the few who do will be dismissed as cranks and conspiracy theorists, not that most will even take notice. A news report might occasionally surface, barely touching upon the story, only to disappear once again. The very casual and fleeting treatment of the incident communicates and reinforces its insignificance to the audience.

Conspiracies are easy, as long as there is an established system of persuasion and influence. It doesn’t require a secret cabal operating in a dark room. Those in positions of power and authority, including media personalities, live in the same world and share the same vested interests. Like the rest of the population, those in the upper classes come to believe the stories they tell. The first victims of propaganda are the propagandists, as any con man has to first con himself. And humans are talented at rationalizing.

Truth is easily sacrificed, just as easily as it is ignored, even for those who know better. We simultaneously know and don’t know all kinds of things. In a cynical age such as ours, a conspiracy doesn’t seem like a conspiracy when it operates out in the open. We take it as business as usual, the way power always operates. The greatest conspiracy is our collective blindness and silence.

* * *

Daniel Ellsberg’s Advice for How to Stop Current and Future Wars
by Norman Solomon

Daniel Ellsberg has a message that managers of the warfare state don’t want people to hear.

“If you have information that bears on deception or illegality in pursuing wrongful policies or an aggressive war,” he said in a statement released last week, “don’t wait to put that out and think about it, consider acting in a timely way at whatever cost to yourself. … Do what Katharine Gun did.”

If you don’t know what Katharine Gun did, chalk that up to the media power of the war system.

Ellsberg’s video statement went public as this month began, just before the 15th anniversary of the revelation by a British newspaper, the Observer, of a secret NSA memo—thanks to Katharine Gun. At the UK’s intelligence agency GCHQ, about 100 people received the same email memo from the National Security Agency on the last day of January 2003, seven weeks before the invasion of Iraq got underway. Only Katharine Gun, at great personal risk, decided to leak the document.

If more people had taken such risks in early 2003, the Iraq War might have been prevented. If more people were willing to take such risks in 2018, the current military slaughter in several nations, mainly funded by U.S. taxpayers, might be curtailed if not stopped. Blockage of information about past whistleblowing deprives the public of inspiring role models.

That’s the kind of reality George Orwell was referring to when he wrote: “Who controls the past controls the future; who controls the present controls the past.” […]

Katharine Gun foiled that plan. While scarcely reported in the U.S. media (despite cutting-edge news releases produced by my colleagues at the Institute for Public Accuracy beginning in early March of 2003), the revelations published by the Observer caused huge media coverage across much of the globe—and sparked outrage in several countries with seats on the Security Council.

“In the rest of the world, there was great interest in the fact that American intelligence agencies were interfering with the policies of their representatives in the Security Council,” Ellsberg noted. A result was that for some governments on the Security Council at the time, the leak “made it impossible for their representatives to support the U.S. wish to legitimize this clear case of aggression against Iraq. So the U.S. had to give up its plan to get a supporting vote in the UN.” The U.S. and British governments “went ahead anyway, but without the legitimating precedent of an aggressive war that would have had, I think, many consequences later.”

The BBC’s disgraceful attempt at a McCarthyist-style shaping of public perceptions and flouting of the impartiality rule
by Kitty S. Jones

One source of media bias is a failure to include a perspective, viewpoint or information within a news story that might be objectively regarded as being important. This is important because exclusion of a particular viewpoint or opinion on a subject might be expected to shift the ‘Overton Window’, defining what it is politically acceptable to say. This can happen in such a way that a viewpoint becomes entirely eliminated or marginalised from political discourse. Within academic media theory, there is a line of reasoning that media influence on audiences is not immediate but occurs more through a continual process of repeated arguments – the ‘steady drip’ effect.

A second potential source of bias is ‘bias by selection’. This might entail particular issues or viewpoints being more frequently covered, or certain guests or organisations being more likely to be selected. There are several others, for some of which the BBC has regularly been criticised.

Herman and Chomsky (1988) proposed a propaganda model hypothesising systematic biases of media arising from structural economic causes. Their proposition is that media ownership by corporations (and, in other media, funding from advertising), the use of official sources, efforts to discredit independent media (“flak”), and “anti-communist“ ideology act as the filters that bias news in favour of corporate and partisan political interests.

Politically biased messages may be conveyed via visual cues from the sets as a kind of underhanded reference-dependent framing. A frame defines the packaging of an element of rhetoric in such a way as to encourage certain interpretations and to discourage others. It entails the selection of some aspects of a perceived reality and makes them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation and so on. […]

As Walter Lippmann once noted, the news media are a primary source of those “pictures in our heads” about the larger world of public affairs, a world that for most citizens is “out of reach, out of sight, out of mind.” What people know about the world is largely based on what the media decide to show them. More specifically, the result of this mediated view of the world is that the priorities of the media strongly influence the priorities of the public. Elements prominent on the media agenda become prominent in the public mind.

Given the reduction in sophistication and rationality in government rhetoric and in media news and current affairs presentation (and the reduction in democratic accountability, for that matter), we don’t currently have a climate that particularly encourages citizens to think critically and for themselves.

* * *

On a related topic, Kitty S. Jones has another piece she just put up. It is about how we are influenced. This is also a bit of an open secret. There have been some leaks about it, along with some investigative reporting. Yet the average American remains uninformed and unaware, or else doesn’t comprehend or take seriously the threat.

The fact of the matter is that a conspiracy to influence is as easy to accomplish as anything else. We are surrounded by highly effective conspiracies to influence us, many of which operate out in the open, and still we rarely notice them. They are the air we breathe in a cynical society such as this. They have become normalized, and so we have stopped being shocked by how far the powerful are willing to go.

It would be depressingly naive to dismiss this as the rantings of conspiracy theorists. This is one of the most central concerns for anyone who understands both what democracy is and why it matters. We either want a free society or we don’t. When we find ourselves being kept in the dark, the first thing to do is to turn on a light and look around to see what is going on, no matter how ugly the reality we see and no matter how uncomfortable it makes us feel.

Here is Jones’ most recent post:

Cambridge Analytica, the commodification of voter decision-making and marketisation of democracy
by Kitty S Jones

It’s not such a big inferential leap to conclude that governments are attempting to manage legitimate criticism and opposition while stage-managing our democracy.

I don’t differentiate a great deal between the behavioural insights team at the heart of the Conservative cabinet office, and the dark world of PR and ‘big data’ and ‘strategic communications’ companies like Cambridge Analytica. The political misuse of psychology has been disguised as some kind of technocratic “fix” for a failing neoliberal paradigm, and paraded as neutral “science”.

However, its role as an authoritarian prop for an ideological imposition on the population has always been apparent to some of us, because the bottom line is that it is all about influencing people’s perceptions and decisions, using psychological warfare strategies.

The Conservatives’ behaviour change agenda is designed to align citizens’ perceptions and behaviours with neoliberal ideology and the interests of the state. However, in democratic societies, governments are traditionally elected to reflect and meet public needs. The use of “behaviour change” policy involves the state acting upon individuals, and instructing them how they must be.

Last year, I wrote a detailed article about some of these issues, including discussion of Cambridge Analytica’s involvement in data mining and the political ‘dark’ advertising that is only seen by its intended recipients. This is a much greater cause for concern than “fake news” in the spread of misinformation, because it is invisible to everyone but the person being targeted. This means that the individually tailored messages are not open to public scrutiny, nor are they fact checked.

A further problem is that no-one is monitoring the impact of the tailored messages and the potential to cause harm to individuals. The dark adverts are designed to exploit people’s psychological vulnerabilities, using personality profiling, which is controversial in itself. Intentionally generating and manipulating fear and anxiety to influence political outcomes isn’t a new thing. Despots have been using fear and slightly less subtle types of citizen “behaviour change” programmes for a long time.

* * *

In Kitty S. Jones’ above linked piece, she refers to an article that makes clear the dangerous consequences of covert anti-democratic tactics. This is reminiscent of the bad ol’ days of COINTELPRO, one of the many other proven government conspiracies. If this doesn’t wake you up, you must be deep asleep.

How Covert Agents Infiltrate the Internet to Manipulate, Deceive, and Destroy Reputations
by Glenn Greenwald

One of the many pressing stories that remains to be told from the Snowden archive is how western intelligence agencies are attempting to manipulate and control online discourse with extreme tactics of deception and reputation-destruction. It’s time to tell a chunk of that story, complete with the relevant documents.

Over the last several weeks, I worked with NBC News to publish a series of articles about “dirty trick” tactics used by GCHQ’s previously secret unit, JTRIG (Joint Threat Research Intelligence Group). These were based on four classified GCHQ documents presented to the NSA and the other three partners in the English-speaking “Five Eyes” alliance. Today, we at the Intercept are publishing another new JTRIG document, in full, entitled “The Art of Deception: Training for Online Covert Operations.”

By publishing these stories one by one, our NBC reporting highlighted some of the key, discrete revelations: the monitoring of YouTube and Blogger, the targeting of Anonymous with the very same DDoS attacks they accuse “hacktivists” of using, the use of “honey traps” (luring people into compromising situations using sex) and destructive viruses. But, here, I want to focus and elaborate on the overarching point revealed by all of these documents: namely, that these agencies are attempting to control, infiltrate, manipulate, and warp online discourse, and in doing so, are compromising the integrity of the internet itself.

Among the core self-identified purposes of JTRIG are two tactics: (1) to inject all sorts of false material onto the internet in order to destroy the reputation of its targets; and (2) to use social sciences and other techniques to manipulate online discourse and activism to generate outcomes it considers desirable. To see how extremist these programs are, just consider the tactics they boast of using to achieve those ends: “false flag operations” (posting material to the internet and falsely attributing it to someone else), fake victim blog posts (pretending to be a victim of the individual whose reputation they want to destroy), and posting “negative information” on various forums. […]

These agencies’ refusal to “comment on intelligence matters” – meaning: talk at all about anything and everything they do – is precisely why whistleblowing is so urgent, the journalism that supports it so clearly in the public interest, and the increasingly unhinged attacks by these agencies so easy to understand. Claims that government agencies are infiltrating online communities and engaging in “false flag operations” to discredit targets are often dismissed as conspiracy theories, but these documents leave no doubt they are doing precisely that.

Whatever else is true, no government should be able to engage in these tactics: what justification is there for having government agencies target people – who have been charged with no crime – for reputation-destruction, infiltrate online political communities, and develop techniques for manipulating online discourse? But to allow those actions with no public knowledge or accountability is particularly unjustifiable.

Corporate Imperialism

Corporations have always been forms or aspects of governments, agents and manifestations of state power. The earliest corporate charters were given to colonial governments that often were simultaneously for-profit business ventures and were operated accordingly — typically dependent on stolen land and resources combined with a cheap workforce of impoverished immigrants, convict laborers, indentured servants, and slaves. That is the origin of modern capitalism.

By definition, a corporation is a political entity and institution, a creature of government. A corporate charter is a legal and political construction offering legal rights and privileges that are protected and enforced by official authority and, when necessary, violent force. In some cases, from the East India Company ruling India to the American Robber Barons ruling company towns, corporations have operated their own policing and employed their own goons. And as long as political reform or populist revolution doesn’t take them out of power, they eventually become fully functioning governments.

Essentially, a corporation is no different from a central bank, an alphabet soup agency, a political party, etc. In fact, many regulatory agencies are captured by and act on behalf of corporations, not on behalf of the people or their elected representatives. Even from the beginning, it was never clear whether corporations were entities beholden to governments or a new kind of governing body and political organization. The struggle between colonial corporations and the colonial empires was often about which elite held ultimate power, only later involving local populations attempting to seize power for self-governance. The American Revolution, for example, was as much a revolt against a corporation as it was against an empire.

We are living at a time when the majority (about two thirds) of the largest economies in the world are transnational corporations. These new corporations are not merely seizing the power of governments or pulling the strings behind the scenes through bribery, blackmail, cronyism, and the like. Acting beyond the level of nation-states, they are creating something entirely new — a global network of corporate governance that lacks any and all democratic procedure, transparency, and accountability.

Once colonial imperialism asserted itself, what corporations would become was inevitable. The early ideology of corporatism had its origins in the Catholic Church, another vast transnational institution. But now corporations serve no master other than raw power, which is to say authoritarianism — national corporatocracy growing into an even more fearsome predator, transnational inverted totalitarianism ruled by psychopaths, dominators, and narcissists.

As our new Lord and Savior Donald Trump demonstrates, a successful plutocrat and kleptocrat can declare bankruptcy numerous times over decades and still maintain his position of immense wealth while using that wealth to buy political influence and position (with decades of ties to foreign oligarchs and crime syndicates involving apparent money laundering, only now being investigated but probably with no real consequences). Before Trump, it was Ronald Reagan who went from radio sportscaster to Hollywood actor to corporate spokesperson to politician to the most powerful man in the world. But if not for a cult of media personality like that surrounding Reagan or Trump, we would instead be ruled by an internet tycoon like Jeff Bezos (with his ties to the CIA and Pentagon) or a tech tycoon like Peter Thiel (with his dreams of utopian technocracy) — the results would be similar, an ever increasing accumulation of wealth and concentration of power.

Even more concerning are the powerful interests and dark money that operate behind the scenes, the Koch brothers and the Mercers of the world, the most successful of them remaining hidden from public disclosure and news reporting. The emergent corporate imperialism isn’t limited to individuals but extends to crony networks of establishment power, political dynasties, and vast inherited wealth, along with lobbyist organizations, think tanks, front groups, big biz media, etc.

The money men (they are mostly men and, of course, white) are the celebrities and idols of the present corporatist world, admired, worshipped, and bowed down to in the way that popes, monarchs, and aristocrats were in past eras. This 21st century ruling elite, including the puppet masters that keep the show going, is as untouchable as that of the ancien regime and in many ways more powerful, if more covert, than the East India Company, that is until a new revolutionary era comes. There isn’t much room for hope. In all of these centuries of struggle between various ruling elites, democracy for all its rhetoric remains a dream yet to be made real, a promise yet to be fulfilled.

* * *

The East India Company: The original corporate raiders
by William Dalrymple

It seemed impossible that a single London corporation, however ruthless and aggressive, could have conquered an empire that was so magnificently strong, so confident in its own strength and brilliance and effortless sense of beauty.

Historians propose many reasons: the fracturing of Mughal India into tiny, competing states; the military edge that the industrial revolution had given the European powers. But perhaps most crucial was the support that the East India Company enjoyed from the British parliament. The relationship between them grew steadily more symbiotic throughout the 18th century. Returned nabobs like Clive used their wealth to buy both MPs and parliamentary seats – the famous Rotten Boroughs. In turn, parliament backed the company with state power: the ships and soldiers that were needed when the French and British East India Companies trained their guns on each other. […]

In September, the governor of India’s central bank, Raghuram Rajan, made a speech in Mumbai expressing his anxieties about corporate money eroding the integrity of parliament: “Even as our democracy and our economy have become more vibrant,” he said, “an important issue in the recent election was whether we had substituted the crony socialism of the past with crony capitalism, where the rich and the influential are alleged to have received land, natural resources and spectrum in return for payoffs to venal politicians. By killing transparency and competition, crony capitalism is harmful to free enterprise, and economic growth. And by substituting special interests for the public interest, it is harmful to democratic expression.”

His anxieties were remarkably like those expressed in Britain more than 200 years earlier, when the East India Company had become synonymous with ostentatious wealth and political corruption: “What is England now?” fumed the Whig litterateur Horace Walpole, “A sink of Indian wealth.” In 1767 the company bought off parliamentary opposition by donating £400,000 to the Crown in return for its continued right to govern Bengal. But the anger against it finally reached ignition point on 13 February 1788, at the impeachment, for looting and corruption, of Clive’s successor as governor of Bengal, Warren Hastings. It was the nearest the British ever got to putting the EIC on trial, and they did so with one of their greatest orators at the helm – Edmund Burke.

Burke, leading the prosecution, railed against the way the returned company “nabobs” (or “nobs”, both corruptions of the Urdu word “Nawab”) were buying parliamentary influence, not just by bribing MPs to vote for their interests, but by corruptly using their Indian plunder to bribe their way into parliamentary office: “To-day the Commons of Great Britain prosecutes the delinquents of India,” thundered Burke, referring to the returned nabobs. “Tomorrow these delinquents of India may be the Commons of Great Britain.”

Burke thus correctly identified what remains today one of the great anxieties of modern liberal democracies: the ability of a ruthless corporation corruptly to buy a legislature. And just as corporations now recruit retired politicians in order to exploit their establishment contacts and use their influence, so did the East India Company. So it was, for example, that Lord Cornwallis, the man who oversaw the loss of the American colonies to Washington, was recruited by the EIC to oversee its Indian territories. As one observer wrote: “Of all human conditions, perhaps the most brilliant and at the same time the most anomalous, is that of the Governor General of British India. A private English gentleman, and the servant of a joint-stock company, during the brief period of his government he is the deputed sovereign of the greatest empire in the world; the ruler of a hundred million men; while dependant kings and princes bow down to him with a deferential awe and submission. There is nothing in history analogous to this position …”

Hastings survived his impeachment, but parliament did finally remove the EIC from power following the great Indian Uprising of 1857, some 90 years after the granting of the Diwani and 60 years after Hastings’s own trial. On 10 May 1857, the EIC’s own security forces rose up against their employer, and on successfully crushing the insurgency, after nine uncertain months, the company distinguished itself for a final time by hanging and murdering tens of thousands of suspected rebels in the bazaar towns that lined the Ganges – probably the most bloody episode in the entire history of British colonialism.

Enough was enough. The same parliament that had done so much to enable the EIC to rise to unprecedented power, finally gobbled up its own baby. The British state, alerted to the dangers posed by corporate greed and incompetence, successfully tamed history’s most voracious corporation. In 1859, it was again within the walls of Allahabad Fort that the governor general, Lord Canning, formally announced that the company’s Indian possessions would be nationalised and pass into the control of the British Crown. Queen Victoria, rather than the directors of the EIC, would henceforth be ruler of India. […]

For the corporation – a revolutionary European invention contemporaneous with the beginnings of European colonialism, and which helped give Europe its competitive edge – has continued to thrive long after the collapse of European imperialism. When historians discuss the legacy of British colonialism in India, they usually mention democracy, the rule of law, railways, tea and cricket. Yet the idea of the joint-stock company is arguably one of Britain’s most important exports to India, and the one that has for better or worse changed South Asia as much as any other European idea. Its influence certainly outweighs that of communism and Protestant Christianity, and possibly even that of democracy.

Companies and corporations now occupy the time and energy of more Indians than any institution other than the family. This should come as no surprise: as Ira Jackson, the former director of Harvard’s Centre for Business and Government, recently noted, corporations and their leaders have today “displaced politics and politicians as … the new high priests and oligarchs of our system”. Covertly, companies still govern the lives of a significant proportion of the human race.

The 300-year-old question of how to cope with the power and perils of large multinational corporations remains today without a clear answer: it is not clear how a nation state can adequately protect itself and its citizens from corporate excess. As the international subprime bubble and bank collapses of 2007-2009 have so recently demonstrated, just as corporations can shape the destiny of nations, they can also drag down their economies. In all, US and European banks lost more than $1tn on toxic assets from January 2007 to September 2009. What Burke feared the East India Company would do to England in 1772 actually happened to Iceland in 2008-11, when the systemic collapse of all three of the country’s major privately owned commercial banks brought the country to the brink of complete bankruptcy. A powerful corporation can still overwhelm or subvert a state every bit as effectively as the East India Company did in Bengal in 1765.

Corporate influence, with its fatal mix of power, money and unaccountability, is particularly potent and dangerous in frail states where corporations are insufficiently or ineffectually regulated, and where the purchasing power of a large company can outbid or overwhelm an underfunded government. This would seem to have been the case under the Congress government that ruled India until last year. Yet as we have seen in London, media organisations can still bend under the influence of corporations such as HSBC – while Sir Malcolm Rifkind’s boast about opening British embassies for the benefit of Chinese firms shows that the nexus between business and politics is as tight as it has ever been.

The East India Company no longer exists, and it has, thankfully, no exact modern equivalent. Walmart, which is the world’s largest corporation in revenue terms, does not number among its assets a fleet of nuclear submarines; neither Facebook nor Shell possesses regiments of infantry. Yet the East India Company – the first great multinational corporation, and the first to run amok – was the ultimate model for many of today’s joint-stock corporations. The most powerful among them do not need their own armies: they can rely on governments to protect their interests and bail them out. The East India Company remains history’s most terrifying warning about the potential for the abuse of corporate power – and the insidious means by which the interests of shareholders become those of the state. Three hundred and fifteen years after its founding, its story has never been more current.


“We forgot.”

When somebody asked Alexander Hamilton why the Framers hadn’t mentioned God in the Constitution, his answer was deadpan hilarious: “We forgot.”
~ Kurt Andersen

The 18th century captures the American imagination, for reasons that are obvious and less so. It was a pivotal point and many were aware of it at the time. Over the preceding centuries, Feudalism slowly declined for numerous reasons. The most obvious force of change was the enclosure movement that evicted peasants from their land, their homes, and their communities.

This created a teeming population of landless peasants who were homeless, unemployed, and often starving. This sent waves of refugees heading for the cities and later the colonies. It was a direct attack on the rights of commoners (what the American colonists referred to as the rights of Englishmen). With the loss of Feudalism, there was the loss of the Church’s traditional role and intimate participation in the daily lives of communities (see Dancing in the Streets by Barbara Ehrenreich). There also was the compounding impact of the Renaissance, Peasants’ Revolt, Reformation, English Civil War, Scientific Revolution, Enlightenment, and expanding colonial imperialism.

Yet, even as the early revolutionary era came to a close, much of the ancient world or the immediate sense of its loss was still fresh in living memory, at least for the older generations. Post-Reformation religious war went hand in hand with political and economic radicalism with early signs of class war, populism, and communism showing up as Feudalism waned, from the Peasants’ Revolt to the English Civil War. Immediately preceding the American Revolution, there was the First Great Awakening which kept alive the earlier radicalism while pushing it to further extremes, this being the initial motivation for the separation of church and state since the religious dissenters were being excluded and oppressed by Anglican state power.

Yet most Americans at the time weren’t formally religious. There were few ministers in the colonies, especially in rural areas. Americans had low rates of church attendance, with rates not increasing until the 19th century (see The Churching of America by Roger Finke and Rodney Stark). It was precisely this lack of formal religion that fed into a new rabid free-for-all where anyone’s religiosity was as good as another’s, where anyone could become a preacher and start their own sect or turn to whatever ideology they preferred, religious or anti-religious. This is how the influences of Reformation and Enlightenment melded together, creating a force greater than either alone.

Even so, the First Great Awakening didn’t directly impact many Americans. Those who heard the fiery preachers of the time were a small part of the population, although in certain cities it led to great tumult. The effect was uneven, with some places unaware that a change was happening. It was a slow build-up of unrest as the American colonies moved toward revolution. It wasn’t so much religion itself as broader cultural shifts. The radical religious were getting louder, but so were the radical irreligious. Both hereticism and secularism became virulent, sometimes flowing together as a single force, but not always.

Also, none of it fit into clear class lines. The upper classes were filled with unitarians, universalists, deists, and secularists — this was seen in the founding generation but began to take hold earlier, such as with Thomas Morton and Roger Williams. But some of the most heretical anti-Christians emerged from the working class, the most famous being Thomas Paine, though there were several other influential figures. The growing rift was not so much between Christianity and atheism as between establishment power and the challenges of dissent. On either side of the divide, many voices found themselves formed into a new alignment, voices that otherwise would have been antagonistic.

As with our present moment, the era preceding revolution was a struggle between the contented and the restless, with the former becoming more authoritarian and the latter more radicalized. That schism is a wound that has never healed. The American soul remains fractured. The caricature of culture war spectacle won’t save us. It’s not about religion. The American Founders didn’t forget about God. It wasn’t the issue that mattered then nor that matters now. Religiosity and heresy, even when they take center stage, are always expressions of or proxies for something else.

* * *

Fantasyland: How America Went Haywire:
A 500-Year History

by Kurt Andersen
pp. 56-59

Chapter 8
Meanwhile, in the Eighteenth-Century Reality-Based Community

THE TWENTY-FOUR-YEAR-OLD PHENOM GEORGE WHITEFIELD arrived in America for the first time just before All Saints’ Day, Halloween 1739. The first major stop on his all-colonies tour was Philadelphia. Crowds equal to half the inhabitants of the city gathered to see each performance. Among them was the not-so-religious young printer and publisher Benjamin Franklin.

Franklin was astonished by how Whitefield could “bring men to tears by pronouncing Mesopotamia,” and “how much they admired and respected him, notwithstanding his common Abuse of them, by assuring them they were naturally half Beasts and half Devils.” The publisher introduced himself on the spot and signed up to print a four-volume set of Whitefield’s journals and sermons, which became an enormous bestseller. But Franklin’s only awakening during the Great Awakening was to the profits available by pandering to American religionists. Over the next three years, he published an evangelical book almost monthly. With Whitefield himself, Franklin wrote, he formed “no religious Connection.”

Franklin and his fellow Founders’ conceptions of God tended toward the vague and impersonal, a Creator who created and then got out of the way. The “enthusiasts” of the era—channelers of the Holy Spirit, elaborate decoders of the divine plan, proselytizers—were not their people. John Adams fretted in a letter to Jefferson that his son John Quincy might “retire…to study prophecies to the end of his life.” Adams wrote to a Dutch friend that the Bible consists of “millions of fables, tales, legends,” and that Christianity had “prostituted” all the arts “to the sordid and detestable purposes of superstition and fraud.” George Washington “is an unbeliever,” Jefferson once reckoned, and only “has divines constantly about him because he thinks it right to keep up appearances.” Jefferson himself kept up appearances by attending church but instructed his seventeen-year-old nephew to “question with boldness even the existence of a god; because, if there be one, he must more approve the homage of reason, than that of blindfolded fear.” He considered religions “all alike, founded upon fables and mythologies,” including “our particular superstition,” Christianity. One winter in the White House, President Jefferson performed an extraordinary act of revisionism: he cut up two copies of the New Testament, removing all references to miracles, including Christ’s resurrection, and called the reassembled result The Life and Morals of Jesus of Nazareth. “As to Jesus of Nazareth,” Franklin wrote just before he died, “I have…some doubts as to his Divinity; though it is a question I do not dogmatize upon…and I think it needless to busy myself with it now, when I expect soon an opportunity of knowing the truth with less trouble.”

When somebody asked Alexander Hamilton why the Framers hadn’t mentioned God in the Constitution, his answer was deadpan hilarious: “We forgot.”

Yet ordinary American people were apparently still much more religious than the English. In 1775 Edmund Burke warned his fellow members of Parliament that the X factor driving the incipient colonial rebellion was exactly that, the uppity Americans’ peculiar ultra-Protestant zeal. For them, Burke said, religion “is in no way worn out or impaired.”

Thus none of the Founders called himself an atheist. Yet by the standards of devout American Christians, then and certainly now, most were blasphemers. In other words, they were men of the Enlightenment, good-humored seculars who mainly chose reason and science to try to understand the nature of existence, the purposes of life, the shape of truth. Jefferson said Bacon, Locke, and Newton were “the three greatest men that have ever lived, without any exception.” Franklin, close friends with the Enlightenment philosophe Voltaire,* was called “the modern Prometheus” by the Enlightenment philosopher Immanuel Kant, and Adams was friends with the Enlightenment philosopher David Hume, whose 1748 essay “Of Miracles” was meant to be “an everlasting check to all kinds of superstitious delusion.” America’s political founders had far more in common with their European peers than with the superstar theologians barnstorming America to encourage superstitious delusion. “The motto of enlightenment,” Kant wrote the year after America won its war of independence, “is…Sapere aude!” or Dare to know. “Have courage to use your own understanding!”

For three centuries, the Protestant Reformation and the emerging Enlightenment were strange bedfellows, symbiotically driving the radical idea of freedom of thought, each paving the way for the success of the other. Protestants decided they could reject the Vatican and start their own religion, and they continued rejecting the authority and doctrines of each new set of Protestant bosses and started their own new religions again and again. Enlightenment thinkers took freedom of thought a step further, deciding that people were also free to put supernatural belief and religious doctrine on the back burner or reject them altogether.

But the Enlightenment part of this shift in thinking was a double-edged sword. The Enlightenment liberated people to believe anything whatsoever about every aspect of existence—true, false, good, bad, sane, insane, plausible, implausible, brilliant, stupid, impossible. Its optimistic creators and enthusiasts ever since have assumed that in the long run, thanks to an efficient marketplace of ideas, reason would win. The Age of Reason had led to the Enlightenment, smart rationalists and empiricists were behind both, so…right?

No. “The familiar and often unquestioned claim that the Enlightenment was a movement concerned exclusively with enthroning reason over the passions and all other forms of human feeling or attachment, is…simply false,” writes the UCLA historian Anthony Pagden in The Enlightenment: And Why It Still Matters. “The Enlightenment was as much about rejecting the claims of reason and of rational choice as it was about upholding them.” The Enlightenment gave license to the freedom of all thought, in and outside religion, the absurd and untrue as well as the sensible and true. Especially in America. At the end of the 1700s, with the Enlightenment triumphant, science ascendant, and tolerance required, craziness was newly free to show itself. “Alchemy, astrology…occult Freemasonry, magnetic healing, prophetic visions, the conjuring of spirits, usually thought sidelined by natural scientists a hundred years earlier,” all revived, the Oxford historian Keith Thomas explains, their promoters and followers “implicitly following Kant’s injunction to think for themselves. It was only in an atmosphere of enlightened tolerance that such unorthodox cults could have been openly practiced.”

Kant himself saw the conundrum the Enlightenment faced. “Human reason,” he wrote in The Critique of Pure Reason, “has this peculiar fate, that in one species of its knowledge”—the spiritual, the existential, the meaning of life—“it is burdened by questions which…it is not able to ignore, but which…it is also not able to answer.” Americans had the peculiar fate of believing they could and must answer those religious questions the same way mathematicians and historians and natural philosophers answered theirs.

* “As long as there are fools and rascals,” Voltaire wrote in 1767, “there will be religions. [And Christianity] is assuredly the most ridiculous, the most absurd…religion…”

Political Right Rhetoric

The following is an accurate description of the political rhetoric of the right, the labels and language in use (from a Twitter thread). It is by Matthew A. Sears, an Associate Professor of Classics and Ancient History at the University of New Brunswick.

1. “I’m neither a liberal nor a conservative.” = “I’m totally a conservative.”

2. “I’m a radical centrist.” = “I’m totally a conservative.”

3. “I’m a classical liberal.” = “I’m a neoliberal who’s never read any classical liberals.”

4. “I’m not usually a fan of X.” *Retweets and agrees with everything X says.*

5. “I’m a free speech absolutist.” = “I’m glad racists are now free to speak publicly.”

6. “I believe in confronting views one finds offensive.” *Whines about being bullied by lefties.*

7. “My views are in the minority and aren’t given a fair hearing.” *Buys the best-selling book in the world.*

8. “Where else would you rather live?” = “Canada is perfect for me, and it better not frigging change to be better for anyone else.”

9. “Nazis should be able to speak and given platforms so we can debate them.” *Loses mind if someone says ‘fuck’ to a Nazi.*

10. “The left has taken over everything.” *Trump is president and the Republicans control Congress.*

And, finally, the apex of Twitterspeak:

11. “The left are tyrants and have taken over everything and refuse to hear other perspectives and pose a dire threat to the republic and Western Civilization.” *Ben Shapiro has over a million followers.*

I’d say treat this thread as an Enigma Machine for Quillette-speak/viewpoint-diversity-speak/reverse-racism-speak/MRA-speak, but none of these chaps are enigmas.

I can’t believe I have to add this, but some are *outraged* by this thread: I don’t mind if you’re *actually* centrist or conservative. I just mind if you *pretend to be* left/centrist for rhetorical/media cred/flamewar purposes, while *only* taking conservative stances. Sheesh

Like, I’m pretty left-wing on many issues these days. It would be sneaky of me to identify as “conservative” or “classical liberal” or whatever only to dump on all their ideas and always support opposing ideas. A left-winger or centrist is what a left-winger or centrist tweets.

James Taoist added:

12. “I’m a strict Constitutionalist” = “I’m as racist as fuck.”

Hunger for Connection

“Just as there are mental states only possible in crowds, there are mental states only possible in privacy.”

Those are the words of Sarah Perry from Luxuriating in Privacy. I came across the quote from a David Chapman tweet. He then asks, “Loneliness epidemic—or a golden age of privacy?” With that lure, I couldn’t help but bite.

I’m already familiar with Sarah Perry’s writings at Ribbonfarm. I had even left an earlier comment at the piece the quote comes from, although I had forgotten about it. In the post, she begins with links to some of her previous commentary, the first one (Ritual and the Consciousness Monoculture) having been my introduction to her work. I referenced it in my post Music and Dance on the Mind, and it does indeed connect to the above thought on privacy.

In that other post by Perry, she discusses Keeping Together in Time by William H. McNeill. His central idea is “muscular bonding” that creates, maintains, and expresses a visceral sense of group-feeling and fellow-feeling. This can happen through marching, dancing, rhythmic movements, drumming, chanting, choral singing, etc. (for example, see: Choral Singing and Self-Identity). McNeill quotes A. R. Radcliffe-Brown on the Andaman islanders: “As the dancer loses himself in the dance, as he becomes absorbed in the unified community, he reaches a state of elation in which he feels himself filled with energy or force immediately beyond his ordinary state, and so finds himself able to perform prodigies of exertion” (Kindle Locations 125-126).

The individual is lost, at least temporarily, an experience humans are drawn to in many forms. Individuality is tiresome, and we moderns feel compelled to take a vacation from it. Having forgotten earlier ways of being, maybe privacy is the closest most of us get to lowering our stressful defenses of hyper-individualistic pose and performance. The problem is that privacy so easily reinforces the very individualistic isolation that drains us of energy.

This might create the addictive cycle that Johann Hari discussed in Chasing the Scream, and it would relate to the topic of depression in his most recent book, Lost Connections. He makes a strong argument about the importance of relationships of intimacy, bonding, and caring (some communities have begun to take this issue seriously; others deem that even higher levels of change are required, radical and revolutionary). In particular, the rat park research is fascinating. The problem with addiction is that it simultaneously relieves the pain of our isolation while further isolating us. Or at least this is what happens in a punitive society with weak community and a weak culture of trust. For that reason, we should look to other cultures for comparison. In some traditional societies, there is a greater balance and freedom to choose. I specifically had the Piraha in mind, as described by Daniel Everett.

The Piraha are a prime example of how not all cultures have a dualistic conflict between self and community, between privacy and performance. Their communities are loosely structured and the individual is largely autonomous in how and with whom they use their time. They lack much in the way of formal social structure, since there are no permanent positions of hierarchical authority (e.g., no tribal council of elders), although any given individual might temporarily take a leadership position in order to help accomplish an immediate task. Nor do they have much in the way of ritual or religion. It isn’t an oppressive society.

Accordingly, Everett observes how laid back, relaxed, and happy they seem. Depression, anxiety, and suicide appear foreign to them. When he told them about a depressed family member who killed herself, the Piraha laughed because they assumed he was joking. There was no known case of suicide in the tribe. Even more interesting is that, growing up, the Piraha don’t exhibit transitional periods such as the terrible twos or teenage rebelliousness. They simply go from being weaned to joining adult activities, with no one telling them what to do.

The modern perceived conflict between group and individual might not be a universal and intrinsic aspect of human society. But it does seem a major issue for WEIRD societies in particular. Maybe it has to do with how ego-bound our sense of identity is. The other thing the Piraha lack is a permanent, unchanging self-identity, because an event such as a meeting with a spirit in the jungle might lead to a change of name and, to the Piraha, the person who went by the previous name is no longer there. They feel no need to defend their individuality because any given individual self can be set aside.

It is hard for Westerners, and Americans most of all, to imagine a society this far different. It is outside the mainstream capacity of imagining what is humanly possible. It’s similar to why so many people reject out of hand such theories as Julian Jaynes’ bicameral mind. Such worldviews simply don’t fit into what we know. But maybe this sense of conflict we cling to is entirely unnecessary. If so, why do we feel such conflict is inevitable? And why do we value privacy so highly? What is it that we seek from being isolated and alone? What is it that we think we have lost that needs to be regained? To help answer these questions, I’ll present a quote by Julian Jaynes that I included in writing Music and Dance on the Mind — from his book, which Perry is familiar with:

“Another advantage of schizophrenia, perhaps evolutionary, is tirelessness. While a few schizophrenics complain of generalized fatigue, particularly in the early stages of the illness, most patients do not. In fact, they show less fatigue than normal persons and are capable of tremendous feats of endurance. They are not fatigued by examinations lasting many hours. They may move about day and night, or work endlessly without any sign of being tired. Catatonics may hold an awkward position for days that the reader could not hold for more than a few minutes. This suggests that much fatigue is a product of the subjective conscious mind, and that bicameral man, building the pyramids of Egypt, the ziggurats of Sumer, or the gigantic temples at Teotihuacan with only hand labor, could do so far more easily than could conscious self-reflective men.”

Considering that, it could be argued that privacy is part of the same social order, ideological paradigm, and reality tunnel that tires us out so much in the first place. Endlessly without respite, we feel socially compelled to perform our individuality. And even in retreating into privacy, we go on performing our individuality for our own private audience, as played out on the internalized stage of self-consciousness that Jaynes describes. That said, even though the cost is high, it leads to great benefits for society as a whole. Modern civilization wouldn’t be possible without it. The question is whether the costs outweigh the benefits and also whether the costs are sustainable or self-destructive in the long term.

As Eli wrote in the comments section to Luxuriating in Privacy: “Privacy isn’t an unalloyed good. As you mention, we are getting ever-increasing levels of privacy to “luxuriate” in. But who’s to say we’re not just coping with the change modernity constantly imposes on us? Why should we elevate the coping mechanism, when it may well be merely a means to lessen the pain of an unnecessarily “alienating” constructed environment.” And “isn’t the tiresomeness of having to model the social environment itself contingent on the structural precariousness of one’s place in an ambiguous, constantly changing status hierarchy?”

Still, I do understand where Perry is coming from, as I’m very much an introvert who values my alone time and can be quite jealous of my privacy, although I can’t say that close and regular social contact “fills me with horror.” Having lived alone for years in apartments and barely knowing my neighbors, I spend little time at my ‘home’ and instead choose to regularly socialize with my family at my parents’ house. Decades of depression have made me acutely aware of the double-edged sword of privacy.

Let me respond to some specifics of Perry’s argument. “Consider obesity,” she writes. “A stylized explanation for rising levels of overweight and obesity since the 1980s is this: people enjoy eating, and more people can afford to eat as much as they want to. In other words, wealth and plenty cause obesity.” There are some insightful comparisons of eating practices. Not all modern societies with equal access to food have equal levels of obesity. Among many other health problems, obesity can result from stress because our bodies prepare for challenging times by accumulating fat reserves. And if there is enough stress, studies have found, this is epigenetically passed on to children.

As a contrast, consider the French culture surrounding food. The French don’t eat much fast food and don’t tend to eat or drink on the go. It is more common for them to sit down to enjoy their coffee in the morning, rather than putting it in a travel mug to drink on the way to work. Also, they are more likely to take long lunches in order to eat leisurely, and they typically do so with others. For the French, the expectation is that meals are to be enjoyed as a social experience, and so they organize their entire society accordingly. Even though they eat many foods that some consider unhealthy, they don’t have the same high rates of stress-related diseases as do Americans.

An even greater contrast comes from looking once again at the Piraha. They live in an environment of immense abundance, and it requires little work to attain sustenance. In a few hours of work, an individual can get enough food to feed an extended family for multiple meals. They don’t worry about going hungry and yet, for various reasons, will choose not to eat for extended periods of time when they wish to spend their time in other ways, such as relaxing or dancing. They impose a feast-and-fast lifestyle on themselves, a typical pattern for hunter-gatherers. As with the French, when the Piraha have a meal, it is very much a social event. Unsurprisingly, the Piraha are slim and trim, muscular and healthy. They don’t suffer from stress-related physical and mental conditions, certainly not obesity.

Perry argues that, “Analogized to privacy, perhaps the explanation of atomization is simply that people enjoy privacy, and can finally afford to have as much as they want. Privacy is an economic good, and people show a great willingness to trade other goods for more privacy.” Using Johann Hari’s perspective, I might rephrase it: Addiction is economically profitable within the hyper-individualism of capitalist realism, and people show a difficult-to-control craving that causes them to pay high costs to feed their addiction. Sure, temporarily alleviating the symptoms makes people feel better. But what is it a symptom of? That question is key to understanding. I’m persuaded that the issue at hand is disconnection, isolation, and loneliness. So much else follows from that.

Explaining the title of her post, Perry writes: “One thing that people are said to do with privacy is to luxuriate in it. What are the determinants of this positive experience of privacy, of privacy experienced as a thing in itself, rather than through violation?” She goes on to describe the features of privacy, various forms of personal space and enclosure. Of course, Julian Jaynes argued that the ultimate privacy is the construction of individuality itself, the experience of space metaphorically internalized and interiorized. Further development of privacy, however, is a rather modern invention. For example, it wasn’t until recent centuries that private bedrooms became common, having been popularized in Anglo-American culture by Quakers. Before that, full privacy was a rare experience, far from being considered a human necessity or human right.

But we have come to take privacy for granted; not talking about certain details is a central part of privacy itself. As Perry observes: “Everybody knows that everybody poops. Still, you’re not supposed to poop in front of people. The domain of defecation is tacitly edited out of our interactions with other people: for most social purposes, we are expected to pretend that we neither produce nor dispose of bodily wastes, and to keep any evidence of such private. Polite social relations exclude parts of reality by tacit agreement; scatological humor is a reminder of common knowledge that is typically screened off by social agreement. Sex and masturbation are similar.”

Defecation is a great example. There is no universal experience of the privatization of the act of pooping. In earlier Europe, relieving oneself in public was common and considered well within social norms. It was a slow ‘civilizing’ process to teach people to be ashamed of bodily functions, even simple things like farting and belching in public (there are a number of interesting books on the topic). I was intrigued by Susan P. Mattern’s The Prince of Medicine. She describes how almost everything in the ancient world was a social experience. Even taking a shit was an opportunity to meet and chat with one’s family, friends, and neighbors. They apparently felt no drain of energy or need to perform in their social way of being in the world. It was relaxed and normal to them, simply how they lived, and they knew nothing else.

Also, sex and masturbation haven’t always been exclusively private acts. We have little knowledge of sex in the archaic world. Jaynes noted that sexuality wasn’t treated as anything particularly concerning or worrisome during the bicameral era. Obsession with sex, positive or negative, more fully developed during the Axial Age. As late as the feudal era, heavily Christianized Europe offered little opportunity for privacy and maintained a relatively open attitude about sexuality during many public celebrations, specifically Carnival, and people spent an amazing amount of their time in such celebrations. Barbara Ehrenreich describes this ecstatic communality in Dancing in the Streets. Like the Piraha, these earlier Europeans had a more social and fluid sense of identity.

Let me finish by responding to Perry’s conclusion: “As I wrote in A Bad Carver, social interaction has increasingly become “unbundled” from other things. This may not be a coincidence: it may be that people have specifically desired more privacy, and the great unbundling took place along that axis especially, in response to demand. Modern people have more room, more autonomy, more time alone, and fewer social constraints than their ancestors had a hundred years ago. To scoff at this luxury, to call it “alienation,” is to ignore that it is the choices of those who are allegedly alienated that create this privacy-friendly social order.”

There is no doubt about what people desire: in any given society, most people desire whatever they are acculturated to desire. Example after example of this can be found in social science research, the anthropological literature, and classical studies. It’s not obvious to me that there is any evidence that modern people have fewer social constraints. What is clear is that we have different social constraints, and that difference seems to have led to immense stress, anxiety, depression, and fatigue. Barbara Ehrenreich discusses the rise in depression in particular, as does Mark Fisher in his work on capitalist realism (I quote him and others here). The studies on WEIRD cultures are also telling (see: Urban Weirdness and Dark Triad Domination).

The issue isn’t simply what choices we make but what choices we are offered and denied, what choices we can and cannot perceive or even imagine. And that relates to how we lose contact with the human realities of other societies that embody other possibilities not chosen or even considered within the constraints of our own. We are disconnected not only from others within our society. Our WEIRD monocultural dominance has isolated us also from other expressions of human potential.

We luxuriate in privacy because our society offers us few other choices, like a choice between junk food and starvation, in which case junk food tastes delicious. For most modern Westerners, privacy is nothing more than a temporary escape from an overwhelming world. But what we most deeply hunger for is genuine connection.

“…consciousness is itself the result of learning.”

As above, so below
by Axel Cleeremans

A central aspect of the entire hierarchical predictive coding approach, though this is not readily apparent in the corresponding literature, is the emphasis it puts on learning mechanisms. In other works (Cleeremans, 2008, 2011), I have defended the idea that consciousness is itself the result of learning. From this perspective, agents become conscious in virtue of learning to redescribe their own activity to themselves. Taking the proposal that consciousness is inherently dynamical seriously opens up the mesmerizing possibility that conscious awareness is itself a product of plasticity-driven dynamics. In other words, from this perspective, we learn to be conscious. To dispel possible misunderstandings of this proposal right away, I am not suggesting that consciousness is something that one learns like one would learn about the Hundred Years War, that is, as an academic endeavour, but rather that consciousness is the result (vs. the starting point) of continuous and extended interaction with the world, with ourselves, and with others. The brain, from this perspective, continuously (and unconsciously) learns to anticipate the consequences of its own activity on itself, on the environment, and on other brains, and it is from the practical knowledge that accrues in such interactions that conscious experience is rooted. This perspective, in short, endorses the enactive approach introduced by O’Regan and Noë (2001), but extends it both inwards (the brain learning about itself) and further outwards (the brain learning about other brains), so connecting with the central ideas put forward by the predictive coding approach to cognition. In this light, the conscious mind is the brain’s (implicit, enacted) theory about itself, expressed in a language that other minds can understand.

The theory rests on several assumptions and is articulated over three core ideas. A first assumption is that information processing as carried out by neurons is intrinsically unconscious. There is nothing in the activity of individual neurons that make it so that their activity should produce conscious experience. Important consequences of this assumption are (1) that conscious and unconscious processing must be rooted in the same set of representational systems and neural processes, and (2) that tasks in general will always involve both conscious and unconscious influences, for awareness cannot be “turned off” in normal participants.

A second assumption is that information processing as carried out by the brain is graded and cascades (McClelland, 1979) in a continuous flow (Eriksen & Schultz, 1979) over the multiple levels of a heterarchy (Fuster, 2008) extending from posterior to anterior cortex as evidence accumulates during an information processing episode. An implication of this assumption is that consciousness takes time.

The third assumption is that plasticity is mandatory: The brain learns all the time, whether we intend to or not. Each experience leaves a trace in the brain (Kreiman, Fried, & Koch, 2002).
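Cleeremans offers no algorithm in these passages, but the mechanism he invokes is easy to caricature in code. Here is a minimal sketch of error-driven predictive learning; every name and number is invented for illustration, not drawn from his papers. An internal estimate continually predicts a noisy input stream and is nudged by the prediction error, so that learning amounts to the gradual shrinking of surprise.

```python
# Toy sketch of error-driven predictive learning (illustrative only,
# not Cleeremans's model): an internal estimate predicts a noisy input
# stream and is corrected by its prediction error at every step.

import random

def predictive_loop(steps=200, learning_rate=0.05):
    estimate = 0.0       # the system's internal "belief"
    hidden_signal = 1.0  # the regularity out in the world
    errors = []
    for _ in range(steps):
        observation = hidden_signal + random.gauss(0, 0.1)  # noisy input
        error = observation - estimate     # bottom-up prediction error
        estimate += learning_rate * error  # plasticity is always on
        errors.append(abs(error))
    # Surprise shrinks as the estimate converges on the hidden regularity.
    print(f"mean |error|, first 20 steps: {sum(errors[:20]) / 20:.3f}")
    print(f"mean |error|, last 20 steps:  {sum(errors[-20:]) / 20:.3f}")

predictive_loop()
```

The point of the toy is only that nothing in the loop is conscious: prediction, error, and update are all mechanical, which fits the first assumption above that information processing is intrinsically unconscious.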

The social roots of consciousness
by Axel Cleeremans

How does this ability to represent the mental states of other agents get going? While there is considerable debate about this issue, it is probably fair to say that one crucial mechanism involves learning about the consequences of the actions that one directs towards other agents. In this respect, interactions with the natural world are fundamentally different from interactions with other agents, precisely because other agents are endowed with unobservable internal states. If I let a spoon drop on a hard floor, the sound that results will always be the same, within certain parameters that only vary in a limited range. The consequences of my action are thus more or less entirely predictable. But if I smile to someone, the consequences that may result are many. Perhaps the person will smile back to me, but it may also be the case that the person will ignore me or that she will display puzzlement, or even that she will be angry at me. It all depends on the context and on the unobservable mental states that the person currently entertains. Of course, there is a lot I can learn about the space of possible responses based on my knowledge of the person, my history of prior interactions with her, and on the context in which my interactions take place. But the point is simply to say that in order to successfully predict the consequences of the actions that I direct towards other agents, I have to build a model of how these agents work. And this is complex because, unlike what is the case for interactions with the natural world, it is an inverse problem: The same action may result in many different reactions, and those different reactions can themselves be caused by many different internal states.

Based on these observations, one provocative claim about the relationships between self-awareness and one’s ability to represent the mental states of other agents (“theory of mind”, as it is called) is thus that theory of mind comes first, as the philosopher Peter Carruthers has defended. That is, it is in virtue of my learning to correctly anticipate the consequences of the actions that I direct towards other agents that I end up developing models of the internal states of such agents, and it is in virtue of the existence of such models that I become able to gain insight about myself (more specifically: about my self). Thus, by this view, self-awareness, and perhaps subjective experience itself, is a consequence of theory of mind as it develops over extended periods of social intercourse.
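The spoon-versus-smile contrast in the first excerpt is, at bottom, the difference between a forward problem and an inverse one. A toy Bayesian sketch makes the point; the states and probabilities below are invented for illustration, not drawn from Cleeremans. A single observed reaction leaves residual uncertainty over the hidden mental state that produced it, which is exactly why a model of the other agent is needed.

```python
# Toy illustration of the "inverse problem" described above: the same
# observed reaction can arise from many hidden states, so the observer
# must invert a forward model. All states and numbers are invented.

priors = {"friendly": 0.5, "distracted": 0.3, "hostile": 0.2}

# P(reaction | hidden state): the observer's forward model of the agent
likelihood = {
    "friendly":   {"smiles_back": 0.8, "ignores": 0.15, "frowns": 0.05},
    "distracted": {"smiles_back": 0.2, "ignores": 0.75, "frowns": 0.05},
    "hostile":    {"smiles_back": 0.1, "ignores": 0.3,  "frowns": 0.6},
}

def infer(reaction):
    """Bayes' rule: invert the forward model to get P(state | reaction)."""
    joint = {s: priors[s] * likelihood[s][reaction] for s in priors}
    total = sum(joint.values())
    return {s: round(p / total, 3) for s, p in joint.items()}

# Dropping a spoon is a forward problem with one predictable outcome;
# a smile that gets ignored leaves genuine uncertainty over its cause:
print(infer("ignores"))  # "distracted" is likely, "hostile" not ruled out
```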

Trump Tower and the Public Square

In the past, a populist was someone who was popular or who held popular views. A populist, as such, was a man (or woman) of the people or at least one aligned with them. So, why do so many in the media, specifically in the corporate media, repeatedly call Trump a ‘populist’ when he isn’t popular among the populace? The majority of voters didn’t vote for him. And according to numerous polls, at no point have most Americans supported, agreed with, or had even a remotely positive view of him.

Trump was elected by the electoral college, which was designed to suppress democracy by protecting the interests and power of the elite. And there are few Americans more elite than Trump, someone who not only has been a key figure among the capitalist class and within corporate media but also was a close friend and major supporter of the Clintons as they took over the Democratic Party, shifting it in a right-wing and reactionary direction.

Behind the scenes, Trump was one of the anti-populist forces that helped remove any remaining democracy within the Democratic Party. Having made Democrats democratically impotent, he then turned his sights on the Republican Party, taking it over and pushing it even further to the extreme. It was a brilliant one-two punch, a brash show of elitist machinations. Trump was triumphant by using the system to gain control of the system. He was no outsider hoping to tear it all down, much less drain the swamp.

What is Trump symbolized by? Trump Tower. Not Trump Square. He is the ultimate product and embodiment of the rigid hierarchy of late-stage capitalism and plutocratic corporatocracy. The network is beginning to challenge that entrenched hierarchy, but it’s been a slow process. Trump’s coup is the last gasp of hierarchy as the system becomes dysfunctional and deranged, turning on itself.

The tower, the hierarchy, remains dominant. When the tower comes tumbling down, we will know about it. And it won’t come about by an anti-democratic economic, media, and political system placing a faux populist into power.

On a related note, I’ve spent the last couple of decades watching the local public space downtown be destroyed by local plutocratic business interests (and by the way, it is very much a Democratic stronghold). The pedestrian mall, built as part of a downtown renovation project, used to be a thriving public space and public forum where community members gathered and connected. But in recent years it was intentionally and systematically destroyed in service of the tower, quite literally as TIF-funded high-rises were built for the wealthy and the downtown was gentrified.

There was a public space informally known as The People’s Park and formally known as Blackhawk Park (named for Black Hawk, the native leader who fought to defend his home against powerful interests seeking to steal his people’s land). This park existed before the pedestrian mall’s construction. It was the center of the public space and gave expression to a thriving sense of community, but the tables and benches were removed. Now it feels like a dead zone, an open space in front of a looming glass edifice that no longer welcomes public use.

This power grab at the local level is mirrored by the power grab at the national and international level, including within supposed networks as the internet increasingly comes under the control of hierarchical transnational corporations. Hierarchy is ascendant like never before. We have barely begun to see the emergence of a network backlash. And the longer the backlash is suppressed, the more radical and revolutionary it will be once finally unleashed.


‘I expect things to get worse before they get better’, says historian Niall Ferguson
by Varghese K. George

Would it be useful to try to understand history as ongoing, cyclical, hierarchy-network swings?

It might be a little too neat. Large networks are complex systems, and they have emergent properties that are rather unpredictable. They are quite capable of sudden changes. The key here is that revolutionary networks like the Bolsheviks were capable of transforming, with amazing speed, into hierarchies of tremendous rigidity and centralisation. That hierarchical structure endured for 70 years, and then fell apart with extraordinary swiftness. I prefer to think of history as a somewhat erratic and chaotic process rather than as one characterised by cycles, or pendulum swings. That is why it is hard to predict history, and it does not operate in a way that submits to nice, neat laws.

You make some predictions and say the current phase of social and political chaos will last for some years.

If one compares our age with the period of the printing press, the striking thing is that there are many, many similarities, though the speed today is an order of magnitude faster. It took a hundred years in the 16th and 17th centuries, in the age of the printing press; now it takes 10 years. If you think about what happened in the 16th century, the printing press… when the Reformation started, it unleashed at least 130 years of religious conflict in Europe. It went on until the end of the Thirty Years’ War and the Peace of Westphalia. In my very rough analogy, we should expect our age’s ideological conflict to last about a tenth of that time. The age of the Internet, certainly the age of Facebook and Twitter, has given rise to a kind of ideological polarisation in many democracies. I would expect that process to continue and get worse for a whole period of conflict that is not as long as 130 years but perhaps 13 years. But this is a very rough analogy. This is about how these technological shocks, these innovations like the Internet or the printing press, change the structure of the public sphere and give rise to conflict, because of polarisation or violence… If you think of it in a rough way, we are having this 16th-17th century experience in the realm of democratic politics… but speeded up. That means I expect things to get worse before they get better. Because I don’t see any change in the state of affairs created by Facebook, YouTube and the rest soon.

Review: Even on the Internet, What’s Old Is New Again
by Jonathan A. Knee

The internet itself is a network of networks. The ability to communicate and transact across its vast reach is indeed unprecedented and represents the basic infrastructure of what has been termed the “network society.” Mr. Ferguson’s book does far more than simply track the use of the word “network” from its introduction in English language publications in the late 19th century, when it “was scarcely used,” to the modern day, when he points out that it appeared in 136 articles in The New York Times during just the first week of 2017. Rather he seeks to reframe the entirety of human history as an endless tug-of-war between eras in which powerful hierarchical institutions predominate (the Tower of the title) only to be undermined by the influence of emerging networks (the corresponding Square). In Professor Ferguson’s telling, these networks are invariably co-opted by reconstituted hierarchies and the process begins again.

For instance, Professor Ferguson argues it was the printing press that was largely responsible for three “network-based revolutions — the Reformation, the Scientific Revolution and the Enlightenment.” These were followed by a hundred-year period of hierarchical international order dominated by five hubs (Austria, Britain, France, Prussia and Russia) leading up to the First World War.

The new industrial, financial and communications networks that emerged during this time did not, however, overturn the hierarchical nature of things. This dominant structure survived both world wars, according to Professor Ferguson, with the mid-twentieth century actually representing the “zenith of hierarchy.” His account shows how the ability to navigate and influence these and other nascent networks determined which empires thrived in the reconfigured hierarchical orders.

Want to understand how history is made? Look for the networks
by David Marquand

Hierarchies, Ferguson argues, have been part of the human condition since the neolithic age. But in the 500 years since Gutenberg invented printing and Martin Luther pinned his 95 theses to the door of Wittenberg church, hierarchies have been challenged again and again by networks, through which like-minded people communicate with each other, independently of those set in authority over them. Sometimes hierarchies have crushed networks; sometimes networks have undermined hierarchies. But the tension between them has been constant and inescapable. […]

But despite the complexity of Ferguson’s story, the basic argument is clear. Though he doesn’t say it in so many words, it is curiously reminiscent of Thomas Hobbes’s Leviathan. For Ferguson, networks are more creative than hierarchies. Their members are more engaged than the hierarchies they confront. Without them, the world would be a harsher, bleaker and crueller place. But when hierarchies fall, and networks carry all before them, the result, too often, is an anarchic war of all against all—like Hobbes’s state of nature. Again and again, Ferguson reminds us, triumphant networks have run amok, plunging their societies into bloodshed. […]

The clear implication of these stories is that stable and legitimate rule depends on a symbiosis between Ferguson’s Square and his Tower: between networks and hierarchies. And half a millennium of human history shows that symbiosis is both extraordinarily difficult to achieve and extraordinarily difficult to maintain.

For most of the 16th and 17th centuries, the main threat to that symbiosis came from the fanatical, intolerant and often bloodthirsty religious networks that devastated central Europe. For most of the 18th, 19th and 20th centuries it came from more or less brutal hierarchists—Peter the Great, Napoleon, Lenin, Stalin, Hitler, Mao Zedong, Pol Pot, Kim Il-Sung and the like. In his brilliantly provocative final chapters, Ferguson shows that the wheel has now come full circle. The frenzied religious networks of the 16th century flourished in what he calls the “first networked era”: the age ushered in by the astonishingly rapid diffusion of print technology all over Europe. Today, he argues, we are living in the second networked age. Ours is the age of the internet, of Tim Berners-Lee’s world wide web and giants such as Facebook and Google. The speedy diffusion of information that these websites facilitate allow individuals to form themselves into networks more easily, and more globally, than ever before. A development that is having profound consequences for once stable, or at least predictable, democracies.

By that very token, though, it is also the age of cyber-warfare, sometimes conducted by hierarchical states, like Vladimir Putin’s Russia, and sometimes by networked individuals like Julian Assange. […]

As in the past, though, the network has quickly been taken over by a hierarchy; the square has become the tower. The most astonishing feature of the second networked age is an explosion of inequality. The returns from the network, he points out, “flow overwhelmingly to the insiders who own it.” Thus, Google is worth $660bn; 16 per cent of its shares are owned by its founders. Facebook is worth $441bn; 28 per cent of its shares are owned by its founder, Mark Zuckerberg. Zuckerberg and his ilk are not alone. They are scooping up a massive rent; and, for decades, successful rent-seeking by the super-rich has been a feature of economic life right across the developed world.

The great question for the future is whether it will be possible to assemble a social coalition of Ferguson’s outsiders to challenge the dominance of the super-rich. In other words can the network strike back? The obstacles are formidable. But it is worth remembering that though left-wing insurgent Bernie Sanders lost the Democratic nomination, he might well have won the presidency if the race had been between him and Trump in his tower. Sanders’s populist campaign might yet turn out to have been the first swallow of a bright summer.

Networks and Hierarchies in the Trump Era: An Interview with Niall Ferguson
by Davis Richardson

You say that these companies in Silicon Valley are decentralized, but it seems they’re very consolidated regarding capital and the concentration of data.

The paradox of Silicon Valley is that it proclaims a very decentralized network era in which cyberspace is inhabited by free and equal netizens; yet in practice, it’s created its own extraordinarily unequal hierarchy personified by the FANG companies and the people who own them. The rhetoric of Silicon Valley has been that we’re going to be more democratized by connectedness, but the reality is that large social networks are not very democratic; they actually magnify the existing inequalities in our society.

Does social media reinforce power structures throughout history?

Or creates a new version. It was new people who became the titans of the 19th century, the Carnegies and Rockefellers. In one sense, the giants of Silicon Valley, like Mark Zuckerberg and Jeff Bezos, are the equivalent to Andrew Carnegie and his contemporaries.

But in our time, we now have a network inequality projected onto an existing market inequality that amplifies it. To give an example, those who are in a position to take big, speculative positions in Bitcoin are already quite wealthy from the last generation of technology.

It’s reminiscent of Marx’s philosophy that the bourgeoisie is never fixed and subject to renewal.

There’s a consolation offered by large monopoly companies which is, “Don’t worry, we won’t be monopolies for too long. New giants will come and displace this.” And that’s the standard way in which Google and Amazon have fended off the anti-trust movement from the Democratic Party. But there’s never really been such a concentration of power in content publishing as now exists.

In the age of the printing press, it was a decentralized public sphere. Whereas what’s happened, thanks to how Google and Facebook have been run, is unique in that the public sphere is becoming highly concentrated through those network platforms. It does drive a real distortion of the public sphere because it doesn’t matter whether something’s true or false. William Randolph Hearst never had that type of market share, even at the height of his power, and I find it oddly disconcerting that the people running those companies act as if they weren’t massive content publishers.

“The Square and the Tower” — Augmenting and Modularizing the Algorithm (a Review and Beyond)
by Richard Reisman

Drawing on a long career as a systems analyst/engineer/designer, manager, entrepreneur and inventor, I have recently come to share much of Ferguson’s fear that we are going off the rails. He cites important examples like the 9/11 attacks, counterattacks, and ISIS, the financial meltdown of 2008, and, most concerning to me, the 2016 election as swayed by social media and hacking. However — discouraging as these are — he seems to take an excessively binary view of network structure, and to discount the ability of open networks to better reorganize and balance excesses and abuse. He argues that traditional hierarchies should reestablish dominance.

In that regard, I think Ferguson fails to see the potential for better ways to design, manage, use, and govern our networks — and to better balance the best of hierarchy and openness. To be fair, few technologists are yet focused on the opportunities that I see as reachable, and now urgently needed. […]

Ferguson’s title comes from his metaphor of the medieval city of Siena, with a large public square that serves as a marketplace and meeting place, and a high tower of government (as well as a nearby cathedral) that displays the power of those hierarchies. But as he elaborates, networks have complex architectures and governance rules that are far richer than the binary categories of either “network” (a peer-to-peer network with informal and emergent rules) or “hierarchy” (a constrained network with more formal directional rankings and restrictions on connectivity).

The crucial differences among all kinds of networks are in the rules (algorithms, code, policies) that determine which nodes connect, and with what powers. While his analysis draws out the rich variety of such structures, in many interesting examples, with diagrams, what he seems to miss is any suggestion of a new synthesis. […]

As Ferguson points out, our vaunted high-tech networks are controlled by corporate hierarchies (he refers to FANG, Facebook, Amazon, Netflix, and Google, and BAT, Baidu, Alibaba, and Tencent) — but subject to levels of government control that vary in the US, EU, and China. This corporate control is a source of tension and resistance to change — and a barrier to more emergent adaptation to changing needs and stressors (such as the Russian interference in our elections). These new monopolistic hierarchies extract high rents from the network — meaning us, the users — mostly in the form of advertising and sales of personal data.

‘The Square and the Tower’ a wobbly view of history
by Mike Fischer

In Ferguson’s hands, that disconnect covers everything and therefore explains nothing; his notion of hierarchy is so narrow and his definition of networks is so generic that the distinction between them becomes meaningless — particularly as Ferguson is forced to admit that “a hierarchy is just a special kind of network.”

What we get instead is a watered-down survey of how “networks” spurred by the printing press enabled Luther’s Reformation as well as ensuing secular revolution — before reactive “hierarchies” re-established precedence in the 19th century, thereafter themselves coming unglued following World War II.

Ferguson points to this more recent erosion in hierarchical power as cause rather than consequence of a new network revolution involving the Internet and social media, both of which make him nervous because of how readily they’ve been appropriated by populist demagogues on the left and right.

But as has been true of Ferguson before — one thinks of his insistence that the West’s “edge” can be explained by six “killer apps” — his hobby horse du jour sometimes rides roughshod over the facts.

How else, for example, to explain his bizarre view that because network analysis demonstrates that Paul Revere and Joseph Warren were more plugged in than their brethren, they “were the most important revolutionaries in Boston”? Or that it’s “doubtful” George Washington would have enjoyed the influence he did if he hadn’t been a Mason?

Neither claim is tested against the dense historical record suggesting that Washington — and Bostonians like the Adams cousins — were important because of their personal characteristics, unique talents, and ideas; for Ferguson, the content of one’s character and quality of one’s thought matter much less than being in the right place at the right time.

The Square and the Tower by Niall Ferguson review – a restless tour through power
by Andrew Anthony

The problem is that there are simply too many strands and too much disparate information for a coherent thesis to emerge. Indeed, such is Ferguson’s restless desire to uncover connectedness that he can sound like a conspiracy theorist, though he is at pains to distance himself from that perspective. As he notes in the preface, conspiracy theorists see networks as hidden elites in cahoots with the established power structure, while far more often, he argues, networks disrupt the status quo.

But in revisiting such conspiracist tales – the Illuminati and the Rothschilds, for example – he confuses as much as demystifies. The Illuminati, a small 18th-century German order that sought to disseminate Enlightenment ideals, came to be seen – falsely – as the orchestrators of the French Revolution, and, by the modern crank tendency, as the puppet-masters behind everything.

As Ferguson notes, the Illuminati survived by infiltrating the Freemasons, where they achieved little, ultimately collapsing and disappearing long before they were adopted by the lunatic fringe as the all-purpose sinister “they”. So what was their significance? Ferguson doesn’t really explain, other than to say that they were an example of the intellectual networks that were “an integral part of the complex historical process that led Europe from Enlightenment to Revolution to Empire”.

From someone who is not bashful about making bold statements, this is a deeply underwhelming conclusion. But it stands as the basis for his case about the ambiguous, not always progressive nature of networks. It’s an argument that takes in the house of Saxe-Coburg-Gotha, the Cambridge Apostles, the Taiping revolt, Henry Kissinger, al-Qaida and so much else besides, right up to Twitter and Donald Trump.

The effect is more dizzying than stimulating. Ferguson’s breadth of learning is often impressive, but by the end of the book I was little more secure in my understanding of what he was trying to get at than at the beginning.

Meyerism and Unity Church

One of the shows I’ve been following is The Path, about a growing spiritual movement and community called Meyerism (they don’t refer to themselves as a religion). It’s now in its third season. My interest has been sustained, even if the later seasons aren’t quite as good as the first.

The melodrama has increased over time, but that is probably to be expected. After all, it is about a close-knit faith group that transitions from a cult-like commune to a respectable large-scale organization. It’s a turbulent process with an existential crisis for the community involving a change of leadership. The portrayal of faith feels honest and fair to human nature, the way people struggle and care for what matters most to them.

One aspect I like about the show is the comparison and contrast with Christianity. As the organization grows, they decide to expand their reach to provide more services. Volunteer work and generosity are central to their spiritual vision. So, they invest in a major center in the nearby city, but it is more space than they immediately need. They share the space with others, including a Christian youth group. As a community, they are confident in their faith and so don’t see other groups, religious or otherwise, as competition.

Hawk, one of the young Meyerists who grew up in the faith, soon falls in love with Caleb, the young man who leads the youth group. The conflict is that Caleb’s father is a fire-and-brimstone preacher, not accepting of homosexuality. Hawk has to come to terms simultaneously with his own homosexual feelings and with those of others. This causes him to question what faith is, what separates a religion from a cult, and what it means to love someone no matter what. His parents raised him in Meyerism, but after his father became the new leader, his mother had her own crisis of faith. She has learned to be more accepting and offers Hawk her perspective.

This conflict for Hawk came up again in the most recent episode (ep. 10, The Strongest Souls). Hawk doesn’t want to lose Caleb, but Caleb is afraid of losing his family. Unlike Meyerism, Caleb’s fundamentalist church is not accepting in the slightest. Caleb is feeling unbearable pressure to enter a program to have his homosexuality cured, or whatever they do. In the hope of helping Caleb, Hawk looks for a gay-welcoming Christian church and finds himself sitting in a Unity service. That caught my attention. I grew up in the Unity Church (part of New Thought Christianity), and this is the first time I’ve seen it portrayed in any form within mainstream media.

I can be critical of Unity. It is as idealistic and as liberal a church as you are likely to find. As someone dealing with depression, the idealism I internalized in my youth has been a struggle for me. It has messed up my mind in many ways, a bright light casting a dark shadow. But at the same time, the Unity Church represents some of my happiest memories. I attended Unity youth camps and the experience blew me away. Unity theology is all about love and light. I was never taught any notion of sin, damnation, and hell. These were foreign concepts to me. It is a beautiful religion, and the positive feeling and support I felt growing up was immense. It showed me the world could be a different way. But returning to high school after one of those youth camps sent me into a tailspin of despair. The idealism of Unity didn’t match the unrelenting oppressiveness of the world I was forced to live in on a daily basis. Positive affirmations and visualizations were no match for the cynical culture that surrounded me. I felt unprepared to deal with adulthood in an utterly depraved world.

Yet that was long ago. For a moment in watching Hawk in that Unity service, I remembered what was so wonderful about the Unity Church. It’s a place where you will be accepted, even the lowest of the low. It’s a church that actually takes Jesus’ message of love seriously. If you think you hate Christianity for all the ugliness of fundamentalism, then you should visit a Unity Church. It has nothing to do with whether or not you want to believe in God or have a personal relationship with Jesus. I can’t say all Unity Churches are equal, as I’ve been to some that felt less openly welcoming than others. But the best of the Unity Churches can give you an experience like few other places.