Dark Triad Domination

It has been noted that some indigenous languages have words that can be interpreted as what, in English, is referred to as psychopathic, sociopathic, narcissistic, Machiavellian, etc. This is the territory of the Dark Triad. One Inuit language has the word ‘kunlangeta’, meaning “his mind knows what to do but he does not do it.” That could be thought of as describing a psychopath’s possession of cognitive empathy while lacking affective empathy. Or consider the Yoruba word ‘arankan’, which “is applied to a person who always goes his own way regardless of others, who is uncooperative, full of malice, and bullheaded.”

These are tribal societies. Immense value is placed on kinship loyalty, culture of trust, community survival, collective well-being, and public good. Even though they aren’t oppressive authoritarian states, the modern Western notion of hyper-individualism wouldn’t make much sense within these close-knit groups. Sacrifice of individual freedom and rights is a given under such social conditions, since individuals are intimately related to one another and physically dependent upon one another. Actually, it wouldn’t likely be experienced as sacrifice at all since it would simply be the normal state of affairs, the shared reality within which they exist.

This got me thinking about psychopathy and modern society. Research has found that, at least in some Western countries, the rate of psychopathy is not just high in prison populations but just as high among the economic and political elite. My father left upper management in a major corporation because of how ruthless the backstabbing was, a win-at-all-costs social Darwinism. This is what defines a country like the United States, as these social dominators are the most revered and emulated individuals. Psychopaths and their ilk, instead of being eliminated or banished, are promoted and empowered.

What occurred to me is that the difference for tribal societies is that hyper-individualism is seen not only as abnormal but as dangerous, and so intolerable. Maybe the heavy focus on individualism in the modern West inevitably leads to the psychopathological traits of the Dark Triad. If so, that would mean there is something severely abnormal and dysfunctional about Western societies (WEIRD: Western, Educated, Industrialized, Rich, Democratic). Psychopaths and narcissists, in particular, are the ultimate individualists and so they will be the ultimate winners in an individualistic culture — their relentless confidence and ruthless competitiveness, their Machiavellian manipulations and persuasive charm supporting a narcissistic optimism and leading to success.

There are a couple of ways of looking at this. First off, there might be something about urbanization itself or a correlated factor that exacerbates mental illness. Studies have found, for example, an increase in psychosis across the recent generations of city-dwellers — precisely during the period of populations being further urbanized and concentrated. It makes one think of the study done on crowding large numbers of rats in a small contained cage until they turned anti-social, aggressive, and violent. If these rats were humans, we’d describe this behavior in terms of psychopathy or sociopathy.

There is a second thing to consider, as discussed by Barbara Oakley in her book Evil Genes (pp. 265-6). About rural populations, she writes that, “Psychopathy is rare in those settings, notes psychologist David Cooke, who has studied psychopathy across cultures.” And she continues:

“But what about more urban environments? Cooke’s research has shown, surprisingly, that there are more psychopaths from Scotland in the prisons of England and Wales than there are in Scottish prisons. (Clearly, this is not to say that the Scottish are more given to psychopathy than anyone else.) Studies of migration records showed that many Scottish psychopaths had migrated to the more populated metropolitan areas of the south. Cooke hypothesized that, in the more crowded metropolitan areas, the psychopath could attack or steal with little danger that the victim would recognize or catch him. Additionally, the psychopath’s impulsivity and need for stimulation could also play a role in propelling the move to the dazzling delights of the big city — he would have no affection for family and friends to keep him tethered back home. Densely populated areas, apparently, are the equivalent for psychopaths of ponds and puddles for malarial mosquitoes.”

As Oakley’s book is on genetics, she goes in an unsurprising direction, pointing out how some violent individuals have been able to pass on their genes to large numbers of descendants, the most famous example being Genghis Khan. She writes that (p. 268),

“These recent discoveries reinforce the findings of the anthropologist Laura Betzig. Her 1986 Despotism and Differential Reproduction provides a cornucopia of evidence documenting the increased capacity of those with more power — and frequently, Machiavellian tendencies — to have offspring. […] As Machiavellian researcher Richard Christie and his colleague Florence Geis aptly note: “[H]igh population density and highly competitive environments have been found to increase the use of antisocial and Machiavellian strategies, and may in fact foster the ability of those who possess those strategies to reproduce.” […] Betzig’s ultimate point is not that the corrupt attain power but that those corrupted individuals who achieved power in preindustrial agricultural societies had far more opportunity to reproduce, generally through polygyny, and pass on their genes. In fact, the more Machiavellian, that is, despotic, a man might be, the more polygynous he tended to be — grabbing and keeping for himself as many beautiful women as he could. Some researchers have posited that envy is itself a useful, possibly genetically linked trait, “serving a key role in survival, motivating achievement, serving the conscience of self and other, and alerting us to inequities that, if fueled, can lead to escalated violence.” Thus, genes related to envy — not to mention other more problematic temperaments — might have gradually found increased prevalence in such environments.”

That kind of genetic hypothesis is highly speculative, to say the least. There could be some truth to such speculations, if one wanted to give the benefit of the doubt, but we have no direct evidence that such is the case. At present, these speculations are yet more just-so stories, and they will remain so until we can better control confounding factors in order to directly ascertain causal factors. Anyway, genetic determinism in this simplistic sense is largely moot at this point, as the science is moving on into new understandings. Besides being unhelpful, such speculations are unnecessary. We already have plenty of social science research showing that changing environmental conditions alters social behavior — besides what I’ve already mentioned, there are examples such as the fascinating Rat Park research. There is no debate to be had about the immense influence of external factors, such as socioeconomic class and high inequality: Power Causes Brain Damage by Justin Renteria, How Wealth Reduces Compassion by Daisy Grewal, Got Money? Then You Might Lack Compassion by Jeffrey Kluger, Why the Rich Don’t Give to Charity by Ken Stern, Rich People Literally See the World Differently by Drake Baer, The rich really DO ignore the poor by Cheyenne Macdonald, Propagandopoly: Monopoly as an Ideological Tool by Naomi Russo, A ‘Rigged’ Game Of Monopoly Reveals How Feeling Wealthy Changes Our Behavior [TED VIDEO] by Planetsave, etc.

Knowing the causes is important. But knowing the consequences is just as important. Whatever increases Dark Triad behaviors, they can have widespread and long-lasting repercussions, maybe even permanently altering how entire societies function. Following her speculations, Oakley gets down to the nitty-gritty (p. 270):

“Questions we might reasonably ask are — has the percentage of Machiavellians and other more problematic personality types increased in the human population, or in certain human populations, since the advent of agriculture? And if the answer is yes, does the increase in these less savory types change a group’s culture? In other words, is there a tipping point of Machiavellian and emote control behavior that can subtly or not so subtly affect the way the members of a society interact? Certainly a high expectation of meeting a “cheater,” for example, would profoundly impact the trust that appears to form the grease of modern democratic societies and might make the development of democratic processes in certain areas more difficult. Crudely put, an increase in successfully sinister types from 2 percent, say, to 4 percent of a population would double the pool of Machiavellians vying for power. And it is the people in power who set the emotional tone, perhaps through mirroring and emotional contagion, for their followers and those around them. As Judith Rich Harris points out, higher-status members of a group are looked at more, which means they have more influence on how a person becomes socialized.”

The key factor in much of this seems to be concentration. Simply concentrating populations, humans or rats, leads to social problems related to mental health issues. On top of that, there is the troubling concern of what kind of people are being concentrated and where they are being concentrated — psychopaths being concentrated not only in big cities and prisons but worse still in positions of wealth and power, authority and influence. We live in a society that creates the conditions for the Dark Triad to increase and flourish. This is how the success of those born psychopaths encourages others to follow their example in developing into sociopaths, which in turn makes the Dark Triad mindset into a dominant ethos within mainstream culture.

The main thing on my mind is individualism. It’s been on my mind a lot lately, such as in terms of the bundle theory of the mind and the separate individual, related to my long-term interest in community and the social nature of humans. In relation to individualism, there is the millennia-old cultural divide between Germanic ‘freedom’ and Roman ‘liberty’. But because Anglo-American society mixed up the two, this became incorrectly framed by Isaiah Berlin in terms of positive and negative liberty. In Contemporary Political Theory, J. C. Johari writes that (p. 266), “Despite this all, it may be commented that though Berlin advances the argument that the two aspects of liberty cannot be so distinguished in practical terms, one may differ from him and come to hold that his ultimate preference is for the defence of the negative view of liberty. Hence, he obviously belongs to the category of Mill and Hayek.” He states that this “is evident from his emphatic affirmation” in the following assertion by Berlin:

“The fundamental sense of freedom is freedom from chains, from imprisonment, from enslavement by others. The rest is extension of this sense or else metaphor. To strive to be free is to seek to remove obstacles; to struggle for personal freedom is to seek to curb interference, exploitation, enslavement by men whose ends are theirs, not one’s own. Freedom, at least in its political sense, is coterminous with the absence of bullying or domination.”

Berlin makes a common mistake here. Liberty was defined by not being a slave in a slave-based society, which is what existed in the Roman Empire. But that isn’t freedom, an entirely different term with an etymology related to ‘friend’ and with a meaning that indicated membership in an autonomous community — such freedom meant not being under the oppression of a slave-based society (e.g., German tribes remaining independent of the Roman Empire). Liberty, not freedom, was determined by one’s individual status of lacking oppression in an oppressive social order. This is why liberty has a negative connotation for it is what you lack, rather than what you possess. A homeless man starving alone on the street with no friend in the world to help him and no community to support him, such a man has liberty but not freedom. He is ‘free’ to do what he wants under those oppressive conditions and constraints, as no one is physically detaining him.

This notion of liberty has had a purchase on the American mind because of the history of racial and socioeconomic oppression. After the Civil War, blacks had negative liberty in no longer being slaves but they definitely did not have positive freedom through access to resources and opportunities, instead being shackled by systemic and institutional racism that maintained their exploited status as a permanent underclass. Other populations such as Native Americans faced a similar dilemma. Is one actually free when the chains holding one down are invisible but still all too real? If liberty is an abstraction detached from lived experience and real world results, of what value is such liberty?

This point is made by another critic of Berlin’s perspective. “It is hard for me to see that Berlin is consistent on this point,” writes L. H. Crocker (Positive Liberty, p. 69). “Surely not all alterable human failures to open doors are cases of bullying. After all, it is often through neglect that opportunities fail to be created for the disadvantaged. It is initially more plausible that all failures to open doors are the result of domination in some sense or another.” I can’t help but think that Dark Triad individuals would feel right at home in a culture of liberty where individuals have the ‘freedom’ to oppress and be oppressed. Embodying this sick mentality, Margaret Thatcher once gave perfect voice to the sociopathic worldview — speaking of the victims of disadvantage and desperation, she claimed that, “They’re casting their problem on society. And, you know, there is no such thing as society.” That is to say, there is no freedom.

The question, then, is whether or not we want freedom. A society is only free to the degree that, as a society, it demands freedom. To deny society itself is an attempt to deny the very basis of freedom, but that is just a trick of rhetoric. A free people know their own freedom by acting freely, even if that means fighting the oppressors who seek to deny that freedom. Thatcher intentionally conflated society and government, something never heard in the clear-eyed wisdom of a revolutionary social democrat like Thomas Paine: “Society in every state is a blessing, but government, even in its best state, is but a necessary evil; in its worst state an intolerable one.” These words expressed the values of negative liberty, as made sense for someone living in an empire built on colonialism, corporatism, and slavery. But the same words hinted at a cultural memory of Germanic positive freedom. It wasn’t a principled libertarian hatred of governance but rather a principled radical protest against a sociopathic social order. As Paine made clear, this unhappy situation was neither inevitable nor desirable, much less tolerable.

The Inuits would find a way for psychopaths to ‘accidentally’ fall off the ice, never to trouble the community again. As for the American revolutionaries, they preferred more overt methods, from tar and feathering to armed revolt. So, now to regain our freedom as a people, what recourse do we have in abolishing the present Dark Triad domination?

* * *

Here are some blog posts on individualism and community, as contrasted between far different societies. In these writings, I explore issues of mental health (from depression to addiction), and social problems (from authoritarianism to capitalist realism) — as well as other topics, including carnival and revolution.

Self, Other, & World

Retrieving the Lost Worlds of the Past:
The Case for an Ontological Turn
by Greg Anderson

“[…] This ontological individualism would have been scarcely intelligible to, say, the inhabitants of precolonial Bali or Hawai’i, where the divine king or chief, the visible incarnation of the god Lono, was “the condition of possibility of the community,” and thus “encompasse[d] the people in his own person, as a projection of his own being,” such that his subjects were all “particular instances of the chief’s existence.” It would have been barely imaginable, for that matter, in the world of medieval Europe, where conventional wisdom proverbially figured sovereign and subjects as the head and limbs of a single, primordial “body politic” or corpus mysticum. And the idea of a natural, presocial individual would be wholly confounding to, say, traditional Hindus and the Hagen people of Papua New Guinea, who objectify all persons as permeable, partible “dividuals” or “social microcosms,” as provisional embodiments of all the actions, gifts, and accomplishments of others that have made their lives possible.

“We alone in the modern capitalist west, it seems, regard individuality as the true, primordial estate of the human person. We alone believe that humans are always already unitary, integrated selves, all born with a natural, presocial disposition to pursue a rationally calculated self-interest and act competitively upon our no less natural, no less presocial rights to life, liberty, and private property. We alone are thus inclined to see forms of sociality, like relations of kinship, nationality, ritual, class, and so forth, as somehow contingent, exogenous phenomena, not as essential constituents of our very subjectivity, of who or what we really are as beings. And we alone believe that social being exists to serve individual being, rather than the other way round. Because we alone imagine that individual humans are free-standing units in the first place, “unsocially sociable” beings who ontologically precede whatever “society” our self-interest prompts us to form at any given time.”

What Kinship Is-And Is Not
by Marshall Sahlins, p. 2

“In brief, the idea of kinship in question is “mutuality of being”: people who are intrinsic to one another’s existence— thus “mutual person(s),” “life itself,” “intersubjective belonging,” “transbodily being,” and the like. I argue that “mutuality of being” will cover the variety of ethnographically documented ways that kinship is locally constituted, whether by procreation, social construction, or some combination of these. Moreover, it will apply equally to interpersonal kinship relations, whether “consanguineal” or “affinal,” as well as to group arrangements of descent. Finally, “mutuality of being” will logically motivate certain otherwise enigmatic effects of kinship bonds— of the kind often called “mystical”— whereby what one person does or suffers also happens to others. Like the biblical sins of the father that descend on the sons, where being is mutual, there experience is more than individual.”

Music and Dance on the Mind

We aren’t as different from ancient humanity as it might seem. Our societies have changed drastically, suppressing old urges and potentialities. Yet the same basic human nature still lurks within us, hidden in the underbrush along the well trod paths of the mind. The hive mind is what the human species naturally falls back upon, from millennia of collective habit. The problem we face is we’ve lost the ability to express well our natural predisposition toward group-mindedness, too easily getting locked into groupthink, a tendency easily manipulated.

Considering this, we have good reason to be wary, not knowing what we could tap into. We don’t understand our own minds and so we naively underestimate the power of humanity’s social nature. With the right conditions, hiving is easy to elicit but hard to control or shut down. The danger is that the more we idolize individuality the more prone we become to what is so far beyond the individual. It is the glare of hyper-individualism that casts the shadow of authoritarianism.

Pacifiers, Individualism & Enculturation

I’ve often thought that individualism, in particular hyper-individualism, isn’t the natural state of human nature. By this, I mean that it isn’t how human nature manifested for the hundreds of thousands of years prior to modern Western civilization. Julian Jaynes theorizes that, even in early Western civilization, humans didn’t have a clear sense of separate individuality. He points out that in the earliest literature humans were constantly hearing voices outside of themselves (giving them advice, telling them what to do, making declarations, chastising them, etc.), maybe not unlike the way we hear a voice in our head.

We moderns have internalized those external voices of collective culture. This seems normal to us. This is not just about pacifiers. It’s about technology in general. The most profound technology ever invented was written text (along with the binding of books and the printing press). All the time I see my little niece absorbed in a book, even though she can’t yet read. Like pacifiers, books are tools of enculturation that help create the individual self. Instead of mommy’s nipple, the baby soothes themselves. Instead of voices in the world, the child becomes focused on text. In both cases, it is a process of internalizing.

All modern civilization is built on this process of individualization. I don’t know if it is, on the whole, good or bad. I’m sure many of our destructive tendencies are caused by the relationship between individualization and objectification. Nature as a living world that could speak to us has become mere matter without mind or soul. So, the cost of this process has been high… but then again, innovative creativeness has exploded as this individualizing process has increasingly taken hold in recent centuries.

“illusion of a completed, unitary self”

The Voices Within: The History and Science of How We Talk to Ourselves
by Charles Fernyhough, Kindle Locations 3337-3342

“And we are all fragmented. There is no unitary self. We are all in pieces, struggling to create the illusion of a coherent “me” from moment to moment. We are all more or less dissociated. Our selves are constantly constructed and reconstructed in ways that often work well, but often break down. Stuff happens, and the center cannot hold. Some of us have more fragmentation going on, because of those things that have happened; those people face a tougher challenge of pulling it all together. But no one ever slots in the last piece and makes it whole. As human beings, we seem to want that illusion of a completed, unitary self, but getting there is hard work. And anyway, we never get there.”

Delirium of Hyper-Individualism

Individualism is a strange thing. For anyone who has spent much time meditating, it’s obvious that there is no there there. It slips through one’s grasp like an ancient philosopher trying to study aether. The individual self is the modernization of the soul. Like the ghost in the machine and the god in the gaps, it is a theological belief defined by its absence in the world. It’s a social construct, a statement that is easily misunderstood.

In modern society, individualism has been raised up to an entire ideological worldview. It is all-encompassing, having infiltrated nearly every aspect of our social lives and become internalized as a cognitive frame. Traditional societies didn’t have this obsession with an idealized self as isolated and autonomous. Go back far enough and the records seem to show societies that didn’t even have a concept, much less an experience, of individuality.

Yet for all its dominance, the ideology of individualism is superficial. It doesn’t explain much of our social order and personal behavior. We don’t act as if we actually believe in it. It’s a convenient fiction that we so easily disregard when inconvenient, as if it isn’t all that important after all. In our most direct experience, individuality simply makes no sense. We are social creatures through and through. We don’t know how to be anything else, no matter what stories we tell ourselves.

The ultimate value of this individualistic ideology is, ironically, as social control and social justification.

It’s All Your Fault, You Fat Loser!

Capitalist Realism: Is there no alternative?
By Mark Fisher, pp. 18-20

“[…] In what follows, I want to stress two other aporias in capitalist realism, which are not yet politicized to anything like the same degree. The first is mental health. Mental health, in fact, is a paradigm case of how capitalist realism operates. Capitalist realism insists on treating mental health as if it were a natural fact, like weather (but, then again, weather is no longer a natural fact so much as a political-economic effect). In the 1960s and 1970s, radical theory and politics (Laing, Foucault, Deleuze and Guattari, etc.) coalesced around extreme mental conditions such as schizophrenia, arguing, for instance, that madness was not a natural, but a political, category. But what is needed now is a politicization of much more common disorders. Indeed, it is their very commonness which is the issue: in Britain, depression is now the condition that is most treated by the NHS . In his book The Selfish Capitalist, Oliver James has convincingly posited a correlation between rising rates of mental distress and the neoliberal mode of capitalism practiced in countries like Britain, the USA and Australia. In line with James’s claims, I want to argue that it is necessary to reframe the growing problem of stress (and distress) in capitalist societies. Instead of treating it as incumbent on individuals to resolve their own psychological distress, instead, that is, of accepting the vast privatization of stress that has taken place over the last thirty years, we need to ask: how has it become acceptable that so many people, and especially so many young people, are ill? The ‘mental health plague’ in capitalist societies would suggest that, instead of being the only social system that works, capitalism is inherently dysfunctional, and that the cost of it appearing to work is very high.”

There is always an individual to blame. It sucks to be an individual these days, I tell ya. I should know because I’m one of those faulty miserable individuals. I’ve been one my whole life. If it weren’t for all of us pathetic and depraved individuals, capitalism would be utopia. I beat myself up all the time for failing the great dream of capitalism. Maybe I need to buy more stuff.

“The other phenomenon I want to highlight is bureaucracy. In making their case against socialism, neoliberal ideologues often excoriated the top-down bureaucracy which supposedly led to institutional sclerosis and inefficiency in command economies. With the triumph of neoliberalism, bureaucracy was supposed to have been made obsolete; a relic of an unlamented Stalinist past. Yet this is at odds with the experiences of most people working and living in late capitalism, for whom bureaucracy remains very much a part of everyday life. Instead of disappearing, bureaucracy has changed its form; and this new, decentralized, form has allowed it to proliferate. The persistence of bureaucracy in late capitalism does not in itself indicate that capitalism does not work – rather, what it suggests is that the way in which capitalism does actually work is very different from the picture presented by capitalist realism.”

Neoliberalism: Dream & Reality

From Capitalist Realism by Mark Fisher (p. 20):

“[…] But incoherence at the level of what Brown calls ‘political rationality’ does nothing to prevent symbiosis at the level of political subjectivity, and, although they proceeded from very different guiding assumptions, Brown argues that neoliberalism and neoconservatism worked together to undermine the public sphere and democracy, producing a governed citizen who looks to find solutions in products, not political processes. As Brown claims,

“the choosing subject and the governed subject are far from opposites … Frankfurt school intellectuals and, before them, Plato theorized the open compatibility between individual choice and political domination, and depicted democratic subjects who are available to political tyranny or authoritarianism precisely because they are absorbed in a province of choice and need-satisfaction that they mistake for freedom.”

“Extrapolating a little from Brown’s arguments, we might hypothesize that what held the bizarre synthesis of neoconservatism and neoliberalism together was their shared objects of abomination: the so called Nanny State and its dependents. Despite evincing an anti-statist rhetoric, neoliberalism is in practice not opposed to the state per se – as the bank bail-outs of 2008 demonstrated – but rather to particular uses of state funds; meanwhile, neoconservatism’s strong state was confined to military and police functions, and defined itself against a welfare state held to undermine individual moral responsibility.”

[…] what Robin describes touches upon my recent post about the morality-punishment link. As I pointed out, the world of Star Trek: Next Generation imagines the possibility of a social order that serves humans, instead of the other way around. I concluded that, “Liberals seek to promote freedom, not just freedom to act but freedom from being punished for acting freely. Without punishment, though, the conservative sees the world lose all meaning and society to lose all order.” The neoliberal vision subordinates the individual to the moral order. The purpose of forcing the individual into a permanent state of anxiety and fear is to preoccupy their minds and their time, to redirect all the resources of the individual back into the system itself. The emphasis on the individual isn’t because individualism is important as a central ideal but because the individual is the weak point that must be carefully managed. Also, focusing on the individual deflects our gaze from the structure and its attendant problems.

This brings me to how this relates to corporations in neoliberalism (Fisher, pp. 69-70):

“For this reason, it is a mistake to rush to impose the individual ethical responsibility that the corporate structure deflects. This is the temptation of the ethical which, as Žižek has argued, the capitalist system is using in order to protect itself in the wake of the credit crisis – the blame will be put on supposedly pathological individuals, those ‘abusing the system’, rather than on the system itself. But the evasion is actually a two step procedure – since structure will often be invoked (either implicitly or openly) precisely at the point when there is the possibility of individuals who belong to the corporate structure being punished. At this point, suddenly, the causes of abuse or atrocity are so systemic, so diffuse, that no individual can be held responsible. This was what happened with the Hillsborough football disaster, the Jean Charles De Menezes farce and so many other cases. But this impasse – it is only individuals that can be held ethically responsible for actions, and yet the cause of these abuses and errors is corporate, systemic – is not only a dissimulation: it precisely indicates what is lacking in capitalism. What agencies are capable of regulating and controlling impersonal structures? How is it possible to chastise a corporate structure? Yes, corporations can legally be treated as individuals – but the problem is that corporations, whilst certainly entities, are not like individual humans, and any analogy between punishing corporations and punishing individuals will therefore necessarily be poor. And it is not as if corporations are the deep-level agents behind everything; they are themselves constrained by/ expressions of the ultimate cause-that-is-not-a-subject: Capital.”

Sleepwalking Through Our Dreams

The modern self is not normal, by historical and evolutionary standards. Extremely unnatural and unhealthy conditions have developed, our minds having correspondingly grown malformed, like bound feet. Our hyper-individuality is built on disconnection and, in place of human connection, we take on various addictions, not just to drugs and alcohol but also to work, consumerism, entertainment, social media, and on and on. The more we cling to an unchanging sense of bounded self, the more burdened we become trying to hold it all together, hunched over with the load we carry on our shoulders. We are possessed by the identities we possess.

This addiction angle interests me. Our addiction is the result of our isolated selves. Yet even as our addiction attempts to fill emptiness, to reach out beyond ourselves toward something, anything, a compulsive relationship devoid of the human, we isolate ourselves further. As Johann Hari explained in Chasing the Scream (Kindle Locations 3521-3544):

There were three questions I had never understood. Why did the drug war begin when it did, in the early twentieth century? Why were people so receptive to Harry Anslinger’s message? And once it was clear that it was having the opposite effect to the one that was intended— that it was increasing addiction and supercharging crime— why was it intensified, rather than abandoned?

I think Bruce Alexander’s breakthrough may hold the answer.

“Human beings only become addicted when they cannot find anything better to live for and when they desperately need to fill the emptiness that threatens to destroy them,” Bruce explained in a lecture in London in 2011. “The need to fill an inner void is not limited to people who become drug addicts, but afflicts the vast majority of people of the late modern era, to a greater or lesser degree.”

A sense of dislocation has been spreading through our societies like a bone cancer throughout the twentieth century. We all feel it: we have become richer, but less connected to one another. Countless studies prove this is more than a hunch, but here’s just one: the average number of close friends a person has has been steadily falling. We are increasingly alone, so we are increasingly addicted. “We’re talking about learning to live with the modern age,” Bruce believes. The modern world has many incredible benefits, but it also brings with it a source of deep stress that is unique: dislocation. “Being atomized and fragmented and all on [your] own— that’s no part of human evolution and it’s no part of the evolution of any society,” he told me.

And then there is another kicker. At the same time that our bonds with one another have been withering, we are told— incessantly, all day, every day, by a vast advertising-shopping machine— to invest our hopes and dreams in a very different direction: buying and consuming objects. Gabor tells me: “The whole economy is based around appealing to and heightening every false need and desire, for the purpose of selling products. So people are always trying to find satisfaction and fulfillment in products.” This is a key reason why, he says, “we live in a highly addicted society.” We have separated from one another and turned instead to things for happiness— but things can only ever offer us the thinnest of satisfactions.

This is where the drug war comes in. These processes began in the early twentieth century— and the drug war followed soon after. The drug war wasn’t just driven, then, by a race panic. It was driven by an addiction panic— and it had a real cause. But the cause wasn’t a growth in drugs. It was a growth in dislocation.

The drug war began when it did because we were afraid of our own addictive impulses, rising all around us because we were so alone. So, like an evangelical preacher who rages against gays because he is afraid of his own desire to have sex with men, are we raging against addicts because we are afraid of our own growing vulnerability to addiction?

In The Secret Life of Puppets, Victoria Nelson makes some useful observations about reading addiction, specifically in terms of formulaic genres. She discusses Sigmund Freud’s repetition compulsion and Lenore Terr’s post-traumatic games. She sees genre reading as a ritual-like enactment that can’t lead to resolution, and so the addictive behavior becomes entrenched. This would apply to many other forms of entertainment and consumption. And it fits into Derrick Jensen’s discussion of abuse, trauma, and the victimization cycle.

I would broaden her argument in another way. People have feared the written text ever since it was invented. In the 18th century, there took hold a moral panic about reading addiction in general and that was before any fiction genres had developed (Frank Furedi, The Media’s First Moral Panic). The written word is unchanging and so creates the conditions for repetition compulsion. Every time a text is read, it is the exact same text.

That is far different from oral societies. And it is quite telling that oral societies have a much more fluid sense of self. The Piraha, for example, don’t cling to their sense of self nor that of others. When a Piraha individual is possessed by a spirit or meets a spirit who gives them a new name, the self that was there is no longer there. When asked where is that person, the Piraha will say that he or she isn’t there, even if the same body of the individual is standing right there in front of them. They also don’t have a storytelling tradition or concern for the past.

Another thing that the Piraha apparently lack is mental illness, specifically depression along with suicidal tendencies. According to Barbara Ehrenreich in Dancing in the Streets, there wasn’t much written about depression even in the Western world until the suppression of religious and public festivities, such as Carnival. One of the most important aspects of Carnival and similar festivities was the masking, shifting, and reversal of social identities. Along with this, there was the losing of individuality within the group. And during the Middle Ages, an amazing number of days in the year were dedicated to communal celebrations. The ending of this era coincided with numerous societal changes, including the increase of literacy with the spread of the movable-type printing press.

Another thing happened with the suppression of festivities. Local community began to break down as power became centralized in far-off places and the classes became divided, which Ehrenreich details. The aristocracy used to be inseparable from their feudal roles, and this meant participating in local festivities where, as part of the celebration, a king might wrestle with a blacksmith. As the divides between people grew into vast chasms, the social identities held and social roles played became hardened into place. This went along with a growing inequality of wealth and power. And as research has shown, wherever there is inequality there are also found high rates of social problems and mental health issues.

It’s maybe unsurprising that what followed from this was colonial imperialism and a racialized social order, class conflict and revolution. A society formed that was simultaneously rigid in certain ways and destabilized in others. Individuals became increasingly atomized and isolated. With the loss of kinship and community, the cheap replacement we got is identity politics. The natural human bonds are lost or constrained. Social relations are narrowed down. Correspondingly, our imaginations are hobbled and we can’t envision society being any other way. Most tragic, we forget that human society used to be far different, a collective amnesia forcing us into a collective trance. Our entire sense of reality is held in the vice grip of the historical moment we find ourselves in.

Social Conditions of an Individual’s Condition

A wide variety of research and data is pointing to a basic conclusion. Environmental conditions (physical, social, political, and economic) are of paramount importance. So, why do we treat as sick individuals those who suffer the consequences of the externalized costs of society?

Here is the sticking point. Systemic and collective problems are in some ways the easiest to deal with. The problems, once understood, are essentially simple and their solutions tend to be straightforward. Even so, the very largeness of these problems makes them hard for us to confront. We want someone to blame. But who do we blame when the entire society is dysfunctional?

If we recognize the problems as symptoms, we are forced to acknowledge our collective agency and shared fate. Those who understand this are up against countervailing forces that maintain the status quo. Even if a psychiatrist realizes that their patient is experiencing the symptoms of larger social issues, how is that psychiatrist supposed to help the patient? Who is going to diagnose the entire society and demand it seek rehabilitation?

Winter Season and Holiday Spirit

With this revelry and reversal follow licentiousness and transgression, drunkenness and bawdiness, fun and games, song and dance, feasting and festival. It is a time for celebration of this year’s harvest and blessing of next year’s harvest. Bounty and community. Death and rebirth. The old year must be brought to a close and the new year welcomed. This is the period when gods, ancestors, spirits, and demons must be solicited, honored, appeased, or driven out. The noise of song, gunfire, and such serves many purposes.

In the heart of winter, some of the most important religious events took place. This includes Christmas, of course, but also the various celebrations around the same time. A particular winter festival season began on All Hallows’ Eve (i.e., Halloween) and ended with Twelfth Night. This included carnival-like revelry and a Lord of Misrule. There was also the tradition of going house to house, of singing and pranks, of demanding treats or gifts, with threats if they weren’t forthcoming. It was a time of community and sharing, and those who didn’t willingly participate might be punished. Winter, a harsh time of need, was when the group took precedence. […]

I’m also reminded of Santa Claus as St. Nick. This invokes an image of jollity and generosity. And this connects to wintertime as a period of community needs and interdependence, of sharing and gifting, of hospitality and kindness. This includes the enforcement of social norms, which easily could transform into the challenging of social norms.

It’s maybe in this context we should think of the masked vigilantes participating in the Boston Tea Party. Like carnival, there had developed a tradition of politics out-of-doors, often occurring on the town commons. And on those town commons, large trees became identified as liberty trees — under which people gathered, upon which notices were nailed, and sometimes where effigies were hung. This was an old tradition that originated in Northern Europe, where a tree was the center of a community, the place of law-giving and community decision-making. In Europe, the commons had become the place of festivals and celebrations, such as carnival. And so the commons came to be the site of revolutionary fervor as well.

The most famous Liberty Tree was a great elm near the Boston Common. Many consider it the birthplace of the American Revolution, as it was the site of early acts of defiance. This is where the Sons of Liberty met, organized, and protested. This would eventually lead to that even greater act of defiance on Saturnalia eve, the Boston Tea Party. One of the participants in the Boston Tea Party and later in the Revolutionary War, Samuel Sprague, is buried in the Boston Common.

There is something many don’t understand about the American Revolution. It wasn’t so much a fight against oppression in general and certainly not about mere taxation in particular. What angered those Bostonians and many other colonists was that they had become accustomed to community-centered self-governance and this was being challenged. The tea tax wasn’t just an imposition of imperial power but also of colonial corporatism. The East India Company was not acting as a moral member of the community, taking advantage of the colonists by monopolizing trade. Winter had long been the time of year when bad actors in the community would be punished. Selfishness was not to be tolerated.

Those Boston Tea Partiers were simply teaching a lesson about the Christmas spirit. And in the festival tradition, they chose the guise of Native Americans which to their minds would have symbolized freedom and an inversion of power. What revolution meant to them was a demand for return of what was taken from them, making the world right again. It was revelry with a purpose.


Wordplay Schmordplay

What Do You Call Words Like Wishy-Washy or Mumbo Jumbo?

Words like wishy-washy or mumbo-jumbo, or any words that contain two identical or similar parts (a segment, syllable, or morpheme), are called reduplicative words or tautonyms. The process of forming such words is known as reduplication. In many cases, the first word is a real word, while the second part (sometimes nonsensical) is invented to create a rhyme and to create emphasis. Most reduplicatives begin as hyphenated words and, through very common usage, eventually lose the hyphen to become single words. Regardless of their hyphenation, they underscore the playfulness of the English language.

Reduplication isn’t just jibber-jabber

There are several kinds of reduplication. One type replaces a vowel while keeping the initial consonant, as in “flip-flop,” “pish-posh,” and “ping-pong.” Another type keeps the vowel but replaces that first sound, as in “namby-pamby,” “hanky-panky,” “razzle-dazzle,” and “timey-wimey,” a word used by Dr. Who fans for time-travel shenanigans. Reduplication doesn’t get any simpler than when the whole word is repeated, like when you pooh-pooh a couple’s attempt to dress matchy-matchy. My favorite type is “schm” reduplication, though some might say “Favorite, schmavorite!” All the types show that redundancy isn’t a problem in word-making. Grant Barrett, host of the public radio show “A Way with Words,” notes via e-mail that even the word “reduplication” has an unnecessary frill: “I’ve always liked the ‘re’ in ‘reduplicate.’ We’re doing it again! It’s right there in the word!”

Reduplication

Reduplication in linguistics is a morphological process in which the root or stem of a word (or part of it) or even the whole word is repeated exactly or with a slight change.

Reduplication is used in inflections to convey a grammatical function, such as plurality, intensification, etc., and in lexical derivation to create new words. It is often used when a speaker adopts a tone more “expressive” or figurative than ordinary speech and is also often, but not exclusively, iconic in meaning. Reduplication is found in a wide range of languages and language groups, though its level of linguistic productivity varies.

Reduplication is the standard term for this phenomenon in the linguistics literature. Other terms that are occasionally used include cloning, doubling, duplication, repetition, and tautonym when it is used in biological taxonomies, such as “Bison bison”.

The origin of this usage of tautonym is uncertain, but it has been suggested that it is of relatively recent derivation.

Reduplication

The coinage of new words and phrases into English has been greatly enhanced by the pleasure we get from playing with words. There are numerous alliterative and rhyming idioms, which are a significant feature of the language. These aren’t restricted to poets and Cockneys; everyone uses them. We start in the nursery with choo-choos, move on in adult life to hanky-panky and end up in the nursing home having a sing-song.

The repeating of parts of words to make new forms is called reduplication. There are various categories of this: rhyming, exact, and ablaut (vowel substitution). Examples are, respectively, okey-dokey, wee-wee, and zig-zag. The impetus for the coining of these seems to be nothing more than the enjoyment of wordplay. The words that make up these reduplicated idioms often have little meaning in themselves and only appear as part of a pair. In other cases, one word will allude to some existing meaning and the other half of the pair is added for effect or emphasis.

New coinages have often appeared at times of national confidence, when an outgoing and playful nature is expressed in language; for example, during the 1920s, following the First World War, when many nonsense word pairs were coined – the bee’s knees, heebie-jeebies etc. That said, the introduction of such terms began with Old English and continues today. Willy-nilly is over a thousand years old. Riff-raff dates from the 1400s and helter-skelter, arsy-versy (a form of vice-versa), and hocus-pocus all date from the 16th century. Coming up to date we have bling-bling, boob-tube and hip-hop. I’ve not yet recorded a 21st century reduplication. Bling-bling comes very close but is 20th century. ‘Bieber Fever’ is certainly 21st century, but isn’t quite a reduplication.

A hotchpotch of reduplication

Argy-bargy and lovey-dovey lie on opposite ends of the interpersonal scale, but they have something obvious in common: both are reduplicatives.

Reduplication is when a word or part of a word is repeated, sometimes modified, and added to make a longer term, such as aye-aye, mishmash, and hotchpotch. This process can mark plurality or intensify meaning, and it can be used for effect or to generate new words. The added part may be invented or it may be an existing word whose form and sense are a suitable fit.

Reduplicatives emerge early in our language-learning lives. As infants in the babbling phase we reduplicate syllables to utter mama, dada, nana and papa, which is where these pet names come from. Later we use moo-moo, choo-choo, wee-wee and bow-wow (or similar) to refer to familiar things. The repetition, as well as being fun, might help children develop and practise the pronunciation of sounds.

As childhood progresses, reduplicatives remain popular, popping up in children’s books, songs and rhymes. Many characters in children’s stories have reduplicated names: Humpty Dumpty, Chicken Licken and Handy Andy, to name a few.

The language rule we know – but don’t know we know

Ding dong King Kong

Well, in fact, the Big Bad Wolf is just obeying another great linguistic law that every native English speaker knows, but doesn’t know that they know. And it’s the same reason that you’ve never listened to hop-hip music.

You are utterly familiar with the rule of ablaut reduplication. You’ve been using it all your life. It’s just that you’ve never heard of it. But if somebody said the words zag-zig or cross-criss, you would know, deep down in your loins, that they were breaking a sacred rule of language. You just wouldn’t know which one.

All four of a horse’s feet make exactly the same sound. But we always, always say clip-clop, never clop-clip. Every second, your watch (or the grandfather clock in the hall) makes the same sound, but we say tick-tock, never tock-tick. You will never eat a Kat Kit bar. The bells in Frère Jacques will forever chime ‘ding dang dong’.

Reduplication in linguistics is when you repeat a word, sometimes with an altered consonant (lovey-dovey, fuddy-duddy, nitty-gritty), and sometimes with an altered vowel: bish-bash-bosh, ding-dang-dong. If there are three words then the order has to go I, A, O. If there are two words then the first is I and the second is either A or O. Mish-mash, chit-chat, dilly-dally, shilly-shally, tip top, hip-hop, flip-flop, tic tac, sing song, ding dong, King Kong, ping pong.

Why this should be is a subject of endless debate among linguists; it might be to do with the movement of your tongue or an ancient language of the Caucasus. It doesn’t matter. It’s the law, and, as with the adjectives, you knew it even if you didn’t know you knew it. And the law is so important that you just can’t have a Bad Big Wolf.

Jibber Jabber: The Unwritten Ablaut Reduplication Rule

In all these ablaut reduplication word pairs, the key vowels appear in a specific order: either i before a, or i before o.

In linguistic terms, you could say that a high vowel comes before a low vowel. The i sound is considered a high vowel because of the location of the tongue relative to the mouth in American speech. The a and o sounds are low vowels.

See-saw doesn’t use the letter i, but the high-vowel-before-low-vowel pattern still applies.
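To make the pattern concrete, here is a minimal sketch in Python of the high-before-low vowel check described in the excerpts above. It is illustrative only: the function names, the sample word pairs, and the simplification of treating only “i” as a high vowel are my own assumptions, not anything taken from the quoted sources.

```python
# A minimal sketch of the ablaut reduplication ordering described above.
# Simplification (my assumption): only "i" counts as a high vowel; "a" and "o"
# count as low vowels, matching the I-A-O examples in the excerpts.

def first_vowel(word: str) -> str:
    """Return the first vowel letter in a word, or '' if none."""
    for ch in word.lower():
        if ch in "aeiou":
            return ch
    return ""

def follows_ablaut_order(*words: str) -> bool:
    """Check that the key vowels run high-before-low (I before A or O)."""
    classes = ["high" if first_vowel(w) == "i" else "low" for w in words]
    seen_low = False
    for c in classes:
        if c == "low":
            seen_low = True
        elif seen_low:
            # A high vowel after a low one breaks the rule (e.g., "zag-zig").
            return False
    return True

if __name__ == "__main__":
    print(follows_ablaut_order("tick", "tock"))           # True
    print(follows_ablaut_order("ding", "dang", "dong"))   # True
    print(follows_ablaut_order("zag", "zig"))             # False
```

Run against the examples in the excerpts (clip-clop, mish-mash, bish-bash-bosh), the check returns True, while the reversed forms (clop-clip, zag-zig) fail, which is the whole point of the rule.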

This Weird Grammar Rule is Why We Say “Flip Flop” Instead of “Flop Flip”

As to why this I-A-O pattern has such a firm hold in our linguistic history, nobody can say. Forsyth calls it a topic of “endless debate” among linguists that may originate in the arcane movements of the human tongue or an ancient language of the Caucasus. Whatever the case, the world’s English speakers are on-board, and you will never catch Lucy accusing Charlie Brown of being washy-wishy.

Reduplicative Words

Ricochet Word

wishy-washy, hanky panky – name for this type of word-formation?

argle-bargle

Easy-Peasy

Double Trouble

English Rhyming Compound Words

Rhyming Compounds

Reduplicates

REDUPLICATION

English gitaigo: Flip-Flop Words

Arete: History and Etymology

Arete (moral virtue)
Wikipedia

Arete (Greek: ἀρετή), in its basic sense, means “excellence of any kind”.[1] The term may also mean “moral virtue”.[1] In its earliest appearance in Greek, this notion of excellence was ultimately bound up with the notion of the fulfillment of purpose or function: the act of living up to one’s full potential.

The term from Homeric times onwards is not gender specific. Homer applies the term to both the Greek and Trojan heroes as well as major female figures, such as Penelope, the wife of the Greek hero Odysseus. In the Homeric poems, Arete is frequently associated with bravery, but more often with effectiveness. The man or woman of Arete is a person of the highest effectiveness; they use all their faculties—strength, bravery and wit—to achieve real results. In the Homeric world, then, Arete involves all of the abilities and potentialities available to humans.

In some contexts, Arete is explicitly linked with human knowledge, where the expressions “virtue is knowledge” and “Arete is knowledge” are used interchangeably. The highest human potential is knowledge and all other human abilities are derived from this central capacity. If Arete is knowledge and study, the highest human knowledge is knowledge about knowledge itself; in this light, the theoretical study of human knowledge, which Aristotle called “contemplation”, is the highest human ability and happiness.[2]

History

The Ancient Greeks applied the term to anything: for example, the excellence of a chimney, the excellence of a bull to be bred and the excellence of a man. The meaning of the word changes depending on what it describes, since everything has its own peculiar excellence; the arete of a man is different from the arete of a horse. This way of thinking comes first from Plato, where it can be seen in the Allegory of the Cave.[3] In particular, the aristocratic class was presumed, essentially by definition, to be exemplary of arete: “The root of the word is the same as aristos, the word which shows superlative ability and superiority, and aristos was constantly used in the plural to denote the nobility.”[4]

By the 5th and 4th centuries BC, arete as applied to men had developed to include quieter virtues, such as dikaiosyne (justice) and sophrosyne (self-restraint). Plato attempted to produce a moral philosophy that incorporated this new usage,[5] but it was in the work of Aristotle that the doctrine of arete found its fullest flowering. Aristotle’s Doctrine of the Mean is a paradigm example of his thinking.

Arete has also been used by Plato when talking about athletic training and also the education of young boys. Stephen G. Miller delves into this usage in his book “Ancient Greek Athletics”. Aristotle is quoted as deliberating between education towards arete “…or those that are theoretical”.[6] Educating towards arete in this sense means that the boy would be educated towards things that are useful in life. However, even Plato himself says that arete is not something that can be agreed upon. He says, “Nor is there even an agreement about what constitutes arete, something that leads logically to a disagreement about the appropriate training for arete.”[7] To say that arete has a common definition of excellence or fulfillment may be an overstatement simply because it was very difficult to pinpoint arete, much less the proper ways to go about obtaining it. […]

Homer

In Homer’s Iliad and Odyssey, “arete” is used mainly to describe heroes and nobles and their mobile dexterity, with special reference to strength and courage, but it is not limited to this. Penelope’s arete, for example, relates to co-operation, for which she is praised by Agamemnon. The excellence of the gods generally included their power, but, in the Odyssey (13.42), the gods can grant excellence to a life, which is contextually understood to mean prosperity. Arete was also the name of King Alcinous’s wife.

According to Bernard Knox‘s notes found in the Robert Fagles translation of The Odyssey, “arete” is also associated with the Greek word for “pray”, araomai.[8]

All Things Shining
by Hubert Dreyfus
pp. 61-63

Homer’s epic poems brought into focus a notion of arete, or excellence in life, that was at the center of the Greek understanding of human being.6 Many admirers of Greek culture have attempted to define this notion, but success here requires avoiding two prominent temptations. There is the temptation to patronize that we have already mentioned. But there is also a temptation to read a modern sensibility into Homer’s time. One standard translation of the Greek word arete as “virtue” runs the risk of this kind of retroactive reading: for any attempt to interpret the Homeric Greek notion of human excellence in terms of “virtue”—especially if one hears in this word its typical Christian or even Roman overtones—is bound to go astray. Excellence in the Greek sense involves neither the Christian notion of humility and love nor the Roman ideal of stoic adherence to one’s duty.7 Instead, excellence in the Homeric world depends crucially on one’s sense of gratitude and wonder.

Nietzsche was one of the first to understand that Homeric excellence bears little resemblance to modern moral agency. His view was that the Homeric world understood nobility in terms of the overpowering strength of noble warriors. The effect of the ensuing Judeo-Christian tradition, on this Nietzschean reading, was to enfeeble the Homeric understanding of excellence by substituting the meekness of the lamb for the strength and power of the noble warrior.8

Nietzsche was certainly right that the Homeric tradition valorizes the strong, noble hero; and he was right, too, that in some important sense the Homeric account of excellence is foreign to our basic moralizing assumptions. But there is something that the Nietzschean account leaves out. As Bernard Knox emphasizes, the Greek word arete is etymologically related to the Greek verb “to pray” (araomai).9 It follows that Homer’s basic account of human excellence involves the necessity of being in an appropriate relationship to whatever is understood to be sacred in the culture. Helen’s greatness, on this interpretation, is not properly measured in terms of the degree to which she is morally responsible for her actions.

What makes Helen great in Homer’s world is her ability to live a life that is constantly responsive to golden Aphrodite, the shining example of the sacred erotic dimension of existence. Likewise, Achilles had a special kind of receptivity to Ares and his warlike way of life; Odysseus had Athena, with her wisdom and cultural adaptability, to look out for him. Presumably, the master craftsmen of Homer’s world worked in the light of Hephaestus’s shining. In order to engage with this understanding of human excellence, we will have to think clearly about how the Homeric Greeks understood themselves. Why would it make sense to describe their lives in relation to the presence and absence of the gods?

Several questions focus this kind of approach. What is the phenomenon that Homer is responding to when he says that a god intervened or in some way took part in an action or event? Is this phenomenon recognizable to us, even if only marginally? And if Homer’s reference to the gods is something other than an attempt to pass off moral responsibility for one’s actions, then what exactly is it? Only by facing these questions head on can we understand whether it is possible—or desirable—to lure back Homer’s polytheistic gods.

The gods are essential to the Homeric Greek understanding of what it is to be a human being at all. As Peisistratus—the son of wise old Nestor—says toward the beginning of the Odyssey, “All men need the gods.”10 The Greeks were deeply aware of the ways in which our successes and our failures—indeed, our very actions themselves—are never completely under our control. They were constantly sensitive to, amazed by, and grateful for those actions that one cannot perform on one’s own simply by trying harder: going to sleep, waking up, fitting in, standing out, gathering crowds together, holding their attention with a speech, changing their mood, or indeed being filled with longing, desire, courage, wisdom, and so on. Homer sees each of these achievements as a particular god’s gift. To say that all men need the gods therefore is to say, in part at least, that we are the kinds of beings who are at our best when we find ourselves acting in ways that we cannot—and ought not—entirely take credit for.

The Discovery of the Mind
by Bruno Snell
pp. 158-160

The words for virtue and good, arete and agathos, are at first by no means clearly distinguished from the area of profit. In the early period they are not as palpably moral in content as might be supposed; we may compare the German terms Tugend and gut which originally stood for the ‘suitable’ (taugende) and the ‘fitting’ (cf. Gatte). When Homer says that a man is good, agathos, he does not mean thereby that he is morally unobjectionable, much less good-hearted, but rather that he is useful, proficient, and capable of vigorous action. We also speak of a good warrior or a good instrument. Similarly arete, virtue, does not denote a moral property but nobility, achievement, success and reputation. And yet these words have an unmistakable tendency toward the moral because, unlike ‘happiness’ or ‘profit’, they designate qualities for which a man may win the respect of his whole community. Arete is ‘ability’ and ‘achievement’, characteristics which are expected of a ‘good’, an ‘able’ man, an aner agathos. From Homer to Plato and beyond these words spell out the worth of a man and his work. Any change in their meaning, therefore, would indicate a reassessment of values. It is possible to show how at various times the formation and consolidation of social groups and even of states was connected with people’s ideas about the ‘good’. But that would be tantamount to writing a history of Greek culture. In Homer, to possess ‘virtue’ or to be ‘good’ means to realize one’s nature, and one’s wishes, to perfection. Frequently happiness and profit form the reward, but it is no such extrinsic prospect which leads men to virtue and goodness. The expressions contain a germ of the notion of entelechy. A Homeric hero, for instance, is capable of ‘reminding himself’, or of ‘experiencing’, that he is noble. ‘Use your experience to become what you are’ advises Pindar who adheres to this image of arete. The ‘good’ man fulfils his proper function, prattei ta heautou, as Plato demands it; he achieves his own perfection. And in the early period this also entails that he is good in the eyes of others, for the notions and definitions of goodness are plain and uniform: a man appears to others as he is.

In the Iliad (11.404–410) Odysseus reminds himself that he is an aristocrat, and thereby resolves his doubts how he should conduct himself in a critical situation. He does it by concentrating on the thought that he belongs to a certain social order, and that it is his duty to fulfill the ‘virtue’ of that order. The universal which underlies the predication ‘I am a noble’ is the group; he does not reflect on an abstract ‘good’ but upon the circle of which he claims membership. It is the same as if an officer were to say: ‘As an officer I must do this or that,’ thus gauging his action by the rigid conception of honour peculiar to his caste.

Aretan is ‘to thrive’; arete is the objective which the early nobles attach to achievement and success. By means of arete the aristocrat implements the ideal of his order—and at the same time distinguishes himself above his fellow nobles. With his arete the individual subjects himself to the judgment of his community, but he also surpasses it as an individual. Since the days of Jacob Burckhardt the competitive character of the great Greek achievements has rightly been stressed. Well into the classical period, those who compete for arete are remunerated with glory and honour. The community puts its stamp of approval on the value which the individual sets on himself. Thus honour, time, is even more significant than arete for the growth of the moral consciousness, because it is more evident, more palpable to all. From his earliest boyhood the young nobleman is urged to think of his glory and his honour; he must look out for his good name, and he must see to it that he commands the necessary respect. For honour is a very sensitive plant; wherever it is destroyed the moral existence of the loser collapses. Its importance is greater even than that of life itself; for the sake of glory and honour the knight is prepared to sacrifice his life.

pp. 169-172

The truth of the matter is that it was not the concept of justice but that of arete which gave rise to the call for positive individual achievement, the moral imperative which the early Greek community enjoins upon its members who in turn acknowledge it for themselves. A man may have purely egotistical motives for desiring virtue and achievement, but his group gives him considerably more credit for these ideals than if he were to desire profit or happiness. The community expects, and even demands, arete. Conversely a man who accomplishes a high purpose may convince himself so thoroughly that his deed serves the interests of a supra-personal, a universal cause that the alternative of egotism or altruism becomes irrelevant. What does the community require of the individual? What does the individual regard as universal, as eternal? These, in the archaic age, are the questions about which the speculations on arete revolve.

The problem remains simple as long as the individual cherishes the same values as the rest of his group. Given this condition, even the ordinary things in life are suffused with an air of dignity, because they are part of custom and tradition. The various daily functions, such as rising in the morning and the eating of meals, are sanctified by prayer and sacrifice, and the crucial events in the life of man—birth, marriage, burial—are for ever fixed and rooted in the rigid forms of cult. Life bears the imprint of a permanent authority which is divine, and all activity is, therefore, more than just personal striving. No one doubts the meaning of life; the hallowed tradition is carried on with implicit trust in the holy wisdom of its rules. In such a society, if a man shows unusual capacity he is rewarded as a matter of course. In Homer a signal achievement is, as one would expect, also honoured with a special permanence, through the song of the bard which outlasts the deed celebrated and preserves it for posterity. This simple concept is still to be found in Pindar’s Epinicians. The problem of virtue becomes more complex when the ancient and universally recognized ideal of chivalry breaks down. Already in Homeric times a differentiation sets in. As we have seen in the story of the quarrel over the arms of Achilles, the aretai become a subject for controversy. The word arete itself contains a tendency toward the differentiation of values, since it is possible to speak of the virtues of various men and various things. As more sections of society become aware of their own merit, they are less willing to conform to the ideal of the once-dominant class. It is discovered that the ways of men are diverse, and that arete may be attained in all sorts of professions. Whereas aristocratic society had been held together, not to say made possible by a uniform notion of arete, people now begin to ask what true virtue is. The crisis of the social system is at the same time the crisis of an ideal, and thus of morality. Archilochus says (fr. 41) that different men have their hearts quickened in various ways. But he also states, elaborating a thought which first crops up in the Odyssey: the mind of men is as Zeus ushers in each day, and they think whatever they happen to hit upon (fr. 68). One result of this splitting up of the various forms of life is a certain failure of nerve. Man begins to feel that he is changeable and exposed to many variable forces. This insight deepens the moral reflexions of the archaic period; the search for the good becomes a search for the permanent.

The topic of the virtues is especially prominent in the elegy. Several elegiac poets furnish lists of the various aretai which they exemplify by means of well-known myths. Their purpose is to clarify for themselves their own attitudes toward the conflicting standards of life. Theognis (699 ff.) stands at the end of this development; with righteous indignation he complains that the masses no longer have eyes for anything except wealth. For him material gain has, in contrast with earlier views, become an enemy of virtue.

The first to deal with this general issue is Tyrtaeus. His call to arms pronounces the Spartan ideal; perhaps he was the one to formulate that ideal for the first time. Nothing matters but the bravery of the soldier fighting for his country. Emphatically he rejects all other accomplishments and virtues as secondary: the swiftness of the runner in the arena, or the strength of the wrestler, or again physical beauty, wealth, royal power, and eloquence, are as nothing before bravery. In the Iliad also a hero best proves his virtue by standing firm against the enemy, but that is not his only proof; the heroic figures of Homer dazzle us precisely because of their richness in human qualities. Achilles is not only brave but also beautiful, ‘swift of foot’, he knows how to sing, and so forth. Tyrtaeus sharply reduces the scope of the older arete; what is more, he goes far beyond Homer in magnifying the fame of fortitude and the ignominy which awaits the coward. Of the fallen he actually says that they acquire immortality (9.32). This one-sidedness is due to the fact that the community has redoubled its claim on the individual; Sparta in particular taxed the energies of its citizenry to the utmost during the calamitous period of the Messenian wars. The community is a thing of permanence for whose sake the individual mortal has to lay down his life, and in whose memory lies his only chance for any kind of survival. Even in Tyrtaeus, however, these claims of the group do not lead to a termite morality. Far from prescribing a blind and unthinking service to the whole, or a spirit of slavish self-sacrifice, Tyrtaeus esteems the performance of the individual as a deed worthy of fame. This is a basic ingredient of arete which, in spite of countless shifts and variations, is never wholly lost.

Philosophy Before Socrates
by Richard D. McKirahan
pp. 366-369

Aretē and Agathos These two basic concepts of Greek morality are closely related and not straightforwardly translatable into English. As an approximation, aretē can be rendered “excellence” or “goodness” (sometimes “virtue”), and agathos as “excellent” or “good.” The terms are related in that a thing or person is agathos if and only if it has aretē and just because it has aretē. The concepts apply to objects, conditions, and actions as well as to humans. They are connected with the concept of ergon (plural, erga), which may be rendered as “function” or “characteristic activity.” A good (agathos) person is one who performs human erga well, and similarly a good knife is a knife that performs the ergon of a knife well. The ergon of a knife is cutting, and an agathos knife is one that cuts well. Thus, the aretē of a knife is the qualities or characteristics a knife must have in order to cut well. Likewise, if a human ergon can be identified, an agathos human is one who can and on appropriate occasions does perform that ergon well, and human aretē is the qualities or characteristics that enable him or her to do so. The classical discussion of these concepts occurs after our period, in Aristotle,6 but he is only making explicit ideas that go back to Homer and which throw light on much of the pre-philosophical ethical thought of the Greeks.

This connection of concepts makes it automatic, virtually an analytic truth, that the right goal for a person—any person—is to be or become agathos. Even if that goal is unreachable for someone, the aretē–agathos standard still stands as an ideal against which to measure one’s successes and failures. However, there is room for debate over the nature of human erga, both whether there is a set of erga applicable to all humans and relevant to aretē and, supposing that there is such a set of erga, what those erga are. The existence of the aretē–agathos standard makes it vitally important to settle these issues, for otherwise human life is left adrift with no standards of conduct. […]

The moral scene Homer presents is appropriate to the society it represents and quite alien to our own. It is the starting point for subsequent moral speculation which no one in the later Greek tradition could quite forget. The development of Greek moral thought through the Archaic and Classical periods can be seen as the gradual replacement of the competitive by the cooperative virtues as the primary virtues of conduct and as the increasing recognition of the significance of people’s intentions as well as their actions.7

Rapid change in Greek society in the Archaic and Classical periods called for new conceptions of the ideal human and the ideal human life and activities. The Archaic period saw different kinds of rulers from the Homeric kings, and individual combat gave way to the united front of a phalanx of hoplites (heavily armed warriors). Even though the Homeric warrior-king was no longer a possible role in society, the qualities of good birth, beauty, courage, honor, and the abilities to give good counsel and rule well remained. Nevertheless, the various strands of the Homeric heroic ideal began to unravel. In particular, good birth, wealth, and fighting ability no longer automatically went together. This situation forced the issue: what are the best qualities we can possess? What constitutes human aretē? The literary sources contain conflicting claims about the best life for a person, the best kind of person to be, and the relative merits of qualities thought to be ingredients of human happiness. In one way or another these different conceptions of human excellence have Homeric origins, though they diverge from Homer’s conception and from one another.

Lack of space makes it impossible to present the wealth of materials that bear on this subject.8 I will confine discussion to two representatives of the aristocratic tradition who wrote at the end of the Archaic period. Pindar shows how the aristocratic ideal had survived and been transformed from the Homeric conception and how vital it remained as late as the early fifth century, and Theognis reveals how social, political, and economic reality was undermining that ideal.

p. 374

The increase in wealth and the shift in its distribution which had begun by the seventh century led to profound changes in the social and political scenes in the sixth and forced a wedge in among the complex of qualities which traditionally constituted aristocratic aretē. Pindar’s unified picture in which wealth, power, and noble birth tend to go together became ever less true to contemporary reality.

The aristocratic response to this changed situation receives its clearest expression in the poems attributed to Theognis and composed in the sixth and early fifth centuries. Even less than with Pindar can we find a consistent set of views advocated in these poems, but among the most frequently recurring themes are the view that money does not make the man, that many undeserving people are now rich and many deserving people (deserving because of their birth and social background) are now poor. It is noteworthy how Theognis plays on the different connotations of uses of the primary terms of value, agathos and aretē, and their opposites kakos and kakia: morally good vs. evil; well-born, noble vs. low-born; and politically and socially powerful vs. powerless. Since the traditional positive attributes no longer regularly all went together, it was important to decide which are most important, indeed which are the essential ingredients of human aretē.

pp. 379-382

In short, Protagoras taught his students how to succeed in public and private life. What he claimed to teach is, in a word, aretē. That this was his boast follows from the intimate connection between agathos and aretē as well as from the fact that a person with aretē is one who enjoys success, as measured by current standards. Anyone with the abilities Protagoras claimed to teach had the keys to a successful life in fifth-century Athens.

In fact, the key to success was rhetoric, the art of public speaking, which has a precedent in the heroic conception of aretē, which included excellence in counsel. But the Sophists’ emphasis on rhetoric must not be understood as hearkening back to Homeric values. Clear reasons why success in life depended on the ability to speak well in public can be found in fifth-century politics and society. […]

That is not to say that every kind of success depended on rhetoric. It could not make you successful in a craft like carpentry and would not on its own make you a successful military commander. Nor is it plausible that every student of Protagoras could have become another Pericles. Protagoras acknowledged that natural aptitude was required over and above diligence. […] Protagoras recognized that he could not make a silk purse out of a sow’s ear, but he claimed to be able to develop a (sufficiently young) person’s abilities to the greatest extent possible.28

Pericles was an effective counselor in part because he could speak well but also by dint of his personality, experience, and intelligence. To a large extent these last three factors cannot be taught, but rhetoric can be offered as a tekhnē, a technical art or skill which has rules of its own and which can be instilled through training and practice. In these ways rhetoric is like medicine, carpentry, and other technical arts, but it is different in its seemingly universal applicability. Debates can arise on any conceivable subject, including technical ones, and rhetorical skill can be turned to the topic at hand whatever it may be. The story goes that Gorgias used his rhetorical skill to convince medical patients to undergo surgery when physicians failed to persuade them.29 Socrates turned the tables on the Sophists, arguing that if rhetoric has no specific subject matter, then so far from being a universal art, it should not be considered an art at all.30 And even if we grant that rhetoric is an art that can be taught, it remains controversial whether aretē can be taught and in what aretē consists. […]

The main charges against the Sophists are of two different sorts. First the charge of prostituting themselves. Plato emphasizes the money-making aspect of the Sophist’s work, which he uses as one of his chief criteria for determining that Socrates was not a Sophist. This charge contains two elements: the Sophists teach aretē for money, and they teach it to anyone who pays. Both elements have aristocratic origins. Traditionally aretē was learned from one’s family and friends and came as the result of a long process of socialization beginning in infancy. Such training and background can hardly be bought. Further, according to the aristocratic mentality most people are not of the right type, the appropriate social background, to aspire to aretē.

Lila
by Robert Pirsig
pp. 436-442

Digging back into ancient Greek history, to the time when this mythos-to-logos transition was taking place, Phædrus noted that the ancient rhetoricians of Greece, the Sophists, had taught what they called aretê, which was a synonym for Quality. Victorians had translated aretê as “virtue” but Victorian “virtue” connoted sexual abstinence, prissiness and a holier-than-thou snobbery. This was a long way from what the ancient Greeks meant. The early Greek literature, particularly the poetry of Homer, showed that aretê had been a central and vital term.

With Homer Phædrus was certain he’d gone back as far as anyone could go, but one day he came across some information that startled him. It said that by following linguistic analysis you could go even further back into the mythos than Homer. Ancient Greek was not an original language. It was descended from a much earlier one, now called the Proto-Indo-European language. This language has left no fragments but has been derived by scholars from similarities between such languages as Sanskrit, Greek and English which have indicated that these languages were fallouts from a common prehistoric tongue. After thousands of years of separation from Greek and English the Hindi word for “mother” is still “Ma.” Yoga both looks like and is translated as “yoke.” The reason an Indian rajah’s title sounds like “regent” is because both terms are fallouts from Proto-Indo-European. Today a Proto-Indo-European dictionary contains more than a thousand entries with derivations extending into more than one hundred languages.

Just for curiosity’s sake Phædrus decided to see if aretê was in it. He looked under the “a” words and was disappointed to find it was not. Then he noted a statement that said that the Greeks were not the most faithful to the Proto-Indo-European spelling. Among other sins, the Greeks added the prefix “a” to many of the Proto-Indo-European roots. He checked this out by looking for aretê under “r.” This time a door opened.

The Proto-Indo-European root of aretê was the morpheme rt. There, beside aretê, was a treasure room of other derived “rt” words: “arithmetic,” “aristocrat,” “art,” “rhetoric,” “worth,” “rite,” “ritual,” “wright,” “right (handed)” and “right (correct).” All of these words except arithmetic seemed to have a vague thesaurus-like similarity to Quality. Phædrus studied them carefully, letting them soak in, trying to guess what sort of concept, what sort of way of seeing the world, could give rise to such a collection.

When the morpheme appeared in aristocrat and arithmetic the reference was to “firstness.” Rt meant first. When it appeared in art and wright it seemed to mean “created” and “of beauty.” “Ritual” suggested repetitive order. And the word right has two meanings: “right-handed” and “moral and esthetic correctness.” When all these meanings were strung together a fuller picture of the rt morpheme emerged. Rt referred to the “first, created, beautiful repetitive order of moral and esthetic correctness.” […]

There was just one thing wrong with this Proto-Indo-European discovery, something Phædrus had tried to sweep under the carpet at first, but which kept creeping out again. The meanings, grouped together, suggested something different from his interpretation of aretê. They suggested “importance” but it was an importance that was formal and social and procedural and manufactured, almost an antonym to the Quality he was talking about. Rt meant “quality” all right but the quality it meant was static, not Dynamic. He had wanted it to come out the other way, but it looked as though it wasn’t going to do it. Ritual. That was the last thing he wanted aretê to turn out to be. Bad news. It looked as though the Victorian translation of aretê as “virtue” might be better after all since “virtue” implies ritualistic conformity to social protocol. […]

Rta. It was a Sanskrit word, and Phædrus remembered what it meant: Rta was the “cosmic order of things.” Then he remembered he had read that the Sanskrit language was considered the most faithful to the Proto-Indo-European root, probably because the linguistic patterns had been so carefully preserved by the Hindu priests. […]

Rta, from the oldest portion of the Rg Veda, which was the oldest known writing of the Indo-Aryan language. The sun god, Sūrya, began his chariot ride across the heavens from the abode of rta. Varuna, the god for whom the city in which Phædrus was studying was named, was the chief support of rta.

Varuna was omniscient and was described as ever witnessing the truth and falsehood of men—as being “the third whenever two plot in secret.” He was essentially a god of righteousness and a guardian of all that is worthy and good. The texts had said that the distinctive feature of Varuna was his unswerving adherence to high principles. Later he was overshadowed by Indra who was a thunder god and destroyer of the enemies of the Indo-Aryans. But all the gods were conceived as “guardians of rta,” willing the right and making sure it was carried out.

One of Phædrus’s old school texts, written by M. Hiriyanna, contained a good summary: “Rta, which etymologically stands for ‘course’ originally meant ‘cosmic order,’ the maintenance of which was the purpose of all the gods; and later it also came to mean ‘right,’ so that the gods were conceived as preserving the world not merely from physical disorder but also from moral chaos. The one idea is implicit in the other: and there is order in the universe because its control is in righteous hands.…”

The physical order of the universe is also the moral order of the universe. Rta is both. This was exactly what the Metaphysics of Quality was claiming. It was not a new idea. It was the oldest idea known to man.

This identification of rta and aretê was enormously valuable, Phædrus thought, because it provided a huge historical panorama in which the fundamental conflict between static and Dynamic Quality had been worked out. It answered the question of why aretê meant ritual. Rta also meant ritual. But unlike the Greeks, the Hindus in their many thousands of years of cultural evolution had paid enormous attention to the conflict between ritual and freedom. Their resolution of this conflict in the Buddhist and Vedantist philosophies is one of the profound achievements of the human mind.

Pagan Ethics: Paganism as a World Religion
by Michael York
pp. 59-60

Pirsig contends that Plato incorporated the arete of the Sophists into his dichotomy between ideas and appearances — where it was subordinated to Truth. Once Plato identifies the True with the Good, arete’s position is usurped by “dialectically determined truth.” This, in turn, allows Plato to demote the Good to a lower order and minor branch of knowledge. For Pirsig, the Sophists were those Greek philosophers who exalted quality over truth; they were the true champions of arete or excellence. With a pagan quest for the ethical that develops from an idolatrous understanding of the physical, while Aristotle remains an important consideration, it is to the Sophists (particularly Protagoras, Prodicus and Pirsig’s understanding of them) and a reconstruction of their underlying humanist position that perhaps the most important answers are to be framed if not found as well.

A basic pagan position is an acceptance of the appetites — in fact, their celebration rather than their condemnation. We find the most unbridled expression of the appetites in the actions of the young. Youth may engage in binge-drinking, vandalism, theft, promiscuity and profligate experimentation. Pagan perspectives may recognize the inherent dangers in these as there are in life itself. But they also trust the overall process of learning. In paganism, morality has a much greater latitude than it does in the transcendental philosophy of a Pythagoras, Plato, or Plotinus: it may veer toward a form of relativism, but its ultimate check is always the sanctity of the other animate individuals. An it harm none, do what ye will. The pagan ethic must be found within the appetites and not in their denial.

In fact, paganism is part of a protest against Platonic assertion. The wider denial is that of nature herself. Nature denies the Platonic by refusing to conform to the Platonic ideal. It insists on moments of chaos, the epagomenae, the carnival, that overlap between the real and the ideal that is itself a metaphor for reality. The actual year is a refusal to cooperate with the mathematically ideal year of 360 days — close but only tantalizingly.

In addition, pagans have always loved asking what is arete? This is the fundamental question we encounter with the Sophists, Plato and Aristotle. It is the question that is before us still. The classics considered variously both happiness and the good as alternative answers. The Hedonists pick happiness — but a particular kind of happiness. The underlying principle recognized behind all these possibilities is arete ‘excellence, the best’ however it is embodied — whether god, goddess, goods, the good, gods, virtue, happiness, pleasure or all of these together. Arete is that to which both individual and community aspire. Each wants one’s own individual way of putting it together in excellent fashion — but at the same time wanting some commensurable overlap of the individual way with the community way.

What is the truth of the historical claims about Greek philosophy in Zen and the Art of Motorcycle Maintenance?
answer by Ammon Allred

Arete is usually translated as “virtue,” which is certainly connected up with the good “agathon” — but in Plato an impersonal Good is probably more important than aletheia or truth. See, for instance, the central images at the end of Book VI, where the Good is called the “Father of the Sun.” The same holds in the Philebus. And it wouldn’t be right to say that Plato (or Aristotle) thought virtue was part of some small branch called “ethics” (Plato doesn’t divide his philosophy up this way; Aristotle does — although then we get into the fact that we don’t have the dialogues he wrote — but still what he means by ethics is far broader than what we mean).

Certainly the Sophists pushed for a humanistic account of the Good, whereas Plato’s was far more impersonal. And Plato himself had a complex relationship to the Sophists (consider the dialogue of Protagoras, where Socrates and Protagoras both end up about equally triumphant).

That said, Pirsig is almost certainly right about Platonism — that is to say, the approach to philosophy that has been taught as though it were Plato’s philosophy. Certainly, the sophists have gotten a bad rap because of the view that Socrates and Plato were taken to have about the sophists; but even there, many philosophers have tried to rehabilitate them: most famously, Nietzsche.

Edge of the Depths

“In Science there are no ‘depths’; there is surface everywhere.”
~ Rudolf Carnap

I was reading Richard S. Hallam’s Virtual Selves, Real Persons. I’ve enjoyed it, but I find a point of disagreement or maybe merely doubt and questioning. He emphasizes persons as being real, in that they are somehow pre-existing and separate. He distinguishes the person from selves, although this distinction isn’t necessarily relevant to my thoughts here.

I’m not sure to what degree our views diverge, as I find much of the text to be insightful and a wonderful overview. However, to demonstrate my misgivings, the author only mentions David Hume’s bundle theory a couple of times on a few pages (in a several hundred page book), a rather slight discussion for such a key perspective. He does give a bit more space to Julian Jaynes’ bicameral theory, but even Jaynes is isolated to one fairly small section and not fully integrated into the author’s larger analysis.

The commonality between Hume and Jaynes is that they perceived conscious identity as being more nebulous — no there there. In my own experience, that feels more right to me. As one dives down into the psyche, the waters become quite murky, so dark that one can’t even see one’s hands in front of one’s face, much less know what one might be attempting to grasp. Notions of separateness, at a great enough depth, fade away — one finds oneself floating in darkness with no certain sense of distance or direction. I don’t know how to explain this to someone who hasn’t experienced altered states of mind, from extended meditation to psychedelic trips.

This is far from a new line of thought for me, but it kept jumping out at me as I read Hallam’s book. His writing is scholarly to a high degree and, for me, that is never a criticism. The downside is that a scholarly perspective alone can’t be taken into the depths. Jaynes solved this dilemma by maintaining a dual focus, intellectual argument balanced with a sense of wonder — speaking of our search for certainty, he said that, “Beyond that, there is only awe.”

I keep coming back to that. For all I appreciate of Hallam’s book, I never once experienced awe. Then again, he probably wasn’t attempting to communicate awe. So, it’s not exactly that I judge this as a failing, even if it can feel like an inadequacy from the perspective of human experience or at least my experience. In the throes of awe, we are humbled into an existential state of ignorance. A term like ‘separation’ becomes yet another word. To take consciousness directly and fully is to lose any sense of separateness for, then, there is consciousness alone — not my consciousness and your consciousness, just consciousness.

I could and have made more intellectual arguments about consciousness and how strange it can be. It’s not clear to me, as it is clear to some, that there is any universal experience of consciousness (human or otherwise). There seems to be a wide variety of states of mind found across diverse societies and species. Consider animism that seems so alien to the modern sensibility. What does ‘separation’ mean in an animate world that doesn’t assume the individual as the starting point of human existence?

I don’t need to rationally analyze any of this. Rationality just as easily turns into rationalization, justifying what we think we already know. All I can say is that, intuitively, Hume’s bundle theory makes more sense of what I know directly within my own mind, whatever that may say about the minds of others. That viewpoint can’t be scientifically proven, for the experience behind it is inscrutable, not an object to be weighed and measured, even as brain scans remain fascinating. Consciousness can’t be found by pulling apart Hume’s bundle any more than a frog’s soul can be found by dissecting its beating heart — consciousness having a similar metaphysical status as the soul. Something like the bundle theory either makes sense or not. Consciousness is a mystery, no matter how unsatisfying that may seem. Science can take us to the edge of the depths, but that is where it stops. To step off that edge requires something else entirely.

Actually, stepping off rarely happens since few, if any, ever choose to sink into the depths. One slips and falls and the depths envelop one. Severe depression was my initiation experience, the weight dragging me down. There are many possible entry points to this otherness. When that happens, thoughts on consciousness stop being intellectual speculation and thought experiment. One knows consciousness as well as one will ever know it when one drowns in it. If one thrashes one’s way back to the surface, then and only then can one offer meaningful insight, but more likely one is lost in silence, water still choking in one’s throat.

This is why Julian Jaynes, for all of his brilliance and insight, reached the end of his life filled with frustration at what felt like a failure to communicate. As his historical argument went, individuals don’t change their mindsets so much as the social system that maintains a particular mindset is changed, which in the case of bicameralism meant the collapse of the Bronze Age civilizations. Until our society faces a similar crisis and is collectively thrown into the depths, separation will remain the dominant mode of experience and understanding. As for what might replace it, that is anyone’s guess.

Here we stand, our footing not entirely secure, at the edge of the depths.

The Art of the Lost Cause

Many people are understandably disappointed, frustrated, or angry when they lose. It’s just not fun to lose, especially in a competitive society. But there are advantages to losing. And losses are as much a matter of perspective as anything else. Certainly, in more cooperative societies, what may be seen as a loss by outsiders could be taken quite differently by an insider. Western researchers discovered that difference when using games as part of social science studies. Some non-Western people refused win-lose scenarios, at least among members of the community. The individual didn’t lose, for everyone gained. I point this out to help shift our thinking.

Recently, the political left in the United States has experienced losses. Bernie Sanders lost the nomination to Hillary Clinton, who in turn lost the presidency to Donald Trump. But is this an entirely surprising result or an entirely bad outcome? Losses can lead to soul-searching and motivation for change. The Republicans as we now know them have dominated the political narrative in recent decades, which forced the Democrats to shift far to the right with third way ‘triangulation’. That wasn’t always the case. Republicans went through a period of major losses before being able to reinvent themselves with the southern strategy, the Reagan revolution, trickle-down voodoo economics, the two Santa Claus theory, culture wars, etc.

The Clinton New Democrats were only able to win at all in recent history by sacrificing the political left and, in the process, becoming the new conservative party. So, even when Democrats have been able to win, it has been a loss. Consider Obama, who turned out to be one of the most neoliberal and neocon presidents in modern history, betraying his every promise: maintaining militarism, refusing to shut down GITMO, passing pro-biz insurance reform, etc. Liberals and leftists would have been better off entirely out of power these past decades, allowing a genuine political left movement to form and so allowing democracy to begin to reassert itself from below. Instead, Democrats have managed to win just enough elections to keep the political left suppressed by co-opting their rhetoric. Democrats have won by forcing the American public to lose.

In failing so gloriously, the Democratic leadership has been publicly shamed to the point of no redemption. The party is now less popular than the opposition, an amazing feat considering how unpopular Trump and the GOP are at the moment. Yet amidst all of this, Bernie Sanders is more popular than ever, more popular among women than men and more popular among minorities than whites. I never thought Sanders was likely to win and so I wasn’t disappointed. What his campaign did accomplish, as I expected, was to reshape the political narrative and shift the Overton window back toward the political left again. This period of loss will be remembered as a turning point in the future. It was a necessary loss, a reckoning and re-envisioning.

Think about famous lost causes. One that came to mind is that of Jesus and the early Christians. They were a tiny unknown cult in a vast empire filled with hundreds of thousands of similar cults. They were nothing special, of no significance or consequence, such that no one bothered to even take note of them, not even Jewish writers at the time. Then Jesus was killed as a common criminal among other criminals and even that didn’t draw any attention. There is no evidence that the Romans considered Jesus even mildly interesting. After his death, Christianity remained small and splintered into a few communities. It took generations for this cult to grow much at all and finally attract much outside attention.

Early Christians weren’t even important enough to be feared. The persecution stories seem to have been mostly invented by later Christians to make themselves feel more important, as there are no records of any systematic and pervasive persecution. Romans killing a few cultists here and there happened all the time, and Christians didn’t stand out as being targeted more than any others. In fact, early Christians were so lacking in uniqueness that they were often confused with other groups such as the Stoics. By the way, it was the Stoics who were famous at the time for seeking out persecution and so gaining street-cred respectability, maybe causing envy among Christians. Even Christian theology was largely borrowed from others, such as natural law, also taken from the Stoics — related to the idea that a slave can be free in their mind and being, their heart and soul, because natural law transcends human law.

Still, this early status of Christians as losers created a powerful narrative that has not only survived but proliferated. Some of that narrative, such as their persecution, was invented. But that is far from unusual — the mythos that develops around lost causes tends to be more invented than not. Still, at the core, the Christians were genuinely pathetic for a couple of centuries. They weren’t a respectable religion in the Roman Empire until long after Jesus’ death, when an emperor decided to use them to shore up his own power. In the waning era of Roman imperialism, I suppose a lost cause theology felt compelling and comforting. It was also a good way to convert other defeated people, as they could be promised victory in heaven. Lost Causes tend to lead to romanticizing of a distant redemption that would one day come. And in the case of Christianity, this would mean that the ultimate sacrificial loser, Jesus himself, would return victorious! Amen! Praise the Lord! Like a Taoist philosopher, Jesus taught that to find oneself was to lose oneself but to lose oneself was to find oneself. This is a loser’s mentality and relates to why some have considered Christianity to be a slave religion. The lowly are uplifted, at least in words and ideals. But I’d argue there is more to it than seeking comfort by rationalizing suffering, oppression, and defeat.

Winning isn’t always a good thing, at least in the short term. I sometimes wonder if America would be a better place if the American Revolution had been lost. When I compare the United States to Canada, I don’t see any great advantage to American colonists having won. Canada is a much more stable and well-functioning social democracy. And the British Empire ended up enacting sweeping reforms, including abolishing slavery through law long before the US managed to end slavery through bloody conflict. In many ways, Americans were worse off after the revolution than before it. A reactionary backlash took hold as oligarchs co-opted the revolution and turned it into counter-revolution. Through the coup of a Constitutional Convention, the ruling elite seized control of the new government. It was in seeming to win that the average American ended up losing. An overt loss potentially could have been a greater long-term victory. In particular for women and blacks, being on the side of the revolutionaries didn’t turn out to be such a great deal. Women who had gained the vote had it taken away from them again, and blacks hoping for freedom were returned to slavery. The emerging radical movement of democratic reform was strangled in the crib.

Later on, the Confederates learned the power of a lost cause, to such an extent that they have become the poster boys of The Lost Cause, all of American society having been transformed by it. Victory of the United States government, once again, turned out to be far from a clear victory for the oppressed. If Confederates had won or otherwise been allowed to secede, the Confederate government would have been forced to come to terms with the majority black population that existed in the South, and they wouldn’t have had the large Northern population to help keep blacks down. It’s possible that some of the worst results could have been avoided: re-enslavement through chain gangs and mass incarceration, Jim Crow laws and Klan terrorism, sundown towns and redlining, etc. — all the ways that racism became further entrenched. After the Civil War, blacks became scattered and would then become a minority. Having lost their position as the Southern majority, they lost most of the leverage they might have had. Instead of weak reforms leading to new forms of oppression, blacks might have been able to force a societal transformation within a Confederate government or else to stage a mass exodus in order to secede and create their own separate nation-state. There were many possibilities that became impossible because of Union victory.

Now consider the civil rights movement. The leaders, Martin Luther King in particular, understood the power of a lost cause. They intentionally staged events of getting attacked by police and white mobs, always making sure there were cameras nearby to make it into a national event. It was in losing these confrontations to the greater power of white oppression that they managed to win public support. As a largely Christian movement, the civil rights activists surely had learned from the story of Jesus as a sacrificial loser and his followers as persecuted losers. The real failure of civil rights only came later on when it gained mainstream victories and a corrupt black leadership aligned with white power, such as pushing the racist 1994 Crime Bill which was part of the Democrats becoming the new conservative party. The civil rights movement might have been better able to transform society and change public opinion by having remained a lost cause for a few more generations.

A victory forced can be a victory lost. Gain requires sacrifice, not to be bought cheaply. Success requires risk of failure, putting everything on the line. The greatest losses can come from seeking victory too soon and too easily. Transformative change can only be won by losing what came before. Winning delayed sometimes is progress ensured, slow but steady change. The foundation has to be laid before something can emerge from the ground up. Being brought low is the beginning point, like planting a seed in the soil.

It reminds me of my habit of always looking down as I walk. My father, on the other hand, never looks down and has a habit of stepping on things. It is only by looking down that we can see what is underneath our feet, what we stand on or are stepping toward. Foundation and fundament are always below eye level. Even in my thinking, I’m forever looking down, to what is beneath everyday awareness and oft-repeated words. Just to look down, such a simple and yet radical act.

“Looking down is also a sign of shame or else humility, the distinction maybe being less relevant to those who avoid looking down. To humble means to bring low, to the level of the ground, the soil, humus. To be further down the ladder of respectability, to be low caste or low class, is to have a unique vantage point. One can see more clearly and more widely when one has grown accustomed to looking down, for then one can see the origins of things, the roots of the world, where experience meets the ground of being.”

Damnation: Rural Radicalism

Damnation is a new show on USA Network (co-produced by Netflix). It’s enjoyable entertainment inspired by history and influenced by literature.

As Phil De Semlyen at Empire summarizes the background of the show, it is “a 1930s saga of big business concerns and poor, struggling families, with possibly a sprinkling of Elmer Gantry-like religious hypocrisy, crime and demagoguery thrown in for good measure. ‘It’s set in the Great Depression and based on true events,’ Mackenzie tells Empire of this heady-sounding mix, ‘It’s about strikers and strike-breakers in Iowa, almost the Dust Bowl, which is bloody interesting.’ A bit Steinbeck-y, then? ‘Kind of. A little bit more amped than that, but yeah.’” And from a Cleveland.com piece by Mark Dawidziak, the show’s creator Tony Tost explained in an interview that, “They’re unquestionably two of my favorite writers… The world of John Steinbeck as presented in ‘The Grapes of Wrath,’ ‘Of Mice and Men’ and ‘Cannery Row’ was a big influence, as was Dashiell Hammett’s first novel, ‘Red Harvest,’ which is set in a Western mining town. All of that went into the soup when writing ‘Damnation.’” In mentioning that interview, Bustle’s Jack O’Keeffe writes that,

While the show’s creator has named The Grapes Of Wrath as a touchstone for the series, it also calls to mind one of the most acclaimed period films of the past decade. The 2007 film There Will Be Blood covers the first three decades of 20th Century America, stopping just shy of the Great Depression. However, the small-town rivalry between a suspicious preacher and a business-minded capitalist that arises in There Will Be Blood seems to mirror the central conflict present in Damnation. Damnation seems to be drawing from some pieces of American fiction about the sociopolitical realities of this particular era.

In an interview with Cleveland.com, Tost admitted that Damnation’s influences don’t stop at Steinbeck or the violent filmography of Quentin Tarantino. Tost also listed iconic western director Sam Peckinpah, the Pulitzer-prize winning novel Gilead, and the non-fiction book Hard Times: An Oral History Of The Great Depression among his many inspirations. While Damnation may have invented the details of its story, the creative forces behind the show seemed to do their homework when it came to capturing an accurate picture of what life was like then.

While many of the show’s influences are set 80 years ago, the most surprising source for Damnation may be 2017. Tost told Cleveland.com in the previously mentioned interview, “If you look at the 1930s — a time when there was increasing distrust in institutions, there was fear of finding meaningful work, there is this onslaught of new technology taking away jobs — the relevance [of the show to 2017 audiences] is almost inescapable.”

In a Fayetteville Flyer interview, Tost describes “it as 1/3 Clint Eastwood, 1/3 John Steinbeck, 1/3 James Ellroy. That is, it takes some characters you’d normally see in a tough western, plops them in the world of Grapes of Wrath, and places them in the sort of pulpy paranoid narrative you see in Ellroy’s novels.” About the research, he says:

It’s a blast. Back in my academic days, my field of study was American literature from 1890 to 1945 and I wrote a dissertation on the influence of new technologies in the 20s and 30s on the American imagination. Then I wrote a book about Johnny Cash which delved into the same time period from a different angle, looking at the music and preachers and myths of Americana. So by the time I came up with Damnation as a TV show, I had a good feel for the period, I think. I’ve done plenty of research since then: oral histories and historical accounts of the period and so forth. We have a person who works on the show who daily does research into various arenas we’re interested in, whether it’s carnivals or bootlegging or pornography or baseball or what have you. Largely, I subscribe to David Milch of Deadwood’s advice: do a ton of research, then forget it, and then use your imagination. So Damnation mingles official history with fiction. I sometimes call it a “speculative history” of the time period.

And about “parallels between that period and today,” he states that there are, “Too many to list. I think that’s one of the things that got us the series order from USA network. Populist anger, fears about technologies and immigrants taking away jobs, fascist tendencies, fears of environmental apocalypse (dust bowl), life and death struggles over who is or isn’t a “real” American. The parallels are often spooky.”

So, even as it follows the general pattern of known history, it doesn’t appear to be based on any specific set of events. It is about the farmer revolts in Iowa during the Great Depression (see 1931 Iowa Cow War, 1932 Farmers’ Holiday Association, & 1933 Wisconsin Milk Strike), the kind of topic demonstrating traditional all-American radicalism that triggers the political right and makes them nostalgic for the pro-capitalist political correctness of corporate media propaganda during the Cold War. But I don’t think the fascist wannabes should get too worried since, as we know from history, the capitalists or rather corporatists defeated that threat from below. The days of a radical working class and of the independent farmer were numbered. The show captures that brief moment when the average American fought against the ruling elite with a genuine if desperate hope as a last stand in defending their way of life, but it didn’t have a happy ending for them.

The USA Network can put out a show like this because capitalism is so entrenched that such history of rebellion no longer feels like a serious threat, although this sense of security might turn out to be false in the long run. Capitalist-loving corporations, of course, will sell anything for a profit, even tv shows about a left-wing populist revolt against capitalists — as Marx put it, “The last capitalist we hang shall be the one who sold us the rope.” The heckling complaints from the right-wing peanut gallery are maybe a good sign, as they are sensing that public opinion is turning against them. But as for appreciating the show, it is irrelevant what you think about the historical events themselves. The show doesn’t play into any simplistic narrative of good vs evil, as characters on both sides have complicated pasts. One is free to root for the capitalists as their goons kill the uppity farmers, if that makes one happy.

As for myself, the show is of personal interest as most of the story occurs here in Iowa. The specific location named is Holden County, but I have no idea where that is supposed to be. There presently is no Holden County in Iowa and I don’t know that there ever was. All I could find is a reference to a Holden County School (Hamilton Township) in an obituary from Decatur County, which is along the southern border of Iowa (a county over from Appanoose, where Centerville, a town with an interesting history, is located). Maybe there used to be a Holden County that was absorbed by another county, a common event I’ve come across before in genealogical research, but in this case no historical map shows a Holden County ever having existed.

The probable fictional nature of the county aside, there is a reason the general location is relevant. Iowa is a state that exists in multiple overlapping border regions, between the Mississippi River and the Missouri River, between the Midwest and Far West, between the Upper Midwest and the Upper South. It is technically in the Midwest and typically perceived as the Heart of the Heartland, the precise location of Standard American English. The broad outlines of Iowa were defined according to Indian territory, such as how the northern border of Missouri was originally formed. That boundary later became the subject of a dispute that almost led to violent conflict between Missouri and Iowa, fueled by the ideological conflict over slavery that would eventually develop into the Civil War.

Large parts of Iowa have more similarity to the Upper Midwest. It is distinct in being west of the Mississippi River, one of the last areas of refuge for many of what then were still independent Native American tribes and hence one of the last major battlegrounds in fighting off Westward expansion. Iowa is the only state where a tribe collectively bought its own land, rather than staying on a federal reservation. As for southern Iowa, there is a clear Southern influence and you can occasionally hear a Southern accent (as found all across the lower edge of the Lower Midwest). That distinguishes it from northern Iowa, which shares more of the northern European (German, Czech, and Scandinavian) culture found in Minnesota and Wisconsin. And the more urbanized and industrialized Eastern Iowa has some New England influence from early settlers.

Maybe related to the show, southern Iowa had much more racial and ethnic diversity because of the immigrants attracted to mining towns. This led to greater conflict. I know that in Centerville, a town once as diverse as any big city, the Ku Klux Klan briefly used violence and manipulation to take control of the government before being ousted by the community. The area was important for the Underground Railroad, but it wasn’t a safe area for blacks to live until after the Civil War. In Damnation, some of the town residents are members of the Black Legion, the violent militant group that was an offshoot of the KKK (originally formed to guard Klan leaders). In the show, the Black Legion is essentially a fascist group that opposes left-wing politics and labor organizing, which is historically accurate. The Klan and related groups in the North were more politically oriented, since the black population was smaller in number. In fact, the Klan tended to be found in counties with the fewest minorities (racial, ethnic, and religious), as shown in how they couldn’t maintain control in diverse towns like Centerville.

One of the few blacks portrayed in the show is a woman working at a brothel. I suppose that would have been common, as blacks would have had a harder time finding work. In a scene at the brothel, there was one detail that seemed potentially historically inaccurate. A Pinkerton goon has all the prostitutes gathered and holds up something with words on it. He wants to find out which of them can read, and it turns out that the black woman is the only literate prostitute working there. That seems unlikely. Iowa had a highly educated population early on, largely by design — as Phil Christman explains (On Being Midwestern: The Burden of Normality):

This is a part of the country where, the novelist Neal Stephenson observes, you can find small colleges “scattered about…at intervals of approximately one tank of gas.” Indeed, the grid-based zoning so often invoked to symbolize dullness actually attests to a love of education, he argues: 

People who often fly between the East and West Coasts of the United States will be familiar with the region, stretching roughly from the Ohio to the Platte, that, except in anomalous non-flat areas, is spanned by a Cartesian grid of roads. They may not be aware that the spacing between roads is exactly one mile. Unless they have a serious interest in nineteenth-century Midwestern cartography, they can’t possibly be expected to know that when those grids were laid out, a schoolhouse was platted at every other road intersection. In this way it was assured that no child in the Midwest would ever live more than √2 miles [i.e., about 1.4 miles] from a place where he or she could be educated.7

Minnesota Danish farmers were into Kierkegaard long before the rest of the country.8 They were descended, perhaps, from the pioneers Meridel LeSueur describes in her social history North Star Country: 

Simultaneously with building the sod shanties, breaking the prairie, schools were started, Athenaeums and debating and singing societies founded, poetry written and recited on winter evenings. The latest theories of the rights of man were discussed along with the making of a better breaking plow. Fourier, Marx, Rousseau, Darwin were discussed in covered wagons.9

If you’ve read Marilynne Robinson’s Gilead trilogy, you know that many of these schools were founded as centers of abolitionist resistance, or even as stops on the Underground Railroad.
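
As an aside, the arithmetic behind Stephenson’s √2 figure is easy to reconstruct, assuming (one plausible reading of his description) a one-mile road grid with schoolhouses at every other intersection, which puts schoolhouses on a two-mile square lattice. The farthest a child could then live from the nearest schoolhouse, as the crow flies, is the center of a two-by-two-mile square, one mile in each direction from the surrounding grid lines:

$$d_{\max} = \sqrt{1^2 + 1^2} = \sqrt{2} \approx 1.4 \text{ miles}$$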

The rural Midwest was always far different from the rural South. Iowa, in particular, was a bureaucratically planned society with the greatest proportion of developed land of any state in the country. The location of roads, railroads, towns, and schools was determined before most of the population arrived (similar to what China is now attempting with its mass building of cities out of nothing). The South, on the other hand, grew haphazardly and with little government intervention, as seen in the crazy zig-zagging of property lines and roads under the metes-and-bounds system. This orderly design of Iowa fit the orderly culture of Northern European immigrants and New England settlers, contributing to an idealistic mentality about how society should operate (the Iowa college towns surrounded by farmland were built on the New England model).

The farmer revolts didn’t come out of nowhere. The immigrant populations in states like Iowa were already strongly community-focused and civic-minded. They brought with them a strong work ethic, systematic methods of farming, a love of education, and much else. As an interesting example, Iowa was once known as the most musical state in the country because every town had local bands.

Unlike the stereotype, Iowans were obsessed with high culture. They saw themselves on the vanguard of Western Civilization. With so many public schools and colleges near every community, Iowans were well educated. The reason school children to this day have summers off was originally to allow farm children to help on the farm while still attending school. These Midwestern farm kids had relatively high rates of college attendance. And Iowa has long been known for having good schools, especially in the past. My mother has noted that so many Iowans she knows who are college-educated professionals went to small rural one-room schoolhouses.

Another factor is that Northern Europeans had a collectivist bent. They didn’t just love building public schools, public libraries, and public parks. They also formed civic institutions, farmer co-ops, credit unions, etc. They had a strong sense of solidarity that held their communities together. As the Iowa farmers stood together against the capitalist elites from the cities (the banksters, robber barons, and railroad tycoons), so did the German-American residents of Templeton, Iowa stand against Prohibition agents:

The most powerful weapon against oppression is community. This is attested to by the separate fates of a Templetonian like Joe Irlbeck and a big-city mobster like Al Capone. “Just as Al Capone had Eliot Ness, Templeton’s bootleggers had as their own enemy a respected Prohibition agent from the adjacent county named Benjamin Franklin Wilson. Wilson was ardent in his fight against alcohol, and he chased Irlbeck for over a decade. But Irlbeck was not Capone, and Templeton would not be ruled by violence like Chicago” (Kindle Locations 7-9 [Bryce T. Bauer, Gentlemen Bootleggers]). What ruled Templeton was most definitely not violence. Instead, it was a culture of trust. That is a weapon more powerful than all of Al Capone’s hired guns.

Damnation is a fair portrayal of this world that once existed. And it helps us to understand what destroyed that world — as vulture capitalists targeted small family farmers, controlling markets when possible or, failing that, sending in violent goons to create fear and havoc. That world survived in tatters for a few more decades, but government-subsidized big ag quickly took over. Still, small family farmers didn’t give up without a fight, as they were some of the last defenders of a pre-corporatist free market based on the ideal of meritorious hard work — the Jeffersonian ideal of the yeoman farmer with its vision of agrarian republicanism, in line with Paine’s brand of socially-minded and liberty-loving Anti-Federalism.

On a more prosaic level, one reviewer offers a critical observation. Mike Hale writes in a New York Times piece (Review: ‘Damnation’ and the Sick Soul of 1930s America):

Any fidelity to the story’s supposed place and time is clearly incidental to Mr. Tost. He’s transposed the clichés of 19th-century Wyoming or South Dakota to 1930s Iowa, and doesn’t even get the look right — shot in Alberta, the locations look nothing like the Midwest.

Perhaps he was drawn to the contemporary echoes of the Depression-era material but wanted to give it some mock-Shakespearean, “Deadwood”-style dramatic heft. There’s a lot of literary straining going on — the characters are more familiar than you’d expect with the work of Wallace Stevens and Theodore Dreiser, and the sordid capitalism and anti-Communist fervor depicted in the story invoke Sinclair Lewis and Jack London.

I’m not sure why Mike Hale thinks the show doesn’t look like Iowa. He supposedly grew up in Iowa, but I don’t know which part. Anyone who has been in Western Iowa or even much of Eastern Iowa would recognize similar terrain. I doubt anything has been transposed.

Iowa is a young state and, having once been part of the Wild West, early on had a cowboy culture. Famous Hollywood cowboys came from the Midwest, specifically this region along the Upper Mississippi River — such as Ronald Reagan, who was from western Illinois and worked in Iowa, and John Anderson, who was born in western Illinois and was college-educated in Iowa, but also others who were born and raised in Iowa: John Wayne, Hank Worden, Neville Brand, etc. (not just playing cowboys on the big screen but growing up around that cowboy culture). This isn’t just farm country with fields of corn and soy. Most of that is feed for animals, such as cattle. Iowa is part of the rodeo circuit and there is a strong horse culture around here. A short distance from where I live, a coworker of mine helps drive cattle down a highway every year to move them from one field to another.

But as I pointed out, none of this contradicts it also being a highly educated and literate population. I don’t know why Hale would think that certain writers would be unknown to Midwesterners, especially popular and populist writers like Jack London. As for Theodore Dreiser, he was a fellow German-American Midwesterner who wrote about rural life and was politically aligned with working class interests, including involvement in the defense of radicals like those Iowa farmers — the kind of writer one would expect Iowans, specifically working class activists, to be reading during the Great Depression era. That would be even more true for Sinclair Lewis who was from neighboring Minnesota, not to mention also writing popular books about Midwestern communities and radical criticisms of growing fascism — the same emergent fascism that threatened those Iowa farmers.

It’s interesting that an Iowan like Mike Hale would be so unaware of Iowa history. But maybe that is because he was born and spent much of his life outside of Iowa, specifically outside of the United States. His family isn’t from Iowa and so he has no roots here. I noticed that he tweeted that he “Was intrigued ‘Damnation’ is set in my state, Iowa. Didn’t expect the crucifixion, gun battles and frontier brothel”; to which someone responded that “If in Palo Alto, San Jose & NYC since ’77, IA hasn’t been ur state 4 awhile.” Besides, part of his childhood wasn’t even spent in Iowa but instead in Asia. And beyond that, many people simply don’t think he is that great of a critic (see Cultural Learnings, Variety, and Mediaite).

A better review is by Jeff Iblings over at The Tracking Board (Damnation Review: “Sam Riley’s Body”). The review is specifically about the first episode, but goes into greater detail:

Damnation is a new show on USA Networks set in the 1930’s during prohibition, the dust bowl era, and the social unrest during the unionization and strikes that accompanied the corruption of that time. It’s an intriguing look at a moment in American history when people began to wrest control away from a government bought and paid for by industrialists, only to have their movement squashed by the collusion of moneyed interests and the politicians they’d paid for. The series begins in Holden, Iowa as farmers have formed a blockade around the town so no more shipments of produce can reach the city. The powerful banker in town, who owns the newspaper and the Sheriff, has bribed the market in town to keep his food prices low, to price the farmers out of making a profit on their crops so they’ll default on the loans he’s given them. A preacher in town fans the flames of the farmer’s unhappiness and gets them to revolt against the banker. Who is this mysterious preacher, and what does he have planned? […]

Damnation is clearly well researched, and the true-life stories it uses to flesh out its world are there to service the narrative, not overburden the show. 1930’s America was a desperate, bleak time, where moneyed interests controlled everything. The game was fixed back then, with politicians in the pocket of industrialists and wealthy bankers. The people had nothing more to give, since the wealthy had taken nearly everything from them. It’s a very relevant tale. Almost the same exact thing is going on again in present day America, which I would imagine, is one of the points of Damnation.

Iblings writes in another Damnation review of the second episode:

Tony Tost and his writers room delve into the history of the Great Depression in order to mine forgotten aspects of our political and social movements. It’s incredible how prescient much of the struggles of the farmers depicted still are problems today. Price fixing, bank negligence and dishonesty, politicians in the pockets of big business, the stifling of the labor movement when it’s needed most, and the inherent racism and protectionism of white Americans towards other races are all as topical today as they were in the 1930’s. It’s as if little has actually changed 100 years later. Damnation may be a historical television series, but it’s speaking to the America of today.

And about the third episode, he writes:

There are a few interesting moments I want to point out that really stuck with me. The first is the opening scene of a couple watching their kids playing baseball and taking great joy in it. When the wife goes into the shed to get the kids some cream soda, there are nooses hanging from the ceiling and Black Legion outfits hung up on the walls. The man then exclaims to his wife, “If this isn’t the American dream, I don’t know what is.” Damnation uses this banal setting, and these uneventful people to show how the American dream was an exclusionary ideal. They look like normal people you’d run into, but underneath this veneer are racist secrets. This prejudice was pervasive back then, but in Trump’s America this type of hatred and racism has become the norm once again. It was disgusting then, and it’s disgusting now.

What I like about the show is how it portrays the nature of populist politics during that historical era. The show begins in 1931, a moment of transition for American society in the waning days of Prohibition. The Great Depression followed decades of Populism and set the stage for the Progressivism that would follow. The next year Franklin Delano Roosevelt would be elected and later re-elected three times, the most popular president in US history.

What many forget about both Populism and Progressivism is the role that religion played, especially Evangelicalism. In the past, Evangelicals were often radical reformers, promoting separation of church and state, abolitionism, women’s rights, and such. Think of the 1896 “Cross of Gold” speech given by William Jennings Bryan. This goes back to how Thomas Paine, the original American populist and progressive, used Christian language to advocate radical politics. Interestingly, just as Paine was an anti-Christian deist who used religious rhetoric, the leader of the farmers’ revolt is a man falsely posing as an itinerant preacher, although he shows signs of genuine religious feeling, such as sparing a man’s life when he sees the likeness of a cross marked on the floor near the man’s head. However one takes his persona of religiosity, the preaching of a revolutionary Jesus is perfectly in line with the political rhetoric of the period.

I also can’t help but appreciate how much it resonates with the present. The past, in a sense, always remains relevant — as William Faulkner so deftly put it, “The past is never dead. It’s not even past.” In a New York Post interview, the show’s creator Tony Tost was asked, “How relevant is the plot about the common man battling the establishment today?” And he replied that, “I wrote the first two episodes, like, three years ago, but contemporary history keeps making the show feel more and more relevant. I’m not necessarily trying to do an allegory about the present, but history is very cyclical. There’s some core elemental conflicts and issues that we keep returning to. In a way, the present day almost caught up.”

As with Hulu’s The Handmaid’s Tale and Amazon’s Man in the High Castle, Damnation has good timing. Such hard-hitting social commentary is important at times like these. And in the form of entertainment, it is more likely to have an impact.

* * *

State of Emergency: The Depression and the Plots to Create an American Dictatorship
by Nate Braden, Kindle Locations 510-571
(see Great Depression, Iowa, & Revolts)

“In September 1932 Fortune published a shocking profile of the effect Depression poverty was having on the American people. Titled “No One Has Starved” – in mocking reference to Herbert Hoover’s comment to that effect – Fortune essentially called the President a liar and explained why in a ten page article. Predicting eleven million unemployed by winter, its grim math figured these eleven million breadwinners were responsible for supporting another sixteen and a half million people, thus putting the total number of Americans without any income whatsoever at 27.5 million. Along with another 6.5 million who were underemployed, this meant 34 million citizens – nearly a third of the country’s population – lived below the poverty line. [1]

“Confidence was low that a Hoover reelection would bring any improvement in the country’s situation. He had ignored calls in 1929 to bail out banks after the stock market crashed on the grounds that the federal government had no business saving failed enterprises. With no liquidity in the financial markets, credit evaporated and deflation pushed prices and wages lower, laying waste to asset values. Two years passed before Hoover responded with the Reconstruction Finance Corporation, created to distribute $300 million in relief funds to state and local governments. It was too little, too late. The money would have been better served shoring up the banks three years earlier.

“With each cold, hungry winter that passed, political discussions grew more radical and less tolerant. Talk of revolution was more openly voiced. Harper’s, reflecting the opinion of East Coast intellectuals, pondered its likelihood and confidently asserted: “Revolutions are made, not by the weak, the unsuccessful, or the ignorant, but by the strong and the informed. They are processes, not merely of decay and destruction, but of advance and building. An old order does not disappear until a new order is ready to take its place.”[2]

“As this smug analysis was rolling off the presses, the weak, the unsuccessful, and the ignorant were already proving it wrong. Most people expected a revolt to start in the cities, but it was in the countryside, in Herbert Hoover’s home state no less, where men first took up arms against a system they had been raised to believe in but no longer did. On August 13, 1932, Milo Reno, the onetime head of the Iowa Farmer’s Union, led a group of five hundred men in an assault on Sioux City. They called it a “farm holiday,” but it was in fact an insurrection. Reno and his supporters blocked all ten highways into the city and confiscated every shipment of milk except those destined for hospitals, dumping it onto the side of the road or taking it into town to give away free. Fed up with getting only two cents for a quart of milk that cost them four cents to bring to market, the farmers were creating their own scarcities in an attempt to drive up prices.

“The insurgents enjoyed local support. Telephone operators gave advance warning of approaching lawmen, who were promptly ambushed and disarmed. When 55 men were arrested for picketing the highway to Omaha, a crowd of a thousand angry farmers descended on the county jail in Council Bluffs and forced their release. The uprising just happened to coincide with the Iowa National Guard’s annual drill in Des Moines, but Governor Dan Turner declined to use these troops to break up the disturbance, saying he had “faith in the good judgment of the farmers of Iowa that they will not resort to violence.”[3]

“The rebellion spread to Des Moines, Spencer, and Boone. Farmers in Nebraska, South Dakota, and Minnesota declared their own holidays. Milo Reno issued a press release vowing to continue “until the buying power of the farmer is restored – which can be done only by conceding him the right to cost of production, based on an American standard of existence.” Business institutions, he added, “whether great or small, important or humble, must suffer.” While advising his followers to obey the law and engage only in “peaceful picketing,” Reno issued this warning: “The day for pussyfooting and deception in the solution of the farmers’ problems is past, and the politicians who have juggled with the agricultural question and used it as a pawn with which to promote their own selfish interests can succeed no longer.”[4]

“Reno and his men had laid down their marker. Aware that the insurrectionists might call his bluff, the governor stopped short of issuing an ultimatum, but he kept his Guardsmen in Des Moines just in case. The showdown never came – a mysterious shotgun attack on one of Reno’s camps near Cherokee was enough to persuade him to call off the holiday – but others weren’t cowed by the violence. The same day Reno issued his press release, coal miners in neighboring Illinois went on strike after their pay was cut to five dollars a day. Fifteen thousand of them shut down shafts all over Franklin County, the state’s largest mining region, and took over the town of Coulterville for several hours, “exhausting provisions at the restaurant, swamping the telephone exchange with calls and choking roads and fields for a mile around” the New York Times reported. Governor Louis Emmerson ordered state troopers to take the town back. Wading into a hostile, sneering crowd who shouted “Cossacks!” at them, the police broke it up with pistols and clubs, putting eight miners in the hospital.

“The rebels were bloodied but unbowed. Vowing to march back in to coal country, strike leader Pat Ansbury told a journalist, “if we go back it must be with weapons. We can’t face the machine guns of those Franklin County jailbirds with our naked hands. Not a man in our midst had even a jackknife. When we go back we must have arms, organization and cooperation from the other side.” Shaking his head at the lost opportunity, he made sure the reporter hadn’t misunderstood him. “This policy of peaceful picketing is out from now on.” Reno conducted a similar post-mortem, acknowledging that his side may have lost the battle but would not lose the war: “You can no more stop this movement than you could stop the revolution. I mean the revolution of 1776.”[5]

“Not only were farmers burdened by low commodity prices, they were also swamped with high-interest mortgages and crushing taxes. In February 1933 Prudential Insurance, the nation’s largest land creditor, announced it would suspend foreclosures on the 37,000 farm titles it held, valued at $209 million. Mutual Benefit and Metropolitan Life followed suit, all of them finally coming to the conclusion that they couldn’t get blood from a rock.

“It was also getting very dangerous to be a repo man in the Midwest. When farms were foreclosed and the land put up for auction, neighbors of the dispossessed property holder would often show up at the sale, drive away any serious bidders, then buy the land for a few dollars and deed it back to the original owner. By this subterfuge a debt of $400 at one Ohio auction was settled for two dollars and fifteen cents. A mortgage broker in Illinois received only $4.90 for the $2,500 property he had put into receivership. An Oklahoma attorney who tried to serve foreclosure papers to a farm widow was promptly waylaid by her neighbors, including the county sheriff, driven ten miles out of town and dumped unceremoniously on the side of the road. A Kansas City realtor who had foreclosed on a 500-acre farm turned up with a bullet in his head, his killers never brought to justice. [6]”

We’re Ready For Democracy

Routing the progressive movement back into the establishment parties for decades is what got us into this mess. “Playing it safe” turned out to be extremely dangerous.

THE CASE FOR A PEOPLE’S PARTY
From Resistance to Revolution

Americans are Progressive and Want a New Party

❖ Issue polls show that the majority of Americans are progressive. They want single-payer health care, money out of politics, free public college, and much more.

❖ The majority of Americans want a major new party: 57% to 37%. In the 2016 general election, 55% of Americans wanted a major third party option on the ballot.

Affiliation with the Democratic and Republican parties has been declining for a decade and is near historic lows. Democrats account for 28% of the country, Republicans for 29%, and independents for 40%. Gallup projects that 50% of Americans will be independents by 2020.

Gallup figures reveal an alarming trend: since the 2016 general election, affiliation with the Democratic Party is declining while the Republican Party is holding steady, even growing slightly. The Democratic Party is losing supporters at the time when it should be growing most. Despite Trump’s attacks on working people and Bernie’s monumental efforts to bring people into the Democratic Party, more and more Democrats are becoming independents[…]

The political revolution has already been won in the hearts and minds of the next generation. Millennials almost universally reject the status quo and the parties that enforce it. 91% of people under 29 wanted a major third party option on the ballot in 2016. People under 29 have a much more favorable view of socialism than capitalism.

The electorate is rapidly becoming even more progressive. As of 2016, Millennials are the largest age-group voting bloc. Four years of highly-progressive Millennials will replace four years of Silent Generation conservatives in the electorate by 2020.

The Democratic Party Remains Firmly in Neoliberal Control […]

Americans have a less favorable view of the Democratic Party than they have of Trump and the Republican Party. Two-thirds of Americans say that the Democratic Party is out of touch with the concerns of most people. More Americans believe that Trump and the Republican Party are in touch with their concerns.

In a poll of swing voters who supported Obama and then supported Trump, twice as many people said that the Democratic Party favors the wealthy versus the Republican Party. The Democratic Party’s brand is destroyed. Working people have no confidence in it. […]

Sanders can Create a Party for the Progressive Majority

Bernie is the most popular politician in the country and has an 80% favorability rating among Democrats and 57% favorability among independents. His appeal with conservatives would attract many anti-establishment Republicans to the new party as well.

A new party that attracts just half of the Democrats and half of the independents would be the largest party in America by far.

❖ If Bernie starts a new party, we would begin with at least half of the Democratic Party. Then we would add independents, young voters, anti-establishment voters, the white working class, people of color, third party voters, people who have given up on voting, and many conservatives who have a favorable impression of Bernie. This would make the party significantly larger than what remains of the Democratic Party.

❖ The spoiler effect leads voters to consolidate around two major parties, one on the left and one on the right. Our new party will be the largest party on the left, leading whatever remains of the Democratic Party to consolidate around us. The spoiler effect will accelerate rather than hinder the new party’s growth, as the progressive majority and everyone opposed to Trump gathers around the largest opposition party. […]

Only a New Party Can Defeat Trump and his Agenda

❖ This past November, we witnessed a spectacular failure of an attempt to defeat Trump and authoritarianism from a neoliberal party. Since November, the Democratic Party has only exacerbated the conditions that depressed turnout and led Americans to support Trump in the first place.

❖ Republicans are decimating Democrats because the country is growing more progressive on the issues. As Americans grow more progressive, they realize that the Democratic Party doesn’t represent them and are not inspired to turn out. The more progressive the country gets, the less motivated voters are to support a corporate party.

The people who need to vote in Democratic Primaries for progressives to win are leaving the party and becoming independents, or not voting at all. The party’s declining affiliation and favorability numbers are reiterating what we learned in 2016: opposing Trump without offering a populist alternative is the path to failure. The Democrats are poised to continue losing and our progressive country will continue moving to the right. An arrangement that suits the corporations and billionaires who fund both establishment parties. […]

The Numbers
Americans are Progressive

Issue polls show that a large majority of Americans are progressive. They would overwhelmingly support the new party’s platform. All figures are percentages.

Americans support:

Equal pay for men and women 93%
Overhaul campaign finance system 85%
Money has too much influence on campaigns 84%
Paid family and medical leave 82%
Some corporations don’t pay their fair share 82%
Some wealthy people don’t pay their fair share 79%
Allow government to negotiate drug prices 79%
Increase financial regulation 79%
Expand Social Security benefits by taxing the wealthy 72%
Infrastructure jobs program 71%
Close offshore corporate tax loopholes 70%
Raise the minimum wage to $15 63%
The current distribution of wealth is unfair 63%
Free public college 62%
Require special prosecutor for police killings 61%
Ensure net neutrality 61%
Ban the revolving door for corporate executives in government 59%
Replace the ACA with single payer health care 58%
Break up the big banks 58%
Government should do more to solve problems 57%
Public banking at post offices 56%

Trauma, Embodied and Extended

One of the better books on trauma I’ve seen is by Resmaa Menakem. He is a trauma therapist with a good range of personal and professional experience, which allows him to persuasively combine science with anecdotes. I heard him speak at Prairie Lights bookstore. He was at the end of his book tour and, instead of reading from his book My Grandmother’s Hands, he discussed what inspired it.

He covered his experience working with highly traumatized contract workers on military bases in Afghanistan. And he grounded it with stories about his grandmother. But more interestingly, he mentioned a key scientific study (see note 15 below). Although I had come across it before, I had forgotten about it. Setting up his discussion, he asked the audience, “Have any of you been to Washington, DC and smelled the cherry blossoms?” He described the warm, pleasant aroma. And then he gave the details of the study.

Mice were placed in a special enclosure. It was the perfect environment to fulfill a mouse’s every need and desire. But the wire mesh on the bottom was connected to electrical wires. The researchers would pump in the smell of cherries and then switch on the electricity. The mice jumped, ran around, clambered over each other, and struggled to escape — what any animal, including humans, would do in a similar situation. This was repeated many times, until finally the mice would have this Pavlovian response to cherry smell alone without any electric shock.

That much isn’t surprising. Thousands of studies have demonstrated such behaviorist conditioning. Where it gets intriguing is that the mice born to these traumatized mice also responded the same way to the cherry smell, despite never having been shocked. And the same behavior was observed with the generation of mice following that. Traumatic memory of something as specific as a smell became internalized and ingrained within the body itself, passed on through genetics (or, to be specific, epigenetics). It became free-floating trauma disconnected from its originating source.

Menakem asked what another scientist would think if they came in after the initial part of the study. The new scientist would not have seen the traumatizing shocks, but instead would only observe the strange response to the smell of cherries. Based on this limited perspective, this scientist would conclude that there was something wrong with those mice. From the book, here is how he describes it in human terms:

“Unhealed trauma acts like a rock thrown into a pond; it causes ripples that move outward, affecting many other bodies over time. After months or years, unhealed trauma can appear to become part of someone’s personality. Over even longer periods of time, as it is passed on and gets compounded through other bodies in a household, it can become a family norm. And if it gets transmitted and compounded through multiple families and generations, it can start to look like culture.”

This is a brilliant yet grounded way of explaining trauma. It goes beyond a victimization cycle. The trauma gets passed on, with or without a victimizer to mediate the transmission, although typically this process goes hand in hand with continuing victimization. Trauma isn’t a mere psychological phenomenon manifesting as personal dysfunction. It can become embodied and expressed as a shared experience, forming the background to the lives, relationships, and communities within an entire society — over the centuries, it could solidify into a well-trod habitus and entrenched social order. The personal becomes intergenerational becomes historical.

This helps explain the persistence of societal worldviews and collective patterns, what most often gets vaguely explained as ‘culture’. It’s not just about trauma, for anything can be passed on in similar ways, such as neurocognitive memes involving thought, perception, and behavior — and it is plausible that, whether seemingly harmful or beneficial, much of this is supported by epigenetic mechanisms contributing to specific expressions of nature-nurture dynamics. Related to this, Christine Kenneally offers a corroborating perspective (The Invisible History of the Human Race, Kindle Locations 2430-2444):

“It seemed that both families and social institutions matter but that the former is more powerful. The data suggested that a region might develop its own culture of distrust and that it could affect people who moved into that area, even if their ancestors had not been exposed to the historical event that destroyed trust in the first place. But if someone’s ancestors had significant exposure to the slave trade, then even if he moved away from the area where he was born to an area where there was no general culture of mistrust, he was still less likely to be trusting. Indeed, Nunn and Wantchekon found evidence that the inheritance of distrust within a family was twice as powerful as the distrust that is passed down in a community.”

Kenneally doesn’t frame this according to epigenetics. But that would be a highly probable explanation, considering the influence happens mostly along familial lines, potentially implying a biological component. Elsewhere, the author does mention it in passing, using the same mouse study along with a human study (Kindle Locations 4863-4873):

“The lives that our parents and grandparents lived may also affect the way genetic conditions play out in our bodies. One of the central truths of twentieth-century genetics was that the genome is passed on from parents to child unaffected by the parents’ lives. But it has been discovered in the last ten years that there are crucial exceptions to this rule. Epigenetics tells us that events in your grandfather’s life may have tweaked your genes in particular ways. The classic epigenetics study showed that the DNA of certain adults in the Netherlands was irrevocably sculpted by the experience of their grandparents in a 1944 famine. In cases like this a marker that is not itself a gene is inherited and plays out via the genes. More recent studies have shown complex multigenerational effects. In one, mice were exposed to a traumatic event, which was accompanied by a particular odor. The offspring of the mice, and then their offspring, showed a greater reactivity to the odor than mice whose “grandparents” did not experience such conditioning. In 2014 the first ancient epigenome, from a four-thousand-year-old man from Greenland, was published. Shortly after that, drafts of the Neanderthal and Denisovan epigenomes were published. They may open up an entirely new way to compare and contrast our near-relatives and ancestors and to understand the way that they passed down experiences and predispositions. As yet it’s unclear for how many generations these attachments to our genes might be passed down.”

In emphasizing this point, she continues her thought with the comment that (Kindle Locations 4874-4876), “Even given our ability to read hundreds of thousands of letters in the DNA of tens of thousands of people, it turns out that — at least for the moment — family history is still a better predictor of many health issues. For example, it is the presence of a BRCA mutation plus a family history of breast cancer that most significantly raises a woman’s risk of the disease.”

Much of that ‘family history’ would be epigenetic or else involve other biological mechanisms, such as stress-induced hormones within the fetal environment of the womb. Also, microbiomes are inherited and have been shown to alter epigenetics, which means the non-human genes of bacteria can alter the expression of human genes (this can be taken a further step back, since presumably bacterial genetics also involve epigenetics). Besides all of this, there is much else that gets passed on by those around us, from viruses to parasites.

Another pathway of transmission would be shared environmental conditions, specifically considering that people tend to share environments to the degree their relationships are close. Those in the same society would have more shared environment than those in other societies, those in the same community moreso than those in other communities in the same society, those in the same neighborhood moreso than those in other neighborhoods in the same community, and those in the same family moreso than those in other families in the same neighborhood. The influence of environments is powerfully demonstrated with the rat park research. And the environmental factors easily remain hidden, even under careful laboratory conditions.

What we inherit is diverse and complex. But inheritance isn’t fatalism. Consider another mouse study involving electric shocks (Genetic ‘switch’ for memories, The Age), showing that the effects of trauma can be epigenetically reversed within the body:

“Both sets of mice were trained to fear a certain cage by giving them a mild electric shock every time they were put inside.
“Mice whose Tet1 gene was disabled learned to associate the cage with the shock, just like the normal mice. However, when the mice were put in the cage without an electric shock, the two groups behaved differently.
“To the scientists’ astonishment, mice with the Tet1 gene did not fear the cage because their memory of being hurt had already been replaced by new information. The mice with the disabled gene, whose memories had not been replaced, were still traumatised by the experience.”

Trauma isn’t a personal failing or weakness. In a sense, it isn’t even personal. It’s a biological coping mechanism, passed on from body to body, across generations and centuries. Trauma is a physical condition, based on a larger context of environmental conditions. And maybe one day we will be able to treat it as easily as any other physical condition. In turn, this could have a profound impact on so much of what has been considered ‘psychological’ and ‘cultural’. There are immense implications for the overlap of personal healthcare and public health.

* * *

My Grandmother’s Hands: Racialized Trauma and the Pathway to Mending Our Hearts and Bodies
by Resmaa Menakem
Chapter 3 Body to Body, Generation to Generation
pp. 23-34

Not to know what happened before you were born is to remain forever a child.
Cicero

No man can know where he is going unless he knows exactly where he has been and exactly how he arrived at his present place.
Maya Angelou

Most of us think of trauma as something that occurs in an individual body, like a toothache or a broken arm. But trauma also routinely spreads between bodies, like a contagious disease. […]

It’s not hard to see how trauma can spread like a contagion within couples, families, and other close relationships. What we don’t often consider is how trauma can spread from body to body in any relationship.

Trauma also spreads impersonally, of course, and has done so throughout human history. Whenever one group oppresses, victimizes, brutalizes, or marginalizes another, many of the victimized people may suffer trauma, and then pass on that trauma response to their children as standard operating procedure. 13 Children are highly susceptible to this because their young nervous systems are easily overwhelmed by things that older, more experienced nervous systems are able to override. As we have seen, the result is a soul wound or intergenerational trauma. When the trauma continues for generation after generation, it is called historical trauma. Historical trauma has been likened to a bomb going off, over and over again.

When one settled body encounters another, this can create a deeper settling of both bodies. But when one unsettled body encounters another, the unsettledness tends to compound in both bodies. In large groups, this compounding effect can turn a peaceful crowd into an angry mob. The same thing happens in families, especially when multiple family members face painful or stressful situations together. It can also occur more subtly over time, when one person repeatedly passes on their unsettledness to another. In her book Everyday Narcissism, therapist Nancy Van Dyken calls this hazy trauma: trauma that can’t be traced back to a single specific event.

Unhealed trauma acts like a rock thrown into a pond; it causes ripples that move outward, affecting many other bodies over time. After months or years, unhealed trauma can appear to become part of someone’s personality. Over even longer periods of time, as it is passed on and gets compounded through other bodies in a household, it can become a family norm. And if it gets transmitted and compounded through multiple families and generations, it can start to look like culture.

But it isn’t culture. It’s a traumatic retention that has lost its context over time. Though without context, it has not lost its power. Traumatic retentions can have a profound effect on what we do, think, feel, believe, experience, and find meaningful. (We’ll look at some examples shortly.)

What we call out as individual personality flaws, dysfunctional family dynamics, or twisted cultural norms are sometimes manifestations of historical trauma. These traumatic retentions may have served a purpose at one time—provided protection, supported resilience, inspired hope, etc.—but generations later, when adaptations continue to be acted out in situations where they are no longer necessary or helpful, they get defined as dysfunctional behavior on the individual, family, or cultural level.

The transference of trauma isn’t just about how human beings treat each other. Trauma can also be inherited genetically. Recent work in genetics has revealed that trauma can change the expression of the DNA in our cells, and these changes can be passed from parent to child. 14

And it gets weirder. We now have evidence that memories connected to painful events also get passed down from parent to child—and to that child’s child. What’s more, these experiences appear to be held, passed on, and inherited in the body, not just in the thinking brain. 15 Often people experience this as a persistent sense of imminent doom—the trauma ghosting I wrote about earlier.

We are only beginning to understand how these processes work, and there are a lot of details we don’t know yet. Having said that, here is what we do know so far:

  • A fetus growing inside the womb of a traumatized mother may inherit some of that trauma in its DNA expression. This results in the repeated release of stress hormones, which may affect the nervous system of the developing fetus.
  • A man with unhealed trauma in his body may produce sperm with altered DNA expression. These in turn may inhibit the healthy functioning of cells in his children.
  • Trauma can alter the DNA expression of a child or grandchild’s brain, causing a wide range of health and mental health issues, including memory loss, chronic anxiety, muscle weakness, and depression.
  • Some of these effects seem particularly prevalent among African Americans, Jews, and American Indians, three groups who have experienced an enormous amount of historical trauma.

Some scientists theorize this genetic alteration may be a way to protect later generations. Essentially, genetic changes train our descendants’ bodies through heredity rather than behavior. This suggests that what we call genetic defects may actually be ways to increase our descendants’ odds of survival in a potentially dangerous environment, by relaying hormonal information to the fetus in the womb.

The womb is itself an environment: a watery world of sounds, movement, and human biochemicals. Recent research suggests that, during the last trimester of pregnancy, fetuses in the womb can learn and remember just as well as newborns. 16 Part of what they may learn, based on what their mothers go through during pregnancy, is whether the world outside the womb is safe and healthy or dangerous and toxic. […]

Zoë Carpenter sums this up in a simple, stark observation:

Health experts now think that stress throughout the span of a woman’s life can prompt biological changes that affect the health of her future children. Stress can disrupt immune, vascular, metabolic, and endocrine systems, and cause cells to age more quickly. 17 […]

These are the effects of trauma involving specific incidents. But what about the effects of repetitive trauma: unhealed traumas that accumulate over time? The research is now in: the effects on the body from trauma that is persistent (or pervasive, repetitive, or long-held) are significantly negative, sometimes profoundly so. While many studies support this conclusion, 19 the largest and best known is the Adverse Childhood Experiences Study (ACES), a large study of 17,000 people 20 conducted over three decades by the Centers for Disease Control and Prevention (CDC) and the healthcare conglomerate Kaiser Permanente. Published in 2014, ACES clearly links childhood trauma (and other “adverse childhood events” involving abuse or neglect 21) to a wide range of long-term health and social consequences, including illness, disability, social problems, and early death—all of which can get passed down through the generations. The ACE study also demonstrates a strong link between the number of “adverse childhood events” and increased rates of heart disease, cancer, stroke, diabetes, chronic lung disease, alcoholism, depression, liver disease, and sexually transmitted diseases, as well as illicit drug use, financial stress, poor academic and work performance, pregnancy in adolescence, and attempted suicide. People who have experienced four or more “adverse events” as children are twice as likely to develop heart disease than people who have experienced none. They are also twice as likely to develop autoimmune diseases, four and a half times as likely to be depressed, ten times as likely to be intravenous drug users, and twelve times as likely to be suicidal. As children, they are thirty-three times as likely to have learning and behavior problems in school.

Pediatrician Nadine Burke-Harris offers the following apt comparison: “If a child is exposed to lead while their brain is developing, it affects the long-term development of their brain . . . It’s the same way when a child is exposed to high doses of stress and trauma while their brain is developing . . . Exposure to trauma is particularly toxic for children.” In other words, there is a biochemical component behind all this.

When people experience repeated trauma, abuse, or high levels of stress for long stretches of time, a variety of stress hormones get secreted into their bloodstreams. In the short term, the purpose of these chemicals is to protect their bodies. But when the levels of these chemicals 22 remain high over time, they can have toxic effects, making a person less healthy, less resilient, and more prone to illness. High levels of one or more of these chemicals can also crowd out other, healthier chemicals—those that encourage trust, intimacy, motivation, and meaning. […]

The results of the ACE study are dramatic. Yet it covered only fifteen years. How much more dramatic might the results be for people who have experienced (or whose ancestors experienced) centuries of enslavement or genocide? 23

Historical trauma, intergenerational trauma, institutionalized trauma (such as white-body supremacy, gender discrimination, sexual orientation discrimination, etc.), and personal trauma (including any trauma we inherit from our families genetically, or through the way they treat us, or both) often interact. As these traumas compound each other, or as each new or recent traumatic experience triggers the energy of older experiences, they can create ever-increasing damage to human lives and human bodies.

* * *

Notes:

13 Over time, roles can switch and the oppressed may become the oppressors. They then pass on trauma not only to their children, but also to a new group of victims.

14 This research has led to the creation of a new field of scientific inquiry known as epigenetics, the study of inheritable changes in gene expression. Epigenetics has transformed the way scientists think about genomes. The first study to clearly show that stress can cause inheritable gene defects in humans was published in 2015 by Rachel Yehuda and her colleagues, titled “Holocaust Exposure Induced Intergenerational Effects on FKBP5 Methylation” ( Biological Psychiatry 80, no. 5, September 2016: 372–80). (Earlier studies identified the same effect in animals.) Yehuda’s study demonstrated that damaged genes in the bodies of Jewish Holocaust survivors—the result of the trauma they suffered under Nazism—were passed on to their children. Later research confirms Yehuda’s conclusions.

15 A landmark study demonstrating this effect in mice was published in 2014 by Kerry Ressler and Brian Dias (“Parental Olfactory Experience Influences Behavior and Neural Structure in Subsequent Generations,” Nature Neuroscience 17: 89–96). Ressler and Dias put male mice in a small chamber, then occasionally exposed them to the scent of acetophenone (which smells like cherries)—and, simultaneously, to small electric shocks. Eventually the mice associated the scent with pain; they would shudder whenever they were exposed to the smell, even after the shocks were discontinued. The children of those mice were born with a fear of the smell of acetophenone. So were their grandchildren. As of this writing, no one has completed a similar study on humans, both for ethical reasons and because we take a lot longer than mice to produce a new generation.

16 A good, if very brief, overview of these studies appeared in Science: http://www.sciencemag.org/news/2013/08/babies-learn-recognize-words-womb .

17 This quote is from an eye-opening article in The Nation, “What’s Killing America’s Black Infants?”: https://www.thenation.com/article/whats-killing-americas-black-infants . Carpenter also notes that in the United States, Black infants die at a rate that’s over twice as high as for white infants. In some cities, the disparity is much worse: in Washington, DC, the infant mortality rate in Ward 8, which is over 93 percent Black, is ten times the rate in Ward 3, which is well-to-do and mostly white. […]

19 See, for example: “Early Trauma and Inflammation” ( Psychosomatic Medicine 74, no. 2, February/March 2012: 146–52); “Chronic Stress, Glucocorticoid Receptor Resistance, Inflammation, and Disease Risk” ( Proceedings of the National Academy of Sciences 109, no. 16, April 17, 2012: 5995–99); and “Adverse Childhood Experiences and Adult Risk Factors for Age-Related Disease: Depression, Inflammation, and Clustering of Metabolic Risk Markers” ( Archives of Pediatrics and Adolescent Medicine 163, no. 12, December 2009: 1135–43).

20 Of the people studied, 74.8 percent were white; 4.5 percent were African American; 54 percent were female; and 46 percent were male.

21 The ten “adverse childhood events” are divorced or separated parents; physical abuse; physical neglect; emotional abuse; emotional neglect; sexual abuse; domestic violence that the child witnessed; substance abuse in the household; mental illness in the household; and a family member in prison.

22 These chemicals are cortisol, adrenaline, and norepinephrine. They are secreted by the adrenal gland.

23 Please don’t imagine that we African Americans claim to have cornered the market on adverse childhood experiences. In fact, in his brilliant book Hillbilly Elegy: A Memoir of a Family and Culture in Crisis (New York: HarperCollins, 2016), white Appalachian J. D. Vance cites the ACE study in reference to himself, his sister Lindsay, and “my corner of the demographic world”: working-class Americans. As Vance notes, “Four in every ten working-class people had faced multiple instances of childhood trauma.” If you want to deeply understand the hearts, psyches, and bodies of many Americans today, you can do no better than to read both Hillbilly Elegy and Ta-Nehisi Coates’s Between the World and Me (New York: Spiegel & Grau, 2015).

* * *

What white bodies did to Black bodies they did to other white bodies first.
Janice Barbee

* * *

From Genetic Literacy Project:

Childhood trauma: The kids are not alright and part of the explanation may be linked to epigenetics
Your DNA may have been altered by childhood stress and traumas
Childhood trauma leaves mark on DNA of some victims
Is the genetic imprint of traumatic experiences passed on to our children?
Do parents pass down trauma to their children?
Was trauma from Holocaust passed on to children of survivors?
Holocaust survivors studied to determine if trauma-induced mental illness can be inherited
Epigenetics, pregnancy and the Holocaust: How trauma can shape future generations
Epigenetic inheritance: Holocaust survivors passed genetic marks of trauma to children
How epigenetics, our gut microbiome and the environment interact to change our lives
Skin microbiomes differ largely between cultures, more diverse sampling is needed
Cities have unique microbiome ‘fingerprint,’ study finds
Your microbiome isn’t just in you: It’s all around you
Microbes, like genes, pass from one generation to next
Microbiome profile highlights diet, upbringing and birth
Baby’s microbiome may come from mom’s mouth via placenta

The Group Conformity of Hyper-Individualism

When talking to teens, it’s helpful to understand how their tendency to form groups and cliques is partly a consequence of American culture. In America, we encourage individuality. Children freely and openly develop strong preferences—defining their self-identity by the things they like and dislike. They learn to see differences. Though singular identity is the long-term goal, in high school this identity-quest is satisfied by forming and joining distinctive subgroups. So, in an ironic twist, the more a culture emphasizes individualism, the more the high school years will be marked by subgroupism. Japan, for instance, values social harmony over individualism, and children are discouraged from asserting personal preferences. Thus, less groupism is observed in their high schools.

That is from Bronson and Merryman’s NurtureShock (p. 45). It touches on a number of points. The most obvious point is made clear by the authors. American culture is defined by groupism. The authors discussed this in a chapter about race, explaining why group stereotypes are so powerful in this kind of society. They write that, “The security that comes from belonging to a group, especially for teens, is palpable. Traits that mark this membership are—whether we like it or not—central to this developmental period.” This was emphasized with a University of Michigan study done on Detroit black high school students “that shows just how powerful this need to belong is, and how much it can affect a teen.”

Particularly for the boys, those who rated themselves as dark-skinned blacks had the highest GPAs. They also had the highest ratings for social acceptance and academic confidence. The boys with lighter skin tones were less secure socially and academically.

The researchers subsequently replicated these results with students who “looked Latino.”

The researchers concluded that doing well in school could get a minority teen labeled as “acting white.” Teens who were visibly sure of membership within the minority community were protected from this insult and thus more willing to act outside the group norm. But the light-skinned blacks and the Anglo-appearing Hispanics—their status within the minority felt more precarious. So they acted more in keeping with their image of the minority identity—even if it was a negative stereotype—in order to solidify their status within the group.

A group-minded society reinforces stereotypes at a very basic level of human experience and relationships. Along with a weak culture of trust, American hyper-individualism creates the conditions for strong group identities and all that goes with them. Stereotypes become the defining feature of group identities.

The worst part isn’t the stereotypes projected onto us but the stereotypes we internalize. And those who least fit the stereotypes feel the greatest pressure to conform to them in how they dress, speak, act, and behave. There isn’t a strong national identity to create social belonging and support. So, Americans turn to sub-groups and the population becomes splintered, the citizenry divided against itself.

The odd part about this is how counterintuitive it seems, according to the dominant paradigm. The ironic part about American hyper-individualism is that it is a social norm demanding social conformity through social enforcement. In many ways, American society is one of the most conformist in the world, related to how much we are isolated into enclaves of groupthink by media bubbles and echo chambers.

This isn’t inevitable, as the comparison to the Japanese makes clear. Not all societies operate according to hyper-individualistic ideology. In Japan, it’s not just the outward expression of the individual that is suppressed but also separate sub-group identities within the larger society. According to one study, this leads to greater narcissism among the Japanese. Because it is taboo to share personal issues in the public sphere, the Japanese spend more time privately dwelling on their personal issues (i.e., narcissism as self-obsession). This is exacerbated by the lack of sub-groups through which to publicly express the personal and socially strengthen individuality. Inner experience, for the Japanese, has fewer outlets to give it form and so there are fewer ways to escape the isolated self.

Americans, on the other hand, are so group-oriented that even their personal issues are part of the public sphere. Americans value both the speaking of personal views and the listening to the personal views of others — a practice upheld by liberal democratic ideals of free speech, open dialogue, and public debate. For Americans, the personal is the public in the way that the individualistic is the groupish. If we are to apply narcissism to Americans, it is mostly in terms of what is called collective narcissism. We Americans are narcissistic about the groups we belong to. And our entire self-identities get filtered through group identities, presumably with a less intense self-awareness than the Japanese experience.

This is why American teens show a positive response to being perceived as closely conforming to a stereotypical group such as within a racial community. The same pattern, though, wouldn’t be found in a country like Japan. For a Japanese person to be strongly identified with a separate sub-group would be seen as unacceptable to larger social norms. Besides, there is little need for sub-group belonging in Japan, since most Japanese would grow up with a confident sense of simply being Japanese — no effort required. Americans have to work much harder for their social identities and so, in compensation, Americans also have to go to greater lengths to prove their individuality.

It’s not that one culture is superior to the other. The respective problems are built into each society. In fact, the problems are necessary in maintaining the social orders. To eliminate the problems would be to chip away at the foundations, either leading to destruction or requiring a restructuring. That is the reason that, in the United States, racism is so persistent and so difficult to talk about. The very social order is at stake.

State and Non-State Violence Compared

There is a certain kind of academic that simultaneously interests me and infuriates me. Jared Diamond, in The World Until Yesterday, is an example of this. He is a knowledgeable guy and is able to communicate that knowledge in a coherent way. He makes many worthy observations and can be insightful. But there is also a naivete that at times shows up in his writing. I get the sense that occasionally his conclusions preceded the evidence he shares. Also, he’ll point out the problems with the evidence and then, ignoring what he admitted, will treat that evidence as strongly supporting his biased preconceptions.

Despite my enjoyment of Diamond’s book, I was disappointed specifically in his discussion of violence and war (much of the rest of the book, though, is worthy and I recommend it). Among the intellectual elite, it seems fashionable right now to describe modern civilization as peaceful — that is fashionable among the main beneficiaries of modern civilization, not so much fashionable according to those who bear the brunt of the costs.

In Chapter 4, he asks, “Did traditional warfare increase, decrease, or remain unchanged upon European contact?” That is a good question. And as he makes clear, “This is not a straightforward question to decide, because if one believes that contact does affect the intensity of traditional warfare, then one will automatically distrust any account of it by an outside observer as having been influenced by the observer and not representing the pristine condition.” But he never answers the question. He simply assumes that the evidence proves what he appears to have already believed.

I’m not saying he doesn’t take significant effort to make a case. He goes on to say, “However, the mass of archaeological evidence and oral accounts of war before European contact discussed above makes it far-fetched to maintain that people were traditionally peaceful until those evil Europeans arrived and messed things up.” The archaeological and oral evidence, like the anthropological evidence, is diverse. For example, in northern Europe, there is no evidence of large-scale warfare before the end of the Bronze Age when multiple collapsing civilizations created waves of refugees and marauders.

All the evidence shows us is that some non-state societies have been violent and others non-violent, no different than in comparing state societies. But we must admit, as Diamond does briefly, that contact and the rippling influences of contact across wide regions can lead to greater violence along with other alterations in the patterns of traditional culture and lifestyle. Before contact ever happens, most non-state societies have already been influenced by trade, disease, environmental destruction, invasive species, refugees, etc. Those pre-contact indirect influences can last for generations or centuries prior to final contact, especially with non-state societies that were more secluded. And those secluded populations are the most likely to be studied as supposedly representative of uncontacted conditions.

We should be honest in admitting our vast ignorance. The problem is that, if Diamond fully admitted this, he would have little to write about on such topics or it would be a boring book with all of the endless qualifications (I personally like scholarly books filled with qualifications, but most people don’t). He is in the business of popular science and so speculation is the name of the game he is playing. Some of his speculations might not hold up to much scrutiny, not that the average reader will offer much scrutiny.

He continues to claim that, “the evidence of traditional warfare, whether based on direct observation or oral histories or archaeological evidence, is so overwhelming.” And so asks, “why is there still any debate about its importance?” What a silly question. We simply don’t know. He could be right, just as easily as he could be wrong. Speculations are a dime a dozen. The same evidence can be and regularly is made to conform to and confirm endless hypotheses that are mostly non-falsifiable. We don’t know and probably will never know. It’s like trying to use chimpanzees as a comparison for human nature, even though chimpanzees have for a long time been in a conflict zone with human encroachment, poaching, civil war, habitat loss, and ecosystem destabilization. No one knows what chimpanzees were like pre-contact. But we do know that bonobos that live across a major river in a less violent area express less violent behavior. Maybe there is a connection, not that Diamond is likely to mention these kinds of details.

I do give him credit, though. He knows he is on shaky ground. In pointing out the problems he previously discussed, he writes that, “One reason is the real difficulties, which we have discussed, in evaluating traditional warfare under pre-contact or early-contact conditions. Warriors quickly discern that visiting anthropologists disapprove of war, and the warriors tend not to take anthropologists along on raids or allow them to photograph battles undisturbed: the filming opportunities available to the Harvard Peabody Expedition among the Dani were unique. Another reason is that the short-term effects of European contact on tribal war can work in either direction and have to be evaluated case by case with an open mind.” In between the lines, Jared Diamond makes clear that he can’t really know much of anything about earlier non-state warfare.

Even as he mentions some archaeological sites showing evidence of mass violence, he doesn’t clarify that these sites are a small percentage of archaeological sites, most of which don’t show mass violence. It’s not as if anyone is arguing mass violence never happened prior to civilization. The Noble Savage myth is not widely supported these days and so there is no point in his propping it up as a straw man to knock down.

From my perspective, it goes back to what comparisons one wishes to make. Non-state societies may or may not be more violent per capita. But that doesn’t change the reality that state societies cause more harm, as a total number. Consider one specific example of state warfare. The United States has been continuously at war since it was founded, which is to say not a year has gone by without war (against both state and non-state societies), and most of those wars have been wars of aggression. The US military, CIA covert operations, economic sanctions, etc., surely have killed at least hundreds of millions of people in my lifetime — probably more people killed than by all non-states combined throughout human existence.

Here is the real difference in violence between non-states and states. State violence is more hierarchically controlled and targeted in its destruction. Non-state societies, on the other hand, tend to spread the violence across entire populations. When a tribe goes to war, often the whole tribe is involved. So state societies are different in that usually only the poor and minorities, the oppressed and disadvantaged experience the harm. If you look at the specifically harmed populations in state societies, the mortality rate is probably higher than seen in non-state societies. The essential point is that this violence is concentrated and hidden.

Immensely larger numbers of people are the victims of modern state violence, both overt violence and slow violence. But the academics who write about it never have to personally experience or directly observe these conditions of horror, suffering, and despair. Modern civilization is less violent for the liberal class, of which academics are members. That doesn’t say much about the rest of the global population. The permanent underclass lives in constant violence within their communities and from state governments, which leads to a different view on the matter.

To emphasize this bias, one could further note what Jared Diamond ignores or partly reports. In the section where he discusses violence, he briefly mentions the Piraha. He could have pointed out that they are a non-violent non-state society. They have no known history of warfare, capital punishment, abuse, homicide, or suicide — at least none has been observed or discovered through interviews. Does he write about this evidence that contradicts his views? Of course not. Instead, lacking any evidence of violence, he speculates about violence. Here is the passage from Chapter 2 (pp. 93-94):

“Among still another small group, Brazil’s Piraha Indians (Plate 11), social pressure to behave by the society’s norms and to settle disputes is applied by graded ostracism. That begins with excluding someone from food-sharing for a day, then for several days, then making the person live some distance away in the forest, deprived of normal trade and social exchanges. The most severe Piraha sanction is complete ostracism. For instance, a Piraha teen-ager named Tukaaga killed an Apurina Indian named Joaquim living nearby, and thereby exposed the Piraha to the risk of a retaliatory attack. Tukaaga was then forced to live apart from all other Piraha villages, and within a month he died under mysterious circumstances, supposedly of catching a cold, but possibly instead murdered by other Piraha who felt endangered by Tukaaga’s deed.”

Why did he add that unfounded speculation at the end? The only evidence he has is that their methods of social conformity are non-violent. Someone is simply ostracized. But that doesn’t fit his beliefs. So he assumes there must be some hidden violence that has never been discovered, even after generations of observers have lived among them. Even the earliest account of contact from centuries ago, as far as I know, indicates absolutely no evidence of violence. It makes one wonder how many more examples he ignores, dismisses, or twists to fit his preconceptions.

This reminds me of Julian Jaynes’ theory of bicameral societies. He noted that these Bronze Age societies were non-authoritarian, despite having high levels of social conformity. There is no evidence of these societies having had written laws, courts, police forces, formal systems of punishment, or standing armies. Like non-state tribal societies, when they went to war, the whole population sometimes was mobilized. Bicameral societies were smaller, mostly city-states, and so still had elements of tribalism. But the point is that the enculturation process itself was powerful enough to enforce order without violence. That was only within a society, as war still happened between societies, although it was limited and usually only involved neighboring societies. I don’t think there is evidence of continual warfare. Yet when conflict erupted, it could lead to total war.

It’s hard to compare either tribes or ancient city-states to modern nation-states. Their social orders and how they maintained them are far different. And the violence involved is of a vastly disparate scale. Besides, I wouldn’t take the past half century of relative peace in the Western world as being representative of modern civilization. In this new century, we might see billions of deaths from all combined forms of violence. And the centuries earlier were some of the bloodiest and most destructive ever recorded. Imperialism and colonialism, along with the legacy systems of neo-imperialism and neo-colonialism, have caused and contributed to the genocide or cultural destruction of probably hundreds of thousands of societies worldwide, in most cases with all evidence of their existence having disappeared. This wholesale massacre has left a dearth of societies remaining with which to make comparisons. The survivors living in isolated niches may not be representative of the societal diversity that once existed.

Anyway, the variance of violence and war casualty rates likely is greater in comparing societies of the same kind than in comparing societies of different kinds. As the nearby bonobos are more peaceful than chimpanzees, the Piraha are more peaceful than the Yanomami who live in the same region — as Canada is more peaceful than the US. That might be important to explain and a lot more interesting. But this more incisive analysis wouldn’t fit Western propaganda, specifically the neo-imperial narrative of Pax Americana. From Pax Hispanica to Pax Britannica to Pax Americana, quite possibly billions of combatants have died in wars and billions more innocents as casualties. That is neither a small percentage nor a small total number, if anyone is genuinely concerned about body counts.

* * *

Rebutting Jared Diamond’s Savage Portrait
by Paul Sillitoe & Mako John Kuwimb, iMediaEthics

Why Does Jared Diamond Make Anthropologists So Mad?
by Barbara J. King, NPR

In a beautifully written piece for The Guardian, Wade Davis says that Diamond’s “shallowness” is what “drives anthropologists to distraction.” For Davis, geographer Diamond doesn’t grasp that “cultures reside in the realm of ideas, and are not simply or exclusively the consequences of climatic and environmental imperatives.”

Rex Golub at Savage Minds slams the book for “a profound lack of thought about what it would mean to study human diversity and how to make sense of cultural phenomena.” In a fit of vexed humor, the Wenner-Gren Foundation for anthropological research tweeted Golub’s post along with this comment: “@savageminds once again does the yeoman’s work of exploring Jared Diamond’s new book so the rest of us don’t have to.”

This biting response isn’t new; see Jason Antrosio’s post from last year in which he calls Diamond’s Pulitzer Prize-winning Guns, Germs, and Steel a “one-note riff,” even “academic porn” that should not be taught in introductory anthropology courses.

Now, in no way do I want to be the anthropologist who defends Diamond because she just doesn’t “get” what worries all the cool-kid anthropologists about his work. I’ve learned from their concerns; I’m not dismissing them.

In point of fact, I was startled at this passage on the jacket of The World Until Yesterday: “While the gulf that divides us from our primitive ancestors may seem unbridgably wide, we can glimpse most of our former lifestyle in those largely traditional societies that still exist or were recently in existence.” This statement turns small-scale societies into living fossils, the human equivalent of ancient insects hardened in amber. That’s nonsense, of course.

Lest we think to blame a publicist (rather than the author) for that lapse, consider the text itself. Near the start, Diamond offers a chronology: until about 11,000 years ago, all people lived off the land, without farming or domesticated animals. Only around 5,400 years ago did the first state emerge, with its dense population, labor specialization and power hierarchy. Then Diamond fatally overlays that past onto the present: “Traditional societies retain features of how all of our ancestors lived for tens of thousands of years, until virtually yesterday.” Ugh.

Another problem, one I haven’t seen mentioned elsewhere, bothers me just as much. When Diamond urges his WEIRD readers to learn from the lifeways of people in small-scale societies, he concludes: “We ourselves are the only ones who created our new lifestyles, so it’s completely in our power to change them.” Can he really be so unaware of the privilege that allows him to assert — or think — such a thing? Too many people living lives of poverty within industrialized nations do not have it “completely in their power” to change their lives, to say the least.

Patterns of Culture by Ruth Benedict (1934) wins Jared Diamond (2012)
by Jason Antrosio, Living Anthropologically

Compare to Jared Diamond. Diamond has of course acquired some fame for arguing against biological determinism, and his Race Without Color was once a staple for challenging simplistic tales of biological race. But by the 1990s, Diamond simply echoes perceived liberal wisdom. Benedict and Weltfish’s Races of Mankind was banned by the Army as Communist propaganda, and Weltfish faced persecution from McCarthyism (Micaela di Leonardo, Exotics at Home 1998:196,224; see also this Jon Marks comment on Gene Weltfish). Boas and Benedict swam against the current of the time, when backlash could be brutal. In contrast, Diamond’s claims on race and IQ have mostly been anecdotal. They have never been taken seriously by those who call themselves “race realists” (see Jared Diamond won’t beat Mitt Romney). Diamond has never responded scientifically to the re-assertion of race from sources like “A Family Tree in Every Gene,” and he helped propagate a medical myth about racial differences in hypertension.

And, of course, although Guns, Germs, and Steel has been falsely branded as environmental or geographical determinism, there is no doubt that Diamond leans heavily on agriculture and geography as explanatory causes for differential success. […]

Compare again Jared Diamond. Diamond has accused anthropologists of falsely romanticizing others, but by subtitling his book What Can We Learn from Traditional Societies, Diamond engages in more than just politically-correct euphemism. When most people think of a “traditional society,” they are thinking of agrarian peasant societies or artisan handicrafts. Diamond, however, is referring mainly to what we might term tribal societies, or hunters and gatherers with some horticulture. Curiously, for Diamond the dividing line between the yesterday of traditional and the today of the presumably modern was somewhere around 5,000-6,000 years ago (see The Colbert Report). As John McCreery points out:

Why, I must ask, is the category “traditional societies” limited to groups like Inuit, Amazonian Indians, San people and Melanesians, when the brute fact of the matter is that the vast majority of people who have lived in “traditional” societies have been peasants living in traditional agricultural civilizations over the past several thousand years since the first cities appeared in places like the valleys of the Nile, the Tigris-Euphrates, the Ganges, the Yellow River, etc.? Talk about a big blind spot.

Benedict draws on the work of others, like Reo Fortune in Dobu and Franz Boas with the Kwakiutl. Her own ethnographic experience was limited. But unlike Diamond, Benedict was working through the best ethnographic work available. Diamond, in contrast, splays us with a story from Allan Holmberg, which then gets into the New York Times, courtesy of David Brooks. Compare bestselling author Charles Mann on “Holmberg’s Mistake” (the first chapter of his 1491: New Revelations of the Americas Before Columbus):

The wandering people Holmberg traveled with in the forest had been hiding from their abusers. At some risk to himself, Holmberg tried to help them, but he never fully grasped that the people he saw as remnants from the Paleolithic Age were actually the persecuted survivors of a recently shattered culture. It was as if he had come across refugees from a Nazi concentration camp, and concluded that they belonged to a culture that had always been barefoot and starving. (Mann 2005:10)

As for Diamond’s approach to comparing different groups: “Despite claims that Diamond’s book demonstrates incredible erudition what we see in this prologue is a profound lack of thought about what it would mean to study human diversity and how to make sense of cultural phenomenon” (Alex Golub, How can we explain human variation?).

Finally there is the must-read review Savaging Primitives: Why Jared Diamond’s ‘The World Until Yesterday’ Is Completely Wrong by Stephen Corry, Director of Survival International:

Diamond adds his voice to a very influential sector of American academia which is, naively or not, striving to bring back out-of-date caricatures of tribal peoples. These erudite and polymath academics claim scientific proof for their damaging theories and political views (as did respected eugenicists once). In my own, humbler, opinion, and experience, this is both completely wrong–both factually and morally–and extremely dangerous. The principal cause of the destruction of tribal peoples is the imposition of nation states. This does not save them; it kills them.

[…] Indeed, Jared Diamond has been praised for his writing, for making science popular and palatable. Others have been less convinced. As David Brooks reviews:

Diamond’s knowledge and insights are still awesome, but alas, that vividness rarely comes across on the page. . . . Diamond’s writing is curiously impersonal. We rarely get to hear the people in traditional societies speak for themselves. We don’t get to meet any in depth. We don’t get to know what their stories are, what the contents of their religions are, how they conceive of individual selfhood or what they think of us. In this book, geographic and environmental features play a much more important role in shaping life than anything an individual person thinks or feels. The people Diamond describes seem immersed in the collective. We generally don’t see them exercising much individual agency. (Tribal Lessons; of course, Brooks may be smarting from reviews that called his book The Dumbest Story Ever Told)

[…] In many ways, Ruth Benedict does exactly what Wade Davis wanted Jared Diamond to do–rather than providing a how-to manual of “tips we can learn,” to really investigate the existence of other possibilities:

The voices of traditional societies ultimately matter because they can still remind us that there are indeed alternatives, other ways of orienting human beings in social, spiritual and ecological space. This is not to suggest naively that we abandon everything and attempt to mimic the ways of non-industrial societies, or that any culture be asked to forfeit its right to benefit from the genius of technology. It is rather to draw inspiration and comfort from the fact that the path we have taken is not the only one available, that our destiny therefore is not indelibly written in a set of choices that demonstrably and scientifically have proven not to be wise. By their very existence the diverse cultures of the world bear witness to the folly of those who say that we cannot change, as we all know we must, the fundamental manner in which we inhabit this planet. (Wade Davis review of Jared Diamond; and perhaps one of the best contemporary versions of this project is Wade Davis, The Wayfinders: Why Ancient Wisdom Matters in the Modern World)

[…] This history reveals the major theme missing from both Benedict’s Patterns of Culture and especially missing from Diamond–an anthropology of interconnection. That as Eric Wolf described in Europe and the People Without History peoples once called primitive–now perhaps more politely termed tribal or traditional–were part of a co-production with Western colonialism. This connection and co-production had already been in process long before anthropologists arrived on the scene. Put differently, could the Dobuan reputation for being infernally nasty savages have anything to do with the white recruiters of indentured labour, which Benedict mentions (1934:130) but then ignores? Could the revving up of the Kwakiutl potlatch and megalomaniac gamuts have anything to do with the fur trade?

The Collapse Of Jared Diamond
by Louis Proyect, Swans Commentary

In general, the approach of the authors is to put the ostensible collapse into historical context, something that is utterly lacking in Diamond’s treatment. One of the more impressive record-correcting exercises is Terry L. Hunt and Carl P. Lipo’s Ecological Catastrophe, Collapse, and the Myth of “Ecocide” on Rapa Nui (Easter Island). In Collapse, Diamond judged Easter Island as one of the more egregious examples of “ecocide” in human history, a product of the folly of the island’s rulers whose decision to construct huge statues led to deforestation and collapse. By chopping down huge palm trees that were used to transport the stones used in statue construction, the islanders were effectively sealing their doom. Not only did the settlers chop down trees, they hunted the native fauna to extinction. The net result was a loss of habitat that led to a steep population decline.

Diamond was not the first observer to call attention to deforestation on Easter Island. In 1786, a French explorer named La Pérouse also attributed the loss of habitat to the “imprudence of their ancestors for their present unfortunate situation.”

Referring to research about Easter Island by scientists equipped with the latest technologies, the authors maintain that the deforestation had nothing to do with transporting statues. Instead, it was an accident of nature related to the arrival of rats in the canoes of the earliest settlers. Given the lack of native predators, the rats had a field day and consumed the palm nuts until the trees were no longer reproducing themselves at a sustainable rate. The settlers also chopped down trees to make a space for agriculture, but the idea that giant statues had anything to do with the island’s collapse is as much of a fiction as Diamond’s New Yorker article.

Unfortunately, Diamond is much more interested in ecocide than genocide. If people interested him half as much as palm trees, he might have said a word or two about the precipitous decline in population that occurred after the island was discovered by Europeans in 1722. Indeed, despite deforestation there is evidence that the island’s population grew between 1250 and 1650, the period when deforestation was taking place — leaving aside the question of its cause. As was the case when Europeans arrived in the New World, a native population was unable to resist diseases such as smallpox and died in massive numbers. Of course, Diamond would approach such a disaster with his customary Olympian detachment and write it off as an accident of history.

While all the articles pretty much follow the same narrowly circumscribed path as the one on Easter Island, there is one that adopts the Grand Narrative that Jared Diamond has made a specialty of and beats him at his own game. I am referring to the final article, Sustainable Survival by J.R. McNeill, who describes himself in a footnote thusly: “Unlike most historians, I have no real geographic specialization and prefer — like Jared Diamond — to hunt for large patterns in the human past.”

And one of those “large patterns” ignored by Diamond is colonialism. The greatest flaw in Collapse is that it does not bother to look at the impact of one country on another. By treating countries in isolation from one another, it becomes much easier to turn the “losers” into examples of individual failing. So when Haiti is victimized throughout the 19th century for having the temerity to break with slavery, this hardly enters into Diamond’s moral calculus.

Compassion Sets Humans Apart
by Penny Spikins, Sapiens

There are, perhaps surprisingly, only two known cases of likely interpersonal violence in the archaic species most closely related to us, Neanderthals. That’s out of a total of about 30 near-complete skeletons and 300 partial Neanderthal finds. One—a young adult living in what is now St. Césaire, France, some 36,000 years ago—had the front of his or her skull bashed in. The other, a Neanderthal found in Shanidar Cave in present-day Iraq, was stabbed in the ribs between 45,000 and 35,000 years ago, perhaps by a projectile point shot by a modern human.

The earliest possible evidence of what might be considered warfare or feuding doesn’t show up until some 13,000 years ago at a cemetery in the Nile Valley called Jebel Sahaba, where many of the roughly 60 Homo sapiens individuals appear to have died a violent death.

Evidence of human care, on the other hand, goes back at least 1.5 million years—to long before humans were anatomically modern. A Homo ergaster female from Koobi Fora in Kenya, dated to about 1.6 million years ago, survived several weeks despite a toxic overaccumulation of vitamin A. She must have been given food and water, and protected from predators, to live long enough for this disease to leave a record in her bones.

Such evidence becomes even more notable by half a million years ago. At Sima de los Huesos (Pit of Bones), a site in Spain occupied by ancestors of Neanderthals, three of 28 individuals found in one pit had severe pathology—a girl with a deformed head, a man who was deaf, and an elderly man with a damaged pelvis—but they all lived for long periods of time despite their conditions, indicating that they were cared for. At the same site in Shanidar where a Neanderthal was found stabbed, researchers discovered another skeleton who was blind in one eye and had a withered arm and leg as well as hearing loss, which would have made it extremely hard or impossible to forage for food and survive. His bones show he survived for 15 to 20 years after injury.

At a site in modern-day Vietnam called Man Bac, which dates to around 3,500 years ago, a man with almost complete paralysis and frail bones was looked after by others for over a decade; he must have received care that would be difficult to provide even today.

All of these acts of caring lasted for weeks, months, or years, as opposed to a single moment of violence.

Violence, Okinawa, and the ‘Pax Americana’
by John W. Dower, The Asia-Pacific Journal

In American academic circles, several influential recent books argue that violence declined significantly during the Cold War, and even more precipitously after the demise of the Soviet Union in 1991. This reinforces what supporters of US strategic policy including Japan’s conservative leaders always have claimed. Since World War II, they contend, the militarized Pax Americana, including nuclear deterrence, has ensured the decline of global violence.

I see the unfolding of the postwar decades through a darker lens.

No one can say with any certainty how many people were killed in World War II. Apart from the United States, catastrophe and chaos prevailed in almost every country caught in the war. Beyond this, even today criteria for identifying and quantifying war-related deaths vary greatly. Thus, World War II mortality estimates range from an implausible low of 50 million military and civilian fatalities worldwide to as many as 80 million. The Soviet Union, followed by China, suffered by far the greatest number of these deaths.

Only when this slaughter is taken as a baseline does it make sense to argue that the decades since World War II have been relatively non-violent.

The misleading euphemism of a “Cold War” extending from 1945 to 1991 helps reinforce the decline-of-violence argument. These decades were “cold” only to the extent that, unlike World War II, no armed conflict took place pitting the major powers directly against one another. Apart from this, these were years of mayhem and terror of every imaginable sort, including genocides, civil wars, tribal and ethnic conflicts, attempts by major powers to suppress anti-colonial wars of liberation, and mass deaths deriving from domestic political policies (as in China and the Soviet Union).

In pro-American propaganda, Washington’s strategic and diplomatic policies during these turbulent years and continuing to the present day have been devoted to preserving peace, defending freedom and the rule of law, promoting democratic values, and ensuring the security of its friends and allies.

What this benign picture ignores is the grievous harm as well as plain folly of much postwar US policy. This extends to engaging in atrocious war conduct, initiating never-ending arms races, supporting illiberal authoritarian regimes, and contributing to instability and humanitarian crises in many parts of the world.

Such destructive behavior was taken to new levels in the wake of the September 11, 2001, attack on the World Trade Center and Pentagon by nineteen Islamist hijackers. America’s heavy-handed military response has contributed immeasurably to the proliferation of global terrorist organizations, the destabilization of the Greater Middle East, and a flood of refugees and internally displaced persons unprecedented since World War II.

Afghanistan and Iraq, invaded following September 11, remain shattered and in turmoil. Neighboring countries are wracked with terror and insurrection. In 2016, the last year of Barack Obama’s presidency, the US military engaged in bombing and air strikes in no less than seven countries (Afghanistan, Iraq, Pakistan, Somalia, Yemen, Libya, and Syria). At the same time, elite US “special forces” conducted largely clandestine operations in an astonishing total of around 140 countries–amounting to almost three-quarters of all the nations in the world.

Overarching all this, like a giant cage, is America’s empire of overseas military bases. The historical core of these bases in Germany, Japan, and South Korea dates back to after World War II and the Korean War (1950-1953), but the cage as a whole spans the globe and is constantly being expanded or contracted. The long-established bases tend to be huge. Newer installations are sometimes small and ephemeral. (The latter are known as “lily pad” facilities, and now exist in around 40 countries.) The total number of US bases presently is around 800.

Okinawa has exemplified important features of this vast militarized domain since its beginnings in 1945. Current plans to relocate US facilities to new sites like Henoko, or to expand to remote islands like Yonaguni, Ishigaki, and Miyako in collaboration with Japanese Self Defense Forces, reflect the constant presence but ever changing contours of the imperium. […]

These military failures are illuminating. They remind us that with but a few exceptions (most notably the short Gulf War against Iraq in 1991), the postwar US military has never enjoyed the sort of overwhelming victory it experienced in World War II. The “war on terror” that followed September 11 and has dragged on to the present day is not unusual apart from its seemingly endless duration. On the contrary, it conforms to this larger pattern of postwar US military miscalculation and failure.

These failures also tell us a great deal about America’s infatuation with brute force, and the double standards that accompany this. In both wars, victory proved elusive in spite of the fact that the United States unleashed devastation from the air greater than anything ever seen before, short of using nuclear weapons.

This usually comes as a surprise even to people who are knowledgeable about the strategic bombing of Germany and Japan in World War II. The total tonnage of bombs dropped on Korea was four times greater than the tonnage dropped on Japan in the US air raids of 1945, and destroyed most of North Korea’s major cities and thousands of its villages. The tonnage dropped on the three countries of Indochina was forty times greater than the tonnage dropped on Japan. The death tolls in both Korea and Indochina ran into the millions.

Here is where double standards enter the picture.

This routine US targeting of civilian populations between the 1940s and early 1970s amounted to state-sanctioned terror bombing aimed at destroying enemy morale. Although such frank labeling can be found in internal documents, it usually has been taboo in pro-American public commentary. After September 11, in any case, these precedents were thoroughly scrubbed from memory.

“Terror bombing” has been redefined to now mean attacks by “non-state actors” motivated primarily by Islamist fundamentalism. “Civilized” nations and cultures, the story goes, do not engage in such atrocious behavior. […]

Nuclear weapons were removed from Okinawa after 1972, and the former US and Soviet nuclear arsenals have been substantially reduced since the collapse of the USSR. Nonetheless, today’s US and Russian arsenals are still capable of destroying the world many times over, and US nuclear strategy still explicitly targets a considerable range of potential adversaries. (In 2001, under President George W. Bush, these included China, Russia, Iraq, Iran, North Korea, Syria, and Libya.)

Nuclear proliferation has spread to nine nations, and over forty other countries including Japan remain what experts call “nuclear capable states.” When Barack Obama became president in 2009, there were high hopes he might lead the way to eliminating nuclear weapons entirely. Instead, before leaving office his administration adopted an alarming policy of “nuclear modernization” that can only stimulate other nuclear nations to follow suit.

There are dynamics at work here that go beyond rational responses to perceived threats. Where the United States is concerned, obsession with absolute military supremacy is inherent in the DNA of the postwar state. After the Cold War ended, US strategic planners sometimes referred to this as the necessity of maintaining “technological asymmetry.” Beginning in the mid 1990s, the Joint Chiefs of Staff reformulated their mission as maintaining “full spectrum dominance.”

This envisioned domination now extends beyond the traditional domains of land, sea, and air power, the Joint Chiefs emphasized, to include space and cyberspace as well.