Fantasyland, An American Tradition

“The American experiment, the original embodiment of the great Enlightenment idea of intellectual freedom, every individual free to believe anything she wishes, has metastasized out of control. From the start, our ultra-individualism was attached to epic dreams, sometimes epic fantasies—every American one of God’s chosen people building a custom-made utopia, each of us free to reinvent himself by imagination and will. In America those more exciting parts of the Enlightenment idea have swamped the sober, rational, empirical parts.”
~ Kurt Andersen, Fantasyland

It’s hard to have public debate in the United States for a number of reasons. The most basic reason is that Americans are severely uninformed and disinformed. We also tend to lack a larger context for knowledge. Historical amnesia is rampant and scientific literacy is limited, exacerbated by centuries-old strains of anti-intellectualism and dogmatic idealism, hyper-individualism and sectarian groupthink, public distrust and authoritarian demagoguery.

This doesn’t seem as common elsewhere. Part of the reason is that Americans are less aware of and informed about other countries than the citizens of other countries are about the United States. Living anywhere else in the world, it is nearly impossible not to know in great detail about the United States and the other Western powers, as the entire world cannot escape influences that cast a long shadow: colonial imperialism, neoliberal globalization, transnational corporations, mass media, monocultural dominance, soft power, international propaganda campaigns during the Cold War, military interventionism, etc. The rest of the world can’t afford the luxury of ignorance that Americans enjoy.

Earlier in the last century, when the United States was a rising global superpower competing against other rising global superpowers, the US was known for having one of the better education systems in the world. International competition motivated us to invest in education. Now we are famous for how poorly recent generations of students compare with their peers in many other developed countries. But even the brief moment of seeming American greatness following World War II might have had more to do with the wide-scale devastation of Europe, a temporary lowering of other developed countries, than with a vast improvement in the United States.

There has also been a failure of big biz mass media to inform the public, and the continuing oligopolistic consolidation of corporate media into a few hands has not allowed a competitive free market to force corporations to offer something better. On top of that, Americans are one of the most propagandized and indoctrinated populations on the planet, with only a few comparable countries such as China and Russia exceeding us in this area.

See how the near unanimity of the American mass media was able, by way of beating the war drum, to change majority public opinion from being against the Iraq War to being in support of it. It just so happens that the parent companies of most of the corporate media, with ties to the main political parties and the military-industrial complex, profit immensely from the endless wars of the war state.

Corporate media is in the business of making money which means selling a product. In late stage capitalism, all of media is entertainment and news media is infotainment. Even the viewers are sold as a product to advertisers. There is no profit in offering a public service to inform the citizenry and create the conditions for informed public debate. As part of consumerist society, we consume as we are consumed by endless fantasies, just-so stories, comforting lies, simplistic narratives, and political spectacle.

This is a dark truth that should concern and scare Americans. But that would require them to be informed first. There is the rub.

Every public debate in the United States begins with mainstream framing. It can take hours of interacting with a typical American to get them even to acknowledge their lack of knowledge, assuming they have the intellectual humility that makes that likely. Americans are so uninformed and misinformed that they don’t realize they are ignorant, so indoctrinated that they don’t realize how much their minds are manipulated and saturated in bullshit (I speak from the expertise of being an American who has been woefully ignorant for most of my life). Simply getting to the level of knowledge where debate is even within the realm of possibility is itself an almost impossible task. To say it is frustrating is an extreme understatement.

Consider how most Americans “know” that tough-on-crime laws, stop-and-frisk, broken windows policing, heavy policing, and mass incarceration were the cause of decreased crime. How do they know? Because decades of political rhetoric and media narratives have told them so. Just as various authority figures in government and media told them, implied, or remained silent while others pushed the lies that the 9/11 terrorist attack was somehow connected to Iraq, which supposedly had weapons of mass destruction, even though the US intelligence agencies and foreign governments at the time knew these were lies.

Sure, you can look to alternative media for regular reporting of different information that undermines and disproves these beliefs. But few Americans get much if any of their news from alternative media. There have been at least hundreds of high-quality scientific studies, careful analyses, and scholarly books that have come out since the violent crime decline began. This information, however, is almost entirely unknown to the average American citizen and, one suspects, largely unknown to the average American mainstream news reporter, media personality, talking head, pundit, think tank hack, and politician.

That isn’t to say there isn’t ignorance found in other populations as well. Having been in the online world since the early aughts, I’ve met and talked with many people from other countries, and admittedly some of them are less than perfectly informed. Still, the level of ignorance in the United States is unique, at least in the Western world.

That much can’t be doubted. Other serious thinkers might have differing explanations for why the US has diverged so greatly from much of the rest of the world, from its level of education to its rate of violence. But one way or another, it needs to be explained in the hope of finding a remedy. Sadly, even if we could agree on a solution, those in power benefit too greatly from the ongoing state of an easily manipulated citizenry that lacks knowledge and critical thinking skills.

This isn’t merely an attack on low-information voters and right-wing nut jobs. Even in dealing with highly educated Americans among the liberal class, I rarely come across someone who is deeply and widely informed across various major topics of public concern.

American society is highly insular. We Americans are not only disconnected from the rest of the world but disconnected from each other. And so we have little sense of what is going on outside of the narrow constraints of our neighborhoods, communities, workplaces, social networks, and echo chambers. The United States is psychologically and geographically segregated into separate reality tunnel enclaves defined by region and residency, education and class, race and religion, politics and media.

It’s because we so rarely step outside of our respective worlds that we so rarely realize how little we know and how much of what we think we know is not true. Most of us live in neighborhoods, go to churches and stores, attend or send our kids to schools, work and socialize with people who are exactly like ourselves. They share our beliefs and values, our talking points and political persuasion, our biases and prejudices, our social and class position. We are hermetically sealed within our safe walled-in social identities. Nothing can reach us, threaten us, or change us.

That is until something happens like Donald Trump being elected. Then there is a panic about what has become of America in this post-fact age. The sad reality, however, is America has always been this way. It’s just finally getting to a point where it’s harder to ignore and that potential for public awakening offers some hope.

* * *

Fantasyland:
How America Went Haywire

by Kurt Andersen
pp. 10-14

Why are we like this?

. . . The short answer is because we’re Americans, because being American means we can believe any damn thing we want, that our beliefs are equal or superior to anyone else’s, experts be damned. Once people commit to that approach, the world turns inside out, and no cause-and-effect connection is fixed. The credible becomes incredible and the incredible credible.

The word mainstream has recently become a pejorative, shorthand for bias, lies, oppression by the elites. Yet that hated Establishment, the institutions and forces that once kept us from overdoing the flagrantly untrue or absurd—media, academia, politics, government, corporate America, professional associations, respectable opinion in the aggregate—has enabled and encouraged every species of fantasy over the last few decades.

A senior physician at one of America’s most prestigious university hospitals promotes miracle cures on his daily TV show. Major cable channels air documentaries treating mermaids, monsters, ghosts, and angels as real. A CNN anchor speculated on the air that the disappearance of a Malaysian airliner was a supernatural event. State legislatures and one of our two big political parties pass resolutions to resist the imaginary impositions of a New World Order and Islamic law. When a political scientist attacks the idea that “there is some ‘public’ that shares a notion of reality, a concept of reason, and a set of criteria by which claims to reason and rationality are judged,” colleagues just nod and grant tenure. A white woman felt black, pretended to be, and under those fantasy auspices became an NAACP official—and then, busted, said, “It’s not a costume…not something that I can put on and take off anymore. I wouldn’t say I’m African American, but I would say I’m black.” Bill Gates’s foundation has funded an institute devoted to creationist pseudoscience. Despite his nonstop lies and obvious fantasies—rather, because of them—Donald Trump was elected president. The old fringes have been folded into the new center. The irrational has become respectable and often unstoppable. As particular fantasies get traction and become contagious, other fantasists are encouraged by a cascade of out-of-control tolerance. It’s a kind of twisted Golden Rule unconsciously followed: If those people believe that, then certainly we can believe this.

Our whole social environment and each of its overlapping parts—cultural, religious, political, intellectual, psychological—have become conducive to spectacular fallacy and make-believe. There are many slippery slopes, leading in various directions to other exciting nonsense. During the last several decades, those naturally slippery slopes have been turned into a colossal and permanent complex of interconnected, crisscrossing bobsled tracks with no easy exit. Voilà: Fantasyland. . . .

When John Adams said in the 1700s that “facts are stubborn things,” the overriding American principle of personal freedom was not yet enshrined in the Declaration or the Constitution, and the United States of America was itself still a dream. Two and a half centuries later the nation Adams cofounded has become a majority-rule de facto refutation of his truism: “our wishes, our inclinations” and “the dictates of our passions” now apparently do “alter the state of facts and evidence,” because extreme cognitive liberty and the pursuit of happiness rule.

This is not unique to America, people treating real life as fantasy and vice versa, and taking preposterous ideas seriously. We’re just uniquely immersed. In the developed world, our predilection is extreme, distinctly different in the breadth and depth of our embrace of fantasies of many different kinds. Sure, the physician whose fraudulent research launched the antivaccine movement was a Brit, and young Japanese otaku invented cosplay, dressing up as fantasy characters. And while there are believers in flamboyant supernaturalism and prophecy and religious pseudoscience in other developed countries, nowhere else in the rich world are such beliefs central to the self-identities of so many people. We are Fantasyland’s global crucible and epicenter.

This is American exceptionalism in the twenty-first century. America has always been a one-of-a-kind place. Our singularity is different now. We’re still rich and free, still more influential and powerful than any nation, practically a synonym for developed country. But at the same time, our drift toward credulity, doing our own thing, and having an altogether uncertain grip on reality has overwhelmed our other exceptional national traits and turned us into a less-developed country as well.

People tend to regard the Trump moment—this post-truth, alternative facts moment—as some inexplicable and crazy new American phenomenon. In fact, what’s happening is just the ultimate extrapolation and expression of attitudes and instincts that have made America exceptional for its entire history—and really, from its prehistory. . . .

America was created by true believers and passionate dreamers, by hucksters and their suckers—which over the course of four centuries has made us susceptible to fantasy, as epitomized by everything from Salem hunting witches to Joseph Smith creating Mormonism, from P. T. Barnum to Henry David Thoreau to speaking in tongues, from Hollywood to Scientology to conspiracy theories, from Walt Disney to Billy Graham to Ronald Reagan to Oprah Winfrey to Donald Trump. In other words: mix epic individualism with extreme religion; mix show business with everything else; let all that steep and simmer for a few centuries; run it through the anything-goes 1960s and the Internet age; the result is the America we inhabit today, where reality and fantasy are weirdly and dangerously blurred and commingled.

I hope we’re only on a long temporary detour, that we’ll manage somehow to get back on track. If we’re on a bender, suffering the effects of guzzling too much fantasy cocktail for too long, if that’s why we’re stumbling, manic and hysterical, mightn’t we somehow sober up and recover? You would think. But first you need to understand how deeply this tendency has been encoded in our national DNA.

Fake News: It’s as American as George Washington’s Cherry Tree
by Hanna Rosin

Fake news. Post-truth. Alternative facts. For Andersen, these are not momentary perversions but habits baked into our DNA, the ultimate expressions of attitudes “that have made America exceptional for its entire history.” The country’s initial devotion to religious and intellectual freedom, Andersen argues, has over the centuries morphed into a fierce entitlement to custom-made reality. So your right to believe in angels and your neighbor’s right to believe in U.F.O.s and Rachel Dolezal’s right to believe she is black lead naturally to our president’s right to insist that his crowds were bigger.

Andersen’s history begins at the beginning, with the first comforting lie we tell ourselves. Each year we teach our children about Pilgrims, those gentle robed creatures who landed at Plymouth Rock. But our real progenitors were the Puritans, who passed the weeks on the trans-Atlantic voyage preaching about the end times and who, when they arrived, vowed to hang any Quaker or Catholic who landed on their shores. They were zealots and also well-educated British gentlemen, which set the tone for what Andersen identifies as a distinctly American endeavor: propping up magical thinking with elaborate scientific proof.

While Newton and Locke were ushering in an Age of Reason in Europe, over in America unreason was taking new seductive forms. A series of mystic visionaries were planting the seeds of extreme entitlement, teaching Americans that they didn’t have to study any book or old English theologian to know what to think, that whatever they felt to be true was true. In Andersen’s telling, you can easily trace the line from the self-appointed 17th-century prophet Anne Hutchinson to Kanye West: She was, he writes, uniquely American “because she was so confident in herself, in her intuitions and idiosyncratic, subjective understanding of reality,” a total stranger to self-doubt.

What happens next in American history, according to Andersen, happens without malevolence, or even intention. Our national character gels into one that’s distinctly comfortable fogging up the boundary between fantasy and reality in nearly every realm. As soon as George Washington dies fake news is born — the story about the cherry tree, or his kneeling in prayer at Valley Forge. Enterprising businessmen quickly figure out ways to make money off the Americans who gleefully embrace untruths.


Cultural Body-Mind

Daniel Everett is an expert on the Pirahã, although he has studied other cultures. It’s unsurprising then to find him use the same example in different books. One particular example (seen below) is about bodily form. I bring it up because it contradicts much of the right-wing and reactionary ideology found in genetic determinism, race realism, evolutionary psychology, and present human biodiversity (as opposed to the earlier HBD theory originated by Jonathan Marks).

From the second book below, the excerpt is part of a larger section where Everett responded to the evolutionary psychologist John Tooby, the latter arguing that there is no such thing as ‘culture’ and hence everything is genetic or otherwise biological. Everett’s use of the dark matter of the mind is his way of attempting to get at a more deeply complex view. This dark matter is of the mind but also of the body.

* * *

How Language Began:
The Story of Humanity’s Greatest Invention

by Daniel L. Everett
pp. 220-221

Culture, patterns of being – such as eating, sleeping, thinking and posture – have been cultivated. A Dutch individual will be unlike the Belgian, the British, the Japanese, or the Navajo, because of the way that their minds have been cultivated – because of the roles they play in a particular set of values and because of how they define, live out and prioritise these values, the roles of individuals in a society and the knowledge they have acquired.

It would be worth exploring further just how understanding language and culture together can enable us better to understand each. Such an understanding would also help to clarify how new languages or dialects or any other variants of speech come about. I think that this principle ‘you talk like who you talk with’ represents all human behaviour. We also eat like who we eat with, think like those we think with, etc. We take on a wide range of shared attributes – our associations shape how we live and behave and appear – our phenotype. Culture affects our gestures and our talk. It can even affect our bodies. Early American anthropologist Franz Boas studied in detail the relationship between environment, culture and bodily form. Boas made a solid case that human body types are highly plastic and change to adapt to local environmental forces, both ecological and cultural.

Less industrialised cultures show biology-culture connections. Among the Pirahã, facial features range impressionistically from slightly Negroid to East Asian, to Native American. Differences between villages or families may have a biological basis, originating in different tribes merging over the last 200 years. One sizeable group of Pirahãs (perhaps thirty to forty) – usually found occupying a single village – are descendants of the Torá, a Chapakuran-speaking group that emigrated to the Maici-Marmelos rivers as long as two centuries ago. Even today Brazilians refer to this group as Torá, though the Pirahãs refer to them as Pirahãs. They are culturally and linguistically fully integrated into the Pirahãs. Their facial features are somewhat different – broader noses, some with epicanthic folds, large foreheads – giving an overall impression of similarity to East Asian features. ‡ Yet body dimensions across all Pirahãs are constant. Men’s waists are, or were when I worked with them, uniformly 27 inches (68 cm), their average height 5 feet 2 inches (157.5 cm) and their average weight 55 kilos (121 pounds). The Pirahã phenotypes are similar not because all Pirahãs necessarily share a single genotype, but because they share a culture, including values, knowledge of what to eat and values about how much to eat, when to eat and the like.

These examples show that even the body does not escape our earlier observation that studies of culture and human social behaviour can be summed up in the slogan that ‘you talk like who you talk with’ or ‘grow like who you grow with’. And the same would have held for all our ancestors, even erectus.

Dark Matter of the Mind:
The Culturally Articulated Unconscious

by Daniel L. Everett
Kindle Locations 1499-1576

Thus while Tooby may be absolutely right that to have meaning, “culture” must be implemented in individual minds, this is no indictment of the concept. In fact, this requirement has long been insisted on by careful students of culture, such as Sapir. Yet unlike, say, Sapir, Tooby has no account of how individual minds— like ants in a colony or neurons in a brain or cells in a body— can form a larger entity emerging from multi-individual sets of knowledge, values, and roles. His own nativist views offer little insight into the unique “unconscious patterning of society” (to paraphrase Sapir) that establishes the “social set” to which individuals belong.

The idea of culture, after all, is just that certain patterns of being— eating, sleeping, thinking, posture, and so forth— have been cultivated and that minds arising from one such “field” will not be like minds cultivated in another “field.” The Dutch individual will be unlike the Belgian, the British, the Japanese, or the Navajo, because of the way that his or her mind has been cultivated— because of the roles he or she plays in a particular value grouping, because of the ranking of values that he or she has come to share, and so on.

We must be clear, of course, that the idea of “cultivation” we are speaking of here is not merely of minds, but of entire individuals— their minds a way of talking about their bodies. From the earliest work on ethnography in the US, for example, Boas showed how cultures affect even body shape. And body shape is a good indication that it is not merely cognition that is effected and affected by culture. The uses, experiences, emotions, senses, and social engagements of our bodies forge the patterns of thought we call mind. […]

Exploring this idea that understanding language can help us understand culture, consider how linguists account for the rise of languages, dialects, and all other local variants of speech. Part of their account is captured in linguistic truism that “you talk like who you talk with.” And, I argue, this principle actually impinges upon all human behavior. We not only talk like who we talk with, but we also eat like who we eat with, think like those we think with, and so on. We take on a wide range of shared attributes; our associations shape how we live and behave and appear— our phenotype. Culture can affect our gestures and many other aspects of our talk. Boas (1912a, 1912b) takes up the issue of environment, culture, and bodily form. He provides extensive evidence that human body phenotypes are highly plastic and subject to nongenetic local environmental forces (whether dietary, climatological, or social). Had Boas lived later, he might have studied a very clear and dramatic case; namely, the body height of Dutch citizens before and after World War II. This example is worth a close look because it shows that bodies— like behaviors and beliefs— are cultural products and shapers simultaneously.

The curious case of the Netherlanders fascinates me. The Dutch went from among the shortest peoples of Europe to the tallest in the world in just over one century. One account simplistically links the growth in Dutch height with the change in political system (Olson 2014): “The Dutch growth spurt of the mid-19th century coincided with the establishment of the first liberal democracy. Before this time, the Netherlands had grown rich off its colonies but the wealth had stayed in the hands of the elite. After this time, the wealth began to trickle down to all levels of society, the average income went up and so did the height.” Tempting as this single account may be, there were undoubtedly other factors involved, including gene flow and sexual selection between Dutch and other (mainly European) populations, that contribute to explain European body shape relative to the Dutch. But democracy, a new political change from strengthened and enforced cultural values, is a crucial component of the change in the average height of the Dutch, even though the Dutch genotype has not changed significantly in the past two hundred years. For example, consider figures 2.1 and 2.2. In 1825, US male median height was roughly ten centimeters (roughly four inches) taller than the average Dutch. In the 1850s, the median heights of most males in Europe and the USA were lowered. But then around 1900, they begin to rise again. Dutch male median height lagged behind that of most of the world until the late ’50s and early ’60s, when it began to rise at a faster rate than all other nations represented in the chart. By 1975 the Dutch were taller than Americans. Today, the median Dutch male height (183 cm, or roughly just above six feet) is approximately three inches more than the median American male height (177 cm, or roughly five ten). Thus an apparent biological change turns out to be largely a cultural phenomenon.

To see this culture-body connection even more clearly, consider figure 2.2. In this chart, the correlation between wealth and height emerges clearly (not forgetting that the primary determiner of height is the genome). As wealth grew, so did men (and women). This wasn’t matched in the US, however, even though wealth also grew in the US (precise figures are unnecessary). What emerges from this is that Dutch genes are implicated in the Dutch height transformation, from below average to the tallest people in the world. And yet the genes had to await the right cultural conditions before they could be so dramatically expressed. Other cultural differences that contribute to height increases are: (i) economic (e.g., “white collar”) background; (ii) size of family (more children, shorter children); (iii) literacy of the child’s mother (literate mothers provide better diets); (iv) place of residence (residents of agricultural areas tend to be taller than those in industrial environments— better and more plentiful food); and so on (Khazan 2014). Obviously, these factors all have to do with food access. But looked at from a broader angle, food access is clearly a function of values, knowledge, and social roles— that is, culture.

Just as with the Dutch, less-industrialized cultures show culture-body connections. For example, Pirahã phenotype is also subject to change. Facial features among the Pirahãs range impressionistically from slightly Negroid to East Asian to American Indian (to use terms from physical anthropology). Phenotypical differences between villages or families seem to have a biological basis (though no genetic tests have been conducted). This would be due in part to the fact Pirahã women have trysts with various non-Pirahã visitors (mainly river traders and their crews, but also government workers and contract employees on health assistance assignments, demarcating the Pirahã reservation, etc.). The genetic differences are also partly historical. One sizeable group of Pirahãs (perhaps thirty to forty)— usually found occupying a single village— are descendants of the Torá, a Chapakuran-speaking group that emigrated to the Maici-Marmelos rivers as long as two hundred years ago. Even today Brazilians refer to this group as Torá, though the Pirahãs refer to them as Pirahãs. They are culturally and linguistically fully integrated into the Pirahãs. Their facial features are somewhat different— broader noses; some with epicanthic folds; large foreheads— giving an overall impression of similarity to Cambodian features. This and other evidence show us that the Pirahã gene pool is not closed. 4 Yet body dimensions across all Pirahãs are constant. Men’s waists are or were uniformly 83 centimeters (about 32.5 inches), their average height 157.5 centimeters (five two), and their average weight 55 kilos (about 121 pounds).

I learned about the uniformity in these measurements over the past several decades as I have taken Pirahã men, women, and children to stores in nearby towns to purchase Western clothes, when they came out of their villages for medical help. (The Pirahãs always asked that I purchase Brazilian clothes for them so that they would not attract unnecessary stares and comments.) Thus I learned that the measurements for men were nearly identical. Biology alone cannot account for this homogeneity of body form; culture is implicated as well. For example, Pirahãs raised since infancy outside the village are somewhat taller and much heavier than Pirahãs raised in their culture and communities. Even the body does not escape our earlier observation that studies of culture and human social behavior can be summed up in the slogan that “you talk like who you talk with” or “grow like who you grow with.”


Connecting the Dots of Violence

When talking to people or reading articles, alternative viewpoints and interpretations often pop up in my mind. It’s easy for me to see multiple perspectives simultaneously, to hold multiple ideas. I have a creative mind, but I’m hardly a genius. So, why does this ability seem so rare?

The lack of this ability is not simply a lack of knowledge. I spent the first half of my life in an overwhelming state of ignorance because of inferior public education, exacerbated by a learning disability and depression. But I always had the ability of divergent thinking. It’s just hard to do much with divergent thinking without greater knowledge to work with. I’ve since then remedied my state of ignorance with an extensive program of self-education.

I still don’t know exactly what this ability to see what others don’t see amounts to. There is an odd disconnect I regularly come across, even among the well educated. I encountered a perfect example of this from Yes! Magazine. It’s an article by Mike Males, Gun Violence Has Dropped Dramatically in 3 States With Very Different Gun Laws.

In reading that article, I immediately noticed the lack of any mention of lead toxicity. Then I went to the comments section and saw other people noticed this as well. The divergent thinking it takes to make this connection doesn’t require all that much education and brain power. I’m not particularly special in seeing what the author didn’t see. What is strange is precisely that the author didn’t see it, that the same would be true for so many like him. It is strange because the author isn’t some random person opinionating on the internet.

This became even stranger when I looked into Mike Males’ previous writing elsewhere. In the past, he himself had made this connection between violent crime and lead toxicity. Yet somehow the connection slipped from his mind in writing this article. This more recent article was in response to an event, the Parkland school shooting in Florida. And the author seems to have gotten caught up in the short term memory of the news cycle, not only unable to connect it to other data but failing to connect it to his own previous writing on that data. Maybe it shows the power of context-dependent memory. The school shooting was immediately put into the context of gun violence and so the framing elicited certain ways of thinking while excluding others. Like so many others, the author got pulled into the media hype of the moment, entirely forgetting what he otherwise would have considered.

This is how people can simultaneously know and not know all kinds of things. The human mind is built on vast disconnections, maybe because there has been little evolutionary advantage to constantly perceive larger patterns of causation beyond immediate situations. I’m not entirely sure what to make of this. It’s all too common. The thing is, when such a disconnect happens, the person is unaware of it — we don’t know what we don’t know and, as bizarre as it sounds, sometimes we don’t even know what we do know. So, even if I’m better than average at divergent thinking, there is no doubt that in other areas I too demonstrate this same cognitive limitation. It’s hard to see what doesn’t fit into our preconception, our worldview.

For whatever reason, lead toxicity has struggled to become included within public debate and political framing. Lead toxicity doesn’t fit into familiar narratives and the dominant paradigm, specifically in terms of a hyper-individualistic society. Even mental health tends to fall into this attitude of emphasizing the individual level, such as how the signs of mental illness could have been detected so that intervention could have stopped an individual from committing mass murder. It’s easier to talk about someone being crazy and doing crazy things than to question what caused them to become that way, be it toxicity or something else.

As such, Males’ article focuses narrowly without even entertaining fundamental causes, not limited to his overlooking lead toxicity. This is odd. We already know so much about what causes violence. The author himself has written multiple times on the topic, specifically in his professional capacity as a Senior Research Fellow at the Center on Juvenile and Criminal Justice (CJCJ). It’s his job to look for explanations and to communicate them, having written several hundred articles for CJCJ alone.

The human mind tends to go straight to the obvious, that is to say what is perceived as obvious within conventional thought. If the problem is gun violence, then the solution is gun control. Like most Americans (and increasingly so), I support more effective gun control. Still, that is merely dealing with the symptoms and doesn’t explain why someone wants to kill others. The views of the American public, though, don’t stop there. What the majority blames mass gun violence on is mental illness, a rather nebulous explanation. Mental illness also is a symptom.

That is what stands out about the omission I’m discussing here. Lead toxicity is one of the most strongly proven causes of neurocognitive problems: stunted brain development, lowered IQ, learning disabilities, autism and Asperger’s, ADHD, depression, impulsivity, nervousness, irritability, anger, aggression, etc. All the heavy metals mess people up in the head, along with causing physical ailments such as hearing impairment, asthma, obesity, kidney failure, and much else. And that is talking about only one toxin among many, mercury being another widespread pollutant, but there are many beyond that — this being directly relevant to the issue of violent behavior and crime, such as the high levels of toxins found in mass murderers:

“Three studies in the California prison system found those in prison for violent activity had significantly higher levels of hair manganese than controls, and studies of an area in Australia with much higher levels of violence as well as autopsies of several mass-murderers also found high levels of manganese to be a common factor. Such violent behavior has long been known in those with high manganese exposure. Other studies in the California prison and juvenile justice systems found that those with 5 or more essential mineral imbalances were 90% more likely to be violent and 50% more likely to be violent with two or more mineral imbalances. A study analyzing hair of 28 mass-murderers found that all had high metals and abnormal essential mineral levels.”

(See also: Lead was scourge before and after Beethoven by Kristina R. Anderson; Violent Crime, Hyperactivity and Metal Imbalance by Neil Ward; The Seeds that Give Birth to Terrorism by Kimberly Key; and An Updated Lead-Crime Roundup for 2018 by Kevin Drum)

Besides toxins, other factors have also been seriously studied. For example, high inequality is strongly correlated to increased mental illness rates along with aggressive, risky and other harmful behaviors (as written about in Keith Payne’s The Broken Ladder; an excerpt can be found at the end of this post). And indeed, even as lead toxicity has decreased overall (while remaining a severe problem among the poor), inequality has worsened.

There are multiple omissions going on here. And they are related. Where there are large disparities of wealth, there are also large disparities of health. Because of environmental classism and racism, toxic dumps are more likely to be located in poor and minority communities, along with the problem of old housing with lead paint found where poverty is concentrated, all of it related to a long history of economic and racial segregation. And I would point out that the evidence supports that, along with inequality, segregation creates a culture of distrust — as Eric Uslaner concluded: “It wasn’t diversity but segregation that led to less trust” (Segregation and Mistrust). In post-colonial countries like the United States, inequality and segregation go hand in hand, built on a socioeconomic system of ethnic/racial castes and a permanent underclass that has developed over several centuries. The fact that these are the normal conditions of our country makes it all the harder for someone born here to fully sense their enormity. It’s simply the world we Americans have always known — it is our shared reality, rarely perceived for what it is and even more rarely interrogated.

These are far from being problems limited to those on the bottom of society. Lead toxicity ends up impacting a large part of the population. In reference to serious health concerns, Mark Hyman wrote that “nearly 40 percent of all Americans are estimated to have blood levels of lead high enough to cause these problems” (Why Lead Poisoning May Be Causing Your Health Problems). The same thing goes for high inequality that creates dysfunction all across society, increasing social and health problems even among the upper classes, not to mention breeding an atmosphere of conflict and divisiveness (see James Gilligan’s Preventing Violence; an excerpt can be found at the end of this post). Everyone is worse off amidst the unhappiness and dysfunction of a highly unequal society — not only from homicides but also suicides, along with addiction and stress-related diseases.

Let’s look at the facts. Besides lead toxicity remaining a major problem in poor communities and old industrial inner cities, the United States has one of the highest rates of inequality in the world and the highest in the Western world, and this problem has been worsening for decades, with present levels not seen since the Wall Street crash that led to the Great Depression. To go into the details, Florida has the fifth highest inequality in the United States, according to Mark Price and Estelle Sommeiller, who report that in Florida “all income growth between 2009 and 2011 accrued to the top 1 percent” (Economic Policy Institute). And Parkland, where the school shooting happened, specifically has high inequality: “The income inequality of Parkland, FL (measured using the Gini index) is 0.529 which is higher than the national average” (DATA USA).
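For anyone unfamiliar with the statistic being cited, a Gini index runs from 0 (everyone has the same income) to 1 (one household has everything). Here is a minimal sketch of the standard mean-absolute-difference formula behind that kind of number; the income list is invented purely for illustration and is not DATA USA’s underlying data.

```python
# Minimal sketch: computing a Gini index from a list of household incomes.
# The incomes below are invented for illustration; 0 = perfect equality,
# 1 = one household receives all income.

def gini(incomes):
    """Gini coefficient via the mean absolute difference formula."""
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute differences over all ordered pairs of incomes.
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

# A hypothetical town where most income accrues to a few households.
incomes = [12_000, 18_000, 25_000, 32_000, 40_000, 55_000,
           70_000, 90_000, 250_000, 900_000]
print(round(gini(incomes), 3))  # ≈ 0.675 for this made-up list
```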

In a sense, it is true that guns don’t kill people, that people kill people. But then again, it could be argued that people don’t kill people, that entire systemic problems trigger the violence that kills people, not even to mention the immensity of slow violence that kills people in even higher numbers. Lead toxicity is a great example of slow violence because of the roughly 20-year lag time before its effects can be fully measured, disallowing the direct observation and visceral experience of causality and consequence. The topic of violence is important taken on its own terms (e.g., eliminating gun sales and permits to those with a history of violence would decrease gun violence), but my concern is exploring why it is so difficult to talk about violence in a larger and more meaningful way.
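That roughly 20-year lag is also why the connection is so hard to see without stepping back from the data. As a rough illustration of how researchers look for it, here is a minimal sketch that offsets a lead-exposure series by about two decades and checks how it tracks a violent-crime series; both series are invented placeholders, not Nevin’s or Drum’s actual figures.

```python
# Minimal sketch of checking a lagged lead-crime relationship.
# All numbers are invented placeholders; real studies use measured
# gasoline-lead emissions and recorded violent-crime rates.

import statistics

LAG = 20  # approximate years between childhood exposure and peak offending age

# Hypothetical lead-exposure index by year of birth (1950-1959).
lead_by_year = {1950 + i: x for i, x in enumerate(
    [1.0, 1.2, 1.5, 1.9, 2.3, 2.1, 1.6, 1.1, 0.7, 0.4])}
# Hypothetical violent-crime index by calendar year (1970-1979).
crime_by_year = {1970 + i: x for i, x in enumerate(
    [3.1, 3.4, 3.9, 4.8, 5.6, 5.2, 4.1, 3.2, 2.4, 1.9])}

# Pair each birth cohort's exposure with the crime rate LAG years later.
pairs = [(lead_by_year[y], crime_by_year[y + LAG])
         for y in lead_by_year if y + LAG in crime_by_year]
xs, ys = zip(*pairs)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sum((x - mx) ** 2 for x in xs)
                  * sum((y - my) ** 2 for y in ys)) ** 0.5

# A strong correlation at the ~20-year offset is the pattern the lead-crime
# literature reports; it is suggestive, while the dose-response curves
# discussed below are what carry the causal argument.
print(round(pearson(xs, ys), 2))
```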

Lead toxicity is a great example for many reasons. It has been hard for advocates to get people to pay attention and take this seriously. Lead toxicity momentarily fell under the media spotlight with the Flint, Michigan case, but that was just one of thousands of places with such problems, many of them with far worse rates. As always, once the media’s short attention span turned to some new shiny object, the lead toxicity crisis was forgotten again, even as the poisoning continues. You can’t see it happening because it is always happening, an ever-present tragedy that even when known remains abstract data. It is in the background and so has become part of our normal experience, operating at a level outside of our awareness.

School shootings are better able to capture the public imagination and so make for compelling dramatic narratives that the media can easily spin. Unlike lead toxicity, school shootings and their victims aren’t invisible. Lead toxins are hidden in the soil of playgrounds and the bodies of children (and prisoners who, as disproportionate victims of lead toxicity, are literally hidden away), whereas a bullet leaves dead bodies, splattered blood, terrified parents, and crying students. Neither can inequality compete with such emotional imagery. People can understand poverty because you can see poor people and poor communities, but you can’t see the societal pattern of dysfunction that exists between the dynamics of extreme poverty and extreme wealth. It can’t be seen, can’t be touched or felt, can’t be concretely known in personal experience.

Whether lead toxicity or high inequality, it is yet more abstract data that never quite gets a toehold within the public mind and the moral imagination. Even for those who should know better, it’s difficult for them to put the pieces together.

* * *

Here is the comment I left at Mike Males’ article:

I was earlier noting that Mike Males doesn’t mention lead exposure/toxicity/poisoning. I’m used to this being ignored in articles like this. Still, it’s disappointing.

It is the single most well supported explanation that has been carefully studied for decades. And the same conclusions have been found in other countries. But for whatever reason, public debate has yet to fully embrace this evidence.

Out of curiosity, I decided to do a web search. Mike Males works for the Center on Juvenile and Criminal Justice. He writes articles there. I was able to find two articles where he directly and thoroughly discusses this topic:

He also mentions lead toxicity in passing in another article:

And Mike Males’ work gets referenced in a piece by Kevin Drum:

This makes it odd that he doesn’t even mention it in passing here in this article. It’s not because he doesn’t know about the evidence, as he has already written about it. So, what is the reason for not offering the one scientific theory that is most relevant to the data he shares?

This seems straightforward to me. Consider the details from the article.

“Over the last 25 years—though other time periods show similar results—New York, California, and Texas show massive declines in gun homicides, ones that far exceed those of any other state. These three states also show the country’s largest decreases in gun suicide and gun accident death rates.”

The specific states in question were among the most polluting and hence polluted states. This means they had high rates of lead toxicity. And that means they had the most room for improvement. It goes without saying that national regulations and local programs will have the greatest impact where there are the worst problems (similar to the reason, as studies show, it is easier to increase the IQ of the poor than the wealthy by improving basic conditions).

“These major states containing seven in 10 of the country’s largest cities once had gun homicide rates far above the national average; now, their rates are well below those elsewhere in the country.”

That is as obvious as obvious can be. Yeah, the largest cities are also the places with the largest concentrations of pollution. Hence, one would expect to find there the highest rates of lead toxicity and the largest improvements, and lead toxicity has been proven to directly correlate with violent crime rates (with causality demonstrated through the dose-response curve, the same methodology used to prove the efficacy of pharmaceuticals).

“The declines are most pronounced in urban young people.”

Once again, this is the complete opposite of surprising. It is exactly what we would expect. Urban areas have the heaviest and most concentrated vehicular traffic, along with the pollution that goes with it. And urban areas are often old industrial centers with a century of accumulated toxins in the soil, water, and elsewhere in the environment. These old urban areas are also where the older houses affordable for the poor are found, houses that unfortunately are more likely to have old lead paint chipping away and turning into dust.

So, problem solved. The great mystery is no more. You’re welcome.

“Congress passed the landmark Clean Air Act in 1970 and gave the newly-formed EPA the legal authority to regulate pollution from cars and other forms of transportation. EPA and the State of California have led the national effort to reduce vehicle pollution by adopting increasingly stringent standards.”

“The progress has been dramatic. For both children and adults, the number and severity of poisonings has declined. At the same time, blood lead testing rates have increased, especially in populations at high risk for lead poisoning. This public health success is due to a combination of factors, most notably commitment to lead poisoning prevention at the federal, state and city levels. New York City and New York State have implemented comprehensive policies and programs that support lead poisoning prevention. […]

“New York City’s progress in reducing childhood lead poisoning has been striking. Not only has the number of children with lead poisoning declined —a 68% drop from 2005 to 2012 — but the severity of poisonings has also declined. In 2005, there were 14 children newly identified with blood lead levels of 45 µg/dL and above, and in 2012 there were 5 children. At these levels, children require immediate medical intervention and may require hospitalization for chelation, a treatment that removes lead from the body.

“Forty years ago, tackling childhood lead poisoning seemed a daunting task. In 1970, when New York City established the Health Department’s Lead Poisoning Prevention Program, there were over 2,600 children identified with blood lead levels of 60 µg/dL or greater — levels today considered medical emergencies. Compared with other parts of the nation, New York City’s children were at higher risk for lead poisoning primarily due to the age of New York City’s housing stock, the prevalence of poverty and the associated deteriorated housing conditions. Older homes and apartments, especially those built before 1950, are most likely to contain lead­based paint. In New York City, more than 60% of the housing stock — around 2 million units — was built before 1950, compared with about 22% of housing nationwide.

“New York City banned the use of lead­based paint in residential buildings in 1960, but homes built before the ban may still have lead in older layers of paint. Lead dust hazards are created when housing is poorly maintained, with deteriorated and peeling lead paint, or when repair work in old housing is done unsafely. Young children living in such housing are especially at risk for lead poisoning. They are more likely to ingest lead dust because they crawl on the floor and put their hands and toys in their mouths.

“While lead paint hazards remain the primary source of lead poisoning in New York City children, the number and rate of newly identified cases and the associated blood lead levels have greatly declined.

“Strong Policies Aimed at Reducing Childhood Lead Exposure

“Declines in blood lead levels can be attributed largely to government regulations instituted in the 1960s, 1970s and 1980s that banned or limited the use of lead in gasoline, house paint, water pipes, solder for food cans and other consumer products. Abatement and remediation of lead­based paint hazards in housing, and increased consumer awareness of lead hazards have also contributed to lower blood lead levels.

“New York City developed strong policies to support lead poisoning prevention. Laws and regulations were adopted to prevent lead exposure before children are poisoned and to protect those with elevated blood lead levels from further exposure.”

“But if all of this solves one mystery, it shines a high-powered klieg light on another: Why has the lead/crime connection been almost completely ignored in the criminology community? In the two big books I mentioned earlier, one has no mention of lead at all and the other has a grand total of two passing references. Nevin calls it “exasperating” that crime researchers haven’t seriously engaged with lead, and Reyes told me that although the public health community was interested in her paper, criminologists have largely been AWOL. When I asked Sammy Zahran about the reaction to his paper with Howard Mielke on correlations between lead and crime at the city level, he just sighed. “I don’t think criminologists have even read it,” he said. All of this jibes with my own reporting. Before he died last year, James Q. Wilson—father of the broken-windows theory, and the dean of the criminology community—had begun to accept that lead probably played a meaningful role in the crime drop of the ’90s. But he was apparently an outlier. None of the criminology experts I contacted showed any interest in the lead hypothesis at all.

“Why not? Mark Kleiman, a public policy professor at the University of California-Los Angeles who has studied promising methods of controlling crime, suggests that because criminologists are basically sociologists, they look for sociological explanations, not medical ones. My own sense is that interest groups probably play a crucial role: Political conservatives want to blame the social upheaval of the ’60s for the rise in crime that followed. Police unions have reasons for crediting its decline to an increase in the number of cops. Prison guards like the idea that increased incarceration is the answer. Drug warriors want the story to be about drug policy. If the actual answer turns out to be lead poisoning, they all lose a big pillar of support for their pet issue. And while lead abatement could be big business for contractors and builders, for some reason their trade groups have never taken it seriously.

“More generally, we all have a deep stake in affirming the power of deliberate human action. When Reyes once presented her results to a conference of police chiefs, it was, unsurprisingly, a tough sell. “They want to think that what they do on a daily basis matters,” she says. “And it does.” But it may not matter as much as they think.”

* * *

The Broken Ladder:
How Inequality Affects the Way We Think, Live, and Die

by Keith Payne
pp. 69-80

How extensive are the effects of the fast-slow trade-off among humans? Psychology experiments suggest that they are much more prevalent than anyone previously suspected, influencing people’s behaviors and decisions in ways that have nothing to do with reproduction. Some of the most important now versus later trade-offs involve money. Financial advisers tell us that if we skip our daily latte and instead save that three dollars a day, we could increase our savings by more than a thousand dollars a year. But that means facing a daily choice: How much do I want a thousand dollars in the bank at the end of the year? And how great would a latte taste right now?

The same evaluations lurk behind larger life decisions. Do I invest time and money in going to college, hoping for a higher salary in the long run, or do I take a job that guarantees an income now? Do I work at a regular job and play by the rules, even if I will probably struggle financially all my life, or do I sell drugs? If I choose drugs, I might lose everything in the long run and end up broke, in jail, or dead. But I might make a lot of money today.

Even short-term feelings of affluence or poverty can make people more or less shortsighted. Recall from the earlier chapters that subjective sensations of poverty and plenty have powerful effects, and those are usually based on how we measure ourselves against other people. Psychologist Mitch Callan and colleagues combined these two principles and predicted that when people are made to feel poor, they will become myopic, taking whatever they can get immediately and ignoring the future. When they are made to feel rich, they would take the long view.

Their study began by asking research participants a long series of probing questions about their finances, their spending habits, and even their personality traits and personal tastes. They told participants that they needed all this detailed information because their computer program was going to calculate a personalized “Comparative Discretionary Income Index.” They were informed that the computer would give them a score that indicated how much money they had compared with other people who were similar to them in age, education level, personality traits, and so on. In reality, the computer program did none of that, but merely displayed a little flashing progress bar and the words “Calculating. Please wait . . .” Then it provided random feedback to participants, telling half that they had more money than most people like them, and the other half that they had less money than other people like them.

Next, participants were asked to make some financial decisions, and were offered a series of choices that would give them either smaller rewards received sooner or larger rewards received later. For example, they might be asked, “Would you rather have $100 today or $120 next week? How about $100 today or $150 next week?” After they answered many such questions, the researchers could calculate how much value participants placed on immediate rewards, and how much they were willing to wait for a better long-term payoff.

The study found that, when people felt poor, they tilted to the fast end of the fast-slow trade-off, preferring immediate gratification. But when they felt relatively rich, they took the long view. To underscore the point that this was not simply some abstract decision without consequences in the real world, the researchers performed the study again with a second group of participants. This time, instead of hypothetical choices, the participants were given twenty dollars and offered the chance to gamble with it. They could decline, pocket the money, and go home, or they could play a card game against the computer and take their chances, in which case they either would lose everything or might make much more money. When participants were made to feel relatively rich, 60 percent chose to gamble. When they were made to feel poor, the number rose to 88 percent. Feeling poor made people more willing to roll the dice.

The astonishing thing about these experiments was that it did not take an entire childhood spent in poverty or affluence to change people’s level of shortsightedness. Even the mere subjective feeling of being less well-off than others was sufficient to trigger the live fast, die young approach to life.

Nothing to Lose

Most of the drug-dealing gang members that Sudhir Venkatesh followed were earning the equivalent of minimum wage and living with their mothers. If they weren’t getting rich and the job was so dangerous, then why did they choose to do it? Because there were a few top gang members who were making several hundred thousand dollars a year. They made their wealth conspicuous by driving luxury cars and wearing expensive clothes and flashy jewelry. They traveled with entourages. The rank-and-file gang members did not look at one another’s lives and conclude that this was a terrible job. They looked instead at the top and imagined what they could be. Despite the fact that their odds of success were impossibly low, even the slim chance of making it big drove them to take outrageous risks.

The live fast, die young theory explains why people would focus on the here and now and neglect the future when conditions make them feel poor. But it does not tell the whole story. The research described in Chapter 2 revealed that rates of many health and social problems were higher, even among members of the middle class, in societies where there was more inequality. One of the puzzling aspects of the rapid rise of inequality over the past three decades is that almost all of the change in fortune has taken place at the top. The incomes of the poor and the middle class are not too different from where they were in 1980, once the numbers are adjusted for inflation. But the income and wealth of the top 1 percent have soared, and the gains of the top one-tenth of a percent have dwarfed even those of the top 1 percent. How are the gains of the superrich having harmful effects on the health and well-being of the rest of us? […]

As Cartar suspected, when the bees received bonus nectar, they played it safe and fed in the seablush fields. But when their nectar was removed, they headed straight for the dwarf huckleberry fields.

Calculating the best option in an uncertain environment is a complicated matter; even humans have a hard time with it. According to traditional economic theories, rational decision making means maximizing your payoffs. You can calculate your “expected utility” by multiplying the size of the reward by the likelihood of getting it. So, an option that gives you a 90 percent chance of winning $500 has a greater expected utility than an option that gives you a 40 percent chance of winning $1,000 ($500 × .90 = $450 as compared with $1,000 × .40 = $400). But the kind of decision making demonstrated by the bumblebees doesn’t necessarily line up well with the expected utility model. Neither, it turns out, do the risky decisions made by the many other species that also show the same tendency to take big risks when they are needy.

Humans are one of those species. Imagine what you would do if you owed a thousand dollars in rent that was due today or you would lose your home. In a gamble, would you take the 90 percent chance of winning $500, or the 40 percent chance of winning $1,000? Most people would opt for the smaller chance of getting the $1,000, because if they won, their need would be met. Although it is irrational from the expected utility perspective, it is rational in another sense, because meeting basic needs is sometimes more important than the mathematically best deal. The fact that we see the same pattern across animal species suggests that evolution has found need-based decision making to be adaptive, too. From the humble bumblebee, with its tiny brain, to people trying to make ends meet, we do not always seek to maximize our profits. Call it Mick Jagger logic: If we can’t always get what we want, we try to get what we need. Sometimes that means taking huge risks.
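The contrast between the expected-utility arithmetic and this need-based logic fits in a few lines of code. This is only an illustrative sketch built from the numbers in the two paragraphs above (the $500 and $1,000 gambles and the $1,000 rent); it is not drawn from any of the studies described.

# Expected utility: probability times payoff.
def expected_value(prob, amount):
    return prob * amount

# Need-based ("Mick Jagger") logic: what matters is the chance of ending up
# with at least what you need.
def chance_of_meeting_need(prob, amount, need):
    return prob if amount >= need else 0.0

safe = (0.90, 500)      # 90 percent chance of $500
risky = (0.40, 1000)    # 40 percent chance of $1,000
rent = 1000             # due today

print(expected_value(*safe), expected_value(*risky))                              # 450.0 400.0
print(chance_of_meeting_need(*safe, rent), chance_of_meeting_need(*risky, rent))  # 0.0 0.4

On expected value the safe option wins, 450 to 400; on the chance of covering the rent it loses, 0 percent to 40 percent, which is why someone in genuine need can rationally prefer the long shot.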

We saw in Chapter 2 that people judge what they need by making comparisons to others, and the impact of comparing to those at the top is much larger than comparing to those at the bottom. If rising inequality makes people feel that they need more, and higher levels of need lead to risky choices, it implies a fundamentally new relationship between inequality and risk: Regardless of whether you are poor or middle class, inequality itself might cause you to engage in riskier behavior. […]

People googling terms like “lottery tickets” and “payday loans,” for example, are probably already involved in some risky spending. To measure sexual riskiness, we counted searches for the morning-after pill and for STD testing. And to measure drug- and alcohol-related risks, we counted searches for how to get rid of a hangover and how to pass a drug test. Of course, a person might search for any of these terms for reasons unrelated to engaging in risky behaviors. But, on average, if there are more people involved in sex, drugs, and money risks, you would expect to find more of these searches.

Armed with billions of such data points from Google, we asked whether the states where people searched most often for those terms were also the states with higher levels of income inequality. To help reduce the impact of idiosyncrasies related to each search term, we averaged the six terms together into a general risk-taking index. Then we plotted that index against the degree of inequality in each state. The states with higher inequality had much higher risk taking, as estimated from their Google searches. This relationship remained strong after statistically adjusting for the average income in each state.
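For readers who want to see the shape of that analysis, here is a minimal sketch of the steps just described: standardize each search measure across states, average them into a single index, then regress the index on inequality while holding average income constant. The arrays are random placeholders rather than the actual Google, inequality, or income data, and the plain least-squares adjustment stands in for whatever specific controls the authors used.

import numpy as np

rng = np.random.default_rng(0)
n_states = 50
searches = rng.random((n_states, 6))   # placeholder volumes for the six risk-related terms
inequality = rng.random(n_states)      # placeholder state income-inequality measure
income = rng.random(n_states)          # placeholder state average income

# z-score each term across states, then average the six into one risk-taking index
z = (searches - searches.mean(axis=0)) / searches.std(axis=0)
risk_index = z.mean(axis=1)

# regress the index on inequality while statistically adjusting for average income
X = np.column_stack([np.ones(n_states), inequality, income])
coefs, *_ = np.linalg.lstsq(X, risk_index, rcond=None)
print(f"Inequality coefficient with income held constant: {coefs[1]:.3f}")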

If the index of risky googling tracks real-life risky behavior, then we would expect it to be associated with poor life outcomes. So we took our Google index and tested whether it could explain the link, reported in Chapter 2, between inequality and Richard Wilkinson and Kate Pickett’s index of ten major health and social problems. Indeed, the risky googling index was strongly correlated with the index of life problems. Using sophisticated statistical analyses, we found that inequality was a strong predictor of risk taking, which in turn was a strong predictor of health and social problems. These findings suggest that risky behavior is a pathway that helps explain the link between inequality and bad outcomes in everyday life. The evidence becomes much stronger still when we consider these correlations together with the evidence of cause and effect provided by the laboratory experiments.
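The mediation claim, that inequality predicts risk taking, which in turn predicts health and social problems, can also be sketched in a few lines. The code below is a simplified, Baron-Kenny-style illustration on fabricated numbers, not the authors' actual analyses, which are only described above as "sophisticated statistical analyses."

import numpy as np

rng = np.random.default_rng(1)
n = 50
inequality = rng.normal(size=n)
risk = 0.7 * inequality + rng.normal(scale=0.5, size=n)                    # inequality -> risk taking
problems = 0.6 * risk + 0.1 * inequality + rng.normal(scale=0.5, size=n)  # risk taking -> problems

def slopes(y, *predictors):
    # least-squares coefficients of y on the predictors (intercept dropped)
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total_effect = slopes(problems, inequality)[0]
direct_effect = slopes(problems, inequality, risk)[0]
print(f"Total effect {total_effect:.2f} vs direct effect {direct_effect:.2f}")
# The drop from total to direct is the portion of the inequality effect
# carried through risk taking, i.e., the mediated pathway.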

Experiments like the ones described in this chapter are essential for understanding the effects of inequality, because only experiments can separate the effects of the environment from individual differences in character traits. Surely there were some brilliant luminaries and some dullards in each experimental group. Surely there were some hearty souls endowed with great self-control, and some irresponsible slackers, too. Because they were assigned to the experimental groups at random, it is exceedingly unlikely that the groups differed consistently in their personalities or abilities. Instead, we can be confident that the differences we see are caused by the experimental factor, in this case making decisions in a context of high or low inequality. […]

Experiments are gentle reminders that, in the words of John Bradford, “There but for the grace of God go I.” If we deeply understand behavioral experiments, they make us humble. They challenge our assumption that we are always in control of our own successes and failures. They remind us that, like John Bradford, we are not simply the products of our thoughts, our plans, or our bootstraps.

These experiments suggest that any average person, thrust into these different situations, will start behaving differently. Imagine that you are an evil scientist with a giant research budget and no ethical review board. You decide to take ten thousand newborn babies and randomly assign them to be raised by families in a variety of places. You place some with affluent, well-educated parents in the suburbs of Atlanta. You place others with single mothers in inner-city Milwaukee, and so on. The studies we’ve looked at suggest that the environments you assign them to will have major effects on their futures. The children you assign to highly unequal places, like Texas, will have poorer outcomes than those you assign to more equal places, like Iowa, even though Texas and Iowa have about the same average income.

In part, this will occur because bad things are more likely to happen to them in unequal places. And in part, it will occur because the children raised in unequal places will behave differently. All of this can transpire even though the babies you are randomly assigning begin life with the same potential abilities and values.

pp. 116-121

If you look carefully at Figure 5.1, you’ll notice that the curve comparing different countries is bent. The relatively small income advantage that India has over Mozambique, for example, translates into much longer lives in India. Once countries reach the level of development of Chile or Costa Rica, something interesting happens: The curve flattens out. Very rich countries like the United States cease to have any life expectancy advantage over moderately rich countries like Bahrain or even Cuba. At a certain level of economic development, increases in average income stop mattering much.

But within a rich country, there is no bend; the relationship between money and longevity remains linear. If the relationship was driven by high mortality rates among the very poor, you would expect to see a bend. That is, you would expect dramatically shorter lives among the very poor, and then, once above the poverty line, additional income would have little effect. This curious absence of the bend in the line suggests that the link between money and health is not actually a reflection of poverty per se, at least not among economically developed countries. If it was extreme poverty driving the effect, then there would be a big spike in mortality among the very poorest and little difference between the middle- and highest-status groups.

The linear pattern in the British Civil Service study is also striking, because the subjects in this study all have decent government jobs and the salaries, health insurance, pensions, and other benefits that are associated with them. If you thought that elevated mortality rates were only a function of the desperately poor being unable to meet their basic needs, this study would disprove that, because it did not include any desperately poor subjects and still found elevated mortality among those with lower status.

Psychologist Nancy Adler and colleagues have found that where people place themselves on the Status Ladder is a better predictor of health than their actual income or education. In fact, in collaboration with Marmot, Adler’s team revisited the study of British civil servants and asked the research subjects to rate themselves on the ladder. Their subjective assessments of where they stood compared with others proved to be a better predictor of their health than their occupational status. Adler’s analyses suggest that occupational status shapes subjective status, and this subjective feeling of one’s standing, in turn, affects health.

If health and longevity in developed countries are more closely linked to relative comparisons than to income, then you would expect that societies with greater inequality would have poorer health. And, in fact, they do. Across the developed nations surveyed by Wilkinson and Pickett, those with greater income equality had longer life expectancies (see Figure 5.3). Likewise, in the United States, people who lived in states with greater income equality lived longer (see Figure 5.4). Both of these relationships remain once we statistically control for average income, which means that inequality in incomes, not just income itself, is responsible.

But how can something as abstract as inequality or social comparisons cause something as physical as health? Our emergency rooms are not filled with people dropping dead from acute cases of inequality. No, the pathways linking inequality to health can be traced through specific maladies, especially heart disease, cancer, diabetes, and health problems stemming from obesity. Abstract ideas that start as macroeconomic policies and social relationships somehow get expressed in the functioning of our cells.

To understand how that expression happens, we have to first realize that people from different walks of life die different kinds of deaths, in part because they live different kinds of lives. We saw in Chapter 2 that people in more unequal states and countries have poor outcomes on many health measures, including violence, infant mortality, obesity and diabetes, mental illness, and more. In Chapter 3 we learned that inequality leads people to take greater risks, and uncertain futures lead people to take an impulsive, live fast, die young approach to life. There is a clear tension between the temptation to enjoy immediate pleasures and the discipline of denying oneself for the benefit of long-term health. We saw, for example, that inequality was linked to risky behaviors. In places with extreme inequality, people are more likely to abuse drugs and alcohol, more likely to have unsafe sex, and so on. Other research suggests that living in a high-inequality state increases people’s likelihood of smoking, eating too much, and exercising too little.

Taken together, this evidence implies that inequality leads to illness and shorter lives in part because it gives rise to unhealthy behaviors. That conclusion has been very controversial, especially on the political left. Some argue that it blames the victim because it implies that the poor and those who live in high-inequality areas are partly responsible for their fates by making bad choices. But I don’t think it’s assigning blame to point out the obvious fact that health is affected by smoking, drinking too much, poor diet and exercise, and so on. It becomes a matter of blaming the victim only if you assume that these behaviors are exclusively the result of the weak characters of the less fortunate. On the contrary, we have seen plenty of evidence that poverty and inequality have effects on the thinking and decision making of people living in those conditions. If you or I were thrust into such situations, we might well start behaving in more unhealthy ways, too.

The link between inequality and unhealthy behaviors helps shed light on a surprising trend discovered in a 2015 paper by economists Anne Case and Angus Deaton. Death rates have been steadily declining in the United States and throughout the economically developed world for decades, but these authors noticed a glaring exception: Since the 1990s, the death rate for middle-aged white Americans has been rising. The increase is concentrated among men and whites without a college degree. The death rate for black Americans of the same age remains higher, but is trending slowly downward, like that of all other minority groups.

The wounds in this group seem to be largely self-inflicted. They are not dying from higher rates of heart disease or cancer. They are dying of cirrhosis of the liver, suicide, and a cycle of chronic pain and overdoses of opiates and painkillers.

The trend itself is striking because it speaks to the power of subjective social comparisons. This demographic group is dying of violated expectations. Although high school–educated whites make more money on average than similarly educated blacks, the whites expect more because of their history of privilege. Widening income inequality and stagnant social mobility, Case and Deaton suggest, mean that this generation is likely to be the first in American history that is not more affluent than its parents.

Unhealthy behaviors among those who feel left behind can explain part of the link between inequality and health, but only part. The best estimates have found that such behavior accounts for about one third of the association between inequality and health. Much of the rest is a function of how the body itself responds to crises. Just as our decisions and actions prioritize short-term gains over longer-term interests when in a crisis, the body has a sophisticated mechanism that adopts the same strategy. This crisis management system is specifically designed to save you now, even if it has to shorten your life to do so.

* * *

Preventing Violence
by James Gilligan
Kindle Locations 552-706

The Social Cause of Violence

In order to understand the spread of contagious disease so that one can prevent epidemics, it is just as important to know the vector by which the pathogenic organism that causes the disease is spread throughout the population as it is to identify the pathogen itself. In the nineteenth century, for example, the water supply and the sewer system were discovered to be vectors through which some diseases became epidemic. What is the vector by which shame, the pathogen that causes violence, is spread to its hosts, the people who succumb to the illness of violence? There is a great deal of evidence, which I will summarize here, that shame is spread via the social and economic system. This happens in two ways. The first is through what we might call the “vertical” division of the population into a hierarchical ranking of upper and lower status groups, chiefly classes, castes, and age groups, but also other means by which people are divided into in-groups and out-groups, the accepted and the rejected, the powerful and the weak, the rich and the poor, the honored and the dishonored. For people are shamed on a systematic, wholesale basis, and their vulnerability to feelings of humiliation is increased when they are assigned an inferior social or economic status; and the more inferior and humble it is, the more frequent and intense the feelings of shame, and the more frequent and intense the acts of violence. The second way is by what we could call the “horizontal” asymmetry of social roles, or gender roles, to which the two sexes are assigned in patriarchal cultures, one consequence of which is that men are shamed or honored for different and in some respects opposite behavior from that which brings shame or honor to women. That is, men are shamed for not being violent enough (called cowards or even shot as deserters), and are more honored the more violent they are (with medals, promotions, titles, and estates)—violence for men is successful as a strategy. Women, however, are shamed for being too active and aggressive (called bitches or unfeminine) and honored for being passive and submissive—violence is much less likely to protect them against shame.

Relative Poverty and Unemployment

The most powerful predictor of the homicide rate in comparisons of the different nations of the world, the different states in the United States, different counties, and different cities and census tracts, is the size of the disparities in income and wealth between the rich and the poor. Some three dozen studies, at least, have found statistically significant correlations between the degree of absolute as well as relative poverty and the incidence of homicide. Hsieh and Pugh in 1993 did a meta-analysis of thirty-four such studies and found strong statistical support for these findings, as have several other reviews of this literature: two on homicide by Smith and Zahn in 1999; Chasin in 1998; Short in 1997; James in 1995; and individual studies, such as Braithwaite in 1979 and Messner in 1980.

On a worldwide basis, the nations with the highest inequities in wealth and income, such as many Third World countries in Latin America, Africa, and Asia, have the highest homicide rates (and also the most collective or political violence). Among the developed nations, the United States has the highest inequities in wealth and income, and also has by far the highest homicide rates, five to ten times larger than the other First World nations, all of which have the lowest levels of inequity and relative poverty in the world, and the lowest homicide rates. Sweden and Japan, for example, have had the lowest degree of inequity in the world in recent years, according to the World Bank’s measures; but in fact, all the other countries of western Europe, including Ireland and the United Kingdom, as well as Canada, Australia, and New Zealand, have a much more equal sharing of their collective wealth and income than either the United States or virtually any of the Second or Third World countries, as well as the lowest murder rates.

Those are cross-sectional studies, which analyze the populations being studied at one point in time. Longitudinal studies find the same result: violence rates climb and fall over time as the disparity in income rises and decreases, both in the less violent and the more violent nations. For example, in England and Wales, as Figures 1 and 2 show, there was an almost perfect fit between the rise in several different measures of the size of the gap between the rich and the poor, and the number of serious crimes recorded by the police between 1950 and 1990. Figure 1 shows two measures of the gradual widening of income differences, which accelerated dramatically from 1984 and 1985. Figure 2 shows the increasing percentage of households and families living in relative poverty, a rate that has been particularly rapid since the late 1970s, and also the number of notifiable offences recorded by the police during the same years. As you can see, the increase in crime rates follows the increase in rates of relative poverty almost perfectly. As both inequality and crime accelerated their growth rates simultaneously, the annual increases in crime from one year to the next became larger than the total crime rate had been in the early 1950s. If we examine the rates for murder alone during the same period, as reported by the Home Office, we find the same pattern, namely a progression from a murder rate that averaged 0.6 per 100,000 between 1946 and 1970, to 0.9 from 1971–78, and then to an average of 1.1 between 1979 and 1997 (with a range of 1.0 to 1.3). To put it another way, murder rates of 1.2 and 1.3, the five highest levels since the end of World War II, were recorded in 1987, 1991, 1994, 1995 and 1997, all twice as high as the 1946–70 average.

The same correlation between violence and relative poverty has been found in the United States. The economist James Galbraith in Created Unequal (1997) has used inequity in wages as one measure of the size and history of income inequity between the rich and the poor from 1920 to 1992. If we correlate this with fluctuations in the American homicide rate during the same period, we find that both wage inequity and the homicide rate increased sharply in the slump of 1920–21, and remained at those historically high levels until the Great Crash of 1929, when they both jumped again, literally doubling together and suddenly, to the highest levels ever observed up to that time. These record levels of economic inequality (which increase, as Galbraith shows, when unemployment increases) were accompanied by epidemic violence; both murder rates and wage inequity remained twice as high as they had previously been, until the economic leveling effects of Roosevelt’s New Deal, beginning in 1933, and the Second World War a few years later, combined to bring both violence and wage inequity down by the end of the war to the same low levels as at the end of the First World War, and they both remained at those low levels for the next quarter of a century, from roughly 1944 to 1968.

That was the modern turning point. In 1968 the median wage began falling, after having risen steadily for the previous three decades, and “beginning in 1969 inequality started to rise, and continued to increase sharply for fifteen years,” (J. K. Galbraith). The homicide rate soon reached levels twice as high as they had been during the previous quarter of a century (1942–66). Both wage inequality and homicide rates remained at those relatively high levels for the next quarter of a century, from 1973 to 1997. That is, the murder rate averaged 5 per 100,000 population from 1942 to 1966, and 10 per 100,000 from 1970 to 1997. Finally, by 1998 unemployment dropped to the lowest level since 1970; both the minimum wage and the median wage began increasing again in real terms for the first time in thirty years; and the poverty rate began dropping. Not surprisingly, the homicide rate also fell, for the first time in nearly thirty years, below the range in which it had been fluctuating since 1970–71 (though both rates, of murder and of economic inequality, are still higher than they were from the early 1940s to the mid-1960s).

As mentioned before, unemployment rates are also relevant to rates of violence. M. H. Brenner found that every one per cent rise in the unemployment rate is followed within a year by a 6 per cent rise in the homicide rate, together with similar increases in the rates of suicide, imprisonment, mental hospitalization, infant mortality, and deaths from natural causes such as heart attacks and strokes (Mental Illness and the Economy, 1973, and “Personal Stability and Economic Security,” 1977). Theodore Chiricos reviewed sixty-three American studies and concluded that while the relationship between unemployment and crime may have been inconsistent during the 1960s (some studies found a relationship, some did not), it became overwhelmingly positive in the 1970s, as unemployment changed from a brief interval between jobs to enduring worklessness (“Rates of Crime and Unemployment,” 1987). David Dickinson found an exceptionally close relationship between rates of burglary and unemployment for men under twenty-five in the U.K. in the 1980s and 1990s (“Crime and Unemployment,” 1993). Bernstein and Houston have also found statistically significant correlations between unemployment and crime rates, and negative correlations between wages and crime rates, in the U.S. between 1989 and 1998 (Crime and Work, 2000).

If we compare Galbraith’s data with U.S. homicide statistics, we find that the U.S. unemployment rate has moved in the same direction as the homicide rate from 1920 to 1992: increasing sharply in 1920–21, then jumping to even higher levels from the Crash of 1929 until Roosevelt’s reforms began in 1933, at which point the rates of both unemployment and homicide also began to fall, a trend that accelerated further with the advent of the war. Both rates then remained low (with brief fluctuations) until 1968, when they began a steady rise which kept them both at levels higher than they had been in any postwar period, until the last half of 1997, when unemployment fell below that range and has continued to decline ever since, followed closely by the murder rate.

Why do economic inequality and unemployment both stimulate violence? Ultimately, because both increase feelings of shame (Gilligan, Violence). For example, we speak of the poor as the lower classes, who have lower social and economic status, and the rich as the upper classes who have higher status. But the Latin for lower is inferior, and the word for the lower classes in Roman law was the humiliores. Even in English, the poor are sometimes referred to as the humbler classes. Our language itself tells us that to be poor is to be humiliated and inferior, which makes it more difficult not to feel inferior. The word for upper or higher was superior, which is related to the word for pride, superbia (the opposite of shame), also the root of our word superb (another antonym of inferior). And a word for the upper classes, in Roman law, was the honestiores (related to the word honor, also the opposite of shame and dishonor).

Inferiority and superiority are relative concepts, which is why it is relative poverty, not absolute poverty, that exposes people to feelings of inferiority. When everyone is on the same level, there is no shame in being poor, for in those circumstances the very concept of poverty loses its meaning. Shame is also a function of the gap between one’s level of aspiration and one’s level of achievement. In a society with extremely rigid caste or class hierarchies, it may not feel so shameful to be poor, since it is a matter of bad luck rather than of any personal failing. Under those conditions, lower social status may be more likely to motivate apathy, fatalism, and passivity (or “passive aggressiveness”), and to inhibit ambition and the need for achievement, as Gunnar Myrdal noted in many of the caste-ridden peasant cultures that he studied in Asian Drama (1968). Caste-ridden cultures, however, may have the potential to erupt into violence on a revolutionary or even genocidal scale, once they reject the notion that the caste or class one is born into is immutable, and replace it with the notion that one has only oneself to blame if one remains poor while others are rich. This we have seen repeatedly in the political and revolutionary violence that has characterized the history of Indonesia, Kampuchea, India, Ceylon, China, Vietnam, the Philippines, and many other areas throughout Asia during the past half-century.

All of which is another way of saying that one of the costs people pay for the benefits associated with belief in the “American Dream,” the myth of equal opportunity, is an increased potential for violence. In fact, the social and economic system of the United States combines almost every characteristic that maximizes shame and hence violence. First, there is the “Horatio Alger” myth that everyone can get rich if they are smart and work hard (which means that if they are not rich they must be stupid or lazy, or both). Second, we are not only told that we can get rich, we are also stimulated to want to get rich. For the whole economic system of mass production depends on whetting people’s appetites to consume the flood of goods that are being produced (hence the flood of advertisements). Third, the social and economic reality is the opposite of the Horatio Alger myth, since social mobility is actually less likely in the U.S. than in the supposedly more rigid social structures of Europe and the U.K. As Mishel, Bernstein and Schmitt have noted:

Contrary to widely held perceptions, the U.S. offers less economic mobility than other rich countries. In one study, for example, low-wage workers in the U.S. were more likely to remain in the low-wage labor market five years later than workers in Germany, France, Italy, the United Kingdom, Denmark, Finland, and Sweden (all the other countries studied in this analysis). In another study, poor households in the U.S. were less likely to leave poverty from one year to the next than were poor households in Canada, Germany, the Netherlands, Sweden, and the United Kingdom (all the countries included in this second analysis).
(The State of Working America 2000–2001, 2001)

Fourth, as they also mention, “the U.S. has the most unequal income distribution and the highest poverty rates among all the advanced economies in the world. The U.S. tax and benefit system is also one of the least effective in reducing poverty.” The net effect of all these features of U.S. society is to maximize the gap between aspiration and attainment, which maximizes the frequency and intensity of feelings of shame, which maximizes the rates of violent crimes.

It is difficult not to feel inferior if one is poor when others are rich, especially in a society that equates self-worth with net worth; and it is difficult not to feel rejected and worthless if one cannot get or hold a job while others continue to be employed. Of course, most people who lose jobs or income do not commit murders as a result; but there are always some men who are just barely maintaining their self-esteem at minimally tolerable levels even when they do have jobs and incomes. And when large numbers of them lose those sources of self-esteem, the number who explode into homicidal rage increases as measurably, regularly, and predictably as any epidemic does when the balance between pathogenic forces and the immune system is altered.

And those are not just statistics. I have seen many individual men who have responded in exactly that way under exactly these circumstances. For example, one African-American man was sent to the prison mental hospital I directed in order to have a psychiatric evaluation before his murder trial. A few months before that, he had had a good job. Then he was laid off at work, but he was so ashamed of this that he concealed the fact from his wife (who was a schoolteacher) and their children, going off as if to work every morning and returning at the usual time every night. Finally, after two or three months of this, his wife noticed that he was not bringing in any money. He had to admit the truth, and then his wife fatally said, “What kind of man are you? What kind of man would behave this way?” To prove that he was a man, and to undo the feeling of emasculation, he took out his gun and shot his wife and children. (Keeping a gun is, of course, also a way that some people reassure themselves that they are really men.) What I was struck by, in addition to the tragedy of the whole story, was the intensity of the shame he felt over being unemployed, which led him to go to such lengths to conceal what had happened to him.

Caste Stratification

Caste stratification also stimulates violence, for the same reasons. The United States, perhaps even more than the other Western democracies, has a caste system that is just as real as that of India, except that it is based on skin color and ethnicity more than on hereditary occupation. The fact that it is a caste system similar to India’s is registered by the fact that in my home city, Boston, members of the highest caste are called “Boston Brahmins” (a.k.a. “WASPs,” or White Anglo-Saxon Protestants). The lowest rung on the caste ladder, corresponding to the “untouchables,” or Harijan, of India, is occupied by African-Americans, Native Americans, and some Hispanic-Americans. To be lower caste is to be rejected, socially and vocationally, by the upper castes, and regarded and treated as inferior. For example, whites often move out of neighborhoods when blacks move in; blacks are “the last to be hired and the first to be fired,” so that their unemployment rate has remained twice as high as the white rate ever since it began being measured; black citizens are arrested and publicly humiliated under circumstances in which no white citizen would be; respectable white authors continue to write books and articles claiming that blacks are intellectually inferior to whites; and so on and on, ad infinitum. It is not surprising that the constant shaming and attributions of inferiority to which the lower caste groups are subjected would cause members of those groups to feel shamed, insulted, disrespected, disdained, and treated as inferior—because they have been, and because many of their greatest writers and leaders have told us that this is how they feel they have been treated by whites. Nor is it surprising that this in turn would give rise to feelings of resentment if not rage, nor that the most vulnerable, those who lacked any non-violent means of restoring their sense of personal dignity, such as educational achievements, success, and social status, might well see violence as the only way of expressing those feelings. And since one of the major disadvantages of lower-caste status is lack of equal access to educational and vocational opportunities, it is not surprising that the rates of homicide and other violent crimes among all the lower-caste groups mentioned are many times higher, year after year, than those of the upper-caste groups.

Kindle Locations 1218-1256

Single-Parent Families

Another factor that correlates with rates of violence in the United States is the rate of single-parent families: children raised in them are more likely to be abused, and are more likely to become delinquent and criminal as they grow older, than are children who are raised by two parents. For example, over the past three decades those two variables—the rates of violent crime and of one-parent families—have increased in tandem with each other; the correlation is very close. For some theorists, this has suggested that the enormous increase in the rate of youth violence in the U.S. over the past few decades has been caused by the proportionately similar increase in the rate of single-parent families.

As a parent myself, I would be the first to agree that child-rearing is such a complex and demanding task that parents need all the help they can get, and certainly having two caring and responsible parents available has many advantages over having only one. In addition, children, especially boys, can be shown to benefit in many ways, including diminished risk of delinquency and violent criminality, from having a positive male role-model in the household. The adult who is most often missing in single-parent families is the father. Some criminologists have noticed that Japan, for example, has practically no single-parent families, and its murder rate is only about one-tenth as high as that of the United States.

Sweden’s rate of one-parent families, however, has grown almost to equal that in the United States, and over the same period (the past few decades), yet Sweden’s homicide rate has also been on average only about one-tenth as high as that of the U.S., during that same time. To understand these differences, we should consider another variable, namely, the size of the gap between the rich and the poor. As stated earlier, Sweden and Japan both have among the lowest degrees of economic inequity in the world, whereas the U.S. has the highest polarization of both wealth and income of any industrialized nation. And these differences exist even when comparing different family structures. For example, as Timothy M. Smeeding has shown, the rate of relative poverty is very much lower among single-parent families in Sweden than it is among those in the U.S. Even more astonishing, however, is the fact that the rate of relative poverty among single-parent families in Sweden is much lower than it is among two-parent families in the United States (“Financial Poverty in Developed Countries,” 1997). Thus, it would seem that however much family structure may influence the rate of violence in a society, the overall social and economic structure of the society—the degree to which it is or is not stratified into highly polarized upper and lower social classes and castes—is a much more powerful determinant of the level of violence.

There are other differences between the cultures of Sweden and the U.S. that may also contribute to the differences in the correlation between single-parenthood and violent crime. The United States, with its strongly Puritanical and Calvinist cultural heritage, is much more intolerant of both economic dependency and out-of-wedlock sex than Sweden. Thus, the main form of welfare support for single-parent families in the U.S. (until it was ended a year ago), A.F.D.C., or Aid to Families with Dependent Children, was specifically denied to families in which the father (or any other man) was living with the mother; indeed, government agents have been known to raid the homes of single mothers with no warning in the middle of the night in order to “catch” them in bed with a man, so that they could then deprive them (and their children) of their welfare benefits. This practice, promulgated by politicians who claimed that they were supporting what they called “family values,” of course had the effect of destroying whatever family life did exist. Fortunately for single mothers in Sweden, the whole society is much more tolerant of people’s right to organize their sexual life as they wish, and as a result many more single mothers are in fact able to raise their children with the help of a man.

Another difference between Sweden and the U.S. is that fewer single mothers in Sweden are actually dependent on welfare than is true in the U.S. The main reason for this is that mothers in Sweden receive much more help from the government in getting an education, including vocational training; more help in finding a job; and access to high-quality free childcare, so that mothers can work without leaving their children uncared for. The U.S. system, which claims to be based on opposition to dependency, thus fosters more welfare dependency among single mothers than Sweden’s does, largely because it is so much more miserly and punitive with the “welfare” it does provide. Even more tragically, however, it also fosters much more violence. It is not single motherhood as such that causes the extremely high levels of violence in the United States, then; it is the intense degree of shaming to which single mothers and their children are exposed by the punitive, miserly, Puritanical elements that still constitute a powerful strain in the culture of the United States.

Kindle Locations 1310-1338

Social and Political Democracy

Since the end of the Second World War, the homicide rates of the nations of western Europe, and Japan, for example, have been only about a tenth as high as those of the United States, which is another way of saying that they have been preventing 90 per cent of the violence that the U.S. still experiences. Their rates of homicide were not lower than those in the U.S. before. On the contrary, Europe and Asia were scenes of the largest numbers of homicides ever recorded in the history of the world, both in terms of absolute numbers killed and in the death rates per 100,000 population, in the “thirty years’ war” that lasted from 1914 to 1945. Wars, and governments, have always caused far more homicides than all the individual murderers put together (Richardson, Statistics of Deadly Quarrels, 1960; Keeley, War Before Civilization, 1996). After that war ended, however, they all took two steps which have been empirically demonstrated throughout the world to prevent violence. They instituted social democracy (or “welfare states,” as they are sometimes called), and achieved an unprecedented decrease in the inequities in wealth and income between the richest and poorest groups in the population, one effect of which is to reduce the frequency of interpersonal or “criminal” violence. And Germany, Japan and Italy adopted political democracy as well, the effect of which is to reduce the frequency of international violence, or warfare (including “war crimes”).

While the United States adopted political democracy at its inception, it is the only developed nation on earth that has never adopted social democracy (a “welfare state”). The United States alone among the developed nations does not provide universal health insurance for all its citizens; it has the highest rate of relative poverty among both children and adults, and the largest gap between the rich and the poor, of any of the major economies; vastly less adequate levels of unemployment insurance and other components of shared responsibility for human welfare; and so on. Thus, it is not surprising that it also has murder rates that have been five to ten times as high as those of any other developed nation, year after year. It is also consistent with that analysis that the murder rate finally fell below the epidemic range in which it had fluctuated without exception for the previous thirty years (namely, 8 to 11 homicides per 100,000 population per year), only in 1998, after the unemployment rate reached its lowest level in thirty years and the rate of poverty among the demographic groups most vulnerable to violence began to diminish—slightly—for the first time in thirty years.

Some American politicians, such as President Eisenhower, have suggested that the nations of western Europe have merely substituted a high suicide rate for the high homicide rate that the U.S. has. In fact, the suicide rates in most of the other developed nations are also substantially lower than those of the United States, or at worst not substantially higher. The suicide rates throughout the British Isles, the Netherlands, and the southern European nations are around one-third lower than those of the U.S.; the rates in Canada, Australia, and New Zealand, as well as Norway and Luxembourg, are about the same. Only the remaining northern and central European countries and Japan have suicide rates that are higher, ranging from 30 per cent higher to roughly twice as high as the suicide rate of the U.S. By comparison, the U.S. homicide rate is roughly ten times as high as those of western Europe (including the U.K., Scandinavia, France, Germany, Switzerland, Austria), southern Europe, and Japan; and five times as high as those of Canada, Australia and New Zealand. No other developed nation has a homicide rate that is even close to that of the U.S.


Verbal Behavior

There is a somewhat interesting discussion of the friendship between B.F. Skinner and W.V.O. Quine. The piece explores their shared interests and possible influences on one another. It’s not exactly an area of personal interest, but it got me thinking about Julian Jaynes.

Skinner is famous for his behaviorist research. When behaviorism is mentioned, what immediately comes to mind for most people is Pavlov’s dog. But behaviorism wasn’t limited to animals and simple responses to stimuli. Skinner extended his theory to verbal behavior as well. As Michael Karson explains,

“Skinner called his behaviorism “radical,” (i.e., thorough or complete) because he rejected then-behaviorism’s lack of interest in private events. Just as Galileo insisted that the laws of physics would apply in the sky just as much as on the ground, Skinner insisted that the laws of psychology would apply just as much to the psychologist’s inner life as to the rat’s observable life.

“Consciousness has nothing to do with the so-called and now-solved philosophical problem of mind-body duality, or in current terms, how the physical brain can give rise to immaterial thought. The answer to this pseudo-problem is that even though thought seems to be immaterial, it is not. Thought is no more immaterial than sound, light, or odor. Even educated people used to believe, a long time ago, that these things were immaterial, but now we know that sound requires a material medium to transmit waves, light is made up of photons, and odor consists of molecules. Thus, hearing, seeing, and smelling are not immaterial activities, and there is nothing in so-called consciousness besides hearing, seeing, and smelling (and tasting and feeling). Once you learn how to see and hear things that are there, you can also see and hear things that are not there, just as you can kick a ball that is not there once you have learned to kick a ball that is there. Engaging in the behavior of seeing and hearing things that are not there is called imagination. Its survival value is obvious, since it allows trial and error learning in the safe space of imagination. There is nothing in so-called consciousness that is not some version of the five senses operating on their own. Once you have learned to hear words spoken in a way that makes sense, you can have thoughts; thinking is hearing yourself make language; it is verbal behavior and nothing more. It’s not private speech, as once was believed; thinking is private hearing.”

It’s amazing how much this resonates with Jaynes’ bicameral theory. This maybe shouldn’t be surprising. After all, Jaynes was trained in behaviorism and early on did animal research. He was mentored by the behaviorist Frank A. Beach and was friends with Edwin Boring, who wrote a book about consciousness in relation to behaviorism. Reading about Skinner’s ideas about verbal behavior, I was reminded of Jaynes’ view of authorization as it relates to linguistic commands and how they become internalized to form an interiorized mind-space (i.e., Jaynesian consciousness).

I’m not the only person to think along these lines. On Reddit, someone wrote: “It is possible that before there were verbal communities that reinforced the basic verbal operants in full, people didn’t have complete “thinking” and really ran on operant auto-pilot since they didn’t have a full covert verbal repertoire and internal reinforcement/shaping process for verbal responses covert or overt, but this would be aeons before 2-3 kya. Wonder if Jaynes ever encountered Skinner’s “Verbal Behavior”…” Jaynes only references Skinner once in his book on bicameralism and consciousness. But he discusses behaviorism in general to some extent.

In the introduction, he describes behaviorism in this way: “From the outside, this revolt against consciousness seemed to storm the ancient citadels of human thought and set its arrogant banners up in one university after another. But having once been a part of its major school, I confess it was not really what it seemed. Off the printed page, behaviorism was only a refusal to talk about consciousness. Nobody really believed he was not conscious. And there was a very real hypocrisy abroad, as those interested in its problems were forcibly excluded from academic psychology, as text after text tried to smother the unwanted problem from student view. In essence, behaviorism was a method, not the theory that it tried to be. And as a method, it exorcised old ghosts. It gave psychology a thorough house cleaning. And now the closets have been swept out and the cupboards washed and aired, and we are ready to examine the problem again.” As dissatisfying as animal research was for Jaynes, it nonetheless set the stage for deeper questioning by way of a broader approach. It made possible new understanding.

Like Skinner, he wanted to take the next step, shifting from behavior to experience. Even their strategies to accomplish this appear to have been similar. Sensory experience itself becomes internalized, according to both of their theories. For Jaynes, perception of external space becomes the metaphorical model for a sense of internal space. When Karson says of Skinner’s view that “thinking is hearing yourself make language,” that seems close to Jaynes’ discussion of hearing voices as it develops into an ‘I’ and a ‘me’, the sense of identity split into subject and object, which he asserted was required for one to hear one’s own thoughts.

I don’t know Skinner’s thinking in detail or how it changed over time. He too pushed beyond the bounds of behavioral research. It’s not clear that Jaynes ever acknowledged this commonality. In his 1990 afterword to his book, Jaynes makes his one mention of Skinner without pointing out Skinner’s work on verbal behavior:

“This conclusion is incorrect. Self-awareness usually means the consciousness of our own persona over time, a sense of who we are, our hopes and fears, as we daydream about ourselves in relation to others. We do not see our conscious selves in mirrors, even though that image may become the emblem of the self in many cases. The chimpanzees in this experiment and the two-year old child learned a point-to-point relation between a mirror image and the body, wonderful as that is. Rubbing a spot noticed in the mirror is not essentially different from rubbing a spot noticed on the body without a mirror. The animal is not shown to be imagining himself anywhere else, or thinking of his life over time, or introspecting in any sense — all signs of a conscious life.

“This less interesting, more primitive interpretation was made even clearer by an ingenious experiment done in Skinner’s laboratory (Epstein, 1981). Essentially the same paradigm was followed with pigeons, except that it required a series of specific trainings with the mirror, whereas the chimpanzee or child in the earlier experiments was, of course, self-trained. But after about fifteen hours of such training when the contingencies were carefully controlled, it was found that a pigeon also could use a mirror to locate a blue spot on its body which it could not see directly, though it had never been explicitly trained to do so. I do not think that a pigeon because it can be so trained has a self-concept.”

Jaynes was making the simple, if oft overlooked, point that perception of body is not the same thing as consciousness of mind. A behavioral response to one’s own body isn’t fundamentally different than a behavioral response to anything else. Behavioral responses are found in every species. This isn’t helpful in exploring consciousness itself. Skinner too wanted to get beyond this level of basic behavioral research, so it seems. Interestingly, without any mention of Skinner, Jaynes does use the exact phrasing of Skinner in speaking about the unconscious learning of ‘verbal behavior’ (Book One, Chapter 1):

“Another simple experiment can demonstrate this. Ask someone to sit opposite you and to say words, as many words as he can think of, pausing two or three seconds after each of them for you to write them down. If after every plural noun (or adjective, or abstract word, or whatever you choose) you say “good” or “right” as you write it down, or simply “mmm-hmm” or smile, or repeat the plural word pleasantly, the frequency of plural nouns (or whatever) will increase significantly as he goes on saying words. The important thing here is that the subject is not aware that he is learning anything at all. [13] He is not conscious that he is trying to find a way to make you increase your encouraging remarks, or even of his solution to that problem. Every day, in all our conversations, we are constantly training and being trained by each other in this manner, and yet we are never conscious of it.”

This is just a passing comment, one example among many, and he states that “Such unconscious learning is not confined to verbal behavior.” He doesn’t further explore language in this immediate section or use the phrase ‘verbal behavior’ again in any other section, although the notion of verbal behavior is central to the entire book. But a decade after the original publication date of his book, Jaynes wrote a paper where he does talk about Skinner’s ideas about language:

“One needs language for consciousness. We think consciousness is learned by children between two and a half and five or six years in what we can call the verbal surround, or the verbal community as B.F Skinner calls it. It is an aspect of learning to speak. Mental words are out there as part of the culture and part of the family. A child fits himself into these words and uses them even before he knows the meaning of them. A mother is constantly instilling the seeds of consciousness in a two- and three-year-old, telling the child to stop and think, asking him “What shall we do today?” or “Do you remember when we did such and such or were somewhere?” And all this while metaphor and analogy are hard at work. There are many different ways that different children come to this, but indeed I would say that children without some kind of language are not conscious.”
(Jaynes, J. 1986. “Consciousness and the Voices of the Mind.” Canadian Psychology, 27, 128– 148.)

I don’t have access to that paper. That quote comes from an article by John E. Limber: “Language and consciousness: Jaynes’s “Preposterous idea” reconsidered.” It is found in Reflections on the Dawn of Consciousness edited by Marcel Kuijsten (pp. 169-202).

Anyway, the point Jaynes makes is that language is required for consciousness as an inner sense of self because language is required to hear ourselves think. So verbal behavior is a necessary, if not sufficient, condition for the emergence of consciousness as we know it. As long as verbal behavior remains an external event, conscious experience won’t follow. Humans have to learn to hear themselves as they hear others, to split themselves into a speaker and a listener.

This relates to what makes possible the differentiation of hearing a voice being spoken by someone in the external world and hearing a voice as a memory of someone in one’s internal mind-space. Without this distinction, imagination isn’t possible, for anything imagined would become a hallucination where internal and external hearing are conflated, or rather never separated. Jaynes proposes this is why ancient texts regularly describe people as hearing voices of deities and deified kings, spirits and ancestors. The bicameral person, according to the theory, hears their own voice without being conscious that it is their own thought.

All of that emerges from those early studies of animal behavior. Behaviorism plays a key role simply in placing the emphasis on behavior. From there, one can come to the insight that consciousness is a neurocognitive behavior modeled on physical and verbal behavior. The self is a metaphor built on embodied experience in the world. This relates to many similar views, such as that humans learn a theory of mind within themselves by first developing a theory of mind in perceiving others. This goes along with attention schema theory and the attribution of consciousness. And some have pointed out what is called the double subject fallacy, a hidden form of dualism that infects neuroscience. However described, it gets at the same issue.

It all comes down to our being both social animals and inhabitants of the world. Human development begins with a focus outward, culture and language determining what kind of identity forms. How we learn to behave is who we become.


Fluidity of Perceived Speciation

There is a Princeton article that discusses a study on speciation. Some researchers observed a single finch that became isolated from its own species. The island it ended up on, though, had several other species of finch. So, it crossed the species divide to mate with one of the other populations.

That alone questions the very meaning of species. It was neither genetics nor behavior that kept these breeding populations separate. It was simply geographic distance. Eliminate that geographic factor and hybridization quickly follows. The researchers argue that this hybridization represents a new species. But their observations are over a short period of time. There is no reason to assume that further hybridization won’t occur, causing this population to slowly assimilate back into the original local population, the genetic variance lessening over time (as with populations of Homo sapiens that hybridized with other hominids such as Neanderthals).

All this proves is that our present definition of ‘species’ isn’t always particularly scientific, in the sense of being useful for careful understanding. Of course, it’s not hard to create a separate breeding population. But if separate breeding populations don’t have much genetic difference and can easily interbreed, then how is calling them separate species meaningful in any possible sense of that word? Well, it isn’t meaningful.

This study showed that sub-populations can become isolated for periods of time. What it doesn’t show is that this isolation will be long-lasting, as it isn’t entirely known what caused the separation of the breeding populations in the first place. For example, we don’t know to what extent the altered bird songs are related to genetics versus epigenetics, microbiome, environmental shifts, learned behavior, etc. The original lost and isolated finch carried with it much more than the genetics of its species. It would be unscientific to conclude much from such limited info and observations.

The original cause(s) might change again. In that case, the temporary sub-population would lose the traits, in this case birdsong, that have separated it. That probably happens all the time, temporary changes within populations and occasional hybridized populations appearing only to disappear again. But it’s probably rare that these changes become permanent so as to develop into genuinely separate species, in the meaningful sense of being genetically and behaviorally distinct to a large enough degree.

Also, the researchers didn’t eliminate the possible explanation of what in humans would be called culture. Consider mountain lions. Different mountain lion populations will only hunt certain prey species. This isn’t genetically determined behavior. Rather, specific hunting techniques are taught from mother to cub. But this could create separate breeding populations, for in some cases they might hunt in different areas where the various prey are concentrated. Even so, this hasn’t separated the mountain lion populations into different species. They remain genetically the same.

Sure, give it enough time combined with environmental changes, and then speciation might follow. But speciation can’t be determined by behavior alone, even when combined with minor genetic differences. Otherwise, that would mean every human culture on the planet is a separate species. The Irish would be a separate species from the English. The Germans would be a separate species from the French. The Chinese would be a separate species from the Japanese. Et cetera. This is ludicrous, even though some right-wingers might love this idea and in fact this was an early pre-scientific definition of races as species or sub-species. But as we know, humans have some of the lowest levels of genetic diversity among similar species.

Our notion of species is too simplistic. We have this simplistic view because, as our lives are short and science is young, we only have a snapshot of nature. Supposed species are probably a lot more fluid than the present paradigm allows for. The perceived or imposed boundaries of ‘species’ could constantly be changing with various sub-populations constantly emerging and merging, with environmental niches constantly shifting and coalescing. The idea of static species generally seems unhelpful, except maybe in rare cases where a species becomes isolated over long periods of time (e.g., the ice age snails surviving in a few ice caves in Iowa and Illinois) or else in species that are so perfectly adapted that evolutionary conditions have had little apparent impact (e.g., crocodiles).

We easily forget that modern science hasn’t been studying nature for very long. As I often repeat, our ignorance is vast beyond comprehension, much greater than our present knowledge.

As an amusing related case, some species will have sex with entirely different species. Hybridization isn’t even possible in such situations. It’s not clear why this happens. An example of this is a particular population of monkeys sexually mounting deer and, as they sometimes get grooming and food out of the deal, a fair number of the deer tolerate the behavior. There is no reason to assume these deer-mounting monkeys have evolved into a new species, as compared to nearby populations of monkeys who don’t sexually molest hoofed animals. Wild animals don’t seem to care all that much what modern humans think of them. Abstract categories of species don’t stop them from acting however they so desire. And it hasn’t always stopped humans either, whether between the supposed races within the human species or across the supposed divide of species.

From the lascivious monkey article (linked directly above):

“Finally, the researchers say, this might be a kind of cultural practice. Japanese macaques display different behaviors in different locations — some wash their food, or take hot-spring baths, or play with snowballs.

“Adolescent females grinding on the backs of deer might similarly be a cultural phenomenon. But it has only been observed at Minoo within the past few years.

“The monkey-deer sexual interactions reported in our paper may reflect the early stage development of a new behavioural tradition at Minoo,” Gunst-Leca told The Guardian.

“Alternatively, the paper notes, it could be a “short-lived fad.” Time will tell.”


Trauma, Embodied and Extended

One of the better books on trauma I’ve seen is by Resmaa Menakem. He is a trauma therapist with a good range of personal and professional experience, which allows him to persuasively combine science with anecdotes. I heard him speak at Prairie Lights bookstore. He was at the end of his book tour and, instead of reading from his book My Grandmother’s Hands, he discussed what inspired it.

He covered his experience working with highly traumatized contract workers on military bases in Afghanistan. And he grounded it with stories about his grandmother. But more interestingly, he mentioned a key scientific study (see note 15 below). Although I had come across it before, I had forgotten about it. Setting up his discussion, he asked the audience, “Have any of you been to Washington, DC and smelled the cherry blossoms?” He described the warm, pleasant aroma. And then he gave the details of the study.

Mice were placed in a special enclosure. It was the perfect environment to fulfill a mouse’s every need and desire. But the wire mesh on the bottom was connected to electrical wires. The researchers would pump in the smell of cherries and then switch on the electricity. The mice jumped, ran around, clambered over each other, and struggled to escape — what any animal, including humans, would do in a similar situation. This was repeated many times, until finally the mice would have this Pavlovian response to cherry smell alone without any electric shock.

That much isn’t surprising. Thousands of studies have demonstrated such behaviorism. Where it gets intriguing is that the mice born to these traumatized mice also responded the same way to the cherry smell, despite never having been shocked. And the same behavior was observed in the generation of mice following that. Traumatic memory of something as specific as a smell became internalized and ingrained within the body itself, passed on through genetics (or, to be specific, epigenetics). It became free-floating trauma disconnected from its originating source.

Menakem asked what another scientist would think who came in after the initial part of the study. The new scientist would not have seen the traumatizing shocks, but instead would only observe the strange response to the smell of cherries. Based on this limited perspective, this scientist would conclude that there was something wrong with those mice. From the book, here is how he describes it in human terms:

“Unhealed trauma acts like a rock thrown into a pond; it causes ripples that move outward, affecting many other bodies over time. After months or years, unhealed trauma can appear to become part of someone’s personality. Over even longer periods of time, as it is passed on and gets compounded through other bodies in a household, it can become a family norm. And if it gets transmitted and compounded through multiple families and generations, it can start to look like culture.”

This is a brilliant yet grounded way of explaining trauma. It goes beyond a victimization cycle. The trauma gets passed on, with or without a victimizer to mediate the transmission, although typically this process goes hand in hand with continuing victimization. Trauma isn’t a mere psychological phenomenon manifesting as personal dysfunction. It can become embodied and expressed as a shared experience, forming the background to the lives, relationships, and communities within an entire society — over the centuries, it could solidify into a well-trod habitus and entrenched social order. The personal becomes intergenerational becomes historical.

This helps explain the persistence of societal worldviews and collective patterns, what most often gets vaguely explained as ‘culture’. It’s not just about trauma, for anything can be passed on in similar ways, such as neurocognitive memes involving thought, perception, and behavior — and it is plausible that, whether seemingly harmful or beneficial, much of this is supported by epigenetic mechanisms contributing to specific expressions of nature-nurture dynamics. Related to this, Christine Kenneally offers a corroborating perspective (The Invisible History of the Human Race, Kindle Locations 2430-2444):

“It seemed that both families and social institutions matter but that the former is more powerful. The data suggested that a region might develop its own culture of distrust and that it could affect people who moved into that area, even if their ancestors had not been exposed to the historical event that destroyed trust in the first place. But if someone’s ancestors had significant exposure to the slave trade, then even if he moved away from the area where he was born to an area where there was no general culture of mistrust, he was still less likely to be trusting. Indeed, Nunn and Wantchekon found evidence that the inheritance of distrust within a family was twice as powerful as the distrust that is passed down in a community.”

Kenneally doesn’t frame this according to epigenetics. But that would be a highly probable explanation, considering the influence happens mostly along familial lines, potentially implying a biological component. Elsewhere, the author does mention it in passing, using the same mouse study along with a human study (Kindle Locations 4863-4873):

“The lives that our parents and grandparents lived may also affect the way genetic conditions play out in our bodies. One of the central truths of twentieth-century genetics was that the genome is passed on from parents to child unaffected by the parents’ lives. But it has been discovered in the last ten years that there are crucial exceptions to this rule. Epigenetics tells us that events in your grandfather’s life may have tweaked your genes in particular ways. The classic epigenetics study showed that the DNA of certain adults in the Netherlands was irrevocably sculpted by the experience of their grandparents in a 1944 famine. In cases like this a marker that is not itself a gene is inherited and plays out via the genes. More recent studies have shown complex multigenerational effects. In one, mice were exposed to a traumatic event, which was accompanied by a particular odor. The offspring of the mice, and then their offspring, showed a greater reactivity to the odor than mice whose “grandparents” did not experience such conditioning. In 2014 the first ancient epigenome, from a four-thousand-year-old man from Greenland, was published. Shortly after that, drafts of the Neanderthal and Denisovan epigenomes were published. They may open up an entirely new way to compare and contrast our near-relatives and ancestors and to understand the way that they passed down experiences and predispositions. As yet it’s unclear for how many generations these attachments to our genes might be passed down.”

In emphasizing this point, she continues her thought with the comment that (Kindle Locations 4874-4876), “Even given our ability to read hundreds of thousands of letters in the DNA of tens of thousands of people, it turns out that— at least for the moment— family history is still a better predictor of many health issues. For example, it is the presence of a BRCA mutation plus a family history of breast cancer that most significantly raises a woman’s risk of the disease.”

Much of that ‘family history’ would be epigenetic or else involve other biological mechanisms such as stress-induced hormones within the fetal environment of the womb. Also, microbiomes are inherited and have been shown to alter epigenetics, which means the non-human genes of bacteria can alter the expression of human genes (this can be taken a further step back, since presumably bacterial genetics also involve epigenetics). Besides all of this, there is much else that gets passed on by those around us, from viruses to parasites.

Another pathway of transmission would be shared environmental conditions, specifically considering that people tend to share environments to the degree their relationships are close. Those in the same society would have more shared environment than those in other societies, those in the same community moreso than those in other communities in the same society, those in the same neighborhood moreso than those in other neighborhoods in the same community, and those in the same family moreso than those in other families in the same neighborhood. The influence of environments is powerfully demonstrated with the rat park research. And the environmental factors easily remain hidden, even under careful laboratory conditions.

What we inherit is diverse and complex. But inheritance isn’t fatalism. Consider another mouse study involving electric shocks (Genetic ‘switch’ for memories, The Age), showing that the effects of trauma can be epigenetically reversed within the body:

“Both sets of mice were trained to fear a certain cage by giving them a mild electric shock every time they were put inside.
“Mice whose Tet1 gene was disabled learned to associate the cage with the shock, just like the normal mice. However, when the mice were put in the cage without an electric shock, the two groups behaved differently.
“To the scientists’ astonishment, mice with the Tet1 gene did not fear the cage because their memory of being hurt had already been replaced by new information. The mice with the disabled gene, whose memories had not been replaced, were still traumatised by the experience.”

Trauma isn’t a personal failing or weakness. In a sense, it isn’t even personal. It’s a biological coping mechanism, passed on from body to body, across generations and centuries. Trauma is a physical condition, based on a larger context of environmental conditions. And maybe one day we will be able to treat it as easily as any other physical condition. In turn, this could have a profound impact on so much of what has been considered ‘psychological’ and ‘cultural’. There are immense implications for the overlap of personal healthcare and public health.

* * *

My Grandmother’s Hands: Racialized Trauma and the Pathway to Mending Our Hearts and Bodies
by Resmaa Menakem
Chapter 3 Body to Body, Generation to Generation
pp. 23-34

Not to know what happened before you were born is to remain forever a child.

No man can know where he is going unless he knows exactly where he has been and exactly how he arrived at his present place.
Maya Angelou

Most of us think of trauma as something that occurs in an individual body, like a toothache or a broken arm. But trauma also routinely spreads between bodies, like a contagious disease. […]

It’s not hard to see how trauma can spread like a contagion within couples, families, and other close relationships. What we don’t often consider is how trauma can spread from body to body in any relationship.

Trauma also spreads impersonally, of course, and has done so throughout human history. Whenever one group oppresses, victimizes, brutalizes, or marginalizes another, many of the victimized people may suffer trauma, and then pass on that trauma response to their children as standard operating procedure. 13 Children are highly susceptible to this because their young nervous systems are easily overwhelmed by things that older, more experienced nervous systems are able to override. As we have seen, the result is a soul wound or intergenerational trauma. When the trauma continues for generation after generation, it is called historical trauma. Historical trauma has been likened to a bomb going off, over and over again.

When one settled body encounters another, this can create a deeper settling of both bodies. But when one unsettled body encounters another, the unsettledness tends to compound in both bodies. In large groups, this compounding effect can turn a peaceful crowd into an angry mob. The same thing happens in families, especially when multiple family members face painful or stressful situations together. It can also occur more subtly over time, when one person repeatedly passes on their unsettledness to another. In her book Everyday Narcissism, therapist Nancy Van Dyken calls this hazy trauma: trauma that can’t be traced back to a single specific event.

Unhealed trauma acts like a rock thrown into a pond; it causes ripples that move outward, affecting many other bodies over time. After months or years, unhealed trauma can appear to become part of someone’s personality. Over even longer periods of time, as it is passed on and gets compounded through other bodies in a household, it can become a family norm. And if it gets transmitted and compounded through multiple families and generations, it can start to look like culture.

But it isn’t culture. It’s a traumatic retention that has lost its context over time. Though without context, it has not lost its power. Traumatic retentions can have a profound effect on what we do, think, feel, believe, experience, and find meaningful. (We’ll look at some examples shortly.)

What we call out as individual personality flaws, dysfunctional family dynamics, or twisted cultural norms are sometimes manifestations of historical trauma. These traumatic retentions may have served a purpose at one time—provided protection, supported resilience, inspired hope, etc.—but generations later, when adaptations continue to be acted out in situations where they are no longer necessary or helpful, they get defined as dysfunctional behavior on the individual, family, or cultural level.

The transference of trauma isn’t just about how human beings treat each other. Trauma can also be inherited genetically. Recent work in genetics has revealed that trauma can change the expression of the DNA in our cells, and these changes can be passed from parent to child. 14

And it gets weirder. We now have evidence that memories connected to painful events also get passed down from parent to child—and to that child’s child. What’s more, these experiences appear to be held, passed on, and inherited in the body, not just in the thinking brain. 15 Often people experience this as a persistent sense of imminent doom—the trauma ghosting I wrote about earlier.

We are only beginning to understand how these processes work, and there are a lot of details we don’t know yet. Having said that, here is what we do know so far:

  • A fetus growing inside the womb of a traumatized mother may inherit some of that trauma in its DNA expression. This results in the repeated release of stress hormones, which may affect the nervous system of the developing fetus.
  • A man with unhealed trauma in his body may produce sperm with altered DNA expression. These in turn may inhibit the healthy functioning of cells in his children.
  • Trauma can alter the DNA expression of a child or grandchild’s brain, causing a wide range of health and mental health issues, including memory loss, chronic anxiety, muscle weakness, and depression.
  • Some of these effects seem particularly prevalent among African Americans, Jews, and American Indians, three groups who have experienced an enormous amount of historical trauma.

Some scientists theorize this genetic alteration may be a way to protect later generations. Essentially, genetic changes train our descendants’ bodies through heredity rather than behavior. This suggests that what we call genetic defects may actually be ways to increase our descendants’ odds of survival in a potentially dangerous environment, by relaying hormonal information to the fetus in the womb.

The womb is itself an environment: a watery world of sounds, movement, and human biochemicals. Recent research suggests that, during the last trimester of pregnancy, fetuses in the womb can learn and remember just as well as newborns. 16 Part of what they may learn, based on what their mothers go through during pregnancy, is whether the world outside the womb is safe and healthy or dangerous and toxic. […]

Zoë Carpenter sums this up in a simple, stark observation:

Health experts now think that stress throughout the span of a woman’s life can prompt biological changes that affect the health of her future children. Stress can disrupt immune, vascular, metabolic, and endocrine systems, and cause cells to age more quickly. 17 […]

These are the effects of trauma involving specific incidents. But what about the effects of repetitive trauma: unhealed traumas that accumulate over time? The research is now in: the effects on the body from trauma that is persistent (or pervasive, repetitive, or long-held) are significantly negative, sometimes profoundly so. While many studies support this conclusion, 19 the largest and best known is the Adverse Childhood Experiences Study (ACES), a large study of 17,000 people 20 conducted over three decades by the Centers for Disease Control and Prevention (CDC) and the healthcare conglomerate Kaiser Permanente. Published in 2014, ACES clearly links childhood trauma (and other “adverse childhood events” involving abuse or neglect 21) to a wide range of long-term health and social consequences, including illness, disability, social problems, and early death—all of which can get passed down through the generations. The ACE study also demonstrates a strong link between the number of “adverse childhood events” and increased rates of heart disease, cancer, stroke, diabetes, chronic lung disease, alcoholism, depression, liver disease, and sexually transmitted diseases, as well as illicit drug use, financial stress, poor academic and work performance, pregnancy in adolescence, and attempted suicide. People who have experienced four or more “adverse events” as children are twice as likely to develop heart disease than people who have experienced none. They are also twice as likely to develop autoimmune diseases, four and a half times as likely to be depressed, ten times as likely to be intravenous drug users, and twelve times as likely to be suicidal. As children, they are thirty-three times as likely to have learning and behavior problems in school.

Pediatrician Nadine Burke-Harris offers the following apt comparison: “If a child is exposed to lead while their brain is developing, it affects the long-term development of their brain . . . It’s the same way when a child is exposed to high doses of stress and trauma while their brain is developing . . . Exposure to trauma is particularly toxic for children.” In other words, there is a biochemical component behind all this.

When people experience repeated trauma, abuse, or high levels of stress for long stretches of time, a variety of stress hormones get secreted into their bloodstreams. In the short term, the purpose of these chemicals is to protect their bodies. But when the levels of these chemicals 22 remain high over time, they can have toxic effects, making a person less healthy, less resilient, and more prone to illness. High levels of one or more of these chemicals can also crowd out other, healthier chemicals—those that encourage trust, intimacy, motivation, and meaning. […]

The results of the ACE study are dramatic. Yet it covered only fifteen years. How much more dramatic might the results be for people who have experienced (or whose ancestors experienced) centuries of enslavement or genocide? 23

Historical trauma, intergenerational trauma, institutionalized trauma (such as white-body supremacy, gender discrimination, sexual orientation discrimination, etc.), and personal trauma (including any trauma we inherit from our families genetically, or through the way they treat us, or both) often interact. As these traumas compound each other, or as each new or recent traumatic experience triggers the energy of older experiences, they can create ever-increasing damage to human lives and human bodies.

* * *


13 Over time, roles can switch and the oppressed may become the oppressors. They then pass on trauma not only to their children, but also to a new group of victims.

14 This research has led to the creation of a new field of scientific inquiry known as epigenetics, the study of inheritable changes in gene expression. Epigenetics has transformed the way scientists think about genomes. The first study to clearly show that stress can cause inheritable gene defects in humans was published in 2015 by Rachel Yehuda and her colleagues, titled “Holocaust Exposure Induced Intergenerational Effects on FKBP5 Methylation” (Biological Psychiatry 80, no. 5, September 2016: 372–80). (Earlier studies identified the same effect in animals.) Yehuda’s study demonstrated that damaged genes in the bodies of Jewish Holocaust survivors—the result of the trauma they suffered under Nazism—were passed on to their children. Later research confirms Yehuda’s conclusions.

15 A landmark study demonstrating this effect in mice was published in 2014 by Kerry Ressler and Brian Dias (“Parental Olfactory Experience Influences Behavior and Neural Structure in Subsequent Generations,” Nature Neuroscience 17: 89–96). Ressler and Dias put male mice in a small chamber, then occasionally exposed them to the scent of acetophenone (which smells like cherries)—and, simultaneously, to small electric shocks. Eventually the mice associated the scent with pain; they would shudder whenever they were exposed to the smell, even after the shocks were discontinued. The children of those mice were born with a fear of the smell of acetophenone. So were their grandchildren. As of this writing, no one has completed a similar study on humans, both for ethical reasons and because we take a lot longer than mice to produce a new generation.

16 A good, if very brief, overview of these studies appeared in Science: .

17 This quote is from an eye-opening article in The Nation, “What’s Killing America’s Black Infants?”: . Carpenter also notes that in the United States, Black infants die at a rate that’s over twice as high as for white infants. In some cities, the disparity is much worse: in Washington, DC, the infant mortality rate in Ward 8, which is over 93 percent Black, is ten times the rate in Ward 3, which is well-to-do and mostly white. […]

19 See, for example: “Early Trauma and Inflammation” (Psychosomatic Medicine 74, no. 2, February/March 2012: 146–52); “Chronic Stress, Glucocorticoid Receptor Resistance, Inflammation, and Disease Risk” (Proceedings of the National Academy of Sciences 109, no. 16, April 17, 2012: 5995–99); and “Adverse Childhood Experiences and Adult Risk Factors for Age-Related Disease: Depression, Inflammation, and Clustering of Metabolic Risk Markers” (Archives of Pediatrics and Adolescent Medicine 163, no. 12, December 2009: 1135–43).

20 Of the people studied, 74.8 percent were white; 4.5 percent were African American; 54 percent were female; and 46 percent were male.

21 The ten “adverse childhood events” are divorced or separated parents; physical abuse; physical neglect; emotional abuse; emotional neglect; sexual abuse; domestic violence that the child witnessed; substance abuse in the household; mental illness in the household; and a family member in prison.

22 These chemicals are cortisol, adrenaline, and norepinephrine. They are secreted by the adrenal gland.

23 Please don’t imagine that we African Americans claim to have cornered the market on adverse childhood experiences. In fact, in his brilliant book Hillbilly Elegy: A Memoir of a Family and Culture in Crisis (New York: HarperCollins, 2016), white Appalachian J. D. Vance cites the ACE study in reference to himself, his sister Lindsay, and “my corner of the demographic world”: working-class Americans. As Vance notes, “Four in every ten working-class people had faced multiple instances of childhood trauma.” If you want to deeply understand the hearts, psyches, and bodies of many Americans today, you can do no better than to read both Hillbilly Elegy and Ta-Nehisi Coates’s Between the World and Me (New York: Spiegel & Grau, 2015).

* * *

What white bodies did to Black bodies they did to other white bodies first.
Janice Barbee

* * *

From Genetic Literacy Project:

Childhood trauma: The kids are not alright and part of the explanation may be linked to epigenetics
Your DNA may have been altered by childhood stress and traumas
Childhood trauma leaves mark on DNA of some victims
Is the genetic imprint of traumatic experiences passed on to our children?
Do parents pass down trauma to their children?
Was trauma from Holocaust passed on to children of survivors?
Holocaust survivors studied to determine if trauma-induced mental illness can be inherited
Epigenetics, pregnancy and the Holocaust: How trauma can shape future generations
Epigenetic inheritance: Holocaust survivors passed genetic marks of trauma to children
How epigenetics, our gut microbiome and the environment interact to change our lives
Skin microbiomes differ largely between cultures, more diverse sampling is needed
Cities have unique microbiome ‘fingerprint,’ study finds
Your microbiome isn’t just in you: It’s all around you
Microbes, like genes, pass from one generation to next
Microbiome profile highlights diet, upbringing and birth
Baby’s microbiome may come from mom’s mouth via placenta


State and Non-State Violence Compared

There is a certain kind of academic that simultaneously interests me and infuriates me. Jared Diamond, in The World Until Yesterday, is an example of this. He is a knowledgeable guy and is able to communicate that knowledge in a coherent way. He makes many worthy observations and can be insightful. But there is also a naivete that at times shows up in his writing. I get the sense that occasionally his conclusions preceded the evidence he shares. Also, he’ll point out the problems with the evidence and then, ignoring what he admitted, will treat that evidence as strongly supporting his biased preconceptions.

Despite my enjoyment of Diamond’s book, I was disappointed specifically in his discussion of violence and war (much of the rest of the book, though, is worthy and I recommend it). Among the intellectual elite, it seems fashionable right now to describe modern civilization as peaceful — that is, fashionable among the main beneficiaries of modern civilization, not so much among those who bear the brunt of the costs.

In Chapter 4, he asks, “Did traditional warfare increase, decrease, or remain unchanged upon European contact?” That is a good question. And as he makes clear, “This is not a straightforward question to decide, because if one believes that contact does affect the intensity of traditional warfare, then one will automatically distrust any account of it by an outside observer as having been influenced by the observer and not representing the pristine condition.” But he never answers the question. He simply assumes that the evidence proves what he appears to have already believed.

I’m not saying he doesn’t make a significant effort to build a case. He goes on to say, “However, the mass of archaeological evidence and oral accounts of war before European contact discussed above makes it far-fetched to maintain that people were traditionally peaceful until those evil Europeans arrived and messed things up.” The archaeological and oral evidence, like the anthropological evidence, is diverse. For example, in northern Europe, there is no evidence of large-scale warfare before the end of the Bronze Age, when multiple collapsing civilizations created waves of refugees and marauders.

All the evidence shows us is that some non-state societies have been violent and others non-violent, no different than in comparing state societies. But we must admit, as Diamond does briefly, that contact and the rippling influences of contact across wide regions can lead to greater violence along with other alterations in the patterns of traditional culture and lifestyle. Before contact ever happens, most non-state societies have already been influenced by trade, disease, environmental destruction, invasive species, refugees, etc. Such pre-contact indirect influences can last for generations or centuries prior to final contact, especially with non-state societies that were more secluded. And those secluded populations are the most likely to be studied as supposedly representative of uncontacted conditions.

We should be honest in admitting our vast ignorance. The problem is that, if Diamond fully admitted this, he would have little to write about on such topics or it would be a boring book with all of the endless qualifications (I personally like scholarly books filled with qualifications, but most people don’t). He is in the business of popular science and so speculation is the name of the game he is playing. Some of his speculations might not hold up to much scrutiny, not that the average reader will offer much scrutiny.

He continues to claim that “the evidence of traditional warfare, whether based on direct observation or oral histories or archaeological evidence, is so overwhelming.” And so asks, “why is there still any debate about its importance?” What a silly question. We simply don’t know. He could be right, just as easily as he could be wrong. Speculations are a dime a dozen. The same evidence can be and regularly is made to conform to and confirm endless hypotheses that are mostly non-falsifiable. We don’t know and probably will never know. It’s like trying to use chimpanzees as a comparison for human nature, even though chimpanzees have for a long time been in a conflict zone with human encroachment, poaching, civil war, habitat loss, and ecosystem destabilization. No one knows what chimpanzees were like pre-contact. But we do know that bonobos that live across a major river in a less violent area express less violent behavior. Maybe there is a connection, not that Diamond is as likely to mention these kinds of details.

I do give him credit, though. He knows he is on shaky ground. In pointing out the problems he previously discussed, he writes that, “One reason is the real difficulties, which we have discussed, in evaluating traditional warfare under pre-contact or early-contact conditions. Warriors quickly discern that visiting anthropologists disapprove of war, and the warriors tend not to take anthropologists along on raids or allow them to photograph battles undisturbed: the filming opportunities available to the Harvard Peabody Expedition among the Dani were unique. Another reason is that the short-term effects of European contact on tribal war can work in either direction and have to be evaluated case by case with an open mind.” In between the lines, Jared Diamond makes clear that he can’t really know much of anything about earlier non-state warfare.

Even as he mentions some archaeological sites showing evidence of mass violence, he doesn’t clarify that these sites are a small percentage of archaeological sites, most of which don’t show mass violence. It’s not as if anyone is arguing mass violence never happened prior to civilization. The Noble Savage myth is not widely supported these days and so there is no point in his propping it up as a straw man to knock down.

From my perspective, it goes back to what comparisons one wishes to make. Non-state societies may or may not be more violent per capita. But that doesn’t change the reality that state societies cause more harm, as a total number. Consider one specific example of state warfare. The United States has been continuously at war since it was founded, which is to say not a year has gone by without war (against both state and non-state societies), and most of that has been wars of aggression. The US military, CIA covert operations, economic sanctions, etc., surely have killed at least hundreds of millions of people in my lifetime — probably more people killed than by all non-states combined throughout human existence.

Here is the real difference in violence between non-states and states. State violence is more hierarchically controlled and targeted in its destruction. Non-state societies, on the other hand, tend to spread the violence across entire populations. When a tribe goes to war, often the whole tribe is involved. So state societies are different in that usually only the poor and minorities, the oppressed and disadvantaged experience the harm. If you look at the specifically harmed populations in state societies, the mortality rate is probably higher than seen in non-state societies. The essential point is that this violence is concentrated and hidden.
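To make that rate-versus-total distinction concrete, here is a minimal sketch with purely hypothetical numbers; none of them come from the studies or claims discussed here. It only illustrates the arithmetic: a small society with a high per-capita violence rate still produces few victims in absolute terms, a large state society with a low average rate produces vastly more, and a national average can hide a disadvantaged subpopulation whose rate approaches that of the small society.

```python
# Purely hypothetical, illustrative numbers -- not data from any study cited here.
# The point is only the arithmetic behind "per capita" versus "total harm".

def violent_deaths(population, rate_per_100k):
    """Absolute number of violent deaths implied by a per-capita rate."""
    return population * rate_per_100k / 100_000

# A small non-state society: very high rate, tiny population.
tribe = violent_deaths(population=1_500, rate_per_100k=500)

# A large state society: low average rate, huge population.
state = violent_deaths(population=300_000_000, rate_per_100k=50)

# A disadvantaged subpopulation within that state, bearing most of the harm.
subgroup = violent_deaths(population=30_000_000, rate_per_100k=400)

print(f"small society:  {tribe:>9,.0f} deaths")     # ~8
print(f"state society:  {state:>9,.0f} deaths")     # 150,000
print(f"state subgroup: {subgroup:>9,.0f} deaths")  # 120,000
```

The numbers are arbitrary, but the pattern is the one argued above: judged by rate, the small society looks worse; judged by absolute harm, or by the rate within the targeted subpopulation, the state society does.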

Immensely larger numbers of people are the victims of modern state violence, overt violence and slow violence. But the academics who write about it never have to personally experience or directly observe these conditions of horror, suffering, and despair. Modern civilization is less violent for the liberal class, of which academics are members. That doesn’t say much about the rest of the global population. The permanent underclass lives in constant violence within their communities and from state governments, which leads to a different view on the matter.

To emphasize this bias, one could further note what Jared Diamond ignores or partly reports. In the section where he discusses violence, he briefly mentions the Piraha. He could have pointed out that they are a non-violent non-state society. They have no known history of warfare, capital punishment, abuse, homicide, or suicide — at least none has been observed or discovered through interviews. Does he write about this evidence that contradicts his views? Of course not. Instead, lacking any evidence of violence, he speculates about violence. Here is the passage from Chapter 2 (pp. 93-94):

“Among still another small group, Brazil’s Piraha Indians (Plate 11), social pressure to behave by the society’s norms and to settle disputes is applied by graded ostracism. That begins with excluding someone from food-sharing for a day, then for several days, then making the person live some distance away in the forest, deprived of normal trade and social exchanges. The most severe Piraha sanction is complete ostracism. For instance, a Piraha teen-ager named Tukaaga killed an Apurina Indian named Joaquim living nearby, and thereby exposed the Piraha to the risk of a retaliatory attack. Tukaaga was then forced to live apart from all other Piraha villages, and within a month he died under mysterious circumstances, supposedly of catching a cold, but possibly instead murdered by other Piraha who felt endangered by Tukaaga’s deed.”

Why did he add that unfounded speculation at the end? The only evidence he has is that their methods of social conformity are non-violent. Someone is simply ostracized. But that doesn’t fit his beliefs. So he assumes there must be some hidden violence that has never been discovered after generations of observers having lived among them. Even the earliest account of contact from centuries ago, as far as I know, indicates absolutely no evidence of violence. It makes one wonder how many more examples he ignores, dismisses, or twists to fit his preconceptions.

This reminds me of Julian Jaynes’ theory of bicameral societies. He noted that these Bronze Age societies were non-authoritarian, despite having high levels of social conformity. There is no evidence of these societies having had written laws, courts, police forces, formal systems of punishment, or standing armies. Like non-state tribal societies, when they went to war, the whole population sometimes was mobilized. Bicameral societies were smaller, mostly city-states, and so still had elements of tribalism. But the point is that the enculturation process itself was powerful enough to enforce order without violence. That was only within a society, as war still happened between societies, although it was limited and usually only involved neighboring societies. I don’t think there is evidence of continual warfare. Yet when conflict erupted, it could lead to total war.

It’s hard to compare either tribes or ancient city-states to modern nation-states. Their social orders and how they maintained them are far different. And the violence involved is of a vastly disparate scale. Besides, I wouldn’t take the past half century of relative peace in the Western world as being representative of modern civilization. In this new century, we might see billions of deaths from all combined forms of violence. And the centuries earlier were some of the bloodiest and most destructive ever recorded. Imperialism and colonialism, along with the legacy systems of neo-imperialism and neo-colonialism, have caused and contributed to the genocide or cultural destruction of probably hundreds of thousands of societies worldwide, in most cases with all evidence of their existence having disappeared. This wholesale massacre has led to a dearth of societies left remaining with which to make comparisons. The survivors living in isolated niches may not be representative of the societal diversity that once existed.

Anyway, the variance of violence and war casualty rates likely is greater in comparing societies of the same kind than in comparing societies of different kinds. As the nearby bonobos are more peaceful than chimpanzees, the Piraha are more peaceful than the Yanomami who live in the same region — as Canada is more peaceful than the US. That might be important to explain and a lot more interesting. But this more incisive analysis wouldn’t fit Western propaganda, specifically the neo-imperial narrative of Pax Americana. From Pax Hispanica to Pax Britannica to Pax Americana, quite possibly billions of combatants have died in wars and billions more innocents as casualties. That is neither a small percentage nor a small total number, if anyone is genuinely concerned about body counts.

* * *

Rebutting Jared Diamond’s Savage Portrait
by Paul Sillitoe & Mako John Kuwimb, iMediaEthics

Why Does Jared Diamond Make Anthropologists So Mad?
by Barbara J. King, NPR

In a beautifully written piece for The Guardian, Wade Davis says that Diamond’s “shallowness” is what “drives anthropologists to distraction.” For Davis, geographer Diamond doesn’t grasp that “cultures reside in the realm of ideas, and are not simply or exclusively the consequences of climatic and environmental imperatives.”

Rex Golub at Savage Minds slams the book for “a profound lack of thought about what it would mean to study human diversity and how to make sense of cultural phenomena.” In a fit of vexed humor, the Wenner-Gren Foundation for anthropological research tweeted Golub’s post along with this comment: “@savageminds once again does the yeoman’s work of exploring Jared Diamond’s new book so the rest of us don’t have to.”

This biting response isn’t new; see Jason Antrosio’s post from last year in which he calls Diamond’s Pulitzer Prize-winning Guns, Germs, and Steel a “one-note riff,” even “academic porn” that should not be taught in introductory anthropology courses.

Now, in no way do I want to be the anthropologist who defends Diamond because she just doesn’t “get” what worries all the cool-kid anthropologists about his work. I’ve learned from their concerns; I’m not dismissing them.

In point of fact, I was startled at this passage on the jacket of The World Until Yesterday: “While the gulf that divides us from our primitive ancestors may seem unbridgably wide, we can glimpse most of our former lifestyle in those largely traditional societies that still exist or were recently in existence.” This statement turns small-scale societies into living fossils, the human equivalent of ancient insects hardened in amber. That’s nonsense, of course.

Lest we think to blame a publicist (rather than the author) for that lapse, consider the text itself. Near the start, Diamond offers a chronology: until about 11,000 years ago, all people lived off the land, without farming or domesticated animals. Only around 5,400 years ago did the first state emerge, with its dense population, labor specialization and power hierarchy. Then Diamond fatally overlays that past onto the present: “Traditional societies retain features of how all of our ancestors lived for tens of thousands of years, until virtually yesterday.” Ugh.

Another problem, one I haven’t seen mentioned elsewhere, bothers me just as much. When Diamond urges his WEIRD readers to learn from the lifeways of people in small-scale societies, he concludes: “We ourselves are the only ones who created our new lifestyles, so it’s completely in our power to change them.” Can he really be so unaware of the privilege that allows him to assert — or think — such a thing? Too many people living lives of poverty within industrialized nations do not have it “completely in their power” to change their lives, to say the least.

Patterns of Culture by Ruth Benedict (1934) wins Jared Diamond (2012)
by Jason Antrosio, Living Anthropologically

Compare to Jared Diamond. Diamond has of course acquired some fame for arguing against biological determinism, and his Race Without Color was once a staple for challenging simplistic tales of biological race. But by the 1990s, Diamond simply echoes perceived liberal wisdom. Benedict and Weltfish’s Races of Mankind was banned by the Army as Communist propaganda, and Weltfish faced persecution from McCarthyism (Micaela di Leonardo, Exotics at Home 1998:196,224; see also this Jon Marks comment on Gene Weltfish). Boas and Benedict swam against the current of the time, when backlash could be brutal. In contrast, Diamond’s claims on race and IQ have mostly been anecdotal. They have never been taken seriously by those who call themselves “race realists” (see Jared Diamond won’t beat Mitt Romney). Diamond has never responded scientifically to the re-assertion of race from sources like “A Family Tree in Every Gene,” and he helped propagate a medical myth about racial differences in hypertension.

And, of course, although Guns, Germs, and Steel has been falsely branded as environmental or geographical determinism, there is no doubt that Diamond leans heavily on agriculture and geography as explanatory causes for differential success. […]

Compare again Jared Diamond. Diamond has accused anthropologists of falsely romanticizing others, but by subtitling his book What Can We Learn from Traditional Societies, Diamond engages in more than just politically-correct euphemism. When most people think of a “traditional society,” they are thinking of agrarian peasant societies or artisan handicrafts. Diamond, however, is referring mainly to what we might term tribal societies, or hunters and gatherers with some horticulture. Curiously, for Diamond the dividing line between the yesterday of traditional and the today of the presumably modern was somewhere around 5,000-6,000 years ago (see The Colbert Report). As John McCreery points out:

Why, I must ask, is the category “traditional societies” limited to groups like Inuit, Amazonian Indians, San people and Melanesians, when the brute fact of the matter is that the vast majority of people who have lived in “traditional” societies have been peasants living in traditional agricultural civilizations over the past several thousand years since the first cities appeared in places like the valleys of the Nile, the Tigris-Euphrates, the Ganges, the Yellow River, etc.? Talk about a big blind spot.

Benedict draws on the work of others, like Reo Fortune in Dobu and Franz Boas with the Kwakiutl. Her own ethnographic experience was limited. But unlike Diamond, Benedict was working through the best ethnographic work available. Diamond, in contrast, splays us with a story from Allan Holmberg, which then gets into the New York Times, courtesy of David Brooks. Compare bestselling author Charles Mann on “Holmberg’s Mistake” (the first chapter of his 1491: New Revelations of the Americas Before Columbus):

The wandering people Holmberg traveled with in the forest had been hiding from their abusers. At some risk to himself, Holmberg tried to help them, but he never fully grasped that the people he saw as remnants from the Paleolithic Age were actually the persecuted survivors of a recently shattered culture. It was as if he had come across refugees from a Nazi concentration camp, and concluded that they belonged to a culture that had always been barefoot and starving. (Mann 2005:10)

As for Diamond’s approach to comparing different groups: “Despite claims that Diamond’s book demonstrates incredible erudition what we see in this prologue is a profound lack of thought about what it would mean to study human diversity and how to make sense of cultural phenomenon” (Alex Golub, How can we explain human variation?).

Finally there is the must-read review Savaging Primitives: Why Jared Diamond’s ‘The World Until Yesterday’ Is Completely Wrong by Stephen Corry, Director of Survival International:

Diamond adds his voice to a very influential sector of American academia which is, naively or not, striving to bring back out-of-date caricatures of tribal peoples. These erudite and polymath academics claim scientific proof for their damaging theories and political views (as did respected eugenicists once). In my own, humbler, opinion, and experience, this is both completely wrong–both factually and morally–and extremely dangerous. The principal cause of the destruction of tribal peoples is the imposition of nation states. This does not save them; it kills them.

[…] Indeed, Jared Diamond has been praised for his writing, for making science popular and palatable. Others have been less convinced. As David Brooks reviews:

Diamond’s knowledge and insights are still awesome, but alas, that vividness rarely comes across on the page. . . . Diamond’s writing is curiously impersonal. We rarely get to hear the people in traditional societies speak for themselves. We don’t get to meet any in depth. We don’t get to know what their stories are, what the contents of their religions are, how they conceive of individual selfhood or what they think of us. In this book, geographic and environmental features play a much more important role in shaping life than anything an individual person thinks or feels. The people Diamond describes seem immersed in the collective. We generally don’t see them exercising much individual agency. (Tribal Lessons; of course, Brooks may be smarting from reviews that called his book The Dumbest Story Ever Told)

[…] In many ways, Ruth Benedict does exactly what Wade Davis wanted Jared Diamond to do–rather than providing a how-to manual of “tips we can learn,” to really investigate the existence of other possibilities:

The voices of traditional societies ultimately matter because they can still remind us that there are indeed alternatives, other ways of orienting human beings in social, spiritual and ecological space. This is not to suggest naively that we abandon everything and attempt to mimic the ways of non-industrial societies, or that any culture be asked to forfeit its right to benefit from the genius of technology. It is rather to draw inspiration and comfort from the fact that the path we have taken is not the only one available, that our destiny therefore is not indelibly written in a set of choices that demonstrably and scientifically have proven not to be wise. By their very existence the diverse cultures of the world bear witness to the folly of those who say that we cannot change, as we all know we must, the fundamental manner in which we inhabit this planet. (Wade Davis review of Jared Diamond; and perhaps one of the best contemporary versions of this project is Wade Davis, The Wayfinders: Why Ancient Wisdom Matters in the Modern World)

[…] This history reveals the major theme missing from both Benedict’s Patterns of Culture and especially missing from Diamond–an anthropology of interconnection. That as Eric Wolf described in Europe and the People Without History peoples once called primitive–now perhaps more politely termed tribal or traditional–were part of a co-production with Western colonialism. This connection and co-production had already been in process long before anthropologists arrived on the scene. Put differently, could the Dobuan reputation for being infernally nasty savages have anything to do with the white recruiters of indentured labour, which Benedict mentions (1934:130) but then ignores? Could the revving up of the Kwakiutl potlatch and megalomaniac gamuts have anything to do with the fur trade?

The Collapse Of Jared Diamond
by Louis Proyect, Swans Commentary

In general, the approach of the authors is to put the ostensible collapse into historical context, something that is utterly lacking in Diamond’s treatment. One of the more impressive record-correcting exercises is Terry L. Hunt and Carl P. Lipo’s Ecological Catastrophe, Collapse, and the Myth of “Ecocide” on Rapa Nui (Easter Island). In Collapse, Diamond judged Easter Island as one of the more egregious examples of “ecocide” in human history, a product of the folly of the island’s rulers whose decision to construct huge statues led to deforestation and collapse. By chopping down huge palm trees that were used to transport the stones used in statue construction, the islanders were effectively sealing their doom. Not only did the settlers chop down trees, they hunted the native fauna to extinction. The net result was a loss of habitat that led to a steep population decline.

Diamond was not the first observer to call attention to deforestation on Easter Island. In 1786, a French explorer named La Pérouse also attributed the loss of habitat to the “imprudence of their ancestors for their present unfortunate situation.”

Referring to research about Easter Island by scientists equipped with the latest technologies, the authors maintain that the deforestation had nothing to do with transporting statues. Instead, it was an accident of nature related to the arrival of rats in the canoes of the earliest settlers. Given the lack of native predators, the rats had a field day and consumed the palm nuts until the trees were no longer reproducing themselves at a sustainable rate. The settlers also chopped down trees to make a space for agriculture, but the idea that giant statues had anything to do with the island’s collapse is as much of a fiction as Diamond’s New Yorker article.

Unfortunately, Diamond is much more interested in ecocide than genocide. If people interested him half as much as palm trees, he might have said a word or two about the precipitous decline in population that occurred after the island was discovered by Europeans in 1722. Indeed, despite deforestation there is evidence that the island’s population grew between 1250 and 1650, the period when deforestation was taking place — leaving aside the question of its cause. As was the case when Europeans arrived in the New World, a native population was unable to resist diseases such as smallpox and died in massive numbers. Of course, Diamond would approach such a disaster with his customary Olympian detachment and write it off as an accident of history.

While all the articles pretty much follow the same narrowly circumscribed path as the one on Easter Island, there is one that adopts the Grand Narrative that Jared Diamond has made a specialty of and beats him at his own game. I am referring to the final article, Sustainable Survival by J.R. McNeill, who describes himself in a footnote thusly: "Unlike most historians, I have no real geographic specialization and prefer — like Jared Diamond — to hunt for large patterns in the human past."

And one of those “large patterns” ignored by Diamond is colonialism. The greatest flaw in Collapse is that it does not bother to look at the impact of one country on another. By treating countries in isolation from one another, it becomes much easier to turn the “losers” into examples of individual failing. So when Haiti is victimized throughout the 19th century for having the temerity to break with slavery, this hardly enters into Diamond’s moral calculus.

Compassion Sets Humans Apart
by Penny Spikins, Sapiens

There are, perhaps surprisingly, only two known cases of likely interpersonal violence in the archaic species most closely related to us, Neanderthals. That’s out of a total of about 30 near-complete skeletons and 300 partial Neanderthal finds. One—a young adult living in what is now St. Césaire, France, some 36,000 years ago—had the front of his or her skull bashed in. The other, a Neanderthal found in Shanidar Cave in present-day Iraq, was stabbed in the ribs between 45,000 and 35,000 years ago, perhaps by a projectile point shot by a modern human.

The earliest possible evidence of what might be considered warfare or feuding doesn’t show up until some 13,000 years ago at a cemetery in the Nile Valley called Jebel Sahaba, where many of the roughly 60 Homo sapiens individuals appear to have died a violent death.

Evidence of human care, on the other hand, goes back at least 1.5 million years—to long before humans were anatomically modern. A Homo ergaster female from Koobi Fora in Kenya, dated to about 1.6 million years ago, survived several weeks despite a toxic overaccumulation of vitamin A. She must have been given food and water, and protected from predators, to live long enough for this disease to leave a record in her bones.

Such evidence becomes even more notable by half a million years ago. At Sima de los Huesos (Pit of Bones), a site in Spain occupied by ancestors of Neanderthals, three of 28 individuals found in one pit had severe pathology—a girl with a deformed head, a man who was deaf, and an elderly man with a damaged pelvis—but they all lived for long periods of time despite their conditions, indicating that they were cared for. At the same site in Shanidar where a Neanderthal was found stabbed, researchers discovered another skeleton who was blind in one eye and had a withered arm and leg as well as hearing loss, which would have made it extremely hard or impossible to forage for food and survive. His bones show he survived for 15 to 20 years after injury.

At a site in modern-day Vietnam called Man Bac, which dates to around 3,500 years ago, a man with almost complete paralysis and frail bones was looked after by others for over a decade; he must have received care that would be difficult to provide even today.

All of these acts of caring lasted for weeks, months, or years, as opposed to a single moment of violence.

Violence, Okinawa, and the ‘Pax Americana’
by John W. Dower, The Asia-Pacific Journal

In American academic circles, several influential recent books argue that violence declined significantly during the Cold War, and even more precipitously after the demise of the Soviet Union in 1991. This reinforces what supporters of US strategic policy including Japan’s conservative leaders always have claimed. Since World War II, they contend, the militarized Pax Americana, including nuclear deterrence, has ensured the decline of global violence.

I see the unfolding of the postwar decades through a darker lens.

No one can say with any certainty how many people were killed in World War II. Apart from the United States, catastrophe and chaos prevailed in almost every country caught in the war. Beyond this, even today criteria for identifying and quantifying war-related deaths vary greatly. Thus, World War II mortality estimates range from an implausible low of 50 million military and civilian fatalities worldwide to as many as 80 million. The Soviet Union, followed by China, suffered by far the greatest number of these deaths.

Only when this slaughter is taken as a baseline does it make sense to argue that the decades since World War II have been relatively non-violent.

The misleading euphemism of a “Cold War” extending from 1945 to 1991 helps reinforce the decline-of-violence argument. These decades were “cold” only to the extent that, unlike World War II, no armed conflict took place pitting the major powers directly against one another. Apart from this, these were years of mayhem and terror of every imaginable sort, including genocides, civil wars, tribal and ethnic conflicts, attempts by major powers to suppress anti-colonial wars of liberation, and mass deaths deriving from domestic political policies (as in China and the Soviet Union).

In pro-American propaganda, Washington’s strategic and diplomatic policies during these turbulent years and continuing to the present day have been devoted to preserving peace, defending freedom and the rule of law, promoting democratic values, and ensuring the security of its friends and allies.

What this benign picture ignores is the grievous harm as well as plain folly of much postwar US policy. This extends to engaging in atrocious war conduct, initiating never-ending arms races, supporting illiberal authoritarian regimes, and contributing to instability and humanitarian crises in many parts of the world.

Such destructive behavior was taken to new levels in the wake of the September 11, 2001, attack on the World Trade Center and Pentagon by nineteen Islamist hijackers. America’s heavy-handed military response has contributed immeasurably to the proliferation of global terrorist organizations, the destabilization of the Greater Middle East, and a flood of refugees and internally displaced persons unprecedented since World War II.

Afghanistan and Iraq, invaded following September 11, remain shattered and in turmoil. Neighboring countries are wracked with terror and insurrection. In 2016, the last year of Barack Obama’s presidency, the US military engaged in bombing and air strikes in no less than seven countries (Afghanistan, Iraq, Pakistan, Somalia, Yemen, Libya, and Syria). At the same time, elite US “special forces” conducted largely clandestine operations in an astonishing total of around 140 countries–amounting to almost three-quarters of all the nations in the world.

Overarching all this, like a giant cage, is America’s empire of overseas military bases. The historical core of these bases in Germany, Japan, and South Korea dates back to after World War II and the Korean War (1950-1953), but the cage as a whole spans the globe and is constantly being expanded or contracted. The long-established bases tend to be huge. Newer installations are sometimes small and ephemeral. (The latter are known as “lily pad” facilities, and now exist in around 40 countries.) The total number of US bases presently is around 800.

Okinawa has exemplified important features of this vast militarized domain since its beginnings in 1945. Current plans to relocate US facilities to new sites like Henoko, or to expand to remote islands like Yonaguni, Ishigaki, and Miyako in collaboration with Japanese Self Defense Forces, reflect the constant presence but ever changing contours of the imperium. […]

These military failures are illuminating. They remind us that with but a few exceptions (most notably the short Gulf War against Iraq in 1991), the postwar US military has never enjoyed the sort of overwhelming victory it experienced in World War II. The “war on terror” that followed September 11 and has dragged on to the present day is not unusual apart from its seemingly endless duration. On the contrary, it conforms to this larger pattern of postwar US military miscalculation and failure.

These failures also tell us a great deal about America’s infatuation with brute force, and the double standards that accompany this. In both wars, victory proved elusive in spite of the fact that the United States unleashed devastation from the air greater than anything ever seen before, short of using nuclear weapons.

This usually comes as a surprise even to people who are knowledgeable about the strategic bombing of Germany and Japan in World War II. The total tonnage of bombs dropped on Korea was four times greater than the tonnage dropped on Japan in the US air raids of 1945, and destroyed most of North Korea’s major cities and thousands of its villages. The tonnage dropped on the three countries of Indochina was forty times greater than the tonnage dropped on Japan. The death tolls in both Korea and Indochina ran into the millions.

Here is where double standards enter the picture.

This routine US targeting of civilian populations between the 1940s and early 1970s amounted to state-sanctioned terror bombing aimed at destroying enemy morale. Although such frank labeling can be found in internal documents, it usually has been taboo in pro-American public commentary. After September 11, in any case, these precedents were thoroughly scrubbed from memory.

“Terror bombing” has been redefined to now mean attacks by “non-state actors” motivated primarily by Islamist fundamentalism. “Civilized” nations and cultures, the story goes, do not engage in such atrocious behavior. […]

Nuclear weapons were removed from Okinawa after 1972, and the former US and Soviet nuclear arsenals have been substantially reduced since the collapse of the USSR. Nonetheless, today’s US and Russian arsenals are still capable of destroying the world many times over, and US nuclear strategy still explicitly targets a considerable range of potential adversaries. (In 2001, under President George W. Bush, these included China, Russia, Iraq, Iran, North Korea, Syria, and Libya.)

Nuclear proliferation has spread to nine nations, and over forty other countries including Japan remain what experts call “nuclear capable states.” When Barack Obama became president in 2009, there were high hopes he might lead the way to eliminating nuclear weapons entirely. Instead, before leaving office his administration adopted an alarming policy of “nuclear modernization” that can only stimulate other nuclear nations to follow suit.

There are dynamics at work here that go beyond rational responses to perceived threats. Where the United States is concerned, obsession with absolute military supremacy is inherent in the DNA of the postwar state. After the Cold War ended, US strategic planners sometimes referred to this as the necessity of maintaining “technological asymmetry.” Beginning in the mid 1990s, the Joint Chiefs of Staff reformulated their mission as maintaining “full spectrum dominance.”

This envisioned domination now extends beyond the traditional domains of land, sea, and air power, the Joint Chiefs emphasized, to include space and cyberspace as well.



The Violent Narcissism of Small Differences

“As a kid, I saw the 1968 version of Planet of the Apes. As a future primatologist, I was mesmerized. Years later I discovered an anecdote about its filming: At lunchtime, the people playing chimps and those playing gorillas ate in separate groups.”
~ Robert Sapolsky

There are “many features of… warfare that turn out to be shared with wars in many other traditional societies… Those shared features include the following ones… So-called tribal warfare is often or usually actually intra-tribal, between groups speaking the same language and sharing the same culture, rather than inter-tribal. Despite that cultural similarity or identity between the antagonists, one’s enemies are sometimes demonized as subhuman.” (Jared Diamond, The World Until Yesterday, p. 120)

That isn’t something I’ve heard before. I’m surprised it isn’t a point brought up more often. It entirely undermines the case for racism being biological and instinctual. This intra-tribal warfare involves people who are extremely similar — in terms of ethnicity/culture, linguistics, lifestyle, diet, health, genetics, etc (and one would presume also in terms of epigenetics and microbiome). They are more similar to one another than is the rather diverse population of white Americans. Yet these basically identical tribal bands are able to not just see each other as different but even as subhuman, not that ‘subhuman’ has a scientific meaning in this context. It gives credence to Freud’s theory of the narcissism of small differences.

In modern nation-states, we forget how abnormal every aspect of our society is. Based on unrepresentative WEIRD research, we've come to some strange conclusions about human nature. Looking at the anthropological record demonstrates how far off from reality our modern understanding is. We think of warfare as only or primarily occurring between nation-states, and we think of nation-states in ethno-racial terms. The world wars were fought with rhetoric declaring the other side to be of a different race or not fully human. That happened between the English and the Germans, peoples who today are thought of as being so similar, what we now think of as white Westerners. But perceived differences have never had much to do with objective reality.

We should also put violence in perspective. We obsess over some violence while ignoring other violence. Most killings happen within societies, not between societies (unless you're one of the populations historically targeted by Western imperialism). And most killings happen within specific demographics, not between demographics. For example, most American whites are killed by other American whites, not by foreign terrorists or American blacks. As for terrorism, most of it is committed by Americans against Americans; in fact, often by whites against whites.

Race is as much a rationalization of violence as it is a cause. Westerners wanted to steal land and resources, to exploit populations. So, they invented racial ideology to justify it. But this basic tendency toward justification of violence is nothing new. As Jared Diamond describes, even groups that are essentially the same will use othering language in order to psychologically distance themselves. Otherwise, it would be harder to kill people. But creating perceived differences is quite simple (as shown numerous times: Jane Elliott's eye color experiment, Rebecca Bigler's shirt color experiment, Muzafer Sherif's Robbers Cave experiment, etc).

Race is a social construct and a rather recent invention at that — for certain, it didn’t exist in the ancient world. There is nothing in human nature that demonstrates an instinct for racism. Rather, what humans are talented at is seeing differences and turning them into categories. This could be as simple as where one lives, such as two tribal bands or two neighborhood gangs fighting. Or it could be based on what clothes are worn and, when people are too similar, they will create artificial differences such as gang colors. But once we’ve created these differences, our minds treat them as essential. We need to learn to step back from our learned biases.

* * *

Why Your Brain Hates Other People
by Robert Sapolsky, Nautilus

We all have multiple dichotomies in our heads, and ones that seem inevitable and crucial can, under the right circumstances, evaporate in an instant.

Lessening the Impact of Us/Them-ing

So how can we make these dichotomies evaporate? Some thoughts:

Contact: The consequences of growing up amid diversity just discussed bring us to the effects of prolonged contact on Us/Theming. In the 1950s the psychologist Gordon Allport proposed “contact theory.” Inaccurate version: bring Us-es and Thems together (say, teenagers from two hostile nations in a summer camp), animosities disappear, similarities start to outweigh differences, everyone becomes an Us. More accurate version: put Us and Thems together under narrow circumstances and something sort of resembling that happens, but you can also blow it and worsen things.

Some of the effective narrower circumstances: each side has roughly equal numbers; everyone’s treated equally and unambiguously; contact is lengthy and on neutral territory; there are “superordinate” goals where everyone works together on a meaningful task (say, summer campers turning a meadow into a soccer field).

Even then, effects are typically limited—Us-es and Thems quickly lose touch, changes are transient and often specific—“I hate those Thems, but I know one from last summer who’s actually a good guy.” Where contact really causes fundamental change is when it is prolonged. Then we’re making progress.

Approaching the implicit: If you want to lessen an implicit Us/Them response, one good way is priming beforehand with a counter-stereotype (e.g., a reminder of a beloved celebrity Them). Another approach is making the implicit explicit—show people their implicit biases. Another is a powerful cognitive tool—perspective taking. Pretend you’re a Them and explain your grievances. How would you feel? Would your feet hurt after walking a mile in their shoes?

Replace essentialism with individuation: In one study, white subjects were asked about their acceptance of racial inequalities. Half were first primed toward essentialist thinking, being told, “Scientists pinpoint the genetic underpinnings of race.” Half heard an anti-essentialist prime—“Scientists reveal that race has no genetic basis.” The latter made subjects less accepting of inequalities.

Flatten hierarchies: Steep ones sharpen Us/Them differences, as those on top justify their status by denigrating the have-nots, while the latter view the ruling class as low warmth/high competence. For example, the cultural trope that the poor are more carefree, in touch with and able to enjoy life’s simple pleasures while the rich are unhappy, stressed, and burdened with responsibility (think of miserable Scrooge and those happy-go-lucky Cratchits). Likewise with the “they’re poor but loving” myth of framing the poor as high warmth/low competence. In one study of 37 countries, the greater the income inequality, the more the wealthy held such attitudes.

Some Conclusions

From massive barbarity to pinpricks of microaggression, Us versus Them has produced oceans of pain. Yet, I don’t think our goal should be to “cure” us of all Us/Them dichotomizing (separate of it being impossible, unless you have no amygdala).

I’m fairly solitary—I’ve spent a lot of my life living alone in a tent in Africa, studying another species. Yet some of my most exquisitely happy moments have come from feeling like an Us, feeling accepted, safe, and not alone, feeling part of something large and enveloping, with a sense of being on the right side and doing both well and good. There are even Us/Thems that I—eggheady, meek, and amorphously pacifistic—would kill or die for.

If we accept that there will always be sides, it’s challenging to always be on the side of angels. Distrust essentialism. Remember that supposed rationality is often just rationalization, playing catch-up with subterranean forces we never suspect. Focus on shared goals. Practice perspective taking. Individuate, individuate, individuate. And recall how often, historically, the truly malignant Thems hid themselves while making third parties the fall guy.

Meanwhile, give the right-of-way to people driving cars with the “Mean people suck” bumper sticker, and remind everyone that we’re in this together against Lord Voldemort and House Slytherin.



Black Global Ruling Elite

One of my favorite activities is reversing arguments, in order to make a point. It means using the structure of an argument to contradict someone's claim or to demonstrate the fundamental irrationality of their worldview. Also, sometimes it can just be an act of playful silliness, a game of rhetoric. Either way, it requires imagination to take an argument in an unexpected direction.

To be able to reverse an argument, you have to first understand the argument. This requires getting into someone else’s head and seeing the world from their perspective. You need to know your enemy. I’ve long made it a habit to explore other ideologies and interact with those advocating them. It usually ends in frustration, but I come out the other side with an intimate knowledge of what makes others tick.

The opposing group I spent the most time with was the HBD (human biodiversity) crowd. HBD circles are filled with reactionaries, specifically race realists and genetic determinists. The thing about reactionaries is that they love to co-opt rhetoric and tactics from the political left. The term "human biodiversity" was originated by Jonathan Marks, who used it in arguing against race realism and genetic determinism. The brilliance of the reactionaries was to do exactly what I'm talking about — they reversed the arguments.

But as chameleon-like faceless men, reactionaries use this strategy to hide their intentions behind deceptive rhetoric. No HBDer is ever going to admit the anti-reactionary origins of human biodiversity (just like right-libertarians won't acknowledge the origins of libertarianism as a left-wing ideology in the European workers' movement). The talent of reactionaries is in pretending that what they stole was always theirs. They take their games of deception quite seriously. Their trolling is a way of life.

“There’s only one thing we can do to thwart the plot of these albino shape-shifting lizard BITCHES!” Their arguments need to be turned back the other way again. Or else turn them inside out to the point of absurdity. Let us call it introducing novelty. I’ve done this with previous posts about slavery and eugenics. The point I made is that, by using HBD-style arguments, we should actually expect American blacks to be a superior race.

This is for a couple of reasons. For centuries in America, the most violent, rebellious, and criminal blacks were eugenically removed from the breeding population, by way of being killed or imprisoned — and so, according to HBD, the genetics of violence, rebelliousness, criminality, etc should have decreased along with all of the related genetically-determined behavior. Also, since the colonial era, successful and supposedly superior upper class whites were impregnating their slaves, servants, and any other blacks they desired, which should have infused their superior genetics into the American black population. Yet, contradicting these obvious conclusions, HBDers argue the exact opposite.

Let me clarify one point. African-Americans are a genetically constrained demographic, their ancestors having mostly come from one area of Africa. And the centuries of semi-eugenics theoretically would have narrowed those genetics down further, even in terms of the narrow selection of white genetics that was introduced. But these population pressures didn’t exist among other African descendants. Particularly in Africa itself, the complete opposite is the case.

Africa has more genetic and phenotypic diversity than the rest of the world combined. Former slave populations that came from more varied regions of Africa should also embody this greater genetic diversity. The global black population in general, in and outside Africa, is even more diverse than the African population alone. As such, we should expect that the global black population will show the greatest variance of all traits.

This came to mind because of the following comment:

“Having a less oppressive environment increases variance in many phenotypes. The IQ variance of (less-oppressed) whites is greater than (more-oppressed) blacks despite less genetic diversity. Since women are on average more oppressed (i.e. outcasted more for a given deviance from the norms and given norms that take more effort to conform to) their traits would be narrower.”

The data doesn't perfectly follow this pattern, in that there are exceptions. Among certain sub-populations within oppressed populations, there is sometimes greater IQ variance. There are explanations for why this is the case, specifically the theory that females have a greater biological capacity for dealing with stressful conditions (e.g., oppression). But for the moment, let's ignore that complication.

The point is that, according to genetic determinism, the low genetic diversity of whites should express itself as small IQ gaps, no matter the environmental differences. It shouldn't matter that, for example, in the US the white population is split between socioeconomic extremes — as the majority of poor Americans are white and the majority of rich Americans are white. But if genetic determinism is false (i.e., more powerful influences are involved: environment, epigenetics, microbiome, etc), the expected result would be lower average IQ among lower class whites and higher average IQ among higher class whites — the actual pattern that is found.

Going by the data, we are forced to conclude that genetic determinism isn't a compelling theory, at least as far as broad racial explanations go. Some HBDers would counter that the different socioeconomic populations of whites are also different genetic sub-populations. But the problem is that this claim isn't supported, given the lack of genetic variance found across white populations.

That isn’t what mainly interested me, though. I was more thinking about what this means for the global black population, far beyond a single trait. Let us assume that genetic determinism and race realism is true, for the sake of argument.

Since the African continent has more genetic diversity than the rest of the world combined, the global black population (or rather populations) that originated in Africa should have the greatest variation of all traits, not just IQ. They should have the greatest variance in everything from athleticism to lethargy, pacifism to violence, law-abidingness to criminality, wealth to poverty, global superpowers to failed states, etc.

We should disproportionately find those of African ancestry at every extreme across the world. Compared to all other populations, they would have the largest numbers of individuals in both the elite and the underclass. That means that a disproportionate number of political and corporate leaders would be black, if there were a functioning meritocracy of the Social Darwinian variety.
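Purely to make the statistical step concrete, here is a minimal sketch in Python. It assumes, only for illustration, that a trait is normally distributed with the same mean in two hypothetical populations that differ only in their spread; every number in it is made up.

import math

# With equal means, the higher-variance distribution places more of its
# population beyond any fixed cutoff, at both the top and the bottom.

def upper_tail(x, mean, sd):
    # P(X > x) for a normal distribution
    return 0.5 * math.erfc((x - mean) / (sd * math.sqrt(2)))

def lower_tail(x, mean, sd):
    # P(X < x) for a normal distribution
    return 0.5 * math.erfc((mean - x) / (sd * math.sqrt(2)))

mean = 100.0                           # same mean for both hypothetical groups
narrow_sd, wide_sd = 12.0, 18.0        # made-up standard deviations
high_cutoff, low_cutoff = 145.0, 55.0  # arbitrary "extreme" thresholds

for label, sd in [("narrow variance", narrow_sd), ("wide variance", wide_sd)]:
    print(label,
          "| share above", high_cutoff, "=", round(upper_tail(high_cutoff, mean, sd), 6),
          "| share below", low_cutoff, "=", round(lower_tail(low_cutoff, mean, sd), 6))

With these hypothetical numbers, the wider distribution has roughly seventy times as many people beyond each cutoff. That is all the "overrepresented at every extreme" claim amounts to statistically, whatever one makes of the premises behind it.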

The greater genetic variance would lead to the genetically superior blacks disproportionately rising to the upper echelons of global wealth and power. The transnational plutocracy, therefore, should be dominated by blacks. We should see the largest gaps within the global black population and not between blacks and whites, since the genetic distance between black populations is greater than the genetic difference between particular black populations and non-black populations.

Based on the principles of human biodiversity, that means principled HBDers should support greater representation of blacks at all levels of global society. I can’t wait to hear this new insight spread throughout the HBD blogosphere. Then HBDers will become the strongest social justice warriors in the civil rights movement. Based on the evidence, how could HBDers do anything less?

Well, maybe there is one other possible conclusion. As good reactionaries, they could recruit the paranoid worldview. Accordingly, it could be assumed that the genetically superior sub-population of the black ruling elite is so advanced that they've hidden their wealth and power, pulling the strings behind the scenes. Maybe there is a Black cabal working in secret with the Jewish cabal in controlling the world. It's this Black-Jewish covert power structure that has promoted the idea of an inferior black race to hide the true source of power. We could take this argument even further. The black sub-population might be the ultimate master race, with Jews acting as their minions in running the Jew-owned banks and media as front groups.

It’s starting to make sense. I think there might be something to all of this genetic determinism and race realism. It really does explain everything. And it is so amazingly scientific.


Is the Tide Starting to Turn on Genetics and Culture?

Here is an alt-righter struggling with scientific understanding:

When I first came upon the argument that “culture is a racial construct” last year, I was pretty horrified. I saw this as a re-gurgitated Nazi talking point that was clearly unfactual.

But like other longtime taboo topics such as HBD, eugenics, and White identity, I’ve seen this theory pop up over the past year in some shocking places. First, a scientific magazine revealed that orcas genetics’ are affected by culture and vice versa. Then, I started seeing normies discuss this talking point in comment sections in the Wall Street Journal and even NY Times.

Finally, a liberal academic has thrown himself into the discussion. Bret Weinsten, a Jewish Leftist who most people here know as the targeted professor of the Marxist insanity at Evergreen University, posted this tweet yesterday: “Sex is biological. Gender is cultural. Culture is biological,” and then this one today: “Culture is as adaptive, evolutionary and biological as genes. You’re unlikely to accept it. But if you did you’d see people with 10X clarity.”

This is a pretty remarkable assertion coming from someone like Bret Weinstein. I wonder if the dam will eventually break and rather than being seen as incredibly taboo, this theory will be commonly accepted. If so, it’s probably the best talking point you have for America to prioritize its demographics.

What is so shocking?

This line of thought, taken broadly, has been developing and taking hold in the mainstream for more than a century. Social constructionism was popularized and spread by the anthropologist Franz Boas. I don't think this guy grasps what this theory means or its implications. That "culture is a racial construct" goes hand in hand with race being a cultural construct, which is to say we understand the world and our own humanity through the lens of ideology, in the sense used by Louis Althusser. As applied to the ideology of pseudo-scientific race realism and gender realism, claims of linear determinism by singular and isolated causal factors are meaningless, because research has shown that all of these aspects are intertwined factors in how we develop and who we become.

Bret Weinstein makes three assertions: "Sex is biological. Gender is cultural. Culture is biological." I don't know what his ideological position is. But he sounds like a genetic determinist, although this is not clear, since he also claims that his assertions have nothing to do with group selection (a standard reductionist approach). Anyway, to make these statements accurate, other statements would need to be added — such as that biology is epigenetics, epigenetics is environment, and environment is culture. We'd have to throw in other things as well, from the microbiome to linguistic relativism. To interpret Weinstein generously and not take his use of "is" too literally: many things are many other things, or rather are closely related, if by that we mean that multiple factors can't be reduced to one another, in that they influence each other in multiple directions and through multiple pathways.

Recent research has taken this even further in showing that neither sex nor gender is binary (1, 2, 3, 4, & 5), as genetics and its relationship to environment, epigenetics, and culture is more complex than was previously realized. It's far from uncommon for people to carry the genetics of both sexes, or even multiple sets of DNA. It has to do with diverse interlinking and overlapping causal relationships. We aren't all that certain at this point what ultimately determines how and why any given gene expresses or not, and how and why it expresses in a particular way, given the many conditions, factors, and influences involved. Most of the human genome is unknown in its purpose, or maybe its lack of purpose, although the junk DNA theory has become highly contested. And most of the genetic material in the human body is non-human, belonging to bacteria, viruses, symbiotes, and parasites. The point is that, scientifically speaking, causation is a lot harder to prove than many would like to admit.

The second claim by Weinstein is even more interesting: "Culture is as adaptive, evolutionary and biological as genes." That easily could be interpreted in alignment with Richard Dawkins' theory of memetics. The argument is that there are cultural elements that act and spread similarly to genes, like a virus replicating. With the growing research on epigenetics, microbiome, parasites, and such, the mechanisms for such a thing become more plausible. We are treading in unexplored territory when we combine memetics not just with culture but also with the extended mind and the extended phenotype. Linguistic relativism, for example, has shown that cultural influences can operate through non-biological causes — in that bilingual individuals with the same genetics will think, perceive, and act differently depending on which language they are using. Yes, culture is adaptive, whether or not in the way Weinstein believes.

The problems in this area only occur when one demands a reductionist conclusion. The simplistic thinking of reductionism appeals to the limits of the human mind. But reality has no compulsion to conform to the human mind. Reality is irreducible. And so we need a scientific understanding that deals with, rather than dismisses, complexity. Indeed, the tide is turning.