The Language of Heritability

“The Minnesota twin study raised questions about the depth and pervasiveness of qualities specified by genes: Where in the genome, exactly, might one find the locus of recurrent nightmares or of fake sneezes? Yet it provoked an equally puzzling converse question: Why are identical twins different? Because, you might answer, fate impinges differently on their bodies. One twin falls down the crumbling stairs of her Calcutta house and breaks her ankle; the other scalds her thigh on a tipped cup of coffee in a European station. Each acquires the wounds, calluses, and memories of chance and fate. But how are these changes recorded, so that they persist over the years? We know that the genome can manufacture identity; the trickier question is how it gives rise to difference.”
~Siddhartha Mukherjee, Same But Different

If genetics is the words in a dictionary, then epigenetics is the creative force that forms those words into a library of books. Even with the exact same words in the genomic code, identical twins can express them in starkly different ways. Each gene’s expression depends on its relationship to numerous other genes, potentially thousands, and all of those genes together are moderated by epigenetics.

The epigenome itself can be altered by individual and environmental factors (type of work, exercise, and injuries; traumatic abuse, chronic stress, and prejudice; smoking, drinking, and malnutrition; clean or polluted air, water, and soil; availability of green spaces, socioeconomic class, and level of inequality; etc.). Those changes can then be passed on across multiple generations (e.g., the grandchildren of famine victims having higher obesity rates). This applies even to complex behaviors being inherited (e.g., the grandchildren of mice that were shocked while exposed to cherry blossom scent still flinching at that same scent, though they themselves were never shocked).

What is rarely understood is that heritability rates don’t refer to genetics alone; they speak to the entire package of influences. We don’t only inherit genes; we also inherit epigenetic markers and environmental conditions, all of the confounders that make twin studies next to useless. Heritability is only meaningful at a population level and says nothing directly about individual people or individual factors such as a specific gene. And at the population level, research has shown that behavioral and cultural traits can persist over centuries, apparently set in motion by distant historical events whose living memory has long since disappeared, the memory lingering in some combination of heritable factors.
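The population-level point can be made concrete with a toy calculation. Heritability in the broad sense is just a ratio of variances across a population, H² = Var(genetic) / Var(phenotype). Here is a minimal sketch; every number below is invented purely for illustration, not real data:

```python
import random

random.seed(42)

# Toy population: phenotype = genetic contribution + environmental contribution.
# The spreads below are invented purely for illustration.
N = 10_000
genetic = [random.gauss(0, 2) for _ in range(N)]      # sd 2 -> variance 4
environment = [random.gauss(0, 3) for _ in range(N)]  # sd 3 -> variance 9
phenotype = [g + e for g, e in zip(genetic, environment)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Broad-sense heritability: the genetic share of phenotypic variance.
h2 = variance(genetic) / variance(phenotype)
print(f"H^2 = {h2:.2f}")  # should land near 4 / (4 + 9), roughly 0.31
```

Shrink or widen the environmental spread and the same genes yield a different heritability, which is the point: the number describes a population in a context, not a gene and not a person.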

Even if epigenetic changes lasted only several generations (in at least some species, they last much longer), social conditions could continually reinforce them so that they effectively become permanently set. And the epigenetics, in predisposing social behaviors, would create a vicious cycle, feeding back into the conditions that maintain the epigenetics. Or think of the centuries-long history of racism in the United States, where evidence shows racism remains pervasive, systemic, and institutional, in which case the heritability is partly being enforced upon an oppressed underclass by those with wealth, privilege, and power. That wealth, power, and privilege are likewise heritable, as is the entire social order. No one part can be disentangled from the rest, for none of us are separate from the world that we are born into.

Now consider that any given disease, behavior, personality trait, etc. might be determined by thousands of genes, thousands of epigenetic markers, and thousands of external factors. Changing any single part of that puzzle might rearrange the entire result, even leading to a completely opposite expression. The epigenome determines not only whether a gene is expressed but how it is expressed, because it determines which words are used in the genomic dictionary and how those words are linked into sentences, paragraphs, and chapters. So, one gene might be correlated as heritable with something in a particular society while correlated with something else entirely in a different society. The same gene could potentially have immense possible outcomes, much as the same word can be found in hundreds of thousands of books. Many of the same words are found in both Harry Potter and Hamlet, but that doesn’t help us understand what makes one book different from the other. This is a useful metaphor, although an aspect of it might be quite literal considering the research on linguistic relativity.

There is no part of our lives not touched by language in shaping thought and affect, perception and behavior. Rather than a Chomskyan language organ that we inherit, maybe language is partly passed on through the way epigenetics ties together genes and environment. Even our scientific way of thinking about such issues probably leaves epigenetic markers that might predispose our children and grandchildren to think scientifically as well. What I’m describing in this post is a linguistically-filtered narrative upheld by a specific Jaynesian voice of authorization in our society. Our way of speaking and understanding changes us, even at a biological level. We are incapable of standing back from the very thing about which we speak. In fact, it has been the language of scientific reductionism that has made it so difficult to come to this new insight into human nature, that we are complex beings in a complex world. And that scientific reductionism has been a central component of the entire ruling paradigm, which continues to resist this challenging view.

Epigenetics can last across generations, but it can also be changed in a single lifetime. For centuries, we enforced upon the world, often violently and through language, an ideology of genetic determinism and race realism. The irony is that the creation of this illusion of an inevitable and unalterable social order was only possible through the elite’s control of environmental conditions and hence epigenetic factors. Yet as soon as this enforcement ends, the illusion drifts away like a fog dissipated by a strong wind, and through clear vision the actual landscape is revealed, a patchwork of possible pathways. We are constantly re-created by our inheritance, biological and environmental, and in turn we re-create the social order we find. But with new ways of speaking will come new ways of perceiving and acting in the world, and from that a different kind of society could form.

[This post is based on what is emerging in this area of research. But some of it remains speculative. Epigenetics, specifically, is still a young field. It’s difficult to detect and follow such changes across multiple generations. If and when someone proves that linguistic relativity can even reach the level of the epigenome, a seeming inevitability (considering it’s already established that language alters behavior and behavior alters epigenetics), that could be the death blow to the already ailing essentialist paradigm (Essentialism On the Decline). According to the status quo, epigenetics is almost too radical to be believed, as is linguistic relativity. Yet we know each is true to a larger extent than present thought allows for. Combine the two and we might have a revolution of the mind.]

* * *

The Ending of the Nature vs Nurture Debate
Heritability & Inheritance, Genetics & Epigenetics, Etc
Identically Different: A Scientist Changes His Mind
Epigenetic Memory and the Mind
Inherited Learned Behavior
Epigenetics, the Good and the Bad
Trauma, Embodied and Extended
Facing Shared Trauma and Seeking Hope
Society: Precarious or Persistent?
Plowing the Furrows of the Mind

What If (Almost) Every Gene Affects (Almost) Everything?
by Ed Yong

But Evan Boyle, Yang Li, and Jonathan Pritchard from Stanford University think that this framework doesn’t go far enough.

They note that researchers often assume that those thousands of weakly-acting genetic variants will all cluster together in relevant genes. For example, you might expect that height-associated variants will affect genes that control the growth of bones. Similarly, schizophrenia-associated variants might affect genes that are involved in the nervous system. “There’s been this notion that for every gene that’s involved in a trait, there’d be a story connecting that gene to the trait,” says Pritchard. And he thinks that’s only partly true.

Yes, he says, there will be “core genes” that follow this pattern. They will affect traits in ways that make biological sense. But genes don’t work in isolation. They influence each other in large networks, so that “if a variant changes any one gene, it could change an entire gene network,” says Boyle. He believes that these networks are so thoroughly interconnected that every gene is just a few degrees of separation away from every other. Which means that changes in basically any gene will ripple inwards to affect the core genes for a particular trait.

The Stanford trio call this the “omnigenic model.” In the simplest terms, they’re saying that most genes matter for most things.

More specifically, it means that all the genes that are switched on in a particular type of cell—say, a neuron or a heart muscle cell—are probably involved in almost every complex trait that involves those cells. So, for example, nearly every gene that’s switched on in neurons would play some role in defining a person’s intelligence, or risk of dementia, or propensity to learn. Some of these roles may be starring parts. Others might be mere cameos. But few genes would be left out of the production altogether.

This might explain why the search for genetic variants behind complex traits has been so arduous. For example, a giant study called… er… GIANT looked at the genomes of 250,000 people and identified 700 variants that affect our height. As predicted, each has a tiny effect, raising a person’s stature by just a millimeter. And collectively, they explain just 16 percent of the variation in heights that you see in people of European ancestry.

An Enormous Study of the Genes Related to Staying in School
by Ed Yong

Over the past five years, Benjamin has been part of an international team of researchers identifying variations in the human genome that are associated with how many years of education people get. In 2013, after analyzing the DNA of 101,000 people, the team found just three of these genetic variants. In 2016, they identified 71 more after tripling the size of their study.

Now, after scanning the genomes of 1,100,000 people of European descent—one of the largest studies of this kind—they have a much bigger list of 1,271 education-associated genetic variants. The team—which includes Peter Visscher, David Cesarini, James Lee, Robbee Wedow, and Aysu Okbay—also identified hundreds of variants that are associated with math skills and performance on tests of mental abilities.

The team hasn’t discovered “genes for education.” Instead, many of these variants affect genes that are active in the brains of fetuses and newborns. These genes influence the creation of neurons and other brain cells, the chemicals these cells secrete, the way they react to new information, and the way they connect with each other. This biology affects our psychology, which in turn affects how we move through the education system.

This isn’t to say that staying in school is “in the genes.” Each genetic variant has a tiny effect on its own, and even together, they don’t control people’s fates. The team showed this by creating a “polygenic score”—a tool that accounts for variants across a person’s entire genome to predict how much formal education they’re likely to receive. It does a lousy job of predicting the outcome for any specific individual, but it can explain 11 percent of the population-wide variation in years of schooling.

That’s terrible when compared with, say, weather forecasts, which can correctly predict about 95 percent of the variation in day-to-day temperatures.
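The distinction Yong draws, a score that explains 11 percent of population-wide variation while doing a lousy job for any individual, is easy to reproduce in a toy simulation. Everything below (the variant count, the effect sizes, the two-year noise term) is invented for illustration and is not taken from the study:

```python
import random

random.seed(0)

N_VARIANTS, N_PEOPLE = 500, 2_000

# Invented per-variant effects on years of schooling: each one tiny,
# as in the study Yong describes. None of these numbers are real data.
effects = [random.gauss(0, 0.04) for _ in range(N_VARIANTS)]

scores, years = [], []
for _ in range(N_PEOPLE):
    genotype = [random.randint(0, 2) for _ in range(N_VARIANTS)]  # allele counts
    score = sum(g * b for g, b in zip(genotype, effects))          # polygenic score
    scores.append(score)
    # Outcome = small genetic nudge + everything else (environment, chance).
    years.append(12 + score + random.gauss(0, 2.0))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

r = pearson(scores, years)
print(f"population-level R^2: {r * r:.2f}")  # lands near 0.1 with these settings
```

Even when the score captures about a tenth of the population-wide variance, the prediction for any single person still misses by roughly two years on average (the size of the noise term), which is why a polygenic score can be informative about populations while remaining nearly useless as an individual forecast.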

Complex grammar of the genomic language
from Science Daily

Each gene has a regulatory region that contains the instructions controlling when and where the gene is expressed. This gene regulatory code is read by proteins called transcription factors that bind to specific ‘DNA words’ and either increase or decrease the expression of the associated gene.

Under the supervision of Professor Jussi Taipale, researchers at Karolinska Institutet have previously identified most of the DNA words recognised by individual transcription factors. However, much like in a natural human language, the DNA words can be joined to form compound words that are read by multiple transcription factors. However, the mechanism by which such compound words are read has not previously been examined. Therefore, in their recent study in Nature, the Taipale team examines the binding preferences of pairs of transcription factors, and systematically maps the compound DNA words they bind to.

Their analysis reveals that the grammar of the genetic code is much more complex than that of even the most complex human languages. Instead of simply joining two words together by deleting a space, the individual words that are joined together in compound DNA words are altered, leading to a large number of completely new words.

“Our study identified many such words, increasing the understanding of how genes are regulated both in normal development and cancer,” says Arttu Jolma. “The results pave the way for cracking the genetic code that controls the expression of genes.”

How is knowledge spread and made compelling?

Our friend over at the Open Society blog republished one of our pieces. He “edited out some of the bit about right-left brains.” And we were fine with that, as we understood his reasons. He said that, “I think this sort of dichotomy causes more misunderstandings for the average person than it clarifies.” And, “in order to keep this piece accessible to everyone, it’s better not to get into ongoing technical neuroanatomy debates here.”

We have no dispute with his choice of editing. It was just information and we like to share information, but it wasn’t even a part of the central text of what had been written. Still, it was important in a general sense, as background knowledge and explanatory context. In another comment, he brought up scientific illiteracy and the sorry state of (un-)education in this country. And we couldn’t disagree with any of that. But we responded back with some lengthy comments clarifying our position.

It’s not my first instinct to edit myself, as might be apparent to anyone reading my blog. I’m not always known for my concision. The idea of changing what I write based on the presumed level of knowledge of prospective readers isn’t exactly my style, not that I don’t understand the purpose of doing so. It’s not as if I never consider how others might read what I write, something I always try to keep in mind. I do want to communicate well. I’m not here to merely talk to myself. But thinking about it made me more self-aware of what motivates me in wanting to communicate.

We’re talking about not only knowledge but, more importantly, understanding and meaning, what forms our sense of shared reality and informs our sense of shared purpose. It’s an interesting and worthy topic to discuss. By the way, we felt like speaking in the plural for the introduction here, but the comments below are in first-person singular. These are taken from the Open Society blog with some revision. So, we’re republishing our comments to the republishing of our post. It’s almost like a conversation.

Before we get to our comments below, let us share some personal experience. When we were young, we had regular conversations with our father. He would always listen, question, elicit further thoughts, and respond. But what he never did was talk down to us or simplify anything. He treated us as if we were intellectual equals, even though obviously that wasn’t the case. He was a professor who, when younger, had found learning easy and rarely studied. He had obvious proof of his intellectual abilities. We, on the other hand, always struggled with a learning disability. Still, our father instilled in us a respect for knowledge and a love of learning.

That is how we strive to treat all others. We don’t know if that is a good policy for a blog. Maybe that explains why our readership is so small. One could interpret that as a failure of our approach. If so, we fail on our own terms. But we hope that, in our good intentions, we do manage to reach some people. No doubt we could reach a larger audience by following the example of the Open Society blog. That blog is a much more finished product than the bare-bones text on offer here. So, maybe all my idealism is moot. That is an amusing thought. Then again, Open Society has republished other posts by us. So that is some minor accomplishment. Maybe those edited versions are an improvement. I’ll leave that for others to decide.

* * *

Sadly, you’re probably right that science education is so pathetically deficient in this country that discussion of even something so basic as the research on brain hemispheres likely “causes more misunderstandings for the average person than it clarifies.” I wish that weren’t true.

Still, I’d encourage others to look into the science on brain hemispheres. I’d note that the views of Iain McGilchrist (and Julian Jaynes, etc) have nothing to do with the layman’s interpretation. To be honest, there is no way to fully understand what’s going on here without some working knowledge in this area. But the basic idea comes across without any of the brain science. Maybe that is good enough for present purposes.

I’m not entirely opposed to making material more accessible in meeting people where they are at. But hopefully, this kind of knowledge will become more common over time. It is so fundamental that it should be taught in high school science classes. My aspiration for my blog is to inspire people to stretch their minds and learn what might at first seem difficult or strange, not that I always accomplish that feat. Instead, I’m likely to talk over people’s heads or simply bore them.

It can be hard to express to others why something seems so fascinating to me, why it’s important to go to the effort of making sense of it. I realize my mind doesn’t operate normally, to put it mildly. But even with my endless intellectual curiosity, I have to admit to struggling with the science at times (to be honest, a lot of the times). So, I sympathize with those who lose interest or get confused by all the differing and sometimes wrongheaded opinions about brain hemispheres or whatever.

* * *

Scientific illiteracy is a problem in the US. And it’s an open secret. I’ve seen plenty of discussion of it over the years. It would help if there were a better education system, and not one limited to college. Remember that three-quarters of Americans don’t have any college education at all. That is why educational reform would need to start with grade school.

Still, I don’t know what the main problem is. I doubt the average American is quite as ignorant as they get treated, even if they aren’t well educated. For example, most Americans seem to have a basic grasp of the climate crisis and support a stronger government response. It’s not as if, with more science classes, we’d finally get politicians on board. The basic science is already understood, even by those politicians who deny it.

Saying the public is scientifically illiterate doesn’t necessarily tell us much about the problem. I was reading a book about the issue of climate change in one of the Scandinavian countries. They have a much better education system and more scientific literacy. But even there, the author said that it’s hard to have an honest public debate because thinking about it makes most people feel uncomfortable, depressed, and hopeless. So people mostly just don’t talk about it.

Part of it goes back to cognitive dissonance. Even when people have immense knowledge on a topic, there remains the dissociation and splintering. People can know all kinds of things and yet not know. The collective and often self-enforced silencing is powerful, as Derrick Jensen shows. The human mind operates largely on automatic. By the way, the science of brain hemispheres can explain some of why that is the case, a major focus of Jaynes’ work.

What we lack is not so much knowledge about the world as insight and understanding about our own nature. We have enough basic working knowledge already to solve or lessen all of the major problems, if we could only get out of our own way. That said, we can never have too much knowledge and improving education certainly couldn’t hurt. We’re going to need the full human potential of humanity to meet these challenges.

* * *

Here is a thought. What if underestimating the public is a self-fulfilling prophecy? Paralyzing cynicism can come in many forms. And I know I’m often guilty of this. It’s hard to feel hopeful. If anything, hope can even seem naive and wrongheaded. Some argue that we’re long past that point and now it’s time for grieving lost opportunities that are forever gone. But even if we resign ourselves to mere triage, that still requires some basic sense of faith in the future.

I’m not sure what I think or feel about all of this. But what does seem clear to me is that we Americans have never fallen into the problem of overestimating the public. Instead, we have a disempowered and disenfranchised population. What motivation is there for the public to seek further knowledge when the entire system powerfully fucks them and their loved ones over and over again? What would inspire people to seek out becoming better informed through formal education or otherwise?

Knowledge matters. But the larger context to that knowledge matters even more. I don’t know what that means in practical terms. I’m just thinking the public should be given more credit, not so easily let off the hook. Even when public ignorance appears justified based on a failed education system or a successful non-education system, maybe that is all the more reason to hold up a high standard of knowledge, a high ideal of intellectual curiosity, rather than talking down to people and dumbing down discussion.

That isn’t to say we shouldn’t try to communicate well in knowing our audience. On many topics, it’s true that general knowledge, even among the elite, is limited at best and misinformed at worst. But the worst part is how ignorance has been embraced in so many ways, as if one’s truth were simply a matter of belief. What if we stopped tolerating this willful ignorance and all the rationalizations that accompany it? We should look to the potential in people, which remains there no matter how little has been expected of them. We should treat people as intellectually capable.

Education is always a work in progress. Still, the American public is more educated today than a century ago. The average IQ measured in the early 1900s would be, by today’s standards of IQ testing, in the functionally retarded range, and I mean that literally (increases in IQ largely measure abstract and critical thinking skills). Few Americans even had high school degrees until the Silent Generation. Society has advanced to a great degree in this area, if not as much as it should. I worry that we’ve become so jaded that we see failure as inevitable and so we keep lowering our standards, instead of raising them higher as something to aspire toward.

My grandfather dropped out of high school. You know what was one of his proudest accomplishments? Sending two of his kids to college. Now kids are being told that education doesn’t matter, that college is a waste of money. We stopped valuing education and that symbolizes a dark change to the public mood. To not value education is to denigrate knowledge itself. This isn’t limited to formal education, scientific literacy and otherwise. I failed to get much scientific knowledge in high school and I didn’t get a college degree. Even so, I was taught by my parents to value learning, especially self-directed learning, and to value curiosity. I’ve struggled to educate myself (and to undo my miseducation), but I was inspired to do so because the value of it had been internalized.

The deficiency in education doesn’t by itself explain the cause. It doesn’t explain why we accept it, why we treat mass ignorance as if it were an inevitability. Instead of seeing ignorance as a challenge, as a motivation toward seeking greater knowledge, American society has treated ignorance as the natural state of humanity or at least the natural state of the dirty masses, the permanent underclass within the Social Darwinian (pseudo-)meritocracy. In this worldview, most people don’t merely lack knowledge but lack any potential or worth, some combination of grunt workers and useless eaters. What could shift this toward another way of seeing humanity?

* * *

I was wondering where knowledge is truly lacking, where curiosity about a topic is lacking, and where it matters most. Climate change is one topic where I do think there is a basic necessary level of knowledge, most people have a fair amount of interest in it, and it obviously is important. What’s going on with the climate change ‘debate’ has to do with powerful interests controlling the reins of power. If politicians did what most Americans want, we’d already be investing money and doing research to a far greater degree.

Ignorance is not the problem in that case. But it’s different with other topics. I’ve noticed how lead toxicity and high inequality may fall victim to ignorance more, in that for some reason they don’t get the same kind of attention, as they aren’t looming threats in the way climate change is. In one post, I called lead toxicity a hyperobject to describe its pervasive invisibility. Temperature can be felt and a storm can be watched, but lead in your air, water, and soil comes across as an abstraction since we have no way to concretely perceive it. Even the lead in your child’s brain shows no outward signs, other than the kid having a slightly lower IQ and some behavioral issues.

Nonetheless, I’m not sure that is a problem of knowledge. Would teaching about lead toxicity actually make it more viscerally real? Maybe not. That’s a tough one. If you asked most people, they probably already know about the dangers of lead toxicity in a general sense, and they already know about specific places with high rates, but they probably don’t grasp how widespread it is across so many communities, especially when it comes to toxicity in general, such as toxic dumps. I don’t know what would make it seem more real.

Lead, as tiny particles, doesn’t only hide in the environment but hides in the body, where it wreaks havoc slowly and in many small ways. Your kid gets into a fight and has trouble at school. The first thought most parents have is simple concern for treating the behavior and the hurt the child is expressing. It doesn’t usually occur to them that something might be damaging their child’s brain, nervous system, etc. All the parent sees is the change in their child’s behavior. Knowledge, on the personal level, may or may not help that parent. Lead toxicity is often a larger environmental problem. What is really needed is a change in public policy. That would require not only knowledge, as politicians probably already know of this problem, but some other force of political will in the larger society. But since it’s mostly poor people harmed, nothing is done.

It’s hard to know how knowledge by itself makes a difference. It’s not as if there haven’t been major pieces on lead toxicity published in the mainstream media, some of them quite in-depth. But the reporting comes and goes. It’s quickly forgotten again, as if it were just some minor, isolated problem of no greater concern. There definitely is no moral panic about it. Other than for a few parents in poor communities who live with the most severe consequences, it isn’t even seen as a moral issue at all.

That is what seems lacking, a sense of moral outrage and moral responsibility. I guess that is where, in my own thinking, self-understanding comes in. Morality is a deeper issue. Some of these thinkers on the mind and brain (McGilchrist, Jaynes, etc) are directly touching upon what makes the heart of morality beat. It’s not about something like brain hemispheres understood in isolation but how that relates to consciousness and identity, relates to the voices we listen to and the authority they hold. And, yes, this requires understanding a bit of science. So, how do we make this knowledge accessible and compelling, how do we translate it into common experience?

Take the other example. What about high inequality? In a way, it’s a hot topic and has grabbed public attention with Thomas Piketty, Kate Pickett, and Richard Wilkinson. Everyone knows it’s a problem. Even those on the political right are increasingly acknowledging it, such as in the recent book Alienated America by the conservative Timothy Carney, who works for a right-wing think tank. The knowledge is sort of there and yet not really. Americans, in theory, have little tolerance for high inequality. The problem is that, as the data shows, most Americans simply don’t realize how bad it’s gotten. Our present inequality is magnitudes beyond what the majority thinks should be allowable. Yet we go on allowing it. More knowledge, in that case, definitely would matter. But without the moral imperative, the sense of value of that knowledge remains elusive.

As for brain hemispheres, I suppose that seems esoteric to the average person. Even most well-educated people don’t likely take it seriously. Should they? I don’t know. It seems important to me, but I’m biased as this is an area of personal interest. I can make an argument that this kind of thing might be among the most important knowledge, since it cuts to the core of every other problem. Understanding how our brain-mind works underlies understanding anything and everything else, and it would help to explain what is going so wrong with the world in general. Knowledge of the brain-mind is knowledge about what makes knowledge possible at all, in any area. I suspect that, as long as our self-knowledge is lacking, to that degree any attempt at solving problems will be impotent or at least severely crippled.

Would discussing more about brain hemispheres and related info in the public sphere help with the situation? Maybe or maybe not. But it seems like the type of thing we should be doing, in raising the level of discussion in general. Brain research might not be a good place to start with our priorities. If so, then we need to find how to promote greater psychological and neurocognitive understanding in some other way. This is why I’m always going on about Jaynes, even though he seems like an obscure thinker. In my opinion, he may be one of the most important thinkers in the 20th century and his theories might hold the key to the revolution of the mind that we so sorely need. Then again, I could be giving him too much praise. It’s just that I doubt the world would be worse off for having more knowledge of this variety, not just knowledge but profound insight.

All in all, it’s a tough situation. Even if Jaynes’ book were made required reading in every school, I don’t know that it would translate into anything beneficial. It would have to be part of a larger public debate going on in society. Before that can happen, we will probably need to hit a crisis that reaches the level of catastrophe. Then moral panic will follow and, assuming we avoid the disaster of authoritarianism, we might finally be able to have some serious discussion across society about what matters most. I guess that goes back to the context of knowledge, that which transmutes mere info into meaning.

* * *

Here is an interesting question: How does knowledge become common knowledge? That relates to what I mentioned in another comment. How does knowledge become meaning? Or to put it another way: How does the abstract become concretely, viscerally, and personally real? A lot of knowledge has made this shift. So much of the kind of elite education that once would have been limited to aristocracy and monks has now become increasingly common. Not that long ago, most Americans were illiterate and had next to no education. Or consider, as I pointed out, how the skills of abstract and critical thinking (fluid intelligence) have increased drastically.

We can see this in practical ways. People in general have more basic knowledge about the world around them. When Japan attacked Pearl Harbor, most Americans had little concept of where Japan was. We like to think Americans’ grasp of geography is bad, and it may be, but it used to be far worse. Now most people have enough knowledge to, with some comprehension, follow a talk or read an article on genetics, solar flares, ocean currents, etc. We’ve become a scientific-minded society where there is a basic familiarity. It comes naturally to think about the world in scientific terms, to such an extent that we now worry about scientific reductionism. No one worried about society being overtaken by scientific reductionism centuries ago.

Along with this, modern people have become more psychologically-minded. We think in terms of consciousness and unconsciousness, motives and behavior, cognitive biases and mental illnesses, personality traits and functions, and on and on. We have so internalized psychological knowledge that we simply take it for reality now. It’s similar with sociology. The idea of race as a social construction was limited to the rarefied work of a few anthropologists, but now this is a common understanding that is publicly debated. Even something as simple as socioeconomic classes was largely unknown in the past, as it wasn’t how most people thought. My mother didn’t realize she was part of a socioeconomic class until she went to college and was taught about it in a sociology class.

That is what I’m hoping for, in terms of brain research and consciousness studies. This kind of knowledge needs to get over the hurdle of academia and spread out into the public mind. This is already happening. Jaynes’ ideas influenced Philip Pullman’s His Dark Materials, which has been made into an HBO show. His ideas were directly discussed in another HBO show, Westworld, which caused a flurry of articles in the popular media. He also influenced Neal Stephenson in writing Snow Crash, also being made into a show, originally planned by Netflix but now picked up by HBO. I might take the superficial view of brain hemispheres as a positive sign. It means the knowledge is slowly spreading out into the general public. It’s an imperfect process and initially involves some misinformation, but that is how all knowledge spreads. It’s nothing new. For all the misinformation, the general public is far less ignorant about brain hemispheres than they were fifty or a hundred years ago.

Along with the misinformation, genuine information is also becoming more common. This will eventually contribute to changing understandings and attitudes. Give it a generation or two and I’m willing to bet much of what McGilchrist is talking about will have made that transition into common knowledge, incorporated into the average person’s general worldview. But it’s a process. And we can only promote that process by talking about it. That means confronting misinformation as it shows up, not avoiding the topic for fear of misinformation. Does that make sense?

Dietary Risk Factors for Heart Disease and Cancer

Based on a study of 42 European countries, a recent scientific paper reported that, “the highest CVD [cardiovascular disease] prevalence can be found in countries with the highest carbohydrate consumption, whereas the lowest CVD prevalence is typical of countries with the highest intake of fat and protein.” And that, “The positive effect of low-carbohydrate diets on CVD risk factors (obesity, blood lipids, blood glucose, insulin, blood pressure) is already apparent in short-term clinical trials lasting 3–36 months (58) and low-carbohydrate diets also appear superior to low-fat diets in this regard (36, 37).” Basically, for heart health, this would suggest eating more full-fat dairy, eggs, meat, and fish while eating less starches, sugar, and alcohol. That is to say, follow a low-carb diet. That doesn’t mean any low-carb diet will do, though, for the focus is on animal foods.

By the way, when you dig into the actual history of the Blue Zones (healthy, long-lived populations), what you find is that their traditional diets included large portions of animal foods, including animal fat (Blue Zones Dietary Myth, Eat Beef and Bacon!, Ancient Greek View on Olive Oil as Part of the Healthy Mediterranean Diet). The longest-lived society in the entire world, in fact, is also the one with the highest meat consumption per capita, even more than Americans. What society is that? Hong Kong. In general, nutrition studies in Asia have long shown that those eating more meat have the best health outcomes. This contradicts earlier Western research, as we’re dealing with how the healthy user effect manifests differently according to culture. But even in the West, the research is ever more falling in line with the Eastern research, such as with the study I quoted above. And that study is far from being the only one (Are ‘vegetarians’ or ‘carnivores’ healthier?).

This would apply to both meat-eaters and vegetarians, as even vegetarians could put greater emphasis on nutrient-dense animal foods. It is specifically saturated fat and animal proteins that were most strongly associated with better health, both of which could be obtained from dairy and eggs. Vegans, on the other hand, would obviously be deficient in this area. But certain plant foods (tree nuts, olives, citrus fruits, low-glycemic vegetables, and wine, though not distilled beverages) also showed some benefit. Considering plant foods, those specifically associated with greater risk of heart disease, strokes, etc. were those high in carbohydrates, such as grains. Unsurprisingly, sunflower oil was a risk factor, probably related to seed oils being inflammatory and oxidative (not to mention mutagenic); but oddly onions were likewise implicated, if only weakly. Other foods showed up in the data, but the above were the most interesting and important.

Such correlations, of course, can’t prove causation. But it fits the accumulating evidence: “These findings strikingly contradict the traditional ‘saturated fat hypothesis’, but in reality, they are compatible with the evidence accumulated from observational studies that points to both high glycaemic index and high glycaemic load (the amount of consumed carbohydrates × their glycaemic index) as important triggers of CVDs. The highest glycaemic indices (GI) out of all basic food sources can be found in potatoes and cereal products, which also have one of the highest food insulin indices (FII) that betray their ability to increase insulin levels.” All of that seems straightforward, according to the overall data from nutrition studies (see: Uffe Ravnskov, Richard Smith, Robert Lustig, Eric Westman, Ben Bikman, Gary Taubes, Nina Teicholz, etc). About saturated fat not being linked to CVD risk, Andrew Mente discusses a meta-analysis he worked on and another meta-analysis by another group of researchers, Siri-Tarino PW et al (New Evidence Reveals that Saturated Fat Does Not Increase the Risk of Cardiovascular Disease). Likewise, many experts no longer see cholesterol as a culprit either (Uffe Ravnskov et al, LDL-C does not cause cardiovascular disease: a comprehensive review of the current literature).
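The glycaemic-load arithmetic quoted above is simple enough to sketch. Here is a minimal illustration in Python, using the common convention of dividing by 100 and purely hypothetical round numbers, not figures from the study:

```python
# Glycaemic load (GL) = grams of available carbohydrate x glycaemic index / 100.
# All figures below are illustrative approximations, not values from the paper.

def glycaemic_load(carb_grams, glycaemic_index):
    """Return the glycaemic load for a single serving."""
    return carb_grams * glycaemic_index / 100

# Hypothetical per-serving examples: (carb grams, GI)
foods = {
    "baked potato": (37, 85),
    "white bread":  (15, 75),
    "lentils":      (20, 30),
    "walnuts":      (2, 15),
}

# Rank foods from highest to lowest glycaemic load:
for name, (carbs, gi) in sorted(foods.items(),
                                key=lambda kv: -glycaemic_load(*kv[1])):
    print(f"{name:12s} GL = {glycaemic_load(carbs, gi):5.1f}")
```

Ranking foods this way is what puts potatoes and cereal products at the top: they combine a high glycaemic index with a large carbohydrate dose per serving, whereas nuts and low-glycemic vegetables score low on both counts.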

Yet one other odd association was discovered: “In fact, our ecological comparison of cancer incidence in 39 European countries (for 2012; (59)) can bring another important argument. Current rates of cancer incidence in Europe are namely the exact geographical opposite of CVDs (see Fig. 28). In sharp contrast to CVDs, cancer correlates with the consumption of animal food (particularly animal fat), alcohol, a high dietary protein quality, high cholesterol levels, high health expenditure, and above average height. These contrasting patterns mirror physiological mechanisms underlying physical growth and the development of cancer and CVDs (60). The best example of this health paradox is again that of French men, who have the lowest rates of CVD mortality in Europe, but the highest rates of cancer incidence. In other words, cancer and CVDs appear to express two extremes of a fundamental metabolic disbalance that is related to factors such as cholesterol and IGF-1 (insulin-like growth factor).”

That is an argument people have made, but it’s largely been theoretical. In response, others have argued the opposite position (High vs Low Protein, Too Much Protein?, Gundry’s Plant Paradox and Saladino’s Carnivory, Carcinogenic Grains). It’s true that, for example, eating meat increases IGF-1, at least temporarily. Then again, eating in general does the same. And on a diet low enough in carbs, it’s been shown in studies that people naturally reduce their calorie intake, which would reduce IGF-1. And for really low-carb, the ketogenic diet is specifically defined as being low in animal protein while higher in fat. A low-carb diet is not necessarily a high animal-protein diet, especially when combined with intermittent fasting such as OMAD (one meal a day) with long periods of downregulated IGF-1. Also, this study didn’t appear to include plant proteins in the data, and so we don’t know if eating lots of soy, hemp protein powder, etc. would show similar results; although nuts were mentioned in the report as being similar to meat in correlating with CVD health but, as far as I know, not mentioned in terms of cancer. What would make animal proteins more carcinogenic than plant proteins or, for that matter, plant carbohydrates? The hypothetical mechanism is not clear.

This anomaly would’ve been more interesting if the authors had surveyed the research literature. It’s hard to know what to make of it since other studies have pointed to the opposite conclusion, that the risks of these two are closely linked, rather than being inversely associated: “Epidemiologically, a healthy lifestyle lessens the risk of both cardiovascular disease and cancer, as first found in the Nurses’ Health study” (Lionel Opie, Cancer and cardiovascular disease; see Rob M. Van Dam, Combined impact of lifestyle factors on mortality). “Research has shown there are interrelationships among type 2 diabetes, heart disease, and cancer. These interrelationships may seem coincidental and based only on the fact these conditions share common risk factors. However, research suggests these diseases may relate to one another in multiple ways and that nutrition and lifestyle strategies used to prevent and manage these diseases overlap considerably” (Karen Collins, The Cancer, Diabetes, and Heart Disease Link).

Yet other researchers did find the same inverse relationship: “We herein report that, based on two separate medical records analysis, an inverse correlation between cancer and atherosclerosis” (Matthew Li et al, If It’s Not One Thing, It’s Another). But there was an additional point: “We believe that the anti-inflammatory aspect of cancer’s pan-inflammatory response plays an important role towards atherosclerotic attenuation.” Interesting! In that case, one of the key causal mechanisms to be considered is inflammation. Some diets high in animal proteins would be inflammatory, such as the Standard American Diet, whereas others would be anti-inflammatory. Eliminating seed oils (e.g., sunflower oil) would by itself reduce inflammation. Reducing starches and sugar would help as well. So, is it the meat that increases cancer or is it what the meat is being cooked in or eaten with? That goes back to the healthy and unhealthy user effects.

As this confounding factor is central, we might want to consider the increasingly common view that inflammation is involved in nearly every major disease. “For example, inflammation causes or is a causal link in many health problems or otherwise seen as an indicator of health deterioration (arthritis, depression, schizophrenia, etc), but inflammation itself isn’t the fundamental cause since it is a protective response itself to something else (allergens, leaky gut, etc). Or as yet another example, there is the theory that cholesterol plaque in arteries doesn’t cause the problem but is a response to it, as the cholesterol is essentially forming a scab in seeking to heal injury. Pointing at cholesterol would be like making accusations about firefighters being present at fires” (Coping Mechanisms of Health).

What exacerbates or moderates inflammation will be pivotal to overall health (Essentialism On the Decline), especially the nexus of disease called metabolic syndrome/derangement or what used to be called syndrome X: insulin resistance, diabetes, obesity, heart disease, strokes, etc. In fact, other researchers point directly to inflammation as being a common factor of CVD and cancer: “Although commonly thought of as two separate disease entities, CVD and cancer possess various similarities and possible interactions, including a number of similar risk factors (e.g. obesity, diabetes), suggesting a shared biology for which there is emerging evidence. While chronic inflammation is an indispensible feature of the pathogenesis and progression of both CVD and cancer, additional mechanisms can be found at their intersection” (Ryan J. Koene et al, Shared Risk Factors in Cardiovascular Disease and Cancer). But it might depend on the specific conditions how inflammation manifests as disease — not only CVD or cancer but also arthritis, depression, Alzheimer’s, etc.

This is the major downfall of nutrition studies, as the experts in the field find themselves hopelessly mired in a replication crisis. There is too much contradictory research and, when much of the research has been repeated, it simply did not replicate. That is to say, much of it is simply wrong or misinterpreted. And as few have attempted to replicate much of it, we aren’t entirely sure what is valid and what is not. That further problematizes meta-analyses, despite how potentially powerful that tool can be when working with quality research. The study I’ve been discussing here was an ecological study and that has its limitations. The researchers couldn’t disentangle all the major confounding factors, much less control for them in the first place, as they were working with data across decades that came from separate countries. Even so, it’s interesting and useful info to consider. And keep in mind that almost all official dietary recommendations are based on observational (associative, correlative, epidemiological) studies with far fewer controls. This is the nature of the entire field of nutrition studies, as long-term randomized and controlled studies on humans are next to impossible to do.
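The core limitation of an ecological design can be made concrete with a toy sketch of the ecological fallacy. The numbers below are hypothetical, not data from the study: within every “country” the individual-level association between exposure and outcome is negative, yet the correlation computed on country averages comes out perfectly positive.

```python
# Toy illustration of the ecological fallacy (hypothetical numbers only).

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Individuals as (exposure, outcome) pairs in three hypothetical countries:
countries = [
    [(1, 5), (2, 4), (3, 3)],    # country A: low exposure overall
    [(4, 8), (5, 7), (6, 6)],    # country B: medium exposure
    [(7, 11), (8, 10), (9, 9)],  # country C: high exposure
]

# Within each country, the individual-level correlation is -1.0:
for people in countries:
    xs, ys = zip(*people)
    print("within-country r =", round(pearson(xs, ys), 2))

# But the ecological (country-average) correlation is +1.0:
mean_x = [sum(x for x, _ in c) / len(c) for c in countries]
mean_y = [sum(y for _, y in c) / len(c) for c in countries]
print("ecological r =", round(pearson(mean_x, mean_y), 2))
```

An ecological correlation, however strong, therefore constrains but cannot settle individual-level causation, which is why unmeasured confounding across countries matters so much in studies like this one.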

So, as always, qualifications must be made. The study’s authors state that, “In items of smaller importance (e.g. distilled beverages, sunflower oil, onions), the results are less persuasive and their interpretation is not always easy and straightforward. Similar to observational studies, our ecological study reflects ‘real-world data’ and cannot always separate mutual interactions among the examined variables. Therefore, the reliance on bivariate correlations could lead to misleading conclusions. However, some of these findings can be used as a starting point of medical hypotheses, whose validity can be investigated in controlled clinical trials.” Nonetheless, “The reasonably high accuracy of the input data, combined with some extremely high correlations, together substantially increase the likelihood of true causal relationships, especially when the results concern principal components of food with high consumption rates, and when they can be supported by other sources.”

This data is meaningful in offering strong supporting evidence. The finding about animal foods and starchy foods is the main takeaway, however tentative the conclusion may be for real world application, at least in taking this evidence in isolation. But the inverse correlation of CVD risk and cancer risk stands out and probably indicates confounders across populations, and that would be fertile territory for other researchers to explore. The main importance of this study is less in the specifics and more in how it further challenges the broad paradigm that has dominated nutrition studies for the past half century or so. The most basic point is that the diet-heart hypothesis simply doesn’t make sense of the evidence and it never really did. When the hypothesis was first argued, heart disease was going up precisely at the moment saturated fat intake was going down, since seed oils had replaced lard as the main fat source in the decades prior. Interestingly, lard has been a common denominator among most long-lived populations, from the Okinawans to Rosetans (Ancient Greek View on Olive Oil as Part of the Healthy Mediterranean Diet, Blue Zones Dietary Myth).

This study is further support for a new emerging understanding, as seen with the American Heart Association backing off from its earlier position (Slow, Quiet, and Reluctant Changes to Official Dietary Guidelines). Fat is not the enemy of humanity, as seen with the high-fat ketogenic diet where fat is used as the primary fuel, instead of carbohydrates (Ketogenic Diet and Neurocognitive Health, The Ketogenic Miracle Cure, The Agricultural Mind). In fact, we wouldn’t be here without fat, as it is the evolutionary and physiological norm, specifically in terms of low-carb (Is Ketosis Normal?, “Is keto safe for kids?”). Indeed, it used to be common knowledge that too many carbohydrates are unhealthy (American Heart Association’s “Fat and Cholesterol Counter” (1991)). Consensus shifted a half century ago, the last time low-carb diets were part of mainstream thought, and now we are shifting back the other way. The old consensus will be new again.

* * *

Carbohydrates, not animal fats, linked to heart disease across 42 European countries
by Keir Watson

Key findings

  • Cholesterol levels were tightly correlated to the consumption of animal fats and proteins – Countries consuming more fat and protein from animal sources had higher incidence of raised cholesterol
  • Raised cholesterol correlated negatively with CVD risk – Countries with higher levels of raised cholesterol had fewer cases of CVD deaths and a lower incidence of CVD risk factors
  • Carbohydrates correlated positively with CVD risk – the more carbohydrates consumed (and especially those with high GI such as starches) the more CVD
  • Fat and Protein correlated negatively with CVD risk – Countries consuming more fat and protein from animal and plant sources had less CVD. The authors speculate that this is because increasing fat and protein in the diet generally displaces carbohydrates.

Food consumption and the actual statistics of cardiovascular diseases: an epidemiological comparison of 42 European countries
Pavel Grasgruber,* Martin Sebera, Eduard Hrazdira, Sylva Hrebickova, and Jan Cacek

Results

We found exceptionally strong relationships between some of the examined factors, the highest being a correlation between raised cholesterol in men and the combined consumption of animal fat and animal protein (r=0.92, p<0.001). The most significant dietary correlate of low CVD risk was high total fat and animal protein consumption. Additional statistical analyses further highlighted citrus fruits, high-fat dairy (cheese) and tree nuts. Among other non-dietary factors, health expenditure showed by far the highest correlation coefficients. The major correlate of high CVD risk was the proportion of energy from carbohydrates and alcohol, or from potato and cereal carbohydrates. Similar patterns were observed between food consumption and CVD statistics from the period 1980–2000, which shows that these relationships are stable over time. However, we found striking discrepancies in men’s CVD statistics from 1980 and 1990, which can probably explain the origin of the ‘saturated fat hypothesis’ that influenced public health policies in the following decades.

Conclusion

Our results do not support the association between CVDs and saturated fat, which is still contained in official dietary guidelines. Instead, they agree with data accumulated from recent studies that link CVD risk with the high glycaemic index/load of carbohydrate-based diets. In the absence of any scientific evidence connecting saturated fat with CVDs, these findings show that current dietary recommendations regarding CVDs should be seriously reconsidered. […]

Irrespective of the possible limitations of the ecological study design, the undisputable finding of our paper is the fact that the highest CVD prevalence can be found in countries with the highest carbohydrate consumption, whereas the lowest CVD prevalence is typical of countries with the highest intake of fat and protein. The polarity between these geographical patterns is striking. At the same time, it is important to emphasise that we are dealing with the most essential components of the everyday diet.

Health expenditure – the main confounder in this study – is clearly related to CVD mortality, but its influence is not apparent in the case of raised blood pressure or blood glucose, which depend on the individual lifestyle. It is also difficult to imagine that health expenditure would be able to completely reverse the connection between nutrition and all the selected CVD indicators. Therefore, the strong ecological relationship between CVD prevalence and carbohydrate consumption is a serious challenge to the current concepts of the aetiology of CVD.

The positive effect of low-carbohydrate diets on CVD risk factors (obesity, blood lipids, blood glucose, insulin, blood pressure) is already apparent in short-term clinical trials lasting 3–36 months (58) and low-carbohydrate diets also appear superior to low-fat diets in this regard (36, 37). However, these findings are still not reflected by official dietary recommendations that continue to perpetuate the unproven connection between saturated fat and CVDs (25). Understandably, because of the chronic nature of CVDs, the evidence for the connection between carbohydrates and CVD events/mortality comes mainly from longitudinal observational studies and there is a lack of long-term clinical trials that would provide definitive proof of such a connection. Therefore, our data based on long-term statistics of food consumption can be important for the direction of future research.

In fact, our ecological comparison of cancer incidence in 39 European countries (for 2012; (59)) can bring another important argument. Current rates of cancer incidence in Europe are namely the exact geographical opposite of CVDs (see Fig. 28). In sharp contrast to CVDs, cancer correlates with the consumption of animal food (particularly animal fat), alcohol, a high dietary protein quality, high cholesterol levels, high health expenditure, and above average height. These contrasting patterns mirror physiological mechanisms underlying physical growth and the development of cancer and CVDs (60). The best example of this health paradox is again that of French men, who have the lowest rates of CVD mortality in Europe, but the highest rates of cancer incidence. In other words, cancer and CVDs appear to express two extremes of a fundamental metabolic disbalance that is related to factors such as cholesterol and IGF-1 (insulin-like growth factor).

Besides total fat and protein consumption, the most likely preventive factors emerging in our study include fruits (particularly citrus fruits), wine, high-fat dairy products (especially cheese), sources of plant fat (tree nuts, olives), and potentially even vegetables and other low-glycaemic plant sources, provided that they substitute high-glycaemic foods. Many of these foodstuffs are the traditional components of the ‘Mediterranean diet’, which again strengthens the meaningfulness of our results. The factor analysis (Factor 3) also highlighted coffee, soybean oil and fish & seafood, but except for the fish & seafood, the rationale of this finding is less clear, because coffee is strongly associated with fruit consumption and soybean oil is used for various culinary purposes. Still, some support for the preventive role of coffee does exist (61) and hence, this observation should not be disregarded.

Similar to the “Mediterranean diet”, the Dietary Approaches to Stop Hypertension (DASH) diet, which is based mainly on fruits, vegetables, and low-fat dairy, also proved to be quite effective (62). However, our data indicate that the consumption of low-fat dairy may not be an optimal strategy. Considering the unreliability of observational studies highlighting low-fat dairy and the existence of strong bias regarding the intake of saturated fat, the health effect of various dairy products should be carefully tested in controlled clinical studies. In any case, our findings indicate that citrus fruits, high-fat dairy (such as cheese) and tree nuts (walnuts) constitute the most promising components of a prevention diet.

Among other potential triggers of CVDs, we should especially stress distilled beverages, which consistently correlate with CVD risk, in the absence of any relationship with health expenditure. The possible role of sunflower oil and onions is much less clear. Although sunflower oil consistently correlates with stroke mortality in the historical comparison and creates very productive regression models with some correlates of the actual CVD mortality, it is possible that both these food items mirror an environment that is deficient in some important factors correlating negatively with CVD risk.

A very important case is that of cereals because whole grain cereals are often propagated as CVD prevention. It is true that whole grain cereals are usually characterised by lower GI and FII values than refined cereals, and their benefits have been documented in numerous observational studies (63), but their consumption is also tied with a healthy lifestyle. All the available clinical trials have been of short duration and have produced inconsistent results indicating that the possible benefits are related to the substitution of refined cereals for whole grain cereals, and not because of whole grain cereals per se (64, 65). Our study cannot differentiate between refined and unrefined cereals, but both are highly concentrated sources of carbohydrates (~70–75% weight, ~80–90% energy) and cereals also make up ~50% of CA energy intake in general. To use an analogy with smoking, a switch from unfiltered to filtered cigarettes can reduce health risks, but this fact does not mean that filtered cigarettes should be propagated as part of a healthy lifestyle. In fact, even some unrefined cereals [such as the ‘whole-meal bread’ tested by Bao et al. (32)] have high glycaemic and insulin indices, and the values are often unpredictable. Therefore, in the light of the growing evidence pointing to the negative role of carbohydrates, and considering the lack of any association between saturated fat and CVDs, we are convinced that the current recommendations regarding diet and CVDs should be seriously reconsidered.

Battle of Voices of Authorization in the World and in Ourselves

New Feelings: Podcast Passivity
by Suzannah Showler

My concern is that on some level, I’m prone to mistake any voice that pours so convincingly into my brain for my own. And maybe it’s not even a mistake, per se, so much as a calculated strategy on the part of my ego to maintain its primacy, targeting and claiming any foreign object that would stray so far into the inner-sanctum of my consciousness. Whether the medium is insidious, my mind a greedy assimilation machine, or both, it seems that at least some of the time, podcasts don’t just drown out my inner-monologue — they actually overwrite it. When I listen to a podcast, I think some part of me believes I’m only hearing myself think.

Twentieth-century critics worried about this, too. Writing sometime around the late 1930s, Theodore Adorno theorized that a solitary listener under the influence of radio is vulnerable to persuasion by an anonymous authority. He writes: “The deeper this [radio] voice is involved within his own privacy, the more it appears to pour out of the cells of his more intimate life; the more he gets the impression that his own cupboard, his own photography, his own bedroom speaks to him in a personal way, devoid of the intermediary stage of the printed words; the more perfectly he is ready to accept wholesale whatever he hears. It is just this privacy which fosters the authority of the radio voice and helps to hide it by making it no longer appear to come from outside.”

I’ll admit that I have occasionally been gripped by false memories as a result of podcasts — been briefly sure that I’d seen a TV show I’d never watched, or convinced that it was a friend, not a professional producer, who told me some great anecdote. But on the whole, my concern is less that I am being brainwashed and more that I’m indulging in something deeply avoidant: filling my head with ideas without actually having to do the messy, repetitive, boring, or anxious work of making meaning for myself. It’s like downloading a prefabbed stream of consciousness and then insisting it’s DIY. The effect is twofold: a podcast distracts me from the tedium of being alone with myself, while also convincingly building a rich, highly-produced version of my inner life. Of course that’s addictive — it’s one of the most effective answers to loneliness and self-importance I can imagine.

Being Your Selves: Identity R&D on alt Twitter
by Aaron Z. Lewis

Digital masks are making the static and immortal soul of the Renaissance seem increasingly out of touch. In an environment of info overload, it’s easy to lose track of where “my” ideas come from. My brain is filled with free-floating thoughts that are totally untethered from the humans who came up with them. I speak and think in memes — a language that’s more like the anonymous manuscript culture of medieval times than the individualist Renaissance era. Everything is a remix, including our identities. We wear our brains outside of our skulls and our nerves outside our skin. We walk around with other people’s voices in our heads. The self is in the network rather than a node.

The ability to play multiple characters online means that the project of crafting your identity now extends far beyond your physical body. In his later years, McLuhan predicted that this newfound ability would lead to a society-wide identity crisis:

The instant nature of electric-information movement is decentralizing — rather than enlarging — the family of man into a new state of multitudinous tribal existences. Particularly in countries where literate values are deeply institutionalized, this is a highly traumatic process, since the clash of old segmented visual culture and the new integral electronic culture creates a crisis of identity, a vacuum of the self, which generates tremendous violence — violence that is simply an identity quest, private or corporate, social or commercial.

As I survey the cultural landscape of 2020, it seems that McLuhan’s predictions have unfortunately come true. More than ever before, people are exposed to a daily onslaught of world views and belief systems that threaten their identities. Social media has become the battlefield for a modern-day Hobbesian war of all-against-all. And this conflict has leaked into the allegedly “offline” world.

Alienated Middle Class Whites

I’ve been reading Timothy Carney’s book Alienated America, which came out this past year (I’ve already posted about it). Like so many others, it’s about what brought us to a Trump presidency. But this particular piece of journalism stands out from the crowd, albeit not a difficult achievement. I’m giving the author extra credit points because he is somewhat balanced. For a conservative (paleo-libertarian?) henchman of the American Enterprise Institute living in Washington, D.C., he surprisingly brings up a number of points a left-winger could at least partly agree with.

Looking through the book, I kept expecting to be more critical. The political right bias was there, but Carney also drew upon the views of the political left, if not to the degree and depth I’d have preferred. He discusses the history of racism, gender bias, privilege, etc. Then he puts this in the context of the problems of conservative nostalgia and revanchism. He takes some pointed jabs at the right, although he plays to the frame of ‘moderation’ in believing the truth is somewhere to be found in a hypothetical ‘centrism’, such as his critique of both individualism and collectivism or his critique of both big gov and big biz.

In giving the political left credit, he admits the importance of economic factors, such as rising inequality, and he also brings up the problems of segregation and mistrust. But he is completely unaware that diversity only leads to loss of trust when combined with segregation (Eric Uslaner, Segregation and Mistrust). Nor does he appreciate how far-reaching the effects of inequality are (Keith Payne, The Broken Ladder; Richard Wilkinson & Kate Pickett, The Inner Level). His view is not superficial or lacking in nuance, even as he remains trapped in capitalist realism. But he is coming from a more or less conventional worldview, no matter how far he stretches its boundaries, although admittedly he does bring some good points to the table (The Right’s Lena Dunham Fallacy).

Here is the basic limitation. He constantly associates one positive factor with another in the assumption that the link is causal and goes in the direction that fits his beliefs, but he rarely if ever goes beyond correlation and he doesn’t acknowledge all the immense data and examples that contradict his assumptions and conclusions. Consider Scandinavians who show better results on numerous measures: poverty, unemployment, inequality, small business ownership, patents per capita, education, health, etc. They do this with highly conformist and collectivist societies with centralized welfare states and without the kind of civic participation seen in the US; for example, schools are operated professionally by highly trained and heavily unionized teachers, and parents don’t belong to an equivalent of a PTA or typically volunteer at their children’s schools. Yet it can be argued they somehow have a stronger and healthier form of individualism (Anu Partanen, The Nordic Theory of Everything). Such examples show that Edmund Burke’s “small platoons” can be as large and centralized as a highly advanced modern nation-state. It is true they are smaller nation-states, but large enough to have ambassadors, international policy, maintain militaries, and be allies with global superpowers.

Carney barely discusses anything outside of the United States. As I recall, he mentions Scandinavia once or twice and even then only in passing. Scandinavia undermines every aspect of his conclusions. That is the problem. He covers a lot of material and, for a mainstream writer, it is reasonably comprehensive as non-academic popular writing. But he never quite brings it together and he offers no meaningful solutions. What could have been a more worthy book stopped short of challenging the paradigm itself and pushing to an entirely different perspective and level of insight. Instead, he offers an obsession with social conservatism, if slightly more interesting than the standard approach. He makes a decent argument for what it is, maybe one of the better mainstream conservative arguments I’ve come across. He actually engages with diverse info. If nothing else, it will serve the purpose of introducing conservatives and right-wingers to a wealth of info and ideas they otherwise would never see.

I’m not sure I can hold the limitations against the author. Even if it fails in the end, it doesn’t fail to a greater degree than is expected. The analysis is adequate and, within the chosen framework, it was inevitable that it couldn’t really go anywhere beyond known territory. Even so, I really did appreciate how much space he gave to a topic like inequality. An example of where it falls short is not even touching on the saddest of inequalities, that of environment and health. It’s not merely that the poor don’t have access to green spaces and nice schools. The poor are literally being poisoned by lead in old pipes and toxic dumps located in poor communities. The oppressed poor aren’t accidental victims; their communities were intentionally destroyed by design, in the kind of capitalism we have that makes profit by devouring ‘social capital’. Still, it’s amazing how much he is willing to speak of, considering who employs him and who his likely audience is, but it ends up feeling like a wad of loose threads. The edges of his argument are as frayed as the social fabric he details. There is no larger context to hold it together, which is to be expected as the author is part of the very same problematic mainstream social order he is attempting to understand… and doing so on the same level at which the problem was created.

Though going far beyond where most on the political right dare to tread, he never fully takes seriously the ideals of basic human rights and moral righteousness nor the democratic values of fairness and justice as being of paramount importance. The entire history of corporatocratic and plutocratic capitalism is that of violence, oppression, and theft. The kind of analysis in Alienated America, no matter how fair-minded and reasonable in intention (if we give the author the benefit of the doubt), doesn’t confront the bloody corpse of the elephant in the room, the reality that capitalism only applies to the poor while the rich get socialism (Trillions Upon Trillions of Dollars). Neither church attendance nor marriage rates could come close to undoing the moral harm. Forget the social fabric. We need to worry about the moral foundation of modern civilization.

As someone harshly put it, “Just a rehash of the same old “Trickle Down Economics” and “Thousand Points of Light” BS. Shrink government down till you can drown it in the bathtub destroying the social safety net while cutting taxes on the wealthy and corporations and miraculously private local organizations will jump in to take care of everything. At least try and come up with a more plausible explanation for the disaster to divert us from the truth that the gangster capitalism the Republican Party has been pushing on America since Reagan” (comment to Andy Smarick’s Where civic life crumbled, Donald Trump arose). I might be slightly more forgiving as I came to it with low expectations, but basically this comment is correct.

Carney’s argument is intellectually reasonable as far as mainstream arguments go, but it lacks a gut-level punch. He remains within the range of respectability, not getting too close to anything that might be mistaken as radical. Envisioning a slightly more friendly capitalism is not exactly a new proposition or overly inspiring. Nonetheless, his refusal to scapegoat individuals and his refusal to think of communities in isolation is refreshing. His focus on alienation is key, even as I personally find Johann Hari (Chasing the Scream & Lost Connections) to be much more probing in getting to the heart of the matter, but that ultimately is just to complain that Carney isn’t a left-winger, not that Hari is extremely radical either.

Where his take offered real clarity was in his dissection of Trump supporters and voters. He does a wonderful takedown of the mainstream narrative that it was the highly religious who were behind Trump’s election. Opposite of this narrative, the facts show that, as church attendance went up in a community, Trump’s voter count went down in that location. His ‘religious’ followers were mostly the unchurched and, interestingly, those lacking in an ethnic identity, as contrasted with traditionally religious and community-minded populations such as Dutch-American Calvinists (Terry Mattingly, Journalists don’t understand religious fault lines in ‘alienated’ America). Yet those unchurched Trump supporters claimed that religion was important to them, apparently as a symbolic issue among those who have otherwise lost meaning in their lives, which seems to be Carney’s takeaway. It reminds me of how school shooters are also concentrated in similar communities and, even when non-religious, the assailants often express religious-like concern for meaning (12 Rules for Potential School Shooters).

He busted another myth in pointing out that core support for Trump, although coming from economically struggling populations, did not specifically come from the poor but rather the wealthier in those communities (yet strangely he kept reinvoking the very myth he disproved and dismantled, in returning his focus to poor whites). This economic class of the relatively comfortable apparently has a troubled relationship with its impoverished ‘neighbors’, either in a fear of them or in a fear of becoming like them, which is to say class anxiety in one way or another. It’s understandable as the middle class has been shrinking, and surely the middle class is shrinking the most in those economically distressed communities. And that would hit white males most of all: as many other demographics (women, minorities, and immigrants) have seen improving economic outcomes over the past half century, white males are now making less than in the past.

On the other hand, the wealthier in wealthier communities are more protected from these problems and so felt no attraction to Trump’s demagoguery; their local economies are less stressed and divided. It indicates that, though Carney didn’t explain it this way, the real problem is inequality and where it is or is not immediately felt. The more well-off communities could either ignore inequality altogether as if it didn’t exist or else treat it as a problem of other people elsewhere. To the economically-segregated elites, inequality is an abstraction that isn’t viscerally real in their immediate experience and so, in the mind of the privileged, it is not personally relevant or morally compelling. But such dissociation can only last for so long as society crumbles all around their walled enclaves — as Keith Payne makes clear, even the rich are stressed, suffer, and become sick under conditions of high inequality. Ultimately, there is no escape from a society gone mad, especially when that society is the leading global superpower.

Where Carney really gets things right is about isolation and alienation. And it doesn’t happen in the way most would expect. Why is this particular middle class white demographic so anxiety-ridden and not other populations? In dealing with everyday needs and problems, Carney writes that, “Trump voters—as compared with Ted Cruz voters, or Bernie or Hillary supporters—answered, “I just rely on myself” the most.” That is quite telling. Sanders won the largest proportion of the poor and working class, far more than Trump. So, similar to how the wealthy in wealthy communities feel greater trust and connection toward their neighbors, so do many of the poor.

Stephen Steinberg writes that, “In her 1973 study All Our Kin, Carol Stack showed how poor single mothers develop a domestic network consisting of that indispensable grandmother, grandfathers, uncles, aunts, cousins, and a patchwork of neighbors and friends who provide mutual assistance with childrearing and the other exigencies of life. By comparison, the prototypical nuclear family, sequestered in a suburban house, surrounded by hedges and cut off from neighbors, removed from the pulsating vitality of poor urban neighborhoods, looks rather bleak. As a black friend once commented, “I didn’t know that blacks had weak families until I got to college.”” (Poor Reason; see Black Families: “Broken” and “Weak”).

So that is what Carney gets wrong. He goes from the isolation and alienation of Trump’s core middle-class supporters to shifting the frame back to the mainstream narrative of it somehow being about the declining white working class, in stating that, “In general, poorer people “tend to be socially isolated,” Putnam found, “even from their neighbors.” That probably is true to some extent, but the point is that it isn’t nearly true to the degree found among the anxious middle class. The poorest of the poor, unlike the upwardly aspiring middle class, are those the least likely to move to seek a job and so are the most likely to remain living near lifelong connections of family, friends, and neighbors.

Yes, poverty has a way of isolating people, such as being constantly busy with working multiple jobs while unable to afford childcare. Nonetheless, even when they don’t have the time to spend with those important social ties, they know that their social network is always there to fall back on in times of dire need. Sure, the rural poor are increasingly isolated quite literally in a geographic sense, as the rural areas empty out with the young moving to the cities. But in spite of the media loving to obsess over these loneliest of the desperate and aging poor, the reality is the vast majority of the poor, specifically poor whites, have lived in urban areas for over a century now. That isn’t to say it isn’t also shitty to be among the urban poor. But the basic point comes down to something odd going on here. The poorest Americans, contrary to expectation, are not the most anxious and are not those turning most to reactionary politics of nostalgia and strong man leadership. Instead, those on the bottom of society tend to be apolitical and disenfranchised, that is to say they usually don’t vote.

How different that is from Trump’s America. Trump was not speaking to those facing the worst economic hardship but those a few rungs above them. Something happened to the middle class to cause them to feel precarious, as if they had been cheated out of a more comfortable and secure lifestyle that they deserved. Maybe they had sacrificed extended family and community in climbing the economic ladder and pursuing their careers, and it turned out the rewards did not match the costs. So, they were left hanging somewhere in between. “Trump voters were significantly less socially connected,” Carney writes. “There’s plenty more data like this, charting the loneliness and social disconnection in Trump’s early core support.” For certain, something is making middle class whites go crazy and not merely those gripping the lowest edge of it (Fractures of a Society Coming Apart). Look at the breakdown of Trump voters, from my post Right-Wing Politics of the Middle Class, and notice it doesn’t fit the narrative spun in the corporate media:

“Trump voters seemed to include many average Americans, although Trump voters were slightly above the national average on wealth. With incomes below $50,000, 52% for Clinton and 41% for Trump. With incomes more than $50,000, 49% for Trump and 47% for Clinton. A large part of Trump’s votes came from the $50,000 to $100,000 income range, i.e., the middle class. The only income level bracket that Trump lost to Clinton was those who make $49,999 and under. Trump’s victory came from the combined force of the middle-to-upper classes. Trump did get strong support from those without a college degree (i.e., some college or less), but then again the vast majority of Americans lack a college degree. It’s easy to forget that even many in the middle class lack college degrees. Factory jobs and construction jobs often pay more than certain professional careers such as teachers and tax accountants. I’m sure a fair number of low-level managers and office workers lack college degrees.

“Among white voters alone, though, Trump won more college-educated than did Clinton. The white middle class went to Trump, including white women with college degrees. Only 1 in 6 Trump voters were non-college-educated whites earning less than $50,000. Ignoring the racial breakdown, Trump overall won 52% of those with some college/associate degree, 45% of college graduates, and 37% with postgraduate study. That is a fairly broad swath. A basic point I’d make is that the majority of Trump voters without a college education work in white collar or middle skill jobs, representing the anxious and precarious lower middle class, but it has been argued that the sense of financial insecurity is more perceived than real. The working class, especially the poor, were far from being Trump’s strongest and most important support, despite their greater financial insecurity. Rather, the Trump voters who played the biggest role were those who fear downward economic mobility, whether or not one deems this fear rational (I tend to see it as being rational, considering a single accident or health condition could easily send into debt many in the lower middle class).”

Of course, Carney is making a more targeted point. He is speaking about Trump’s core support in specifying those who were supporting him from the very beginning of his campaign, prior to the GOP nomination. That core support wasn’t the comfortable upper middle class, but still they were solidly middle class above the common rabble. As he further emphasizes, “recall that Trump’s core supporters weren’t necessarily poorer than other voters. But they lived in places that were worse off, culturally and economically, than other places.” That cuts straight to one of Keith Payne’s main points, the way high inequality can feel like poverty even to those who aren’t poor. Economic stress comes in many forms, not limited to outright economic desperation. Inequality, when pushed to extremes, makes everyone feel shitty. And if the sense of conflict lasts long enough, people begin acting crazy, even crazy enough to vote for demagogues, social dominators, and authoritarians.

If we are to seek the cause of this problem, we should look elsewhere to those concentrations of segregated wealth. “Inequality in the United States is growing,” says Carney in pointing out the obvious. “Economic mobility is low. These facts alone suggest that our elites aren’t sharing the wealth.” That is an interesting conclusion coming from the political right, even to suggest they should share the wealth. Now if the right could only admit that most of that wealth was stolen and so needs to be returned, not merely shared, but such breathtaking honesty is far too much to ask for. We have to take what meager honesty we can get, even if it only gives us a glimpse: “This social inequality, as earlier chapters laid out, was far less in the 1960s (racial and gender inequality were far worse, of course). Between the upper class and the working class, there was a far smaller gap in marriage, in divorce, and in out-of-wedlock births. At the root of it all: In 1960, there was a narrower gap in social connectedness, including church attendance. Today, family life and strong community are increasingly a luxury good. And here we can blame the elites.”

If only social conservatives would take seriously what it means to have made the public good a luxury unaffordable to most of the public. But all we are left with is a diatribe of paternalistic moralizing. We don’t need to get rid of this modern aristocracy, so goes the lament, for the moral failure is that they’ve forgotten their noblesse oblige. They need to return to the founding ideal, as embodied by George Washington, of an enlightened aristocracy. Carney preaches that the economic elite need to once again embrace their role as ruling elite, to return plutocracy back to its aristocratic roots of theocratic patriarchy. The “more pernicious problem” is an “ideological commitment to egalitarianism among elites that prevents them from seeing themselves as elites.” Yeah, that is where we went wrong. The elites aren’t elitist enough and so they aren’t taking seriously their moral responsibility to compassionately rule over their local populations of neo-feudal serfs, instead locking themselves away in the modern equivalent of a castle keep. I’m glad we got that cleared up. That should set the world right again.

* * *

Alienated America
by Timothy P. Carney

A quick reminder, though, as we discuss election results and “Trump Country”: By the general election in 2016, a vast majority of Republicans had come around to Donald Trump. Many would choose anyone but Hillary. Others had grown fond of the man. By the end of Trump’s first couple of years in office, after two Supreme Court picks and a tax cut, many other right-leaning Americans embraced him.

This book isn’t about those later adopters, though. This book has mostly studied the results of the early primaries to sort out who was Trump’s early core support. When we have looked at general election results, we have been most interested in the voters or places that shifted from Democrat to Republican—the voters who would have stayed home or voted Democrat had Trump not been the nominee.

So on this question—who was Trump’s early core support?—different studies found wildly differing results. You may recall those who said “economic anxiety” was the cause, and those who said they could prove that there was no economic anxiety, just racism at the heart of Trump’s earliest support.

What distinguished these two classes of studies? The studies that found no or little connection between economic woe and Trump support were polls of individuals. Those finding that economic woe predicted Trump support were studies of places.

As a Washington Post headline aptly put it: PLACES THAT BACKED TRUMP SKEWED POOR; VOTERS WHO BACKED TRUMP SKEWED WEALTHIER.

This is one reason we couldn’t tell the story of Trump without discussing community. The story of how we got Trump is the story of the collapse of community, which is also the story behind our opioid plague, our labor-force dropouts, our retreat from marriage, and our growing inequality.

The core Trump voters weren’t the people dying, obviously. They weren’t even necessarily the unhealthy ones. They weren’t necessarily the people drawing disability payments or dropping out of the workforce. Trump’s core voters were these people’s neighbors.

Trump’s win—specifically his wins in the early primaries and his outperformance of Mitt Romney—is best explained by his support in places where communities are in disarray. Many traits characterized Trump’s early core supporters. This chapter will explore them, and we will see how closely they are all tied to alienation.

How Trump Voters Are Giving the Right Qualms About Capitalism
by Park MacDougald

Yet if Carney offers a convincingly bleak view of social collapse in working-class America, his explanation for this collapse — and his suggestions for what to do about it — are somewhat less satisfying. Carney channels, to a limited degree, some of the new right-wing market skepticism: He offers a soft criticism of big business for stamping out local variation in the name of standardization and efficiency; he laments the rise of “Taylorism” and its dehumanization of work; he attacks the “gig economy” for not providing workers with stability; he disapproves of suburbanization and the isolation that stems from it; he even quotes Deneen to the effect that capitalism breeds an individualistic mind-set that makes relationships contingent and easily broken. But in explaining the troubles of working-class America, Carney tends to fall back on the collapse of church and community, which he largely attributes to traditional Republican bogeymen such as the welfare state, the sexual revolution, the rise of expressive individualism, and secularization. These explanations are not wrong per se, but they are so large and fuzzily cultural that they resist solutions beyond the local and individual. Carney offers a few policy fixes he thinks might help — reforming the mortgage interest deduction, decentralizing control over public schools — but he admits in his closing chapter that the “solution is mostly: You should go to church. Also, You should start a T-ball team.”

Generally speaking, it probably is a good idea to start a T-ball team. And Carney’s willingness to critique aspects of American capitalism, mild as they may be, represents a marked shift from where the mainstream right was during the Obama years and where some of its leading lights still are. But at the same time, by delivering an account of a country facing full-blown social collapse and then retreating into calls for local, voluntary solutions, Carney ends up restating the basic premises of an old conservative consensus — it’s not the government’s job to fix your problems — that, as a political philosophy, has contributed to the alienation Carney so convincingly describes. It may be true, for instance, that the state is ill equipped to re-create devastated communities, but it is also true that state policy has enabled or even accelerated their devastation, and not merely in the sense that overregulation has hurt small businesses or that the welfare state has crowded out private charity.

Rising international economic competition, for instance, was always going to hurt the American working class. But as critics on both the left and the right have pointed out, globalization has been systematically tilted in favor of the mobile and highly educated. The critic Michael Lind, for instance, notes that the international harmonization of economic rules has focused on tariffs, financial liberalization, and intellectual property while avoiding areas that would benefit the Western working classes, such as wages, labor standards, and tax laws. Even some of the more diffuse cultural shifts lamented by conservatives have been midwifed by the state. As Harvard Law professors Jacob Gersen and Jeannie Suk Gersen have argued in their study of the evolution of Title IX, civil-rights laws designed to protect women’s equal access to education have created, through bureaucratic drift and activist institutional capture, a vast federal regulatory apparatus that treats socialization into “traditional” gender roles as a public-health risk and attempts, under the guise of fighting sexual assault, to inculcate among college students a progressive view of gender and sexuality.

The point here is not to chastise Carney for not adopting a more dirigiste political philosophy than the one he presumably holds. It is to say that, even on the right, intellectuals are concluding that the problems Carney identifies are so alarming that localist, laissez-faire solutions simply aren’t going to cut it. In a recent essay in American Affairs, Gladden Pappin issued a broadside against fusionist conservatives who, in his view, waste their energies calling for the resurrection of vanished civil-society traditions “that worked only as culturally embedded practices dependent on the traditions of aristocratic centuries.” Instead, Pappin demands conservatives ask themselves, “What can we do with the reins of power, that is, the state, to ensure the common good of our citizens?”

It remains to be seen whether anyone will take up Pappin’s call and, if they do, whether such a conservatism of the state would be effective or popular. But if Middle America’s condition really is as dire as people like Carney make it out to be, it’s hard to imagine that “go to church” will turn out to be a political winner. Carney ably describes the sort of malaise that led Republicans to flock to Trump, but if there’s one thing we learned from the 2016 election, it’s that desperate people want a leader who promises to try something different, however flawed his solutions might be.

God’s Bailout: A Review of Timothy P. Carney’s “Alienated America”
by Tyler Austin Harper

It is here that Alienated America is very insightful: Carney has a genuine knack for parsing the data, drawing out counterintuitive but rigorously defended observations, and resisting simple narratives about complex states of affairs. His central claim that the 2016 election was a referendum on whether the American dream is alive or dead is not novel, but it is both convincing and better supported than similar efforts. Additionally, although his defense of the salutary nature of cultural practices like religious observance, child-rearing, and marriage is unapologetically conservative in nature, his message remains comparatively broad in scope: unlike other conservative Catholic critics of Trump (most notably, Patrick Deneen), Carney predicates his argument on the form, rather than the content, of these practices. In the pages of Alienated America, you will find no diatribe on the superiority of heterosexual marriage or the Catholic faith — he notes repeatedly, for example, that observant Muslim Americans are among the groups most likely to report optimism about America and faith in the American dream, even after Donald Trump’s election and attempted Muslim ban. Rather, Carney’s message is practical and universalist in nature: people are better off among other people, when they have something, anything whatsoever, that they belong to and that unites them in a network of mutual responsibility.

It is this aspect of Carney’s argument that I find most appealing, and most useful for progressives like myself. Namely, the author eschews the common tendency — on the right and the left — to posit a linear relationship between wealth and well-being. More specifically, his work persuasively suggests that financial security and emotional security go hand in hand not because some kind of mechanical relationship exists between the two, but because, in contrast to the working class, the wealthy tend to have the resources to live in and contribute to places that provide opportunities for meaningful lives lived in common. As he succinctly puts it: “The erosion of community […] is unequally distributed, it is concentrated in the working class, and it is geographically discrete to the point that we can see it on a map.”

While those of us on the left are generally quick (and correct!) to highlight the importance of addressing widening income inequality and an increasingly precarious labor market, for example, it often seems that we are comparatively less likely to talk about questions of community, as though we assume that fixing the former will necessarily achieve the latter. Furthermore, when we do talk about community, we often use the term to refer to people who share common interests and experiences (for example, “communities of color”) but not necessarily geographical proximity or concrete spaces of interaction. If we are willing to take Carney’s assessment seriously, then, two questions seem obvious: What are the barriers to community, understood in the sense of mutual, meaningful networks of local support? And how might these barriers be removed?

Not surprisingly, it is here that Carney’s analysis breaks down, where his professed desire for strong communities is predictably thwarted by his inability to recognize unfettered capitalism, rather than government centralization and regulation, as the primary threat to the robust civic life he vaunts. Although Carney approvingly cites Orwell’s maxim “To see what is in front of one’s nose needs constant struggle,” he consistently fails to see that at the heart of every flyover town, closed plant, and shuttered church whose death he laments, there is a place where unregulated capital — not some big government boogeyman — has reared its ugly head.

Unlike his meticulously researched and tightly argued defense of the prosocial virtues of marriage and religious observance, for example, Carney’s tepid but persistent support of free-market capitalism and his assaults on liberal governance are fast and loose, often relying on anecdotal evidence, sparse data, and obscure cases of bureaucratic malfeasance to make his points. Oftentimes, his arguments are absurd — such as his claim that massive companies like Walmart, Amazon, or Starbucks crowd out small businesses because of too much, rather than too little, regulation. Other times, they’re comical — once in the 1980s, Mayor Bernie Sanders apparently professed not to believe in charities. This decades-old remark is spun by Carney into a sweeping indictment of the contemporary left’s widespread desire to have neighborly goodwill replaced by the Nanny State.

In fairness, Carney isn’t entirely oblivious to the problems caused by our neoliberal economic order — he frequently cites cases of Chinese manufacturing undermining manufacturing-centric US communities, for example. However, like many modern conservatives, he assuages his doubts by acknowledging that free-market capitalism has a few minor kinks, before swiftly pivoting to the supposedly graver dangers posed by governmental overreach, centralization, and regulation. As a direct consequence of this reaffirmation of the legitimacy of unfettered capital, Carney is thus forced to retreat into the untenable position that religion is the best and most readily available means to redress our present crisis of community. We can’t all be affluent, his argument goes, and thus we can’t all have access to the kind of secular communal life enjoyed by the wealthy. Yet, even the dirt poor can enjoy the social bonds provided by religious life.

To reiterate, I have no problem with Carney’s high estimation of organized religion. As with marriage, I know plenty of people for whom religion has been nightmarish, a source of trauma, insecurity, and even violence. I also know plenty of people, like Jim the bookish engineer, for whom religious affiliation has been a bulwark against the loneliness endemic to modern life. The problem is not religion itself, as one means among many for achieving the communal ties that foster well-being. The problem is Carney’s reliance on God to bail out capitalism. Unlike Robert Nisbet, the conservative sociologist whose classic work — The Quest for Community (1953) — he returns to frequently, Carney’s own work persistently downplays the connection between social alienation and the flow of unregulated capital that is the principal engine of that same alienation.

Although he signals kinship with an earlier tradition of postwar conservatives who were also preoccupied with the question of community — people like Nisbet, Russell Kirk, and Peter Viereck, who highlighted the corrosive and antisocial effects of the cult of free enterprise — Carney cannot ultimately bring himself to shed the laissez-faire, libertarian economics that dominate the Republican Party. The result is a book that puts its finger on the right problem, but whose author is too besotted by economic fatalism to imagine a variety of contentment that would be otherwise than religious, positioning secular forms of community as the unique province of the elite. While Carney’s insistence that we must reintegrate the classes, combating the geographical isolation of wealth and its resources, is laudable, his calls to privatize the safety net are as predictable as they are puerile.

Rather than buy into a zero-sum game that forces a choice between government as a tentacular monster and government as a minimalist “reinsurance” program (“a safety net for safety nets,” to use Carney’s term), is it not possible to imagine a government that supports community institutions by — and hear me out on this — actually funding and defending them? If you want a thriving book club scene, for example, why not fix the public schools? Try pumping money into education and paying teachers a salary that will make such work a feasible option for the best and the brightest. After all, lifelong learners, the kind who read for pleasure, do not grow on trees. Likewise, if you want heightened church attendance, mightn’t an increased minimum wage — allowing prospective attendees to forsake that second job, spending Sundays in the pews rather than driving for Uber — be a good start? If college graduates are far more likely to build robust communities, as Carney repeatedly claims, shouldn’t we work toward making a college education more affordable for the alienated, working poor whose cause he champions? These are the kinds of questions that Carney dismisses out of hand as “centralizing” and “utopian,” preferring instead his own brand of theocratic utopianism in which a minimalist state would be kept afloat by little platoons of the charitable religious.

“Individuation is not the culmination of the person; it is the end of the person.”

Julian Jaynes and the Jaynesian scholars have made a compelling argument about where egoic consciousness originated and how it formed. But in all the Jaynesian literature, I don’t recall anyone suggesting how to undo egoic consciousness, much less suggesting we should attempt annihilation of the demiurgic ego.

That latter project is what preoccupied Carl Jung, and it is what Peter Kingsley has often written about. They suggest it is not only possible but inevitable. In a sense, the ego is already dead and we are already in the underworld. We are corpses and our only task is to grieve.

The Cry of Merlin: Carl Jung and the Insanity of Reason
Gregory Shaw on Peter Kingsley

Kingsley explains that Jung emulated these magicians, and his journey through the Underworld followed the path of Pythagoras, Parmenides and Empedocles. Jung translated the terminology of the ancients into “scientific” terms, calling the initiation he realized in the abyss “individuation.” For Jungians today, individuation is the culmination of psychic development, as if it were our collective birthright. Yet Kingsley points out that this notion of individuation is a domestication, commodification, and utter distortion of what Jung experienced. Individuation is not the culmination of the person; it is the end of the person. It is the agonizing struggle of becoming a god and a person simultaneously, of living in contradictory worlds, eternity and time.

Kingsley reveals that although individuation is the quintessential myth of Jung’s psychology, it is almost never experienced because no one can bear it. Individuation is the surrendering of the personal to the impersonal, and precisely what Jung experienced it to be, the death of his personality. Jung explains that individuation is a total mystery; the mystery of the Grail that holds the essence of God. According to Henry Corbin, Jung saw “true individuation as becoming God or God’s secret.” Put simply, individuation is deification. To his credit, over twenty years ago Richard Noll argued this point and wrote that Jung experienced deification in the form of the lion-headed Mithras (Leontocephalus), but Kingsley gives the context for deification that Noll does not, and the context is crucial. He shows that Jung’s deification was not an “ego trip” that gave rise to “a religious cult with [Jung] as the totem,” Noll’s assumption; nor was it a “colossal narcissism,” as Ernest Jones suggested, but precisely the opposite. Individuation cuts to the very core of self-consciousness; it is the annihilation of the ego, not its inflation. […]

What is fundamentally important about Catafalque is that Kingsley demonstrates convincingly that Jung recovered the shamanic path exemplified by Pythagoras, Parmenides, and Socrates. Jung tried to save us from the “insanity of reason” by descending to the underworld, serving the archetypes, and disavowing the impiety of “the Greeks” who reduce the sacred to rationalizations. There is much in Catafalque I have not addressed; perhaps the most important is Kingsley’s discussion of the Hebrew prophets who raged against a godless world. Kingsley here appropriately includes Allen Ginsberg’s Howl, which draws from the rhythms of these prophets to wail against the “insanity of America,” its mechanized thinking, suffocating architecture, and the robotic efficiency that is the child of Reason. This almost verbatim mirrors the words of Jung who, after visiting New York, says “suppose an age when the machine gets on top of us …. After a while, when we have invested all our energy in rational forms, they will strangle us … They are the dragons now, they became a sort of nightmare.”

Kingsley ends Catafalque with depressing prophecies about the end of western civilization, both from Jung and from Kingsley himself. The great wave that was our civilization has spent itself. We are in the undertow now, and we don’t even realize it. To read these chapters is to feel as if one is already a corpse. And Kingsley presents this so bluntly, with so much conviction, it is, frankly, disturbing. And even though Kingsley writes that “Quite literally, our western world has come to an end,” I don’t quite believe him. When speaking about Jung giving psychological advice, Kingsley says “make sure you have enough mētis or alertness not to believe him,” and I don’t believe Kingsley’s final message either. Kingsley’s message of doom is both true and false. The entire book has been telling us that we are already dead, that we are already in the underworld, but, of course, we just don’t understand it. So, then he offers us a very physical and literal picture of our end, laced with nuclear fallout and images of contamination. And he forthrightly says the purpose of his work is “to provide a catafalque for the western world.” It is, he says, time to grieve, and I think he is right. We need to grieve for the emptiness of our world, for our dead souls, our empty lives, but this grief is also the only medicine that can revive the collective corpse that we have become. Kingsley is doing his best to show us, without any false hope, the decaying corpse that we are. It is only through our unwavering acceptance, grieving and weeping for this, that we can be healed. In Jung’s terms, only the death of the personal can allow for birth into the impersonal. Into what…? We cannot know. We never will. It is not for our insatiable minds.

Noam Chomsky (2): On the Lesser of Two Evils

Noam Chomsky: “So let’s go to the Bible. That’s–you can find the model that Trump is following in the Bible. It’s King Ahab, the evil king, the epitome of evil in the Bible; he called the prophet Elijah to him, and condemned the prophet Elijah because he doesn’t love Israel enough. In fact, he’s a hater of Israel, the proof he was condemning the acts of the evil king. So loving a country, from Trump’s point of view, is follow its policies; whatever its policies are, you got to support them. That’s loving a country. So Trump and Ahab, the evil king, agree on that. The prophet Elijah and the ones who Linfield is attacking, they agree, no, you don’t support the policies. What you do is if you care about a country, it’s like caring about a friend. If you have a friend who’s doing something to harm himself, and to severely harm others, you don’t say, Great! I support you all the way. You try to change what the friend is doing.

“And in the case of a state, you first have to dismantle the cloud of propaganda and myth that every state constructs to justify what it’s doing. And when you do that, then what do you find? You go back to the early seventies, which actually is the point she emphasizes. At that point, Israel had a fateful decision. Namely, is it going to pursue expansion or security? That was very clear. On the table, there were very clear options for negotiation and political settlement.

“Linfield, incidentally, lies about this like a trooper. I actually described it with exact precision, and she claims I made it up by the clever technique of avoiding every single thing I said about it, OK, which happened to be exactly accurate. What happens is this. First of all, in 1971, Gunnar Jarring, international mediator, presented proposals to Egypt and Israel for a political settlement, pretty much in line with the international consensus. Egypt accepted it; Israel rejected it. In 1976 the Security Council debated a resolution calling for a political settlement, two-state settlement, on the international border with guarantees for the rights of each state, Israel and a Palestinian state, to live in peace and security within secure and recognized borders.

“It was vetoed by the United States; Israel was hysterical. The Israeli ambassador Chaim Herzog, later president, claimed that the PLO had written this as a device to destroy Israel, which of course was complete nonsense. The PLO kind of tacitly supported it, but certainly didn’t write it. The resolution, crucially, was supported by the three Arab confrontation states: Egypt, Jordan, and Syria. That was ’76. That’s a tough one for people like Linfield who support Israeli policies, so what she does is just lie about it, OK, in the way that I mentioned. But it was real, and there were other options, and it continues like this.

“Now, if you care about Israel, what you tell them is you’re sacrificing security for expansion. And it’s going to have a consequence. It’s going to lead to moral deterioration internally, and decline in status internationally, which is exactly what happened. You mentioned that people who used to be one or another form of Zionist are now very critical of Israel. It’s much more general than that. You go back to the 1970s, Israel was one of the most admired states in the world. Young people from Sweden were going to Israel to see the wonderful social democracy, and so on. Now it’s a pariah state. What’s happened? It’s not–the same that’s happened in the United States. Support for Israel used to be based in basically liberal democrats; that was support for Israel. No longer. Most of them support the Palestinians. Support for Israel now is in the Christian evangelical community and ultranationalists.

“That reflects the changes that have taken place. Is this good for Israel? I don’t think so. It’s turned Israel into, as I said, a pariah state which is declining–internally, morally–and it’s horrible for the Palestinians. So I think the ones who were following the path of Elijah were correct. You don’t love a state and follow its policies. You criticize what’s wrong, try to change the policies, expose them; criticize it, change it. And exactly what was predicted in the seventies has happened.”

The Madness of Drugs

There is always a question of what is making the world so crazy. And it’s not exactly a new question. “Cancer, like insanity,” Stanislas Tanchou wrote in 1843, “seems to increase with the progress of civilization.” Or go back earlier to 1809, the year Thomas Paine died and Abraham Lincoln was born, when John Haslam explained how common this concern of civilization going off the rails had become: “The alarming increase in Insanity, as might naturally be expected, has incited many persons to an investigation of this disease.” (For background, see: The Crisis of Identity.)

Was it changes of diet with the introduction of sugar, the first surplus yields of wheat, and a high-carb diet in general? If not the food itself, could it be the food additives such as glutamate and propionate? Was it the pollution from industrialization such as the chemicals in our food supply from industrial agriculture and industrial production, the pollution in the air we breathe and water we drink, and the spikes of toxic exposure with lead having been introduced to new products? Was it urbanization with 97% of the world’s population still in rural areas at the beginning of the 19th century followed by the majority of Westerners having moved to the cities a few generations later? Or was it the consequence of urbanization and industrialization as seen with increasing inequality of wealth, resources, and power that put the entire society under strain?

I’ve entertained all those possibilities over the years. And I’m of the opinion that they’re all contributing factors. Strong evidence can be shown for each one. But modernity saw another change as well. It was the era of science, and that shaped medicine, especially drugs. In general, drugs became more common, whether medicinally or recreationally, even in something so simple as the colonial trade of sugar and tobacco. Then later there were hardcore drugs like opium and cocaine that became increasingly common over the 19th century.

The 20th century, of course, pushed this to a whole new level. Drugs were everywhere. Consider the keto diet, which in the 1920s showed promise as a treatment or even cure for epileptic seizures. But shortly after that the drug companies came up with medications and the keto research dried up, even though those medications never came close to being as effective and some of them caused permanent harm to the patient, something rarely admitted by doctors (see the story of Charlie Abrahams, son of the Hollywood producer Jim Abrahams). Drugs seemed more scientific, and modern humanity had fallen under the thrall of scientism. As the DuPont advertising slogan went, “Better Living Through Chemistry.”

It was irrelevant that most of the drugs never lived up to the hype, as the hype was endless. As research has shown, the placebo effect makes each new pharmaceutical seem effective, until shortly later the drug companies invent another drug and, unsurprisingly, the old drug stops showing the benefits it did previously. Our hopes and fantasies are projected onto the next equivalent of a sugar pill, and the placebo effect just goes on and on, as do the profits.

That isn’t to dismiss the actual advancements of science. But we now know that even the drugs that are beneficial to some people, from antidepressants to statins, are overprescribed and may be harming more people than they are helping. Part of this is because our scientific knowledge has been lacking, sometimes actively suppressed. It turns out that depression is not a neurotransmitter deficiency, nor is cholesterol bad for the body. Drugs that mess with the body in fundamental ways often have severe side effects, and the drug companies have gone to extreme lengths to hide the consequences, as their profit model depends upon an ignorant and unquestioning population of citizen-consumers.

This is not a minor issue. The evidence points to statins making some people irritable to the point of violence, and there is a statistically significant increase of violent death among statin users. That is on top of an increase of neurocognitive decline in general, as the brain requires cholesterol to function normally. Or consider how some painkillers might also be disrupting the physiological mechanisms underlying empathy, and so heavy regular usage might contribute to sociopathy. It’s unsurprising that psychiatric medications can change behavior and personality, but no one expects such dire consequences when going to the drugstore to pick up their medication for asthma or whatever.

We are living in an era when patients, in many cases, can’t trust their own doctors. There is no financial incentive to honestly inform patients so that they can make rational choices based on balancing the benefits and harms. We know the immense influence drug companies have over doctors, exercised through legal forms of bribery, from paid vacations to free meals and supplies. This is related to why not only patients but also most doctors are kept in the dark. It just so happens that drug company funding of medical school curricula and continuing education for doctors doesn’t include education in effective dietary and lifestyle changes that are inexpensive or even free (i.e., no profit). This is why most doctors fail a basic test of nutritional knowledge. That needs to change.

This problem is just one among many. As I pointed out, there are many factors throwing gasoline on the fire. Whatever the causes, the diseases of civilization, including but not limited to mental illness, are worsening with every generation, and this is a centuries-old trend. It’s interesting that this has happened simultaneously with the rise of science. It was the hubris of the scientific mindset (and related technological industrialization) that caused much of the harm, but it is also because of science that we are beginning to understand the harm we’ve done and what exactly the causal mechanisms behind it are. We must demand that science be turned into a tool not of private interest but of public good.

* * *

The medications that change who we are
by Zaria Gorvett

They’ve been linked to road rage, pathological gambling, and complicated acts of fraud. Some make us less neurotic, and others may even shape our social relationships. It turns out many ordinary medications don’t just affect our bodies – they affect our brains. Why? And should there be warnings on packets? […]

According to Golomb, this is typical – in her experience, most patients struggle to recognise their own behavioural changes, let alone connect them to their medication. In some instances, the realisation comes too late: the researcher was contacted by the families of a number of people, including an internationally renowned scientist and a former editor of a legal publication, who took their own lives.

We’re all familiar with the mind-bending properties of psychedelic drugs – but it turns out ordinary medications can be just as potent. From paracetamol (known as acetaminophen in the US) to antihistamines, statins, asthma medications and antidepressants, there’s emerging evidence that they can make us impulsive, angry, or restless, diminish our empathy for strangers, and even manipulate fundamental aspects of our personalities, such as how neurotic we are.

In most people, these changes are extremely subtle. But in some they can also be dramatic. […]

But Golomb’s most unsettling discovery isn’t so much the impact that ordinary drugs can have on who we are – it’s the lack of interest in uncovering it. “There’s much more of an emphasis on things that doctors can easily measure,” she says, explaining that, for a long time, research into the side-effects of statins was all focused on the muscles and liver, because any problems in these organs can be detected using standard blood tests.

This is something that Dominik Mischkowski, a pain researcher from Ohio University, has also noticed. “There is a remarkable gap in the research actually, when it comes to the effects of medication on personality and behaviour,” he says. “We know a lot about the physiological effects of these drugs – whether they have physical side effects or not, you know. But we don’t understand how they influence human behaviour.” […]

In fact, DeRubeis, Golomb and Mischkowski are all of the opinion that the drugs they’re studying will continue to be used, regardless of their potential psychological side-effects. “We are human beings, you know,” says Mischkowski. “We take a lot of stuff that is not necessarily always good in every circumstance. I always use the example of alcohol, because it’s also a painkiller, like paracetamol. We take it because we feel that it has a benefit for us, and it’s OK as long as you take it in the right circumstances and you don’t consume too much.”

But in order to minimise any undesirable effects and get the most out of the staggering quantities of medications that we all take each day, Mischkowski reiterates that we need to know more. Because at the moment, he says, how they are affecting the behaviour of individuals – and even entire societies – is largely a mystery.

The Link Between Individualism and Collectivism

Individualism and collectivism. Autonomy and authoritarianism. These are opposites, right? Maybe not.

Julian Jaynes argued that humans, in the earliest small city-states, lived in a state he called the bicameral mind. It was a shared sense of identity where ‘thoughts’ were more publicly experienced as voices that were culturally inherited across generations. He observed that the rise of egoic consciousness as the isolated and independent self was simultaneous with a shift in culture and social order.

What was seen was a new kind of authoritarianism, much more brutally oppressive, much more centralized, hierarchical, and systematic. As the communal societies of the bicameral mind entered their end phase heading toward the collapse of the Bronze Age, there was the emergence of written laws, court systems, and standing armies. Criminals, enemy soldiers, and captives were treated much more harshly with mass killings like never before seen. Social order was no longer an organic community but required top-down enforcement.

One piece of evidence for this new mentality was the sudden appearance of pornographic imagery. For thousands of years, humans had created art, but it was never overtly sexual in nature. Then humans apparently became self-conscious of sexuality and also became obsessed with it. This was also a time when written laws and norms about sexuality became common. With sexual prurience came demands of sexual purity.

Repression was the other side of rigid egoic consciousness, as to maintain social control the new individualized self had to be controlled by society. The organic sense of communal identity could no longer be taken for granted and relied upon. The individual was cut off from the moral force of voice-hearing and so moral transgression as sin became an issue. This was the ‘Fall of Man’.

What is at stake is not merely an understanding of the past. We are defined by this past, for it lives on within us. We are the heirs of millennia of psycho-cultural transformation. But our historical amnesia and our splintered consciousness leave us adrift among forces that we don’t understand or recognize. We are confused why, as we move toward greater individualism, we feel anxious about the looming threat of ever worse authoritarianism. There is a link between the two that is built into Jaynesian consciousness. But this is not fatalism, as if we are doomed to be ripped apart by diametric forces.

If we accept our situation and face the dilemma, we might be able to seek a point of balance. This is seen in Scandinavian countries where it is precisely a strong collective identity, culture of trust, and social democracy, even some democratic socialism, that makes possible a more stable and less fearful sense of genuine individuality (Anu Partanen, The Nordic Theory of Everything; & Nordic Theory of Love and Individualism). What is counter-intuitive to the American sensibility — or rather American madness — is that this doesn’t require greater legal regulations, such as how there is less red tape in starting a business in Scandinavia than the United States.

A book worth reading is Timothy Carney’s Alienated America. The author comes from the political right, but he is not a radical right-winger. His emphasis is on social conservatism, although the points he is making are dependent on the liberal viewpoint of social science. Look past some of the conservative biases of interpretation and there is much here that liberals, progressives, and even left-wingers could agree with.

He falls into the anti-government rhetoric of pseudo-libertarianism, which causes him to be blind to how Scandinavian countries can have big governments that rely more on a culture of trust, rather than regulations, to enforce social norms. What Scandinavians would likely find odd is this American right-wing belief that government is separate from society, even when society isn’t outright denied, as Margaret Thatcher did.

It’s because of this confusion that his other insights are all the more impressive. He is struggling against his own ideological chains. It shows how, even as the rhetoric maintains power over the mind, certain truths are beginning to shine through the weakening points of ideological fracture.

Even so, he ultimately fails to escape the gravity of right-wing ideological realism in coming to the opposite conclusion of Anu Partanen who understands that it is precisely the individual’s relationship to the state that allows for individual freedom. Carney, instead, wants to throw out both ‘collectivism’ and ‘hyper-individualism’. He expresses the still potent longing for the bicameral mind and its archaic authorization to compel social order.

What he misses is that this longing itself is part of the post-bicameral trap of Jaynesian consciousness, as the more one seeks to escape the dynamic the more tightly wound one becomes within its vise grip. It is only in holding lightly one’s place within the dynamic that one can steer a pathway through the narrow gap between the distorted extremes of false polarization and forced choice. This is exaggerated specifically by high inequality, not only of wealth but more importantly of resources and opportunities, power and privilege.

High inequality is correlated with mental illness, conflict, aggressive behavior, status anxiety, social breakdown, loss of social trust, political corruption, crony capitalism, etc. Collectivism and individualism may only express as authoritarianism and hyper-individualism under high inequality conditions. For some reason, many conservatives and right-wingers not only seem blind to the harm of inequality but, if anything, embrace it as a moral good expressing a social Darwinian vision of capitalist realism that must not be questioned.

Carney points to the greater social and economic outcomes of Scandinavian countries. But he can’t quite comprehend why such a collectivist society doesn’t have the problems he ascribes to collectivism. He comes so close to such an important truth, only to veer again back into the safety of right-wing ideology. Still, just the fact that, as a social conservative concerned for the public good, he feels morally compelled to acknowledge the kinds of things left-wingers have been talking about for generations shows that maybe we are finally coming to a point of reckoning.

Also, it is more than relevant that this is treading into the territory of Jaynesian thought, although the author has no clue how deep and dark are the woods once he leaves the well-beaten path. Even the briefest of forays shows how much has been left unexplored.

* * *

Alienated America:
Why Some Places Thrive While Others Collapse
by Timothy P. Carney

Two Sides of the Same Coin

“Collectivism and atomism are not opposite ends of the political spectrum,” Yuval Levin wrote in Fractured Republic, “but rather two sides of one coin. They are closely related tendencies, and they often coexist and reinforce one another—each making the other possible.” 32

“The Life of Julia” is clearly a story of atomization, but it is one made possible by the story of centralization: The growth of the central state in this story makes irrelevant—and actually difficult—the existence of any other organizations. Julia doesn’t need to belong to anything because central government, “the one thing we all belong to” (the Democratic Party’s mantra in that election), 33 took care of her needs.

This is the tendency of a large central state: When you strengthen the vertical bonds between the state and the individual, you tend to weaken the horizontal bonds between individuals. What’s left is a whole that by some measures is more cohesive, but individuals who are individually all less connected to one another.

Tocqueville foresaw this, thanks to the egalitarianism built into our democracy: “As in centuries of equality no one is obliged to lend his force to those like him and no one has the right to expect great support from those like him, each is at once independent and weak.

“His independence fills him with confidence and pride among his equals, and his debility makes him feel, from time to time, the need of the outside help that he cannot expect from any of them, since they are all impotent and cold.”

Tocqueville concludes, “In this extremity he naturally turns his regard to the immense being that rises alone in the midst of universal debasement.” 34

The centralizing state is the first step in this. The atomized individual is the end result: There’s a government agency to feed the hungry. Why should I do that? A progressive social philosophy, aimed at liberating individuals by means of a central state that provides their basic needs, can actually lead to a hyper-individualism.

According to some lines of thought, if you tell a man he has an individual duty to his actual neighbor, you are enslaving that man. It’s better, this viewpoint holds, to have the state carry out our collective duty to all men, and so no individual has to call on any other individual for what he needs. You’re freed of both debt to your neighbor (the state is taking care of it) and need (the state is taking care of it).

When Bernie Sanders says he doesn’t believe in charity, and his partymates say “government is the name for the things we do together,” the latter can sound almost like an aspiration—that the common things, and our duties to others, ought to be subsumed into government. The impersonality is part of the appeal, because everyone alike is receiving aid from the nameless bureaucrats and is thus spared the indignity of asking or relying on neighbors or colleagues or coparishioners for help.

And when we see the state crowding out charity and pushing religious organizations back into the corner, it’s easy to see how a more ambitious state leaves little oxygen for the middle institutions, thus suffocating everything between the state and the individual.

In these ways, collectivism begets atomization.

Christopher Lasch, the leftist philosopher, put it in the terms of narcissism. Paternalism, and the transfer of responsibility from the individual to a bureaucracy of experts, fosters a narcissism among individuals, Lasch argued. 35 Children are inherently narcissistic, and a society that deprives adults of responsibility will keep them more childlike, and thus more self-obsessed.

It’s also true that hyper-individualism begets collectivism. Hyper-individualism doesn’t work as a way of life. Man is a political animal and is meant for society. He needs durable bonds to others, such as those formed in institutions like a parish, a sports club, or a school community. Families need these bonds to other families as well, regardless of what Pa in Little House on the Prairie seemed to think at times.

The little platoons of community provide role models, advice, and a safety net, and everyone needs these things. An individual who doesn’t join these organizations soon finds himself deeply in need. The more people in need who aren’t cared for by their community, the more demand there is for a large central state to provide the safety net, the guidance, and the hand-holding.

Social scientists have repeatedly come across a finding along these lines. “[G]overnment regulation is strongly negatively correlated with measures of trust,” four economists wrote in MIT’s Quarterly Journal of Economics. The study relied on an international survey in which people were asked, “Generally speaking, would you say that most people can be trusted or that you need to be very careful in dealing with people?” The authors also looked at answers to the question “Do you have a lot of confidence, quite a lot of confidence, not very much confidence, no confidence at all in the following: Major companies? Civil servants?”

They found, among other examples:

High-trusting countries such as Nordic and Anglo-Saxon countries impose very few controls on opening a business, whereas low-trusting countries, typically Mediterranean, Latin-American, and African countries, impose heavy regulations. 36

The causality here goes both ways. In less trusting societies, people demand more regulation, and in more regulated societies, people trust each other less. This is the analogy of the Industrial Revolution’s vicious circle between Big Business and Big Labor: The less trust in humanity there is, the more rules crop up. And the more rules, the less people treat one another like humans, and so on.

Centralization of the state weakens the ties between individuals, leaving individuals more isolated, and that isolation yields more centralization.

The MIT paper, using economist-speak, concludes there are “two equilibria” here. That is, a society is headed toward a state of either total regulation and low trust, or low regulation and high trust. While both destinations might fit the definition of equilibrium, the one where regulation replaces interpersonal trust is not a fitting environment for human happiness.

On a deeper level, without a community that exists on a human level—somewhere where everyone knows your name, to borrow a phrase—a human can’t be fully human. To bring back the language of Aristotle for a moment, we actualize our potential only inside a human-scaled community.

And if you want to know what happens to individuals left without a community in which to live most fully as humans, where men and women are abandoned without small communities in which to flourish, we should visit Trump Country.

Rate of Moral Panic

I’m always looking for historical background that puts our present situation in new light. We often don’t realize, for example, how different the world was before and after the Second World War. The 1940s and 1950s were a strange time.

There was a brief moment around the mid-century when the number of marriages shot up and people married younger. So, when we compare marriage rates now to those in the post-war period, we get a skewed perspective because that post-war period was extremely abnormal by historical standards (Ana Swanson, 144 years of marriage and divorce in the United States, in one chart). It’s true that marriage rates never returned to the level of that brief marriage (and birth) boom following the war, but then again marriage rates weren’t ever that high earlier either.

In the 1990s, during the height of the culture wars when family values were supposedly under attack, the marriage rate was about the same as it had been from before the Civil War into the early 1900s, the period I’ve referred to as the crisis of identity. In the decades immediately before that, starting around 1970, the marriage rate had been even higher than what was seen in the late 19th century (there isn’t dependable earlier data). Nor is premarital sex a recent normalization, as young people have always had sex: “leaving out the even lower teen sex rate of GenZ, there isn’t a massive difference between the teen sex rates of Millennials and that of Boomers and Silents” (Rates of Young Sluts).

As another example from this past century, “In 1920, 43 percent of Americans were members of a church; by 1960, that figure had jumped to 63 percent” (Alex Morris, False Idol — Why the Christian Right Worships Donald Trump). Think about that. In the early 1900s, most Americans were unchurched, non-religious, or otherwise religiously uninvolved and disinterested. A similar pattern was seen in the colonial era, when many people lived in communities that lacked a church. Church membership didn’t begin to rise until the 1800s and apparently declined again with mass urbanization and early industrialization.

By the way, that is closely associated with the issue of marriage. Consider early America, when premarital sex was so common that a large percentage of women were already pregnant when they married, and many of those marriages were common law, meaning that couples were simply living together, often in what amounted to serial monogamy. Moral norms were an informal affair that, if and when enforced, came from neighbors rather than religious authority figures. Those moral norms were generous enough to accommodate the commonness of bastards and single parents, although some of that was explained by other factors such as rape and spousal death.

Many early Americans rarely saw a minister, outside of itinerant preachers who occasionally passed by. This is partly why formal marriages were less common. “Historians of American religion have long noted that the colonies did not exude universal piety. There was a general agreement that in the colonial period no more than 10-20 percent of the population actually belonged to a church” (Roger Finke & Rodney Stark, The Churching of America). This was at a time when many governments had state religions and so churches were associated with oppressiveness, as seen with the rise of non-Christian and non-conventional views (agnosticism, atheism, deism, universalism, unitarianism, secularism, etc) during the revolutionary period.

And don’t get me started on abortion: perhaps as many as one in five or six pregnancies were aborted in the years just before the American Civil War. That might be related to why fertility rates have been steadily dropping for centuries: “Extending the analysis back further, the White fertility rate declined from 7.04 in 1800 to 5.42 in 1850, to 3.56 in 1900, and 2.98 in 1950. Thus, the White fertility declined for nearly all of American history but may have bottomed out in the 1980s. Black fertility has also been declining for well over 150 years, but it may very well continue to do so in the coming decades” (Ideas and Data, Sex, Marriage, and Children: Trends Among Millennial Women). That downward trend probably began during the height of the enclosure movement and British colonization in the 17th century.

Are we to blame commie liberal hippies traveling back in time to cause the decline of America practically before the country was even founded? Nostalgia is a fantasy and, interestingly, it is also a disease. The world is getting worse in some ways, but the main problems we face are real-world crises such as climate change, not namby-pamby cultural paranoia and the fear-mongering of sensitive snowflakes. The fate of humanity does not rest on promoting the birth rate of native-born American WASPs, nor on the hope that theocracy will save us. If we want to worry about doom, we should be looking at whether the rate of moral panic is on the uptick, something that often precedes the rise of totalitarian ideologies and authoritarian mass violence.

* * *

Central to the moral panic of the culture wars has been “family values.” This has been held up as an idealized standard of middle-to-upper-class respectability and responsibility embodied in WASP nuclear families. What relationship does the normative ideal bear to the lived reality across American history? See below for a historical analysis.

“Family Values”: The Uses and Abuses of American Family History
by Elaine Tyler May

“Family Values” and Historical Scholarship

How did this national preoccupation emerge, and what does it mean for American political life? First, let us examine the phrase that is most frequently invoked in political debates: “family values.” In the political landscape that has emerged in recent decades, “family values” is a phrase that connotes specific positions on particular issues, and it has highly charged policy implications. It involves a constellation of issues. Under the banner of “family values” we find opposition to legal abortion; support for prayer in schools; opposition to civil rights for gays and lesbians; support for censorship of the arts, movies and popular culture; welfare reform; opposition to gun control; the “war on drugs.” These measures are usually found on the conservative agenda, although liberals have increasingly championed some of them in their efforts to jump on the “family values” bandwagon. Many of these issues have nothing to do with families—but they all have to do with values. And they all inspire fierce passions and heated debates.

It is also clear that “family values” is a term often used as a code and marker of race and class. For example, poor black single mothers, and educated white professional women, are both likely to be blamed for society’s ills as a result of their alleged defiance of “family values.” Presumably, a mother on welfare who goes out and gets a job demonstrates good family values; one who stays home with her kids does not. Yet an educated middle-class woman who goes out and gets a job demonstrates bad family values; one who stays home with her kids does. The rules change according to racial and class position, as well as marital status. The gender, class, and sexual expectations also change over time. In the 1930s, for example, welfare payments were made to poor mothers to enable them to stay home with their children. Now mothers on welfare are required to hold jobs (Gordon 1994).

Scholars have only recently begun to examine these issues through the lens of history. What used to be called the “new” social history gave rise to flourishing scholarship on the working class, women, gender and sexuality, racial minorities and race relations, immigrants and ethnic groups, family history, and gay and lesbian history. That rich body of scholarship has altered the way history is studied and taught today. The new history expands our understanding of the American past, making it far richer and more complete than it ever was before. But it has also led to criticism that American history has now become so fragmented and particularized that there is no longer any unified understanding of the past that offers to Americans a cohesive view of their national history. This criticism often stems from a desire to replace the new complex multicultural and diverse history with a dominant narrative grounded in the stories and deeds of powerful leaders, returning to a traditional unified narrative that was partial, biased, and left out most Americans (Bender 2002). Nevertheless, these criticisms challenge scholars to bring together aspects of history that are usually studied in isolation from each other. Much of social history has left politics in the background, or left it out altogether. In recent years, several historians have done important work that takes a new look at American politics with the contributions of social history providing the foundation for their scholarship (Coontz 1992; Kerber 1998). Feminists were the first to proclaim that the “personal is political,” and scholars studying women, sexuality, gender, and the family have kept that insight at the center of their scholarship. [5]

We know from the work of these scholars that there was never a “traditional” American family. There has been as much diversity and change in American families as in any other aspect of national life. But the power of the myth continues. In fact, misperceptions of the American family may be more relevant to current political debates than the reality of American families (Coontz; May 1999). For example, many Americans are surprised to learn that contrary to common assumptions, the Puritans did not condemn premarital sex or out-of-wedlock pregnancy (provided the young couple intended to marry); or that abortion was legal during much of the XIXth century; or that the alleged “golden age” of the 1950s’ white middle-class family was marred by rampant alcohol and drug abuse among suburban housewives, and high rates of sexual activity among teenagers (many of whom were married); or that rates of voluntary childlessness were higher a century ago than they are today. Many people believe that American nuclear families were strong, stable and self-reliant until the 1960s, when they began to unravel (Coontz; Stacey).

Scholars eager to set the record straight argue that the family has always been a changing institution, and that claims of its demise are highly exaggerated. They have put great effort into demonstrating that there never was a “traditional” self-sufficient nuclear family to match the mythical ideal. [6] This scholarship is powerful and important. But it does not ask or answer a fundamental question: if change is a constant in the history of the American family, why during certain times—but not all times—do politicians and leaders warn that family decline portends the nation’s doom? I would like to suggest that anxieties about the family emerge at times when national identity, as defined and understood by the American middle class, appears to be threatened—by immigrants, radicals, “communists,” racial or sexual minorities, or feminists. […]

From the founding of the nation, then, the American family had a well-defined political role. Attached to that role were certain assumptions about the structure of the family, its functions, and the specific responsibilities of its members. In the first century of the Republic, gender roles within middle-class families carried civic meanings. As towns and cities grew, most urban households lost their function as centers of production. Instead of working at home, men left to work in the public arena while women remained in the domestic sphere. Men became breadwinners, while women took on the elevated stature of moral guardians and nurturers. Women’s responsibilities included instilling virtue in their families and raising children to be responsible and productive future citizens. The democratic family would be nuclear in structure, freed from undue influence from the older generation, and grounded in these distinct gender roles that were believed to be “natural” —at least for white European-Americans (Ryan 1981).

In the political culture that developed from these expectations, the family had a major responsibility for the well-being of society. The responsibility of the society for the well-being of the family was less well articulated, and defined mostly in the negative. The government was to leave the family alone, not intrude into it, and not provide for it. The family was, presumably, self-sufficient. Politics was the arena where white men, acting as democratic citizens, shaped public policies. The family was the place where white women, spared the corrupting influences of public life, would instill self-sufficiency and virtue into the citizenry.

From the beginning, however, the reality of family life defied those definitions and strained against the normative ideal. The vast majority of Americans lived on farms, or in households that required the productive labor of all adult members of the family. The prevailing middle-class norm in the XIXth century that defined “separate spheres” for men and women never pertained to these families, nor did it reflect the experiences of African-Americans, either during or after slavery. Only the most privileged white Protestant women in the towns and cities had the resources that allowed them to devote themselves full-time to nurturing their families and rearing future citizens. Their leisure time for moral uplift depended upon the labors of other women—African-American slaves, immigrant household servants, and working-class women who toiled in factories—to provide the goods and services that would enable privileged white women to pursue their role as society’s moral guardians. And it was those very women, affluent and educated, who first rebelled against their constrained domestic roles, arguing that the system of coverture denied them their rights as citizens. [7]

At the same time, when social problems developed that appeared to threaten social order, often the family was blamed—particularly those families, or individuals, whose behavior did not conform to the normative family ideal. The family came to be seen as the source or cause of social problems as well as the potential solution or cure. In other words, bad families eroded American society, and good families would restore it. Good families were the key to social order and national progress. Good families were those that conformed to the ideal of the so-called “traditional” American family, a family form that seemed to flourish among the white Protestant middle class in the XIXth century, and allegedly reached its twentieth-century apex, or “golden age,” in the 1950s. Here we find the source of the mythic nuclear family ideal.

A Historical Perspective

The founders of the nation assumed that the white middle-class family, nurtured by women in the private arena protected from the corruptions of commerce and public life, would produce virtuous citizens and provide the foundation for public order. The responsibility of the government was essentially to leave the family alone—not to intervene with either material support or regulation. Marriage laws established heterosexual monogamy as the foundation for families, prohibited unions across racial lines, and determined marital possibilities for immigrants (Cott 2000). Once established, the family was expected to serve its members and the society without government interference. However, by the late XIXth century, observers began to realize that not all families could be counted upon to promote the interests of the white Protestant status quo. Several dramatic developments—the end of slavery and the migration North of thousands of African-Americans, the influx of immigrants, the political activism of middle-class women, the declining birthrate of native-born Protestant Americans, the political power of Irish Catholics in northern cities—made it clear that the government could no longer remain aloof and expect families to take care of the nation. At the turn of the XXth century, the Anglo-Saxon middle class faced major challenges to its hegemonic definition of national identity.

In response, political leaders during the Progressive Era boldly altered the relationship between the family and the state. Progressive reformers no longer assumed that the family would, without support or intervention from the government, maintain civic virtue and social order. President Theodore Roosevelt was the first national leader to articulate a new dimension to the public/private bargain. In his first campaign for the presidency, he brought the family into the center of national political debates. It has remained there ever since.