Nearly every scientific field is facing a replication crisis and, although it has been known about for decades, it still has not been resolved. Most researchers are so limited in their knowledge and expertise that they lack any larger context of understanding. They simply don’t know what they don’t know, and there is no incentive in siloed professions to spend time understanding anything outside of one’s field. The replication crisis has numerous causes, from bad study design to the inherent difficulty of certain areas of study. Nutrition research, for example, has depended on epidemiological studies that are based on correlations without being able to prove causation; and, on top of that, it often relies on notoriously unreliable self-reported food surveys in which people have to guesstimate what they ate in the past, sometimes over a period of years. More recent research has shown that much of what we thought we knew simply is not true or has yet to be verified.
Another problem is what or who is studied. There are problems with the lab animals used, because certain species adapt better to labs even though other species are more similar to humans in certain ways. Researchers’ preference for lab mice, for example, is not unlike the guy looking for his keys under the streetlight because the light was better there. This problem applies to human subjects as well, in that they’ve mostly been white, middle-class undergraduate college students in the United States, because most research has been done at U.S. colleges; and medical studies of the past mostly involved men, which meant women in healthcare were treated as men without penises. The first part is known as the WEIRD bias (Western, Educated, Industrialized, Rich, Democratic), and it has particularly rocked the world of the social sciences. Take personality studies, where the leading theory has been the Big Five (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism), with an additional factor, honesty-humility, added to form the HEXACO model. Like so much else, it turns out that most of these personality traits don’t replicate outside of WEIRD and WEIRD-like populations. This challenge of non-WEIRD cultures and mentalities has been around a long time, as seen in the anthropological literature, but most experts in other fields have remained largely ignorant of what anthropologists have known for more than a century: that environment shapes mind, perception, and behavior.
The funny thing is that, even when studies have shown this problem with the Big Five, the WEIRD bias continues to hold sway over those trying to explain away the potential implications and to put the non-WEIRD results back into WEIRD boxes. This is done by asserting that the bad results are simply caused by social desirability bias and acquiescence bias, since the answers given by non-WEIRD individuals seem contradictory. The researchers and interpreters of the research refuse to take the results at face value, refuse to grant that these non-WEIRD people might be accurately reporting their experience. Pointing to these biases almost grasps what is going on, since these biases are about context, but it comes close only to miss the point. Non-WEIRD cultures and mentalities tend to be more context-dependent and so, unsurprisingly, give varying responses depending on how questions are asked, whereas the WEIRD egoic abstraction of rules and principles operates the same way across contexts. Only a highly WEIRD person would think it even possible to discover something entirely unrelated to context.
WEIRD personality traits are a kind of psychological rule-orientation in which the individual adheres to a heuristic of cognitive behavior, a strict and rigid maintenance of thought patterns that calcifies into an identity formation. The failure of cross-cultural understanding is that the very concept of a stable, unchanging personality might itself be part of the WEIRD bias, an exaggerated extension of the larger Axial Age shift when the ego theory of mind took hold, what some call Jaynesian consciousness in reference to Julian Jaynes’s theory about the disappearance of the bicameral mind, itself a variation of the bundle theory of mind. This was then magnified by mass literacy, beginning with the Protestant Reformation, which alters brain structure, as argued by Joseph Henrich. It might be that those very distant from WEIRD culture not only lack WEIRD-style personality traits but also lack egoic personality structures. Most WEIRD people can’t acknowledge non-WEIRD mentalities, much less grok what they mean and how to imaginatively empathize with them. The sad part is that this also demonstrates a lack of self-awareness, as the bundled mind essentially exists in all of us, something anyone can observe by looking into their own psyche; this is why contemplative traditions like Buddhism adhere to the bundle theory of mind.
Another explanation for this psychological change in personality traits is that agriculture, and later industrialization, increased labor specialization that generally passed down through the generations. These work niches were originally, and largely still are, occupied by specific families, kin networks, castes, and communities over centuries or longer (e.g., feudal serfs and factory workers). This formed a stable environment and a stable culture that shaped the human psyche according to what was required. It is the opposite of hunter-gatherers, who are forced to be generalists doing a wide variety of work. Agriculture did lead to some gender specialization, but even that was often limited. It is definitely true, though, that hunter-gatherers are far less specialized; some, like the egalitarian Piraha, have little specialization at all, along with no permanent authority of any kind. It’s possible that this represents how humanity lived for most of its evolution, when food was more abundant and life easier, as is the case where the Piraha live along a river surrounded by lush jungle. The study of the Piraha has helped challenge one area of WEIRD bias: that of seeing the world through a highly recursive literary culture. The Piraha apparently lack linguistic recursion, i.e., embedded phrases. They are, by the way, an animistic culture with the typical bundled mind as overt 4E cognition (embodied, embedded, enactive, and extended). Such animistic cultures allow for personality fluidity, sometimes temporary possessions and at other times permanent identity changes.
Even gender specialization might be a somewhat recent invention, corresponding to the invention of the bow and arrow. For most of human existence, humans hunted with spears, and the evidence now points to spear hunting having required the whole tribe, including women. Some of the earliest rock art also portrays men holding the hands of children, which indicates that men were either involved with childcare or not kept separate from it, maybe because the children had to be brought along on the hunt with the whole tribe. So even the theory that there are two genetically determined personality types, based on men hunting and women gathering, rests on relatively recent changes. Those changes, by the way, were caused by the megafauna die-off. Smaller game replaced the megafauna, and hunting smaller game motivated the development of new hunting tools and techniques. The bow and arrow, once invented, allowed individuals to hunt alone, and this was more often an activity of men, which pushed women into a separate labor niche. The lower nutrition of lean small game also made necessary a greater reliance on plant foods, which meant horticulture and later agriculture. The plow, like the bow and arrow, became another area of men’s work and further reinforced gender division.
The point is that not all hunting is the same, and so these different practices would create different personality structures. The same was probably true of gathering, particularly given that early humans were also meat scavengers. As for the effect of the agricultural revolution, this is reminiscent of research done on wheat and rice farming in China. What was found is that the two populations fell into the stereotypical patterns of Western and Eastern thinking, with wheat-based populations showing less context-dependent thinking and rice-based populations emphasizing context, even though both populations were Chinese. The explanation is that wheat farming is typically done by one person working alone with a plow or now a tractor, whereas rice farming requires highly organized collective labor. Interestingly, China stands out in that psychopathy is found equally among both genders, unlike in the West and some other places where it is disproportionately found among males. It would be interesting to study whether this is primarily an effect of the larger populations involved in rice-growing and the culture that has developed around it. On a related note, research does show higher rates of psychopathy in urban areas than in rural areas. Is this simply because psychopaths prefer to remain anonymous in cities, or is there something about city life that promotes psychopathic neurocognition?
Anyway, wheat farming is as different from rice farming as bow-and-arrow hunting is from spear hunting. What stands out is that both rice farming and spear hunting are collective activities involving both genders, whereas wheat farming and bow-and-arrow hunting can be solitary activities that have tended to be done by men. In Western Europe, there never was rice farming. And, unlike in certain populations, spear hunting in the West probably hasn’t been common in recent history. Yet there are still spear-hunting tribes in various places, some of which also do persistence hunting, probably the original form of hunting. Hunter-gatherers in general need more adaptable minds because they are dealing with diverse tasks, often over large and diverse territories. This requires a more fluid and shifting mentality, one to which the very concept of stable personality traits may simply not apply to the same extent. Even in the West, research shows that personality traits can change over a lifetime and under different conditions; a liberal, for example, can basically turn into a conservative after a few beers. But it is true that modern WEIRD conditions are much more stable, with narrow niches of work and living, often with racial and class segregation, not to mention the repetitive nature of modern life, with little change in activities from day to day or season to season.
This brings us to the worries some had in early modernity. Adam Smith thought public education was necessary because repetitive factory work made people stupid, which might simply be another way of saying that those individuals lose, or never develop, cognitive flexibility, complexity, and diversity. Karl Marx explained this in terms of the transition from traditional labor, in which an individual constructed a product from beginning to end, often through multiple complex steps with various tools and techniques, each requiring different physical and cognitive skills. This gave the individual a great sense of accomplishment and pride, not to mention autonomy, as to be a tradesman was to have immense skill. The dumbing down of the workforce with industrial labor may have contributed to the WEIRD mentality. Even the average office worker experienced this narrowing of activity. It allowed moderns to specialize, but in specializing they sacrificed all other aspects of development. This relates to the creation of stupid smart people, those who are capable of doing one thing well but are otherwise clueless. It’s not hard to see how this has forced people into niche personalities, thereby making possible theories about how to categorize such personalities.
Complex societies produce people with more varied personalities. […] But this covariation is neither random nor easily explained by genes. The social and ecological environments in which we develop, the scientists said, have a lot to do with how we develop. Our personalities are created by the patterns of behavior we exhibit that are relatively stable over time. But what creates those patterns, and why do they persist?
That’s the question Smaldino is exploring with collaborators from UC Santa Barbara, California State University Fullerton and the University of Richmond. Their research, published in the journal Nature Human Behaviour, suggests societies differ in the personality profiles of their members because of the different sociological niches in those societies. The diverse niches in a society — the occupational, social and other ways people navigate through daily life — constrain how an individual’s personality can develop.
Psychologists have traditionally relied upon the statistically derived “Big Five” personality traits to structure their research: openness, conscientiousness, extroversion, agreeableness and neuroticism.
Smaldino and his colleagues question the universality of this model in their work, instead exploring why certain traits — such as trust and sympathy or impulsivity and anxiety — bundle together as they do in particular places.
The researchers looked at personality data from more than 55 societies to show that more complex societies — those with a greater diversity of socioecological niches — tended to have less covariation among behavioral traits, leading to the appearance of more broad personality factors. They developed a computer model to create simulated environments that varied in their number of niches, which demonstrated the plausibility of their theory.
“The importance of socioecological niches basically comes down to this: How many ways are there to be a person in a given culture?” Smaldino said. “What are the number of successful strategies one can use to thrive? If you’re in a complex society, like the wealthy parts of America, there are just myriad ways to be.
“No matter how idiosyncratic you are, you can find a community that accepts you. On the other end of the spectrum, say in a small-scale foraging society, your behaviors are going to be a lot more constrained. This affects the ways in which behaviors cluster together, and the patterns that manifest as personality characteristics.”
So, why doesn’t the Big Five test hold up around the world? Lead author Rachid Laajaj, an economics researcher at the University of Los Andes in Colombia, said many of the reasons are rooted in literacy and education barriers. Many personality tests used in WEIRD countries are intended to be self-administered, designed for people who can read and write. But because of lower literacy rates in developing countries, tests may need to be given verbally. This introduces the possibility of translation or phrasing differences that could skew results.
Researchers also think that face-to-face questioning allows social desirability bias to creep into the process. This means that respondents may try to interpret social cues for a “right answer” or give answers they think would be viewed more favorably by others.
“Yea-saying,” or the tendency to agree with a statement even if it’s untrue, is also more common in developing countries, where there’s less access to education, the researchers say.
“People may have a harder time understanding abstract questions. Acquiescence bias may be accentuated when people do not fully understand, in which case it feels safer to just agree,” Laajaj said.
Additionally, the idea of personality tests — or personality itself — may not be a natural concept everywhere. Understandably, people who aren’t familiar with the idea of personality testing might be a bit wary of revealing personal details about themselves.
“Imagine that you live in a poor area and someone comes to you to ask you a bunch of questions, such as how hardworking you are, whether you get stressed easily or whether you are a polite person. If it is not common for you to fill out surveys, or if it’s not clear what will be done with it, you may, for example, care more about giving a good impression than being completely truthful,” Laajaj said.
To understand why industrialisation might be an influential force in the development of behaviour, it’s important to understand its legacy in the human story. The advent of agriculture 10,000 years ago launched perhaps the most profound transformation in the history of human life. No longer dependent on hunting or gathering for survival, people formed more complex societies with new cultural innovations. Some of the most important of these innovations involved new ways of accumulating, storing and trading resources. One effect of these changes, from a decision-making standpoint, was a reduction in uncertainty. Instead of relying on hard-to-predict resources such as prey, markets allowed us to create larger and more stable pools of resources.
As a result of these broader changes, markets might have also changed our perceptions of affordability. In WEIRD societies with more resources (remember that the R in WEIRD stands for rich), kids might feel that they can better afford strategies such as patience and risk-seeking. If they get unlucky and pull out a green marble and don’t win any candy, that’s okay; it didn’t cost them that much. But for Shuar kids in the rainforest with fewer resources, the loss of that candy is a much bigger deal. They’d rather avoid the risk.
Over time, these successful strategies can stabilise and become recurrent strategies for interacting with our world. So, for instance, in an environment where the costs of waiting are high, people might be consistently impatient.
Other studies support the notion that personality is shaped more by the environment than previously thought. In work among Indigenous Tsimané adults in Bolivia, anthropologists from the University of California, Santa Barbara found weak support for the so-called ‘Big Five’ model of personality variation, which consists of openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. Similar patterns came from rural Senegalese farmers and the Aché in Paraguay. The Big Five model of personality, it turns out, is WEIRD.
In another recent paper, the anthropologist Paul Smaldino at the University of California, Merced and his collaborators followed up on these findings further, relating them to changes that were catalysed by industrialisation. They argue that, as societies become more complex, they lead to the development of more niches – or social and occupational roles that people can take. Different personality traits are more successful in some roles than others, and the more roles there are, the more diverse personality types can become.
As these new studies all suggest, our environments can have a profound impact on our personality traits. By expanding the circle of societies we work with, and approaching essentialist notions of personality with skepticism, we can better understand what makes us who we are.
When it comes to personality psychology, the Big 5 (or Five-Factor Model/FFM) is still considered the gold standard, and many other personality tests, like the Myers-Briggs (MBTI), are considered pseudoscience. The FFM is even more useful and has more predictive power when a sixth dimension is added: honesty-humility (the HEXACO model).
However, adding new personality dimensions is of little use when it comes to understanding human nature, as not even five factors are human universals. Two of the factors often associated with mental disorders (neuroticism and openness to experience) never even show up outside of “WEIRD” (Western, educated, industrialized, rich and democratic) societies, as Joseph Henrich calls them in The WEIRDest People in the World (2020). Henrich points out that the Big 5 are really the WEIRD 5, as they are by no means human universals. Some societies yield only three or four factors; subsistence-level economies often yield only two. The Tsimane’ practise subsistence farming, and Henrich writes about them:
So, did the Tsimane’ reveal the WEIRD-5? No, not even close. The Tsimane’ data reveal only two dimensions of personality. No matter how you slice and dice the data, there’s just nothing like the WEIRD-5. Moreover, based on the clusters of characteristics associated with each of the Tsimane’’s two personality dimensions, neither matches up nicely with any of the WEIRD-5 dimensions […] these dimensions capture the two primary routes to social success among the Tsimane’, which can be described roughly as “interpersonal prosociality” and “industriousness.” The idea is that if you are Tsimane’, you can either focus on working harder on the aforementioned productive activities and skills like hunting and weaving, or you can devote your time and mental efforts to building a richer network of social relationships.
Decades of experimental research show that, compared to most populations in the world, people from societies that are Western, Educated, Industrialized, Rich, and Democratic (WEIRD) (4) are psychologically unusual, being both highly individualistic and analytically minded. High levels of individualism mean that people see themselves as independent from others and as characterized by a set of largely positive attributes. They willingly invest in new relationships even outside their kin, tribal, or religious groups. By contrast, in most other societies, people are enmeshed in dense, enduring networks of kith and kin on which they depend for cooperation, security, and personal identity. In such collectivistic societies, property is often corporately owned by kinship units such as clans; inherited relationships are enduring and people invest heavily in them, often at the expense of outsiders, strangers, or abstract principles (4).
Psychologically, growing up in an individualistic social world biases one toward the use of analytical reasoning, whereas exposure to more collectivistic environments favors holistic approaches. Thinking analytically means breaking things down into their constituent parts and assigning properties to those parts. Similarities are judged according to rule-based categories, and current trends are expected to continue. Holistic thinking, by contrast, focuses on relationships between objects or people anchored in their concrete contexts. Similarity is judged overall, not on the basis of logical rules. Trends are expected to be cyclical.
Various lines of evidence suggest that greater individualism and more analytical thinking are linked to innovation, novelty, and creativity (5). But why would northern Europe have had greater individualism and more analytical thinking in the first place? China, for example, was technologically advanced, institutionally complex, and relatively educated by the end of the first millennium. Why would Europe have been more individualist and analytically oriented than China? […]
Sure enough, participants from provinces more dependent on paddy rice cultivation were less analytically minded. The effects were big: The average number of analytical matches increased by about 56% in going from all-rice to no-rice cultivation. The results hold both nationwide and for the counties in the central provinces along the rice-wheat (north-south) border, where other differences are minimized.
Participants from rice-growing provinces were also less individualistic, drawing themselves roughly the same size as their friends, whereas those from wheat provinces drew themselves 1.5 mm larger. [This moves them only part of the way toward WEIRD people: Americans draw themselves 6 mm bigger than they draw others, and Europeans draw themselves 3.5 mm bigger (6).] People from rice provinces were also more likely to reward their friends and less likely to punish them, showing the in-group favoritism characteristic of collectivistic populations.
So, patterns of crop cultivation appear linked to psychological differences, but can these patterns really explain differences in innovation? Talhelm et al. provide some evidence for this by showing that less dependence on rice is associated with more successful patents for new inventions. This doesn’t nail it, but is consistent with the broader idea and will no doubt drive much future inquiry. For example, these insights may help explain why the embers of an 11th century industrial revolution in China were smothered as northern invasions and climate change drove people into the southern rice paddy regions, where clans had an ecological edge, and by the emergence of state-level political and legal institutions that reinforced the power of clans (7).
The isolated self is not real, but the fearful mind makes it feel real. We always exist in interrelationship with others, with the world, and with a shared sense of our humanity. This greater reality of connection and being is what monotheists refer to as God, what Buddhists refer to as Emptiness, what Taoists refer to as the Tao, etc.; but even atheists can intuit something beyond atomistic individualism, be it Nature or Gaia or something similar, the world as alive or vital, or maybe simply the human warmth of family, friends, and community. In the piece quoted below, the political philosopher Hannah Arendt’s views on loneliness and totalitarianism are discussed. Maybe she is referring to how ideologies (political, economic, or religious) can fill that void, and that is what transforms mundane authoritarianism into totalitarianism. Loneliness arises when we are fearful and anxious, desperate and vulnerable. We become open to anyone who will offer us a sense of meaning and purpose. We get pulled in and lose our bearings.
That is what ideologies can do in telling us a story, and that is why media can have such power in controlling the rhetorical framing of narrative. I might take Arendt’s thought a step further. She argues that loneliness paralyzes us, and that is true, but loneliness is also intolerable and eventually forces us to action, even destructive action, be it riot or suicide. In loneliness, we often attack the very people around us who could remind us that we are not alone. The fear of isolation, a terrifying experience for a social creature like the human, can cause the imagination to run rampant and become overtaken by nightmares. In loneliness, we are socially blind and forget our own larger sense of humanity. Under such perverse conditions, ideological beliefs and principles can feel like protection, an anchoring in dark waters, but in reality we end up pushing away what might save us, finding ourselves drifting further from shore. We can only discover our own humanity in others, never in isolation. This is what can transform harmful isolation into healthy solitude: learning to relate well to ourselves.
Learn to listen to emotions. A feeling is never merely a feeling. It speaks to the state of our soul. It not only indicates our place in reality but touches upon that reality. If we allow ourselves to be present, we can begin to sense something deeper, something greater. We are more than we’ve been told. Your emotions will also tell you what is true, what is genuine, once you’ve learned to listen. If, during or after exposure to media, you feel fearful and anxious or isolated and lonely, take note and pay attention to what ideological narrative was being fed to you that brought you to this state. Or else follow the lines of thought back into the tape loops playing in your mind and ask yourself where they came from. Why do these thoughts of isolation keep repeating, and why have they taken such powerful hold in your mind? Remember, only in false isolation can we think of ourselves as powerless, as victims; in reality, we are never in isolation. If your ideology makes you feel in conflict with friends, neighbors, and loved ones, it is the ideology that is the danger, not those other people. The same is true for everyone else as well, but you must begin with yourself, the plank in your own eye.
The Modern Challenge to Tradition begins where Origins ends, with an essay titled “Ideology and Terror” (1953). In the chapter of the same title concluding Origins, she had made one of her most controversial claims, “that loneliness, once a borderline experience . . . has become an everyday experience of the ever growing masses of our century.” Her critics easily believe in the prevalence of loneliness, but they often challenge the apparently causal relation she proposes between it and totalitarian states. The later essay included in The Modern Challenge responds to her critics and revises aspects of her argument that had been genuinely unclear. Arendt maintains the centrality of loneliness to totalitarianism, but more clearly grounds it not in an existential cause—say, anomie, that keyword of the social theory of Emile Durkheim—but in a political one: terror. Loneliness is not the cause of totalitarianism, she claims, but terror produces loneliness. Once a population is lonely, totalitarian governments will find it far easier to govern, for lonely people find it hard to join together, lacking the strong extra-familial bonds necessary to organize rebellions. These individualizing effects of loneliness prevent political action even in non-totalitarian states, because politics requires collaboration and mutuality. In this regard, Arendt claims a role for emotions in politics.
Contrary to loneliness, she argues that solitude can be a boon to politics. While loneliness “is closely associated with uprootedness and superfluousness . . . to have no place in the world, recognized and guaranteed by others,” solitude is the exact opposite. It “requires being alone,” but “loneliness shows itself most sharply in company with others.” She often quotes a line from Cicero, originally attributed to Cato, to describe the difference: “‘Never was he less alone than when he was alone’ (numquam minus solum esse quam cum solus esset).” Yet, Arendt writes, “solitude can become loneliness; this happens when all by myself I am deserted by my own self.” She concludes,
what makes loneliness so unbearable is the loss of one’s own self which can be realized in solitude, but confirmed in its identity only by the trusting and trustworthy company of [one’s] equals. In [loneliness], man loses trust in himself as the partner of his thoughts and that elementary confidence in the world which is necessary to make experiences at all. Self and world, capacity for thought and experience are lost at the same time.
The breast is best. That signifies the central importance of breastfeeding. But one could also take it as pointing to our cultural obsession with human mammary glands, something not shared by all cultures. I’m going to make the argument that the breast, at least in American society, is the main site of social control. Before making my case, let me explore what social control has meant, as society has developed over the millennia.
There is a connection between social control and self-control. The most extreme forms of this dualistic dynamic are authoritarianism and hyper-individualism (Westworld, Scripts, and Freedom), which is why liberty has a close relationship to slavery (Liberty, Freedom, and Fairness). Julian Jaynes makes this clear in his classic, although he confuses the matter a bit. He sometimes refers to the early Bronze Age societies as ‘authoritarian’, but he definitely does not mean totalitarianism, something that describes only the civilizations that followed later. In the broader usage, the word ‘authoritarianism’ is sometimes tinged with his notions of archaic authorization and the collective cognitive imperative (“Beyond that, there is only awe.”). The authority in question, as Jaynes argued, consists of the external or dispersed voices that early humans heard and followed (just as today we hear and follow the voices in our own metaphorical “inner space”, what we call thoughts and what Jaynes referred to as self-authorization; The Spell of Inner Speech). Without an archaic authorization heard in the world allowing social order to emerge organically, an authoritarian system has to enforce the social order from above: “the ultimate power of authoritarianism, as Jaynes makes clear, isn’t overt force and brute violence. Outward forms of power are only necessary to the degree that external authorization is relatively weak, as is typically the case in modern societies” (“Beyond that, there is only awe.”).
And the ego is this new form of authoritarian power internalized, a monotheistic demiurge to rule over the inner world. Totalitarianism turns in on itself and becomes Jaynesian consciousness, a totalizing field of identity, but the bicameral mind continues to lurk in the shadows, something any aspiring authoritarian can take advantage of (Ben G. Price, Authoritarian Grammar and Fundamentalist Arithmetic). “We are all potential goosestepping authoritarian followers, waiting for the right conditions to bring our primal natures out into the open. With the fiery voice of authority, we can be quickly lulled into compliance by an inspiring or invigorating vision […] The danger is that the more we idolize individuality the more prone we become to what is so far beyond the individual. It is the glare of hyper-individualism that casts the shadow of authoritarianism” (Music and Dance on the Mind).
The practice of literally carving laws into stone came rather late in the Bronze Age, during the period that preceded the near total collapse of all the major societies. Totalitarianism then, as today, coincided with brutality and oppression on a scale never before seen in the historical record. Authoritarianism as totalitarianism apparently was something new in human experience. That might be because totalitarianism requires higher levels of abstraction, such as dogmatic laws that are envisioned and enforced as universal truths, principles, and commandments. Such abstract thinking was encouraged by the spread of more complex writing (e.g., literature), beyond what earlier had been primarily limited to minimalistic record-keeping. Individualism, as I said, also arose out of this violent birth of what would eventually mature into the Axial Age. It was the radically emergent individual, after all, that needed to be controlled. We now take this all for granted, as simply the way the world is.
There was authority as archaic authorization prior to any hint of totalitarianism, but I question if it is useful to speak of it as authoritarianism. The earliest civilizations were mostly city-states, closer to hunter-gatherer tribes than to anything we’d recognize in the later vast empires or in our modern nation-states. Even in gaining the capacity for great achievements, the earliest civilizations remained rather basic in form. Consider the impressive Egyptian kingdoms that, having constructed vast stone monuments, didn’t even bother to build roads and bridges. They were such a small population so tightly clustered together in that narrow patch of fertility surrounded and protected by desert that nothing more complex was required. There were no vast distances for a centralized government to administer, no disconnections between complex hierarchies, and few specialized social roles beyond the immediate work at hand. These societies were small and simple, the conditions necessary for maintaining order through social identity, through the conformity of groupthink and cultural worldview, rather than through violent force. Besides lacking written laws, they also lacked police forces and standing armies. They were loosely organized communities, having originated as informal settlements that had become permanent over time.
Now back to the breast, the first source of sustenance and nurturance. Unfortunately, we don’t have any idea about what the ancients might have thought of the breast as a focus of concern, although Jaynes did have some fascinating thoughts about the naked body and sexuality. As totalitarianism appeared late, so did pornography in the broad sense as found in portrayals of sex engraved in stone, around the same time that laws also were being engraved. With fantasies of sexuality, there was sin that needed to be controlled, guilt that needed to be punished, and the laws to achieve this end. It was all of a single package, an emergent worldview and way of being, an anxiety-driven self-consciousness.
Lacking a time machine, the next best option is to look at other societies that challenge the biases of Western modernity, specifically here in the United States. Let me begin with American society. First off, I’d note that with the Puritan comes the prurient. Americans are obsessed with all things sexual, and so the sexual has a way of pervading our society. Even something so innocent as the female breast, designed by evolution to feed infants, somehow becomes a sexual object. That projection of lust and shame isn’t seen in all societies. In hunter-gatherer tribes, it is common for the breast to have no grand significance at all. The weirdness doesn’t end there. We don’t have to look to tribal people to find cultures that aren’t sexually prudish. Among some traditional cultures in Asia and elsewhere, even the touching of someone else’s genitals doesn’t necessarily express sexual intentions; instead, it can be a way of greeting someone or showing fondness for a family member. But admittedly, the cultures that seem the most foreign to us are those that have remained the most isolated from Western influences.
The Piraha, according to Daniel Everett, are rather relaxed about sex and sexuality (Dark Matter of the Mind). It’s not that they typically have sex out in the open, except during communal dances when orgies sometimes occur, but their lifestyle doesn’t afford much privacy. Talking about sex is no big deal, and children are exposed to it from a young age. Sexuality is considered a normal part of life, certainly not something to be shamed or repressed. As with some other societies, sexual play is common and does not always lead to sex. That is true among both adults and children, including what Westerners would call pedophilia. A child groping an adult’s genitals is not considered a big deal to them. And certainly there is no issue with two children dry-humping each other or whatever, as children are wont to do in their curiosity and budding sexuality. Because sex is so common among the Piraha, the pool of potential sexual partners is wider, including a cousin, step-sibling, or step-parent. The main restrictions are between full siblings and between a child and a biological parent or grandparent. This is a close-knit community.
“The Pirahãs all seem to be intimate friends,” writes Everett, “no matter what village they come from. Pirahãs talk as though they know every other Pirahã extremely well. I suspect that this may be related to their physical connections. Given the lack of stigma attached to and the relative frequency of divorce, promiscuousness associated with dancing and singing, and post- and prepubescent sexual experimentation, it isn’t far off the mark to conjecture that many Pirahãs have had sex with a high percentage of the other Pirahãs. This alone means that their relationships will be based on an intimacy unfamiliar to larger societies (the community that sleeps together stays together?). Imagine if you’d had sex with a sizable percentage of the residents of your neighborhood and that this fact was judged by the entire society as neither good nor bad, just a fact about life— like saying you’ve tasted many kinds of food” (Don’t Sleep, There Are Snakes, p. 88).
[As a quick note, the Piraha have some interesting practices with breastfeeding. When hunting, orphaned animals sometimes are brought back to the village and breastfed alongside human offspring, one at each breast. These human-raised animals will often be eaten later on. But that must create another kind of intimacy for babies and toddlers, a kind of intimacy that includes other species. The toddler who is weaned might have as one of his first meals the meat of the animal that was his early playmate or at least breast-mate. Their diet, as with their entire lifestyle, is intimate in numerous ways.]
That offers quite the contrast to our own society. Appropriate ways of relating and touching are much more constrained (certainly, breastfeeding other species is not typical for American mothers). Not only would an adult Westerner be imprisoned for touching a child’s genitalia and a child severely chastised for touching an adult’s genitalia, two children would be shamed for touching one another or even for touching themselves. Think about that. Think about all of the children over the generations who have been ridiculed, screamed at, spanked, beaten, or otherwise traumatized for simply touching themselves or innocently playing with another child. Every form of touch is potentially fraught and becoming ever more fraught over time. This surely causes immense fear and anxiety in children raised in such a society. A psychological scarification forms into thick egoic boundaries, the individual isolated and separate from all others. It is the foot-binding of the human mind.
There is one and only one form of touch young children in the West are almost always guaranteed. They can breastfeed. They are allowed human contact with their mother’s breast. And it has become increasingly common for breastfeeding to extend for the first several years. All of the psychic energy that has few other human outlets of skin-to-skin contact gets narrowed down to the mother’s breast. The potency of this gets underestimated, as it makes many of us uncomfortable to think about it. Consider that a significant number of mothers have experienced an orgasm while breastfeeding. This happens often enough to be well within the range of a normal biological response, assuming it’s not cultural. Yet such widespread experience is likely to be judged as perverse, either by the mother in judging herself or by others if she were ever to admit to it. The breast becomes a site of shame, even as it is a site of desire.
Then, as part of weaning, the child is given a pacifier. All the psychic energy that was limited to the breast then gets transferred to an inanimate object (Pacifiers, Individualism & Enculturation). The argument for pacifiers is that they’re self-soothing, but when you think about it, that is rather demented. Young children need parents and other adults to soothe them. For them to not be able to rely upon others in this basic human need creates a psychological crisis. The pacifier lacks any human quality, any nurturance or nutrient. It is empty and that emptiness is internalized. The child becomes identified with the pacifier as object. The egoic self becomes an object with a part of the psyche that stands outside of itself (what Jaynes refers to as the analogous ‘I’ and metaphorical ‘me’) — the bundled mind becomes a splintered self (Bundle Theory: Embodied Mind, Social Nature). This is extremely bizarre, an expression of WEIRD culture (Western, educated, industrialized, rich, and democratic; although the last part is questionable in the case of the United States). Until quite recently in the scheme of history and evolution, regular intimacy among humans was the norm. The first pacifier wasn’t used until 1935.
So, even in the West, some of these changes don’t go back very far. A certain kind of prudishness was introduced to the Western mind with Christianity, one of the transformative effects of the Axial Age. But even then, sexuality remained much more relaxed in the Western world for a long time after that. “As late as Feudalism, heavily Christianized Europe offered little opportunity for privacy and maintained a relatively open attitude about sexuality during many public celebrations, specifically Carnival, and they spent an amazing amount of their time in public celebrations. Barbara Ehrenreich describes this ecstatic communality in Dancing in the Streets. Like the Piraha, these earlier Europeans had a more social and fluid sense of identity” (Hunger for Connection). It is no surprise that, as more open sexuality and ecstatic communality declined, modern hyper-individualism followed. Some like to praise the Western mind as more fluid (Ricardo Duchesne, The Higher Cognitive Fluidity of the European Mind), but for the same reason it is also more unstable and sometimes self-destructive. This is a far different kind of fluidity, if we are to call it that at all. Individuality, in its insatiable hunger, cannibalizes its own social foundation.
* * *
It occurs to me that this breast obsession is another example of symbolic conflation. As I’ve often explained, a symbolic conflation is the central way of maintaining social order. And the body is the primary field of its operation, typically involving highly potent focal points of sexuality (e.g., abortion). The symbolic conflation obscures and distracts from the real issues and points of conflict. Obviously, the female breast becomes a symbol of something far beyond its evolutionary and biological reality as a mammalian mammary gland. This also relates to the discussion of metonymy and shame by Lewis Hyde in his book The Trickster Makes This World — see two of my posts where I connect Hyde’s work to that of Jaynes’: Lock Without a Key and “Why are you thinking about this?”.
Not to go all Bill Clinton on you, but we need to define what we mean by “performing a sexual act.” For now let’s just say that, based strictly on appearances, some cultures tolerate stuff that in the United States would get you branded as a pervert. Examples:
In 2006 a Cambodian immigrant living in the Las Vegas area was charged with sexual assault for allegedly performing fellatio on her 6-year-old son. The woman’s attorney said what she’d actually done was kiss the kid’s penis, once, when he was 4 or 5. A spokesperson for the Cambodian Association of America said that while this kind of thing wasn’t widespread in Cambodia, some rural folk went in for it as an expression of love or respect, although in his experience never with children older than 1 or maybe 2.
En route to being elected U.S. senator from Virginia in 2006, Jim Webb, onetime Secretary of the Navy under Reagan, was lambasted by his opponent for a passage in his 2001 novel Lost Soldiers in which a Thai man picks up his naked young son and puts his penis in his mouth. Webb responded that he had personally witnessed such a greeting in a Bangkok slum.
Numerous ethnographers report that mothers and caregivers in rural New Guinea routinely fondle the genitals of infants and toddlers of both sexes. In the case of boys this supposedly aids the growth of the penis. It’s often done in public and is a source of great amusement.
The Telugu-speaking people of central India dote on the penises of boys up through age six, which they hold, rub, and kiss. (Girls escape with minor same-sex touching.) A typical greeting involves an adult grabbing a boy’s arm with one hand and his penis with the other.
A 1946 report claimed that among lower-class Japanese families, parents would play with the genitals of children to help them fall asleep, and a researcher visiting Japan in the 1930s noted that mothers played with the genitals of their sons.
I didn’t make an exhaustive search and so don’t know to what extent such things occur in Latin America, Europe, Australia, or elsewhere. However, it appears that:
Fooling with kids’ privates is a fairly widespread practice in Asia, particularly among people toward the lower end of the socioeconomic scale. The reports are too numerous and credible for them all to be dismissed as the ravings of hysterical Westerners. My surmise is that, as societies become more westernized, urban, and affluent, the practice dies out.
The acts are sexual in the sense that those doing the fondling are well aware of the sexual implications and find it droll to give a little boy an erection.
Lurid tales occasionally do surface. Reports of mother-son incest were briefly faddish in Japanese magazines in the 1980s. These stories played off the unflattering Japanese stereotype of the mother obsessed with getting her son into a top school, suggesting some “education mamas” would violate the ultimate taboo to help their horny pubescent boys stay relaxed and focused on studying. A few Westerners have taken these urban legends at face value. Lloyd deMause, founder of and prolific contributor to a publication called the Journal of Psychohistory, cites the Japanese mother-son stories as prime evidence in his account of what he calls “the universality of incest.” It’s pretty clear, however, that incest inspires as much revulsion in Japan as anywhere else.
A less excitable take on things is that Asian societies just aren’t as hung up about matters of the flesh as we Western prudes are. In Japan, mixed-sex naked public bathing was fairly common until the postwar occupation, and some families bathe together now if they have a big enough tub. Nonetheless, so far as I can determine, Asian societies have always drawn a bright line between fooling around with babies and toddlers and having sex with your kids. If Westerners can’t fathom that elementary distinction, well, whose problem is that?
Dark Matter of the Mind by Daniel L. Everett, Kindle Locations 2688-2698
These points of group attachment are strengthened during the children’s maturation through other natural experiences of community life as the children learn their language, the configuration of their village and to sleep on the ground or on rough, uneven wooden platforms made from branches or saplings. As with other children of traditional societies, Pirahã young people experience the biological aspects of life with far less buffering than Western children. They remember these experiences, consciously or unconsciously, even though these apperceptions are not linguistic.
Pirahã children observe their parents’ physical activities in ways that children from more buffered societies do not (though often similar to the surrounding cultures just mentioned). They regularly see and hear their parents and other members of the village engage in sex (though Pirahã adults are modest by most standards, there is still only so much privacy available in a world without walls and locked doors), eliminate bodily waste, bathe, die, suffer severe pain without medication, and so on. 8 They know that their parents are like them. A small toddler will walk up to its mother while she is talking, making a basket, or spinning cotton and pull her breast out of the top of her dress (Pirahã women use only one dress design for all), and nurse— its mother’s body is its own in this respect. This access to the mother’s body is a form of entitlement and strong attachment.
Kindle Locations 2736-2745
Sexual behavior is another behavior distinguishing Pirahãs from most middle-class Westerners early on. A young Pirahã girl of about five years came up to me once many years ago as I was working and made crude sexual gestures, holding her genitalia and thrusting them at me repeatedly, laughing hysterically the whole time. The people who saw this behavior gave no sign that they were bothered. Just child behavior, like picking your nose or farting. Not worth commenting about.
But the lesson is not that a child acted in a way that a Western adult might find vulgar. Rather, the lesson, as I looked into this, is that Pirahã children learn a lot more about sex early on, by observation, than most American children. Moreover, their acquisition of carnal knowledge early on is not limited to observation. A man once introduced me to a nine- or ten-year-old girl and presented her as his wife. “But just to play,” he quickly added. Pirahã young people begin to engage sexually, though apparently not in full intercourse, from early on. Touching and being touched seem to be common for Pirahã boys and girls from about seven years of age on. They are all sexually active by puberty, with older men and women frequently initiating younger girls and boys, respectively. There is no evidence that the children then or as adults find this pedophilia the least bit traumatic.
Don’t Sleep, There Are Snakes by Daniel L. Everett pp. 82-84
Sex and marriage also involve no ritual that I can see. Although Pirahãs are reluctant to discuss their own intimate sexual details, they have done so in general terms on occasion. They refer to cunnilingus and fellatio as “licking like dogs,” though this comparison to animal behavior is not intended to denigrate the act at all. They consider animals good examples of how to live. Sexual intercourse is described as eating the other. “I ate him” or “I ate her” means “I had sexual intercourse with him or her.” The Pirahãs quite enjoy sex and allude to it or talk about others’ sexual activity freely.
Sex is not limited to spouses, though that is the norm for married men and women. Unmarried Pirahãs have sex as they wish. To have sex with someone else’s spouse is frowned upon and can be risky, but it happens. If the couple is married to each other, they will just walk off in the forest a ways to have sex. The same is true if neither member of the couple is married. If one or both members of the couple are married to someone else, however, they will usually leave the village for a few days. If they return and remain together, the old partners are thereby divorced and the new couple is married. First marriages are recognized simply by cohabitation. If they do not choose to remain together, then the cuckolded spouses may or may not choose to allow them back. Whatever happens, there is no further mention of it or complaint about it, at least not openly, once the couple has returned. However, while the lovers are absent from the village, their spouses search for them, wail, and complain loudly to everyone. Sometimes the spouses left behind asked me to take them in my motorboat to search for the missing partners, but I never did. […]
During the dance, a Pirahã woman asked me, “Do you only lie on top of one woman? Or do you want to lie on others?”
“I just lie on one. I don’t want others.”
“He doesn’t want other women,” she announced.
“Does Keren like other men?”
“No, she just wants me,” I responded as a good Christian husband.
Sexual relations are relatively free between unmarried individuals and even between individuals married to other partners during village dancing and singing, usually during full moons. Aggression is observed from time to time, from mild to severe (Keren witnessed a gang rape of a young unmarried girl by most of the village men). But aggression is never condoned and it is very rare.
The Pirahãs all seem to be intimate friends, no matter what village they come from. Pirahãs talk as though they know every other Pirahã extremely well. I suspect that this may be related to their physical connections. Given the lack of stigma attached to and the relative frequency of divorce, promiscuousness associated with dancing and singing, and post- and prepubescent sexual experimentation, it isn’t far off the mark to conjecture that many Pirahãs have had sex with a high percentage of the other Pirahãs. This alone means that their relationships will be based on an intimacy unfamiliar to larger societies (the community that sleeps together stays together?). Imagine if you’d had sex with a sizable percentage of the residents of your neighborhood and that this fact was judged by the entire society as neither good nor bad, just a fact about life— like saying you’ve tasted many kinds of food.
Again, couples initiate cohabitation and procreation without ceremony. If they are unattached at the time, they simply begin to live together in the same house. If they are married, they first disappear from the village for two to four days, while their former spouses call for and search for them. Upon their return, they begin a new household or, if it was just a “fling,” return to their previous spouses. There is almost never any retaliation from the cuckolded spouses against those with whom their spouses have affairs. Relations between men and women and boys and girls, whether married or not, are always cordial and often marked by light to heavy flirting.
Sexually it is the same. So long as children are not forced or hurt, there is no prohibition against their participating in sex with adults. I remember once talking to Xisaoxoi, a Pirahã man in his late thirties, when a nine- or ten-year-old girl was standing beside him. As we talked, she rubbed her hands sensually over his chest and back and rubbed his crotch area through his thin, worn nylon shorts. Both were enjoying themselves.
“What’s she doing?” I asked superfluously.
“Oh, she’s just playing. We play together. When she’s big she will be my wife” was his nonchalant reply— and, indeed, after the girl went through puberty, they were married.
Marriage itself among the Pirahãs, like marriage in all cultures, comes with sets of mores that are enforced in different ways. People often ask me, for example, how the Pirahãs deal with infidelity in marriage. So how would this couple, the relatively old man and the young girl, deal with infidelity? They would deal with it like other Pirahãs, in what I take to be a very civilized fashion.
The solution or response to infidelity can even be humorous. One morning I walked over to my friend Kóhoibiíihíai’s home to ask him to teach me more of his language. As I approached his hut, everything looked pretty normal. His wife, Xíbaihóíxoi, was sitting up and he was lying down with his head in her lap.
“Hey, can you help me learn Pirahã words today?” I inquired.
He started to raise his head to answer. Then I noticed that Xíbaihóíxoi was holding him by the hair of his head. As he tried to raise his head, she jerked his head back by the hair, picked up a stick at her side and started whacking him irregularly on the top of his head, occasionally hitting him in the face. He laughed hard, but not too hard, because she jerked his hair every time he moved.
“My wife won’t let me go anywhere,” he said, giggling.
His wife was smirking but the grin disappeared right away and she struck him harder. Some of those whacks looked pretty painful to me. Kóhoi wasn’t in the best position to talk, so I left and found Xahoábisi, another good language teacher. He could work with me, he said.
As we walked back to my house together, I asked, “So what is going on with Kóhoibiíihíai? Xíbaihóíxoi is holding down his head and hitting him with a stick.”
“Oh, he was playing with another woman last night,” Xahoábisi chortled. “So this morning his woman is mad at him. He can’t go anywhere today.”
The fact that Kóhoi, a strong man and a fearless hunter, would lie like that all day and allow his wife to whack him at will (three hours later I revisited them and they were in the same position) was clearly partly voluntary penance. But it was partly a culturally prescribed remedy. I have since seen other men endure the same treatment.
By the next day, all seemed well. I didn’t hear of Kóhoi playing around with women again for quite a while after that. A nifty way to solve marital problems, I thought. It doesn’t always work, of course. There are divorces (without ceremony) among the Pirahãs. But this form of punishment for straying is effective. The woman can express her anger tangibly and the husband can show her he is sorry by letting her bang away on his head at will for a day. It is important to note that this involves no shouting or overt anger. The giggling, smirking, and laughter are all necessary components of the process, since anger is the cardinal sin among the Pirahãs. Female infidelity is also fairly common. When this happens the man looks for his wife. He may say something mean or threatening to the male who cuckolded him. But violence against anyone, children or adults, is unacceptable to the Pirahãs.
Other observations of Pirahã sexuality were a bit more shocking to my Christian sensibilities, especially when they involved clashes between our culture and Pirahã values. One afternoon during our second family stay among the Pirahãs, I walked out of the back room of our split-wood and thatched-roof home on the Maici into the central area of the house, which had no walls and in practice belonged more to the Pirahãs than to us. Shannon was staring at two Pirahã men lying on the floor in front of her. They were laughing, with their shorts pulled down around their ankles, each grabbing the other’s genitals and slapping each other on the back, rolling about the floor. Shannon grinned at me when I walked in. As a product of sexophobic American culture, I was shocked. “Hey, don’t do that in front of my daughter!” I yelled indignantly.
They stopped giggling and looked up at me. “Don’t do what?”
“That, what you’re doing, grabbing each other by the penis.”
“Oh,” they said, looking rather puzzled. “He doesn’t like to see us have fun with each other.” They pulled their pants up and, ever adaptable to new circumstances, changed the subject and asked me if I had any candy.
I never really needed to tell Shannon or her siblings much about human reproduction, death, or other biological processes. They got a pretty good idea of all that from watching the Pirahãs.
The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes pp. 465-470
From Mating to “Sex”
The third example I would consider here is the affect of mating. It is similar in some respects to other affects but in other ways quite distinct. Animal studies show that mating, contrary to what the popular mind thinks, is not a necessary drive that builds up like hunger or thirst (although it seems so because of consciousness), but an elaborate behavior pattern waiting to be triggered off by very specific stimuli. Mating in most animals is thus confined to certain appropriate times of the year or day as well as to certain appropriate sets of stimuli as in another’s behavior, or pheromones, light conditions, privacy, security, and many other variables. These include the enormous variety of extremely complicated courtship procedures that for rather subtle evolutionary advantages seem in many animals almost designed to prevent mating rather than to encourage it, as one might expect from an oversimplified idea of the workings of natural selection. Among the anthropoid apes, in contrast to other primates, mating is so rare in the natural habitat as to have baffled early ethologists as to how these most human-like species reproduced at all. So too perhaps with bicameral man.
But when human beings can be conscious about their mating behavior, can reminisce about it in the past and imagine it in the future, we are in a very different world, indeed, one that seems more familiar to us. Try to imagine what your “sexual life” would be if you could not fantasize about sex.
What is the evidence for this change? Scholars of the ancient world, I think, would agree that the murals and sculptures of what I’m calling the bicameral world, that is, before 1000 B.C., are chaste; depictions with sexual references are scarcely existent, although there are exceptions. The modest, innocent murals from bicameral Thera now on the second floor of the National Museum in Athens are good examples.
But with the coming of consciousness, particularly in Greece, where the evidence is most clear, the remains of these early Greek societies are anything but chaste. 25 Beginning with seventh century B.C. vase paintings, with the depictions of ithyphallic satyrs, new, semidivine beings, sex seems indeed a prominent concern. And I mean to use the word concern, for it does not at first seem to be simply pornographic excitement. For example, on one island in the Aegean, Delos, is a temple of huge phallic erections.
Boundary stones all over Attica were in the form of what are called herms: square stone posts about four feet high, topped with a sculptured head usually of Hermes and, at the appropriate height, the only other sculptured feature of the post, a penile erection. Not only were these herms not laughter-producing, as they certainly would be to children of today, they were regarded as serious and important, since in Plato’s Symposium “the mutilation of the herms” by the drunken general Alcibiades, in which he evidently knocked off these protuberances with his sword around the city of Athens, is regarded as a sacrilege.
Erect phalli of stone or other material have been found in large numbers in the course of excavations. There were amulets of phalli. Vase paintings show naked female dancers swinging a phallus in a Dionysian cult. One inscription describes the measures to be taken even in times of war to make sure that the phallus procession should be led safely into the city. Colonies were obliged to send phalli to Athens for the great Dionysian festivals. Even Aristotle refers to phallic farces or satyr plays which generally followed the ritual performances of the great tragedies.
If this were all, we might be able to agree with older Victorian interpretations that this phallicism was merely an objective fertility rite. But the evidence from actual sexual behavior following the advent of conscious fantasy speaks otherwise. Brothels, supposedly instituted by Solon, were everywhere and of every kind by the fourth century B.C. Vase paintings depict every possible sexual behavior from masturbation to bestiality to human threesomes, as well as homosexuality in every possible form.
The latter indeed began only at this time, due, I suggest, in part to the new human ability to fantasize. Homosexuality is utterly absent from the Homeric poems. This is contrary to what some recent Freudian interpretations and even classical references of this period (particularly after its proscription by Plato in The Laws as being contrary to physis, or nature), seeking authorization for homosexuality in Homer, have projected into the strong bonding between Achilles and Patroclus.
And again I would have you consider the problem twenty-five hundred years ago, when human beings were first conscious and could first fantasize about sex, of how they learned to control sexual behavior to achieve a stable society. Particularly because erectile tissue in the male is more prominent than in the female, and that feedback from even partial erections would promote the continuance of sexual fantasy (a process called recruitment), we might expect that this was much more of a male problem than a female one. Perhaps the social customs that came into being for such control resulted in the greater social separation of the sexes (which was certainly obvious by the time of Plato) as well as an enhanced male dominance. We can think of modern orthodox Muslim societies in this respect, in which an exposed female ankle or lock of hair is punishable by law.
I certainly will admit that there are large vacant places in the evidence for what I am saying. And of course there are other affects, like anger becoming our hatred, or more positive ones like excitement with the magical touch of consciousness becoming joy, or affiliation consciousized into love. I have chosen anxiety, guilt, and sex as the most socially important. Readers of a Freudian persuasion will note that their theorizing could begin here. I hope that these hypotheses can provide historians more competent than myself with a new way of looking at this extremely important period of human history, when so much of what we regard as modern psychology and personality was being formed for the first time.
Reflections on the Dawn of Consciousness ed. by Marcel Kuijsten Chapter 1 – Julian Jaynes: Introducing His Life and Thought by William R. Woodward & June F. Tower Kindle Location 1064-1079
Jaynes gave an overview of the “consequences of consciousness.” Here he seems to have been developing the feeling side of consciousness in its evolution during the first millennium b.c. He reminded his audience of the historical origins of shame in human and animal experience:
Think of primary school, toilet accidents. Think how painful it was. … If you say to a dog, “bad dog,” he wonders what he did wrong. He puts his tail between his legs and crawls off. It is such a biological part of us that we are ashamed to admit it. … Guilt is the consciousness of shame over time. 58
For Jaynes, the Bible remains our best source on ideas of sin. He lectured that “sin is an awful word for it,” but “the whole Hebrew Bible is talking about the importance of guilt.” He asked rhetorically “how do you get rid of guilt?” and then answered that “it is very interesting to remember what Paul makes of the crucifixion of Jesus: Jesus was taking away the sins of the world.”
After shame and guilt, he went on to the consequences of consciousness in “mating and sex, which is one of the interesting things to us.” Theoretically, that is. Julian hastened to point out that “if you go back to the bicameral world, all the art is extremely chaste. … Then if you go to the Greek world that begins around 700 b.c., it is anything but. You have never seen anything so dirty. … There were brothels at this time. It happens in the Etruscans. You find these very gross sexual scenes. So I am saying that sex is a very different thing than it was before.” What is the significance of all this lewdness appearing in human history? “You can imagine what your own sex life would be if you could not fantasize about it. This is consciousness coming in and influencing our behavior, and our physiology. Here we have consciousness, and guilt, and sex, and anxiety.” 59
The Julian Jaynes Collection ed. by Marcel Kuijsten Chapter 14 – Imagination and the Dance of the Self pp. 209-212
It is similar with love, although there are differences. It is a little more difficult to talk about. We have affiliation responses in animals (or imprinting, which I have studied) where animals have a very powerful impulse to stay together. But this becomes our complicated kind of love when we can imagine the loved person and go back and forth in our imagination about them.
Similarly — and interestingly — with sex. If you look at the comparative psychology of sexual behavior in animals, it is very clear that this is not an open kind of behavior that happens any time or anything like that. It is cued ethologically into certain kinds of stimuli. So you have to have just the right kind of situation in order for animals to mate.
This is a problem that happens in every zoo: as soon as they get new animals, they want to mate them and have progeny. It is a tremendous problem, because you don’t know ethologically what those tiny cues are — they might be temperature or darkness or whatnot. For human beings it might be moonlight and roses [laughs], but it is this kind of thing that you find evolved into animal behavior.
I tend to think that in bicameral times mating was very similar to what it is in animals in that sense. It was cued into moonlight and roses shall I say, and not otherwise. Therefore it was not a problem in a way. Now, when human beings become conscious, have imagination, and can fantasize about sex, it becomes what we mean in quotes “sex.” Which I think is a problem in the sense that it does not ever quite fit into our conscious society. We go back and forward in history from having a free sex age and then a clamping down of Ms. Grundy 2 and Queen Victoria and so on. It goes back and forth because sex to us is tremendously more important than it was to bicameral man because we can fantasize about it.
Now similarly as I mentioned with the Oedipus story and the idea of guilt, we should be able to go back into history and find evidence for this. The evidence that I found for this — and I should be studying it in different cultures — is again in Greece. If you talk to Greek art historians and you ask them to compare, for example, Greek vase painting of the conscious era with the vase painting or other kinds of painting that went on in what I call the bicameral period — either in Minoan art in Crete or the famous murals that were found in Thera — they will all tell you that there is a big distinction. The older art is chaste, there is nothing about sex in it. But then you come to the vase paintings of Greece. We often think of Greece in terms of Plato and Aristotle and so on, and we do not realize that sex was something very different. For example, they have all of these satyrs with penile erections on their vases and odd things like that. Another example are things called herms. Most people have not heard of them. All the boundary stones of the city were stones about four feet in height called herms. They are called herms, by us anyway, because they were just posts that very often had a sculpture of Hermes at the top — but sometimes of other people. Then at the appropriate place — the body was just a column — there was a penile erection. I do not think we would find Athens back in these early conscious times very congenial.
These were all over the city of Athens. They were at the boundary stones everywhere. If you think of them being around nowadays you can imagine children giggling and so on. It is enough to make you realize that these people, even at this time, the time of Plato and Aristotle, were very different than we are. And if you read Plato you can find that one of the great crimes of Alcibiades — the Greek general that comes into several of the dialogues — is this terrible, frightful night when he got drunk and went and mutilated the herms. You can imagine what he was knocking off. This is hard for us to realize, because it again makes this point that these people are still not like us even though they are conscious. Because they are new to these emotions. I do not mean to intimate that Greek life was sexually free all over the place because I don’t think that was the case. If you read Kenneth Dover’s 3 classic work about Greek homosexuality, for example, you see it is very different from the gay liberation movement that we can find going on in our country right now. It is a very tame kind of thing.
I don’t think we really understand what is going on. There is the evidence, it is there in vase paintings, it is there in Greek times, but there is something we still do not fully understand about it. But it is different from the bicameral period. We have a different kind of human nature here, and it is against this that we look at where the self can come from.
Chapter 27 – Baltimore Radio Interview: Interview by Robert Lopez pp. 447-448
Jaynes: Yes indeed. And it happens with other emotions. Fear becomes anxiety. At the same time we have a huge change in sexual behavior. Try to sit down and imagine what your sexual life would be like if you couldn’t fantasize about it. It’s a hard thing to do, and you probably would think it would be much less, and I suspect it would be. If we go back to bicameral times, and look at all the artwork, wherever we look, there is nothing sexual about it. There is no pornography or anything even reminiscent of that at all. It’s what classicists call chaste. But when we come into the first conscious period, for example in Greece from 700 b.c. up to 200 or 100 b.c. — the sexual life in Greece is difficult to describe because we are taught of great, noble Periclean Athens and we don’t think of the sexual symbols … phalli of all kinds were just simply everywhere. This has been well documented now but it’s not something that’s presented to schoolchildren.
Lopez: You mean then that the erotic pottery that we see in ancient Greece was a result of new found consciousness and the resulting new found fascination with sex?
Jaynes: The ability to fantasize about sex immediately brought it in as a major concern. There is something I don’t understand about it… these phalli or erections were on statues everywhere. They were on the boundary stones called herms around the city of Athens. And yet they weren’t unusual to these people as it certainly would be in Baltimore today if you had these things all around the streets. It seems that sex had a religious quality, which is curious. There were a lot of very odd and different kinds of things that were happening.
Chapter 32 – Consciousness and the Voices of the Mind: University of New Hampshire Discussion pp. 508-510
By affect I mean biologically, genetically organized emotions, such that we share with all mammals, and which have a genetically based way of arousing them and then getting rid of their byproducts. But then these become something — and we really don’t have the terminology for it, so I’m going to call them feelings right now, and by that I mean conscious feelings. We have shame, for example. It is very important and powerful — if you remember your childhood, and the importance of fitting yourself into the group without being humiliated. This becomes guilt when you have consciousness operating on it over time. Guilt is the fear of shame. We also see the emergence of anxiety, which is built on the affect of fear.
Then you have the same thing happening with sex. I think mating was pretty perfunctory back in the bicameral period, just as it is with most of the primates. It isn’t an obvious thing in any of the anthropoid apes — like the orangutans, the gorillas, the gibbons, and the chimpanzees. It is not all that obvious. And I think it was the same thing in the bicameral time — there is nothing really “sexy,” if I may use that adjective — in the bicameral paintings and sculptures. But just after this period, beginning in 700 b.c., the Greek world is a pornographic world if ever there was one. It’s astonishing what happens. [At museums] most of these vases are not upstairs where children can see them, they are usually kept downstairs. At the same time this isn’t just a matter of artifacts; it is a part of their behavior. There is evidence of brothels beginning here, homosexuality perhaps begins at this same time, and we have various kinds of laws to regulate these things. It is something we don’t understand though, because it isn’t quite like our sexuality — it has a religious basis. It is very strange and odd, this almost religious basis. You have the tragedies, like the Oedipus plays, put on as a trilogy, and it was always followed by a phallic farce, for example. This seems extraordinary to us, because it destroys the whole beauty of these plays.
All that was going on in Greece, and was going on with the Etruscans — who didn’t leave much writing, but they left us enough so that we have a pattern and know that there was group sex going on and things like that. We don’t find it so much among the Hebrews I think because the Hebrews — who in some places were monotheistic and in other places were not — had a very powerful God saying “thou shalt not” and so on — follow the law. At least we don’t have evidence for those behaviors.
So we have for the first time increases in sexual behavior and the emergence of guilt and anxiety. Think of that: anxiety, sex, and guilt — if anybody wants to be a Freudian, this is where it begins [laughs]. Because then you had to have psychological mechanisms of controlling this. I mentioned something about repression — that’s one of the things that comes into play here — but all these methods of forgiveness and the whole concept of sin begins at this time.
Gods, Voices, and the Bicameral Mind ed. by Marcel Kuijsten Introduction p. 9
The birth of consciousness ushered in profound changes for human civilization. In what Jaynes terms the “cognitive explosion,” we see the sudden beginnings of philosophy, science, history, and theater. We also observe the gradual transition from polytheism to monotheism. Consciousness operating on human emotions caused shame to become guilt, fear to become anxiety, anger to become hatred, and mating behavior to give rise to sexual fantasy. Through the spatialization of time, people could, for the first time, think about their lives on a continuum and contemplate their own death.
Chapter 12 – The Origin of Consciousness, Gains and Losses: Walker Percy vs. Julian Jaynes
by Laura Mooneyham White pp. 174-175
This sort of “regression from a stressful human existence to a peaceable animal existence” 58 also includes a reversion to a bestial sexuality, as women present rearward for intercourse with the disinterestedness of simple physical need. Heavy sodium, among other things, drastically reduces the frequency of a woman’s estrus, so that hormonal urges and, in consequence, mating, become far less common. Sexual activity becomes emotionless and casual, as casual as in the sexual practices of the higher primates. As Jaynes has noted in a 1982 essay on the effect of consciousness on emotions, such mating, “in contrast to ourselves, is casual and almost minimal, with observations of mating in gibbons, chimpanzees, orangutans, and gorillas in the wild being extremely rare.” 59 Jaynes forecasts the emotionless participation in sex we see in Percy’s drugged and regressive characters, for Jaynes connects the erotic with the conscious capacity to narrate, to tell ourselves a story about our presence in time. Narration makes fantasy possible. Preconscious humans were not obsessed by sexuality, Jaynes argues: “All classicists will agree with this, that all Mycenean and Minoan art, in particular before 1000 B.C., is what seems to us as severely chaste”; “… tomb and wall paintings, sculpture and the writings of bicameral civilizations rarely if ever have any sexual references.” 60 But after the advent of human consciousness, the erotic begins to make its claim upon human attention: “About 700 B.C., Greek and Etruscan art is rampant with sexual references, very definitely demonstrating that sexual feelings were a new and profound concern in human development in these regions. We can perhaps appreciate this change in ourselves if we try to imagine what our sexual lives would be like if we could not fantasize about sexual behavior.” 61
The sexually abused and sodium-dosed children at Belle Ame Academy in Percy’s novel have lost that capacity to narrate about themselves and have therefore lost all sense of shame, all sense of what should be either morally perverse or erotically exciting. As Tom More surveys the six photographs which document the sexual abuse at Belle Ame, he is struck by the demeanor of the children’s faces. One child being subjected to fellatio by an adult male seems in countenance merely “agreeable and incurious.” 62 In another picture, a young girl is being penetrated by the chief villain, Van Dorn; she “is gazing at the camera, almost dutifully, like a cheerleader in a yearbook photo, as if to signify that all is well.” 63 Another photograph is a group shot of junior-high age boys witnessing an act of cunnilingus: “Two or three, instead of paying attention to the tableau, are mugging a bit for the camera, as if they were bored, yet withal polite.” 64 Another child in yet another appalling picture seems to have a “demure, even prissy expression.” 65 What is remarkable about these photographs is how eloquently they testify to the needfulness of consciousness for the emotions of guilt, shame, or desire. Percy and Jaynes concur that without consciousness, sex is, at best or at worst, a mildly entertaining physical activity.
Chapter 16 – Vico and Jaynes: Neurocultural and Cognitive Operations in the Origin of Consciousness by Robert E. Haskell pp. 270-271
As noted earlier, there are many differences between Vico and Jaynes that cannot be developed here. The following, however, seems noteworthy. In Vico’s “anthropological” description of the first men, he is systematic throughout his New Science in imagining the early sexual appetites, not only of the first males but also of the first females. In fact, it is basically only in this context that he describes the first females. The first men, he says, “must be supposed to have gone off into bestial wandering … [in] the great forests of the earth.” When human beings, writes Jaynes, become “conscious about their mating behavior, can reminisce about it in the past and imagine it in the future, we are in a very different world, indeed, one that seems more familiar to us” (OC: 466). Vico can be read as saying the same thing; in describing the sexuality of the first men Vico uses the phrase: “the impulse of the bodily motion of lust” (NS: 1098, my italics), implying a kind of Jaynesian bicameral sexuality not enhanced by consciousness.
The second line of research supporting Jaynes’s claim is as follows. Scholars of ancient history would agree, says Jaynes, that the murals and sculptures during what he calls the bicameral age, that is, before 1000 B.C., are chaste. Though there are exceptions, depictions with sexual references prior to this time are nearly non-existent. After 1000 B.C., there seems to be a veritable explosion of visual depictions of sexuality: ithyphallic satyrs, large stone phalli, naked female dancers, and later, brothels, apparently instituted by Solon of Athens in the fifth century B.C. Such rampant sexuality had to be controlled. According to Vico it was “frightful superstition” (ibid.) and fear of the gods that led to control. Jaynes speculates that one way was to separate the sexes socially, which has been observed in many preliterate societies. Since males have more visible erectile tissue than females, something had to be done to inhibit the stimulation of sexual imagination (fantasy). Jaynes cites the example of the orthodox Muslim societies in which to expose female ankles or hair is a punishable offence. 29
[Note 29: It is interesting to note that both Vico and Jaynes seem to assume a hyper-sexuality on the part of males, not females. Is this an example of Vico’s “conceit of scholars,” or more specifically, the conceit of male scholars? To the contrary, Mary Jane Sherfey (1996), a physician, has suggested that in early history the female sexual appetite was stronger than the male and therefore had to be controlled by the male in order to create and maintain social order.]
* * *
At the very bottom is an interview with Marcel Kuijsten, who is responsible for reviving Jaynesian scholarship. The other links are about Julian Jaynes’s view on (egoic-)consciousness and the self, explaining what he means by the analog ‘I’, metaphor ‘me’, metaphier, metaphrand, paraphier, paraphrand, spatialization, excerption, narratization, conciliation (or compatibilization, consilience), etc. Even after all these years studying Jaynesian thought, I still struggle to keep it all straight, but it’s worth trying to understand.
Also interesting is the relationship between Jaynes’s view and those of Tor Norretranders, Benjamin Libet, Friedrich Nietzsche, and David Hume. Further connections can be made to Eastern philosophy and religion, specifically Buddhism. Some claim that Hume probably developed his bundle theory from what he learned of Buddhism from returning missionaries.
“Besides real diseases we are subject to many that are only imaginary, for which the physicians have invented imaginary cures; these have then several names, and so have the drugs that are proper to them.”
~Jonathan Swift, 1726 Gulliver’s Travels
“The alarming increase in Insanity, as might naturally be expected, has incited many persons to an investigation of this disease.”
~John Haslam, 1809 On Madness and Melancholy: Including Practical Remarks on those Diseases
I’ve been following Scott Preston over at his blog, Chrysalis. He has been writing on the same set of issues for a long time now, longer than I’ve been reading his blog. He reads widely and so draws on many sources, most of which I’m not familiar with, which is part of the reason I appreciate the work he does to pull together such informed pieces. A recent post, A Brief History of Our Disintegration, would give you a good sense of his intellectual project, although the word ‘intellectual’ sounds rather paltry for what he is exploring: “Around the end of the 19th century (called the fin de siecle period), something uncanny began to emerge in the functioning of the modern mind, also called the “perspectival” or “the mental-rational structure of consciousness” (Jean Gebser). As usual, it first became evident in the arts — a portent of things to come, but most especially as a disintegration of the personality and character structure of Modern Man and mental-rational consciousness.”
That time period has been an interest of mine as well. There are two books that come to mind that I’ve mentioned before: Tom Lutz’s American Nervousness, 1903 and Jackson Lears’s Rebirth of a Nation (for a discussion of the latter, see: Juvenile Delinquents and Emasculated Males). Both talk about that turn-of-the-century crisis: the psychological projections and physical manifestations, the social movements and political actions. A major concern was neurasthenia which, according to the dominant economic paradigm, meant a deficit of ‘nervous energy’ or ‘nerve force’; if those reserves were not reinvested wisely but instead wasted, the result would be physical and psychological bankruptcy, and so one became spent. (The term ‘neurasthenia’ was first used in 1829 and popularized by George Miller Beard in 1869, the same period when the related medical condition of ‘nostalgia’ became a more common diagnosis, although ‘nostalgia’ was first referred to in the 17th century, when the Swiss doctor Johannes Hofer coined the term, also using it interchangeably with nosomania and philopatridomania — see: Michael S. Roth, Memory, Trauma, and History; David Lowenthal, The Past Is a Foreign Country; Thomas Dodman, What Nostalgia Was; Susan J. Matt, Homesickness; Linda Marilyn Austin, Nostalgia in Transition, 1780-1917; Svetlana Boym, The Future of Nostalgia; Gary S. Meltzer, Euripides and the Poetics of Nostalgia; see also The Disease of Nostalgia.) Today, we might speak of ‘neurasthenia’ as stress and, even earlier, they had other ways of talking about it — as Bryan Kozlowski explained in The Jane Austen Diet, p. 231: “A multitude of Regency terms like “flutterings,” “fidgets,” “agitations,” “vexations,” and, above all, “nerves” are the historical equivalents to what we would now recognize as physiological stress.” It was the stress of falling into history: a new sense of time, of linear progression, that made the past a lost world. As Peter Fritzsche wrote in Stranded in the Present:
“On that August day on the way to Mainz, Boisseree reported one of the startling consequences of the French Revolution. This was that more and more people began to visualize history as a process that affected their lives in knowable, comprehensible ways, connected them to strangers on a market boat, and thus allowed them to offer their own versions and opinions to a wider public. The emerging historical consciousness was not restricted to an elite, or a small literate stratum, but was the shared cultural good of ordinary travelers, soldiers, and artisans. In many ways history had become a mass medium connecting people and their stories all over Europe and beyond. Moreover, the drama of history was construed in such a way as to put emphasis on displacement, whether because customary business routines had been upset by the unexpected demands of headquartered Prussian troops, as the innkeepers protested, or because so many demobilized soldiers were on the move as they returned home or pressed on to seek their fortune, or because restrictive legislation against Jews and other religious minorities had been lifted, which would explain the keen interest of “the black-bearded Jew” in Napoleon and of Boisseree in the Jew. History was not simply unsettlement, though. The exchange of opinion “in the front cabin” and “in the back” hinted at the contested nature of well-defined political visions: the role of the French, of Jacobins, of Napoleon. The travelers were describing a world knocked off the feet of tradition and reworked and rearranged by various ideological protagonists and conspirators (Napoleon, Talleyrand, Blucher) who sought to create new social communities. Journeying together to Mainz, Boisseree and his companions were bound together by their common understanding of moving toward a world that was new and strange, a place more dangerous and more wonderful than the one they left behind.”
That excitement was mixed with the feeling of being spent, the reserves having been fully tapped. This was bound up with sexuality in what Theodore Dreiser called the ‘spermatic economy’, the management of libido as psychic energy, a modernization of Galenic thought (by the way, the catalogue for Sears, Roebuck and Company offered an electrical device to replenish nerve force that came with a genital attachment). Obsession with sexuality also reinforced gender roles in how neurasthenic patients were treated, following the practice of Dr. Silas Weir Mitchell: men were recommended to become more active (the ‘West cure’) and women more passive (the ‘rest cure’), although some women “used neurasthenia to challenge the status quo, rather than enforce it. They argued that traditional gender roles were causing women’s neurasthenia, and that housework was wasting their nervous energy. If they were allowed to do more useful work, they said, they’d be reinvesting and replenishing their energies, much as men were thought to do out in the wilderness” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast). That feminist-style argument, as I recall, came up in advertisements for Bernarr Macfadden’s fitness protocol in the early 1900s, encouraging (presumably middle-class) women to give up housework for exercise and so regain their vitality. Macfadden was also an advocate of living a fully sensuous life, going as far as free love.
Besides the gender wars, there was the ever-present bourgeois bigotry. Neurasthenia is the most civilized of the diseases of civilization since, in its original American conception, it was perceived as afflicting only middle-to-upper class whites, especially WASPs. As Lutz puts it, “if you were lower class, and you weren’t educated and you weren’t Anglo Saxon, you wouldn’t get neurasthenic because you just didn’t have what it took to be damaged by modernity” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast); and so, according to Lutz’s book, people would make “claims to sickness as claims to privilege.” This class bias goes back even earlier to Robert Burton’s melancholia, with its element of what would later be understood as the Cartesian anxiety of mind-body dualism, a common ailment of the intellectual elite (mind-body dualism itself goes back to the Axial Age; see Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind). The class bias was different for nostalgia, as written about by Svetlana Boym in The Future of Nostalgia (p. 5):
“For Robert Burton, melancholia, far from being a mere physical or psychological condition, had a philosophical dimension. The melancholic saw the world as a theater ruled by capricious fate and demonic play. Often mistaken for a mere misanthrope, the melancholic was in fact a utopian dreamer who had higher hopes for humanity. In this respect, melancholia was an affect and an ailment of intellectuals, a Hamletian doubt, a side effect of critical reason; in melancholia, thinking and feeling, spirit and matter, soul and body were perpetually in conflict. Unlike melancholia, which was regarded as an ailment of monks and philosophers, nostalgia was a more “democratic” disease that threatened to affect soldiers and sailors displaced far from home as well as many country people who began to move to the cities. Nostalgia was not merely an individual anxiety but a public threat that revealed the contradictions of modernity and acquired a greater importance.”
Like diabetes, melancholia and neurasthenia were first seen among the elite, and so they were taken as demonstrating one’s elite nature. Prior to neurasthenic diagnoses but in the post-revolutionary era, a similar phenomenon went by other names. This is explored by Bryan Kozlowski in one chapter of The Jane Austen Diet (pp. 232-233):
“Yet the idea that this was acceptable—nay, encouraged—behavior was rampant throughout the late 18th century. Ever since Jane was young, stress itself was viewed as the right and prerogative of the rich and well-off. The more stress you felt, the more you demonstrated to the world how truly delicate and sensitive your wealthy, softly pampered body actually was. The common catchword for this was having a heightened sensibility—one of the most fashionable afflictions in England at the time. Mainly affecting the “nerves,” a Regency woman who caught the sensibility but “disdains to be strong minded,” wrote a cultural observer in 1799, “she trembles at every breeze, faints at every peril and yields to every assailant.” Austen knew real-life strutters of this sensibility, writing about one acquaintance who rather enjoys “her spasms and nervousness and the consequence they give her.” It’s the same “sensibility” Marianne wallows in throughout the novel that bears its name, “feeding and encouraging” her anxiety “as a duty.” Readers of the era would have found nothing out of the ordinary in Marianne’s high-strung embrace of stress.”
This condition was considered a sign of progress, but over time it came to be seen by some as the greatest threat to civilization; in either case it offered rich material for the fictionalized portrayals popular at the time. Being sick in this fashion was proof that one was a modern individual, an exemplar of advanced civilization, even if at immense cost — Julie Beck explains (‘Americanitis’: The Disease of Living Too Fast):
“The nature of this sickness was vague and all-encompassing. In his book Neurasthenic Nation, David Schuster, an associate professor of history at Indiana University-Purdue University Fort Wayne, outlines some of the possible symptoms of neurasthenia: headaches, muscle pain, weight loss, irritability, anxiety, impotence, depression, “a lack of ambition,” and both insomnia and lethargy. It was a bit of a grab bag of a diagnosis, a catch-all for nearly any kind of discomfort or unhappiness.
“This vagueness meant that the diagnosis was likely given to people suffering from a variety of mental and physical illnesses, as well as some people with no clinical conditions by modern standards, who were just dissatisfied or full of ennui. “It was really largely a quality-of-life issue,” Schuster says. “If you were feeling good and healthy, you were not neurasthenic, but if for some reason you were feeling run down, then you were neurasthenic.””
I’d point out how neurasthenia was seen as primarily caused by intellectual activity, as it became a descriptor of a common experience among the burgeoning middle class of often well-educated professionals and office workers. This relates to Weston A. Price’s work in the 1930s, as modern dietary changes first hit this demographic since they had the means to afford eating a fully industrialized Standard American Diet (SAD), long before others (within decades, though, SAD-caused malnourishment would wreck health at all levels of society). What this meant, in particular, was a diet high in processed carbs and sugar that coincided with the early-1900s decrease in consumption of meat and saturated fats following Upton Sinclair’s 1906 muckraking novel of the meat-packing industry, The Jungle. As Price demonstrated, this was a vast change from the traditional diet found all over the world, including in rural Europe (and presumably in rural America, as most Americans were not urbanized until the turn of the last century), that always included significant amounts of nutritious animal foods loaded with fat-soluble vitamins, not to mention lots of healthy fats and cholesterol.
Prior to talk of neurasthenia, the exhaustion model of health, portrayed as waste and depletion, took hold in Europe centuries earlier (e.g., anti-masturbation panics) and had its roots in the humoral theory of bodily fluids. It has long been understood that food, specifically macronutrients (carbohydrate, protein, & fat) and food groups, affects mood and behavior — see the early literature on melancholy. During feudalism, food laws were used as a means of social control: in one case, meat was prohibited prior to Carnival because of its energizing effect, which it was thought could lead to rowdiness or even revolt, as sometimes did happen (Ken Albala & Trudy Eden, Food and Faith in Christian Culture). Red meat, in particular, was thought to heat up blood (warm, wet) and yellow bile (warm, dry), promoting the sanguine and choleric personalities of masculinity. Like women, peasants were supposed to be submissive and hence not too masculine — they were to be socially controlled, not self-controlled. Anyone who was too strong-willed and strong-minded, other than the (ruling, economic, clerical, and intellectual) elite, was considered problematic; and one of the solutions was an enforced change of diet to create the proper humoral disposition for their appointed role within the social order (i.e., depriving them of nutrient-dense meat until an individual or group was too malnourished, weak, anemic, sickly, docile, and effeminate to be assertive, aggressive, and confrontational toward their ‘betters’).
There does seem to be a correlation (causal link?) between an increase of intellectual activity and abstract thought and an increase of carbohydrates and sugar, with this connection first appearing during the early colonial era that set the stage for the Enlightenment. It was the agricultural mind taken to a whole new level. Indeed, a steady flow of glucose is one way to fuel extended periods of brain work, such as reading and writing for hours on end and late into the night — the reason college students to this day will down sugary drinks while studying. Because of trade networks, Enlightenment thinkers were buzzing on the suddenly much more available simple carbs and sugar, with an added boost from caffeine and nicotine. The modern intellectual mind was drugged-up right from the beginning, and over time it took its toll. Such dietary highs inevitably lead to ever greater crashes of mood and health. Interestingly, Dr. Silas Weir Mitchell, who advocated the ‘rest cure’ and ‘West cure’ in treating neurasthenia and other ailments, additionally used a “meat-rich diet” for his patients (Ann Stiles, Go rest, young man). Other doctors of that era were even more direct in using specifically low-carb diets for various health conditions, often for obesity, which was also a focus of Dr. Mitchell.
As a side note, the gendering of diet was seen as important for constructing, maintaining, and enforcing gender roles; this carries over into the modern bias that masculine men eat steak and effeminate women eat salad. According to humoralism, men are well contained while women are leaky vessels. One can immediately see the fears of neurasthenia, emasculation, and excessive ejaculation. The ideal man was supposed to hold onto and control his bodily fluids, from urine to semen, by using and investing them carefully. With neurasthenia, though, men were seen as having become effeminate and leaky, dissipating and draining away their reserves of vital fluids and psychic energies. So, a neurasthenic man needed a strengthening of the boundaries that held everything in. The leakiness of women was also a problem, but women couldn’t and shouldn’t be expected to contain themselves. The rest cure designed for women was to isolate them in a bedroom where they’d be contained by the architectural structure of the home that was owned and ruled over by the male master. A husband and, as an extension, the husband’s property were to contain the wife, since she too was property of the man’s propertied self. This made a weak man of the upper classes even more dangerous to the social order because he couldn’t play his needed gender role of husband and patriarch, upon which all of Western civilization was dependent.
All of this was based on an economic model of physiological scarcity. With neurasthenia arising in late modernity, the public debate was overtly framed by an economic metaphor. But the perceived need of economic containment of the self, be it self-containment or enforced containment, went back to early modernity. The enclosure movement was part of a larger reform movement, not only of land but also of society and identity.
* * *
“It cannot be denied that civilization, in its progress, is rife with causes which over-excite individuals, and result in the loss of mental equilibrium.”
~Edward Jarvis, 1843 “What shall we do with the Insane?”
The North American Review, Volume 56, Issue 118
It goes far beyond diet or any other single factor. There has been a diversity of stressors that continued to amass over the centuries of tumultuous change. The exhaustion of modern man (and typically the focus has been on men) had been building up for generations upon generations before it came to feel like a world-shaking crisis with the new industrialized world. The lens of neurasthenia was an attempt to grapple with what had changed, but the focus was too narrow. With the plague of neurasthenia, the atomization of commercialized man and woman couldn’t hold together. And so there was a temptation toward nationalistic projects, including wars, to revitalize the ailing soul and to suture the gash of social division and disarray. But this further wrenched out of alignment the traditional order that had once held society together, and what was lost mostly went without recognition. The individual was brought into the foreground of public thought, a lone protagonist in a social Darwinian world. In this melodramatic narrative of struggle and self-assertion, many individuals didn’t fare so well and everything else suffered in the wake.
Tom Lutz writes that, “By 1903, neurasthenic language and representations of neurasthenia were everywhere: in magazine articles, fiction, poetry, medical journals and books, in scholarly journals and newspaper articles, in political rhetoric and religious discourse, and in advertisements for spas, cures, nostrums, and myriad other products in newspapers, magazines and mail-order catalogs” (American Nervousness, 1903, p. 2).
There was a sense of moral decline that was hard to grasp, although some people like Weston A. Price tried to dig down into concrete explanations of what had so gone wrong, the social and psychological changes observable during mass urbanization and industrialization. He was far from alone in his inquiries, having built on the prior observations of doctors, anthropologists, and missionaries. Other doctors and scientists were looking into the influences of diet in the mid-1800s and, by the 1880s, scientists were exploring a variety of biological theories. Their inability to pinpoint the cause maybe had more to do with their lack of a needed framework, as they touched upon numerous facets of biological functioning:
“Not surprisingly, laboratory experiments designed to uncover physiological changes in the nerve cell were inconclusive. European research on neurasthenics reported such findings as loss of elasticity of blood vessels, thickening of the cell wall, changes in the shape of nerve cells, or nerve cells that never advanced beyond an embryonic state. Another theory held that an overtaxed organism cannot keep up with metabolic requirements, leading to inadequate cell nutrition and waste excretion. The weakened cells cannot develop properly, while the resulting build-up of waste products effectively poisons the cells (so-called “autointoxication”). This theory was especially attractive because it seemed to explain the extreme diversity of neurasthenic symptoms: weakened or poisoned cells might affect the functioning of any organ in the body. Furthermore, “autointoxicants” could have a stimulatory effect, helping to account for the increased sensitivity and overexcitability characteristic of neurasthenics.” (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia)
This early scientific research could not lessen the mercurial sense of unease, as neurasthenia was from its inception a broad category that captured some greater shift in public mood, even as it so powerfully shaped the individual’s health. For all the effort, there were as many theories about neurasthenia as there were symptoms. Deeper insight was required. “[I]f a human being is a multiformity of mind, body, soul, and spirit,” writes Preston, “you don’t achieve wholeness or fulfillment by amputating or suppressing one or more of these aspects, but only by an effective integration of the four aspects.” But integration is easier said than done.
The modern human hasn’t been suffering from mere psychic wear and tear, for the individual body itself has been showing the signs of sickness, as the diseases of civilization have become harder and harder to ignore. On a societal level of human health, I’ve previously shared passages from Lears (see here) — he discusses the vitalist impulse that was the response to the turmoil, and vitalism was often explored in terms of physical health as its most apparent manifestation, although social and spiritual health were just as often spoken of in the same breath. The whole person was under assault by an accumulation of stressors, and the increasingly isolated individual didn’t have the resources to fight them off.
By the way, this was far from being limited to America. Europeans picked up the discussion of neurasthenia and took it in other directions, often with less optimism about progress, but also some thinkers emphasizing social interpretations with specific blame on hyper-individualism (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia). Thoughts on neurasthenia became mixed up with earlier speculations on nostalgia and romanticized notions of rural life. More important, Russian thinkers in particular understood that the problems of modernity weren’t limited to the upper classes, instead extending across entire populations, as a result of how societies had been turned on their heads during that fractious century of revolutions.
In looking around, I came across some other interesting stuff. From 1901 Nervous and Mental Diseases by Archibald Church and Frederick Peterson, the authors in the chapter on “Mental Disease” are keen to further the description, categorization, and labeling of ‘insanity’. And I noted their concern with physiological asymmetry, something shared later with Price, among many others going back to the prior century.
Maybe asymmetry was not only indicative of developmental issues but also symbolic of a deeper imbalance. The attempts at phrenological analysis of psychiatric, criminal, and anti-social behavior were off-base; and, despite the bigotry and proto-genetic determinism among racists using these kinds of ideas, there is a simple truth about health in relationship to physiological development, most easily observed in bone structure. But it would take many generations to understand the deeper scientific causes: nutrition (e.g., Price’s discovery of vitamin K2, what he called Activator X), along with parasites, toxins, and epigenetics. Church and Peterson did acknowledge that this went beyond mere individual or even familial issues: “The proportion of the insane to normal individuals may be stated to be about 1 to 300 of the population, though this proportion varies somewhat within narrow limits among different races and countries. It is probable that the intemperate use of alcohol and drugs, the spreading of syphilis, and the overstimulation in many directions of modern civilization have determined an increase difficult to estimate, but nevertheless palpable, of insanity in the present century as compared with past centuries.”
Also, there is the 1902 The Journal of Nervous and Mental Disease: Volume 29 edited by William G. Spiller. There is much discussion in there about how anxiety was observed, diagnosed, and treated at the time. Some of the case studies make for a fascinating read — check out: “Report of a Case of Epilepsy Presenting as Symptoms Night Terrors, Impellant Ideas, Complicated Automatisms, with Subsequent Development of Convulsive Motor Seizures and Psychical Aberration” by W. K. Walker. This reminds me of the case that influenced Sigmund Freud and Carl Jung, Daniel Paul Schreber’s 1903 Memoirs of My Nervous Illness.
Talk about “a disintegration of the personality and character structure of Modern Man and mental-rational consciousness,” as Scott Preston put it. He goes on to say that the individual is not a natural thing, and that there is an incoherency in Margaret Thatcher’s view of things when she infamously declared “there is no such thing as society” — that she saw only individuals and families, that is to say, atoms and molecules. Her saying that really did capture the mood of the society she denied existing. Even the family was shrunk down to the ‘nuclear’. To state there is no society is to declare that there is also no extended family, no kinship, no community, that there is no larger human reality of any kind. Ironically, in this pseudo-libertarian sentiment, there is nothing holding the family together other than government laws imposing strict control of marriage and parenting, where common finances lock two individuals together under the rule of capitalist realism (the only larger realities involved are inhuman systems) — compared to high-trust societies such as the Nordic countries, where the definition and practice of family life is less legalistic (Nordic Theory of Love and Individualism).
* * *
“It is easy, as we can see, for a barbarian to be healthy; for a civilized man the task is hard. The desire for a powerful and uninhibited ego may seem to us intelligible, but, as is shown by the times we live in, it is in the profoundest sense antagonistic to civilization.”
~Sigmund Freud, 1938 An Outline of Psychoanalysis
“Consciousness is a very recent acquisition of nature, and it is still in an “experimental” state. It is frail, menaced by specific dangers, and easily injured.”
~Carl Jung, 1961 Man and His Symbols
Part 1: Approaching the Unconscious, “The Importance of Dreams”
The individual consumer-citizen as a legal member of a family unit has to be created and then controlled, as it is a rather unstable atomized identity. “The idea of the “individual”,” Preston says, “has become an unsustainable metaphor and moral ideal when the actual reality is “21st century schizoid man” — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in “the New Normal” of double-talk, double-think, double-standard, and double-bind.” That is partly the reason for the heavy focus on the body, an attempt to make concrete the individual in order to hold together the splintered self — great analysis of this can be found in Lewis Hyde’s Trickster Makes This World: “an unalterable fact about the body is linked to a place in the social order, and in both cases, to accept the link is to be caught in a kind of trap. Before anyone can be snared in this trap, an equation must be made between the body and the world (my skin color is my place as a Hispanic; menstruation is my place as a woman)” (see one of my posts about it: Lock Without a Key). Along with increasing authoritarianism, there was increasing medicalization and rationalization — to try to make sense of what was senseless.
A specific example of a change can be found in Dr. Frederick Hollick (1818-1900), a popular writer and speaker on medicine and health — his “links were to the free-thinking tradition, not to Christianity” (Helen Lefkowitz Horowitz, Rewriting Sex). Under the influence of Mesmerism and animal magnetism, he studied and wrote about what was variously called, in more scientific-sounding terms, electrotherapeutics, galvanism, and electro-galvanism. Hollick was an English follower of the Welsh industrialist and socialist Robert Owen, whom he literally followed to the United States, where Owen had started the utopian community New Harmony, a Southern Indiana village bought from the utopian German Harmonists and then filled with brilliant and innovative minds but lacking in practical know-how about running a self-sustaining community (Abraham Lincoln, later a friend of the Owen family, recalled as a boy seeing a boat full of books heading to New Harmony).
“As had Owen before him, Hollick argued for the positive value of sexual feeling. Not only was it neither immoral nor injurious, it was the basis for morality and society. […] In many ways, Hollick was a sexual enthusiast” (Horowitz). These were the social circles of Abraham Lincoln, who personally knew free-love advocates; that is why early Republicans were often referred to as ‘Red Republicans’, the ‘Red’ indicating radicalism as it still does to this day. Hollick wasn’t the first to be a sexual advocate nor, of course, would he be the last — preceding him were Sarah Grimke (1837, Equality of the Sexes) and Charles Knowlton (1839, The Private Companion of Young Married People), Hollick having been “a student of Knowlton’s work” (Debran Rowland, The Boundaries of Her Body); and following him were two more well-known figures, the previously mentioned Bernarr Macfadden (1868-1955), who was the first major health and fitness guru, and Wilhelm Reich (1897-1957), who was the less respectable member of the trinity formed with Sigmund Freud and Carl Jung. Sexuality became a symbolic issue of politics and health, partly because of increasing scientific knowledge but also because of the increasing marketization of products such as birth control (with public discussion of contraceptives happening in the late 1700s and advances in contraceptive production in the early 1800s), the latter being quite significant as it meant individuals could control pregnancy, which is particularly relevant to women. It should be noted that Hollick promoted the ideal of female sexual autonomy, that sex should be assented to and enjoyed by both partners.
This growing concern with sexuality emerged alongside the growing middle class in the decades following the American Revolution. Among much else, it was related to the post-revolutionary focus on parenting and the perceived need for raising republican citizens — this formed an audience far beyond radical libertinism and free-love. Expert advice was needed for the new bourgeois family life, as part of the ‘civilizing process’ that increasingly took hold at that time with not only sexual manuals but also parenting guides, health pamphlets, books of manners, cookbooks, diet books, etc. — cut off from the roots of traditional community and kinship, the modern individual no longer trusted inherited wisdom and so needed to be taught how to live, how to behave and relate (Norbert Elias, The Civilizing Process, & Society of Individuals; Bruce Mazlish, Civilization and Its Contents; Keith Thomas, In Pursuit of Civility; Stephen Mennell, The American Civilizing Process; Cas Wouters, Informalization; Jonathan Fletcher, Violence and Civilization; François Dépelteau & T. Landini, Norbert Elias and Social Theory; Rob Watts, States of Violence and the Civilising Process; Pieter Spierenburg, Violence and Punishment; Steven Pinker, The Better Angels of Our Nature; Eric Dunning & Chris Rojek, Sport and Leisure in the Civilizing Process; D. E. Thiery, Polluting the Sacred; Helmut Kuzmics & Roland Axtmann, Authority, State and National Character; Mary Fulbrook, Un-Civilizing Processes?; John Zerzan, Against Civilization; Michel Foucault, Madness and Civilization; Dennis Smith, Norbert Elias and Modern Social Theory; Stjepan Mestrovic, The Barbarian Temperament; Thomas Salumets, Norbert Elias and Human Interdependencies).
Along with the rise of science, this situation promoted the role of the public intellectual, a role Hollick effectively took advantage of: after the failure of Owen’s utopian experiment, he went on the lecture circuit, which brought legal cases in unsuccessful attempts to silence him — the kind of persecution that Reich also later endured.
To put it in perspective, this Antebellum era of public debate and public education on sexuality coincided with other changes. Following revolutionary-era feminism (e.g., Mary Wollstonecraft), the ‘First Wave’ of organized feminists emerged generations later with the Seneca Falls convention in 1848 and, in that movement, there was a strong abolitionist impulse. This was part of the rise of ideological -isms in the North that so concerned the Southern aristocrats who wanted to maintain their hierarchical control of the entire country, control they were quickly losing with the shift of power in the Federal government. A few years before that, in 1844, a more effective condom was developed using vulcanized rubber, although condoms had been on the market since the previous decade; also in the 1840s, the vaginal sponge became available. Interestingly, many feminists were as against contraceptives as they were against abortions. These were far from being mere practical issues, as politics imbued every aspect, and some feminists worried about how this might lessen the role of women and motherhood in society, if sexuality was divorced from pregnancy.
This was at a time when the abortion rate was sky-rocketing, indicating most women held other views, since large farm families were less needed with the increase of both industrialized urbanization and industrialized farming. “Yet we also know that thousands of women were attending lectures in these years, lectures dealing, in part, with fertility control. And rates of abortion were escalating rapidly, especially, according to historian James Mohr, the rate for married women. Mohr estimates that in the period 1800-1830, perhaps one out of every twenty-five to thirty pregnancies was aborted. Between 1850 and 1860, he estimates, the ratio may have been one out of every five or six pregnancies. At mid-century, more than two hundred full-time abortionists reportedly worked in New York City” (Polly F. Radosh, “Abortion: A Sociological Perspective”, in Interdisciplinary Views on Abortion, ed. by Susan A. Martinelli-Fernandez, Lori Baker-Sperry, & Heather McIlvaine-Newsad). Other sources concur and extend this pattern of high abortion rates into the early 20th century: “Some have estimated that between 20-35 percent of 19th century pregnancies were terminated as a means of restoring “menstrual regularity” (Luker, 1984, p. 18-21). About 20 percent of pregnancies were aborted as late as in the 1930s (Tolnai, 1939, p. 425)” (Rickie Solinger, Pregnancy and Power, p. 61).
This is unsurprising, as abortifacients have been known for millennia, recorded in ancient texts from diverse societies, and probably were common knowledge prior to any written language, considering abortifacients are used by many hunter-gatherer tribes who need birth control to space out pregnancies in order to avoid malnourished babies, among other reasons. This is true within the Judeo-Christian tradition as well: the Old Testament gives an abortion recipe for when a wife gets pregnant from an affair (Numbers 5:11-31). Patriarchal social dominators sought to further control women not necessarily for religious reasons, but more because medical practice was being professionalized by men who wanted to eliminate the business competition of female doctors, midwives, and herbalists. “To do so, they challenged common perceptions that a fetus was not a person until the pregnant mother felt it “quicken,” or move, inside their womb. In a time before sonograms, this was often the only way to definitively prove that a pregnancy was underway. Quickening was both a medical and legal concept, and abortions were considered immoral or illegal only after quickening. Churches discouraged the practice, but made a distinction between a woman who terminated her pregnancy pre- or post-quickening” (Erin Blakemore). Yet these conservative authoritarians did and still do claim to speak on behalf of some vague and amorphous concept of Western Civilization and Christendom.
This is a great example of how, through the power of charismatic demagogues and Machiavellian social dominators, modern reactionary ideology obscures the past with deceptive nostalgia and replaces the traditional with historical revisionism. The thing is, until the modern era, abortifacients and other forms of birth control weren’t politicized, much less under the purview of judges. They were practical concerns that were largely determined privately and personally or else determined informally within communities and families. “Prior to the formation of the AMA, decisions related to pregnancy and abortion were made primarily within the domain and control of women. Midwives and the pregnant women they served decided the best course of action within extant knowledge of pregnancy. Most people did not view what would currently be called first trimester abortion as a significant moral issue. […] A woman’s awareness of quickening indicated a real pregnancy” (Polly F. Radosh). Yet something did change with birth control that was improved in its efficacy and ever more common, or else more out in the open, making it a much more public and politicized issue, not to mention exacerbated by capitalist markets and mass media.
Premarital sex or, heck, even marital sex no longer inevitably meant birth; and with contraceptives, unwanted pregnancies often could be prevented entirely. Maybe this is why fertility had been declining for so long, and definitely the reason there was a mid-19th century moral panic. “Extending the analysis back further, the White fertility rate declined from 7.04 in 1800 to 5.42 in 1850, to 3.56 in 1900, and 2.98 in 1950. Thus, the White fertility declined for nearly all of American history but may have bottomed out in the 1980s. Black fertility has also been declining for well over 150 years, but it may very well continue to do so in the coming decades” (Ideas and Data, Sex, Marriage, and Children: Trends Among Millennial Women). If this is a crisis, it started pretty much at the founding of the country. And if we had reliable data before that, we might see the trend having originated in the colonial era or maybe back in late feudalism during the enclosure movement that destroyed traditional rural communities and kinship groups. Early Americans, by today’s standards of the culture wars, were not good Christians — many visiting Europeans at the time saw them as uncouth heathens and quite dangerous at that, such as the common American practice of toting around guns and knives, ever ready for a fight, whereas carrying weapons had been made illegal in England. In The Churching of America, Roger Finke and Rodney Stark write (pp. 25-26):
“Americans are burdened with more nostalgic illusions about the colonial era than about any other period in their history. Our conceptions of the time are dominated by a few powerful illustrations of Pilgrim scenes that most people over forty stared at year after year on classroom walls: the baptism of Pocahontas, the Pilgrims walking through the woods to church, and the first Thanksgiving. Had these classroom walls also been graced with colonial scenes of drunken revelry and barroom brawling, of women in risque ball-gowns, of gamblers and rakes, a better balance might have been struck. For the fact is that there never were all that many Puritans, even in New England, and non-Puritan behavior abounded. From 1761 through 1800 a third (33.7%) of all first births in New England occurred after less than nine months of marriage (D. S. Smith, 1985), despite harsh laws against fornication. Granted, some of these early births were simply premature and do not necessarily show that premarital intercourse had occurred, but offsetting this is the likelihood that not all women who engaged in premarital intercourse would have become pregnant. In any case, single women in New England during the colonial period were more likely to be sexually active than to belong to a church-in 1776 only about one out of five New Englanders had a religious affiliation. The lack of affiliation does not necessarily mean that most were irreligious (although some clearly were), but it does mean that their faith lacked public expression and organized influence.”
Though marriage remained important as an ideal in American culture, what changed was that procreative control became increasingly available — with fewer accidental pregnancies and more abortions, a powerful motivation for marriage disappeared. Unsurprisingly, at the same time, there were increasing worries about the breakdown of community and family, concerns that would turn into moral panic at various points. Antebellum America was in turmoil. This was concretely exemplified by the dropping birth rate that was already noticeable by mid-19th century (Timothy Crumrin, “Her Daily Concern:” Women’s Health Issues in Early 19th-Century Indiana) and was nearly halved from 1800 to 1900 (Debran Rowland, The Boundaries of Her Body). “The late 19th century and early 20th saw a huge increase in the country’s population (nearly 200 percent between 1860 and 1910) mostly due to immigration, and that population was becoming ever more urban as people moved to cities to seek their fortunes—including women, more of whom were getting college educations and jobs outside the home” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast). It was a period of crisis, not all that different from our present crisis, including the fear about the low birth rate of native-born white Americans, especially the endangered species of whites/WASPs, being overtaken by the supposed dirty hordes of blacks, ethnics, and immigrants (i.e., replacement theory); at a time when Southern and Eastern Europeans, and even the Irish, were questionable in their whiteness, particularly if Catholic (Aren’t Irish White?).
The promotion of birth control was considered a genuine threat to American society, maybe to all of Western Civilization. It was most directly a threat to traditional gender roles. Women could better control when they got pregnant, a decisive factor in the growing numbers of women entering college and the workforce. And amid an epidemic of neurasthenia, the dilemma was worsened by fears of a crippling effeminacy that neutered masculine potency. Was modern man, specifically the white ruling elite, up to the task of carrying on Western Civilization?
“Indeed, civilization’s demands on men’s nerve force had left their bodies positively effeminate. According to Beard, neurasthenics had the organization of “women more than men.” They possessed “a muscular system comparatively small and feeble.” Their dainty frames and feeble musculature lacked the masculine vigor and nervous reserves of even their most recent forefathers. “It is much less than a century ago, that a man who could not [drink] many bottles of wine was thought of as effeminate—but a fraction of a man.” No more. With their dwindling reserves of nerve force, civilized men were becoming increasingly susceptible to the weakest stimulants until now, “like babes, we find no safe retreat, save in chocolate and milk and water.” Sex was as debilitating as alcohol for neurasthenics. For most men, sex in moderation was a tonic. Yet civilized neurasthenics could become ill if they attempted intercourse even once every three months. As Beard put it, “there is not force enough left in them to reproduce the species or go through the process of reproducing the species.” Lacking even the force “to reproduce the species,” their manhood was clearly in jeopardy.” (Gail Bederman, Manliness and Civilization, pp. 87-88)
This led to a backlash that began before the Civil War with the early obscenity laws and abortion laws, but went into high gear with the 1873 Comstock laws that effectively shut down the free market of both ideas and products related to sexuality, including sex toys. This made it nearly impossible for most women to learn about birth control or obtain contraceptives and abortifacients. There was a felt need to restore order, and that meant the white male order of the WASP middle-to-upper classes, especially with the end of slavery, mass immigration of ethnics, urbanization, and industrialization. The crisis wasn’t only ideological or political. The entire world had been falling apart for centuries with the ending of feudalism and the ancien régime, the last remnants of which in America were maintained through slavery. With motherhood seen as the backbone of civilization, it was believed that women’s sexuality had to be controlled and, unlike so much else that was out of control, it actually could be controlled through enforcement of laws.
Outlawing abortions is a particularly interesting example of social control. Even with laws in place, abortions remained commonly practiced by local doctors, even in many rural areas (American Christianity: History, Politics, & Social Issues). Corey Robin argues that the strategy hasn’t been to deny women’s agency but to assert their subordination (Denying the Agency of the Subordinate Class). This is why, according to Robin, abortion laws were designed primarily to target male doctors, although they rarely did, and not their female patients (at least once women had been largely removed from medical and healthcare practice, beyond the role of nurses who assisted male doctors). Everything comes down to agency or its lack or loss, but our entire sense of agency is out of accord with our own human nature. We seek to control what is outside of us, including other people, for our own sense of self is out of control. The legalistic worldview is inherently authoritarian, at the heart of what Julian Jaynes proposes as the post-bicameral project of consciousness, the metaphorically contained self. But this psychic container is weak and keeps leaking all over the place.
* * *
“It is clear that if it goes on with the same ruthless speed for the next half century . . . the sane people will be in a minority at no very distant day.”
~Henry Maudsley, 1877 “The Alleged Increase of Insanity”
Journal of Mental Science, Volume 23, Issue 101
“If this increase was real, we have argued, then we are now in the midst of an epidemic of insanity so insidious that most people are even unaware of its existence.”
~Edwin Fuller Torrey & Judy Miller, 2001 The Invisible Plague: The Rise of Mental Illness from 1750 to the Present
To bring it back to the original inspiration, Scott Preston wrote: “Quite obviously, our picture of the human being as an indivisible unit or monad of existence was quite wrong-headed, and is not adequate for the generation and re-generation of whole human beings. Our self-portrait or self-understanding of “human nature” was deficient and serves now only to produce and reproduce human caricatures. Many of us now understand that the authentic process of individuation hasn’t much in common at all with individualism and the supremacy of the self-interest.” The failure we face is that of identity, of our way of being in the world. As with neurasthenia in the past, we are now in a crisis of anxiety and depression, along with yet another moral panic about the declining white race. So, we get the likes of Steve Bannon, Donald Trump, and Jordan Peterson. We failed to resolve past conflicts and so they keep re-emerging. Over this past century, we have continued to be in a crisis of identity (Mark Greif, The Age of the Crisis of Man).
“In retrospect, the omens of an impending crisis and disintegration of the individual were rather obvious,” Preston points out. “So, what we face today as “the crisis of identity” and the cognitive dissonance of “the New Normal” is not something really new — it’s an intensification of that disintegrative process that has been underway for over four generations now. It has now become acute. This is the paradox. The idea of the “individual” has become an unsustainable metaphor and moral ideal when the actual reality is “21st century schizoid man” — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in “the New Normal” of double-talk, double-think, double-standard, and double-bind.” We never were individuals. It was just a story we told ourselves, but there are others that could be told. Scott Preston offers an alternative narrative, that of individuation.
* * *
I found some potentially interesting books while skimming material on Google Books, in researching Frederick Hollick and related topics. Among the titles below, I’ll share some text from one of them because it offers a good summary of sexuality at the time, specifically women’s sexuality. Obviously, it went far beyond sexuality itself, and going by my own theorizing I’d say it is yet another example of symbolic conflation, considering its direct relationship to abortion.
The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland
WOMEN AND THE WOMB: The Emerging Birth Control Debate
The twentieth century dawned in America on a falling white birth rate. In 1800, an average of seven children were born to each “American-born white wife,” historians report. 29 By 1900, that number had fallen to roughly half. 30 Though there may have been several factors, some historians suggest that this decline—occurring as it did among young white women—may have been due to the use of contraceptives or abstinence, though few talked openly about it. 31
“In spite of all the rhetoric against birth control, the birthrate plummeted in the late nineteenth century in America and Western Europe (as it had in France the century before); family size was halved by the time of World War I,” notes Shari Thurer in The Myth of Motherhood. 32
As issues go, the “plummeting birthrate” among whites was a powder keg, sparking outcry as the “failure” of the privileged class to have children was contrasted with the “failure” of poor immigrants and minorities to control the number of children they were having. Criticism was loud and rampant. “The upper classes started the trend, and by the 1880s the swarms of ragged children produced by the poor were regarded by the bourgeoisie, so Emile Zola’s novels inform us, as evidence of the lower order’s ignorance and brutality,” Thurer notes. 33
But the seeds of this then-still nearly invisible movement had been planted much earlier. In the late 1700s, British political theorists began disseminating information on contraceptives as concerns of overpopulation grew among some classes. 34 Despite the separation of an ocean, by the 1820s, this information was “seeping” into the United States.
“Before the introduction of the Comstock laws, contraceptive devices were openly advertised in newspapers, tabloids, pamphlets, and health magazines,” Yalom notes. “Condoms had become increasingly popular since the 1830s, when vulcanized rubber (the invention of Charles Goodyear) began to replace the earlier sheepskin models.” 35 Vaginal sponges also grew in popularity during the 1840s, as women traded letters and advice on contraceptives. 36 Of course, prosecutions under the Comstock Act went a long way toward chilling public discussion.
Though Margaret Sanger’s is often the first name associated with the dissemination of information on contraceptives in the early United States, in fact, a woman named Sarah Grimke preceded her by several decades. In 1837, Grimke published the Letters on the Equality of the Sexes, a pamphlet containing advice about sex, physiology, and the prevention of pregnancy. 37
Two years later, Charles Knowlton published The Private Companion of Young Married People, becoming the first physician in America to do so. 38 Near this time, Frederick Hollick, a student of Knowlton’s work, “popularized” the rhythm method and douching. And by the 1850s, a variety of material was being published providing men and women with information on the prevention of pregnancy. And the advances weren’t limited to paper.
“In 1846, a diaphragm-like article called The Wife’s Protector was patented in the United States,” according to Marilyn Yalom. 39 “By the 1850s dozens of patents for rubber pessaries ‘inflated to hold them in place’ were listed in the U.S. Patent Office records,” Janet Farrell Brodie reports in Contraception and Abortion in 19th Century America. 40 And, although many of these early devices were often more medical than prophylactic, by 1864 advertisements had begun to appear for “an India-rubber contrivance” similar in function and concept to the diaphragms of today. 41
“[B]y the 1860s and 1870s, a wide assortment of pessaries (vaginal rubber caps) could be purchased at two to six dollars each,” says Yalom. 42 And by 1860, following publication of James Ashton’s Book of Nature, the five most popular ways of avoiding pregnancy—“withdrawal, and the rhythm methods”—had become part of the public discussion. 43 But this early contraceptives movement in America would prove a victim of its own success. The openness and frank talk that characterized it would run afoul of the burgeoning “purity movement.”
“During the second half of the nineteenth century, American and European purity activists, determined to control other people’s sexuality, railed against male vice, prostitution, the spread of venereal disease, and the risks run by a chaste wife in the arms of a dissolute husband,” says Yalom. “They agitated against the availability of contraception under the assumption that such devices, because of their association with prostitution, would sully the home.” 44
Anthony Comstock, a “fanatical figure,” some historians suggest, was a charismatic “purist,” who, along with others in the movement, “acted like medieval Christians engaged in a holy war,” Yalom says. 45 It was a successful crusade. “Comstock’s dogged efforts resulted in the 1873 law passed by Congress that barred use of the postal system for the distribution of any ‘article or thing designed or intended for the prevention of contraception or procuring of abortion’,” Yalom notes.
Comstock’s zeal would also lead to his appointment as a special agent of the United States Post Office with the authority to track and destroy “illegal” mailing, i.e., mail deemed to be “obscene” or in violation of the Comstock Act. Until his death in 1915, Comstock is said to have been energetic in his pursuit of offenders, among them Dr. Edward Bliss Foote, whose articles on contraceptive devices and methods were widely published. 46 Foote was indicted in January of 1876 for dissemination of contraceptive information. He was tried, found guilty, and fined $3,000. Though donations of more than $300 were made to help defray costs, Foote was reportedly more cautious after the trial. 47 That “caution” spread to others, some historians suggest.
Disorderly Conduct: Visions of Gender in Victorian America
By Carroll Smith-Rosenberg
Riotous Flesh: Women, Physiology, and the Solitary Vice in Nineteenth-Century America
by April R. Haynes
The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland
Rereading Sex: Battles Over Sexual Knowledge and Suppression in Nineteenth-century America
by Helen Lefkowitz Horowitz
Rewriting Sex: Sexual Knowledge in Antebellum America, A Brief History with Documents
by Helen Lefkowitz Horowitz
Imperiled Innocents: Anthony Comstock and Family Reproduction in Victorian America
by Nicola Kay Beisel
Against Obscenity: Reform and the Politics of Womanhood in America, 1873–1935
by Leigh Ann Wheeler
Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age
by Paul S. Boyer
American Sexual Histories
edited by Elizabeth Reis
Wash and Be Healed: The Water-Cure Movement and Women’s Health
by Susan Cayleff
From Eve to Evolution: Darwin, Science, and Women’s Rights in Gilded Age America
by Kimberly A. Hamlin
Manliness and Civilization: A Cultural History of Gender and Race in the United States, 1880-1917
by Gail Bederman
One Nation Under Stress: The Trouble with Stress as an Idea
by Dana Becker
* * *
8/18/19 – Looking back at this piece, I realize there is so much that could be added to it. And it already is long. It’s a topic that would require writing a book to do it justice. And it is such a fascinating area of study with lines of thought going in numerous directions. But I’ll limit myself by adding only a few thoughts that point toward some of those other directions.
The topic of this post goes back to the Renaissance (Western Individuality Before the Enlightenment Age) and even earlier to the Axial Age (Hunger for Connection), a thread that can be traced back through history following the collapse of what Julian Jaynes called bicameral civilization in the Bronze Age. At the beginning of modernity, the psychic tension erupted in many ways that were increasingly dramatic and sometimes disturbing, from revolution to media panics (Technological Fears and Media Panics). I see all of this as having to do with the isolating and anxiety-inducing effects of hyper-individualism. The rigid egoic boundaries required by our social order are simply tiresome (Music and Dance on the Mind), as Julian Jaynes conjectured:
“Another advantage of schizophrenia, perhaps evolutionary, is tirelessness. While a few schizophrenics complain of generalized fatigue, particularly in the early stages of the illness, most patients do not. In fact, they show less fatigue than normal persons and are capable of tremendous feats of endurance. They are not fatigued by examinations lasting many hours. They may move about day and night, or work endlessly without any sign of being tired. Catatonics may hold an awkward position for days that the reader could not hold for more than a few minutes. This suggests that much fatigue is a product of the subjective conscious mind, and that bicameral man, building the pyramids of Egypt, the ziggurats of Sumer, or the gigantic temples at Teotihuacan with only hand labor, could do so far more easily than could conscious self-reflective men.”
On the Facebook page for Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind, Luciano Imoto made the same basic point in speaking about hyper-individualism. He stated that, “In my point of view the constant use of memory (and the hippocampus) to sustain a fictitious identity of “self/I” could be deleterious to the brain’s health at long range (considering that the brain consumes about 20 percent of the body’s energy).” I’m sure others have made similar observations. This strain on the psyche has been building up for a long time, but it became particularly apparent in the 19th century, to such an extent that it was deemed necessary to build special institutions to house and care for the broken and deficient humans who couldn’t handle modern life or else couldn’t appropriately conform to the ever more oppressive social norms (Mark Jackson, The Borderland of Imbecility). As radical as some consider Jaynes to be, insights like this were hardly new — in 1867, Henry Maudsley offered insight laced with bigotry, from The Physiology and Pathology of Mind:
“There are general causes, such as the state of civilization in a country, the form of its government and its religion, the occupation, habits, and condition of its inhabitants, which are not without influence in determining the proportion of mental diseases amongst them. Reliable statistical data respecting the prevalence of insanity in different countries are not yet to be had; even the question whether it has increased with the progress of civilization has not been positively settled. Travellers are certainly agreed that it is a rare disease amongst barbarous people, while, in the different civilized nations of the world, there is, so far as can be ascertained, an average of about one insane person in five hundred inhabitants. Theoretical considerations would lead to the expectation of an increased liability to mental disorder with an increase in the complexity of the mental organization: as there are a greater liability to disease, and the possibility of many more diseases, in a complex organism like the human body, where there are many kinds of tissues and an orderly subordination of parts, than in a simple organism with less differentiation of tissue and less complexity of structure; so in the complex mental organization, with its manifold, special, and complex relations with the external, which a state of civilization implies, there is plainly the favourable occasion of many derangements. The feverish activity of life, the eager interests, the numerous passions, and the great strain of mental work incident to the multiplied industries and eager competition of an active civilization, can scarcely fail, one may suppose, to augment the liability to mental disease. On the other hand, it may be presumed that mental sufferings will be as rare in an infant state of society as they are in the infancy of the individual.
That degenerate nervous function in young children is displayed, not in mental disorder, but in convulsions; that animals very seldom suffer from insanity; that insanity is of comparatively rare occurrence among savages; all these are circumstances that arise from one and the same fact—a want of development of the mental organization. There seems, therefore, good reason to believe that, with the progress of mental development through the ages, there is, as is the case with other forms of organic development, a correlative degeneration going on, and that an increase of insanity is a penalty which an increase of our present civilization necessarily pays. […]
“If we admit such an increase of insanity with our present civilization, we shall be at no loss to indicate causes for it. Some would no doubt easily find in over-population the prolific parent of this as of numerous other ills to mankind. In the fierce and active struggle for existence which there necessarily is where the claimants are many and the supplies are limited, and where the competition therefore is severe, the weakest must suffer, and some of them, breaking down into madness, fall by the wayside. As it is the distinctly manifested aim of mental development to bring man into more intimate, special, and complex relations with the rest of nature by means of patient investigations of physical laws, and a corresponding internal adaptation to external relations, it is no marvel, it appears indeed inevitable, that those who, either from inherited weakness or some other debilitating causes, have been rendered unequal to the struggle of life, should be ruthlessly crushed out as abortive beings in nature. They are the waste thrown up by the silent but strong current of progress; they are the weak crushed out by the strong in the mortal struggle for development; they are examples of decaying reason thrown off by vigorous mental growth, the energy of which they testify. Everywhere and always “to be weak is to be miserable.”
As civilization became complex, so did the human mind in having to adapt to it, and sometimes that hit a breaking point in individuals; or else what was previously considered normal behavior was now judged unacceptable, the latter explanation favored by Michel Foucault and Thomas Szasz (also see Bruce Levine’s article, Societies With Little Coercion Have Little Mental Illness). Whatever the explanation, something that once was severely abnormal had become normalized and, as it happened with insidious gradualism, few noticed, and most simply accepted what had changed: “Living amid an ongoing epidemic that nobody notices is surreal. It is like viewing a mighty river that has risen slowly over two centuries, imperceptibly claiming the surrounding land, millimeter by millimeter. . . . Humans adapt remarkably well to a disaster as long as the disaster occurs over a long period of time” (E. Fuller Torrey & Judy Miller, The Invisible Plague; also see Torrey’s Schizophrenia and Civilization); “At the end of the seventeenth century, insanity was of little significance and was little discussed. At the end of the eighteenth century, it was perceived as probably increasing and was of some concern. At the end of the nineteenth century, it was perceived as an epidemic and was a major concern. And at the end of the twentieth century, insanity was simply accepted as part of the fabric of life. It is a remarkable history.” All of these changes were mostly happening over generations and centuries, which left little if any living memory from when the changes began. Many thinkers like Torrey and Miller would be useful for fleshing this out, but here is a small sampling of authors and their books: Harold D. Foster’s What Really Causes Schizophrenia, Andrew Scull’s Madness in Civilization, Alain Ehrenberg’s The Weariness of the Self, etc.; and I shouldn’t ignore the growing field of Jaynesian scholarship, such as is found in the books put out by the Julian Jaynes Society.
Besides social stress and societal complexity, there was much else that was changing. For example, increasingly concentrated urbanization and close proximity with other species meant ever more spread of infectious diseases and parasites (consider Toxoplasma gondii from domesticated cats; see E. Fuller Torrey’s Beasts of the Earth). Also, the 18th century saw the beginnings of industrialization with the related rise of toxins (Dan Olmsted & Mark Blaxill, The Age of Autism: Mercury, Medicine, and a Man-Made Epidemic). That worsened over the following century. Industrialization also transformed the Western diet. Sugar, having been introduced in the early colonial era, was now affordable and available to the general population. And wheat, once hard to grow and limited to the rich, was also becoming a widespread ingredient, with new milling methods allowing highly refined white flour which made white bread popular (in the mid-1800s, Stanislas Tanchou did a statistical analysis that correlated the rate of grain consumption with the rate of cancer; and he observed that cancer, like insanity, spread along with civilization). For the first time in history, most Westerners were eating a very high-carb diet. This diet is addictive for a number of reasons, and it was combined with the introduction of addictive stimulants. As I argue, this profoundly altered neurocognitive functioning and behavior (The Agricultural Mind, “Yes, tea banished the fairies.”, Autism and the Upper Crust, & Diets and Systems).
This represents an ongoing project for me. And I’m in good company.
What does it mean to be in the world? This world, this society, what kind is it? And how does that affect us? Let me begin with the personal and put it in the context of family. Then I’ll broaden out from there.
I’ve often talked about my own set of related issues. In childhood, I was diagnosed with a learning disability. I’ve also suspected I might be on the autistic spectrum, which could relate to the learning disability, but that kind of thing wasn’t being diagnosed much when I was in school. Another label to throw out is specific language impairment, something I only recently read about — it may better fit my way of thinking than autistic spectrum disorder. After high school, specifically after a suicide attempt, I was diagnosed with depression and thought disorder, although my memory of the latter label is hazy and I’m not sure exactly what the diagnosis was. With all of this in mind, I’ve thought that some of it could have been caused by simple brain damage, since I played soccer from early childhood on. Research has found that children regularly head-butting soccer balls suffer repeated micro-concussions and micro-tears, which lead to brain inflammation and permanent brain damage, such as lower IQ (and could be a factor in depression as well). On the other hand, there is a clear possibility of genetic and/or epigenetic factors, or else some other kind of shared environmental conditions. There are simply too many overlapping issues in my family. It’s far from being limited to me.
My mother had difficulty learning when younger. One of her brothers had even more difficulty, probably with a learning disability as I have. My grandfather dropped out of school, not that such an action was too uncommon at the time. My mother’s side of the family has a ton of mood disorders and some alcoholism. In my immediate family, my oldest brother also seems like he could be somewhere on the autistic spectrum and, like our grandfather, has been drawn toward alcoholism. My other brother began stuttering in childhood and was diagnosed with anxiety disorder; interestingly, I stuttered for a time as well, but in my case it was blamed on my learning disability involving word recall. There is also a lot of depression in the family, both immediate and extended. Much of it has been undiagnosed and untreated, specifically in the older generations. But besides myself, both of my brothers have been on antidepressants, along with my father and an uncle. Now, my young niece and nephew are on antidepressants, that same niece is diagnosed with Asperger’s, the other even younger niece is probably also autistic and has been diagnosed with obsessive-compulsive disorder, and that is only what I know about.
I bring up these ailments among the generation following my own because they indicate something serious going on in the family or else in society as a whole. I do wonder what gets epigenetically passed on, with each generation worsening, and, even though my generation was the first to show the strongest symptoms, it may continue to get far worse before it gets better. And it may not have anything specifically to do with my family or our immediate environment, as many of these conditions are increasing among people all across this country and in many other countries as well. The point relevant here is that, whatever else may be going on in society, there definitely were factors specifically impacting my family that seemed to hit my brothers and me around the same time. I can understand my niece and nephew going on antidepressants after their parents divorced, but there was no obvious triggering condition for my brothers and me, other than moving into a different house in a different community.
Growing up and going into adulthood, my own issues always seemed worse, though, or maybe just more obvious. Everyone who has known me knows that I’ve struggled for decades with depression, and my learning disability adds to this. Neither of my brothers loved school, but neither of them struggled as I did; neither of them had delayed reading or went to a special education teacher. Certainly, neither of them nearly flunked out of a grade, something that would’ve happened to me in 7th grade if my family hadn’t moved. My brothers’ conditions were less severe, or at least the outward signs were easier to hide — or maybe they are simply more talented at acting normal and conforming to social norms (unlike me, they both finished college, got married, had kids, bought houses, and got respectable professional jobs; basically the American Dream). My brother with the anxiety and stuttering learned how to manage it fairly early on, and it never seemed to have a particularly negative effect on his social life, other than making him slightly less confident and much more conflict-avoidant, sometimes passive-aggressive. I’m the only one in the family who attempted suicide and was put in a psychiatric ward for my effort, the only one to spend years in severe depressive funks of dysfunction.
This caused me to think about my own problems as different, but in recent years I’ve increasingly looked at the commonalities. It occurs to me that there is an extremely odd coincidence that brings together all of these conditions, at least for my immediate family. My father developed depression in combination with anxiety during a stressful period of his life, after we moved because he got a new job. He began having moments of rapid heartbeat and it worried him. My dad isn’t an overly psychologically-oriented person, though not lacking in self-awareness, and so it is unsurprising that it took a physical symptom to get his attention. It was a mid-life crisis. Added to his stress were all the problems developing in his children. It felt like everything was going wrong.
Here is the strange part. Almost all of this started happening specifically when we moved into that new house, my second childhood home. It was a normal house, not that old. The only thing that stood out, as my father told me, was that the electricity usage was much higher than it was at the previous house, and no explanation for this was ever discovered. Both that house and the one we lived in before were in the Lower Midwest and so there were no obvious environmental differences. It only now struck me, in talking to my father again about it, that all of the family’s major neurocognitive and psychological issues began or worsened while living in that house.
As for my oldest brother, he had immense behavioral issues from childhood onward: he refused to do what he was told, wouldn’t complete homework, and became passive-aggressive. He was irritable, angry, and sullen. Also, he was sick all the time, had a constant runny nose, and was tired. It turned out he had allergies that went undiagnosed for a long time, but once treated the worst symptoms went away. The thing about allergies is that they are an immune condition in which the body attacks itself. During childhood, allergies can have a profound impact on human biology, including neurocognitive and psychological development, often leaving the individual with a condition of emotional sensitivity for the rest of their lives, as if the body is stuck in permanent defensive mode. This was a traumatic time for my brother and he has never recovered from it — still seething with unresolved anger and still blaming my parents for what happened almost a half century ago.
One of his allergies was determined to be mold, which makes sense considering the house was on a shady lot. This reminds me of how some molds can produce mycotoxins. When mold is growing in a house, it can create a toxic environment with numerous symptoms for the inhabitants that can be challenging to understand and connect. Unsurprisingly, research does show that air quality is important for health and cognitive functioning. Doctors aren’t trained in diagnosing environmental risk factors and that was even more true of doctors decades ago. It’s possible that something about that house was behind all of what was going on in my family. It could have been mold or it could have been some odd electromagnetic issue or else it could have been a combination of factors. This is what is called sick building syndrome.
Beyond buildings themselves, it can also involve something brought into a building. In one fascinating example, a scientific laboratory was known for a spooky feeling that put people ill at ease. After a fan was turned off, the strange atmosphere went away. It was determined that the fan had been vibrating at a frequency that affected the human nervous system or brain. There has been research into how vibrations and electromagnetic energy can cause stressful and disturbing symptoms (the human body is so sensitive that the brain can detect the weak magnetic field of the earth, something earlier thought to be impossible). Wind turbines, for example, can cause the eyeball to resonate in a way that leads people to see glimpses of things that aren't there (i.e., hallucinations). So, the cause isn't always limited to something in the building itself but can include what is in the nearby environment. I discuss all of this in an earlier post: Stress Is Real, As Are The Symptoms.
This goes along with the moral panic about violent crime during the last several decades of the 20th century, the early part of my life. It wasn't an unfounded moral panic, not mere mass hysteria. There really was a major spike in the rate of homicides (not to mention suicides, child abuse, bullying, gang activity, etc.). All across society, people were acting more aggressively (heck, aggression became idealized, as symbolized by the ruthless Wall Street broker who wins success through a social Darwinian battle of egoic will and no-holds-barred daring). Many of the perpetrators and victims of violence were in my generation. We were a bad generation, a new Lost Generation. It was the period when the Cold War was winding down and then finally ended. There was a sense of ennui in the air, as our collective purpose in fighting a shared enemy seemed less relevant and eventually disappeared altogether. But that was in the background and largely unacknowledged. Similar to the present mood, there was a vague sense of something being terribly wrong with society. Those caught up in the moral panic blamed it on all kinds of things: video games, mass media, moral decline, societal breakdown, loss of strict parenting, unsupervised latchkey kids, gangs, drugs, and on and on. With so many supposed causes, many solutions were sought, not only in different cities and states across the United States but also around the world: increased incarceration or increased rehabilitation programs, drug wars or drug decriminalization, stop-and-frisk or gun control, broken-windows policing or improved community relations, etc. No matter what was done or not done, violent crime went down over the decades in almost every population around the planet.
It turned out the strongest correlation was also one of the simplest. Lead toxicity drastically went up in the run-up to those violent decades and, depending on how quickly environmental regulations for lead control were implemented, lead toxicity dropped back down again. The decline of violent crime followed with a twenty-year lag in every society (twenty years being the time it takes for a new generation to reach adulthood). Even to this day, in any violent population from poor communities to prisons, you'll regularly find higher rates of lead toxicity. It was environmental all along, and yet it's so hard for us to grasp environmental conditions like this because they can't be directly felt or seen. Most people still don't know about lead toxicity, despite it being one of the most thoroughly researched areas of public health. So, there is not only sick building syndrome; entire societies can become sick. When my own family was going bonkers, it was right in the middle of this lead toxicity epidemic, and we were living right outside of industrial Chicago and, prior to that, in a factory town. I have wondered about lead exposure, since my generation saw the highest lead exposure rate of the 20th century and probably one of the highest since the Roman Empire started using lead water pipes, which some consider to have been a cause of its decline and fall.
Or consider high inequality, which can cause widespread bizarre and aggressive behavior, as it mimics the fear and anxiety of poverty even among those who aren't poor. Other social conditions have various kinds of effects, in some cases with repercussions that last for centuries. But in any of these examples, the actual cause is rarely understood by many people. The corporate media and politicians are generally uninterested in reporting on what scientists have discovered, assuming scientists can even get the funding to do the needed research. Large problems requiring probing thought and careful analysis don't sell advertising, nor do they sell political campaigns, and the corporations behind both would rather distract the public from public problems that would require public solutions, such as corporate regulations and higher taxation.
In our society, almost everything gets reduced to the individual. And so it is the individual who is blamed or treated or isolated, which is highly effective for social control. Put them in prison, give them a drug, scapegoat them in the media, or whatever. Anything so long as we don't have to think about the larger conditions that shape individuals. The reality is that psychological conditions are never merely psychological. In fact, there is no psychology separate and distinct from all else. The same is true for many physical diseases as well, such as autoimmune disorders. Most mental and physical health concerns are simply sets of loosely associated symptoms with thousands of possible causal and contributing factors. Our categorizing of diseases by which drugs treat them is simply a convenience for the drug companies. But if you look deeply enough, you'll typically find basic things implicated: gut dysbiosis, mitochondrial dysfunction, etc. Inflammation, for example, is found in numerous conditions, from depression and Alzheimer's to heart disease and arthritis (and consider psychosis as well) — the kinds of conditions that have been rapidly spreading over the past century. Much of it is often diet-related, since in this society we are all part of the same food system and so we are all hit by the same nutrient-deficient foods, the same macronutrient ratios, the same harmful hydrogenated and partially-hydrogenated vegetable oils and margarine, the same food additives, the same farm chemicals, the same plastic-derived hormone mimics, the same environmental toxins, etc. I've noticed significant changes in my own mood, energy, and focus since turning to a low-carb, high-fat diet based mostly on whole foods and traditional foods that are pasture-fed, organic, non-GMO, local, and in season — lessening the physiological stress load.
It is yet another factor that I see as related to my childhood difficulties, as diverse research has shown how powerful diet is in every aspect of health, especially neurocognitive health.
This makes it difficult for individuals in a hyper-individualistic society. We each feel isolated in trying to solve our supposedly separate problems, an impossible, Sisyphean task. And we rarely appreciate how much childhood development shapes us for the rest of our lives and how much environmental factors continue to influence us. We inherit so much from the world around us and the larger society we are thrown into, from our parents and the many generations before them. A society is built up slowly, with the relationship between causes and consequences often not easily seen and, even when noticed, rarely appreciated. We are born and we grow up in conditions that we simply take for granted as our reality. But those conditions need not be accepted fatalistically, for if we seek to understand them and embrace that understanding, we can change the very conditions that change us. This will require us first to get past our culture of blame and shame.
We shouldn’t personally identify with our health problems and struggles. We aren’t alone nor isolated. The world is continuously affecting us, as we affect others. The world is built on relationships, not just between humans and other species but involving everything around us — what some describe as embodied, embedded, enacted, and extended (we are hypersubjects among hyperobjects). The world that we inhabit, that world inhabits us, our bodies and minds. There is no world “out there” for there is no possible way for us to be outside the world. Everything going on around us shapes who we are, how we think and feel, and what we do — most importantly, shapes us as members of a society and as parts of a living biosphere, a system of systems all the way down. The personal is always the public, the individual always the collective, the human always the more than human.
* * *
When writing pieces like this, I should try to be more balanced. I focused solely on the harm that is caused by external factors. That is a rather lopsided assessment. But there is the other side of the equation implied in everything I wrote.
As higher inequality causes massive dysfunction and misery, greater equality brings immense benefit to society as a whole and each member within it. To understand this, look to high-trust cultures such as the well-functioning social democracies, with the Nordic countries being the most famous examples (The Nordic Theory of Everything by Anu Partanen). Or consider how, no matter your intelligence, you are better off in a society with a high average IQ than being the smartest person in a society with a low average IQ. Other people's intelligence has greater impact on your well-being and socioeconomic situation than does your own intelligence (see Hive Mind by Garett Jones).
This other side was partly pointed to in what I already wrote in the first section, even if not emphasized. For example, I pointed out how something so simple as regulating lead pollution could cause violent crime rates around the world to drop like a rock. And that was only looking at a small part of the picture. Besides impulsive behavior and aggression that can lead to violent crime, lead by itself is known to cause a wide array of problems: lowered IQ, ADHD, dyslexia, schizophrenia, Alzheimer’s, etc; and also general health issues, from asthma to cardiovascular disease. Lead is only one among many such serious toxins, with others including cadmium and mercury. The latter is strange. Mercury can actually increase IQ, even as it causes severe dysfunction in other ways. Toxoplasmosis also can do the same for the IQ of women, even as the opposite pattern is seen in men.
The point is that solving or even lessening major public health concerns can potentially benefit the entire society, maybe even transform society. We act fatalistically about these collective conditions, as if there is nothing to be done about inequality, whether the inequality of wealth, resources, and opportunities or the inequality of healthy food, clean water, and clean air. We created these problems and we can reverse them. It often doesn't require much effort, and the costs of taking action are far less than the costs of allowing these societal wounds to fester. It's not as if Americans lack the ability to tackle difficult challenges. Our history is filled with examples of public projects and programs that made vast improvements. Consider the sewer socialists who were the first to offer clean water to all citizens in their cities, something that, once demonstrated successful, was adopted by every other city in the United States (more or less adopted, if we ignore the continuing lead toxicity crisis).
There is no reason to give up in hopelessness, not quite yet. Let's try some basic improvements first and see what happens. We can wait for environmental collapse, if and when it comes, before we resign ourselves to fatalism. It's not a matter of whether we can absolutely save all of civilization from all suffering. Even if all we could accomplish were reducing some of the worst harm (e.g., aiming for less than half of the world's population falling victim to environmental sickness and mortality), I'd call it a wild success. Those whose lives were made better would consider it worthwhile. And who knows, maybe you or your children and grandchildren will be among those who benefit.
Let me make an argument about (hyper-)individualism, rigid egoic boundaries, and hence Jaynesian consciousness (on Julian Jaynes, see other posts). But I'll come at it from a less typical angle. I've been reading much about diet, nutrition, and health. With agriculture, the entire environment in which humans lived was fundamentally transformed: the rise of inequality and hierarchy, concentrated wealth and centralized power, not to mention the increase of parasites and diseases from urbanization and close cohabitation with farm animals (The World Around Us). We might be able to thank early agricultural societies, as an example, for introducing malaria to the world.
Maybe more importantly, there are significant links between what we eat and so much else: gut health, hormonal regulation, the immune system, and neurocognitive functioning. There are multiple pathways connecting the gut and the brain, one of which is direct: the nervous system, the immune system, the hormonal system, etc. Regarding the effect of diet and nutrition on immune response, including leaky gut, consider the lymphatic-brain link (Neuroscience News, Researchers Find Missing Link Between the Brain and Immune System), with the immune system as what some refer to as the “mobile mind” (Susan L. Prescott & Alan C. Logan, The Secret Life of Your Microbiome, pp. 64-7, pp. 249-50). As for a direct and near instantaneous gut-brain link, there was a recent discovery of the involvement of the vagus nerve, a possible explanation for the ‘gut sense’, with the key neurotransmitter glutamate modulating the rate of transmission in synaptic communication between enteroendocrine cells and vagal nerve neurons (Rich Haridy, Fast and hardwired: Gut-brain connection could lead to a “new sense”); this is implicated in “episodic and spatial working memory” that might assist in the relocation of food sources (Rich Haridy, Researchers reveal how disrupting gut-brain communication may affect learning and memory). The gut is sometimes called the second brain because it also has neuronal cells, but in evolutionary terms it is the first brain. As one example of a connection, many are beginning to refer to Alzheimer’s as type 3 diabetes, and dietary interventions have reversed symptoms in clinical studies. Also, gut microbes and parasites have been shown to influence our neurocognition and psychology, even altering personality traits and behavior, as with Toxoplasma gondii. [For more discussion, see Fasting, Calorie Restriction, and Ketosis.]
The gut-brain link explains why glutamate as a food additive might be so problematic for so many people. Much of the research has looked at other health areas, such as metabolism or liver functioning. It would make more sense to look at its effect on neurocognition, but as with many other molecules, many scientists have dismissed the possibility of glutamate passing the blood-brain barrier. Yet we now know that many things thought to be kept out of the brain do, under some conditions, get in. After all, the same mechanisms that cause leaky gut (e.g., inflammation) can also cause permeability in the brain. So, we know a mechanism by which this could happen. Evidence is pointing in this direction: “MSG acts on the glutamate receptors and releases neurotransmitters which play a vital role in normal physiological as well as pathological processes (Abdallah et al., 2014). Glutamate receptors have three groups of metabotropic receptors (mGluR) and four classes of ionotropic receptors (NMDA, AMPA, delta and kainite receptors). All of these receptor types are present across the central nervous system. They are especially numerous in the hypothalamus, hippocampus and amygdala, where they control autonomic and metabolic activities (Zhu and Gouaux, 2017). Results from both animal and human studies have demonstrated that administration of even the lowest dose of MSG has toxic effects. The average intake of MSG per day is estimated to be 0.3-1.0 g (Solomon et al., 2015). These doses potentially disrupt neurons and might have adverse effects on behaviour” (Kamal Niaz, Extensive use of monosodium glutamate: A threat to public health?).
One possibility to consider is the role of exorphins, which are addictive and can be blocked in the same way as opioids. Exorphin, in fact, means external morphine-like substance, in the way that endorphin means indwelling morphine-like substance. Exorphins are found in milk and wheat. Milk, in particular, stands out. Even though exorphins are found in other foods, it's been argued that they are insignificant because they theoretically can't pass through the gut barrier, much less the blood-brain barrier. Yet exorphins have been measured elsewhere in the human body. One explanation is gut permeability (related to permeability throughout the body), which can be caused by many factors such as stress but also by milk itself. The purpose of milk is to get nutrients into the calf, and this is done by widening the spaces in the gut lining to allow more nutrients through the protective barrier. Exorphins get in as well and create a pleasurable experience to motivate the calf to drink more. Along with exorphins, grains and dairy also contain dopaminergic peptides, and dopamine is the other major addictive substance. It feels good to consume dairy, as with wheat, whether you're a calf or a human, and so one wants more. Think about that the next time you pour milk over cereal.
Something else to consider is that low-carb diets can alter how the body and brain function (the word ‘alter’ is inaccurate, though, since in evolutionary terms ketosis would’ve been the normal state; rather, it is the modern high-carb diet that is altered from the biological norm). That is even more true if combined with intermittent fasting and the restricted eating times that would have been more common in the past (Past Views On One Meal A Day (OMAD)). Interestingly, this only applies to adults, since we know that babies remain in ketosis during breastfeeding, there is evidence that they are already in ketosis in utero, and humans apparently remain in ketosis well into the teen years: “It is fascinating to see that every single child, so far through age 16, is in ketosis even after a breakfast containing fruits and milk” (Angela A. Stanton, Children in Ketosis: The Feared Fuel). “I have yet to see a blood ketone test of a child anywhere in this age group that is not showing ketosis both before and after a meal” (Angela A. Stanton, If Ketosis Is Only a Fad, Why Are Our Kids in Ketosis?). Ketosis is not only safe but necessary for humans (“Is keto safe for kids?”). Taken together, earlier humans would have spent more time in ketosis (fat-burning mode, as opposed to glucose-burning), which dramatically affects human biology. The further one goes back in history, the greater the amount of time people probably spent in ketosis. One difference with ketosis is that, for many people, cravings and food addictions disappear. [For more discussion of this topic, see previous posts: Fasting, Calorie Restriction, and Ketosis, Ketogenic Diet and Neurocognitive Health, Is Ketosis Normal?, & “Is keto safe for kids?”.]
Ketosis is a non-addictive or maybe even anti-addictive state of mind (Francisco Ródenas-González, et al, Effects of ketosis on cocaine-induced reinstatement in male mice), similar to how certain psychedelics can be used to break addiction — one might argue there is a historical connection over the millennia between a decrease of psychedelic use and an increase of addictive substances: sugar, caffeine, nicotine, opium, etc (Diets and Systems, “Yes, tea banished the fairies.”, & Wealth, Power, and Addiction). Many hunter-gatherer tribes can go days without eating and it doesn't appear to bother them, as in Daniel Everett's account of the Piraha, and that is typical of ketosis — fasting forces one into ketosis, if one isn't already in it, and beginning a fast while in ketosis makes the fast even easier. The same was observed of Mongol warriors, who could ride and fight for days on end without tiring or needing to stop for food. What is also different about hunter-gatherers and similar traditional societies is how communal they are or were and how much more expansive their identities are in belonging to a group, the opposite of the addictive egoic mind of high-carb agricultural societies. Anthropological research shows how hunter-gatherers often have a sense of personal space that extends into the environment around them. What if that isn't merely cultural but has something to do with how their bodies and brains operate? Maybe diet even plays a role. Hold that thought for a moment.
Now go back to the two staples of the modern diet, grains and dairy. Besides exorphins and dopaminergic substances, they also have high levels of glutamate, as part of gluten and casein respectively. Dr. Katherine Reid is a biochemist whose daughter was diagnosed with severe autism. She went into research mode and experimented with supplementation and then diet. Many things seemed to help, but the greatest result came from restriction of dietary glutamate, a difficult challenge as it is a common food additive (see her TED talk here and another talk here or, for a short and informal video, look here). This requires going on a largely whole foods diet, that is to say eliminating processed foods (also see the Traditional Foods diet of Weston A. Price and Sally Fallon Morell, along with the GAPS diet of Natasha Campbell-McBride). But when dealing with a serious issue, it is worth the effort. Dr. Reid's daughter showed such immense improvement that she no longer qualified for her special needs school. After being on this diet for a while, she socialized and communicated normally like any other child, something she was previously incapable of. Keep in mind that glutamate, as mentioned above, is necessary as a foundational neurotransmitter in modulating communication between the gut and brain. But typically we only get small amounts of it, as opposed to the large doses found in the modern diet. In response to Reid's TED Talk, Georgia Ede commented that it's “[u]nclear if glutamate is main culprit, b/c a) little glutamate crosses blood-brain barrier; b) anything that triggers inflammation/oxidation (i.e. refined carbs) spikes brain glutamate production.” Either way, glutamate plays a powerful role in brain functioning. And no matter the exact line of causation, industrially processed foods in the modern diet would be involved.
By the way, an exacerbating factor might be mercury, in its relation to anxiety and adrenal fatigue, as it ramps up the fight-or-flight system by over-sensitizing the glutamate pathway — could this be involved in conditions like autism where emotional sensitivity is a symptom? The simultaneous increase of mercury and glutamate in the modern world demonstrates how industrialization can push the effects of the agricultural diet to ever further extremes.
Glutamate is also implicated in schizophrenia: “The most intriguing evidence came when the researchers gave germ-free mice fecal transplants from the schizophrenic patients. They found that “the mice behaved in a way that is reminiscent of the behavior of people with schizophrenia,” said Julio Licinio, who co-led the new work with Wong, his research partner and spouse. Mice given fecal transplants from healthy controls behaved normally. “The brains of the animals given microbes from patients with schizophrenia also showed changes in glutamate, a neurotransmitter that is thought to be dysregulated in schizophrenia,” he added. The discovery shows how altering the gut can influence an animal’s behavior” (Roni Dengler, Researchers Find Further Evidence That Schizophrenia is Connected to Our Guts; reporting on Peng Zheng et al, The gut microbiome from patients with schizophrenia modulates the glutamate-glutamine-GABA cycle and schizophrenia-relevant behaviors in mice, Science Advances journal). And glutamate is involved in other conditions as well, such as in relation to GABA: “But how do microbes in the gut affect [epileptic] seizures that occur in the brain? Researchers found that the microbe-mediated effects of the Ketogenic Diet decreased levels of enzymes required to produce the excitatory neurotransmitter glutamate. In turn, this increased the relative abundance of the inhibitory neurotransmitter GABA. Taken together, these results show that the microbe-mediated effects of the Ketogenic Diet have a direct effect on neural activity, further strengthening support for the emerging concept of the ‘gut-brain’ axis.” (Jason Bush, Important Ketogenic Diet Benefit is Dependent on the Gut Microbiome). Glutamate is one neurotransmitter among many that can be affected in a similar manner; e.g., serotonin is also produced in the gut.
That reminds me of propionate, a short-chain fatty acid and the conjugate base of propionic acid. It is another substance normally taken in at a low level. Certain foods, including grains and dairy, contain it. The problem is that, as a useful preservative, it has been generously added to the food supply. Research on rodents shows that injecting them with propionate causes autistic-like behaviors. And other rodent studies show how this stunts learning ability and causes repetitive behavior (both related to the autistic demand for the familiar), as too much propionate entrenches mental patterns through the mechanism that gut microbes use to communicate to the brain how to return to a needed food source, similar to the related function of glutamate. A recent study shows that propionate alters not only brain functioning but brain development (L.S. Abdelli et al, Propionic Acid Induces Gliosis and Neuro-inflammation through Modulation of PTEN/AKT Pathway in Autism Spectrum Disorder), and this is a growing field of research (e.g., Hyosun Choi, Propionic acid induces dendritic spine loss by MAPK/ERK signaling and dysregulation of autophagic flux). As reported by Suhtling Wong-Vienneau at the University of Central Florida, “when fetal-derived neural stem cells are exposed to high levels of Propionic Acid (PPA), an additive commonly found in processed foods, it decreases neuron development” (Processed Foods May Hold Key to Rise in Autism). This study “is the first to discover the molecular link between elevated levels of PPA, proliferation of glial cells, disturbed neural circuitry and autism.”
The impact is profound and permanent — Pedersen offers the details: “In the lab, the scientists discovered that exposing neural stem cells to excessive PPA damages brain cells in several ways: First, the acid disrupts the natural balance between brain cells by reducing the number of neurons and over-producing glial cells. And although glial cells help develop and protect neuron function, too many glia cells disturb connectivity between neurons. They also cause inflammation, which has been noted in the brains of autistic children. In addition, excessive amounts of the acid shorten and damage pathways that neurons use to communicate with the rest of the body. This combination of reduced neurons and damaged pathways hinder the brain’s ability to communicate, resulting in behaviors that are often found in children with autism, including repetitive behavior, mobility issues and inability to interact with others.” According to this study, “too much PPA also damaged the molecular pathways that normally enable neurons to send information to the rest of the body. The researchers suggest that such disruption in the brain’s ability to communicate may explain ASD-related characteristics such as repetitive behavior and difficulties with social interaction” (Ana Sandoiu, Could processed foods explain why autism is on the rise?).
So, the autistic brain develops according to higher levels of propionate and maybe becomes accustomed to it. A state of dysfunction becomes what feels normal. Propionate causes inflammation and, as Dr. Ede points out, “anything that triggers inflammation/oxidation (i.e. refined carbs) spikes brain glutamate production”. High levels of propionate and glutamate become part of the state of mind the autistic becomes identified with. It all links together. Autistics, along with cravings for foods containing propionate (and glutamate), tend to have larger populations of a particular gut microbe that produces propionate. In killing microbes, this might be why antibiotics can help with autism. But in the case of depression, gut issues are associated instead with the lack of certain microbes that produce butyrate, another important substance that also is found in certain foods (Mireia Valles-Colomer et al, The neuroactive potential of the human gut microbiota in quality of life and depression). Depending on the specific gut dysbiosis, diverse neurocognitive conditions can result. And in affecting the microbiome, changes in autism can be achieved through a ketogenic diet, temporarily reducing the microbiome (similar to an antibiotic) — this presumably takes care of the problematic microbes and readjusts the gut from dysbiosis to a healthier balance. Also, ketosis would reduce the inflammation that is associated with glutamate production.
As with propionate, exorphins injected into rats will likewise elicit autistic-like behaviors. By two different pathways, the body produces exorphins and propionate from the consumption of grains and dairy, the former from the breakdown of proteins and the latter produced by gut bacteria in the breakdown of some grains and refined carbohydrates (combined with the propionate used as a food additive; and also, at least in rodents, artificial sweeteners increase propionate levels). [For related points and further discussion, see section below about vitamin B1 (thiamine/thiamin). Also covered are other B vitamins and nutrients.] This is part of the explanation for why many autistics have responded well to ketosis from carbohydrate restriction, specifically paleo diets that eliminate both wheat and dairy, but ketones themselves play a role in using the same transporters as propionate and so block their buildup in cells and, of course, ketones offer a different energy source for cells as a replacement for glucose which alters how cells function, specifically neurocognitive functioning and its attendant psychological effects.
There are some other factors to consider as well. With agriculture came a diet high in starchy carbohydrates and sugar. This inevitably leads to increased metabolic syndrome, including diabetes. And diabetes in pregnant women is associated with autism and attention deficit disorder in children. “Maternal diabetes, if not well treated, which means hyperglycemia in utero, that increases uterine inflammation, oxidative stress and hypoxia and may alter gene expression,” explained Anny H. Xiang. “This can disrupt fetal brain development, increasing the risk for neural behavior disorders, such as autism” (Maternal HbA1c influences autism risk in offspring). By the way, other factors, such as consuming more seed oils and fewer B vitamins, also contribute to metabolic syndrome and altered gene expression, including changes inherited epigenetically, not to mention mutagenic changes to the genes themselves (Catherine Shanahan, Deep Nutrition). The increase of diabetes, not mere increase of diagnosis, could partly explain the greater prevalence of autism over time. Grain surpluses only became available in the 1800s, around the time when refined flour and sugar began to become common. It wasn't until the following century that carbohydrates finally overtook animal foods as the mainstay of the diet, specifically in terms of what is most regularly eaten throughout the day in both meals and snacks — a constant influx of glucose into the system.
A further contributing factor in modern agriculture is that of pesticides, also associated with autism. Consider DDE, a product of DDT, which has been banned for decades but apparently still lingers in the environment. “The odds of autism among children were increased, by 32 percent, in mothers whose DDE levels were high (high was, comparatively, 75th percentile or greater),” one study found (Aditi Vyas & Richa Kalra, Long lingering pesticides may increase risk for autism: Study). “Researchers also found,” the article reports, “that the odds of having children on the autism spectrum who also had an intellectual disability were increased more than two-fold when the mother’s DDE levels were high.” A different study showed a broader effect in terms of 11 pesticides still in use:
“They found a 10 percent or more increase in rates of autism spectrum disorder, or ASD, in children whose mothers lived during pregnancy within about a mile and a quarter of a highly sprayed area. The rates varied depending on the specific pesticide sprayed, and glyphosate was associated with a 16 percent increase. Rates of autism spectrum disorders combined with intellectual disability increased by even more, about 30 percent. Exposure after birth, in the first year of life, showed the most dramatic impact, with rates of ASD with intellectual disability increasing by 50 percent on average for children who lived within the mile-and-a-quarter range. Those who lived near glyphosate spraying showed the most increased risk, at 60 percent” (Nicole Ferox, It’s Personal: Pesticide Exposures Come at a Cost).
An additional component to consider is plant anti-nutrients. For example, oxalates may be involved in autism spectrum disorder (Jerzy Konstantynowicz et al, A potential pathogenic role of oxalate in autism). With the end of the Ice Age, vegetation became more common and some of the animal foods less common. That increased plant foods as part of the human diet. But even then it was limited and seasonal. The dying off of the megafauna was a greater blow, as it forced humans to rely both on less desirable lean meats from smaller prey and on more plant foods. And of course, the agricultural revolution followed shortly after that with its devastating effects. None of these changes were kind to human health and development, as the evidence shows in the human bones and mummies left behind. Yet they were minor compared to what was to come. The increase of plant foods was a slow process over millennia. All the way up to the 19th century, Americans were eating severely restricted amounts of plant foods and instead depending on fatty animal foods, from pasture-raised butter and lard to wild-caught fish and deer — the abundance of wilderness and pasturage made such foods widely available, convenient, and cheap, besides being delicious and nutritious. Grain crops and vegetable gardens were simply too hard to grow, as described by Nina Teicholz in The Big Fat Surprise (see quoted passage at Malnourished Americans).
While living at Walden Pond, Henry David Thoreau maintained a garden of beans, peas, corn, turnips, and potatoes, a plant-based diet (Jennie Richards, Henry David Thoreau Advocated “Leaving Off Eating Animals”) that surely contributed to his declining health from tuberculosis by weakening his immune system through deficiency in the fat-soluble vitamins, although his nearby mother occasionally made him a fruit pie that would’ve had nutritious lard in the crust. One doctor pointed to the “lack of quality protein and excess of carbohydrate foods in Thoreau’s diet as probable causes behind his infection” (Dr. Benjamin P. Sandler, Thoreau, Pulmonary Tuberculosis and Dietary Deficiency). Likewise, Franz Kafka, who became a vegetarian, also died from tuberculosis (Old Debates Forgotten). Weston A. Price observed the link between deficiency of fat-soluble vitamins and high rates of tuberculosis, not that one causes the other but that nutritious diet is key to a strong immune system (Dr. Kendrick On Vaccines & Moral Panic and Physical Degeneration). Besides, eliminating fatty animal foods typically means increasing starchy and sugary plant foods, which lessens the anti-inflammatory response from ketosis and autophagy and hence the capacity for healing.
The connection of physical health to mental health, another of Price’s insights, should be re-emphasized. Interestingly, Kafka suffered from psychological, presumably neurocognitive, issues long before tubercular symptoms showed up, and he came to see the link between them as causal, although he saw it the other way around, as psychosomatic. Even more intriguing, Kafka suggests that, as Sander L. Gilman put it, “all urban dwellers are tubercular,” as if it were a nervous condition of modern civilization akin to what used to be called neurasthenia (about Kafka’s case, see Sander L. Gilman’s Franz Kafka, the Jewish Patient). He even uses the popular economic model of energy and health: “For secretly I don’t believe this illness to be tuberculosis, at least primarily tuberculosis, but rather a sign of general bankruptcy” (for context, see The Crisis of Identity). Speaking of the eugenic, hygienic, sociological, and aesthetic, Gilman further notes that, “For Kafka, that possibility is linked to the notion that illness and creativity are linked, that tuberculars are also creative geniuses,” indicating an interpretation of neurasthenia among the intellectual class, an interpretation that was more common in the United States than in Europe.
The upper classes were deemed the most civilized and so it was expected they’d suffer the most from the diseases of civilization, and indeed the upper classes fully adopted the modern industrial diet before the rest of the population. In contrast, while staying at a sanatorium (a combination of the rest cure and the west cure), Kafka stated that, “I am firmly convinced, now that I have been living here among consumptives, that healthy people run no danger of infection. Here, however, the healthy are only the woodcutters in the forest and the girls in the kitchen (who will simply pick uneaten food from the plates of patients and eat it—patients whom I shrink from sitting opposite) but not a single person from our town circles,” from a letter to Max Brod on March 11, 1921. It should be pointed out that tuberculosis sanatoriums were typically located in rural mountain areas where local populations were known to be healthy, the kinds of communities Weston A. Price studied in the 1930s; a similar reason for why in America tuberculosis patients were sometimes sent west (the west cure) for clean air and a healthy lifestyle, probably with an accompanying change toward a rural diet, with more wild-caught animal foods higher in omega-3s and lower in omega-6s, not to mention higher in fat-soluble vitamins.
The historical context of public health overlapped with racial hygiene, and indeed some of Kafka’s family members and lovers would later die at the hands of Nazis. Eugenicists were obsessed with body types in relation to supposed racial features, but non-eugenicists also accepted that physical structure was useful information to be considered; and this insight is supported, if not the eugenicist ideology, by the more recent scientific measurements of stunted bone development in the early agricultural societies. Hermann Brehmer, a founder of the sanatorium movement, asserted that a particular body type (habitus phthisicus, equivalent to habitus asthenicus) was associated with tuberculosis, the kind of thinking that Weston A. Price would pick up in his observations of physical development, although Price saw the explanation as dietary and not racial. The other difference is that Price saw “body type” not as a cause but as a symptom of ill health, and so the focus on re-forming the body (through lung exercises, orthopedic corsets, etc) to improve health was not the most helpful advice. On the other hand, if re-forming the body involved something like the west cure in changing the entire lifestyle and environmental conditions, it might work by way of changing other factors of health and, along with diet, exercise and sunshine and clean air and water would definitely improve immune function, lower inflammation, and much else (sanatoriums prioritized such things as getting plenty of sunshine and dairy, both of which would increase vitamin D3 that is necessary for immunological health). Improvements in physical health, of course, would go hand in hand with those of mental health. An example of this is that winter conceptions, when vitamin D3 production is low, result in higher rates later on of childhood learning disabilities and other problems in neurocognitive development (BBC, Learning difficulties linked with winter conception).
As a side note, physical development was tied up with gender issues and gender roles, especially for boys in becoming men. A fear arose that the newer generations of urban youth were failing to develop properly, physically and mentally, morally and socially. Fitness became a central concern for the civilizational project and it was feared that we modern humans might fail this challenge. Most galling of all was ‘feminization’, not only the loss of an athletic build but the loss of something in the masculine psychology, involving the depression and anxiety, sensitivity and weakness of conditions like neurasthenia while also overlapping with tubercular consumption. Some of this could be projected onto racial inferiority, far from being limited to the distinction between those of European descent and all others, for it also was used to divide humanity up in numerous ways (German vs French, English vs Irish, North vs South, rich vs poor, Protestants vs Catholics, Christians vs Jews, etc).
Gender norms were applied to all aspects of health and development, including perceived moral character and personality disposition. This is a danger to the individual, but also potentially a danger to society. “Here we can return for the moment to the notion that the male Jew is feminized like the male tubercular. The tubercular’s progressive feminization begins in the middle of the nineteenth century with the introduction of the term: infemminire, to feminize, which is supposedly a result of male castration. By the 1870s, the term is used to describe the feminisme of the male through the effects of other disease, such as tuberculosis. Henry Meige, at the Salpetriere, saw this feminization as an atavism, in which the male returns to the level of the “sexless” child. Feminization is therefore a loss, which can cause masturbation and thus illness in certain predisposed individuals. It is also the result of actual castration or its physiological equivalent, such as an intensely debilitating illness like tuberculosis, which reshapes the body” (Sander L. Gilman, Franz Kafka, the Jewish Patient). There was a fear that all of civilization was becoming effeminate, especially among the upper classes who were expected to be the leaders. That was the entire framework of neurasthenia-obsessed rhetoric in late nineteenth to early twentieth century America. The newer generations of boys, the argument went, were somehow deficient and inadequate. Looking back on that period, there is no doubt that physical and mental illness was increasing, while bone structure was becoming underdeveloped in a way one could perceive as effeminate; such bone development problems are particularly obvious among children raised on plant-based diets, especially veganism and near-vegan vegetarianism, but also anyone on a diet lacking nutritious animal foods.
Let me make one odd connection before moving on. The Seventh Day Adventist Dr. John Harvey Kellogg believed masturbation was both a moral sin and a cause of ill health but also a sign of inferiority, and his advocacy of a high-fiber vegan diet including breakfast cereals was based on the Galenic theory that such foods decreased libido. Dr. Kellogg was also an influential eugenicist and operated a famous sanatorium. He wasn’t alone in blaming masturbation for disease. The British Dr. D. G. Macleod Munro treated masturbation as a contributing factor for tuberculosis: “the advent of the sexual appetite in normal adolescence has a profound effect upon the organism, and in many cases when uncontrolled, leads to excess about the age when tuberculosis most frequently delivers its first open assault upon the body,” as quoted by Gilman. This related to the ‘bankruptcy’ Kafka mentioned, the idea that one could waste one’s energy reserves. Maybe there is an insight in this belief, despite it being misguided and misinterpreted. The source of the ‘bankruptcy’ may have in part been a nutritional debt, and certainly a high-fiber vegan diet would not refill one’s energy and nutrient reserves as an investment in one’s health — hence, the public health risk of what one might call a hyper-agricultural diet as exemplified by the USDA dietary recommendations and corporate-backed dietary campaigns like EAT-Lancet (Dietary Dictocrats of EAT-Lancet; & Corporate Veganism), though course may finally be reversing (Slow, Quiet, and Reluctant Changes to Official Dietary Guidelines; American Diabetes Association Changes Its Tune; & Corporate Media Slowly Catching Up With Nutritional Studies).
So far, my focus has mostly been on what we ingest or are otherwise exposed to because of agriculture and the food system, in general and more specifically in industrialized society with its refined, processed, and adulterated foods, largely from plants. But the other side of the picture is what our diet is lacking, what we are deficient in. As I touched upon directly above, an agricultural diet hasn’t only increased certain foods and substances but simultaneously decreased others. What promoted optimal health throughout human evolution has, in many cases, been displaced or interrupted. Agriculture is highly destructive and has depleted the nutrient levels in the soil (Carnivore Is Vegan) and, along with this, even animal foods as part of the agricultural system are similarly depleted of nutrients as compared to animal foods from pasture or free-range. For example, fat-soluble vitamins (true vitamin A as retinol, vitamin D3, vitamin K2 not to be confused with K1, and vitamin E complex) are not found in plant foods and are found in far lower concentrations in foods from factory-farmed animals or from animals grazing on soil impoverished by agriculture, especially under the threat of erosion and desertification. Rhonda Patrick points to deficiencies of vitamin D3, EPA, and DHA, and hence insufficient serotonin levels, as being causally linked to autism, ADHD, bipolar disorder, schizophrenia, etc (TheIHMC, Rhonda Patrick on Diet-Gene Interactions, Epigenetics, the Vitamin D-Serotonin Link and DNA Damage). She also discusses inflammation, epigenetics, and DNA damage, which relates to the work of others (Dr. Catherine Shanahan On Dietary Epigenetics and Mutations).
One of the biggest changes with agriculture was the decrease of fatty animal foods that were nutrient-dense and nutrient-bioavailable. It’s in the fat that the fat-soluble vitamins are found, and fat is necessary for their absorption (hence, fat-soluble); and these key nutrients relate to almost everything else, such as the minerals calcium and magnesium that are also found in animal foods (Calcium: Nutrient Combination and Ratios); the relationship of seafood with the balance of sodium, magnesium, and potassium is central (On Salt: Sodium, Trace Minerals, and Electrolytes) and indeed populations that eat more seafood live longer. These animal foods used to hold the prized position in the human diet and the earlier hominid diet as well, as part of our evolutionary inheritance from millions of years of adaptation to a world where fatty animals once were abundant (J. Tyler Faith, John Rowan & Andrew Du, Early hominins evolved within non-analog ecosystems). That was definitely true in the paleolithic before the megafauna die-off, but even to this day hunter-gatherers, when they have access to traditional territory and prey, will seek out the fattest animals available, entirely ignoring lean animals because rabbit sickness is worse than hunger (humans can always fast for many days or weeks, if necessary, and as long as they have reserves of body fat they can remain perfectly healthy).
We’ve already discussed autism in terms of many other dietary factors, especially excesses of otherwise essential nutrients like glutamate, propionate, and butyrate. But like most modern people, those on the autistic spectrum can be nutritionally deficient in other ways and unsurprisingly that would involve fat-soluble vitamins. In a fascinating discussion in one of her more recent books, Nourishing Fats, Sally Fallon Morell offers a hypothesis of an indirect causal mechanism. First off, she notes that, “Dr. Mary Megson of Richmond, Virginia, had noticed that night blindness and thyroid conditions—both signs of vitamin A deficiency—were common in family members of autistic children” (p. 156), indicating a probable deficiency of the same in the affected child. This might be why supplementing cod liver oil, high in true vitamin A, helps with autistic issues. “As Dr. Megson explains, in genetically predisposed children, autism is linked to a G-alpha protein defect. G-alpha proteins form one of the most prevalent signaling systems in our cells, regulating processes as diverse as cell growth, hormonal regulation and sensory perception—like seeing” (p. 157).
The sensory issues common among autistics may seem to be neurocognitive in origin, but the perceptual and psychological effects may be secondary to the real cause in altered eye development. Because the rods in their eyes don’t function properly, they have distorted vision that is experienced as a blurry and divided visual field, like a magic-eye puzzle, that takes constant effort in making coherent sense of the world around them. “According to Megson, the blocked visual pathways explain why children on the autism spectrum “melt down” when objects are moved or when you clean up their lines or piles of toys sorted by color. They work hard to piece together their world; it frightens and overwhelms them when the world as they are able to see it changes. It also might explain why children on the autism spectrum spend time organizing things so carefully. It’s the only way they can “see” what’s out there” (p. 157). The rods at the edge of their vision work better and so they prefer to not look directly at people.
The vitamin A link is not merely speculative. In other aspects seen in autism, studies have sussed out some of the proven and possible factors and mechanisms: “Decreased vitamin A, and its retinoic acid metabolites, lead to a decrease in CD38 and associated changes that underpin a wide array of data on the biological underpinnings of ASD, including decreased oxytocin, with relevance both prenatally and in the gut. Decreased sirtuins, poly-ADP ribose polymerase-driven decreases in nicotinamide adenine dinucleotide (NAD+), hyperserotonemia, decreased monoamine oxidase, alterations in 14-3-3 proteins, microRNA alterations, dysregulated aryl hydrocarbon receptor activity, suboptimal mitochondria functioning, and decreases in the melatonergic pathways are intimately linked to this. Many of the above processes may be modulating, or mediated by, alterations in mitochondria functioning. Other bodies of data associated with ASD may also be incorporated within these basic processes, including how ASD risk factors such as maternal obesity and preeclampsia, as well as more general prenatal stressors, modulate the likelihood of offspring ASD” (Michael Maes et al, Integrating Autism Spectrum Disorder Pathophysiology: Mitochondria, Vitamin A, CD38, Oxytocin, Serotonin and Melatonergic Alterations in the Placenta and Gut). By the way, some of the pathways involved are often discussed in terms of longevity, which indicates autistics might be at risk for a shortened lifespan. Autism, indeed, is comorbid with numerous other health issues and genetic syndromes. So autism isn’t just an atypical expression on a healthy spectrum of neurodiversity.
The agricultural diet, especially in its industrially-processed variety, has a powerful impact on numerous systems simultaneously, as autism demonstrates. As with most other health conditions, there is unlikely to be any single causal factor or mechanism. We can take this a step further. With historical changes in diet, it wasn’t only fat-soluble vitamins that were lost. Humans traditionally ate nose-to-tail and this brought with it a plethora of nutrients, even some thought of as being only sourced from plant foods. In its raw or lightly cooked form, meat has more than enough vitamin C for a low-carb diet; whereas a high-carb diet, since glucose competes with vitamin C, requires higher intake of this antioxidant, which can lead to deficiencies at levels that otherwise would be adequate (Sailors’ Rations, a High-Carb Diet). Also, consider that prebiotics can be found in animal foods as well, and animal-based prebiotics likely feed a very different kind of microbiome that could shift so much else in the body, such as neurotransmitter production: “I found this list of prebiotic foods that were non-carbohydrate that included cellulose, cartilage, collagen, fructooligosaccharides, glucosamine, rabbit bone, hair, skin, glucose. There’s a bunch of things that are all — there’s also casein. But these tend to be some of the foods that actually have some of the highest prebiotic content” (from Vanessa Spina, as quoted in Fiber or Not: Short-Chain Fatty Acids and the Microbiome).
Let’s briefly mention fat-soluble vitamins again in making a point about other animal-based nutrients. Fat-soluble vitamins, similar to ketosis and autophagy, have a profound effect on human biological functioning, including that of the mind (see the work of Weston A. Price as discussed in Health From Generation To Generation; also see the work of those described in Physical Health, Mental Health). In many ways, they are closer to hormones than mere nutrients, as they orchestrate entire systems in the body and how other nutrients get used, particularly seen with vitamin K2, which Weston A. Price discovered and called “Activator X” (only found in animal and fermented foods, not in whole or industrially-processed plant foods). I bring this up because some other animal-based nutrients play a similarly important role. Consider glycine, the main amino acid in collagen. It is available in connective tissues and can be obtained through soups and broths made from bones, skin, ligaments, cartilage, and tendons. Glycine is right up there with the fat-soluble vitamins in being central to numerous systems, processes, and organs.
As I’ve already discussed glutamate at great length, let me further that discussion by pointing out a key link. “Glycine is found in the spinal cord and brainstem where it acts as an inhibitory neurotransmitter via its own system of receptors,” writes Afifah Hamilton. “Glycine receptors are ubiquitous throughout the nervous system and play important roles during brain development. [Ito, 2016] Glycine also interacts with the glutaminergic neurotransmission system via NMDA receptors, where both glycine and glutamate are required, again, chiefly exerting inhibitory effects” (10 Reasons To Supplement With Glycine). Hamilton elucidates the dozens of roles played by this master nutrient and the diverse conditions that follow from its deprivation or insufficiency — it’s implicated in obsessive compulsive disorder, schizophrenia, and alcohol use disorder, along with much else such as metabolic syndrome. But its role in glutamate metabolism really stands out for this discussion. “Glutathione is synthesised,” Hamilton further explains, “from the amino acids glutamate, cysteine, and glycine, but studies have shown that the rate of synthesis is primarily determined by levels of glycine in the tissue. If there is insufficient glycine available the glutathione precursor molecules are excreted in the urine. Vegetarians excrete 80% more of these precursors than their omnivore counterparts indicating a more limited ability to complete the synthesis process.” Did you catch what she is saying there? Autistics already have too much glutamate and, if they are deficient in glycine, they won’t be able to convert glutamate into the important glutathione. When the body is overwhelmed with unused glutamate, it does what it can to eliminate it, but when constantly flooded with high-glutamate intake it can’t keep up. The excess glutamate then wreaks havoc on neurocognitive functioning.
The whole mess of the agricultural diet, specifically in its modern industrialized form, has been a constant onslaught taxing our bodies and minds. And the consequences are worsening with each generation. What stands out to me about autism, in particular, is how isolating it is. The repetitive behavior and focus on objects to the exclusion of human relationships resonates with how addiction isolates the individual. As with other conditions influenced by diet (schizophrenia, ADHD, etc), both autism and addiction block normal human relating in creating an obsessive mindset that, in the most extreme forms, blocks out all else. I wonder if all of us moderns are simply expressing milder varieties of this biological and neurological phenomenon (Afifah Hamilton, Why No One Should Eat Grains. Part 3: Ten More Reasons to Avoid Wheat). And this might be the underpinning of our hyper-individualistic society, with the earliest precursors showing up in the Axial Age following what Julian Jaynes hypothesized as the breakdown of the much more other-oriented bicameral mind. What if our egoic consciousness with its rigid psychological boundaries is the result of our food system, as part of the civilizational project of mass agriculture?
* * *
Mongolian Diet and Fasting:
“Heaven grew weary of the excessive pride and luxury of China… I am from the Barbaric North. I wear the same clothing and eat the same food as the cowherds and horse-herders. We make the same sacrifices and we share our riches. I look upon the nation as a new-born child and I care for my soldiers as though they were my brothers.”
~Genghis Khan, letter of invitation to Ch’ang Ch’un
For anyone who is curious to learn more, the original point of interest was a quote by Jack Weatherford in his book Genghis Khan and the Making of the Modern World. He wrote that, “The Chinese noted with surprise and disgust the ability of the Mongol warriors to survive on little food and water for long periods; according to one, the entire army could camp without a single puff of smoke since they needed no fires to cook. Compared to the Jurched soldiers, the Mongols were much healthier and stronger. The Mongols consumed a steady diet of meat, milk, yogurt, and other dairy products, and they fought men who lived on gruel made from various grains. The grain diet of the peasant warriors stunted their bones, rotted their teeth, and left them weak and prone to disease. In contrast, the poorest Mongol soldier ate mostly protein, thereby giving him strong teeth and bones. Unlike the Jurched soldiers, who were dependent on a heavy carbohydrate diet, the Mongols could more easily go a day or two without food.” By the way, that biography was written by an anthropologist who lived among and studied the Mongols for years. It is about the historical Mongols, but filtered through the direct experience of still existing Mongol people who have maintained a traditional diet and lifestyle longer than most other populations.
As nomadic herders living on arid grasslands with no option of farming, they had limited access to plant foods from foraging, and so their diet was more easily suited to horseback warfare, even over long distances when food stores ran out. That meant, when they had nothing else, on “occasion they will sustain themselves on the blood of their horses, opening a vein and letting the blood jet into their mouths, drinking till they have had enough, and then staunching it.” They could go on “quite ten days like this,” according to Marco Polo’s observations. “It wasn’t much,” explained Logan Nye, “but it allowed them to cross the grasses to the west and hit Russia and additional empires. […] On the even darker side, they also allegedly ate human flesh when necessary. Even killing the attached human if horses and already-dead people were in short supply” (How Mongol hordes drank horse blood and liquor to kill you). The claim of their situational cannibalism came from the writings of Giovanni da Pian del Carpini, who noted they’d eat anything, even lice. The specifics of what they ate were also determined by season: “Generally, the Mongols ate dairy in the summer, and meat and animal fat in the winter, when they needed the protein for energy and the fat to help keep them warm in the cold winters. In the summers, their animals produced a lot of milk so they switched the emphasis from meat to milk products” (from History on the Net, What Did the Mongols Eat?). In any case, animal foods were always the staple.
By the way, some have wondered how long humans have been consuming dairy, since the gene for lactose tolerance is fairly recent. In fact, “a great many Mongolians, both today and in Genghis Khan’s time are lactose intolerant. Fermentation breaks down the lactose, removing it almost entirely, making it entirely drinkable to the Mongols” (from Exploring History, Food That Conquered The World: The Mongols — Nomads And Chaos). Besides mare’s milk fermented into alcohol, they had a wide variety of other cultured dairy and aged cheese. Even then, much of the dairy would contain significant amounts of lactose. A better explanation is that many of the dairy-loving microbes have been incorporated into the Mongolian microbiome, and these microbes in combination as a microbial ecosystem do some combination of: digest lactose, moderate the effects of lactose intolerance, and/or somehow alter the body’s response to lactose. But looking at a single microbe might not tell us much. “Despite the dairy diversity she saw,” wrote Andrew Curry, “an estimated 95 percent of Mongolians are, genetically speaking, lactose intolerant. Yet, in the frost-free summer months, she believes they may be getting up to half their calories from milk products. […] Rather than a previously undiscovered strain of microbes, it might be a complex web of organisms and practices—the lovingly maintained starters, the milk-soaked felt of the yurts, the gut flora of individual herders, the way they stir their barrels of airag—that makes the Mongolian love affair with so many dairy products possible” (The answer to lactose intolerance might be in Mongolia).
Here is what is interesting. Based on study of ancient corpses, it’s been determined that lactose intolerant people in this region have been including dairy in their diet for 5,000 years. It’s not limited to the challenge of lactose intolerant people depending on a food staple that is abundant in lactose. The Mongolian population also has high rates of carrying the APOE4 gene variation that can make problematic a diet high in saturated fat (Helena Svobodová et al, Apolipoprotein E gene polymorphism in the Mongolian population). That is a significant detail, considering dairy has a higher amount of saturated fat than any other food. These people should be keeling over with nearly every disease known to humanity, particularly as they commonly drink plenty of alcohol and smoke tobacco (as was likewise true of the heart-healthy and long-lived residents of mid-20th century Roseto, Pennsylvania with their love of meat, lard, alcohol, and tobacco; see Blue Zones Dietary Myth). Yet, it’s not the traditional Mongolians but the industrialized Mongolians who show all the health problems. A major difference between these two populations in Mongolia is diet, much of it a difference in the amount of low-carb animal foods versus high-carb plant foods eaten. Genetics are not deterministic, not in the slightest. As some others have noted, the traditional Mongolian diet would be accurately described as a low-carb paleo diet that, in the wintertime, would often have been a strict carnivore diet and ketogenic diet; although even rural Mongolians, unlike in the time of Genghis Khan, now get a bit more starchy agricultural foods. Maybe there is a protective health factor found in a diet that relies on nutrient-dense animal foods and leans toward the ketogenic.
It isn’t only that the Mongolian diet was likely ketogenic because of being low-carbohydrate, particularly on their meat-based winter diet, but also because it involved fasting. In Mongolia, the Tangut Country, and the Solitudes of Northern Tibet, Volume 1 (1876), Nikolaĭ Mikhaĭlovich Przhevalʹskiĭ writes in the second note on p. 65 under the section Calendar and Year-Cycle: “On the New Year’s Day, or White Feast of the Mongols, see ‘Marco Polo’, 2nd ed. i. p. 376-378, and ii. p. 543. The monthly festival days, properly for the Lamas days of fasting and worship, seem to differ locally. See note in same work, i. p. 224, and on the Year-cycle, i. p. 435.” This is alluded to in another text, which describes that such things as fasting were the norm of that time: “It is well known that both medieval European and traditional Mongolian cultures emphasized the importance of eating and drinking. In premodern societies these activities played a much more significant role in social intercourse as well as in religious rituals (e.g., in sacrificing and fasting) than nowadays” (Antti Ruotsala, Europeans and Mongols in the middle of the thirteenth century, 2001). A science journalist trained in biology, Dyna Rochmyaningsih, also mentions this: “As a spiritual practice, fasting has been employed by many religious groups since ancient times. Historically, ancient Egyptians, Greeks, Babylonians, and Mongolians believed that fasting was a healthy ritual that could detoxify the body and purify the mind” (Fasting and the Human Mind).
Mongol shamans and priests fasted, no different than in so many other religions, but so did other Mongols — more from Przhevalʹskiĭ’s 1876 account showing the standard feast and fast cycle of many traditional ketogenic diets: “The gluttony of this people exceeds all description. A Mongol will eat more than ten pounds of meat at one sitting, but some have been known to devour an average-sized sheep in twenty-four hours! On a journey, when provisions are economized, a leg of mutton is the ordinary daily ration for one man, and although he can live for days without food, yet, when once he gets it, he will eat enough for seven” (see more quoted material in Diet of Mongolia). Fasting was also noted of earlier Mongols, such as Genghis Khan: “In the spring of 1211, Jenghis Khan summoned his fighting forces […] For three days he fasted, neither eating nor drinking, but holding converse with the gods. On the fourth day the Khakan emerged from his tent and announced to the exultant multitude that Heaven had bestowed on him the boon of victory” (Michael Prawdin, The Mongol Empire, 1967). Even before he became Khan, this was his practice as was common among the Mongols, such that it became a communal ritual for the warriors:
“When he was still known as Temujin, without tribe and seeking to retake his kidnapped wife, Genghis Khan went to Burkhan Khaldun to pray. He stripped off his weapons, belt, and hat – the symbols of a man’s power and stature – and bowed to the sun, sky, and mountain, first offering thanks for their constancy and for the people and circumstances that sustained his life. Then, he prayed and fasted, contemplating his situation and formulating a strategy. It was only after days in prayer that he descended from the mountain with a clear purpose and plan that would result in his first victory in battle. When he was elected Khan of Khans, he again retreated into the mountains to seek blessing and guidance. Before every campaign against neighboring tribes and kingdoms, he would spend days in Burhkhan Khandun, fasting and praying. By then, the people of his tribe had joined in on his ritual at the foot of the mountain, waiting his return” (Dr. Hyun Jin Preston Moon, Genghis Khan and His Personal Standard of Leadership).
As a concluding thought, we may have the Mongols to thank for the modern American hamburger: “Because their cavalry was traveling so much, they would often eat while riding their horses towards their next battle. The Mongol soldiers would soften scraps of meat by placing it under their saddles while they rode. By the time the Mongols had time for a meal, the meat would be “tenderized” and consumed raw. […] By no means did the Mongols have the luxury of eating the kind of burgers we have today, but it was the first recorded time that meat was flattened into a patty-like shape” (Anna’s House, Brunch History: The Shocking Hamburger Origin Story You Never Heard; apparently based on the account of Jean de Joinville, who was born a few years before Genghis Khan’s death). The Mongols introduced it to Russia, in what was called steak tartare (Tartars being one of the ethnic groups in the Mongol army), the Russians introduced it to Germany where it was most famously called hamburg steak (because sailors were served it at the ports of Hamburg), from which it was introduced to the United States by way of German immigrants sailing out of Hamburg. Another version of this is Salisbury steak that was invented during the American Civil War by Dr. James Henry Salisbury (physician, chemist, and medical researcher) as part of a meat-based, low-carb diet in medically and nutritionally treating certain diseases and ailments.
* * *
3/30/19 – An additional comment: I briefly mentioned sugar, that it causes a serotonin high and activates the hedonic pathway. I also noted that it was late in civilization when sources of sugar were cultivated and, I could add, even later when sugar became cheap enough to be common. Even into the 1800s, sugar was minimal and still often considered more as medicine than food.
Fructose is not like other sugars. This was important for early hominid survival and so shaped human evolution. It might have played a role in fasting and feasting. In 100 Million Years of Food, Stephen Le writes that, “Many hypotheses regarding the function of uric acid have been proposed. One suggestion is that uric acid helped our primate ancestors store fat, particularly after eating fruit. It’s true that consumption of fructose induces production of uric acid, and uric acid accentuates the fat-accumulating effects of fructose. Our ancestors, when they stumbled on fruiting trees, could gorge until their fat stores were pleasantly plump and then survive for a few weeks until the next bounty of fruit was available” (p. 42).
That makes sense to me, but he goes on to argue against this possible explanation. “The problem with this theory is that it does not explain why only primates have this peculiar trait of triggering fat storage via uric acid. After all, bears, squirrels, and other mammals store fat without using uric acid as a trigger.” This is where Le’s knowledge is lacking, for he never discusses ketosis, which has been centrally important for humans in a way it has not been for other animals. If uric acid increases fat production, that would be helpful for fattening up before the next starvation period, when the body returned to ketosis. So, it would be a regular switching back and forth between formation of uric acid that stores fat and formation of ketones that burns fat.
That is fine and dandy under natural conditions. Excess fructose on a continuous basis, however, is a whole other matter. It has been strongly associated with metabolic syndrome. One pathway of causation is this increased production of uric acid. That can lead to gout (wrongly blamed on meat) but to other things as well. It’s a mixed bag. “While it’s true that higher levels of uric acid have been found to protect against brain damage from Alzheimer’s, Parkinson’s, and multiple sclerosis, high uric acid unfortunately increases the risk of brain stroke and poor brain function” (Le, p. 43).
The potential side effects of uric acid overdose are related to other problems I’ve discussed in relation to the agricultural mind. “A recent study also observed that high uric acid levels are associated with greater excitement-seeking and impulsivity, which the researchers noted may be linked to attention deficit hyperactivity disorder (ADHD)” (Le, p. 43). The problems of sugar go far beyond mere physical disease. It’s one more factor in the drastic transformation of the human mind.
* * *
4/2/19 – More info: There are certain animal fats, the omega-3 fatty acids EPA and DHA, that are essential to human health (Georgia Ede, The Brain Needs Animal Fat). These were abundant in the hunter-gatherer diet. But over the history of agriculture, they have become less common.
This is associated with psychiatric disorders and general neurocognitive problems, including those already mentioned above in the post. Agriculture and industrialization have replaced these healthy lipids with industrially-processed seed oils that are high in linoleic acid (LA), an omega-6 fatty acid. LA interferes with the body’s use of omega-3 fatty acids. Worse still, these seed oils appear to not only alter gene expression (epigenetics) but also to be mutagenic, a possible causal factor behind conditions like autism (Dr. Catherine Shanahan On Dietary Epigenetics and Mutations).
“Biggest dietary change in the last 60 years has been avoidance of animal fat. Coincides with a huge uptick in autism incidence. The human brain is 60 percent fat by weight. Much more investigation needed on correspondence between autism and prenatal/child ingestion of dietary fat.”
~ Brad Lemley
The agricultural diet, along with a drop in animal foods, saw a loss of access to the high levels and full profile of B vitamins. As with the later industrial seed oils, this had a major impact on genetics:
“The phenomenon wherein specific traits are toggled up and down by variations in gene expression has recently been recognized as a result of the built-in architecture of DNA and dubbed “active adaptive evolution.” 44
“As further evidence of an underlying logic driving the development of these new autism-related mutations, it appears that epigenetic factors activate the hotspot, particularly a kind of epigenetic tagging called methylation. 45 In the absence of adequate B vitamins, specific areas of the gene lose these methylation tags, exposing sections of DNA to the factors that generate new mutations. In other words, factors missing from a parent’s diet trigger the genome to respond in ways that will hopefully enable the offspring to cope with the new nutritional environment. It doesn’t always work out, of course, but that seems to be the intent.”
~ Catherine Shanahan, Deep Nutrition, p. 56
And one last piece of evidence on the essential nature of animal fats:
“Maternal intake of fish, a key source of fatty acids, has been investigated in association with child neurodevelopmental outcomes in several studies. […]
“Though speculative at this time, the inverse association seen for those in the highest quartiles of intake of ω-6 fatty acids could be due to biological effects of these fatty acids on brain development. PUFAs have been shown to be important in retinal and brain development in utero (37) and to play roles in signal transduction and gene expression and as components of cell membranes (38, 39). Maternal stores of fatty acids in adipose tissue are utilized by the fetus toward the end of pregnancy and are necessary for the first 2 months of life in a crucial period of development (37). The complex effects of fatty acids on inflammatory markers and immune responses could also mediate an association between PUFA and ASD. Activation of the maternal immune system and maternal immune aberrations have been previously associated with autism (5, 40, 41), and findings suggest that increased interleukin-6 could influence fetal brain development and increase risk of autism and other neuropsychiatric conditions (42–44). Although results for effects of ω-6 intake on interleukin-6 levels are inconsistent (45, 46), maternal immune factors potentially could be affected by PUFA intake (47). […]
6/13/19 – About the bicameral mind, I saw some other evidence for it in relationship to fasting. In the following quote, it is described that after ten days of fasting ancient humans would experience spirits. One thing that is certain is that one can be fully in ketosis within three days. This would be true even if it wasn’t total fasting, as the caloric restriction would achieve the same end.
The author, Michael Carr, doesn’t think fasting was the cause of the spirit visions, but he doesn’t explain the reason(s) for his doubt. There is a long history of fasting used to achieve this intended outcome. If fasting was ineffective for this purpose, why has nearly every known traditional society for millennia used such methods? These people knew what they were doing.
By the way, imbibing alcohol after the fast would really knock someone into an altered state. The body becomes even more sensitive to alcohol when in a ketogenic state during fasting. Combine this altered state with ritual, setting, cultural expectation, and archaic authorization. I don’t have any doubt that spirit visions could easily be induced.
Reflections on the Dawn of Consciousness
ed. by Marcel Kuijsten
Kindle Location 5699-5718
The Shi ‘Corpse/ Personator’ Ceremony in Early China
by Michael Carr
“”Ritual Fasts and Spirit Visions in the Liji” 37 examined how the “Record of Rites” describes zhai 齋 ‘ritual fasting’ that supposedly resulted in seeing and hearing the dead. This text describes preparations for an ancestral sacrifice that included divination for a suitable day, ablution, contemplation, and a fasting ritual with seven days of sanzhai 散 齋 ‘relaxed fasting; vegetarian diet; abstinence (esp. from sex, meat, or wine)’ followed by three days of zhizhai 致 齋 ‘strict fasting; diet of grains (esp. gruel) and water’.
“Devoted fasting is inside; relaxed fasting is outside. During fast-days, one thinks about their [the ancestor’s] lifestyle, their jokes, their aspirations, their pleasures, and their affections. [After] fasting three days, then one sees those [spirits] for whom one fasted. On the day of the sacrifice, when one enters the temple, apparently one must see them at the spirit-tablet. When one returns to go out the door [after making sacrifices], solemnly one must hear sounds of their appearance. When one goes out the door and listens, emotionally one must hear sounds of their sighing breath. 38
“This context unequivocally uses biyou 必 有 ‘must be/ have; necessarily/ certainly have’ to describe events within the ancestral temple; the faster 必 有 見 “must have sight of, must see” and 必 有 聞 “must have hearing of, must hear” the deceased parent. Did 10 days of ritual fasting and mournful meditation necessarily cause visions or hallucinations? Perhaps the explanation is extreme or total fasting, except that several Liji passages specifically warn against any excessive fasts that could harm the faster’s health or sense perceptions. 39 Perhaps the explanation is inebriation from drinking sacrificial jiu 酒 ‘(millet) wine; alcohol’ after a 10-day fast. Based on measurements of bronze vessels and another Liji passage describing a shi personator drinking nine cups of wine, 40 York University professor of religious studies Jordan Paper calculates an alcohol equivalence of “between 5 and 8 bar shots of eighty-proof liquor.” 41 On the other hand, perhaps the best explanation is the bicameral hypothesis, which provides a far wider-reaching rationale for Chinese ritual hallucinations and personation of the dead.”
This immediately made me wonder how it all relates. Changes in diet alter hormonal functioning. Endocrinology, the study of hormones, has been a major part of the diet debate going back to European researchers earlier last century (as discussed by Gary Taubes). Diet affects hormones and hormones in turn affect diet. But I had something more specific in mind.
What about propionate and glutamate? What might their relationship be to testosterone? In a brief search, I couldn’t find anything about propionate. But I did find some studies related to glutamate. There is an impact on the endocrine system, although these studies weren’t looking at the results in terms of autism specifically or neurocognitive development in general. It points to some possibilities, though.
One could extrapolate from one of these studies that increased glutamate in the pregnant mother’s diet could alter what testosterone does to the developing fetus, in that testosterone increases the toxicity of glutamate which might not be a problem under normal conditions of lower glutamate levels. This would be further exacerbated during breastfeeding and later on when the child began eating the same glutamate-rich diet as the mother.
11/28/21 – Here is some discussion of vitamin B1 (thiamin/thiamine). It couldn’t easily fit into the above post without revising and rewriting some of it. And it could’ve been made into a separate post by itself. But, for the moment, we’ll look at some of the info here, as relevant to the above survey and analysis. This section will be used as a holding place for some developing thoughts, although we’ll try to avoid getting off-topic in a post that is already too long. Nonetheless, we are going to have to trudge a bit into the weeds so as to see the requisite details more clearly.
Related to autism, consider this highly speculative hypothesis: “Thiamine deficiency is what made civilization. Grains deplete it, changing the gut flora to make more nervous and hyperfocused (mildly autistic) humans who are afraid to stand out. Conformity. Specialization in the division of labor” (JJ, Is Thiamine Deficiency Destroying Your Digestive Health? Why B1 Is ESSENTIAL For Gut Function, EONutrition). Thiamine deficiency is also associated with delirium and psychosis, such as schizophrenia (relevant scientific papers available are too numerous to be listed). By the way, psychosis, along with mania, has an established psychological and neurocognitive overlap with measures of modern conservatism; in opposition to the liberal link to mood disorders, addiction, and alcoholism (Uncomfortable Questions About Ideology; & Radical Moderates, Depressive Realism, & Visionary Pessimism). This is part of some brewing thoughts that won’t be further pursued here.
The point is simply to emphasize the argument that modern ideologies, as embodied worldviews and social identities, may partly originate in or be shaped by dietary and nutritional factors, among much else in modern environments and lifestyles. Nothing even comparable to conservatism and liberalism existed as such prior to the expansion and improvement of agriculture during the Axial Age (farm fields were made more uniform and well-managed, and hence produced higher yields; e.g., systematic weeding became common, as opposed to letting fields grow in a semi-wild state); and over time there were also innovations in food processing (e.g., removing hulls from grains made them last longer in storage, while having the unintended side effect of also removing a major source of vitamin B1 needed to metabolize carbs).
In the original writing of this post, one focus was on addiction. Grains and dairy were noted as sources of exorphins and dopaminergic peptides, as well as propionate and glutamate. As already explained, this goes a long way to explain the addictive quality of these foods and their relationship to the repetitive behavior of obsessive-compulsive disorder. This is seen in many psychiatric illnesses and neurocognitive conditions, including autism (Derrick Lonsdale et al, Dysautonomia in Autism Spectrum Disorder: Case Reports of a Family with Review of the Literature):
“It has been hypothesized that autism is due to mitochondrial dysfunction, supported more recently. Abnormal thiamine homeostasis has been reported in a number of neurological diseases and is thought to be part of their etiology. Blaylock has pointed out that glutamate and aspartate excitotoxicity is more relevant when there is neuron energy failure. Brain damage from this source might be expected in the very young child and the elderly when there is abnormal thiamine homeostasis. In thiamine-deficient neuroblastoma cells, oxygen consumption decreases, mitochondria are uncoupled, and glutamate, formed from glutamine, is no longer oxidized and accumulates. Glutamate and aspartate are required for normal metabolism, so an excess or deficiency are both abnormal. Plaitakis and associates studied the high-affinity uptake systems of aspartate/glutamate and taurine in synaptosomal preparations isolated from brains of thiamine-deficient rats. They concluded that thiamine deficiency could impair cerebellar function by inducing an imbalance in its neurotransmitter systems.”
We’ve previously spoken of glutamate, a key neurotransmitter; but let’s summarize it while adding in new info. Among those on the autistic spectrum, there is commonly a glutamate excess. One cause is eating a lot of processed foods that use glutamate as an additive (e.g., MSG). And there is the contributing factor of many autistics being drawn to foods naturally high in glutamate, specifically dairy and wheat. A high-carb diet also promotes the body’s own production of glutamate, with carb-related inflammation spiking glutamate levels in the brain; and it downregulates the levels of the inhibitory neurotransmitter GABA that balances glutamate. GABA is important for sleep and much else.
Keep in mind that thiamine is required in the production of numerous other neurotransmitters and in the balanced interaction between them. Another B vitamin, B12 (cobalamin), plays a similar role; and its deficiency is not uncommonly seen in these conditions as well. The B vitamins, by the way, are particularly concentrated in animal foods, as are other key nutrients. Think about choline, precursor of acetylcholine, which promotes sensory habituation, perceptual regulation, attentional focus, executive function, and selective responsiveness while supporting mental flexibility (thiamine is also needed in making acetylcholine, and notably choline has some similarities to B vitamins); while similarly the amino acid L-tyrosine further promotes mental flexibility — the two form a balance of neurocognitive functioning, both of which can be impaired in diverse psychiatric diseases, neurological conditions, speech/language issues, learning disabilities, etc.
There is way too much scientific evidence to be cited and surveyed here, but let’s briefly focus in on some examples involving choline, such an easily found nutrient in eggs, meat, liver, and seafood. Studies indicate choline may help prevent mental health issues like schizophrenia and ADHD that involve sensory inhibition and attention problems that can contribute to social withdrawal (Bret Stetka, Can Mental Illness Be Prevented In The Womb?). Autism spectrum disorders and mood disorders, in being linked to choline deficiency, likewise exhibit social withdrawal. In autism, the sensory inhibition challenge is experienced as sensory overload and hyper-sensitivity (Anuradha Varanasi, Hypersensitivity Might Be Linked To A Transporter Protein Deficiency In The Brain: Study).
Mental flexibility, specifically, seems less relevant to modern society; or rather, maybe its suppression has made possible the rise of modern society, as hyper-specialization has become central to most modern work that is narrowly focused and repetitive. Yet one might note that modern liberalism strongly correlates with mental flexibility; e.g., Ernest Hartmann’s fluid and thin boundaries of mind, the Big Five trait of openness to experience, and Myers-Briggs intuition and perceiving — by the way, a liberal arts education is defined by its not being specialized, and that is precisely what makes it ‘liberal’ (i.e., generous, expansive, inclusive, diverse, tolerant, multiperspectival, etc.).
Maybe this also relates to how modern liberalism, as an explicit socio-ideological identity, has typically been tied into the greater wealth of the middle-to-upper classes and hence has involved greater access to nutritious foods and costly supplements, not to mention high quality healthcare that tests for nutritional deficiencies and treats them early on; along with higher status, more privileges, and less stress within the high-inequality hierarchy of the American caste system. There is a significant amount of truth to the allegation about a ‘liberal elite’, which in some ways applies to the relatively more liberal-minded conservative elites as well. It would be interesting to know if malnutrition or specific nutritional deficiencies increase social conservatism, similar to studies that have shown a link between parasite load and authoritarianism (in this blog, it’s been pointed out that all authoritarianism is socially conservative, not only the likes of Nazis but also Soviets, Maoists, and others; all of which targeted social liberals and those under the protection of socially liberal society).
Many other factors can destabilize this delicate system. To return to glutamate: it is one of the three precursors in producing the endogenous antioxidant glutathione. A major limit to this process is glycine, which primarily comes from the connective tissue of animal foods (tough meats, gristle, bone broths, etc.). Without sufficient glycine, glutamate won’t get used up and so will accumulate. Plus, glycine directly interacts with the glutamatergic neurotransmission system and so is needed for healthy functioning of glutamate. Further complicating matters can be mercury toxicity, which over-excites the glutamate pathway. Then, as already described, the modern diet dumps even more glutamate on the fire. It’s a whole freaking mess, the complex and overlapping conditions of modernity. Altering any single factor would throw a wrench into the works, but what we’re talking about is nearly every major factor along with many minor factors all being tossed up in the air.
The standard American diet is high in refined carbs while low in certain animal-based nutrients that were more typical of a traditional nose-to-tail diet. About the first part, refined carbs are low in vitamin B1 (thiamin/thiamine), though governments have required fortification with such key nutrients. The problem is that thiamine is required for the metabolism of carbs. The more carbs one eats, the more thiamine is needed. Carb intake has risen so vastly that, as some argue, the levels of fortification aren’t enough. To make matters worse, because thiamine deficiency disrupts carb metabolism, there is an increasing craving for carbs as the body struggles to get the fuel it needs. Then, as those cravings lead to continued overeating of carbs, thiamine deficiency gets worse, which makes the carb cravings even stronger. It becomes a lifelong addiction, in some cases involving alcoholism as liquid carbs (the body treats alcohol much the same as sugar).
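The runaway dynamic just described (more carbs → more thiamine demand → shortfall → stronger cravings → more carbs) can be sketched as a toy positive-feedback loop. To be clear, this is purely illustrative: the function, the numbers, and the units are invented for the sketch, not physiological measurements.

```python
# Toy sketch of the carb/thiamine feedback loop described above.
# All parameters are illustrative assumptions, not physiological data.

def simulate(days, carb_intake=300.0, thiamine_store=25.0,
             fortification=1.2, demand_per_100g_carb=0.5):
    """Each day, thiamine demand scales with carb intake; any shortfall
    nudges carb cravings (intake) upward, raising demand further."""
    history = []
    for _ in range(days):
        demand = demand_per_100g_carb * carb_intake / 100.0
        thiamine_store += fortification - demand   # fortified intake vs demand
        thiamine_store = max(thiamine_store, 0.0)  # stores can't go negative
        if demand > fortification:                 # shortfall -> stronger cravings
            carb_intake *= 1.01
        history.append((carb_intake, thiamine_store))
    return history

hist = simulate(60)
print(f"day 60: carbs ≈ {hist[-1][0]:.0f} g/day, thiamine store ≈ {hist[-1][1]:.1f}")
```

Because demand starts above the fortified supply, intake ratchets up every day and the store drains to zero: the loop never self-corrects, which is the point of the argument above.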
The only alternative fuel for the body is fat. Here we get to another wrinkle. A high-carb diet also causes insulin resistance. The hormone insulin, like thiamine, is also needed in energy metabolism. This often leads to obesity, where excess calories get stored as fat but, without insulin sensitivity, the body can’t easily access that stored energy. So, this is why fat people are constantly hungry, despite having immense stored energy. Their bodies can’t fully use that stored energy and neither can their bodies fully use the carbs they’re eating. Thiamine deficiency combined with insulin resistance is a spiral of metabolic dysfunction. This is why some experts in this field worry that thiamine insufficiency might be more widespread than acknowledged and that it might not show up on standard tests, as what is not being considered is the higher demand for thiamine created by a higher intake of carbs than has ever existed before. To further obscure this health crisis, it is irrelevant how much thiamine a test shows in one’s bloodstream if one lacks the cofactors (e.g., magnesium) that help the body process thiamine and transport it into cells.
Insulin resistance, along with the rest of metabolic syndrome, has many neurological consequences. Numerous neurocognitive conditions are directly linked to it and often involve thiamine deficiency — besides autism: mood disorders, obsessive-compulsive disorder, schizophrenia, etc. For example, consider Alzheimer’s, which some are now referring to as type III diabetes because there is insulin resistance in the brain; and the brain requires glucose, which in turn requires insulin and insulin sensitivity. All cells need energy, and this goes to the centrality of the mitochondria, the powerhouses of cellular energy (each cell can have thousands of mitochondria). Besides autoimmune conditions like multiple sclerosis, mitochondrial dysfunction might also be involved in conditions like autism. That is related to thiamine deficiency causing energy deficiency and affecting the role of glutamate.
It’s a morass of intertwining mechanisms, pathways, and systems that is hard for a layman to comprehend. But it is serious stuff on so many levels, for individuals and society. For a moment, let’s step back and look again at the big picture. In The Crisis of Identity, public health was explained as a moral panic and existential crisis. One aspect that wasn’t explored in that post is cancer, but we did briefly note that, “in the mid-1800s, Stanislas Tanchou did a statistical analysis that correlated the rate of grain consumption with the rate of cancer; and he observed that cancer, like insanity, spread along with civilization.” We only bring this up now because we’ve been reading Sam Apple’s book Ravenous, which is about the Nazi obsession with cancer, an obsession carrying the same mass hysteria as was going on elsewhere in the Western world, such as with neurasthenia and tuberculosis; and which brought up antisemitism everywhere it was found.
Cancer, though, can help us understand an aspect of thiamine deficiency and insufficiency. It also has to do with neurological and mental health. In interfering with carb metabolism, insufficient thiamine also interferes with mitochondrial oxidation, and so mitochondria turn to fermenting glucose for energy. This is what happens in cancer cells, as the Jewish scientist Otto Warburg, who continued his research under the Nazi regime, thought so important. In general, mitochondrial dysfunction results and energy production goes down. Also, the mitochondria are closely related to immune functioning, and so autoimmune disorders can follow: multiple sclerosis, Hashimoto’s, rheumatoid arthritis, etc. Along with causing gut issues and a diversity of other symptoms, this is why thiamine deficiency is known as a disease mimic, in so often getting misdiagnosed as something else.
That is a problem with something like psychiatric categories and labels, as they are simply groupings of symptoms; but then again, that is true for most conventional healthcare. We need to discern the underlying cause(s). To demonstrate this, we’ll now move on to the limbic system, part of the primitive brain stem, which has to do with emotional processing and control of the autonomic nervous system. Thiamine deficiency has a strong impact on limbic cells, similar to an oxygen deficiency, because of the aforementioned altered energy metabolism of mitochondria, which prioritize oxygen in the production of ATP (the main fuel used by most cells). There is not only a loss of energy but eventually mitochondrial death and hence cell death, also from decreased glucose utilization in cells; or, in some cases, something worse, when cells refuse to die (i.e., cancer) and turn to glucose fermentation in the mitochondria, which allows those cells to proliferate. In either case, the involvement of carbs and glucose becomes dramatically changed and imbalanced.
This points to how the same fundamental issues deep within our physiology can become expressed in numerous ways, such as the link between cancer and metabolic syndrome (particularly obesity). But, in terms of subjective experience, we can’t realize most of this is going on, and even doctors often aren’t able to detect it with the crude tools at hand. Yet the individual might experience the consequences of what can’t be seen. If thiamine deficiency causes brain damage in the limbic system and elsewhere, the results can be depression, anxiety, irritability, fatigue, bipolar, emotional instability, moodiness, confusion, schizophrenia, cognitive decline, learning difficulties, inability to form memories, loss of memory recall, confabulation (making up stories), etc.; with the worst symptoms corresponding to Wernicke-Korsakoff syndrome, which can ultimately (and very rapidly) turn fatal. Now multiply that across an entire society and no wonder the reactionary mind has taken hold and created such a powerful psychological undertow, not only for conservatives but for everyone.
* * *
6/2/22 – Let’s make yet another subsection to throw in some other info. This is an extension of what has already been said on the growing number of factors involved in autism spectrum disorder, not to mention often overlapping with numerous other physical and cognitive conditions. There are so many proven and potential factors (correlated, contributing, and causal) that it can give one a headache trying to piece it all together and figure out what it means. Writing about it here is nearly headache-inducing, and so empathy goes out to any readers trying to work their way through this material. Such diverse and wide-ranging evidence might imply that so-called autism spectrum disorder is not really a single disorder but a blanket label to cover up mass complexity and confusion. Okay. Take a deep breath.
An interesting substance is carnitine, which is needed for energy production by helping transport fatty acids into the mitochondria. Low carnitine levels are prevalent in certain neurocognitive conditions, from depression to autism. “Some tenuous links between carnitine and autism already exist. Defects in the mitochondria, which have previously been linked to autism, can sometimes lead to carnitine deficiency. And treating children with autism with valproic acid, an anti-seizure medicine that can lower carnitine levels, can have serious side effects” (Emily Singer, Defects in carnitine metabolism may underlie autism). It’s one of the many nutrients that is mostly found in or entirely exclusive to animal foods, and so it has much to do with the agricultural diet and even more so with modern industrial food production. For such an easily obtained substance, there is a significant number of Westerners who are not getting enough of it. But all they’d need to do to obtain it is eat some red meat, which is precisely the main food that health experts and public officials have been telling Americans to avoid.
Beef consumption is almost half of what it was at the beginning of the 19th century and has leveled out since then, whereas low-carnitine meats such as chicken and fish have increasingly replaced beef. About the agricultural angle, it might be noted that grain-fed animals have lower amounts of diverse nutrients (carnitine, choline, CoQ10, zinc, carotenoids, vitamin A3, E vitamins, omega-3s, etc.) as compared to pasture-raised and wild-caught animals, except for certain nutrients that are typically added to animal feed; and this might partly explain why the agricultural revolution led to increased stunting and sickliness, many thousands of years before the modern industrialized diet of hyper-processed foods produced from industrial agriculture. So, it’s not only that modern Americans are eating less red meat but that they are replacing such nutrient density with lower-quality animal foods from factory farming; meanwhile, overall meat consumption has dropped since the 19th century, and animal fat intake drastically declined after being mostly replaced with industrial seed oils by the 1930s. It’s safe to say that the average American is consuming approximately zero fatty ruminant meat or any other animal foods from pasture-raised or wild-caught animals. Yet the intake of vegetables, fruits, nuts, seeds, and seed oils is greater than in past centuries.
To refocus, the human body has some capacity to produce carnitine de novo, but it’s limited and far from optimal. Autistics, in particular, can have carnitine-related genetic defects involving a deletion in the gene trimethyllysine hydroxylase epsilon (TMLHE); a genetic defect that is mostly found in families with multiple autistic boys. Also, as expected, vegans and vegetarians measure as having low plasma levels of this key nutrient. Such deficiencies are potentially a worse problem for certain modern populations but were less so in the past because “genetic deficiencies in carnitine synthesis were tolerated in the European population because their effects were nutritionally complemented by a carnitine-rich diet. In this manner, the selection pressures that would have otherwise eliminated such mutations from the population were effectively removed” (Vytas A. Bankaitis & Zhigang Xie, The neural stem cell/carnitine malnutrition hypothesis: new prospects for effective reduction of autism risk?). As for the present, the authors “estimate that some 20%–30% of pregnant women in the United States might be exposing the developing fetus to a suboptimal carnitine environment.”
Carnitine underpins many physiological factors and functions involving embryonic neural stem cells, long-chain fatty acids, mitochondrial function, ATP production, oxidative stress, inflammation, epigenetic regulation of gene expression, etc. As mediated by epigenetic control, carnitine promotes “the switch from solitary to gregarious social behavior” in other species and likely in humans as well (Rui Wu et al, Metabolomic analysis reveals that carnitines are key regulatory metabolites in phase transition of the locusts). Certainly, as Bankaitis and Xie explain, carnitine deficiency is directly correlated with language/speech delay, language weakness, or speech deficits, along with stunted motor development and common autistic behaviors that are causally linked by way of long-chain fatty acid (LCFA) β-oxidation deficits, medium-chain FAO deficits, etc. To emphasize this point, overlapping with the same deficiencies (carnitine, B vitamins, fat-soluble vitamins, choline, etc.) and excesses (glutamate, propionate, etc.) as found in autism, there are many other speech and language conditions: dyslexia, specific language impairment (SLI), developmental language disorder (DLD), etc.; along with ADHD, learning disabilities, and much else (about all of this, approximately a million studies have been done and another million articles written). These might not always be entirely distinct categories but imperfect labels for capturing a swarm of underlying issues, as has been suggested by some experts in the field.
Bankaitis and Xie then conclude that, “Finally, we are struck by the fact that two developments dominating public interest in contemporary news cycles detail the seemingly unrelated topics of the alarming rise of autism in young children and the damaging human health and planetary-scale environmental costs associated with cattle farming and consumption of red meat (86). The meteoric rise of companies promoting adoption of meatless mimetics of beef and chicken at major fast food outlets testifies to the rapidly growing societal appetite for reducing meat consumption. This philosophy is even rising to the level of circulation of scientific petitions exhorting world governments to unite in adopting global measures to restrict meat consumption (87). We now pose the question whether such emerging societal attitudes regarding nutrition and its environmental impact are on collision course with increased ASD risk. Food for thought, indeed.” It’s been shown that mothers of autistic children ate less meat before conception, during pregnancy, or during the lactation period; and had lower levels of calcium (Ya-Min Li, Maternal dietary patterns, supplements intake and autism spectrum disorders). Sure, we could supplement carnitine and every other nutrient concentrated in meat. That certainly would help bring the autism rate back down again (David A. Geier et al, A prospective double-blind, randomized clinical trial of levocarnitine to treat autism spectrum disorders). But maybe, instead, we should simply emphasize a healthy diet of nutrient-dense animal foods, particularly as whole foods.
It might be about finding the right form in the right amount, maybe in the needed ratio with other nutrients; our partial knowledge and vast ignorance being the eternal problem (Hubris of Nutritionism). Animal foods, by contrast, particularly pasture-raised and wild-caught, have all of the nutrients we need in the forms, amounts, and ratios we need them. As clever monkeys, we’ve spent the past century failing in our endeavor to industrially and medically re-create the wheel that Mother Nature invented through evolution. To put this in the context of everything analyzed here in this unwieldy piece, if most modern people weren’t following a nutritionally-deficient agricultural diet largely consisting of industrially hyper-processed and fortified plant foods, nearly all of the scientific disagreement and debate would be irrelevant. We’ve painted ourselves into a corner. The fact of the matter is we are a sickly people, and much of that is caused by diet; and not only by micronutrients, as the macronutrients play a particular role in metabolic health or the lack thereof, which in turn is another contributing factor to autism (Alison Jean Thomas, Is a Risk of Autism Related to Nutrition During Pregnancy?). And metabolic dysfunction and disease have much to do with addictive and/or harmful overconsumption of agricultural foods like grains, potatoes, sugar cane, high fructose corn syrup, seed oils, etc.
For vitamin B9, some speculate that increased risk of autism might have to do with methylation defects caused by mutations in the MTHFR gene (A1298C and C677T), or possibly something mimicking this phenomenon in those without the mutations (Karen E Christensen, High folic acid consumption leads to pseudo-MTHFR deficiency, altered lipid metabolism, and liver injury in mice). This relates to a reason behind recommendations for methylated forms of B vitamins, which are good sources of the methyl groups required for various physiological functions. For example, in demonstrating how one thing leads to another: “The methyl group from methyl folate is given to SAMe, whose job it is to deliver methyl to 200 essential pathways in the body. […] After receiving methyl donors, SAMe delivers methyl to 200 pathways in the body including ones needed to make carnitine, creatine and phosphatidylcholine. Carnitine supplementation improves delivery of omega 3 & 6 fatty acids needed to support language, social and cognitive development. Phosphatidylcholine is important in cell membrane health and repair. […] Repair of the cell membrane is an important part of improving sensory issues and motor planning issues in children with autism, ADHD and sensory integration disorder. Dimethylglycine (DMG) and trimethylglycine (TMG) donate methyl groups to the methylation cycle. TMG is needed to recycle homocysteine and help produce SAMe” (Treat Autism, Autism and Methylation – Are you helping to repair your child’s methylation cycle?).
Others dismiss these skeptical concerns and alternative theories as pseudo-scientific fear-mongering. The debate began with a preliminary study done in 2016; and, in the following year, a published review concurred that, “Based on the evidence evaluated, we conclude that caution regarding over supplementing is warranted” (Darrell Wiens & M. Catherine DeSoto, Is High Folic Acid Intake a Risk Factor for Autism?—A Review). There are other issues besides that. There has been a quarter century of mass supplementation of folate with fortified foods, but apparently no safety studies or analyses were ever done for the general population. On top of that, phthalate exposure from plastic contamination in water and such disrupts genetic signals for the processing of folate (Living On Earth, Plastics Linked to Rising Rates of Autism). But supplementation of folic acid might compensate for this (Nancy Lemieux, Study reports link between phthalates and autism, with protective effects of folic acid). The breakdown of plastic into microplastics allows them to accumulate in biological tissue that humans consume, though it’s unclear whether the same is true in plants or how much phthalates can accumulate up the food chain. So, it’s not clear how this may or may not be a problem specifically within present agriculture, but one suspects it might be an issue. Certainly, the majority of water in the world now is contaminated by microplastics and much else; and that water is used for livestock and agricultural goods. It’s hard to imagine how such things couldn’t be getting into everything or what it might mean for changes in the human body-mind, as compounded by all the rest (e.g., how various substances interact within the body). About pesticides in the water or from other sources, one might note that folic acid may have a protective effect against autism (Arkansas Folic Acid Coalition, Folic Acid May Reduce Autism Risk from Pesticides).
Whatever it all means, it’s obvious that the B vitamins are among the many super important nutrients mostly found in animal foods and concentrated in highest amounts in the most quality sources, from animals grown on pasture or in the wild. Much of the B vitamin debate about autism risk is too complex and murky to further analyze here, not to mention too mixed up with confounders and the replication crisis; with one potential confounder being the birth order effect or stoppage effect (Gideon Koren, High-Dose Gestational Folic Acid and the Risk for Autism? The Birth Order Effect). As one person noted, “If the literature is correct, and folic acid really causes a 42% reduction in autism, we should see a sharp decrease in autism diagnosis for births starting in 1997. Instead, autism rates continued to increase at exactly the same rate they had before. There is nothing in the data to suggest even a small drop in autism around the time of folic acid fortification” (Chris Said, Autism, folic acid, and the trend without a blip). And elsewhere it was recently stated that, “The overall evidence for all these claims remains inconclusive. While some meta-analyses have found a convincing pattern, a comprehensive 2021 Nutrients review failed to find a ‘robust’ statistical association — a more definitive outcome in the field of epidemiology” (Molly Glick, A Popular Supplement’s Confusing Links With Autism Development). That same assessment is repeated by others: “Studies have pointed out a potential beneficial effect of prenatal folic acid maternal supplementation (600 µg) on the risk of autism spectrum disorder onset, but opposite results have been reported as well” (Bianka Hoxha et al, Folic Acid and Autism: A Systematic Review of the Current State of Knowledge). It doesn’t add up, but we won’t attempt to solve that mystery.
To further muck up the works, it’s amusing that some suggest a distinction be made: “The signs and symptoms of pediatric B12 deficiency frequently mimic those of autism spectrum disorders. Both autistic and brain-injured B12-deficient children have obsessive-compulsive behaviors and difficulty with speech, language, writing, and comprehension. B12 deficiency can also cause aloofness and withdrawal. Sadly, very few children presenting with autistic symptoms receive adequate testing for B12 deficiency” (Sally M. Pacholok, Pediatric Vitamin B12 Deficiency: When Autism Isn’t Autism). Not being alone in that claim, someone else said, “A vitamin B12 deficiency can cause symptoms and behaviours that sometimes get wrongly diagnosed as autism” (). That second person’s motivation was to deny the culpability of veganism: “Vegans and vegetarians often struggle to get sufficient levels of B12 in their diets. Therefore the children of pregnant vegans may be more likely to have B12 deficiency.” But also that, “Early research shows that many genuinely autistic people have excessive levels of B12 in their systems. […] Vegans are more likely to take supplements to boost the vitamins they lack in their diet, including B12.” A deficiency in early life and a compensatory excess in later life could both be tied into vegan malnourishment — maybe or maybe not. Apparently, however explained or else rationalized away, just because something looks like a duck, walks like a duck, and quacks like a duck doesn’t necessarily mean it’s actually a duck. But has the autistic label ever been anything other than a constellation of factors, symptoms, behaviors, and traits? It’s like asking whether ‘depression’ variously caused by stress, overwork, sleep deprivation, trauma, nutritional deficiency, toxicity, parasitism, or physical disease is really all the same mental illness.
Admittedly, that is a useful line of thinking, from the perspective of functional medicine that looks for underlying causes and not mere diagnoses for the sake of insurance companies, bureaucratic paperwork, and pharmaceutical prescriptions.
Anyway, let’s just drop a load of links for anyone who is interested to explore it for themselves:
The Culture Wars of the Late Renaissance: Skeptics, Libertines, and Opera by Edward Muir Introduction pp. 5-7
One of the most disturbing sources of late-Renaissance anxiety was the collapse of the traditional hierarchic notion of the human self. Ancient and medieval thought depicted reason as governing the lower faculties of the will, the passions, and the body. Renaissance thought did not so much promote “individualism” as it cut away the intellectual props that presented humanity as the embodiment of a single divine idea, thereby forcing a desperate search for identity in many. John Martin has argued that during the Renaissance, individuals formed their sense of selfhood through a difficult negotiation between inner promptings and outer social roles. Individuals during the Renaissance looked both inward for emotional sustenance and outward for social assurance, and the friction between the inner and outer selves could sharpen anxieties. 2 The fragmentation of the self seems to have been especially acute in Venice, where the collapse of aristocratic marriage structures led to the formation of what Virginia Cox has called the single self, most clearly manifest in the works of several women writers who argued for the moral and intellectual equality of women with men. 3 As a consequence of the fragmented understanding of the self, such thinkers as Montaigne became obsessed with what was then the new concept of human psychology, a term in fact coined in this period. 4 A crucial problem in the new psychology was to define the relation between the body and the soul, in particular to determine whether the soul died with the body or was immortal. With its tradition of Averroist readings of Aristotle, some members of the philosophy faculty at the University of Padua recurrently questioned the Christian doctrine of the immortality of the soul as unsound philosophically. Other hierarchies of the human self came into question.
Once reason was dethroned, the passions were given a higher value, so that the heart could be understood as a greater force than the mind in determining human conduct. When the body itself slipped out of its long-despised position, the sexual drives of the lower body were liberated and thinkers were allowed to consider sex, independent of its role in reproduction, a worthy manifestation of nature. The Paduan philosopher Cesare Cremonini’s personal motto, “Intus ut libet, foris ut moris est,” does not quite translate to “If it feels good, do it;” but it comes very close. The collapse of the hierarchies of human psychology even altered the understanding of the human senses. The sense of sight lost its primacy as the superior faculty, the source of “enlightenment”; the Venetian theorists of opera gave that place in the hierarchy to the sense of hearing, the faculty that most directly channeled sensory impressions to the heart and passions.
Historical and Philosophical Issues in the Conservation of Cultural Heritage edited by Nicholas Price, M. Kirby Talley, and Alessandra Melucco Vaccaro Reading 5: “The History of Art as a Humanistic Discipline” by Erwin Panofsky pp. 83-85
Nine days before his death Immanuel Kant was visited by his physician. Old, ill and nearly blind, he rose from his chair and stood trembling with weakness and muttering unintelligible words. Finally his faithful companion realized that he would not sit down again until the visitor had taken a seat. This he did, and Kant then permitted himself to be helped to his chair and, after having regained some of his strength, said, ‘Das Gefühl für Humanität hat mich noch nicht verlassen’—’The sense of humanity has not yet left me’. The two men were moved almost to tears. For, though the word Humanität had come, in the eighteenth century, to mean little more than politeness and civility, it had, for Kant, a much deeper significance, which the circumstances of the moment served to emphasize: man’s proud and tragic consciousness of self-approved and self-imposed principles, contrasting with his utter subjection to illness, decay and all that is implied in the word ‘mortality.’
Historically the word humanitas has had two clearly distinguishable meanings, the first arising from a contrast between man and what is less than man; the second between man and what is more. In the first case humanitas means a value, in the second a limitation.
The concept of humanitas as a value was formulated in the circle around the younger Scipio, with Cicero as its belated, yet most explicit spokesman. It meant the quality which distinguishes man, not only from animals, but also, and even more so, from him who belongs to the species homo without deserving the name of homo humanus; from the barbarian or vulgarian who lacks pietas and παιδεία, that is, respect for moral values and that gracious blend of learning and urbanity which we can only circumscribe by the discredited word “culture.”
In the Middle Ages this concept was displaced by the consideration of humanity as being opposed to divinity rather than to animality or barbarism. The qualities commonly associated with it were therefore those of frailty and transience: humanitas fragilis, humanitas caduca.
Thus the Renaissance conception of humanitas had a two-fold aspect from the outset. The new interest in the human being was based both on a revival of the classical antithesis between humanitas and barbaritas, or feritas, and on a survival of the mediaeval antithesis between humanitas and divinitas. When Marsilio Ficino defines man as a “rational soul participating in the intellect of God, but operating in a body,” he defines him as the one being that is both autonomous and finite. And Pico’s famous ‘speech’ On the Dignity of Man is anything but a document of paganism. Pico says that God placed man in the center of the universe so that he might be conscious of where he stands, and therefore free to decide ‘where to turn.’ He does not say that man is the center of the universe, not even in the sense commonly attributed to the classical phrase, “man the measure of all things.”
It is from this ambivalent conception of humanitas that humanism was born. It is not so much a movement as an attitude which can be defined as the conviction of the dignity of man, based on both the insistence on human values (rationality and freedom) and the acceptance of human limitations (fallibility and frailty); from this, two postulates result: responsibility and tolerance.
Small wonder that this attitude has been attacked from two opposite camps whose common aversion to the ideas of responsibility and tolerance has recently aligned them in a united front. Entrenched in one of these camps are those who deny human values: the determinists, whether they believe in divine, physical or social predestination, the authoritarians, and those “insectolatrists” who profess the all-importance of the hive, whether the hive be called group, class, nation or race. In the other camp are those who deny human limitations in favor of some sort of intellectual or political libertinism, such as aestheticists, vitalists, intuitionists and hero-worshipers. From the point of view of determinism, the humanist is either a lost soul or an ideologist. From the point of view of authoritarianism, he is either a heretic or a revolutionary (or a counterrevolutionary). From the point of view of “insectolatry,” he is a useless individualist. And from the point of view of libertinism he is a timid bourgeois.
Erasmus of Rotterdam, the humanist par excellence, is a typical case in point. The church suspected and ultimately rejected the writings of this man who had said: “Perhaps the spirit of Christ is more largely diffused than we think, and there are many in the community of saints who are not in our calendar.” The adventurer Ulrich von Hutten despised his ironical skepticism and his unheroic love of tranquillity. And Luther, who insisted that “no man has power to think anything good or evil, but everything occurs in him by absolute necessity,” was incensed by a belief which manifested itself in the famous phrase: “What is the use of man as a totality [that is, of man endowed with both a body and a soul], if God would work in him as a sculptor works in clay, and might just as well work in stone?”
Food and Faith in Christian Culture edited by Ken Albala and Trudy Eden Chapter 3: “The Food Police” Sumptuary Prohibitions On Food In The Reformation by Johanna B. Moyer pp. 80-83
Protestants too employed a disease model to explain the dangers of luxury consumption. Luxury damaged the body politic leading to “most incurable sickness of the universal body” (33). Protestant authors also employed Galenic humor theory, arguing that “continuous superfluous expense” unbalanced the humors leading to fever and illness (191). However, Protestants used this model less often than Catholic authors who attacked luxury. Moreover, those Protestants who did employ the Galenic model used it in a different manner than their Catholic counterparts.
Protestants also drew parallels between the damage caused by luxury to the human body and the damage excess inflicted on the French nation. Rather than a disease metaphor, however, many Protestant authors saw luxury more as a “wound” to the body politic. For Protestants the danger of luxury was not only the buildup of humors within the body politic of France but the constant “bleeding out” of humor from the body politic in the form of cash to pay for imported luxuries. The flow of cash mimicked the flow of blood from a wound in the body. Most Protestants did not see luxury foodstuffs as the problem, indeed most saw food in moderation as healthy for the body. Even luxury apparel could be healthy for the body politic in moderation, if it was domestically produced and consumed. Such luxuries circulated the “blood” of the body politic creating employment and feeding the lower orders. 72 De La Noue made this distinction clear. He dismissed the need to individually discuss the damage done by each kind of luxury that was rampant in France in his time as being as pointless “as those who have invented auricular confession have divided mortal and venal sins into infinity of roots and branches.” Rather, he argued, the damage done by luxury was in its “entire bulk” to the patrimonies of those who purchased luxuries and to the kingdom of France (116). For the Protestants, luxury did not pose an internal threat to the body and salvation of the individual. Rather, the use of luxury posed an external threat to the group, to the body politic of France.
The Reformation And Sumptuary Legislation
Catholics, as we have seen, called for antiluxury regulations on food and banqueting, hoping to curb overeating and the damage done by gluttony to the body politic. Although some Protestants also wanted to restrict food and banqueting, more often French Protestants called for restrictions on clothing and foreign luxuries. These differing views of luxury during and after the French Wars of Religion not only give insight into the theological differences between these two branches of Christianity but also provide insight into the larger pattern of the sumptuary regulation of food in Europe in this period. Sumptuary restrictions were one means by which Catholics and Protestants enforced their theology in the post-Reformation era.
Although Catholicism is often correctly cast as the branch of Reformation Christianity that gave the individual the least control over their salvation, it was also true that the individual Catholic’s path to salvation depended heavily on ascetic practices. The responsibility for following these practices fell on the individual believer. Sumptuary laws on food in Catholic areas reinforced this responsibility by emphasizing what foods should and should not be eaten and mirrored the central theological practice of fasting for the atonement of sin. Perhaps the historiographical cliché that it was only Protestantism which gave the individual believer control of his or her salvation needs to be qualified. The arithmetical piety of Catholicism ultimately placed the onus on the individual to atone for each sin. Moreover, sumptuary legislation tried to steer the Catholic believer away from the more serious sins that were associated with overeating, including gluttony, lust, anger, and pride.
Catholic theology meshed nicely with the revival of Galenism that swept through Europe in this period. Galenists preached that meat eating, overeating, and the imbalance in humors which accompanied these practices, led to behavioral changes, including an increased sex drive and increased aggression. These physical problems mirrored the spiritual problems that luxury caused, including fornication and violence. This is why so many authors blamed the French nobility for the luxury problem in France. Nobles were seen not only as more likely to bear the expense of overeating but also as more prone to violence. 73
Galenism also meshed nicely with Catholicism because it was a very physical religion in which the control of the physical body figured prominently in the believer’s path to salvation. Not surprisingly, by the seventeenth century, Protestants gravitated away from Galenism toward the chemical view of the body offered by Paracelsus. 74 Catholic sumptuary law embodied a Galenic view of the body where sin and disease were equated and therefore pushed regulations that advocated each person’s control of his or her own body.
Protestant legislators, conversely, were not interested in the individual diner. Sumptuary legislation in Protestant areas ran the gamut from control of communal displays of eating, in places like Switzerland and Germany, to little or no concern with restrictions on luxury foods, as in England. For Protestants, it was the communal role of food and luxury use that was important. Hence the laws in Protestant areas targeted food in the context of weddings, baptisms, and even funerals. The English did not even bother to enact sumptuary restrictions on food after their break with Catholicism. The French Protestants who wrote on luxury glossed over the deleterious effects of meat eating, even proclaiming it to be healthful for the body while producing diatribes against the evils of imported luxury apparel. The use of Galenism in the French Reformed treatises suggests that Protestants too were concerned with a “body,” but it was not the individual body of the believer that worried Protestant legislators. Sumptuary restrictions were designed to safeguard the mystical body of believers, or the “Elect” in the language of Calvinism. French Protestants used the Galenic model of the body to discuss the damage that luxury did to the body of believers in France, but ultimately to safeguard the economic welfare of all French subjects. The Calvinists of Switzerland used sumptuary legislation on food to protect those predestined for salvation from the dangerous eating practices of members of the community whose overeating suggested they might not be saved.
Ultimately, sumptuary regulations in the Reformation spoke to the Christian practice of fasting. Fasting served very different functions in Protestant and Catholic theology. Raymond Mentzer has suggested that Protestants “modified” the Catholic practice of fasting during the Reformation. The major reformers, including Luther, Calvin, and Zwingli, all rejected fasting as a path to salvation. 75 For Protestants, fasting was a “liturgical rite,” part of the cycle of worship and a practice that served to “bind the community.” Fasting was often a response to adversity, as during the French Wars of Religion. For Catholics, fasting was an individual act, just as sumptuary legislation in Catholic areas targeted individual diners. However, for Protestants, fasting was a communal act, “calling attention to the body of believers.” 76 The symbolic nature of fasting, Mentzer argues, reflected Protestant rejection of transubstantiation. Catholics continued to believe that God was physically present in the host, but Protestants believed His was only a spiritual presence. When Catholics took Communion, they fasted to cleanse their own bodies so as to receive the real, physical body of Christ. Protestants, on the other hand, fasted as spiritual preparation because it was their spirits that connected with the spirit of Christ in the Eucharist. 77
“When an Indian child has been brought up among us, taught our language and habituated to our customs, yet if he goes to see his relations and makes one Indian ramble with them, there is no persuading him ever to return. [But] when white persons of either sex have been taken prisoners young by the Indians, and lived a while among them, tho’ ransomed by their friends, and treated with all imaginable tenderness to prevail with them to stay among the English, yet in a short time they become disgusted with our manner of life, and the care and pains that are necessary to support it, and take the first good opportunity of escaping again into the woods, from whence there is no reclaiming them.”
~ Benjamin Franklin
“The Indians, their old masters, gave them their choice and, without requiring any consideration, told them that they had been long as free as themselves. They chose to remain, and the reasons they gave me would greatly surprise you: the most perfect freedom, the ease of living, the absence of those cares and corroding solicitudes which so often prevail with us… all these and many more motives which I have forgot made them prefer that life of which we entertain such dreadful opinions. It cannot be, therefore, so bad as we generally conceive it to be; there must be in their social bond something singularly captivating, and far superior to anything to be boasted of among us; for thousands of Europeans are Indians, and we have no examples of even one of those Aborigines having from choice become Europeans! There must be something more congenial to our native dispositions than the fictitious society in which we live, or else why should children, and even grown persons, become in a short time so invincibly attached to it? There must be something very bewitching in their manners, something very indelible and marked by the very hands of nature. For, take a young Indian lad, give him the best education you possibly can, load him with your bounty, with presents, nay with riches, yet he will secretly long for his native woods, which you would imagine he must have long since forgot, and on the first opportunity he can possibly find, you will see him voluntarily leave behind all you have given him and return with inexpressible joy to lie on the mats of his fathers…
“Let us say what will of them, of their inferior organs, of their want of bread, etc., they are as stout and well made as the Europeans. Without temples, without priests, without kings and without laws, they are in many instances superior to us, and the proofs of what I advance are that they live without care, sleep without inquietude, take life as it comes, bearing all its asperities with unparalleled patience, and die without any kind of apprehension for what they have done or for what they expect to meet with hereafter. What system of philosophy can give us so many necessary qualifications for happiness? They most certainly are much more closely connected to nature than we are; they are her immediate children…”
~ J. Hector St. John de Crèvecœur
Western, educated, industrialized, rich, and democratic. Such societies, as the acronym goes, are WEIRD. But what exactly makes them weird?
This question occurred to me while reading Sebastian Junger’s Tribe. Much of what gets attributed to these WEIRD descriptors has been around for more than half a millennium, at least since European colonial imperialism began. From the moment Europeans settled in the Americas, a significant number of colonists chose to live among the natives because in many ways that lifestyle was a happier and healthier way of living, with less stress and work, less poverty and inequality, not only lacking arbitrary political power but also allowing far more personal freedom, especially for women.
Today, when we bother to think much about the problems we face, we mostly blame them on the side effects of modernity. But colonial imperialism began when Europe was still under the sway of monarchies, state churches, and feudalism. There was nothing WEIRD about Western civilization at the time.
Those earlier Europeans hadn’t yet started to think of themselves in terms of a broad collective identity such as ‘Westerners’. They weren’t particularly well educated, not even the upper classes. Industrialization was centuries away. As for being rich, there was some wealth back then, but it was limited to a few, and even for those few it was rather unimpressive by modern standards. And Europeans back then were extremely anti-democratic.
Since European colonists were generally no more WEIRD than various native populations, we must look for other differences between them. Why did so many Europeans choose to live among the natives? Why did so many captured Europeans who were adopted into tribes refuse or resist being ‘saved’? And why did colonial governments have to create laws and enforce harsh punishments to try to stop people from ‘going native’?
European society, on both sides of the ocean, was severely oppressive and violent. That was particularly true in Virginia, which was built on the labor of indentured servants at a time when most were worked to death before getting the opportunity for release from bondage. They had plenty of reason to seek the good life among the natives. But life was less than pleasant in the other colonies as well. A similar pattern repeated itself.
Thomas Morton went off with some men into the wilderness to start their own community where they commingled with the natives. This set an intolerable example that threatened Puritan social control, and so the Puritans destroyed their community of the free. Roger Williams, a Puritan minister, took on the mission to convert the natives but found himself converted instead. He fled Puritan oppression because, as he put it, the natives were more civilized than the colonists. Much later on, Thomas Paine, living near some still-free Indian tribes, observed that their communities demonstrated greater freedom, self-governance, and natural rights than the colonies did. He hoped Americans could take that lesson to heart. Other founders, from John Adams to Thomas Jefferson, looked admiringly to the example of their Indian neighbors.
The point is that whatever is problematic about Western society has been that way for a long time. Modernity has worsened this condition of unhappiness and dysfunction. There is no doubt that becoming more WEIRD has made us ever more weird, but Westerners were plenty weird from early on before they became WEIRD. Maybe the turning point for the Western world was the loss of our own traditional cultures and tribal lifestyles, as the Roman model of authoritarianism spread across Europe and became dominant. We have yet to shake off these chains of the past and instead have forced them upon everyone else.
It is what some call the Wetiko, one of the most infectious and deadly of mind viruses. “The New World fell not to a sword but to a meme,” as Daniel Quinn stated it (Beyond Civilization, p. 50). But it is a mind virus that can only take hold after immunity is destroyed. As long as there were societies of the free, the contagion was contained because the sick could be healed. But the power of the contagion is that the rabidly infected feel an uncontrollable compulsion to attack and kill the uninfected, the very people who would offer healing. Then the remaining survivors become infected and spread it further. A plague of victimization until no one is left untouched, until there is nowhere else to escape. Once all alternatives are eliminated, once a demiurgic monoculture comes to power, we are trapped in what Philip K. Dick called the Black Iron Prison. Sickness becomes all we know.
The usefulness of taking note of contemporary WEIRD societies isn’t that WEIRDness is the disease but that it shows the full-blown set of symptoms of the disease. But the onset of symptoms comes long after the infection, like a slow-growing malignant tumor in the brain. Still, symptoms are important, especially when there is a comparison to a healthy population. That is what the New World offered the European mind, a comparison. The earliest accounts of native societies in the Americas helped Europeans to diagnose their own disease and motivated them to begin looking for a cure, although the initial attempts were fumbling and inept. The first thing some Europeans did was simply to imagine what a healthy community might look like. That is what Thomas More attempted to do five centuries ago with his book Utopia.
Maybe the key is social concern. Utopian visions have always focused on the social aspect, typically describing how people would ideally live together in communities. That was also the focus of the founders when they sought out alternative possibilities for organizing society. The change with colonialism, as feudalism was breaking down, was the loss of belonging, of community and kinship. Modern individualism began not as an ideal but as a side effect of social breakdown, a condition of isolation and disconnection forced upon entire populations rather than freely chosen. And it was traumatizing to the Western psyche, and still is, as seen in the high rates of mental illness in WEIRD societies, especially the hyper-individualistic United States. That trauma began before the defining factors of the WEIRD took hold. It was that trauma that made the WEIRD possible.
The colonists, upon meeting natives, discovered what had been lost. And for many colonists, that loss had happened within living memory. The hunger for what was lost was undeniable. To have seen traditional communities that still functioned would have been like taking a breath of fresh air after months spent in the stench of a ship’s hull. Not only did these native communities demonstrate what was recently lost but also what had been lost much earlier. As many Indian tribes had more democratic practices, so did many European tribes prior to feudalism. But colonists had spent their entire lives being told democracy was impossible, that personal freedom was dangerous or even sinful.
The difference today is that none of this is within living memory for most of us, specifically Americans, unless one was raised Amish or in some similar community. The closest a typical American comes to this experience is by joining the military during war time. That is one of the few opportunities for a modern equivalent to a tribe, at least within WEIRD societies. And maybe a large part of the trauma soldiers struggle with isn’t merely the physical violence of war but the psychological violence of returning to the state of alienation, the loss of a bond that was closer than that of their own families.
Sebastian Junger notes that veterans who return to strong social support experience low rates of long-term PTSD, similar to Johann Hari’s argument about addiction in Chasing the Scream and his argument about depression in Lost Connections. Trauma, depression, addiction: all of these are consequences of, or are worsened by, isolation. Such responses are how humans cope under stressful and unnatural conditions.
* * *
Tribe: On Homecoming and Belonging by Sebastian Junger pp. 16-25
The question for Western society isn’t so much why tribal life might be so appealing—it seems obvious on the face of it—but why Western society is so unappealing. On a material level it is clearly more comfortable and protected from the hardships of the natural world. But as societies become more affluent they tend to require more, rather than less, time and commitment by the individual, and it’s possible that many people feel that affluence and safety simply aren’t a good trade for freedom. One study in the 1960s found that nomadic !Kung people of the Kalahari Desert needed to work as little as twelve hours a week in order to survive—roughly one-quarter the hours of the average urban executive at the time. “The ‘camp’ is an open aggregate of cooperating persons which changes in size and composition from day to day,” anthropologist Richard Lee noted with clear admiration in 1968. “The members move out each day to hunt and gather, and return in the evening to pool the collected foods in such a way that every person present receives an equitable share… Because of the strong emphasis on sharing, and the frequency of movement, surplus accumulation… is kept to a minimum.”
The Kalahari is one of the harshest environments in the world, and the !Kung were able to continue living a Stone-Age existence well into the 1970s precisely because no one else wanted to live there. The !Kung were so well adapted to their environment that during times of drought, nearby farmers and cattle herders abandoned their livelihoods to join them in the bush because foraging and hunting were a more reliable source of food. The relatively relaxed pace of !Kung life—even during times of adversity—challenged long-standing ideas that modern society created a surplus of leisure time. It created exactly the opposite: a desperate cycle of work, financial obligation, and more work. The !Kung had far fewer belongings than Westerners, but their lives were under much greater personal control. […]
First agriculture, and then industry, changed two fundamental things about the human experience. The accumulation of personal property allowed people to make more and more individualistic choices about their lives, and those choices unavoidably diminished group efforts toward a common good. And as society modernized, people found themselves able to live independently from any communal group. A person living in a modern city or a suburb can, for the first time in history, go through an entire day—or an entire life—mostly encountering complete strangers. They can be surrounded by others and yet feel deeply, dangerously alone.
The evidence that this is hard on us is overwhelming. Although happiness is notoriously subjective and difficult to measure, mental illness is not. Numerous cross-cultural studies have shown that modern society—despite its nearly miraculous advances in medicine, science, and technology—is afflicted with some of the highest rates of depression, schizophrenia, poor health, anxiety, and chronic loneliness in human history. As affluence and urbanization rise in a society, rates of depression and suicide tend to go up rather than down. Rather than buffering people from clinical depression, increased wealth in a society seems to foster it.
Suicide is difficult to study among unacculturated tribal peoples because the early explorers who first encountered them rarely conducted rigorous ethnographic research. That said, there is remarkably little evidence of depression-based suicide in tribal societies. Among the American Indians, for example, suicide was understood to apply in very narrow circumstances: in old age to avoid burdening the tribe, in the ritual paroxysms of grief following the death of a spouse, in a hopeless but heroic battle with an enemy, and in an attempt to avoid the agony of torture. Among tribes that were ravaged by smallpox, it was also understood that a person whose face had been hideously disfigured by lesions might kill themselves. According to The Ethics of Suicide: Historical Sources, early chroniclers of the American Indians couldn’t find any other examples of suicide that were rooted in psychological causes. Early sources report that the Bella Coola, the Ojibwa, the Montagnais, the Arapaho, the Plateau Yuma, the Southern Paiute, and the Zuni, among many others, experienced no suicide at all.
This stands in stark contrast to many modern societies, where the suicide rate is as high as 25 cases per 100,000 people. (In the United States, white middle-aged men currently have the highest rate at nearly 30 suicides per 100,000.) According to a global survey by the World Health Organization, people in wealthy countries suffer depression at as much as eight times the rate they do in poor countries, and people in countries with large income disparities—like the United States—run a much higher lifelong risk of developing severe mood disorders. A 2006 study comparing depression rates in Nigeria to depression rates in North America found that across the board, women in rural areas were less likely to get depressed than their urban counterparts. And urban North American women—the most affluent demographic of the study—were the most likely to experience depression.
The mechanism seems simple: poor people are forced to share their time and resources more than wealthy people are, and as a result they live in closer communities. Inter-reliant poverty comes with its own stresses—and certainly isn’t the American ideal—but it’s much closer to our evolutionary heritage than affluence. A wealthy person who has never had to rely on help and resources from his community is leading a privileged life that falls way outside more than a million years of human experience. Financial independence can lead to isolation, and isolation can put people at a greatly increased risk of depression and suicide. This might be a fair trade for a generally wealthier society—but a trade it is. […]
The alienating effects of wealth and modernity on the human experience start virtually at birth and never let up. Infants in hunter-gatherer societies are carried by their mothers as much as 90 percent of the time, which roughly corresponds to carrying rates among other primates. One can get an idea of how important this kind of touch is to primates from an infamous experiment conducted in the 1950s by a primatologist and psychologist named Harry Harlow. Baby rhesus monkeys were separated from their mothers and presented with the choice of two kinds of surrogates: a cuddly mother made out of terry cloth or an uninviting mother made out of wire mesh. The wire mesh mother, however, had a nipple that dispensed warm milk. The babies took their nourishment as quickly as possible and then rushed back to cling to the terry cloth mother, which had enough softness to provide the illusion of affection. Clearly, touch and closeness are vital to the health of baby primates—including humans.
In America during the 1970s, mothers maintained skin-to-skin contact with babies as little as 16 percent of the time, which is a level that traditional societies would probably consider a form of child abuse. Also unthinkable would be the modern practice of making young children sleep by themselves. In two American studies of middle-class families during the 1980s, 85 percent of young children slept alone in their own room—a figure that rose to 95 percent among families considered “well educated.” Northern European societies, including America, are the only ones in history to make very young children sleep alone in such numbers. The isolation is thought to make many children bond intensely with stuffed animals for reassurance. Only in Northern European societies do children go through the well-known developmental stage of bonding with stuffed animals; elsewhere, children get their sense of safety from the adults sleeping near them.
The point of making children sleep alone, according to Western psychologists, is to make them “self-soothing,” but that clearly runs contrary to our evolution. Humans are primates—we share 98 percent of our DNA with chimpanzees—and primates almost never leave infants unattended, because they would be extremely vulnerable to predators. Infants seem to know this instinctively, so being left alone in a dark room is terrifying to them. Compare the self-soothing approach to that of a traditional Mayan community in Guatemala: “Infants and children simply fall asleep when sleepy, do not wear specific sleep clothes or use traditional transitional objects, room share and cosleep with parents or siblings, and nurse on demand during the night.” Another study notes about Bali: “Babies are encouraged to acquire quickly the capacity to sleep under any circumstances, including situations of high stimulation, musical performances, and other noisy observances which reflect their more complete integration into adult social activities.”
As modern society reduced the role of community, it simultaneously elevated the role of authority. The two are uneasy companions, and as one goes up, the other tends to go down. In 2007, anthropologist Christopher Boehm published an analysis of 154 foraging societies that were deemed to be representative of our ancestral past, and one of their most common traits was the absence of major wealth disparities between individuals. Another was the absence of arbitrary authority. “Social life is politically egalitarian in that there is always a low tolerance by a group’s mature males for one of their number dominating, bossing, or denigrating the others,” Boehm observed. “The human conscience evolved in the Middle to Late Pleistocene as a result of… the hunting of large game. This required… cooperative band-level sharing of meat.”
Because tribal foragers are highly mobile and can easily shift between different communities, authority is almost impossible to impose on the unwilling. And even without that option, males who try to take control of the group—or of the food supply—are often countered by coalitions of other males. This is clearly an ancient and adaptive behavior that tends to keep groups together and equitably cared for. In his survey of ancestral-type societies, Boehm found that—in addition to murder and theft—one of the most commonly punished infractions was “failure to share.” Freeloading on the hard work of others and bullying were also high up on the list. Punishments included public ridicule, shunning, and, finally, “assassination of the culprit by the entire group.” […]
Most tribal and subsistence-level societies would inflict severe punishments on anyone who caused that kind of damage. Cowardice is another form of community betrayal, and most Indian tribes punished it with immediate death. (If that seems harsh, consider that the British military took “cowards” off the battlefield and executed them by firing squad as late as World War I.) It can be assumed that hunter-gatherers would treat their version of a welfare cheat or a dishonest banker as decisively as they would a coward. They may not kill him, but he would certainly be banished from the community. The fact that a group of people can cost American society several trillion dollars in losses—roughly one-quarter of that year’s gross domestic product—and not be tried for high crimes shows how completely de-tribalized the country has become.
Dishonest bankers and welfare or insurance cheats are the modern equivalent of tribe members who quietly steal more than their fair share of meat or other resources. That is very different from alpha males who bully others and openly steal resources. Among hunter-gatherers, bullying males are often faced down by coalitions of other senior males, but that rarely happens in modern society. For years, the United States Securities and Exchange Commission has been trying to force senior corporate executives to disclose the ratio of their pay to that of their median employees. During the 1960s, senior executives in America typically made around twenty dollars for every dollar earned by a rank-and-file worker. Since then, that figure has climbed to 300-to-1 among S&P 500 companies, and in some cases it goes far higher than that. The US Chamber of Commerce managed to block all attempts to force disclosure of corporate pay ratios until 2015, when a weakened version of the rule was finally passed by the SEC in a strict party-line vote of three Democrats in favor and two Republicans opposed.
In hunter-gatherer terms, these senior executives are claiming a disproportionate amount of food simply because they have the power to do so. A tribe like the !Kung would not permit that because it would represent a serious threat to group cohesion and survival, but that is not true for a wealthy country like the United States. There have been occasional demonstrations against economic disparity, like the Occupy Wall Street protest camp of 2011, but they were generally peaceful and ineffective. (The riots and demonstrations against racial discrimination that later took place in Ferguson, Missouri, and Baltimore, Maryland, led to changes in part because they attained a level of violence that threatened the civil order.) A deep and enduring economic crisis like the Great Depression of the 1930s, or a natural disaster that kills tens of thousands of people, might change America’s fundamental calculus about economic justice. Until then, the American public will probably continue to refrain from broadly challenging both male and female corporate leaders who compensate themselves far in excess of their value to society.
That is ironic, because the political origins of the United States lay in confronting precisely this kind of resource seizure by people in power. King George III of England caused the English colonies in America to rebel by trying to tax them without allowing them a voice in government. In this sense, democratic revolutions are just a formalized version of the sort of group action that coalitions of senior males have used throughout the ages to confront greed and abuse. Thomas Paine, one of the principal architects of American democracy, wrote a formal denunciation of civilization in a tract called Agrarian Justice: “Whether… civilization has most promoted or most injured the general happiness of man is a question that may be strongly contested,” he wrote in 1795. “[Both] the most affluent and the most miserable of the human race are to be found in the countries that are called civilized.”
When Paine wrote his tract, Shawnee and Delaware warriors were still attacking settlements just a few hundred miles from downtown Philadelphia. They held scores of white captives, many of whom had been adopted into the tribe and had no desire to return to colonial society. There is no way to know the effect on Paine’s thought process of living next door to a communal Stone-Age society, but it might have been crucial. Paine acknowledged that these tribes lacked the advantages of the arts and science and manufacturing, and yet they lived in a society where personal poverty was unknown and the natural rights of man were actively promoted.
In that sense, Paine claimed, the American Indian should serve as a model for how to eradicate poverty and bring natural rights back into civilized life.
“Just as there are mental states only possible in crowds, there are mental states only possible in privacy.”
Those are the words of Sarah Perry from Luxuriating in Privacy. I came across the quote from a David Chapman tweet. He then asks, “Loneliness epidemic—or a golden age of privacy?” With that lure, I couldn’t help but bite.
I’m already familiar with Sarah Perry’s writings at Ribbonfarm. There is even an earlier comment by me at the piece the quote comes from, although I had forgotten about it. In the post, she begins with links to some of her previous commentary, the first one (Ritual and the Consciousness Monoculture) having been my introduction to her work. I referenced it in my post Music and Dance on the Mind and it does indeed connect to the above thought on privacy.
In that other post by Perry, she discusses Keeping Together in Time by William H. McNeill. His central idea is “muscular bonding” that creates, maintains, and expresses a visceral sense of group-feeling and fellow-feeling. This can happen through marching, dancing, rhythmic movements, drumming, chanting, choral singing, etc (for example, see: Choral Singing and Self-Identity). McNeill quotes A. R. Radcliffe-Brown about the Andaman islanders: “As the dancer loses himself in the dance, as he becomes absorbed in the unified community, he reaches a state of elation in which he feels himself filled with energy or force immediately beyond his ordinary state, and so finds himself able to perform prodigies of exertion” (Kindle Locations 125-126).
The individual is lost, at least temporarily, an experience humans are drawn to in many forms. Individuality is tiresome, and we moderns feel compelled to take a vacation from it. Having forgotten earlier ways of being, maybe privacy is the closest most of us get to lowering our stressful defenses of hyper-individualistic pose and performance. The problem is that privacy so easily reinforces the very individualistic isolation that drains us of energy.
This might create the addictive cycle that Johann Hari discussed in Chasing the Scream, and it would relate to the topic of depression in his most recent book, Lost Connections. He makes a strong argument about the importance of relationships of intimacy, bonding, and caring (some communities have begun to take this issue seriously; others deem that what is required are even higher levels of change, radical and revolutionary). In particular, the rat park research is fascinating. The problem with addiction is that it simultaneously relieves the pain of our isolation while further isolating us. Or at least that is what happens in a punitive society with a weak community and culture of trust. For that reason, we should look to other cultures for comparison. In some traditional societies, there is a greater balance and freedom to choose. I specifically had the Piraha in mind, as described by Daniel Everett.
The Piraha are a prime example of how not all cultures have a dualistic conflict between self and community, between privacy and performance. Their communities are loosely structured and the individual is largely autonomous in how and with whom they use their time. They lack much in the way of formal social structure, since there are no permanent positions of hierarchical authority (e.g., no tribal council of elders), although any given individual might temporarily take a leadership position in order to help accomplish an immediate task. Nor do they have much in the way of ritual or religion. It isn’t an oppressive society.
Accordingly, Everett observes how laid back, relaxed, and happy they seem. Depression, anxiety, and suicide appear foreign to them. When he told them about a depressed family member who killed herself, the Piraha laughed because they assumed he was joking. There was no known case of suicide in the tribe. Even more interesting is that, growing up, the Piraha don’t exhibit transitional periods such as the terrible twos or teenage rebelliousness. They simply go from being weaned to joining adult activities with no one telling them what to do.
The modern perceived conflict between group and individual might not be a universal and intrinsic aspect of human society. But it does seem a major issue for WEIRD societies in particular. Maybe it has to do with how ego-bound our sense of identity is. The other thing the Piraha lack is a permanent, unchanging self-identity, because something such as a meeting with a spirit in the jungle might lead to a change of name and, to the Piraha, the person who went by the previous name is no longer there. They feel no need to defend their individuality because any given individual self can be set aside.
It is hard for Westerners, and Americans most of all, to imagine a society this far different. It is outside the mainstream capacity of imagining what is humanly possible. It’s similar to why so many people reject out of hand such theories as Julian Jaynes’ bicameral mind. Such worldviews simply don’t fit into what we know. But maybe this sense of conflict we cling to is entirely unnecessary. If so, why do we feel such conflict is inevitable? And why do we value privacy so highly? What is it that we seek from being isolated and alone? What is it that we think we have lost that needs to be regained? To help answer these questions, I’ll present a quote by Julian Jaynes that I included in Music and Dance on the Mind, from a book of his that Perry is familiar with:
“Another advantage of schizophrenia, perhaps evolutionary, is tirelessness. While a few schizophrenics complain of generalized fatigue, particularly in the early stages of the illness, most patients do not. In fact, they show less fatigue than normal persons and are capable of tremendous feats of endurance. They are not fatigued by examinations lasting many hours. They may move about day and night, or work endlessly without any sign of being tired. Catatonics may hold an awkward position for days that the reader could not hold for more than a few minutes. This suggests that much fatigue is a product of the subjective conscious mind, and that bicameral man, building the pyramids of Egypt, the ziggurats of Sumer, or the gigantic temples at Teotihuacan with only hand labor, could do so far more easily than could conscious self-reflective men.”
Considering that, it could be argued that privacy is part of the same social order, ideological paradigm, and reality tunnel that tires us out so much in the first place. Endlessly without respite, we feel socially compelled to perform our individuality. And even in retreating into privacy, we go on performing our individuality for our own private audience, as played out on the internalized stage of self-consciousness that Jaynes describes. That said, even though the cost is high, it leads to great benefits for society as a whole. Modern civilization wouldn’t be possible without it. The question is whether the costs outweigh the benefits and also whether the costs are sustainable or self-destructive in the long term.
As Eli wrote in the comments section to Luxuriating in Privacy: “Privacy isn’t an unalloyed good. As you mention, we are getting ever-increasing levels of privacy to “luxuriate” in. But who’s to say we’re not just coping with the change modernity constantly imposes on us? Why should we elevate the coping mechanism, when it may well be merely a means to lessen the pain of an unnecessarily “alienating” constructed environment.” And “isn’t the tiresomeness of having to model the social environment itself contingent on the structural precariousness of one’s place in an ambiguous, constantly changing status hierarchy?”
Still, I do understand where Perry is coming from, as I’m very much an introvert who values my alone time and can be quite jealous of my privacy, although I can’t say that close and regular social contact “fills me with horror.” Having lived alone for years in apartments and barely knowing my neighbors, I spend little time at my ‘home’ and instead choose to regularly socialize with my family at my parents’ house. Decades of depression have made me acutely aware of the double-edged sword of privacy.
Let me respond to some specifics of Perry’s argument. “Consider obesity,” she writes. “A stylized explanation for rising levels of overweight and obesity since the 1980s is this: people enjoy eating, and more people can afford to eat as much as they want to. In other words, wealth and plenty cause obesity.” Some comparisons of eating practices are insightful here. Not all modern societies with equal access to food have equal levels of obesity. Among many other health problems, obesity can result from stress, because our bodies prepare for challenging times by accumulating fat reserves. And if there is enough stress, studies have found this is epigenetically passed on to children.
As a contrast, consider the French culture surrounding food. The French don’t eat much fast food and don’t tend to eat or drink on the go. It is more common for them to sit down to enjoy their coffee in the morning, rather than putting it in a travel mug to drink on the way to work. Also, they are more likely to take long lunches in order to eat leisurely and typically do so with others. For the French, the expectation is that meals are to be enjoyed as a social experience, and so they organize their entire society accordingly. Even though they eat many foods that some consider unhealthy, they don’t have the same high rates of stress-related diseases as do Americans.
An even greater contrast comes from looking once again at the Piraha. They live in an environment of immense abundance, and it requires little work to attain sustenance. In a few hours of work, an individual can get enough food to feed an extended family for multiple meals. They don’t worry about going hungry and yet, for various reasons, will choose not to eat for extended periods of time when they wish to spend their time in other ways, such as relaxing or dancing. They impose a feast-and-fast lifestyle on themselves, a typical pattern for hunter-gatherers. As with the French, when the Piraha have a meal, it is very much a social event. Unsurprisingly, the Piraha are slim and trim, muscular and healthy. They don’t suffer from stress-related physical and mental conditions, certainly not obesity.
Perry argues that, “Analogized to privacy, perhaps the explanation of atomization is simply that people enjoy privacy, and can finally afford to have as much as they want. Privacy is an economic good, and people show a great willingness to trade other goods for more privacy.” Using Johann Hari’s perspective, I might rephrase it: Addiction is economically profitable within the hyper-individualism of capitalist realism, and people show a difficult-to-control craving that causes them to pay high costs to feed their addiction. Sure, temporarily alleviating the symptoms makes people feel better. But what is it a symptom of? That question is key to understanding. I’m persuaded that the issue at hand is disconnection, isolation, and loneliness. So much else follows from that.
Explaining the title of her post, Perry writes that: “One thing that people are said to do with privacy is to luxuriate in it. What are the determinants of this positive experience of privacy, of privacy experienced as a thing in itself, rather than through violation?” She goes on to describe the features of privacy, various forms of personal space and enclosure. Of course, Julian Jaynes argued that the ultimate privacy is the construction of individuality itself, the experience of space metaphorically internalized and interiorized. Further development of privacy, however, is a rather modern invention. For example, it wasn’t until recent centuries that private bedrooms became common, having been popularized in Anglo-American culture by Quakers. Before that, full privacy was a rare experience, far from being considered a human necessity or human right.
But we have come to take privacy for granted; not talking about certain details is itself a central part of privacy. “Everybody knows that everybody poops. Still, you’re not supposed to poop in front of people. The domain of defecation is tacitly edited out of our interactions with other people: for most social purposes, we are expected to pretend that we neither produce nor dispose of bodily wastes, and to keep any evidence of such private. Polite social relations exclude parts of reality by tacit agreement; scatological humor is a reminder of common knowledge that is typically screened off by social agreement. Sex and masturbation are similar.”
Defecation is a great example. There is nothing universal about the privatization of the act of pooping. In early Europe, relieving oneself in public was common and considered well within social norms. It was a slow ‘civilizing’ process to teach people to be ashamed of bodily functions, even simple things like farting and belching in public (there are a number of interesting books on the topic). I was intrigued by Susan P. Mattern’s The Prince of Medicine. She describes how almost everything in the ancient world was a social experience. Even taking a shit was an opportunity to meet and chat with one’s family, friends, and neighbors. They apparently felt no drain of energy or need to perform in their social way of being in the world. It was relaxed and normal to them, simply how they lived, and they knew nothing else.
Also, sex and masturbation haven’t always been exclusively private acts. We have little knowledge of sex in the archaic world. Jaynes noted that sexuality wasn’t treated as anything particularly concerning and worrisome during the bicameral era. Obsession with sex, positive or negative, more fully developed during the Axial Age. As late as Feudalism, heavily Christianized Europe offered little opportunity for privacy and maintained a relatively open attitude about sexuality during many public celebrations, specifically Carnival, and they spent an amazing amount of their time in public celebrations. Barbara Ehrenreich describes this ecstatic communality in Dancing in the Streets. Like the Piraha, these earlier Europeans had a more social and fluid sense of identity.
Let me finish by responding to Perry’s conclusion: “As I wrote in A Bad Carver, social interaction has increasingly become “unbundled” from other things. This may not be a coincidence: it may be that people have specifically desired more privacy, and the great unbundling took place along that axis especially, in response to demand. Modern people have more room, more autonomy, more time alone, and fewer social constraints than their ancestors had a hundred years ago. To scoff at this luxury, to call it “alienation,” is to ignore that it is the choices of those who are allegedly alienated that create this privacy-friendly social order.”
There is no doubt what people desire. In any given society, most people desire whatever they are acculturated to desire. Example after example of this can be found in social science research, the anthropological literature, and classical studies. It’s not obvious to me that there is any evidence that modern people have fewer social constraints. What is clear is that we have different social constraints, and that difference seems to have led to immense stress, anxiety, depression, and fatigue. Barbara Ehrenreich discusses the rise in depression in particular, as have others such as Mark Fisher in his work on capitalist realism (I quote him and others here). The studies on WEIRD cultures are also telling (see: Urban Weirdness and Dark Triad Domination).
The issue isn’t simply what choices we make but what choices we are offered and denied, what choices we can and cannot perceive or even imagine. And that relates to how we lose contact with the human realities of other societies that embody other possibilities not chosen or even considered within the constraints of our own. We are disconnected not only from others within our society. Our WEIRD monocultural dominance has isolated us also from other expressions of human potential.
We luxuriate in privacy because our society offers us few other choices, like a choice between junk food or starvation, in which case junk food tastes delicious. For most modern Westerners, privacy is nothing more than a temporary escape from an overwhelming world. But what we most deeply hunger for is genuine connection.
We live in a liberal age and the liberal paradigm dominates, not just for liberals but for everyone. Our society consists of nothing other than liberalism and reactions to liberalism. And at the heart of it all is individualism. But through the cracks, other possibilities can be glimpsed.
One challenging perspective is that of hyperobjects, a theory proposed by Timothy Morton — as he writes: “Hyperobjects pose numerous threats to individualism, nationalism, anti-intellectualism, racism, speciesism, anthropocentrism, you name it. Possibly even capitalism itself.”
Evander Price summarizes the origin of the theory and the traits of hyperobjects (Hyperobjects & Dark Ecology). He breaks it down into seven points. The last three refer to individuality — here they are (with some minor editing):
5) Individuality is lost. We are not separate from other things. (This is Object Oriented Ontology) — Morton calls this entangledness. “Knowing more about hyperobjects is knowing more about how we are hopelessly fastened to them.” A little bit like Ahab all tangled up in the lines of Moby-Dick.
6) “Utilitarianism is deeply flawed when it comes to working with hyperobjects. The simple reason why is that hyperobjects are profoundly futural.” (135) <– I’ve been arguing against utilitarianism for a while now within this line of thinking; this is because utilitarianism, the idea that moral goodness is measured by whether an action or idea increases the overall happiness of a given community, is always embedded within a temporal framework, outside of which the collective ‘happiness’ of a given individual or community is not considered. Fulfilling the greatest happiness for the current generation is always dependent on taking resources now [from] future generations. What is needed is chronocritical utilitarianism, but that is anathema to the radical individuality of utilitarianism.
7) Undermining — the opposite of hyperobjecting. From Harman. “Undermining is when things are reduced to smaller things that are held to be more real. The classic form of undermining in contemporary capitalism is individualism: ‘There are only individuals and collective decisions are ipso facto false.’” <– focusing on how things affect me because I am the most important is essentially undermining that I exist as part of a community, and a planet.
And from the book on the topic:
Hyperobjects: Philosophy and Ecology after the End of the World by Timothy Morton, Kindle Locations 427-446
The ecological thought that thinks hyperobjects is not one in which individuals are embedded in a nebulous overarching system, or conversely, one in which something vaster than individuals extrudes itself into the temporary shapes of individuals. Hyperobjects provoke irreductionist thinking, that is, they present us with scalar dilemmas in which ontotheological statements about which thing is the most real (ecosystem, world, environment, or conversely, individual) become impossible. 28 Likewise, irony qua absolute distance also becomes inoperative. Rather than a vertiginous antirealist abyss, irony presents us with intimacy with existing nonhumans.
The discovery of hyperobjects and OOO are symptoms of a fundamental shaking of being, a being-quake. The ground of being is shaken. There we were, trolling along in the age of industry, capitalism, and technology, and all of a sudden we received information from aliens, information that even the most hardheaded could not ignore, because the form in which the information was delivered was precisely the instrumental and mathematical formulas of modernity itself. The Titanic of modernity hits the iceberg of hyperobjects. The problem of hyperobjects, I argue, is not a problem that modernity can solve. Unlike Latour then, although I share many of his basic philosophical concerns, I believe that we have been modern, and that we are only just learning how not to be.
Because modernity banks on certain forms of ontology and epistemology to secure its coordinates, the iceberg of hyperobjects thrusts a genuine and profound philosophical problem into view. It is to address these problems head on that this book exists. This book is part of the apparatus of the Titanic, but one that has decided to dash itself against the hyperobject. This rogue machinery— call it speculative realism, or OOO— has decided to crash the machine, in the name of a social and cognitive configuration to come, whose outlines are only faintly visible in the Arctic mist of hyperobjects. In this respect, hyperobjects have done us a favor. Reality itself intervenes on the side of objects that from the prevalent modern point of view— an emulsion of blank nothingness and tiny particles— are decidedly medium-sized. It turns out that these medium-sized objects are fascinating, horrifying, and powerful.
For one thing, we are inside them, like Jonah in the Whale. This means that every decision we make is in some sense related to hyperobjects. These decisions are not limited to sentences in texts about hyperobjects.
Kindle Locations 467-472
Hyperobjects are a good candidate for what Heidegger calls “the last god,” or what the poet Hölderlin calls “the saving power” that grows alongside the dangerous power. 31 We were perhaps expecting an eschatological solution from the sky, or a revolution in consciousness— or, indeed, a people’s army seizing control of the state. What we got instead came too soon for us to anticipate it. Hyperobjects have dispensed with two hundred years of careful correlationist calibration. The panic and denial and right-wing absurdity about global warming are understandable. Hyperobjects pose numerous threats to individualism, nationalism, anti-intellectualism, racism, speciesism, anthropocentrism, you name it. Possibly even capitalism itself.
Kindle Locations 2712-2757
Marxists will argue that huge corporations are responsible for ecological damage and that it is self-destructive to claim that we are all responsible. Marxism sees the “ethical” response to the ecological emergency as hypocrisy. Yet according to many environmentalists and some anarchists, in denying that individuals have anything to do with why Exxon pumps billions of barrels of oil, Marxists are displacing the blame away from humans. This view sees the Marxist “political” response to the ecological emergency as hypocrisy. The ethics– politics binary is a true differend: an opposition so radical that it is in some sense insuperable. Consider this. If I think ethics, I seem to want to reduce the field of action to one-on-one encounters between beings. If I think politics, I hold that one-on-one encounters are never as significant as the world of relations (economic, class, moral, and so on) in which they take place. These two ways of talking form what Adorno would have called two halves of a torn whole, which nonetheless don’t add up together. Some nice compromise “between” the two is impossible. Aren’t we then hobbled when it comes to issues that affect society as a whole— nay the biosphere as a whole— yet affect us all individually (I have mercury in my blood, and ultraviolet rays affect me unusually strongly)?
Yet the deeper problem is that our (admittedly cartoonish) Marxist and anarchist see the problem as hypocrisy. Hypocrisy is denounced from the standpoint of cynicism. Both the Marxist and the anti-Marxist are still wedded to the game of modernity, in which she who grabs the most cynical “meta” position is the winner: Anything You Can Do, I Can Do Meta. Going meta has been the intellectual gesture par excellence for two centuries. I am smarter than you because I can see through you. You are smarter than they are because you ground their statements in conditions of possibility. From a height, I look down on the poor fools who believe what they think. But it is I who believes, more than they. I believe in my distance, I believe in the poor fools, I believe they are deluded. I have a belief about belief: I believe that belief means gripping something as tightly as possible with my mind. Cynicism becomes the default mode of philosophy and of ideology. Unlike the poor fool, I am undeluded— either I truly believe that I have exited from delusion, or I know that no one can, including myself, and I take pride in this disillusionment.
This attitude is directly responsible for the ecological emergency, not the corporation or the individual per se, but the attitude that inheres both in the corporation and in the individual, and in the critique of the corporation and of the individual. Philosophy is directly embodied in the size and shape of a paving stone, the way a Coca Cola bottle feels to the back of my neck, the design of an aircraft, or a system of voting. The overall guiding view, the “top philosophy,” has involved a cynical distance. It is logical to suppose that many things in my world have been affected by it— the way a shopping bag looks, the range of options on the sports channel, the way I think Nature is “over yonder.” By thinking rightness and truth as the highest possible elevation, as cynical transcendence, I think Earth and its biosphere as the stage set on which I prance for the amusement of my audience. Indeed, cynicism has already been named in some forms of ideology critique as the default mode of contemporary ideology. 48 But as we have seen, cynicism is only hypocritical hypocrisy.
Cynicism is all over the map: left, right, green, indifferent. Isn’t Gaian holism a form of cynicism? One common Gaian assertion is that there is something wrong with humans. Nonhumans are more Natural. Humans have deviated from the path and will be wiped out (poor fools!). No one says the same about dolphins, but it’s just as true. If dolphins go extinct, why worry? Dolphins will be replaced. The parts are greater than the whole. A mouse is not a mouse if it is not in the network of Gaia. 49 The parts are replaceable. Gaia will replace humans with a less defective component. We are living in a gigantic machine— a very leafy one with a lot of fractals and emergent properties to give it a suitably cool yet nonthreatening modern aesthetic feel.
It is fairly easy to discern how refusing to see the big picture is a form of what Harman calls undermining. 50 Undermining is when things are reduced to smaller things that are held to be more real. The classic form of undermining in contemporary capitalism is individualism: “There are only individuals and collective decisions are ipso facto false.” But this is a problem that the left, and environmentalism more generally, recognize well.
The blind spot lies in precisely the opposite direction: in how common ideology tends to think that bigger is better or more real. Environmentalism, the right, and the left seem to have one thing in common: they all hold that incremental change is a bad thing. Yet doesn’t the case against incrementalism, when it comes to things like global warming, amount to a version of what Harman calls overmining, in the domain of ethics and politics? Overmining is when one reduces a thing “upward” into an effect of some supervenient system (such as Gaia or consciousness). 51 Since bigger things are more real than smaller things, incremental steps will never accomplish anything. The critique of incrementalism laughs at the poor fools who are trying to recycle as much as possible or drive a Prius. By postponing ethical and political decisions into an idealized future, the critique of incrementalism leaves the world just as it is, while maintaining a smug distance toward it. In the name of the medium-sized objects that coexist on Earth (aspen trees, polar bears, nematode worms, slime molds, coral, mitochondria, Starhawk, and Glenn Beck), we should forge a genuinely new ethical view that doesn’t reduce them or dissolve them.