“Yes, tea banished the fairies.”

“There have been numerous explanations for why the fairies disappeared in Britain – education, advent of electrical lighting, evangelical religion. But one old man in the village of Alves, Moray, Scotland knew the real reason in 1851: tea drinking. Yes, tea banished the fairies.”

The historian Owen Davies wrote this in referring to a passage from an old source. He didn’t mention where it came from. A search on Google Books turns up multiple results. The earliest is supposedly on page 624 of the 1850 Family Herald – Volumes 8-9, but no text for it is available online. Several books from the 1850s to 1880s quote it. (1)

Below is the totality of what Davies shared. It might have originally been part of a longer passage, but it is all that I could find from online sources. It’s a short and intriguing account.

“How do you account,” said a north country minister of the last age (the late Rev. Mr. M’Bean, of Alves,) to a sagacious old elder of his session, “for the almost total disappearance of the ghosts and fairies that used to be common in your young days?” “Tak’ my word for’t, minister,” replied the old man, “it’s a’ owing to the tea; whan the tea cam’ in, ghaists an’ fairies gaed out. Weel do I mind whan at a’ our neebourly meetings — bridals, christenings, lyke-wakes, an’ the like — we entertained ane anither wi’ rich nappy ale; an’ when the verra dowiest o’ us used to get warm i’ the face, an’ a little confused i’ the head, an’ weel fit to see amaist onything when on the muirs on yer way hame. But the tea has put out the nappy; an’ I have remarked that by losing the nappy we lost baith ghaists and fairies.”

Will Hawkes noted that, “‘nappy’ ale meant strong ale.” In response to Davies, James Evans suggested that, “One thing which I haven’t seen mentioned here is that there is an excellent chance that the beer being produced in this region, at this time, was mildly hallucinogenic.” And someone following that asked, “Due to ergot?” Now that makes it even more intriguing to consider. There might have been good reason people used to see more apparitions. Whether or not ergot was involved, we do know that in the past all kinds of herbs were put into beers for nutritional and medicinal purposes but also maybe for the effect they had on the mind.

Let me make some connections. Alcohol is a particular kind of drug. Chuck Pezeshki argues that, “alcohol is much more of a ‘We’ drug when used in moderation, than an ‘I’ drug” (Leadership for Creativity Isn’t all Child’s Play). He adds that, “There’s a reason for the old saying ‘when the pub closes, the revolution starts!’” Elsewhere, he offers the contrast that, “Alcohol on average is pro-empathetic, sugar anti-empathetic” (The Case Against Sugar — a True Psychodynamic Meta-Review).

Think about it. Both tea and sugar were foods introduced through colonialism. It took a while for them to become widespread; at first they were accessible only to the monied classes, including the intellectual elite. Some have argued that these stimulants are what fueled the Enlightenment Age. And don’t forget that tea played a key role in instigating the American Revolution. Changes in diet often go hand in hand with changes in culture (see below).

There are those like Terence McKenna who see psychedelics as having played a central role in shaping the human mind much earlier. This is indicated by the wide use of psychedelics by indigenous populations all over the planet and by evidence of their use among ancient peoples. Psychedelics preceded civilization, and it seems that their use declined as civilization further developed. What replaced psychedelics over time were the addictive stimulants. That other variety of drug has a far different effect on the human mind and culture.

The change began millennia ago. But the takeover of the addictive mentality only seems to have come fully into its own in recent centuries. The popularizing of caffeinated drinks in the 19th century is a key example of the modernizing of the mind. People didn’t simply have more active imaginations in the past. They really did live in a cultural worldview where apparitions were common, maybe in the way that Julian Jaynes proposed that Bronze Age people heard voices. These weren’t mere hallucinations. They were central to the lived reality of a shared culture.

In traditional societies, alcohol was used for social gatherings. It brought people together and, maybe combined with other substances, made possible a certain experience of the world. With the loss of that older sense of communal identity, there was the rise of the individual mindset isolated by addictive stimulants. This is what has fueled all of modernity. We’ve been buzzing ever since. Stimulants broke the spell of the fairies only to put us under a different spell, that of the demiurgic ego-consciousness.

“The tea pots full of warm water,” as Samuel Tissot put it in his 1768 An Essay on Diseases Incident to Literary and Sedentary Persons, “I see upon their tables, put me in mind of Pandora’s box, from whence all sorts of evils issue forth, with this difference however, that they do not even leave the hopes of relief behind them; but, on the contrary, by inducing hypochondriac complaints, diffuse melancholy and despair.” (2)

* * *

(1) The Country Gentleman – Vol. XII No. 23 (1858), William Hopkin’s “The Cruise of the Betsey” from Fraser’s Magazine for Town and Country – Volume 58 (1858) and from Littell’s Living Age – Volume 59 (1858), John William Kirton’s One Thousand Temperance Anecdotes [&c.] (1868), John William Kirton’s A Second Thousand of Temperance Anecdotes (1877), The Church of England Temperance Chronicle – No. 42 Vol. VIII (1880), and The Guernsey Magazine – Vol. X No. 12 (1882).

“There is another kind of drink not less hurtful to studious men than wine; and which they usually indulge in more freely; I mean warm liquors [teas], the use of which is become much more frequent since the end of the last century. A fatal prejudice insinuated itself into physic about this period. A new spirit of enthusiasm had been excited by the discovery of the circulation: it was thought necessary for the preservation of health to facilitate it as much as possible, by supplying a great degree of fluidity to the blood, for which purpose it was advised to drink a large quantity of warm water. Cornelius Bontekoe, a Dutch physician, who died afterwards at Berlin, first physician to the elector of Brandenburgh, published in 1679 a small treatise in Dutch, upon tea, coffee, and chocolate, in which he bestows the most extravagant encomiums on tea, even when taken to the greatest excess, as far as one or two hundred cups in a day, and denies the possibility of its being hurtful to the stomach. This error spread itself with surprising rapidity all over the northern part of Europe; and was attended with the most grievous effects. The æra of its introduction is marked by an unhappy revolution in the account of the general state of health at that time. The mischief was soon noticed by accurate observers. M. Duncan, a French physician settled at Rotterdam, published a small work in 1705, wherein we find, amidst a great deal of bad theory, some useful precepts against the use of hot liquors (I). M. Boerhaave strongly opposed this pernicious custom; all his pupils followed his example, and all our eminent physicians are of the same opinion. The prejudice has at last been prevented from spreading, and within these few years seems to have been rather less prevalent (m); but unfortunately it subsists still among valetudinarians, who are induced to continue these pernicious liquors, upon the supposition that all their disorders proceed from a thickness of blood. The tea-pots full of warm water I see upon their tables, put me in mind of Pandora’s box, from whence all sorts of evils issue forth, with this difference however, that they do not even leave the hopes of relief behind them; but, on the contrary, by inducing hypochondriac complaints, diffuse melancholy and despair. […]

“The danger of these drinks is considerably increased, as I have before observed, by the properties of the plants infused in them; the most fatal of these when too often or too freely used, is undoubtedly the tea, imported to us since near two centuries past from China and Japan, which has so much increased diseases of a languid nature in the countries where it has been introduced, that we may discover, by attending to the health of the inhabitants of any city, whether they drink tea or not; and I should imagine one of the greatest benefits that could accrue to Europe, would be to prohibit the importation of this famous leaf, which contains no essential parts besides an acrid corrosive gum, with a few astringent particles (o), imparting to the tea when strong, or when the infusion has stood a long time and grown cold, a styptic taste, slightly felt by the tongue, but which does not prevent the pernicious effects of the warm water it is drenched in. These effects are so striking, that I have often seen very strong and healthy men, seized with faintness, gapings, and uneasiness, which lasted for some hours after they had drank a few cups of tea fasting, and sometimes continued the whole day. I am sensible that these bad effects do not shew themselves so plainly in every body, and that there are some who drink tea every day, and remain still in good health; but these people drink it with moderation. Besides, the non-existence of any danger cannot be argued from the instances of some few who have been fortunate enough to escape it.

“The effects of coffee differing from those of tea, it cannot be placed in the same class; for coffee, although made with warm water, is not so pernicious for this reason, as it is on account of its being a powerful stimulus, producing strong irritations in the fibres by its bitter aromatic oil. This oil combined as it is with a kind of very nourishing meal, and of easy digestion, would make this berry of great consequence, in pharmacy, as one of the bitter stomachics, among which it would be the most agreeable, as well as one of the most active. This very circumstance is sufficient to interdict the common use of it, which must be exceedingly hurtful. A continual irritation of the fibres of the stomach must at length destroy their powers; the mucus is carried off, the nerves are irritated and acquire singular spasms, strength fails, hectic fevers come on with a train of other diseases, the cause of which is industriously concealed, and is so much the more difficult to eradicate, as this sharpness united with an oil seems not only to infect the fluids, but even to adhere to the vessels themselves. On the contrary, when seldom taken, it exhilarates, breaks down the slimy substances in the stomach, quickens its action, dispels the load and pains of the head, proceeding from interrupted digestions, and even clears the ideas and sharpens the understanding, if we may credit the accounts of men of letters, who have therefore used it very freely. But let me be permitted to ask, whether Homer, Thucydides, Plato, Xenophon, Lucretius, Virgil, Ovid, Horace, Petronius, to which I may venture to add Corneille and Moliere, whose masterpieces will ever be the delight of the remotest posterity, let me ask, I say, whether they drank coffee? Milk rather takes off from the irritation occasioned by coffee, but still does not entirely prevent all its pernicious effects, for even this mixture has some disadvantages peculiar to itself. Men of learning, therefore, who are prudent, ought in general to keep coffee as their favourite medicine, but should never use it as a common drink. The custom is so much the more dangerous, as it soon degenerates into a habit of necessity, which few men have the resolution to deprive themselves of. We are sensible of the poison, and swallow it because it is palatable.”

* * *

The Agricultural Mind

Addiction, to food or drugs or anything else, is a powerful force. And it is complex in what it affects, not only physiologically and psychologically but also on a social level. Johann Hari offers a great analysis in Chasing the Scream. He makes the case that addiction is largely about isolation and that the addict is the ultimate individual. It stands out to me that addiction and addictive substances have increased over the course of civilization. The growing of poppies, sugar, etc. came later on in civilization, as did the production of beer and wine (by the way, alcohol releases endorphins, sugar causes a serotonin high, and both activate the hedonic pathway). Also, grain and dairy were slow to catch on as a large part of the diet. Until recent centuries, most populations remained dependent on animal foods, including wild game. Americans, for example, ate large amounts of meat, butter, and lard from the colonial era through the 19th century (see Nina Teicholz, The Big Fat Surprise; passage quoted in full at Malnourished Americans). In 1900, Americans on average were getting only 10% of their diet as carbs, and sugar intake was minimal.

Something else to consider is that low-carb diets can alter how the body and brain function. That is even more true if combined with intermittent fasting and the restricted eating times that would have been more common in the past. Taken together, earlier humans would have spent more time in ketosis (fat-burning mode, as opposed to glucose-burning), which dramatically affects human biology. The further one goes back in history, the greater the amount of time people probably spent in ketosis. One difference with ketosis is that cravings and food addictions disappear. It’s a non-addictive or maybe even anti-addictive state of mind. Many hunter-gatherer tribes can go days without eating and it doesn’t appear to bother them, and that is typical of ketosis. This was also observed of Mongol warriors who could ride and fight for days on end without tiring or needing to stop for food. What is also different about hunter-gatherers and similar traditional societies is how communal they are or were and how much more expansive their identities are in belonging to a group. Anthropological research shows how hunter-gatherers often have a sense of personal space that extends into the environment around them. What if that isn’t merely cultural but has something to do with how their bodies and brains operate? Maybe diet even plays a role. […]

It is an onslaught taxing our bodies and minds. And the consequences are worsening with each generation. What stands out to me about autism, in particular, is how isolating it is. The repetitive behavior and focus on objects resonates with extreme addiction. As with other conditions influenced by diet (schizophrenia, ADHD, etc.), both autism and addiction block normal human relating by creating an obsessive mindset that, in the most extreme forms, blocks out all else. I wonder if all of us moderns are simply expressing milder varieties of this biological and neurological phenomenon. And this might be the underpinning of our hyper-individualistic society, with the earliest precursors showing up in the Axial Age following what Julian Jaynes hypothesized as the breakdown of the much more other-oriented bicameral mind. What if our egoic consciousness with its rigid psychological boundaries is the result of our food system, as part of the civilizational project of mass agriculture?

The Spell of Inner Speech

This person said a close comparison was being in the zone, sometimes referred to as runner’s high. That got me thinking about various factors that can shut down the normal functioning of the egoic mind. Extreme physical activity forces the mind into a mode that isn’t experienced often or extensively by people in the modern world, a state of mind combining exhaustion, endorphins, and ketosis — a state of mind, on the other hand, that would have been far from uncommon before modernity, with some arguing ketosis was once the normal mode of neurocognitive functioning. Related to this, it has been argued that the abstractions of Enlightenment thought were fueled by the imperial sugar trade, maybe the first time a permanent non-ketogenic mindset was possible in the Western world. What sugar (i.e., glucose), especially when mixed with the other popular trade items of tea and coffee, makes possible is thinking and reading (i.e., inner experience) for long periods of time without mental tiredness. During the Enlightenment, the modern mind was born out of a drugged-up buzz. That is one interpretation. Whatever the cause, something changed.

Also, in the comment section of that article, I came across a perfect description of self-authorization. Carla said that, “There are almost always words inside my head. In fact, I’ve asked people I live with to not turn on the radio in the morning. When they asked why, they thought my answer was weird: because it’s louder than the voice in my head and I can’t perform my morning routine without that voice.” We are all like that to some extent. But for most of us, self-authorization has become so natural as to largely go unnoticed. Unlike Carla, the average person learns to hear their own inner voice despite external sounds. I’m willing to bet that, if tested, Carla would show results of having thin mental boundaries and probably an accordingly weaker egoic will to force her self-authorization onto situations. Some turn to sugar and caffeine (or else nicotine and other drugs) to help shore up rigid thick boundaries and maintain focus in this modern world filled with distractions — likely a contributing factor to drug addiction.

The Crisis of Identity

Prior to talk of neurasthenia, the exhaustion model of health, portrayed as waste and depletion, took hold in Europe centuries earlier (e.g., anti-masturbation panics) and had its roots in the humoral theory of bodily fluids. It has long been understood that food, specifically the macronutrients (carbohydrate, protein, & fat), affects mood and behavior — see the early literature on melancholy. During feudalism, food laws were used as a means of social control, such that in one case meat was prohibited prior to Carnival because of its energizing effect, which it was thought could lead to rowdiness or even revolt (Ken Albala & Trudy Eden, Food and Faith in Christian Culture).

There does seem to be a connection between an increase of intellectual activity and an increase of carbohydrates and sugar, a connection first appearing during the early colonial era that set the stage for the Enlightenment. It was the agricultural mind taken to a whole new level. Indeed, a steady flow of glucose is one way to fuel extended periods of brain work, such as reading and writing for hours on end and late into the night — the reason college students to this day will down sugary drinks while studying. Because of trade networks, Enlightenment thinkers were buzzing on the suddenly much more available simple carbs and sugar, with an added boost from caffeine and nicotine. The modern intellectual mind was drugged-up right from the beginning, and over time it took its toll. Such dietary highs inevitably lead to ever greater crashes of mood and health. Interestingly, Dr. Silas Weir Mitchell, who advocated the ‘rest cure’ and ‘West cure’ in treating neurasthenia and other ailments, additionally used a “meat-rich diet” for his patients (Ann Stiles, Go rest, young man). Other doctors of that era were even more direct in using specifically low-carb diets for various health conditions, often for obesity, which was also a focus of Dr. Mitchell.

Diets and Systems

Chuck Pezeshki is a published professor of engineering in the field of design theory and high-performance work teams. I can claim no specialty here, as I lack even a college degree. Still, Pezeshki and I have much in common. Like me, he prefers a systems view, as he summarizes on his blog’s About page: “As we relate, so we think.” He states that, “My work exists at, and reaches far above the micro-neuroscience level, into larger systemic social organization.”

An area of focus we share is diet and health, and we’ve come to similar conclusions. Like me, he sees a relationship between sugar, obesity, addiction, trauma, individuality, empathy issues, authoritarianism, etc. (and inequality comes up as well; by the way, my favorite perspective on inequality in this context is Keith Payne’s The Broken Ladder). And like me, he is informed by a low-carb and ketogenic approach that was initially motivated by weight loss. Maybe these commonalities are unsurprising, as we do have some common intellectual interests.

Much of his blog is about what he calls “structural memetics” involving value memes (v-memes). Even though I haven’t focused as much on value memes recently, Ken Wilber’s version of spiral dynamics shaped my thought to some extent (that kind of thing being what brought me to Pezeshki’s blog in the first place). As important, we are both familiar with Bruce K. Alexander’s research on addiction, although my familiarity comes from Johann Hari’s writings (I learned of the rat park research in Chasing the Scream). A more basic link in our views comes from each of us having read the science journalism of Gary Taubes and Nina Teicholz, along with some influence from Dr. Jason Fung. He has also read Dr. Robert H. Lustig, a leading figure in this area who I know of through the work of others.

Related to diet, Pezeshki does bring up the issue of inflammation. As I originally came around to my present diet from a paleo viewpoint, I became familiar with the approach of functional medicine that puts inflammation as a central factor (Essentialism On the Decline). Inflammation is a bridge between the physiological and the psychological, the individual and the social. Where and how inflammation erupts within the individual determines how a disease condition or rather a confluence of symptoms gets labeled and treated, even if the fundamental cause originated elsewhere, maybe in the ‘external’ world (socioeconomic stress, transgenerational trauma, environmental toxins, parasites because of lack of public sanitation, etc.). Inflammation is linked to leaky gut, leaky brain, arthritis, autoimmune disorders, mood disorders, ADHD, autism, schizophrenia, impulsivity, short-term thinking, addiction, aggression, etc. — and such problems increase under high inequality.

There are specific examples to point to. Diabetes and mood disorders co-occur. There is the connection of depression and anhedonia, involving the reward circuit and pleasure, which in turn can be affected by inflammation. Also, inflammation can lead to changes in glutamate in depression, similar to the glutamate alterations in autism from diet and microbes, and that is significant considering that glutamate is not only a major neurotransmitter but also a common food additive. Dr. Roger McIntyre writes that, “MRI scans have shown that if you make someone immune activated, the hypervigilance center is activated, activity in the motoric region is reduced, and the person becomes withdrawn and hypervigilant. And that’s what depression is. What’s the classic presentation of depression? People are anxious, agitated, and experience a lack of spontaneous activity and increased emotional withdrawal” (Inflammation, Mood Disorders, and Disease Model Convergence). Inflammation is a serious condition and, in the modern world, quite pervasive. The implications of this are not to be dismissed.

I’ve been thinking about this kind of thing for years now. But this is the first time I’ve come across someone else making these same connections, at least to this extent and with such a large context. The only thing I would add or further emphasize is that, from a functional medicine perspective (common among paleo, low-carb, and keto advocates), the body itself is a system within the larger systems of society and the environment — a web of connections not only in which we are enmeshed but which forms everything we are, which is to say we aren’t separate from it. Personal health is public health is environmental health, and think of that in relation to the world of hyperobjects overlapping with hypersubjectivity (as opposed to the isolating psychosis of hyper-individualism):

“We shouldn’t personally identify with our health problems and struggles. We aren’t alone nor isolated. The world is continuously affecting us, as we affect others. The world is built on relationships, not just between humans and other species but involving everything around us — what some describe as embodied, embedded, enacted, and extended (we are hypersubjects among hyperobjects). The world that we inhabit, that world inhabits us, our bodies and minds. There is no world “out there” for there is no possible way for us to be outside the world. Everything going on around us shapes who we are, how we think and feel, and what we do — most importantly, shapes us as members of a society and as parts of a living biosphere, a system of systems all the way down. The personal is always the public, the individual always the collective, the human always the more than human” (The World Around Us).

In its earliest meaning, diet meant a way of life, not merely an eating regimen. And for most of history, diet was rooted in cultural identity and communal experience. It reinforced a worldview and social order. This allows diet to be a perfect lens through which to study societal patterns and changes over time.

* * *

Relevant posts by Chuck Pezeshki:

Weight Loss — it’s in the V-Memes
Weight Loss — It’s in the v-Memes (II)
Weight Loss by the V-Memes — (III) What’s the v-Meme stack look like?
Weight Loss by the V-Memes (IV) or Channeling your Inner Australopithecine
Weight Loss by the v-Memes (V) – Cutting out Sugar — The Big Psycho-Social-Environmental Picture
The Case Against Sugar — a True Psychodynamic Meta-Review
Quickie Post — the Trans-Cultural Diabolical Power of Sugar
How Health Care Deprivation and the Consequences of Poor Diet is Feeding Contemporary Authoritarianism – The Trump ACA Debacle
Quickie Post — Understanding the Dynamics of Cancer Requires a Social Structure that can Create Cellular Dynamics
Finding a Cure for Cancer — or Why Physicists May Have the Upper Hand
Quickie Post — A Sober Utopia
Rat Park — Implications for High-Productivity Environments — Part I
Rat Park — Implications for High-Productivity Environments — Part II
Leadership for Creativity Isn’t all Child’s Play
Relational Disruption in Organizations
The Neurobiology of Education and Critical Thinking — How Do We Get There?
What Caused the Enlightenment? And What Threatens to Unravel It?

* * *

Relevant posts from my own blog:

It’s All Your Fault, You Fat Loser!
The World Around Us
The Literal Metaphor of Sickness
Health From Generation To Generation
The Agricultural Mind
Spartan Diet
Ketogenic Diet and Neurocognitive Health
Fasting, Calorie Restriction, and Ketosis
Like water fasts, meat fasts are good for health.
The Creed of Ancel Keys
Dietary Dictocrats of EAT-Lancet
Eliminating Dietary Dissent
Cold War Silencing of Science
Essentialism On the Decline

There is also some discussion of diet in this post and the comments section:

Western Individuality Before the Enlightenment Age

And related to that:

Low-Carb Diets On The Rise

“It has become an overtly ideological fight, but maybe it always was. The politicization of diet goes back to the early formalized food laws that became widespread in the Axial Age and regained centrality in the Middle Ages, which for Europeans meant a revival of ancient Greek thought, specifically that of Galen. And it is utterly fascinating that pre-scientific Galenic dietary philosophy has since taken on scientific garb and gets peddled to this day, as a main current in conventional dietary thought (see Food and Faith in Christian Culture ed. by Ken Albala and Trudy Eden […]; I made this connection in realizing that Stephen Le, a biological anthropologist, was without awareness parroting Galenic thought in his book 100 Million Years of Food).”

* * *

Mental health, Psychopathy, Addiction, Inflammation, Diet, Nutrition, etc:

Dark triad traits and health outcomes: An exploratory study
by Jasna Hudek-Knezevic et al

Brain chemical is reward for psychopathic traits
by Ewen Callaway

Psychopaths’ brains wired to seek rewards, no matter the consequences
from Science Daily

Psychopathic traits modulate brain responses to drug cues in incarcerated offenders
by Lora M. Cope et al

Links Between Substance Abuse and Antisocial Personality Disorder (ASPD)
from Promises Behavioral Health

Antisocial Personality Disorder and depression in relation to alcoholism: A community-based sample
by Laura C. Holdcraft et al

More inflammation but less brain-derived neurotrophic factor in antisocial personality disorder
by Tzu-Yun Wang et al

High Neuroticism and Low Conscientiousness Are Associated with Interleukin-6
by Angelina Sutin

Aggressive and impulsive personality traits and inflammatory markers in cerebrospinal fluid and serum: Are they interconnected?
by S. Bromander et al

Inflammation Predicts Decision-Making Characterized by Impulsivity, Present Focus, and an Inability to Delay Gratification
by Jeffrey Gassen et al

Could Your Immune System Be Making You Impulsive?
by Emma Young

Impulsivity-related traits are associated with higher white blood cell counts
by Angelina R. Sutin et al

Dietary long-chain omega-3 fatty acids are related to impulse control and anterior cingulate function in adolescents
by Valerie L. Darcey

Diabetes Risk and Impulsivity
by David Perlmutter

Experimentally-Induced Inflammation Predicts Present Focus
by Jeffrey Gassen et al

Penn Vet researchers link inflammation and mania
by Katherine Unger Baillie

Anger Disorders May Be Linked to Inflammation
by Bahar Gholipour

Markers of Inflammation in the Blood Linked to Aggressive Behaviors
from University of Chicago Medical Center

Anhedonia as a clinical correlate of inflammation in adolescents across psychiatric conditions
by R. D. Freed et al

From Stress to Anhedonia: Molecular Processes through Functional Circuits
by Colin H. Stanton et al

Mapping inflammation onto mood: Inflammatory mediators of anhedonia
by Walter Swardfager et al

Understanding anhedonia: What happens in the brain?
by Tim Newman

Depression, Anhedonia, Glutamate, and Inflammation
by Peter Forster et al

Depression and anhedonia caused by inflammation affecting the brain
from Bel Marra Health

Inflammation linked to weakened reward circuits in depression
from Emory Health Sciences

Depression in people with type 2 diabetes: current perspectives
by L. Darwish et al

The Link Between Chronic Inflammation and Mental Health
by Kayt Sukel

Emory team links inflammation to a third of all cases of depression
by Oliver Worsley

Brain Inflammation Linked to Depression
by Emily Downwar

The Brain on Fire: Depression and Inflammation
by Marwa Azab

Inflammation, Mood Disorders, and Disease Model Convergence
by Lauren LeBano

High-inflammation depression linked to reduced functional connectivity
by Alice Weatherston

Does Inflammation Cause More Depression or Aggression?
by Charles Raison

A probe in the connection between inflammation, cognition and suicide
by Ricardo Cáceda et al

What If We’re Wrong About Depression?
by Anna North

People with ‘rage’ disorder twice as likely to have parasitic infection
by Kevin Jiang

Rage Disorder Linked with Parasite Found in Cat Feces
by Christopher Wanjek

Maternal Inflammation Can Affect Fetal Brain Development
by Janice Wood

The effects of increased inflammatory markers during pregnancy
from Charité – Universitätsmedizin Berlin

Inflammation in Pregnancy Tied to Greater Risk for Mental Illness in Child
by Traci Pedersen

Inflammation may wield sex-specific effects on developing brain
by Nicholette Zeliadt

Childhood obesity is linked to poverty and parenting style
from Concordia University

The Obesity–Impulsivity Axis: Potential Metabolic Interventions in Chronic Psychiatric Patients
by Adonis Sfera et al

The pernicious satisfaction of eating carbohydrates
by Philip Marais

Your Brain On Paleo
from Paleo Leap

The Role of Nutrition and the Gut-Brain Axis in Psychiatry: A Review of the Literature
by S. Mörkl et al

Emerging evidence linking the gut microbiome to neurologic disorders
by Jessica A. Griffiths and Sarkis K. Mazmanian

New Study Shows How Gut Bacteria Affect How You See the World
by David Perlmutter

The Surprising Link Between Gut Health and Mental Health
from LoveBug Probiotics

Nutritional Psychiatry: Is Food The Next Big Frontier In Mental Health Treatment?
by Stephanie Eckelkamp

Ketogenic Diets for Psychiatric Disorders: A New 2017 Review
by Georgia Ede

Low-Carbohydrate Diet Superior to Antipsychotic Medications
by Georgia Ede

Gut microbiome, SCFAs, mood disorders, ketogenic diet and seizures
by Jonathan Miller

Can the Ketogenic Diet Treat Depression and Anxiety, Even Schizophrenia?
by Rebekah Edwards

Erosion of the Bronze Age

I’ve previously made an argument about the development of large-scale agriculture in the late Bronze Age. It may have helped cause a psychological transformation that preceded the societal collapse. The late Bronze Age empires became too large to be sustainable, specifically according to the social order that had developed (i.e., Julian Jaynes’ theory of the bicameral mind).

Prior to this, the Bronze Age had been dominated by smaller city-states that were spread further apart. They had some agriculture but still with heavy reliance on hunting, fishing, trapping, and gathering. It would have been a low-carb, high-fat diet. But growing populations, as time went on, became ever more dependent upon agriculture. This meant a shift toward an increasingly high-carb diet that was much less nutrient-dense, along with a greater prevalence of addictive substances.

Agriculture may have had other impacts as well. A more fully agricultural diet meant the need for vaster areas to farm. The only way to accomplish that was deforestation. Along with the destabilizing psychological changes, there also would have been the destabilizing forces of erosion. Then a perfect storm of environmental stressors hit in a short period of time: volcanoes, earthquakes, tidal waves, flooding, etc. With waves of refugees and marauders, the already weakened empires fell like dominoes.

Erosion probably had been making the farmland less fertile for centuries. This wasn’t too much of a problem until overpopulation reached a breaking point. Small yields for multiple years in a row no doubt left grain reserves depleted. Much starvation would have followed. And the already sickly agricultural populations would have fallen prey to plagues.

This boom and bust cycle of agricultural civilizations would repeat throughout history. And it often would coincide with major changes in psychology and social order. Our own civilization appears to be coming near the end of a boom period. Erosion is now happening faster and at a larger scale than seen with any prior civilization. But like the archaic bicameral societies, we are trapped by our collective mentality and can’t imagine how to change.

* * *

Trees, the ancient Macedonians, and the world’s first environmental disaster
by Anthony Dosseto and Alex Francke

Recently, we have studied sediments from Lake Dojran, straddling the border between Northern Macedonia and Greece. We looked at the past 12,000 years of sediment archive and found about 3,500 years ago, a massive erosion event happened.

Pollen trapped in the lake’s sediment suggests this is linked to deforestation and the introduction of agriculture in the region. Macedonian timber was highly praised for ship building at the time, which could explain the extent of deforestation.

A massive erosion event would have catastrophic consequences for agriculture and pasture. Interestingly, this event is followed by the onset of the so-called Greek “Dark Ages” (3,100 to 2,850 years ago) and the demise of the highly sophisticated Bronze Age Mycenaean civilisation.

A Common Diet

“English peasants in Medieval times lived on a combination of meat stews, leafy vegetables and dairy products which scientists say was healthier than modern diets.”
~ Frédéric Leroy

There is an idea that, in the past, the poor were fed on bread while the rich monopolized meat. Whether or not this was true of some societies, it certainly wasn’t true of many. For example, in ancient Egypt, all levels of society seemed to have had the same basic high-carb diet with lots of bread. It consisted of the types and amounts of foods that are recommended in the USDA Food Pyramid. And their health suffered for it. As with people eating the same basic diet today, they had high rates of the diseases of civilization, specifically metabolic syndrome: obesity, diabetes, and heart disease. Also, they had serious tooth decay, something not seen with low-carb hunter-gatherers.

The main difference for ancient Egyptians was maybe the quality of bread. The same thing was true in Medieval Europe. Refined flour was limited to the wealthy. White breads didn’t become commonly available to most Westerners until the 1800s, about the same time that surplus grain harvests allowed for a high-carb diet and for the practice of fattening up cows with grains. Unsurprisingly, grain-fed humans also started to become fat during this time, with some of the earliest commentary on obesity coming from numerous writers of the era: Jane Austen, Jean Anthelme Brillat-Savarin, William Banting, etc.

In the Middle Ages, there were some other class differences in eating patterns. One basic difference is that the feudal serfs ate more salmon and the aristocracy more chicken. It is not what a modern person would expect, considering salmon is far more healthy, but the logic is that chickens were a rare commodity and the poor wouldn’t want to regularly eat the very birds that produced the eggs they depended upon. Besides the bread issue, the Medieval aristocracy were also eating more sugary desserts. Back then, only the rich had access to or could afford sugar. Even fruit would have been rare for peasants.

Feudalism, especially early feudalism, was actually rather healthy for peasants. It’s not that anyone’s diet was exactly low-carb, at least not intentionally, although that would have been more true in the centuries of the early Middle Ages when populations returned to a more rural lifestyle of hunting, trapping and gathering, a time when any peasant had access to what was called the ‘commons’. But that did change over time as laws became more restrictive about land use. Still, in the centuries following the collapse of the Roman Empire, health and longevity drastically improved for most of the population.

The living conditions for the poor only got worse again as society moved toward modernity with the increase of large-scale agriculture and more processed foods. But even into the late Middle Ages, the diet remained relatively healthy since feudal laws protected the rights of commoners in raising their own food and grazing animals. Subsistence farming combined with some wild foods was not a bad way to feed a population, as long as there was enough land to go around.

A similar diet was maintained among most Americans until the 20th century when urbanization became the norm. As late as the Great Depression, much of the population was able to return to a rural lifestyle or otherwise had access to rural areas, as was feasible with the then much smaller population. Joe Bageant describes his childhood in a West Virginia farming community in the 1940s and 1950s as still having been mostly subsistence farming with a barter economy. We’ve only seen the worst health outcomes among the poor since mass urbanization, which for African Americans only happened around the 1960s or 1970s when the majority finally became urbanized, centuries after it happened in Europe. The healthier diet of non-industrialized rural areas was a great equalizer for most of human existence.

The main thing I thought interesting was that diets didn’t always differ much between populations in the same society. The commonalities of a diet in any given era were greater than the differences. We now think of bread and refined flour as being cheap food, but at an earlier time such food would have been far more expensive and generally less available across all of society. As agriculture expanded, natural sources of food such as wild game became scarce and everyone became increasingly dependent on grains, along with legumes and tubers. This was a dramatic change with detrimental outcomes and it contributed to other larger changes going on in society.

The divergence of diets by class seems to be primarily a modern shift, including the access the upper classes now have to a diversity of fruits and vegetables, even out of season and grown in distant places. The perception of grains as poor people’s food and cattle feed only became a typical view starting in the 1800s, something discussed by Bryan Kozlowski in The Jane Austen Diet. As with the Roman Empire, the poorest of the poor lost access to healthy foods during the enclosure movement and extending into industrialization. It was only then that the modern high-carb diet became prevalent. It was also the first time that inequality had risen to such an extreme level, which forced a wedge into the once commonly held diet.

The early Middle Age communities (more akin to ancient city-states) established a more similar lifestyle between the rich and poor, as they literally lived close together, worshiped together, celebrated Carnival together, even ate together. A lord or knight would have maintained a retinue of advisers, assistants, and servants plus a large number of dependents and workers who ate collective meals in the main house or castle. Later on, knights were no longer needed to defend communities and the aristocracy became courtiers spending most of their time in the distant royal court. Then the enclosure movement created the landless peasants who would become the working poor. As class divides grew, diets diverged accordingly. We are so entrenched in a high-inequality society that we have forgotten how severely abnormal this is compared to most societies throughout history. The result of greater inequality of wealth and power has been a worsening inequality of nutrition and health.

* * *

Reconciling organic residue analysis, faunal, archaeobotanical and historical records: Diet and the medieval peasant at West Cotton, Raunds, Northamptonshire
by J. Dunne, A. Chapman, P. Blinkhorn, R. P. Evershed

  • Medieval peasant diet comprises meat and cabbage stews cooked on open hearths.
  • Dairy products, butter and cheese, known as ‘white meats of the poor’ also eaten.

The medieval peasant diet that was ‘much healthier’ than today’s average eating habits: Staples of meat, leafy vegetables and cheese are found in residue inside 500-year-old pottery
by Joe Pinkstone

They found the surprisingly well-rounded diet of the peasants would have kept them well-fed and adequately nourished.

Dr Julie Dunne at the University of Bristol told MailOnline: ‘The medieval peasant had a healthy diet and wasn’t lacking in anything major!

‘It is certainly much healthier than the diet of processed foods many of us eat today.

‘The meat stews (beef and mutton) with leafy vegetables (cabbage, leek) would have provided protein and fibre and important vitamins and the dairy products (butter and ‘green’ cheeses) would also have provided protein and other important nutrients.

‘These dairy products were sometimes referred to as the “white meats” of the poor, and known to have been one of the mainstays of the medieval peasants diet. […]

Historical documents state that medieval peasants ate meat, fish, dairy products, fruit and vegetables.

But the researchers say that before their study there was little direct evidence to support this.

Bicameralism and Bilingualism

A paper on multilingualism was posted by Eva Dunkel in the Facebook group for The Origin of Consciousness in the Breakdown of the Bicameral Mind: Consequences of multilingualism for neural architecture by Sayuri Hayakawa and Viorica Marian. It is a great find. The authors look at how multiple languages are processed within the brain and how they can alter brain structure.

This probably also relates to the learning of music, art, and math — one might add that learning music improves the ability to learn math later. These are basically other kinds of languages, especially music, as musical languages (along with whistle and hum languages) might indicate language having originated in music — not to mention the close relationship music has to dance, movement, and behavior, and to group identity. The archaic authorization of command voices in the bicameral mind quite likely came in the form of music, and one could imagine the kinds of synchronized collective activities that could have dominated life and work in bicameral societies. There is something powerful about language that we tend to overlook and take for granted. Also, since language is so embedded in culture, monolinguals never see outside of the cultural reality tunnel they exist within. This could bring us to wonder about the role played in post-bicameral society by syncretic languages like English. We can’t forget the influence psychedelics might have had on language development and learning at different periods of human existence. And with psychedelics, there is the connection to shamanism with caves as aural spaces and locations of art, possibly the earliest origin of proto-writing.

There is no reason to give mathematics a mere secondary place in our considerations. Numeracy might be important as well in thinking about the bicameral mind specifically and certainly about the human mind in general (Caleb Everett, Numbers and the Making of Us), as numeracy was an advancement or complexification beyond the innumerate tribal societies (e.g., the Piraha). Some of the earliest uses of writing were for calculations: accounting, taxation, astrology, etc. Bicameral societies, specifically the early city-states, can seem simplistic in many ways with their lack of complex hierarchies, large centralized governments, standing armies, police forces, or even basic infrastructure such as maintained roads and bridges. Yet they were capable of immense projects that required impressively high levels of planning, organizing, and coordination — as seen with the massive archaic pyramids and other structures built around the world. It’s strange how later empires in the Axial Age and beyond, though so much larger and more extensive with greater wealth and resources, rarely even attempted the seemingly impossible architectural feats of bicameral humans. Complex mathematical systems probably played a major role in the bicameral mind, as seen in how astrological calculations sometimes extended over millennia.

Hayakawa and Marian’s paper could add to the explanation of the breakdown of the bicameral mind. A central focus of their analysis is the increased executive function and neural integration in managing two linguistic inputs — I could see how that would relate to the development of egoic consciousness. It has been proposed that the first to develop Jaynesian consciousness may have been traders who were required to cross cultural boundaries and, of course, who would have been forced to learn multiple languages. As bicameral societies came into regular contact with more diverse linguistic cultures, their bicameral cognitive and social structures would have been increasingly stressed.

Multilingualism goes hand in hand with literacy. Rates of both have increased over the millennia. That would have been a major force in the post-bicameral Axial Age. The immense multiculturalism of societies like the Roman Empire is almost impossible for us to imagine. Hundreds of ethnicities, each with their own language, would co-exist in the same city and sometimes the same neighborhood. On a single street, there could be hundreds of shrines to diverse gods with people praying, invoking, and chanting incantations in their separate languages. These individuals were suddenly forced to deal with complete strangers and to learn some basic understanding of foreign languages and hence foreign understandings.

This was simultaneous with the rise of literacy and its importance to society, which only grew over time as the rate of book reading continued to climb (more books are printed in a year these days than were produced in the first several millennia of writing). Still, it was only quite recently that the majority of the population became literate; following from that is the ability of silent reading and its correlate of inner speech. Multilingualism is close behind and catching up. The consciousness revolution is still under way. I’m willing to bet American society will be transformed as we return to multilingualism as the norm, considering that in the first centuries of American history there was immense multilingualism (e.g., German was once one of the most widely spoken languages in North America).

All of this reminds me of linguistic relativity. I’ve pointed out that, though not explicitly stated, Jaynes obviously was referring to linguistic relativity in his own theorizing about language. He talked quite directly about the power language — and metaphors within language — had over thought, perception, behavior, and identity (Anke Snoek has some good insights about this in exploring the thought of Giorgio Agamben). This was an idea maybe first expressed by Wilhelm von Humboldt (On Language) in 1836: “Via the latter, qua character of a speech-sound, a pervasive analogy necessarily prevails in the same language; and since a like subjectivity also affects language in the same notion, there resides in every language a characteristic world-view.” And Humboldt even considered the power of learning another language in stating that, “To learn a foreign language should therefore be to acquire a new standpoint in the world-view hitherto possessed, and in fact to a certain extent is so, since every language contains the whole conceptual fabric and mode of presentation of a portion of mankind.”

Multilingualism is multiperspectivism, a core element of the modern mind and modern way of being in the world. Language has the power to transform us. To study language, to learn a new language is to become something different. Each language is not only a separate worldview but locks into place a different sense of self, a persona. This would be true not only for learning different cultural languages but also different professional languages with their respective sets of terminology, as the modern world has diverse areas with their own ways of talking and we modern humans have to deal with this complexity on a regular basis, whether we are talking about tax codes or dietary lingo.

It’s hard to know what that means for humanity’s trajectory across the millennia. But the more we are caught within linguistic worlds and forced to navigate our way within them, the greater the need for a strong egoic individuality to self-initiate action, that is to say the self-authorization of Jaynesian consciousness. We step further back into our own internal space of meta-cognitive metaphor. To know more than one language strengthens an identity separate from any given language. The egoic self retreats behind its walls and looks out from its parapets. Language, rather than being the world we are immersed in, becomes the world we are trapped in (a world that is no longer home and from which we seek to escape — Philip K. Dick’s Black Iron Prison and William S. Burroughs’ Control). It closes in on us and forces us to become more adaptive to evade the constraints.

Boredom in the Mind: Liberals and Reactionaries

“Hobsbawm was obsessed with boredom; his experience of it appears at least twenty-seven times in Evans’s biography. Were it not for Marx, Hobsbawm tells us, in a book of essays, he never would “have developed any special interest in history.” The subject was too dull. The British writer Adam Phillips describes boredom as “that state of suspended anticipation in which things are started and nothing begins.” More than a wish for excitement, boredom contains a longing for narrative, for engagement that warrants attention to the world.

“A different biographer might have found in Hobsbawm’s boredom an opening onto an entire plane of the Communist experience. Marxism sought to render political desire as objective form, to make human intention a causal force in the world. Not since Machiavelli had political people thought so hard about the alignment of action and opportunity, about the disjuncture between public performance and private wish. Hobsbawm’s life and work are a case study in such questions.”

That is another great insight from Corey Robin, as written in his New Yorker piece, Eric Hobsbawm, the Communist Who Explained History. Boredom does seem key. It is one of the things that stood out to me in Robin’s writings about the reactionary mind. Reactionaries dislike, even fear, boredom more than almost anything else. The rhetoric of reactionaries is often to create the passionate excitement of melodrama, such as how Burke describes the treatment of the French queen.

The political left too often forgets the power of storytelling, especially simplistic and unoriginal storytelling, as seen with Trump. Instead, too many on the left fear the populist riling up of the masses. I remember Ralph Nader warning about this in a speech he gave in his 2000 presidential campaign. There is a leftist mistrust of passion and maybe there is good reason for this mistrust, considering it forms the heartbeat of the reactionary mind. Still, without passion, there is no power of persuasion and so all attempts are doomed from the start. The left will have to learn to fight on this turf or simply embrace full resignation and so fall into cynicism.

The thing is that those on the political left seem to have a higher tolerance for boredom, maybe related to their higher tolerance for cognitive dissonance shown in social science research. It requires greater uncertainty and stress to shut down the liberal-minded person (liberal in the psychological sense). I notice this in myself. I’m not prone to the reactionary mindset, maybe because I don’t get bored easily and so don’t need something coming from outside to motivate me.

But it might go beyond mere tolerance in demonstrating an active preference for boredom. There is something about the liberal mind that is prone to complexity, nuance, and ambiguity that can only be grown amidst boredom — that is to say the open-mindedness of curiosity, doubt, and questioning are only possible when one acknowledges ignorance. It’s much more exciting to proclaim truth, instead, and proclaim it with an entertaining story. This is problematic in seeking political victories, if one is afraid of the melodrama of hard fights. Right-wingers might burn themselves out on endless existential crises, whereas left-wingers typically never build up enough fire to lightly toast a marshmallow.

The political left doesn’t require or thrive with a dualistic vision of opposition and battle in the way the political right does. This is a central strength and weakness for the left. On the side of weakness, this is why it is so hard for the left to offer a genuinely threatening challenge to the right. Most often what happens is the reactionaries simply co-opt the left and the left too easily falls in line. See how many liberals will repeat reactionary rhetoric. Or notice how many on the political left turned full reactionary during times of conflict (e.g., the world war era).

Boredom being the comfort zone of liberals is all the more reason they should resist settling down within its confines. There is nowhere to hide from the quite real drama that is going on in the world. The liberal elite can’t forever maintain their delusion of being a disinterested aristocracy. As Eric Hobsbawm understood, and Karl Marx before him, only a leftist vision can offer a narrative that can compete against the reactionary mind.

* * *

“Capitalism is boring. Devoting your life to it, as conservatives do, is horrifying if only because it’s so repetitious. It’s like sex.”
~William F. Buckley Jr., in an interview with Corey Robin

Violent Fantasy of Reactionary Intellectuals

The last thing in the world a reactionary wants is to be bored, as happened with the ending of the ideological battles of the Cold War. They need a worthy enemy or else to invent one. Otherwise, there is nothing to react to and so nothing to get excited about, followed by a total loss of meaning and purpose, resulting in dreaded apathy and ennui. This leads reactionaries to become provocative, in the hope of provoking an opponent into a fight. Another strategy is simply to portray the whole world as a battleground, such that everything is interpreted as a potential attack, working oneself or one’s followers into a froth.

The Fantasy of Creative Destruction

To the reactionary mind, sacrifice of self can be as acceptable as sacrifice of others. It's the fight, the struggle itself, that gives meaning — no matter the costs and consequences, no matter how it ends. The greatest sin is boredom, the inevitable result of victory. As Irving Kristol said to Corey Robin, the defeat of the Soviet Union "deprived us of an enemy." It was the end of history and, without an invigorating battle of moral imagination, it was the end of the world.

Sailors’ Rations, a High-Carb Diet

In the 18th-century British Navy, "Soldiers and sailors typically got one pound of bread a day," in the form of hard tack, a hard biscuit. That is according to James Townsend. On top of that, on some days they were given peas and on other days a porridge called burgoo. Elsewhere, Townsend shares some info from a 1796 memoir of the period — the author having written that "every man and boy born on the books of any of his Majesty's ships are allowed as following a pound of biscuit bread and a gallon of beer per day" (William Spavens, Memoirs of A Seafaring Life, p. 106). So, grains and more grains, in multiple forms, as foods and beverages.

About burgoo, it is a “ground oatmeal boiled up,” as described by Townsend. “Now you wouldn’t necessarily eat that all by itself. Early on, you were given to go with that salt beef fat. So the slush that came to the top when you’re boiling all your salt beef or salt pork. You get all that fat that goes up on top — they would scrape that off, they keep that and give it to you to go with your burgoo. But later on they said maybe that cause scurvy so they let you have some molasses instead.”

They really didn't understand scurvy at the time. Animal foods, especially fat, would have some vitamin C in them, whereas the oats and molasses had none. They made up for this deficiency later on by adding cabbage to the sailors' diet, though that was not a great choice considering vegetables don't store well on ships. I'd point out that the problem wasn't an outright lack of vitamin C, at least by the standard of a healthy traditional diet, as sailors got meat four days a week and, even on the meat-free banyan-days, had some butter and cheese. That would have given them sufficient vitamin C for a low-carb diet, especially with seafood caught along the way.

A high-carb diet, however, is a whole other matter. The amount of carbs and sugar sailors ate daily was quite large. This came about with colonial trade that made grains cheap and widely available, along with the sudden access to sugar from distant sugarcane plantations. Glucose competes with the uptake of vitamin C, and so a high-carb diet requires a higher intake of vitamin C for basic health, specifically to avoid scurvy. A low-carb diet, on the other hand, can avoid scurvy with very little vitamin C, since sufficient amounts are found in animal foods. Also, a low-carb diet is less inflammatory, which further decreases the need for antioxidants like vitamin C.

This is why Inuit could eat few plants and immense amounts of meat and fat. They got more vitamin C on a regular basis from seal fat than they did from the meager plant foods they could gather in the short warm period of the far north. But with almost no carbohydrates in the traditional Inuit diet, the requirement for vitamin C was so low as to not be a problem. This is probably the same explanation for why Vikings and Polynesians could travel vast distances across the ocean without getting sick, as they were surely eating mostly fresh seafood and very little, if any, starchy foods.

Unlike protein and fat, carbohydrate is not an essential macronutrient. Yes, carbohydrates provide glucose that the body needs in limited amounts, but through gluconeogenesis proteins can be turned into glucose on demand. So, a long sea voyage with zero carbs need never have been a problem.

Sailors in the colonial era ate all of those biscuits, porridge, and peas not because such foods offered any health value beyond mere survival but because they were cheap. Those sailors weren't being fed to have long, healthy lives, as labor was cheap and no one cared about them. As soon as a sailor was no longer useful, he would no longer be employed in that profession and he'd find himself among the impoverished masses. For all the health problems of a sailor's diet, it was better than the alternative of starvation or near starvation that so many others faced.

Grain consumption had been increasing in late feudalism, but peasants still maintained a wider variety in their diet through foods they could hunt or gather, not to mention some fresh meat, fat, eggs, and dairy from animals they raised. That all began to change with the enclosure movement. The end of feudal village life and loss of the peasants' commons was not a pretty picture and did not lead to happy results, as the landless peasants evicted from their homes flooded into the cities where most of them died. The economic desperation made for much cheap labor. Naval sailors with their guaranteed rations, in spite of nutritional deficiencies, were comparatively lucky.

* * *

This understanding of low-carb, animal-based diets isn't new either. If you look back to previous centuries, you see that low-carb diets have been advocated since the late 1700s. Advocating such diets prior to that would have been pointless, since low-carb was the dietary norm, assumed without needing to be stated.

Only beginning a couple of centuries ago did new forms of agriculture take hold that created large surplus yields for the first time in human existence. Once a high-carb diet suddenly became possible for a larger part of the population, the health problems of a high-carb diet unsurprisingly began to appear, and the voices for low-carb soon followed.

In prior centuries, one even sees examples in old books describing the health advantages of animal foods. But I'm not sure if anyone connected high-carb diets to scurvy until more recently. Still, this understanding is older than most people realize, going back at least to the late 1800s. L. Amber O'Hearn shares the following passages (from C is for Carnivore):

Selected notes from the Lancet volume 123
You can find this in Google books [1].

p 329. From a medical report by Mr. W. H. Neale, M.B., B.S., medical officer of the Eira, about an Arctic expedition:

"For the boat journey we saved 40 lb. of tinned meat (per man), and 35 lb. of tinned soups (per man), 3 cwt. of biscuit, and about 800 lb. of walrus meat, which was cooked and soldered up by our blacksmith in old provision tins. About 80 lb. of tea were saved, enabling us to have tea night and morning till almost the day we were picked up. No lime-juice was saved. A few bottles of wine and brandy were secured, and kept for Mr. Leigh-Smith and invalids. All the rum was saved, and every man was allowed one-fifth of a gill per day until May 1st, 1882, when it was decided to keep the remaining eighteen gallons for the boats. One man was a teetotaler from January to June, and was quite as healthy as anyone else. Personally it made very little difference whether I took the allowance of "grog" or not. One of the sick men was also a teetotaler nearly all the time. During the boat journey the men preferred their grog when doing any hard work, a fact I could never agree to, but when wet and cold a glass of grog before going to sleep seemed to give warmth to the body and helped to send one to sleep. Whilst sailing, also, one glass of grog would give temporary warmth; but everyone acknowledged that a mug of hot tea was far better when it was fit weather to make a fire. I do not think that spirits or lime-juice is much use as antiscorbutics; for if you live on the flesh of the country even, I believe, without vegetables, you will run very little risk of scurvy. There was not a sign of scurvy amongst us, not even an anaemic face. I have brought home a sample of bear and walrus meat in a tin, which I intend to have analysed if it is still in good preservation; and then it will be a question as to how it will be best to preserve the meat of the country in such a form as to enable a sufficient supply to be taken on long sledge journeys; for as long as you have plenty of ventilation and plenty of meat, anyone can live out an Arctic winter without fear of scurvy, even if they lie for days in their beds, as our men were compelled to do in the winter when the weather was too bad to go outside (there being no room inside for more than six or seven to be up at one time)."

p331, John Lucas: "Sir, — A propos the annotation appearing under the above heading in The Lancet of June 24th, pp. 1048-9, I would beg permission to observe that almost every medical man in India will be able to endorse the views of Dr. Moore, to which you refer. Medical officers of native regiments notice almost daily in their hospital practice that — to use your writer's words — "insufficient diet will cause scurvy even if fresh vegetable material forms a part of the diet, though more rapidly if it is withheld." Indeed, so far as my humble experience as a regimental surgeon from observations on the same men goes, I am inclined to think that the meat-eating classes of our Sepoys — to wit, the Mahomedans, especially those from the Punjaub — are comparatively seldom seen with the scorbutic taint; while, on the contrary, the subjects are, in the main, the vegetable feeders, who are their non-meat-eating comrades, the Hindus (Parboos from the North-West Provinces and Deccan Mahrattas), especially those whose daily food is barely sufficient either in quality or quantity. A sceptic may refuse to accept this view on the ostensible reason that though the food of the meat-eating classes be such, it may, perchance, contain vegetable ingredients as well as meat. To this I would submit the rejoinder that as a matter of fact, quite apart from all theory and hypothesis, the food of these meat-eating classes does not always contain much, or any, vegetables. In the case of the semi-savage hill tribes of Afghanistan and Baluchistan, their food contains large amounts of meat (mutton), and is altogether devoid of vegetables. The singular immunity from scurvy of these races has struck me as a remarkable physiological circumstance, which should make us pause before accepting the vegetable doctrine in relation to scurvy et hoc genus omne."

p370, Charles Henry Ralphe: "To the Editor of The Lancet. Sir, — I was struck by two independent observations which occurred in your columns last week with regard to the etiology of scurvy, both tending to controvert the generally received opinion that the exclusive cause of the disease is the prolonged and complete withdrawal of succulent vegetables from the dietary of those affected. Thus Mr. Neale, of the Eira Arctic Expedition, says: "I do not think that spirits or lime-juice is of much use as an antiscorbutic; for if you live on the flesh of the country, even, I believe, without vegetables, you will run very little risk of scurvy." Dr. Lucas writes: "In the case of the semi-savage hill tribes of Afghanistan and Beluchistan their food contains a large amount of meat, and is altogether devoid of vegetables. The singular immunity from scurvy of these races has struck me as a remarkable physiological circumstance, which should make us pause before accepting the vegetable doctrine in relation to scurvy." These observations do not stand alone. Arctic voyagers have long pointed out the antiscorbutic properties of fresh meat, and Baron Larrey, with regard to hot climates, arrived at the same conclusion in the Egyptian expedition under Bonaparte, at the end of last century."

p495: "SCURVY. Dr. Buzzard, in a letter which appeared in our columns last week, considers the fact that the crew of the Eira were supplied with preserved vegetables tells against the supposition advanced by Mr. Neale, that if Arctic voyagers were to feed only on the flesh of the animals supplied by the country they would be able to dispense with lime-juice. The truth is, it is an open question with many as to the relative antiscorbutic properties of preserved vegetables, and whether under the circumstances in which the Eira's crew were placed they would have been sufficient, in the absence of lime-juice and fresh meat, to have preserved the crew from scurvy. A case in point is the outbreak that occurred on board the Adventure, in the surveying voyages of that vessel and the Beagle. The Adventure had been anchored in Port Famine for several months, and although "pickles, cranberries, large quantities of wild celery, preserved meats and soups, had been abundantly supplied," still great difficulty had been experienced in obtaining fresh meat, and they were dependent on an intermittent supply from wild-fowl and a few shell-fish. Scurvy appeared early in July, fourteen cases, including the assistant-surgeon, being down with it. At the end of July fresh meat was obtained; at first it seemed to prove ineffectual, but an ample supply being continued, the commander was able to report, by the end of August, "the timely supply of guanaco meat had certainly checked the scurvy." This is an instance in which articles of diet having recognised antiscorbutic properties proved insufficient, in the absence of lime-juice and fresh meat, and under conditions of exceptional hardship, exposure, and depressing influence, to prevent the occurrence of scurvy. So with the Eira, we believe that had they not fortunately been able to obtain abundant supplies of fresh meat, scurvy would have appeared, and that the preserved vegetables in the absence of lime-juice would have proved insufficient as antiscorbutics. This antiscorbutic virtue of fresh meat has long been recognised by Arctic explorers, and, strangely, their experience in this respect is quite at variance with ours in Europe. It has been sought to explain the immunity from the disease of the Esquimaux, who live almost exclusively on seal and walrus flesh during the winter months, by maintaining that the protection is derived from the herbage extracted from the stomach of reindeer they may kill. In view, however, of the small proportion of vegetable matter that would be thus obtained for each member of the tribe, and the intermittent nature of the supply, it can hardly be maintained that the antiscorbutic supplied in this way is sufficient unless there are other conditions tending in the same direction. And of these, one, as we have already stated, consists probably in the fact that the flesh is eaten without lactic acid decomposition having taken place, owing either to its being devoured immediately, or from its becoming frozen. The converse being the case in Europe, where meat is hung some time after rigor mortis has passed off, and lactic acid develops to a considerable extent. This seems a rational explanation, and it reconciles the discrepancy of opinion that exists between European and Arctic observers with regard to meat as an antiscorbutic.
In bringing forward the claims of the flesh of recently killed animals as an antiscorbutic, it must be understood that we fully uphold the doctrine that the exclusive cause of scurvy is due to the insufficient supply of fresh vegetable food, and that it can be only completely cured by their administration ; but if the claims advanced with regard to the antiscorbutic qualities of recently slaughtered flesh be proved, then we have ascertained a fact which ought to be of the greatest practical value with regard to the conduct of exploring expeditions, and every effort should be made to obtain it. Everything, moreover, conducive to the improvement of the sailor’s dietary ought to receive serious consideration, and it has therefore seemed to us that the remarks of Mr. Neale and Dr. Lucas are especially worthy of attention, whilst we think the suggestion of the former gentleman with regard to the use of the blood of slaughtered animals likely to prove of special value.”

p913: "Sir, — In a foot-note to page 496 of his "Manual of Practical Hygiene," fifth edition (London, Churchill, 1878), Parkes says: "For a good deal of evidence up to 1848, I beg to refer to a review I contributed on scurvy in the British and Foreign Medico-Chirurgical Review in that year. The evidence since this period has added, I believe, little to our knowledge, except to show that the preventive and curative powers of fresh meat in large quantities, and especially raw meat (Kane's Arctic Expedition), will not only prevent, but will cure scurvy. Kane found the raw meat of the walrus a certain cure. For the most recent evidence and much valuable information, see the Report of the Admiralty Committee on the Scurvy which occurred in the Arctic Expedition of 1875-76 (Blue Book, 1877)." I think that the last sentence in the above is not Parkes' own, but that it must have been added by the editor in order to bring it up to the date of the issue of the current edition. The experience since then of the Arctic Expedition in the Eira coincides with these. I refer to that portion of the report where the author tells us that "our food consisted chiefly of bear and walrus meat, mixing some of the bear's blood with the soup when possible." And again: "I do not think that spirits or lime-juice is much use as an antiscorbutic, for if you live on the flesh of the country, even, I believe, without vegetables, you will run very little risk of scurvy. There was not a sign of scurvy amongst us, not even an anaemic face," (Lancet, Aug. 26th.) So that, as far as this question of fresh meat and raw meat and their prophylactic and curative properties are concerned, ample evidence will be found in other published literature to corroborate that of the Eira. But when you take up the question of the particular change which takes place in meat from its fresh to its stale condition, you will find a great deal of diversity and little harmony of opinion. Without taking up other authors on the subject, we stick to Parkes and compare his with Dr. Ralfe's views on this point. Parkes thought "fresh, and especially raw meat, is also useful, and this is conjectured to be from its amount of lactic acid; but this is uncertain," while on the other hand Dr. Ralfe repeats, as a probable explanation of the reason of fresh meat being an antiscorbutic, that it is due to the absence of lactic acid. For, from well-known chemical facts he deduces the following: — "In hot climates meat has to be eaten so freshly killed that no time is allowed for the development of the lactic acid: in arctic regions the freezing arrests its formation. The muscle plasma, therefore, remains alkaline. In Europe the meat is invariably hung, lactic acid is developed freely, and the muscle plasma is consequently acid. If, therefore, scurvy is, as I have endeavoured to show ("Inquiry into the General Pathology of Scurvy"), due to diminished alkalinity of the blood, it can be easily understood that meat may be antiscorbutic when fresh killed, or frozen immediately after killing, but scorbutic when these alkaline salts have been converted into acid ones by lactic acid decomposition." The view of the alkalinity of the blood coincides with Dr. Garrod's theory, which, however, appears to have as a sine qua non the absence of a particular salt — namely, potash.
I am inclined to think that, taking into account the nervous symptoms which are not infrequently associated with a certain proportion of scorbutic cases, resulting probably from the changes taking place in the blood, not unlike those which occur in gout and rheumatism, there must be some material change produced in the sympathetic system. In many of the individuals tainted with scurvy there were slight and severe attacks of passing jaundice in the cases which occurred in Afghanistan. Can we possibly trace this icteric condition to this cause? This is but a conjecture so far. But there certainly is in Garrod's observations an important point which, if applicable to all countries, climates, and conditions of life, is sufficiently weighty to indicate the necessity for farther research in that direction, and that point is this: the scorbutic condition disappeared on the patient being given a few grains of potash, though kept strictly on precisely the same diet which produced scurvy. — I am, Sir, yours truly, Ahmedabad, India, 30th Sept., 1882. JOHN C. LUCAS."

The Crisis of Identity

“Have we lived too fast?”
~Dr. Silas Weir Mitchell, 1871
Wear and Tear, or Hints for the Overworked

I’ve been following Scott Preston over at his blog, Chrysalis. He has been writing on the same set of issues for a long time now, longer than I’ve been reading his blog. He reads widely and so draws on many sources, most of which I’m not familiar with, part of the reason I appreciate the work he does to pull together such informed pieces. A recent post, A Brief History of Our Disintegration, would give you a good sense of his intellectual project, although the word ‘intellectual’ sounds rather paltry for what he is describing:

“Around the end of the 19th century (called the fin de siecle period), something uncanny began to emerge in the functioning of the modern mind, also called the “perspectival” or “the mental-rational structure of consciousness” (Jean Gebser). As usual, it first became evident in the arts — a portent of things to come, but most especially as a disintegration of the personality and character structure of Modern Man and mental-rational consciousness.”

That time period has been an interest of mine as well. There are two books that come to mind that I've mentioned before: Tom Lutz's American Nervousness, 1903 and Jackson Lears's Rebirth of a Nation (for a discussion of the latter, see: Juvenile Delinquents and Emasculated Males). Both talk about that turn-of-the-century crisis, the psychological projections and physical manifestations, the social movements and political actions. A major concern was neurasthenia which, according to the dominant economic paradigm, meant a deficit of 'nervous energy' or 'nerve force', the reserves of which, if not reinvested wisely but instead wasted, would lead to physical and psychological bankruptcy, leaving one spent. (The term 'neurasthenia' was first used in 1829 and popularized by George Miller Beard in 1869, the same period when the related medical condition of 'nostalgia' became a more common diagnosis, although 'nostalgia' was first referred to in the 17th century. Today, we might speak of 'neurasthenia' as stress and, even earlier, they had other ways of talking about it — as Bryan Kozlowski explained in The Jane Austen Diet, p. 231: "A multitude of Regency terms like "flutterings," "fidgets," "agitations," "vexations," and, above all, "nerves" are the historical equivalents to what we would now recognize as physiological stress.")

This was mixed up with sexuality in what Theodore Dreiser called the 'spermatic economy' (by the way, the catalogue for Sears, Roebuck and Company offered an electrical device to replenish nerve force that came with a genital attachment). Obsession with sexuality was used to reinforce gender roles in how neurasthenic patients were treated, following the practice of Dr. Silas Weir Mitchell: men were recommended to become more active (the 'West cure') and women more passive (the 'rest cure'), although some women "used neurasthenia to challenge the status quo, rather than enforce it. They argued that traditional gender roles were causing women's neurasthenia, and that housework was wasting their nervous energy. If they were allowed to do more useful work, they said, they'd be reinvesting and replenishing their energies, much as men were thought to do out in the wilderness" (Julie Beck, 'Americanitis': The Disease of Living Too Fast). That feminist-style argument, as I recall, came up in advertisements for Bernarr Macfadden's fitness protocol in the early 1900s, encouraging (presumably middle-class) women to give up housework for exercise and so regain their vitality. Macfadden was also an advocate of living a fully sensuous life, going as far as free love.

Besides the gender wars, there was the ever-present bourgeois bigotry. Neurasthenia is the most civilized of the diseases of civilization since, in its original American conception, it was perceived as only afflicting middle-to-upper-class whites, especially WASPs — as Lutz says, "if you were lower class, and you weren't educated and you weren't Anglo Saxon, you wouldn't get neurasthenic because you just didn't have what it took to be damaged by modernity" (Julie Beck, 'Americanitis': The Disease of Living Too Fast) and so, according to Lutz's book, people would make "claims to sickness as claims to privilege." This class bias goes back even earlier, a phenomenon explored by Bryan Kozlowski in one chapter of The Jane Austen Diet (pp. 232-233):

“Yet the idea that this was acceptable—nay, encouraged—behavior was rampant throughout the late 18th century. Ever since Jane was young, stress itself was viewed as the right and prerogative of the rich and well-off. The more stress you felt, the more you demonstrated to the world how truly delicate and sensitive your wealthy, softly pampered body actually was. The common catchword for this was having a heightened sensibility—one of the most fashionable afflictions in England at the time. Mainly affecting the “nerves,” a Regency woman who caught the sensibility but “disdains to be strong minded,” wrote a cultural observer in 1799, “she trembles at every breeze, faints at every peril and yields to every assailant.” Austen knew real-life strutters of this sensibility, writing about one acquaintance who rather enjoys “her spasms and nervousness and the consequence they give her.” It’s the same “sensibility” Marianne wallows in throughout the novel that bears its name, “feeding and encouraging” her anxiety “as a duty.” Readers of the era would have found nothing out of the ordinary in Marianne’s high-strung embrace of stress.”

This condition was considered a sign of progress, but over time it came to be seen by some as the greatest threat to civilization, in either case offering much material for the popular fictionalized portrayals of the time. Being sick in this fashion was proof that one was a modern individual, an exemplar of advanced civilization, if at immense cost. Julie Beck explains:

“The nature of this sickness was vague and all-encompassing. In his book Neurasthenic Nation, David Schuster, an associate professor of history at Indiana University-Purdue University Fort Wayne, outlines some of the possible symptoms of neurasthenia: headaches, muscle pain, weight loss, irritability, anxiety, impotence, depression, “a lack of ambition,” and both insomnia and lethargy. It was a bit of a grab bag of a diagnosis, a catch-all for nearly any kind of discomfort or unhappiness.

“This vagueness meant that the diagnosis was likely given to people suffering from a variety of mental and physical illnesses, as well as some people with no clinical conditions by modern standards, who were just dissatisfied or full of ennui. “It was really largely a quality-of-life issue,” Schuster says. “If you were feeling good and healthy, you were not neurasthenic, but if for some reason you were feeling run down, then you were neurasthenic.””

I'd point out how neurasthenia was seen as primarily caused by intellectual activity, as it became a descriptor of a common experience among the burgeoning middle class of often well-educated professionals and office workers. This relates to Weston A. Price's work in the 1930s, as modern dietary changes first hit this demographic since they had the means to afford eating a fully industrialized Standard American Diet (SAD), long before others (within decades, though, SAD-caused malnourishment would wreck health at all levels of society). What this meant, in particular, was a diet high in processed carbs and sugar that coincided, because of Upton Sinclair's 1906 The Jungle: Muckraking the Meat-Packing Industry, with the early-1900s decreased consumption of meat and saturated fats. As Price demonstrated, this was a vast change from the traditional diet found all over the world, including in rural Europe (and presumably in rural America, with most Americans not urbanized until the turn of last century), that always included significant amounts of nutritious animal foods loaded up with fat-soluble vitamins, not to mention lots of healthy fats and cholesterol.

Prior to talk of neurasthenia, the exhaustion model of health, portrayed as waste and depletion, took hold in Europe centuries earlier (e.g., anti-masturbation panics) and had its roots in the humoral theory of bodily fluids. It has long been understood that food, specifically the macronutrients (carbohydrate, protein, and fat), affects mood and behavior — see the early literature on melancholy. During feudalism, food laws were used as a means of social control, such that in one case meat was prohibited prior to Carnival because of its energizing effect, which it was thought could lead to rowdiness or even revolt (Ken Albala & Trudy Eden, Food and Faith in Christian Culture).

There does seem to be a connection between an increase of intellectual activity and an increase of carbohydrates and sugar, a connection first appearing during the early colonial era that set the stage for the Enlightenment. It was the agricultural mind taken to a whole new level. Indeed, a steady flow of glucose is one way to fuel extended periods of brain work, such as reading and writing for hours on end and late into the night — the reason college students to this day will down sugary drinks while studying. Because of trade networks, Enlightenment thinkers were buzzing on the suddenly much more available simple carbs and sugar, with an added boost from caffeine and nicotine. The modern intellectual mind was drugged-up right from the beginning, and over time it took its toll. Such dietary highs inevitably lead to ever greater crashes of mood and health. Interestingly, Dr. Silas Weir Mitchell, who advocated the 'rest cure' and 'West cure' in treating neurasthenia and other ailments, additionally used a "meat-rich diet" for his patients (Ann Stiles, Go rest, young man). Other doctors of that era were even more direct in using specifically low-carb diets for various health conditions, often for obesity, which was also a focus of Dr. Mitchell.

Still, it goes far beyond diet. There has been a diversity of stressors that have continued to amass over the centuries of tumultuous change. The exhaustion of modern man (and typically the focus has been on men) had been building up for generations upon generations before it came to feel like a world-shaking crisis with the new industrialized world. The lens of neurasthenia was an attempt to grapple with what had changed, but the focus was too narrow. With the plague of neurasthenia, the atomization of commercialized man and woman couldn't hold together. And so there was a temptation toward nationalistic projects, including wars, to revitalize the ailing soul and to suture the gash of social division and disarray. But this further wrenched out of alignment the traditional order that had once held society together, and what was lost mostly went without recognition. The individual was brought into the foreground of public thought, a lone protagonist in a social Darwinian world. In this melodramatic narrative of struggle and self-assertion, many individuals didn't fare so well and everything else suffered in the wake.

Tom Lutz writes that, “By 1903, neurasthenic language and representations of neurasthenia were everywhere: in magazine articles, fiction, poetry, medical journals and books, in scholarly journals and newspaper articles, in political rhetoric and religious discourse, and in advertisements for spas, cures, nostrums, and myriad other products in newspapers, magazines and mail-order catalogs” (American Nervousness, 1903, p. 2).

There was a sense of moral decline that was hard to grasp, although some people like Weston A. Price tried to dig down into concrete explanations of what had so gone wrong, the social and psychological changes observable during mass urbanization and industrialization. He was far from alone in his inquiries, having built on the prior observations of doctors, anthropologists, and missionaries. Other doctors and scientists were looking into the influences of diet in the mid-1800s and, by the 1880s, scientists were exploring a variety of biological theories. Their inability to pinpoint the cause maybe had more to do with their lack of a needed framework, as they touched upon numerous facets of biological functioning:

"Not surprisingly, laboratory experiments designed to uncover physiological changes in the nerve cell were inconclusive. European research on neurasthenics reported such findings as loss of elasticity of blood vessels, thickening of the cell wall, changes in the shape of nerve cells, or nerve cells that never advanced beyond an embryonic state. Another theory held that an overtaxed organism cannot keep up with metabolic requirements, leading to inadequate cell nutrition and waste excretion. The weakened cells cannot develop properly, while the resulting build-up of waste products effectively poisons the cells (so-called "autointoxication"). This theory was especially attractive because it seemed to explain the extreme diversity of neurasthenic symptoms: weakened or poisoned cells might affect the functioning of any organ in the body. Furthermore, "autointoxicants" could have a stimulatory effect, helping to account for the increased sensitivity and overexcitability characteristic of neurasthenics." (Laura Goering, "Russian Nervousness": Neurasthenia and National Identity in Nineteenth-Century Russia)

This early scientific research could not lessen the mercurial sense of unease, as neurasthenia was from its inception a broad category that captured some greater shift in public mood, even as it so powerfully shaped the individual’s health. For all the effort, there were as many theories about neurasthenia as there were symptoms. Deeper insight was required. “[I]f a human being is a multiformity of mind, body, soul, and spirit,” writes Preston, “you don’t achieve wholeness or fulfillment by amputating or suppressing one or more of these aspects, but only by an effective integration of the four aspects.” But integration is easier said than done.

The modern human hasn't been suffering from mere psychic wear and tear, for the individual body itself has been showing the signs of sickness, as the diseases of civilization have become harder and harder to ignore. On a societal level of human health, I've previously shared passages from Lears (see here) — he discusses the vitalist impulse that was the response to the turmoil, and vitalism often was explored in terms of physical health as the most apparent manifestation, although social and spiritual health were just as often spoken of in the same breath. The whole person was under assault by an accumulation of stressors, and the increasingly isolated individual didn't have the resources to fight them off.

By the way, this was far from being limited to America. Europeans picked up the discussion of neurasthenia and took it in other directions, often with less optimism about progress, but also some thinkers emphasizing social interpretations with specific blame on hyper-individualism (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia). Thoughts on neurasthenia became mixed up with earlier speculations on nostalgia and romanticized notions of rural life. More important, Russian thinkers in particular understood that the problems of modernity weren’t limited to the upper classes, instead extending across entire populations, as a result of how societies had been turned on their heads during that fractious century of revolutions.

In looking around, I came across some other interesting stuff. In the 1901 Nervous and Mental Diseases by Archibald Church and Frederick Peterson, the authors, in the chapter on "Mental Disease", are keen to further the description, categorization, and labeling of 'insanity'. And I noted their concern with physiological asymmetry, something shared later by Price, among many others going back to the prior century.

Maybe asymmetry was not only indicative of developmental issues but also symbolic of a deeper imbalance. The attempts at phrenological analysis of psychiatric, criminal, and anti-social behavior were off-base; and yet, despite the bigotry and proto-genetic determinism among racists using these kinds of ideas, there is a simple truth about health in relationship to physiological development, most easily observed in bone structure. It would take many generations to understand the deeper scientific causes: nutrition (e.g., Price's discovery of vitamin K2, what he called Activator X), along with parasites, toxins, and epigenetics. Church and Peterson did acknowledge that this went beyond mere individual or even familial issues: "It is probable that the intemperate use of alcohol and drugs, the spreading of syphilis, and the overstimulation in many directions of modern civilization have determined an increase difficult to estimate, but nevertheless palpable, of insanity in the present century as compared with past centuries."

Also, there is the 1902 The Journal of Nervous and Mental Disease: Volume 29, edited by William G. Spiller. There is much discussion in there about how anxiety was observed, diagnosed, and treated at the time. Some of the case studies make for a fascinating read — check out: "Report of a Case of Epilepsy Presenting as Symptoms Night Terrors, Impellant Ideas, Complicated Automatisms, with Subsequent Development of Convulsive Motor Seizures and Psychical Aberration" by W. K. Walker. This reminds me of the case that influenced Sigmund Freud and Carl Jung, Daniel Paul Schreber's 1903 Memoirs of My Nervous Illness.

Talk about "a disintegration of the personality and character structure of Modern Man and mental-rational consciousness," as Scott Preston put it. He goes on to say that, "The individual is not a natural thing. There is an incoherency in Margaret Thatcher's view of things when she infamously declared "there is no such thing as society" — that she saw only individuals and families, that is to say, atoms and molecules." Her saying that really did capture the mood of the society she denied existing. Even the family was shrunk down to the 'nuclear'. To state there is no society is to declare that there is also no extended family, no kinship, no community, that there is no larger human reality of any kind. Ironically, in this pseudo-libertarian sentiment, there is nothing holding the family together other than government laws imposing strict control of marriage and parenting, where common finances lock two individuals together under the rule of capitalist realism (the only larger realities involved are inhuman systems) — compare high-trust societies such as the Nordic countries, where the definition and practice of family life is less legalistic (Nordic Theory of Love and Individualism).

The individual consumer-citizen as a legal member of a family unit has to be created and then controlled, as it is a rather unstable atomized identity. “The idea of the “individual”,” Preston says, “has become an unsustainable metaphor and moral ideal when the actual reality is “21st century schizoid man” — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in “the New Normal” of double-talk, double-think, double-standard, and double-bind.” That is partly the reason for the heavy focus on the body, an attempt to make concrete the individual in order to hold together the splintered self — great analysis of this can be found in Lewis Hyde’s Trickster Makes This World: “an unalterable fact about the body is linked to a place in the social order, and in both cases, to accept the link is to be caught in a kind of trap. Before anyone can be snared in this trap, an equation must be made between the body and the world (my skin color is my place as a Hispanic; menstruation is my place as a woman)” (see one of my posts about it: Lock Without a Key). Along with increasing authoritarianism, there was increasing medicalization and rationalization — to try to make sense of what was senseless.

A specific example of a change can be found in Dr. Frederick Hollick (1818-1900), who was a popular writer and speaker on medicine and health — his "links were to the free-thinking tradition, not to Christianity" (Helen Lefkowitz Horowitz, Rewriting Sex). With the influence of Mesmerism and animal magnetism, he studied and wrote about what was variously called, in more scientific-sounding terms, electrotherapeutics, galvanism, and electro-galvanism. Hollick was an English follower of the industrialist and socialist Robert Owen, whom he literally followed to the United States, where Owen had started the utopian community New Harmony, a southern Indiana village bought from the utopian German Harmonists and then filled with brilliant and innovative minds but lacking in practical know-how about running a self-sustaining community (Abraham Lincoln, later a friend to the Owen family, recalled as a boy seeing the boat full of books heading to New Harmony).

"As had Owen before him, Hollick argued for the positive value of sexual feeling. Not only was it neither immoral nor injurious, it was the basis for morality and society. […] In many ways, Hollick was a sexual enthusiast" (Horowitz). These were the social circles of Abraham Lincoln, as he personally knew free-love advocates; that is why early Republicans were often referred to as "Red Republicans", the 'Red' indicating radicalism as it still does to this day. Hollick wasn't the first to be a sexual advocate nor, of course, would he be the last — preceding him were Sarah Grimke (1837, Equality of the Sexes) and Charles Knowlton (1839, The Private Companion of Young Married People), Hollick having been "a student of Knowlton's work" (Debran Rowland, The Boundaries of Her Body); and following him were two more well-known figures: the previously mentioned Bernarr Macfadden (1868-1955), who was the first major health and fitness guru, and Wilhelm Reich (1897–1957), who was the less respectable member of the trinity formed with Sigmund Freud and Carl Jung. Sexuality became a symbolic issue of politics and health, partly because of increasing scientific knowledge but also because of the increasing marketization of products such as birth control (with public discussion of contraceptives happening in the late 1700s and advances in contraceptive production in the early 1800s), the latter being quite significant as it meant individuals could control pregnancy, which was particularly relevant to women. It should be noted that Hollick promoted the ideal of female sexual autonomy, that sex should be assented to and enjoyed by both partners.

This growing concern with sexuality began with the growing middle class in the decades following the American Revolution. Among much else, it was related to the post-revolutionary focus on parenting and the perceived need for raising republican citizens — this formed an audience far beyond radical libertinism and free love. Expert advice was needed for the new bourgeois family life, as part of the "civilizing process" that increasingly took hold at that time, with not only sexual manuals but also parenting guides, health pamphlets, books of manners, cookbooks, diet books, etc. — cut off from the roots of traditional community and kinship, the modern individual no longer trusted inherited wisdom and so needed to be taught how to live, how to behave and relate. Along with the rise of science, this situation promoted the role of the public intellectual that Hollick effectively took advantage of and, after the failure of Owen's utopian experiment, he went on the lecture circuit, which brought on legal cases in the unsuccessful attempt to silence him, the kind of persecution that Reich also later endured.

To put it in perspective, this Antebellum era of public debate and public education on sexuality coincided with other changes. Following the revolutionary-era feminism (e.g., Mary Wollstonecraft), the "First Wave" of organized feminists emerged generations later with the Seneca Falls meeting in 1848 and, in that movement, there was a strong abolitionist impulse. This was part of the rise of ideological -isms in the North that so concerned the Southern aristocrats, who wanted to maintain their hierarchical control of the entire country, the control they were quickly losing with the shift of power in the Federal government. A few years before that, in 1844, a more effective condom was developed using vulcanized rubber, although condoms had been on the market since the previous decade; also in the 1840s, the vaginal sponge became available. Interestingly, many feminists were as against contraceptives as they were against abortions. These were far from being mere practical issues, as politics imbued every aspect, and some feminists worried about how this might lessen the role of women and motherhood in society, if sexuality were divorced from pregnancy.

This was at a time when the abortion rate was sky-rocketing, indicating most women held other views. "Yet we also know that thousands of women were attending lectures in these years, lectures dealing, in part, with fertility control. And rates of abortion were escalating rapidly, especially, according to historian James Mohr, the rate for married women. Mohr estimates that in the period 1800-1830, perhaps one out of every twenty-five to thirty pregnancies was aborted. Between 1850 and 1860, he estimates, the ratio may have been one out of every five or six pregnancies. At mid-century, more than two hundred full-time abortionists reportedly worked in New York City" (Rickie Solinger, Pregnancy and Power, p. 61). In the unGodly and unChurched period of early America ("We forgot."), organized religion was weak and "premarital sex was typical, many marriages following after pregnancy, but some people simply lived in sin. Single parents and 'bastards' were common" (A Vast Experiment). Early Americans, by today's standards, were not good Christians — visiting Europeans often saw them as uncouth heathens and quite dangerous at that, what with the common American practice of toting around guns and knives, ever ready for a fight, whereas carrying weapons had been made illegal in England. In The Churching of America, Roger Finke and Rodney Stark write (pp. 25-26):

"Americans are burdened with more nostalgic illusions about the colonial era than about any other period in their history. Our conceptions of the time are dominated by a few powerful illustrations of Pilgrim scenes that most people over forty stared at year after year on classroom walls: the baptism of Pocahontas, the Pilgrims walking through the woods to church, and the first Thanksgiving. Had these classroom walls also been graced with colonial scenes of drunken revelry and barroom brawling, of women in risque ball-gowns, of gamblers and rakes, a better balance might have been struck. For the fact is that there never were all that many Puritans, even in New England, and non-Puritan behavior abounded. From 1761 through 1800 a third (33.7%) of all first births in New England occurred after less than nine months of marriage (D. S. Smith, 1985), despite harsh laws against fornication. Granted, some of these early births were simply premature and do not necessarily show that premarital intercourse had occurred, but offsetting this is the likelihood that not all women who engaged in premarital intercourse would have become pregnant. In any case, single women in New England during the colonial period were more likely to be sexually active than to belong to a church — in 1776 only about one out of five New Englanders had a religious affiliation. The lack of affiliation does not necessarily mean that most were irreligious (although some clearly were), but it does mean that their faith lacked public expression and organized influence."

Though marriage remained important as an ideal in American culture, what changed was that procreative control became increasingly available — with fewer accidental pregnancies and more abortions, a powerful motivation for marriage disappeared. Unsurprisingly, at the same time, there were increasing worries about the breakdown of community and family, concerns that would turn into moral panic at various points. Antebellum America was in turmoil. This was concretely exemplified by the dropping birth rate that was already noticeable by mid-century (Timothy Crumrin, "Her Daily Concern:" Women's Health Issues in Early 19th-Century Indiana) and was nearly halved from 1800 to 1900 (Debran Rowland, The Boundaries of Her Body). "The late 19th century and early 20th saw a huge increase in the country's population (nearly 200 percent between 1860 and 1910) mostly due to immigration, and that population was becoming ever more urban as people moved to cities to seek their fortunes—including women, more of whom were getting college educations and jobs outside the home" (Julie Beck, 'Americanitis': The Disease of Living Too Fast). It was a period of crisis, not all that different from our present crisis, including the fear about the low birth rate of native-born white Americans, especially the endangered species of WASPs, being overtaken by the supposed dirty hordes of blacks, ethnics, and immigrants.

The promotion of birth control was considered a genuine threat to American society, maybe to all of Western Civilization. It was most directly a threat to traditional gender roles. Women could better control when they got pregnant, a decisive factor in the phenomenon of larger numbers of women entering college and the workforce. And with an epidemic of neurasthenia, this dilemma was worsened by the crippling effeminacy that neutered masculine potency. Was modern man, specifically the white ruling elite, up for the task of carrying on Western Civilization?

"Indeed, civilization's demands on men's nerve force had left their bodies positively effeminate. According to Beard, neurasthenics had the organization of "women more than men." They possessed "a muscular system comparatively small and feeble." Their dainty frames and feeble musculature lacked the masculine vigor and nervous reserves of even their most recent forefathers. "It is much less than a century ago, that a man who could not [drink] many bottles of wine was thought of as effeminate—but a fraction of a man." No more. With their dwindling reserves of nerve force, civilized men were becoming increasingly susceptible to the weakest stimulants until now, "like babes, we find no safe retreat, save in chocolate and milk and water." Sex was as debilitating as alcohol for neurasthenics. For most men, sex in moderation was a tonic. Yet civilized neurasthenics could become ill if they attempted intercourse even once every three months. As Beard put it, "there is not force enough left in them to reproduce the species or go through the process of reproducing the species." Lacking even the force "to reproduce the species," their manhood was clearly in jeopardy." (Gail Bederman, Manliness and Civilization, pp. 87-88)

This led to a backlash that began before the Civil War with the early obscenity laws and abortion laws, but went into high gear with the 1873 Comstock laws that effectively shut down the free market of both ideas and products related to sexuality, including sex toys. This made it near impossible for most women to learn about birth control or obtain contraceptives and abortifacients. There was a felt need to restore order and that meant white male order of the WASP middle-to-upper classes, especially with the end of slavery, mass immigration of ethnics, urbanization and industrialization. The crisis wasn’t only ideological or political. The entire world had been falling apart for centuries with the ending of feudalism and the ancien regime, the last remnants of it in America being maintained through slavery. Motherhood being the backbone of civilization, it was believed that women’s sexuality had to be controlled and, unlike so much else that was out of control, it actually could be controlled through enforcement of laws.

Outlawing abortions is a particularly interesting example of social control. Even with laws in place, abortions remained commonly practiced by local doctors, even in many rural areas (American Christianity: History, Politics, & Social Issues). Corey Robin argues that the strategy hasn't been to deny women's agency but to assert their subordination (Denying the Agency of the Subordinate Class). This is why abortion laws were designed to target male doctors rather than their female patients, although prosecutions were rare. Everything comes down to agency, its lack or loss, but our entire sense of agency is out of accord with our own human nature. We seek to control what is outside of us because our own sense of self is out of control. The legalistic worldview is inherently authoritarian, at the heart of what Julian Jaynes proposes as the post-bicameral project of consciousness, the contained self. But the container is weak and keeps leaking all over the place.

To bring it back to the original inspiration, Scott Preston wrote: "Quite obviously, our picture of the human being as an indivisible unit or monad of existence was quite wrong-headed, and is not adequate for the generation and re-generation of whole human beings. Our self-portrait or self-understanding of "human nature" was deficient and serves now only to produce and reproduce human caricatures. Many of us now understand that the authentic process of individuation hasn't much in common at all with individualism and the supremacy of the self-interest." The failure we face is that of identity, of our way of being in the world. As with neurasthenia in the past, we are now in a crisis of anxiety and depression, along with yet another moral panic about the declining white race. So, we get the likes of Steve Bannon, Donald Trump, and Jordan Peterson. We failed to resolve past conflicts and so they keep re-emerging.

“In retrospect, the omens of an impending crisis and disintegration of the individual were rather obvious,” Preston points out. “So, what we face today as “the crisis of identity” and the cognitive dissonance of “the New Normal” is not something really new — it’s an intensification of that disintegrative process that has been underway for over four generations now. It has now become acute. This is the paradox. The idea of the “individual” has become an unsustainable metaphor and moral ideal when the actual reality is “21st century schizoid man” — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in “the New Normal” of double-talk, double-think, double-standard, and double-bind.” We never were individuals. It was just a story we told ourselves, but there are others that could be told. Scott Preston offers an alternative narrative, that of individuation.

* * *

I found some potentially interesting books while skimming material on Google Books, in researching Frederick Hollick and related topics. Among the titles below, I'll share some text from one of them because it offers a good summary of sexuality at the time, specifically women's sexuality. Obviously, it went far beyond sexuality itself and, going by my own theorizing, I'd say it is yet another example of symbolic conflation, considering its direct relationship to abortion.

The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland
p. 34

WOMEN AND THE WOMB: The Emerging Birth Control Debate

The twentieth century dawned in America on a falling white birth rate. In 1800, an average of seven children were born to each "American-born white wife," historians report. 29 By 1900, that number had fallen to roughly half. 30 Though there may have been several factors, some historians suggest that this decline—occurring as it did among young white women—may have been due to the use of contraceptives or abstinence, though few talked openly about it. 31

"In spite of all the rhetoric against birth control, the birthrate plummeted in the late nineteenth century in America and Western Europe (as it had in France the century before); family size was halved by the time of World War I," notes Shari Thurer in The Myth of Motherhood. 32

As issues go, the “plummeting birthrate” among whites was a powder keg, sparking outcry as the “failure” of the privileged class to have children was contrasted with the “failure” of poor immigrants and minorities to control the number of children they were having. Criticism was loud and rampant. “The upper classes started the trend, and by the 1880s the swarms of ragged children produced by the poor were regarded by the bourgeoisie, so Emile Zola’s novels inform us, as evidence of the lower order’s ignorance and brutality,” Thurer notes. 33

But the seeds of this then-still nearly invisible movement had been planted much earlier. In the late 1700s, British political theorists began disseminating information on contraceptives as concerns of overpopulation grew among some classes. 34 Despite the separation of an ocean, by the 1820s, this information was “seeping” into the United States.

“Before the introduction of the Comstock laws, contraceptive devices were openly advertised in newspapers, tabloids, pamphlets, and health magazines,” Yalom notes. “Condoms had become increasingly popular since the 1830s, when vulcanized rubber (the invention of Charles Goodyear) began to replace the earlier sheepskin models.” 35 Vaginal sponges also grew in popularity during the 1840s, as women traded letters and advice on contraceptives. 36 Of course, prosecutions under the Comstock Act went a long way toward chilling public discussion.

Though Margaret Sanger’s is often the first name associated with the dissemination of information on contraceptives in the early United States, in fact, a woman named Sarah Grimke preceded her by several decades. In 1837, Grimke published the Letters on the Equality of the Sexes, a pamphlet containing advice about sex, physiology, and the prevention of pregnancy. 37

Two years later, Charles Knowlton published The Private Companion of Young Married People, becoming the first physician in America to do so. 38 Near this time, Frederick Hollick, a student of Knowlton’s work, “popularized” the rhythm method and douching. And by the 1850s, a variety of material was being published providing men and women with information on the prevention of pregnancy. And the advances weren’t limited to paper.

“In 1846, a diaphragm-like article called The Wife’s Protector was patented in the United States,” according to Marilyn Yalom. 39 “By the 1850s dozens of patents for rubber pessaries ‘inflated to hold them in place’ were listed in the U.S. Patent Office records,” Janet Farrell Brodie reports in Contraception and Abortion in 19th Century America. 40 And, although many of these early devices were often more medical than prophylactic, by 1864 advertisements had begun to appear for “an India-rubber contrivance” similar in function and concept to the diaphragms of today. 41

“[B]y the 1860s and 1870s, a wide assortment of pessaries (vaginal rubber caps) could be purchased at two to six dollars each,” says Yalom. 42 And by 1860, following publication of James Ashton’s Book of Nature, the five most popular ways of avoiding pregnancy—“withdrawal, and the rhythm methods”—had become part of the public discussion. 43 But this early contraceptives movement in America would prove a victim of its own success. The openness and frank talk that characterized it would run afoul of the burgeoning “purity movement.”

“During the second half of the nineteenth century, American and European purity activists, determined to control other people’s sexuality, railed against male vice, prostitution, the spread of venereal disease, and the risks run by a chaste wife in the arms of a dissolute husband,” says Yalom. “They agitated against the availability of contraception under the assumption that such devices, because of their association with prostitution, would sully the home.” 44

Anthony Comstock, a “fanatical figure,” some historians suggest, was a charismatic “purist,” who, along with others in the movement, “acted like medieval Christians engaged in a holy war,” Yalom says. 45 It was a successful crusade. “Comstock’s dogged efforts resulted in the 1873 law passed by Congress that barred use of the postal system for the distribution of any ‘article or thing designed or intended for the prevention of contraception or procuring of abortion’,” Yalom notes.

Comstock’s zeal would also lead to his appointment as a special agent of the United States Post Office with the authority to track and destroy “illegal” mailing, i.e., mail deemed to be “obscene” or in violation of the Comstock Act. Until his death in 1915, Comstock is said to have been energetic in his pursuit of offenders, among them Dr. Edward Bliss Foote, whose articles on contraceptive devices and methods were widely published. 46 Foote was indicted in January of 1876 for dissemination of contraceptive information. He was tried, found guilty, and fined $3,000. Though donations of more than $300 were made to help defray costs, Foote was reportedly more cautious after the trial. 47 That “caution” spread to others, some historians suggest.

Disorderly Conduct: Visions of Gender in Victorian America
by Carroll Smith-Rosenberg

Riotous Flesh: Women, Physiology, and the Solitary Vice in Nineteenth-Century America
by April R. Haynes

The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland

Rereading Sex: Battles Over Sexual Knowledge and Suppression in Nineteenth-century America
by Helen Lefkowitz Horowitz

Rewriting Sex: Sexual Knowledge in Antebellum America, A Brief History with Documents
by Helen Lefkowitz Horowitz

Imperiled Innocents: Anthony Comstock and Family Reproduction in Victorian America
by Nicola Kay Beisel

Against Obscenity: Reform and the Politics of Womanhood in America, 1873–1935
by Leigh Ann Wheeler

Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age
by Paul S. Boyer

American Sexual Histories
edited by Elizabeth Reis

Wash and Be Healed: The Water-Cure Movement and Women’s Health
by Susan Cayleff

From Eve to Evolution: Darwin, Science, and Women’s Rights in Gilded Age America
by Kimberly A. Hamlin

Manliness and Civilization: A Cultural History of Gender and Race in the United States, 1880–1917
by Gail Bederman

One Nation Under Stress: The Trouble with Stress as an Idea
by Dana Becker

* * *


Moralizing Gods as Effect, Not Cause

There is a new study on moralizing gods and social complexity, specifically as populations grow large. The authors are critical of the Axial Age theory: “Although our results do not support the view that moralizing gods were necessary for the rise of complex societies, they also do not support a leading alternative hypothesis that moralizing gods only emerged as a byproduct of a sudden increase in affluence during a first millennium BC ‘Axial Age’. Instead, in three of our regions (Egypt, Mesopotamia and Anatolia), moralizing gods appeared before 1500 BC.”

I don’t take this criticism as too significant, since it is mostly an issue of dating. Objectively, there are no such things as distinct historical periods. Sure, you’ll find precursors of the Axial Age in the late Bronze Age. Then again, you’ll find precursors of the Renaissance and Protestant Reformation in the Axial Age. And you’ll find precursors of the Enlightenment in the Renaissance and Protestant Reformation. It turns out all of history is continuous. No big shocker there. Changes build up slowly, until they hit a breaking point. It’s that breaking point, usually when the change becomes widespread, that gets designated as the start of a new historical period. But the dividing line from one era to the next is always somewhat arbitrary.

This is important to keep in mind. And it does have more than slight relevance. This reframing of what has been called the Axial Age accords perfectly with Julian Jaynes’ theories on the ending of the bicameral mind and the rise of egoic consciousness, along with the rise of the egoic gods with their jealousies, vengeance, and so forth. A half century ago, Jaynes was noting that aspects of moralizing social orders were appearing in the late Bronze Age and he speculated that it had to do with increasing complexity that set those societies up for collapse.

Religion itself, as a formal, distinct institution with standardized practices, didn’t exist until well into the Axial Age. Before that, rituals and spiritual/supernatural experience were apparently inseparable from everyday life, as the archaic self was inseparable from the communal sense of the world. Religion as we now know it is what replaced that prior way of being in relationship to ‘gods’, but it wasn’t only a different sense of the divine, for the texts refer to early people hearing the voices of spirits, godmen, dead kings, and ancestors. Religion was only necessary, according to Jaynes, when the voices went silent (i.e., when they were no longer heard externally because a singular voice had become internalized). The pre-religious mentality is what Jaynes called the bicameral mind, and it represents the earliest and longest stretch of civilization, maybe lasting for millennia upon millennia, going back to the first city-states.

The pressures on the bicameral mind began to stress the social order beyond what could be managed. Those late Bronze Age civilizations had barely begun to adapt to that complexity and weren’t successful. Only Egypt was left standing and, in its sudden isolation amidst a world of wreckage and refugees, it too was transformed. We speak of the Axial Age in the context of a later date because it took many centuries for empires to be rebuilt around moralizing religions (and other totalizing systems and often totalitarian institutions; e.g., large centralized governments with rigid hierarchies). The archaic civilizations had to be mostly razed to the ground before something else could more fully take their place.

There is something else to understand. To have moralizing big gods maintain social order, what is required is introspectable subjectivity (i.e., an individual to be controlled by morality). That is to say you need a narratizing inner space where a conscience can operate in the voicing of morality tales and the imagining of narratized scenarios, such as considering alternate possible future actions, paths, and consequences. This is what Jaynes was arguing, and it wasn’t vague speculation, as he was working with the best evidence he could accrue. Building on Jaynes’ work with language, Brian J. McVeigh has analyzed early texts to determine how often mind-words were found. Going by language use, during the late Bronze Age there was an increasing focus on psychological ways of speaking. Prior to that, morality as such wasn’t necessary, no more than were written laws, court systems, police forces, and standing armies — all of which appeared rather late in civilization.

What creates the introspectable subjectivity of the egoic self, i.e., Jaynesian ‘consciousness’? Jaynes suggests that writing was a prerequisite and it needed to be advanced beyond the stage of simple record-keeping. A literary canon likely developed first to prime the mind for a particular form of narratizing. The authors of the paper do note that written language generally came first:

“This megasociety threshold does not seem to correspond to the point at which societies develop writing, which might have suggested that moralizing gods were present earlier but were not preserved archaeologically. Although we cannot rule out this possibility, the fact that written records preceded the development of moralizing gods in 9 out of the 12 regions analysed (by an average period of 400 years; Supplementary Table 2)—combined with the fact that evidence for moralizing gods is lacking in the majority of non-literate societies — suggests that such beliefs were not widespread before the invention of writing. The few small-scale societies that did display precolonial evidence of moralizing gods came from regions that had previously been used to support the claim that moralizing gods contributed to the rise of social complexity (Austronesia and Iceland), which suggests that such regions are the exception rather than the rule.”

As for the exceptions, it’s possible they were influenced by the moralizing religions of societies they came in contact with. Scandinavians, long before they developed complex societies with large concentrated populations, were traveling and trading all over Eurasia, the Levant, and North Africa. This was happening in the Bronze Age, during the period of rising big gods and moralizing religion: “The analysis showed that the blue beads buried with the [Nordic] women turned out to have originated from the same glass workshop in Amarna that adorned King Tutankhamun at his funeral in 1323 BCE. King Tut´s golden deathmask contains stripes of blue glass in the headdress, as well as in the inlay of his false beard.” (Philippe Bohstrom, Beads Found in 3,400-year-old Nordic Graves Were Made by King Tut’s Glassmaker). It would be best not to fall prey to notions of untouched primitives.

We can’t assume that these exceptions were actually exceptional, in supposedly being isolated examples contrary to the larger pattern. Even hunter-gatherers have been heavily shaped by the millennia of civilizations that surrounded them. Occasionally finding moralizing religions among simpler and smaller societies is no more remarkable than finding metal axes and t-shirts among tribal people today. All societies respond to changing conditions and adapt as necessary to survive. The appearance of moralizing religions and the empires that went with them transformed the world far beyond the borders of any given society, not that borders were all that well defined back then anyway. The large-scale consequences have spread across the earth these past three millennia, a tidal wave hitting some places sooner than others but in the end leaving none untouched. We are all now under the watchful eye of big gods or else their secularized equivalent, the big brother of the surveillance state.

* * *

Moralizing gods appear after, not before, the rise of social complexity, new research suggests
by Redazione

Professor Whitehouse said: ‘The original function of moralizing gods in world history may have been to hold together large but rather fragile, ethnically diverse societies. It raises the question as to how some of those functions could still be performed in today’s increasingly secular societies – and what the costs might be if they can’t. Even if world history cannot tell us how to live our lives, it could provide a more reliable way of estimating the probabilities of different futures.’

When Ancient Societies Hit a Million People, Vengeful Gods Appeared
by Charles Q. Choi

“For we know Him who said, ‘And I will execute great vengeance upon them with furious rebukes; and they shall know that I am the Lord, when I shall lay my vengeance upon them.'” Ezekiel 25:17.

The God depicted in the Old Testament may sometimes seem wrathful. And in that, he’s not alone; supernatural forces that punish evil play a central role in many modern religions.

But which came first: complex societies or the belief in a punishing god? […]

The researchers found that belief in moralizing gods usually followed increases in social complexity, generally appearing after the emergence of civilizations with populations of more than about 1 million people.

“It was particularly striking how consistent it was [that] this phenomenon emerged at the million-person level,” Savage said. “First, you get big societies, and these beliefs then come.”

All in all, “our research suggests that religion is playing a functional role throughout world history, helping stabilize societies and people cooperate overall,” Savage said. “In really small societies, like very small groups of hunter-gatherers, everyone knows everyone else, and everyone’s keeping an eye on everyone else to make sure they’re behaving well. Bigger societies are more anonymous, so you might not know who to trust.”

At those sizes, you see the rise of beliefs in an all-powerful, supernatural person watching and keeping things under control, Savage added.

Complex societies gave birth to big gods, not the other way around: study
from Complexity Science Hub Vienna

“It has been a debate for centuries why humans, unlike other animals, cooperate in large groups of genetically unrelated individuals,” says Seshat director and co-author Peter Turchin from the University of Connecticut and the Complexity Science Hub Vienna. Factors such as agriculture, warfare, or religion have been proposed as main driving forces.

One prominent theory, the big or moralizing gods hypothesis, assumes that religious beliefs were key. According to this theory, people are more likely to cooperate fairly if they believe in gods who will punish them if they don’t. “To our surprise, our data strongly contradict this hypothesis,” says lead author Harvey Whitehouse. “In almost every world region for which we have data, moralizing gods tended to follow, not precede, increases in social complexity.” Even more so, standardized rituals tended on average to appear hundreds of years before gods who cared about human morality.

Such rituals create a collective identity and feelings of belonging that act as social glue, making people behave more cooperatively. “Our results suggest that collective identities are more important to facilitate cooperation in societies than religious beliefs,” says Harvey Whitehouse.

Society Creates God, God Does Not Create Society
by Razib Khan

What’s striking is how soon moralizing gods show up after the spike in social complexity.

In the ancient world, early Christian writers explicitly asserted that it was not a coincidence that their savior arrived with the rise of the Roman Empire. They contended that a universal religion, Christianity, required a universal empire, Rome. There are two ways you can look at this. First, that the causal arrow is such that social complexity leads to moralizing gods, and that’s that. The former is a necessary condition for the latter. Second, one could suggest that moralizing gods are a cultural adaptation to large complex societies, one of many, that dampen instability and allow for the persistence of those societies. That is, social complexity leads to moralistic gods, who maintain and sustain social complexity. To be frank, I suspect the answer will be closer to the second. But we’ll see.

Another result that was not anticipated, I suspect, is that ritual religion emerged before moralizing gods. In other words, instead of “Big Gods,” it might be “Big Rules.” With hindsight, I don’t think this is coincidental since cohesive generalizable rules are probably essential for social complexity and winning in inter-group competition. It’s not a surprise that legal codes emerge first in Mesopotamia, where you had the world’s first anonymous urban societies. And rituals lend themselves to mass social movements in public to bind groups. I think it will turn out that moralizing gods were grafted on top of these general rulesets, which allow for coordination, cooperation, and cohesion, so as to increase their import and solidify their necessity due to the connection with supernatural agents, which personalize the sets of rules from on high.

Complex societies precede moralizing gods throughout world history
by Harvey Whitehouse, Pieter François, Patrick E. Savage, Thomas E. Currie, Kevin C. Feeney, Enrico Cioni, Rosalind Purcell, Robert M. Ross, Jennifer Larson, John Baines, Barend ter Haar, Alan Covey, and Peter Turchin

The origins of religion and of complex societies represent evolutionary puzzles1–8. The ‘moralizing gods’ hypothesis offers a solution to both puzzles by proposing that belief in morally concerned supernatural agents culturally evolved to facilitate cooperation among strangers in large-scale societies9–13. Although previous research has suggested an association between the presence of moralizing gods and social complexity3,6,7,9–18, the relationship between the two is disputed9–13,19–24, and attempts to establish causality have been hampered by limitations in the availability of detailed global longitudinal data. To overcome these limitations, here we systematically coded records from 414 societies that span the past 10,000 years from 30 regions around the world, using 51 measures of social complexity and 4 measures of supernatural enforcement of morality. Our analyses not only confirm the association between moralizing gods and social complexity, but also reveal that moralizing gods follow—rather than precede—large increases in social complexity. Contrary to previous predictions9,12,16,18, powerful moralizing ‘big gods’ and prosocial supernatural punishment tend to appear only after the emergence of ‘megasocieties’ with populations of more than around one million people. Moralizing gods are not a prerequisite for the evolution of social complexity, but they may help to sustain and expand complex multi-ethnic empires after they have become established. By contrast, rituals that facilitate the standardization of religious traditions across large populations25,26 generally precede the appearance of moralizing gods. This suggests that ritual practices were more important than the particular content of religious belief to the initial rise of social complexity.


Spartan Diet

There are a number of well-known low-carb diets. The most widely cited is that of the Inuit, but the Masai are often mentioned as well. I came across another example in Jack Weatherford’s Genghis Khan and the Making of the Modern World (see here for earlier discussion).

Mongols lived off of meat, blood, and milk paste. This diet, as the Chinese observed, allowed the Mongol warriors to ride and fight for days on end without needing to stop for meals. Part of this is because they could eat while riding, but there is a more fundamental factor. This diet is so low-carb as to be ketogenic. And long-term ketosis leads to fat-adaptation, which allows for high energy and stamina even without meals, as long as one has enough fat reserves (i.e., body fat). The feast-and-fast style of eating is common among non-agriculturalists.

There are other historical examples I haven’t previously researched. Ori Hofmekler, in The Warrior Diet, claims that Spartans and Romans ate in a brief period each day, about a four-hour window — because of the practice of having a communal meal once a day. This basically meant fasting for lengthy periods, although today it is often described as time-restricted eating. As I recall, Sikh monks have a similar practice of eating only one meal a day, during which they are free to eat as much as they want. The trick to this diet is that it decreases overall food intake and keeps the body in ketosis more often — if starchy foods are restricted enough and the body is fat-adapted, this lessens hunger and cravings.

The Mongols may have been doing something similar. The thing about ketosis is that the desire to snack all the time simply goes away. You don’t have to force yourself into food deprivation, and it isn’t starvation, even when going without food for several days. As long as there is plenty of body fat and you are fat-adapted, the body maintains health, energy, and mood just fine until the next big meal. Even non-warrior societies do this. The meat-loving and blubber-gluttonous Inuit don’t tolerate aggression in the slightest, and they certainly aren’t known for amassing large armies and going on military campaigns. Or consider the Piraha, who are largely pacifists, banishing their own members if they kill another person, even someone from another tribe. The Piraha get about 70% of their diet from fish and other meat, that is to say a ketogenic diet. Plus, even though surrounded by lush forests filled with a wide variety of food, both plants and animals, the Piraha regularly choose not to eat — sometimes for no particular reason, but also sometimes when doing communal dances over multiple days.

So, I wouldn’t be surprised if Spartan and Roman warriors had similar practices, especially the Spartans, who didn’t farm much (the grains grown by the Spartans’ slaves likely were most often fed to the slaves, not as much to the ruling Spartans). As for the Romans, their diet probably became more carb-centric as Rome grew into an agricultural empire. But early on, in the days of the Roman Republic, Romans probably were like Spartans in the heavy focus they would have put on raising cattle and hunting game. Still, a diet doesn’t have to be heavy in fatty meat to be ketogenic, as long as it involves some combination of calorie restriction, portion control, narrow meal windows, intermittent fasting, etc. — all of them ways of lessening the total intake of starchy foods.

One of the most common meals for Spartans was a blood and bone broth made from boiled pork mixed with salt and vinegar, thick in consistency and black in color. That would have included a lot of fat, fat-soluble vitamins, minerals, collagen, electrolytes, and much else. It was a nutrient-dense elixir of health, however horrible it may seem to the modern palate. And it probably was low-carb, depending on what else might’ve been added to it. Even the wine Spartans drank was watered down, as drunkenness was frowned upon. The purpose was probably more to kill unhealthy microbes in the water, as watered-down beer did for early Americans millennia later, and so it would have added little sugar to the diet. Like the Mongols, they also enjoyed dairy. And they did have some grain foods such as bread, but apparently bread was never a staple of their diet.

One thing they probably ate little of was olive oil, assuming it was used at all, as it was rarely mentioned in ancient texts and only became popular among Greeks in recent history, specifically the past century (discussed by Nina Teicholz in The Big Fat Surprise). Instead, Spartans, as with most other early Greeks, would have preferred animal fat, mostly lard in the case of the Spartans, whereas many other less landlocked Greeks preferred fish. Other foods the ancient Greeks, Spartans and otherwise, lacked were tomatoes, later introduced from the New World, and noodles, later introduced from China, both arriving during the colonial era of recent centuries. So, a traditional Greek diet would have looked far different from what we think of as the modern ‘Mediterranean diet’.

On top of that, Spartans were proud of eating very little and proud of their ability to fast. Plutarch (2nd century AD) writes in Parallel Lives: “For the meals allowed them are scanty, in order that they may take into their own hands the fight against hunger, and so be forced into boldness and cunning.” Also, Xenophon, who was alive while Sparta existed, writes in Spartan Society 2 that the Spartans would “furnish for the common meal just the right amount for [the boys in their charge] never to become sluggish through being too full, while also giving them a taste of what it is not to have enough.” (from The Ancient Warrior Diet: Spartans) It’s hard to see how this wouldn’t have been ketogenic. Spartans were known for being great warriors, achieving feats of military prowess that would’ve been impossible for lesser men. On their fatty meat diet of pork and game, they were taller and leaner than other Greeks. They didn’t have large meals and fasted for most of the day, but when they did eat, it was food dense in fat, calories, and nutrition.

* * *

Ancient Spartan Food and Diet
from Legend & Chronicles

The Secrets of Spartan Cuisine
by Helena P. Schrader