Below is a passage from a book I got for my birthday. I was skimming through this tome and came across a note from one of the later chapters. It discusses a theory about how new substances, caffeine and sugar, helped cause changes in mentality during colonialism, early modernity, and industrialization. I first came across a version of this theory back in the late ’90s or early Aughts, in a book I no longer own and haven’t been able to track down since.
So, it was nice coming across this brief summary with references. But in the other version, the argument was that these substances (including nicotine, cocaine, etc.; along with a different kind of drug, opium) were central to the Enlightenment Age and the post-Enlightenment world, something only suggested by this author. This is a supporting theory for my larger theory on addictive substances, including some thoughts on how they replaced psychedelics, as written about previously: Sugar is an Addictive Drug, The Agricultural Mind, Diets and Systems, and "Yes, tea banished the fairies." It has to do with what has built the rigid boundaries of modern egoic consciousness and hyper-individualism. It was a revolution of the mind.
Many have made arguments along these lines. It's not hard to make the connection. Diverse leading figures throughout history have observed the important changes that followed as these substances were introduced and spread. In recent years, this line of thought has been catching on. Michael Pollan came out with an audiobook about the role coffee has played, "Caffeine: How Coffee and Tea Created the Modern World." I haven't listened to it because it's only available through Audible and I don't do business with Amazon, but reviews of it and interviews with Pollan about it make it sound fascinating. Pollan has many thoughts about psychedelics as well, although I'm not sure if he has talked about psychedelics in relation to stimulants. Steven Johnson has also written and talked about this.
As a side note, there is also an interesting point that connects rising drug addiction with an earlier era of moral panic, specifically a crisis of identity. There was a then new category of disease called neurasthenia, as first described by George Miller Beard. It replaced earlier notions of ‘nostalgia’ and ‘nerves’. In many ways, neurasthenia could be thought of as some kind of variant of mood disorder with some overlap with depression. But a passage from another work, also included below, indicates that drug addiction was closely linked in this developing ideology about the diseased mind and crippled self. At that stage, the relationship wasn’t entirely clear. All that was understood was that, in a fatigued and deficient state, increasing numbers turned to drugs as a coping mechanism.
Drugs may have helped to build modern civilization. But then they quickly came to be taken as a threat. This concern was implicitly understood and sometimes overtly acted upon right from the beginning. With the colonial trade, laws were often quickly put in place to make sugar and coffee controlled substances. Sugar for a long time was only sold in pharmacies. And a number of fearful rulers tried to ban coffee, not unlike how psychedelics were treated in the 1960s. It's not only that these substances were radicalizing and revolutionary within the mind and society as seen in retrospect. Many at the time realized these addictive and often stimulating drugs (and one might even call sugar a drug) were powerful substances right from the beginning. That is what made them such profitable commodities, commodities that required an emergent militaristic capitalism that was violently brutal in fulfilling demand with forced labor.
* * *
The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous by Joseph Henrich, Ch. 13 "Escape Velocity", section "More Inventive?", p. 289, note 58
People’s industriousness may have been bolstered by new beverages: sugar mixed into caffeinated drinks—tea and coffee. These products only began arriving in Europe in large quantities after 1500, when overseas trade began to dramatically expand. The consumption of sugar, for example, rose 20-fold between 1663 and 1775. By the 18th century, sugary caffeinated beverages were not only becoming part of the daily consumption of the urban middle class, but they were also spreading into the working class. We know from his famous diary that Samuel Pepys was savoring coffee by 1660. The ability of these beverages to deliver quick energy—glucose and caffeine—may have provided innovators, industrialists, and laborers, as well as those engaged in intellectual exchanges at cafés (as opposed to taverns), with an extra edge in self-control, mental acuity, and productivity. While sugar, coffee, and tea had long been used elsewhere, no one had previously adopted the practice of mixing sugar into caffeinated drinks (Hersh and Voth, 2009; Nunn and Qian, 2010). Psychologists have linked the ingestion of glucose to greater self-control, though the mechanism is a matter of debate (Beedie and Lane, 2012; Gailliot and Baumeister, 2007; Inzlicht and Schmeichel, 2012; Sanders et al., 2012). The anthropologist Sidney Mintz (1986, p. 85) suggested that sugar helped create the industrial working class, writing that “by provisioning, sating—and, indeed, drugging—farm and factory workers, [sugar] sharply reduced the overall cost of creating and reproducing the metropolitan proletariat.”
One such observer was George Miller Beard, the well-known physician who gave the name neurasthenia to the age’s most representative neurological disorder. In 1871 Beard wrote that drug use “has greatly extended and multiplied with the progress of civilization, and especially in modern times.” He found that drug use had spread through “the discovery and invention of new varieties [of narcotic], or new modifications of old varieties.” Alongside technological and scientific progress, Beard found another cause for the growth of drug use in “the influence of commerce, by which the products of each clime became the property of all.” He thus felt that a new economic interconnectedness had increased both the knowledge and the availability of the world’s regionally specific intoxicants. He wrote that “the ancient civilizations knew only of home made varieties; the moderns are content with nothing less than all of the best that the world produces.” Beard blamed modern progress for increased drug use, and he identified technological innovation and economic interconnectedness as the essence of modernity. Those were, of course, two central contributors to the modern cultural crisis. As we shall see, many experts believed that this particular form of (narcotic) interconnectedness produced a condition of interdependence, that it quite literally reduced those on the receiving end from even a nominal state of independence to an abject dependence on these chemical products and their suppliers.
There was probably no more influential authority on the relationship between a physical condition and its historical moment than George Miller Beard. In 1878 Beard used the term "neurasthenia" to define the "lack of nerve strength" that he believed was "a functional nervous disease of modern, and largely, though not entirely, of American origin." He had made his vision of modern America clear two years earlier, writing that "three great inventions – the printing press, the steam engine, and the telegraph – are peculiar to our modern civilization, and they give it a character for which there is no precedent." The direct consequence of these technological developments was that "the methods and incitements of brain-work have multiplied far in excess of average cerebral developments." Neurasthenia was therefore "a malady that has developed mainly during the last half century." It was, in short, "the cry of the system struggling with its environment." Beard's diagnosis is familiar, but less well known is his belief that a "susceptibility to stimulants and narcotics and various drugs" was among neurasthenia's most attention-worthy symptoms. The new sensitivity to narcotics was "as unprecedented a fact as the telegraph, the railway, or the telephone." Beard's claim suggests that narcotic use might fruitfully be set alongside other diseases of "overcivilization," including suicide, premarital sex (for women), and homosexuality. As Dr. W. E. Waugh wrote in 1894, the reasons for the emergence of the drug habit "are to be found in the conditions of modern life, and consist of the causative factors of suicide and insanity." Waugh saw those afflictions as "the price we pay for our modern civilization."24
Though Beard was most concerned with decreased tolerance – people seemed more vulnerable to intoxication and its side effects than they once were – he also worried that the changing modern environment exacerbated the development of the drug habit. Beard explained that a person whose nervous system had become "enfeebled" by the demands of modern society would naturally turn wherever he could for support, and thus "anything that gives ease, sedation, oblivion, such as chloral, chloroform, opium or alcohol, may be resorted to at first as an incident, and finally as a habit." Not merely to overcome physical discomfort, but to obtain "the relief of exhaustion, deeper and more distressing than pain, do both men and women resort to the drug shop." Neurasthenia was brought on "under the press and stimulus of the telegraph and railway," and Beard believed that it provided "the philosophy of many cases of opium or alcohol inebriety."25
Sugar is addictive. That is not a metaphor. It is literally an addictive drug, a gateway drug. Sugar is the first drug that most humans ever experience.
For many Americans, the addictive nature of it begins shaping the brain in infancy, as sweeteners are put into formula. And if you didn’t get formula, I bet you didn’t make it past toddlerhood without getting regularly dosed with sugar: sweet baby food, candy, cake, etc.
Addiction is trained into us during the most critical years of physiological development. What we eat in the first few years, as research shows, determines what tastes good to us for the rest of our lives. We are hooked.
The addictive properties of sugar are perhaps the most studied.[6] Rats will choose sugar, high fructose corn syrup, and saccharin over cocaine and heroin. Rats have shown a withdrawal syndrome similar to that of morphine.[7] Sugar activates the dopamine pathway.[8] Food addiction recovery groups often recommend abstinence from sugar and sweeteners.[8]
Indeed, research on rats from Connecticut College has shown that Oreo cookies activate more neurons in the brain's pleasure center than cocaine does (and just like humans, the rats would eat the filling first). And a 2008 Princeton study found that, under certain circumstances, not only could rats become dependent on sugar, but this dependency correlated with several aspects of addiction, including craving, binging, and withdrawal.
Finally, there is strong evidence of the existence of sugar addiction, both at preclinical and clinical level. Our model has demonstrated that five out of eleven criteria for SUD are met, specifically: use of larger amounts and for longer than intended, craving, hazardous use, tolerance, and withdrawal. From an evolutionary perspective, we must consider addiction as a normal trait that permitted humans to survive primitive conditions when food was scarce. As we evolved culturally, the neural circuits involved in addictive behaviors became dysfunctional and instead of helping us survive they are in fact compromising our health. From a revolutionary perspective, understanding the molecular, and neurological/psychological intricacies of addiction (sugar, drugs of abuse) will permit the discovery of new therapies (pharmacological and non-pharmacological) and possible management of at least one crucial factor in the occurrence of obesity.
German Lopez: Walk me through the argument for treating sugar like a controlled substance.
Robert Lustig: The definition of addicted is that you know it’s bad for you and you can’t stop anyway, like heroin, cocaine, alcohol, and nicotine. You know it’s bad for you. You know it will kill you. But you can’t stop anyway, because the biochemical drive to consume is greater than any cognitive ability to restrain oneself.
There are two phenomena attached to addiction: one’s called tolerance, the other is withdrawal. It turns out sugar does both of those as well.
If a substance is abused and addictive and it contributes to societal problems, that’s criteria for regulation.
GL: Is that really grounds for considering it a controlled substance, though?
RL: There are four things that have to be met in order to consider a substance worthy of regulation. Number one: ubiquity — you can’t get rid of it, it’s everywhere. Number two: toxicity — it has to hurt you. Number three: abuse. Number four: externalities, which means it has a negative impact on society.
Sugar meets all four criteria, hands down. One, it’s ubiquitous — it’s everywhere, and it’s cheap. Two, as I mentioned, we have a dose threshold, and we are above it. Three, if it’s addictive, it’s abused. Four, how does your sugar consumption hurt me? Well, my employer has to pay $2,750 per employee for obesity management and medicine, whether I’m obese or not.
GL: The thing that led me to look into your paper is that I wrote an article a couple weeks back about how the three most dangerous drugs in the country are legal: tobacco, alcohol, and prescription painkillers. And a few people mentioned that I forgot sugar. That idea really interested me.
RL: Yeah, that’s right. The Wall Street Journal asked Americans what are the most dangerous of four substances in America: tobacco, 49 percent; alcohol, 24 percent; sugar, 15 percent; and then marijuana, 8 percent. Sugar was twice as worrisome to Americans as marijuana was. How about that?
GL: One potential hurdle is that controlled substances are typically seen as drugs. Do you consider sugar a drug?
RL: Of course it’s a drug. It’s very simple: a drug is a substance that has effects on the body, and the effects have to be exclusive of calories.
So in order to qualify it as a drug, the negative effects of sugar have to be exclusive of its calories. Is 100 calories of sugar different from, say, 100 calories in broccoli? The answer is absolutely.
Can you name another substance of abuse for which the effect of the substance is more dangerous than the calories it harbors? Alcohol. Its calories are dangerous not because they’re calories; they’re dangerous because they’re part of alcohol. Sugar is the same.
Sugar is the alcohol of a child. You would never let a child drink a can of Budweiser, but you would never think twice about a can of Coke. Yet what it does to the liver, what it does to the arteries, what it does to the heart is all the same. And that’s why we have adolescents with type 2 diabetes.
There are some studies of rats that are completely addicted to cocaine. So they have this drip, cocaine just comes out, and so they're consuming it all the time. This is the crazy part. As soon as they taste sugar, they don't care about the cocaine anymore and all they care about is the sugar. That is how addictive sugar is. It's so addictive that rats that are addicted to cocaine, which we all know is an addictive substance, prefer the sugar over the cocaine.
There is another study where rats are pulling a cord, and every time they pull the cord a little drip of sugar water comes out. So they're confined in this space and that is all they get. So then they learn to pull the cord so that they can get their drip of sugar. And over time the researchers open the door so that they have access to the outside. They even have access to family and they have access to all these other foods.
And guess what these rats do. They don’t care about anything else, but they just wait and wait and obsessively pull the cord to try to get sugar. This is how scary and addictive sugar is.
So the question is, is fast food addictive? What do you think? Yes? No? Okay, so we actually looked at that question.
So everybody familiar with this book? Michael Moss put this out, "Salt Sugar Fat: How the Food Giants Hooked Us", right? This is wrong, this is a mistake. Because there is one thing not on the list. What's missing? Caffeine.
Now we’ve got fast food! Okay, salt, sugar, fat and caffeine, right? So the question is, of these four which are addictive?
Let’s talk about salt. Is salt addictive? No, it’s not addictive. In humans the threshold is physiologically fixed; higher levels are attributable to preference, but you can alter that preference. Lots of people do, especially when they have to go low-salt for some reason. And we know because we take care of a disease in endocrinology called salt-losing congenital adrenal hyperplasia, where the kidneys are losing salt nonstop. But when we give them the salt-retaining hormone that works in the kidney, called aldosterone, their salt intake goes way down. And if they were addicted, that wouldn’t happen.
So when we fix their physiology, their preference gets a lot better. So salt? Not addictive.
Now let’s take fat. Is fat addictive? What do you think? Nope. Rodents binge but show no signs of dependence, and humans always binge on high-fat, high-carb, or high-sugar items, like pizza and ice cream. You don’t binge on high fat per se; otherwise the Atkins diet would have everybody addicted, and they’ll tell you they’re losing weight. How could they lose weight if they were all addicted?
Energy density actually has a stronger association with obesity and metabolic syndrome than fat does.
So, fat? Not addictive.
So we are left with these two. Caffeine? Oh man, caffeine is addictive, and if you take my Starbucks away from me I’ll kill you. Model drug of dependence, gateway drug in fact. Dependence has been shown in children, adolescents, and adults; 30% of those who consume it meet the DSM criteria for dependence, and physiological addiction is well established, with the headache, and the test performance, and everything else. Mega addictive.
But do you see anybody going out and regulating Starbucks or pizza or anything like that? Why? Because it’s not toxic. It’s addictive, but not toxic, unless you mix it with alcohol, and then you’ve got something called Four Loko, and that we are banning. Everybody got it?
So when it’s toxic and addictive, we ban it or we regulate it. And so, caffeine and alcohol together, that’s a bad deal. But caffeine alone? Keep your hands off my Starbucks.
So caffeine? Yes, addictive.
Okay, that leaves this one. Sugar, is sugar addictive? What do you think? You know, we’ve known this for a long time, because, anybody know what this is? It’s called sweeties. This is a super-concentrated sucrose (sugar) solution that you dip the pacifier in and put in the newborn baby boy’s mouth before you do the circumcision, because it releases opioids and deadens the pain. And this has been known forever. Then you mix it with a little wine and then you’ve got a really good cocktail, eh?
So is there really such a thing as sugar addiction? We have to look for similarities to other drugs of dependence like nicotine, morphine, amphetamine, cocaine. The one I think is most appropriate is alcohol, because after all alcohol and sugar are basically metabolized the same way. After all, where do you get alcohol from? Fermentation of sugar; it’s called wine, right? We do it every day, up in Sonoma. The big difference between alcohol and sugar is that for alcohol the yeast does the first step of metabolism, called glycolysis; for sugar we do our own first step. But after that, when the mitochondria see it, it doesn’t matter where it came from. And that’s the point, and that’s why they both cause the same diseases. And they do the same thing to the brain.
So the criteria for addiction in animals are bingeing, withdrawal, craving, and then there is one down here called cross-sensitization with other drugs of abuse. That means that if you expose an animal to one drug of abuse, like cocaine, for three weeks and addict them, and then you expose them to a second drug they’ve never seen before, like say amphetamine, they’re addicted to the amphetamine even though they’d never seen it before, because the dopamine receptors are already down-regulated, because they are the same dopamine receptors. Everybody got it?
Okay, and so, does sugar do this? Absolutely. Q.E.D., slam dunk: sugar is addictive in animals.
What about humans? Who saw this movie? Right? Did you like it? More or less?
I have a big problem with this movie, because if you watch it, his doctor, Morgan’s doctor, keeps saying: “You gotta get off this high fat diet, high fat diet, high fat diet.” But it’s not the high fat diet, it’s the high sugar diet; that’s what caused all the problems.
So, can sugar be addictive? Watch.
“I was feeling bad… in the car, feeling like… I was feeling really, really sick and unhappy… started eating, feel great… feel really good now… I feel so good it’s crazy… Ain’t that right, baby? Yeah, you’re right, darling.”
This was on day 18 of his 30-day, all-McDonald’s sojourn. He just described withdrawal, that’s withdrawal, and he needed another hit in order to feel good again. He was a vegan, right? Because his girlfriend was a vegan chef, and in 18 days he’s a sugar addict.
So, you tell me. This is what we are dealing with. We are dealing with an industry that wants us to consume its product. Well, gee, every industry wants us to consume its product in some fashion or another. The question is: what if it hurts you? What if it hurts you?
“There have been numerous explanations for why the fairies disappeared in Britain – education, advent of electrical lighting, evangelical religion. But one old man in the village of Alves, Moray, Scotland knew the real reason in 1851: tea drinking. Yes, tea banished the fairies.”
The historian Owen Davies wrote this in referring to a passage from an old source. He didn’t mention where it came from. In doing a search on Google Books, multiple results came up. The earliest is supposedly on page 624 of the 1850 Family Herald – Volumes 8-9, but there is no text for it available online. Several other books from the 1850s to 1880s reprinted it. (1)
Below is the totality of what Davies shared. It might have originally been part of a longer passage, but it is all that I could find from online sources. It’s a short account and intriguing.
“How do you account,” said a north country (3) minister of the last age (the late Rev. Mr. M’Bean, of Alves,) to a sagacious old elder of his session, “for the almost total disappearance of the ghosts and fairies that used to be common in your young days?” “Tak’ my word for’t, minister,” replied the old man, “it’s a’ owing to the tea; whan the tea cam’ in, ghaists an’ fairies gaed out. Weel do I mind whan at a’ our neebourly meetings — bridals, christenings, lyke-wakes, an’ the like — we entertained ane anither wi’ rich nappy ale; an’ when the verra dowiest o’ us used to get warm i’ the face, an’ a little confused i’ the head, an’ weel fit to see amaist onything when on the muirs on yer way hame. But the tea has put out the nappy; an’ I have remarked that by losing the nappy we lost baith ghaists and fairies.”
Will Hawkes noted that, “‘nappy’ ale meant strong ale.” In response to Davies, James Evans suggested that, “One thing which I haven’t seen mentioned here is that there is an excellent chance that the beer being produced in this region, at this time, was mildly hallucinogenic.” And someone following that asked, “Due to ergot?” Now that makes it even more intriguing to consider. There might have been good reason people used to see more apparitions. Whether or not ergot was involved, we do know that in the past all kinds of herbs were put into groot or gruit ales for nutritional and medicinal purposes but also maybe for the effect they had on the mind and mood. Herbs, instead of hops, used to be what distinguished ale from beer.
Let me make some connections. Alcohol is a particular kind of drug. Chuck Pezeshki argues that, “alcohol is much more of a ‘We’ drug when used in moderation, than an ‘I’ drug” (Leadership for Creativity Isn’t all Child’s Play). He adds that, “There’s a reason for the old saying ‘when the pub closes, the revolution starts!’” Elsewhere, he offers the contrast that, “Alcohol on average is pro-empathetic, sugar anti-empathetic” (The Case Against Sugar — a True Psychodynamic Meta-Review).
Alcohol from grains and grapes defined much of civilization, more so than any other mind-altering substance. Or that was the case once farming became more systematized and productive during the Axial Age, such that something had to be done with the more consistent surplus yields at a time when storage and preservation was a problem. And with increased surpluses came increased size and concentration of populations, i.e., mass urbanization of ever larger cities and ever more expansive empires.
That brought with it a need to solve the problem of unsafe drinking water, easily resolved by making alcohol that kills microbes. Whatever the case may have been in prior ages, at least by the early medieval period drunkenness had become a defining feature of many cultures in the Western world:
“Indeed the general rowdiness of English drinking at this period, which precluded priests from participating, is attested in the twelfth century historian William of Malmesbury. In describing the character of the Angles before the Norman conquest of AD 1066 he says that all drank together, throughout the night and day, that intoxication was common, making men effeminate, and that the people would eat until they were sick and drink until they threw up (these last two habits being passed on to the conquerors). Another tradition of theirs not mentioned until after the conquest is that of the wassail, or toast, first found in the poet Layamon around AD 1200.”
It makes sense. Until the modern era of reforms focused on public health, a large part of the population was dependent on alcoholic beverages for safe drinking. Even common wells were easily contaminated because people with unclean hands were constantly handling the bucket that was repeatedly dropped back down into the water. No one back then understood that it was bacteria that made them sick, but they did understand that alcohol kept them healthy. The health component was emphasized by adding in nutritional and medicinal herbs.
So they developed a drinking culture. Everyone, not only men but women and children as well, drank alcohol on a daily basis. Much of it was low in alcohol level (e.g., small beer), but much of it had greater amounts. Many people spent their lives in a mild state of near constant inebriation. Throw in some mildly mind-altering herbs and it would’ve shaped the entire mentality and culture. Going back to the ancient world, alcohol had been associated with spirituality, religion, ritual, worship, ecstasy, and the supernatural. The specific practice of groot ales is described in the earliest records. Such a cultural habit, probably extending over millennia, had to have had a major impact on their sense of identity and reality, in supporting an overt expression of the bundled mind, what Julian Jaynes describes as the bicameral mind.
Think about it. Tea, coffee, cocoa, and sugar weren’t introduced into the West until colonialism; i.e., early modernity. It took a while for them to become widespread. They first were accessible only to those in the monied classes, including the intellectual and artistic elite but also the clerical elite. Some have argued that these stimulants are what fueled the Enlightenment Age. And don’t forget that tea played a key role in instigating the American Revolution. Changes in diet often go hand in hand with changes in culture (see below). One might note that this was the precise era when the elite began talking about the early spread of a mental health epidemic referred to as melancholia and acedia, what we would now understand as depression.
There are those like Terence McKenna who see psychedelics as having much earlier played a central role in shaping the human mind. This is indicated by the wide use of psychedelics by indigenous populations all over the planet and by evidence of their use among ancient people. Psychedelics and entheogens preceded civilization, and it seems that their use declined as civilization further developed. What replaced psychedelics over time were the addictive stimulants. That other variety of drug has a far different effect on the human mind and culture.
The change slowly began millennia ago. But the takeover by the addictive mentality only seems to have come fully into its own in recent centuries. The popularizing of caffeinated drinks in the 19th century is a key example of the modernizing of the mind. People didn’t simply have more active imaginations in the past. They really did live in a cultural worldview where apparitions were common, maybe in the way that Julian Jaynes proposed that in the Bronze Age people heard communal voices. These weren’t mere hallucinations. It was central to the lived reality of their shared culture.
In traditional societies, alcohol was used for social gatherings. It brought people together and, maybe combined with other substances, made possible a certain experience of the world. With the loss of that older sense of communal identity, there was the rise of the individual mindset isolated by addictive stimulants. This is what has fueled all of modernity. We’ve been buzzing ever since. Stimulants broke the spell of the fairies only to put us under a different spell, that of the demiurgic ego-consciousness.
“The tea pots full of warm water,” as Samuel Tissot put it in his 1768 An Essay on Diseases Incident to Literary and Sedentary Persons, “I see upon their tables, put me in mind of Pandora’s box, from whence all sorts of evils issue forth, with this difference however, that they do not even leave the hopes of relief behind them; but, on the contrary, by inducing hypochondriac complaints, diffuse melancholy and despair.”(2) That is to say the modern mind was transformed and, according to some, not in a good way. The living world became inanimate, no longer bustling with animistic beings, and presumably the last of the bicameral voices went silent.
This change didn’t come easily, though. The bundled mind is probably the evolutionary norm of the human species. It’s our default mode. That is because humans evolved with psychedelic plants and rotting alcoholic fruit (some other species have been observed stashing fruit to eat later, maybe with the intention of becoming inebriated). The stimulants, although around previously, were far less common until agriculture. The targeting of alcohol and psychedelics with criminalization was, as previously noted, social control as substance control; but it could be added that it was also mind control. Still, archaic habits die hard. If stimulants help people survive the unnatural stresses of the capitalist work week, it is alcohol that many turn to on the weekends to forget those stresses and become themselves again.
Immediately preceding U.S. Prohibition, there was a health epidemic involving moral panic and culture war. It was mixed up with ideological conflict over proper social norms, social roles, and social identities. In particular, there was an obsession over masculinity and the fear of emasculation or else of libidinous dissipation (e.g., an anti-masturbation campaign focused on young boys). Addiction and alcoholism were seen to play a role. This was at a time when a large number of northern European immigrants had established a heavy beer-drinking culture. Going back to the Greeks and Romans, there had long been a cultural belief that, according to Galenic humoral theory, beer was ‘cold’ and hence effeminizing. All alcohol was deemed to be potentially ‘cold’, but specifically beer.
This is the origin of the class bias toward wine, although wine too was sometimes seen as ‘cold’. It might also contribute to why stimulants were so important to the male-dominated work culture of capitalism. To be a man meant to be assertive and aggressive, to be mentally focused and intense, to always be on one’s game. The sedating and slowing effect of alcohol is the opposite of the ideal attributes of a successful alpha male as part of the capitalist elite.
* * *
(1) The Country Gentleman – Vol. XII No. 23 (1858), William Hopkin’s “The Cruise of the Betsey” from Fraser’s Magazine for Town and Country – Volume 58 (1858) and from Littell’s Living Age – Volume 59 (1858), John William Kirton’s One Thousand Temperance Anecdotes [&c.] (1868), John William Kirton’s A Second Thousand of Temperance Anecdotes (1877), The Church of England Temperance Chronicle – No. 42 Vol. VIII (1880), and The Guernsey Magazine – Vol. X No. 12 (1882).
(2) “There is another kind of drink not less hurtful to studious men than wine; and which they usually indulge in more freely; I mean warm liquors [teas], the use of which is become much more frequent since the end of the last century. A fatal prejudice insinuated itself into physic about this period. A new spirit of enthusiasm had been excited by the discovery of the circulation: it was thought necessary for the preservation of health to facilitate it as much as possible, by supplying a great degree of fluidity to the blood, for which purpose it was advised to drink a large quantity of warm water. Cornelius Bontekoe, a Dutch physician, who died afterwards at Berlin, first physician to the elector of Brandenburgh, published in 1679 a small treatise in Dutch, upon tea, coffee, and chocolate, in which he bestows the most extravagant encomiums on tea, even when taken to the greatest excess, as far as one or two hundred cups in a day, and denies the possibility of its being hurtful to the stomach. This error spread itself with surprising rapidity all over the northern part of Europe; and was attended with the most grievous effects. The æra of its introduction is marked by an unhappy revolution in the account of the general state of health at that time. The mischief was soon noticed by accurate observers. M. Duncan, a French physician settled at Rotterdam, published a small work in 1705, wherein we find, amidst a great deal of bad theory, some useful precepts against the use of hot liquors (I). M. Boerhaave strongly opposed this pernicious custom; all his pupils followed his example, and all our eminent physicians are of the same opinion. The prejudice has at last been prevented from spreading, and within these few years seems to have been rather less prevalent (m); but unfortunately it subsists still among valetudinarians, who are induced to continue these pernicious liquors, upon the supposition that all their disorders proceed from a thickness of blood. The tea-pots full of warm water I see upon their tables, put me in mind of Pandora’s box, from whence all sorts of evils issue forth, with this difference however, that they do not even leave the hopes of relief behind them; but, on the contrary, by inducing hypochondriac complaints, diffuse melancholy and despair. […]
“The danger of these drinks is considerably increased, as I have before observed, by the properties of the plants infused in them; the most fatal of these when too often or too freely used, is undoubtedly the tea, imported to us since near two centuries past from China and Japan, which has so much increased diseases of a languid nature in the countries where it has been introduced, that we may discover, by attending to the health of the inhabitants of any city, whether they drink tea or not; and I should imagine one of the greatest benefits that could accrue to Europe, would be to prohibit the importation of this famous leaf, which contains no essential parts besides an acrid corrosive gum, with a few astringent particles (o), imparting to the tea when strong, or when the infusion has stood a long time and grown cold, a styptic taste, slightly felt by the tongue, but which does not prevent the pernicious effects of the warm water it is drenched in. These effects are so striking, that I have often seen very strong and healthy men, seized with faintness, gapings, and uneasiness, which lasted for some hours after they had drank a few cups of tea fasting, and sometimes continued the whole day. I am sensible that these bad effects do not shew themselves so plainly in every body, and that there are some who drink tea every day, and remain still in good health; but these people drink it with moderation. Besides, the non-existence of any danger cannot be argued from the instances of some few who have been fortunate enough to escape it.
“The effects of coffee differing from those of tea, it cannot be placed in the same class; for coffee, although made with warm water, is not so pernicious for this reason, as it is on account of its being a powerful stimulus, producing strong irritations in the fibres by its bitter aromatic oil. This oil, combined as it is with a kind of very nourishing meal, and of easy digestion, would make this berry of great consequence in pharmacy, as one of the bitter stomachics, among which it would be the most agreeable, as well as one of the most active. This very circumstance is sufficient to interdict the common use of it, which must be exceedingly hurtful. A continual irritation of the fibres of the stomach must at length destroy their powers; the mucus is carried off, the nerves are irritated and acquire singular spasms, strength fails, hectic fevers come on with a train of other diseases, the cause of which is industriously concealed, and is so much the more difficult to eradicate, as this sharpness united with an oil seems not only to infect the fluids, but even to adhere to the vessels themselves. On the contrary, when seldom taken, it exhilerates, breaks down the slimy substances in the stomach, quickens its action, dispels the load and pains of the head, proceeding from interrupted digestions, and even clears the ideas and sharpens the understanding, if we may credit the accounts of men of letters, who have therefore used it very freely. But let me be permitted to ask, whether Homer, Thucydides, Plato, Xenophon, Lucretius, Virgil, Ovid, Horace, Petronius, to which I may venture to add Corneille and Moliere, whose masterpieces will ever be the delight of the remotest posterity, let me ask, I say, whether they drank coffee? Milk rather takes off from the irritation occasioned by coffee, but still does not entirely prevent all its pernicious effects, for even this mixture has some disadvantages peculiar to itself. Men of learning, therefore, who are prudent, ought in general to keep coffee as their favourite medicine, but should never use it as a common drink. The custom is so much the more dangerous, as it soon degenerates into a habit of necessity, which few men have the resolution to deprive themselves of. We are sensible of the poison, and swallow it because it is palatable.”
(3) The original passage that inspired this post asserted that the source of the anecdote was “a north country minister of the last age”. One might wonder what was considered the last age. A generation earlier? The century before? Anyway, maybe more significant was that, as the storyteller was of the north country, presumably the ‘old man’ who responded was also of the north country. At the very least, that meant north of London, but probably referring to the rural north from the Midlands to the borderlands, maybe all of it being part of the vast north to the literary imagination of the southern elite and urbanites.
This brings us to another point about ‘nappy ale’, probably referring to entheogenic groot ale. Non-herbal beer made with hops apparently first took hold most strongly in southern England, although with exceptions along the northern coast and major waterways. Yet ale remained the most popular and widespread alcoholic drink until the colonial era began in early modernity. The takeover of hops beer, at least in Merry Ol’ England (i.e., London and surrounding region), happened to coincide with the English Civil War. That conflict is what some consider the first modern revolution and class war. Also, it was primarily a contest for power between the largely northern and western Cavaliers and the southern Roundheads. One is then reminded of how important taverns were in the colonies during the American Revolution.
Joshua Thomas Ravenhill writes, “Brewing with hops had been established in the Low Countries by the early fourteenth century. As has been demonstrated by Milan Pajic, Doche aliens, mainly from Holland and Zeeland, were the first to start brewing beer in England by at least the end of the fourteenth century, and there were alien beerbrewers in England by at least 1399. It took a long time for beer to become as popular as ale in the capital. Bennett and Bich Luu, writing of London, argue that it was not until the reign of Elizabeth I that natives were drinking beer in greater quantities than ale and that large numbers of natives brewed beer” (The Experiences of Aliens in Later Medieval London and the Negotiation of Belonging, 1400 – 1540).
By the way, the use of hops in general only began to significantly take hold, in England and Europe, during the 14th century. That was the period of pre-Reformation religious heresy, political radicalism, class war, and peasants’ revolts. The English Peasants’ Revolt is the earliest conflict that sometimes gets called the first modern revolution, because the ideological rhetoric was showing the signs of what later would become more well articulated. That was the major shifting point for the decline of feudalism. The earliest of the enclosures began around then, only becoming a government-sanctioned enclosure movement with the Glorious Revolution following the English Civil War. So, the two major English internal conflicts that get called modern or modern-like revolutions bookended the period in which hops beer rose to replace groot ale.
As the original passage maybe indicates, it was the lower class of rural former peasantry that had been still drinking groot ale into the first centuries of the modern period, such that it was still a clear cultural memory in 1850. In Europe where hops were used earlier, “Ale made with gruit was a drink for the poor and the sick” (Richard W. Unger, Beer in the Middle Ages and the Renaissance). Hops was often a trade good and the poor were more dependent on local ingredients. Yet the groot tradition barely hung on in a few isolated locations: “In rural western Norway in the 1950s brewers still used pors, that is bog myrtle. The survival of the practice was certainly exceptional since in the sixteenth, but especially in the seventeenth centuries, there were campaigns in central Europe to get rid of grut or pors.”
Communal peasant-like identities also lasted longer in rural areas. Possibly, it was because groot ale supported such a mentality that it became a target of the reformers of morality, land, and agriculture. Not only does how land and property are structured shape the mind; so does what is grown on the land and who owns the land and decides what is grown there. That is what the enclosure movement was all about, in order to promote the enclosure of mind and identity in the form of the hyper-individualistic self as capitalist, consumer, and worker; as opposed to being defined by the communal reality of feudal villages and the commons (The Enclosure of the Mind).
Hops beer could be industrially mass-produced, as opposed to the more local and often female-dominated home production of groot ale. But it was also about productivity. Two things. Hallucinogens, even if mild, don’t put people in the mindset for long grueling hours of labor that requires intense focus, such as working in a factory or mine, particularly the night shift or double shifts. Hallucinogens mixed into alcohol are even less supportive of the profitable efficiency of workers. But even alcohol alone, if only hops beer, is not all that beneficial for the bottom line. In early capitalism, workers were sometimes paid with beer; but that practice quickly ended. Interestingly, early modern politics was also rife with alcohol. Candidates would give out beer and, in places like the American South, election day was basically a drunken carnival (Edmund S. Morgan, Inventing the People, pp. 202-203; Winter Season and Holiday Spirit). That was likewise suppressed as time went on.
* * *
6/26/21 – Update: There is some relevant and interesting info to be added. It’s not merely that stimulants replaced psychedelics. The shift is a bit more complex. As tea and coffee became more common drinks, so did beer brewed with hops that replaced the archaic practice of gruit ales brewed with herbs. That used to be the distinction between beer and ale, whether or not it included hops. Also, the distinction was that those herbs were often stimulating, aphrodisiacal, and psychotropic. For example, some of the same herbal ingredients are used in absinthe.
The use of hops in brewing beer is first recorded in Northern France in the 800s. It didn’t spread to England until the 1400s, then began to catch on in the 1500s, and became useful for beer preservation as colonial trade expanded in the following centuries. Even with the advantages, gruit ales without hops remained common, particularly in rural areas. At a time when most alcohol produced was consumed personally or sold locally, there was little need to preserve beer with hops. It wasn’t until mass production later in the industrial age that hops became king. But that was already being felt by the 19th century, when the fairies were disappearing.
There was motivation for this. There is an obvious benefit for modern capitalism in the use of stimulants. Hops, on the other hand, is a depressant and lowers sex drive. The more that stimulants are used, the more that depressants are needed to wind down at the end of the day. That is opposed to the gruit ales, often lower in alcohol, that were imbibed all day long to maintain a mild buzz without the constant up and down cycle of addiction to stimulants and depressants. The Church, by the 1500s, had already caught on that hops would make for a more passive population in subduing people’s sinful nature; similar to why it used diet for social control (i.e., banning red meat before and during Carnival).
The increasing use of hops coincided with the rise of modernity, the enclosure movement, mass urbanization, colonial trade, capitalism, and industrialization. This also included land reforms and agricultural improvements that led to grain surpluses. So, with industrial farming and industrial breweries, beer could be produced in vast amounts, preserved with hops, and then shipped where needed. It was a marriage made in heaven. Meanwhile, the workers were forced to suck down the caffeine to keep up with the new grueling factory work. The older tradition of alewives making gruit ale at home probably was decimated with the moral panic of witch persecutions. Yet home brewing continued in many places into the early 20th century before finally making a more recent comeback.
1673: A group of citizens petitioned Parliament for legislation to prohibit brandy, coffee, rum, tea and chocolate. It was because ‘these greatly hinder the consumption of Barley, Malt, and Wheat, the product of our land.’ Parliament did not take action.58 (Bickerdyke, J. The Curiosities of Ale and Beer. London: Spring, 1965, p. 118.)
1700-1730: ‘Housewives in the northern colonies [of what is now the US] brewed beer every few days, since their product had a short shelf life.’6 (Blocker, J. Kaleidoscope in Motion. Drinking in the United States, 1400-2000. In: Holt. M. (Ed.) Alcohol. Oxford: Berg, 2006. Pp. 225-240. P. 227.) […]
1790: “Parliament made it illegal to pay wages in liquor.54” (Magee, M. 1000 Years of Irish Whiskey. Dublin: O’Brien, 1980, p. 76.)
People had accepted drunkenness as part of life in the eighteenth century.2 (Austin, G. Alcohol in Western Society from Antiquity to 1800. Santa Barbara, CA: ABC-Clio, 1985, p. xxv.) But the nineteenth century brought a change in attitudes as a result of increasing industrialization. This created the need for a reliable and punctual work force.3 (Porter, R. Introduction. In: Sournia, J.-C. A History of Alcoholism. Oxford: Basil Blackwell, 1990, p. xii.) Employers wanted self-discipline instead of self-expression. They wanted task orientation in place of relaxed conviviality. It followed that drunkenness was a threat to industrial efficiency and growth. […]
People blamed alcohol for problems caused by industrialization and urbanization. Thus, they blamed it for problems such as urban crime, poverty and high infant mortality. However, gross overcrowding and unemployment contributed greatly to these problems.9 (Porter, R. Introduction. In: Sournia, J.-C. A History of Alcoholism. Oxford: Basil Blackwell, 1990, p. 21)
People also blamed alcohol for more and more personal, social and religious/moral problems. […]
1804: As early as 1804, temperance organizations began in the Netherlands.15 (Garrelsen, H., and van de Goor, I. The Netherlands. In: Heath. Pp. 190-200. P. 191.)
British physician Thomas Trotter suggested that chronic drunkenness was a disease.16 (Plant, M. The United Kingdom. In: Heath, D. Pp. 289-299. P. 291.) […]
Post-1865: After the American Civil War (1861-1865), beer replaced whiskey as the preferred beverage of working men.62 (Rorabaugh, W. The Alcoholic Republic. NY: Oxford U Press, 1979.) […]
1886: Coca-Cola [i.e., cocaine] was a temperance beverage.93 (Blocker, J., et al. Alcohol and Temperance in Modern History: An International Encyclopedia. Vol. 1. Santa Barbara, CA: ABC-CLIO, 2003, xxxi-xiv.)
Most don’t know that the herbal collections making up gruit were the original “hops” – at least before gruit’s use began to dwindle in a large way during the 15th and 16th centuries. Many factors went into its disappearance, including the passing of the German beer purity law, Reinheitsgebot, which originally stated that water, barley, and hops were the only ingredients that could be used in beer production.
Another explanation for the disuse of gruit is based in religion – since some herbs used were known to have stimulating and even aphrodisiac effects, switching to a sedative substance like hops satisfied a Puritan need to keep people from enjoying themselves (sound familiar?).
Gruit ales are much stronger than beer made with hops, causing narcotic, aphrodisiacal, and psychotropic effects. While this made gruit popular for recreational use, it also led to its downfall.
Hops is an anaphrodisiacal herb – meaning it lowers sexual drive. This is offset by the alcohol in beer. Gruit, however, doesn’t work this way; instead it includes chemicals known as alkaloids.
Alkaloids are known to cause a chemical reaction with receptors in the brain similar to that of the THC found in marijuana, and they are also present in absinthe. Many times gruit and absinthe share common ingredients, such as wormwood, and exhibit similar effects.
Despite gruit beers being alcoholic in nature, it is likely the effects of the herb mix contributed to its recreational effects, popularity and downfall. Each of the main herbs is considered much stronger in effect, psychotropic even, than beer’s modern substitute, Humulus lupulus, writes Buhner. “It is important to keep in mind the properties of gruit ale: it is highly intoxicating – narcotic, aphrodisiacal, and psychotropic when consumed in sufficient quantities,” Buhner explains. “The hopped ale that took its place is quite different.”
Gruit beers were favored by many in medieval Europe dating back prior to the predominant use of hops, writes Buhner, but the narcotic effects of the herbs, kept closely guarded by the church or lordships, made the blend a target. A bitter battle between religions, regions, and businessmen made the attack against gruit beers reminiscent of the war on drugs. “Hops, when they began to be suggested for use as a primary additive, in both Germany and England, were bitterly resisted,” explained Buhner. (Stephen Harrod Buhner’s book “Sacred and Herbal Healing Beers”)
“The war between ingredients played out over the course of two centuries,” writes Buhner, “simultaneously with the Protestant Reformation.”
As part of the Reformation, it was “Protestant religious intolerance of Catholic indulgence that was the genesis of the temperance movement.” Buhner goes on to explain, “The Protestant reformists were joined by merchants and competing royals to break the financial monopoly of the Church. The result was ultimately the end of a many-thousand-years’ tradition of herbal beer making in Europe and the limiting of beer and ale into one limited expression of beer production — that of hopped ales or what we call beer today.”
Before the beer purity laws which swept Europe in the 1500s, beer was made with many different admixtures, and gruit was one variety which was popular. Recipes for gruit differed depending on which herbs grew locally. According to GruitAle.com, gruit usually included the following herbs: Yarrow (Achillea millefolium), Bog Myrtle (Myrica gale), and Marsh Rosemary (Ledum palustre). This claim is also supported by the book Sacred and Herbal Healing Beers, by Stephen Harrod Buhner. This book contains many ancient recipes for beer, including a section on gruit. Additional herbs which have been found in gruit recipes are Juniper berries, Mugwort, Wormwood, Labrador Tea, Heather, Licorice, and some others.
There are a few factors to consider when comparing the inebriatory qualities of gruit with those of more commonly made beer. It is held amongst those experienced in gruit inebriation that gruit rivals hopped beer on many accounts. One factor is that hops create a sedentary spirit in the imbiber. Amongst those knowledgeable about herbs, hops tea is well known as a catalyst for dreams, and it creates drowsiness for the beer drinker. Hops is also an anaphrodisiacal herb – meaning that it lessens sexual desire. While the alcohol in beer can lessen inhibitions – which may result in bawdier activities in many – the anaphrodisiacal effect of the hops does counteract this to some degree. Gruit, on the other hand, does not counter this effect and also has a unique inebriatory effect due to the chemical composition of the herbs involved in its manufacture. One noticeable aspect of this chemical composition is the thujone content.
Thujones are chemicals known as alkaloids, which cause an additional form of inebriation when imbibed in beer. According to Jonathan Ott’s book, Pharmacotheon, thujones act upon some of the same receptors in the brain as tetrahydrocannabinol (THC, as found in marijuana), and are also present in the spirit known as absinthe. Gruit and absinthe sometimes share the same herbs in their manufacture, such as Wormwood, Anise seed, and Nutmeg, but it is the herb Yarrow (Achillea millefolium) that contains the lion’s share of thujones in the gruit concoction.
Yarrow is an herb with many uses and plays a profound part in history and myth. According to Buhner, its use can be traced back 60,000 years. Through many different cultures, from the Dakota to the ancient Romans, Yarrow has been used to staunch serious wounds – it is even rumored to have been used by Achilles (hence the name Achillea millefolium, the thousand-leaved plant of Achilles). According to Buhner, the plant’s aphrodisiacal qualities are also documented in the Navaho culture. As an inebriant, it has been used in the Scandinavian countries and in North America as well.
Bog Myrtle (Myrica gale) and Wild Rosemary (Ledum glandulosum) also have many uses in the realm of herbalism, but not nearly as many as Yarrow. Both tend to have inebriation enhancing effects in beer, but also tend to cause a headache and probably a wicked hangover, if too much is drunk. The use of Bog Myrtle in ale was continued through the 1940s in Europe and the 1950s in outlying areas of England and the Scandinavian countries – Wild Rosemary probably through the 18th century.
Beer has been an essential aspect of human existence for at least 4,000 years—and women have always played a central role in its production. But as beer gradually moved from a cottage industry into a money-making one, women were phased out through a process of demonization and character assassination. […]
Professional brewsters and alewives had several means of identifying themselves and promoting their businesses. They wore tall hats to stand out on crowded streets. To signify that their homes or taverns sold ale, they would place broomsticks—a symbol of domestic trade—outside of the door. Cats often scurried around the brewsters’ bubbling cauldrons, killing the mice that liked to feast on the grains used for ale.
If all of this sounds familiar, it’s because this is all iconography that we now associate with witches. While there’s no definitive historical proof that modern depictions of witches were modeled after alewives, some historians see uncanny similarities between brewsters and anti-witch propaganda. One such example exists in a 17th-century woodcut of a popular alewife, Mother Louise, who was well-known in her time for making excellent beer.
While the relationship between alewives and witch imagery has still yet to be proven, we do know for sure that alewives and brewsters had a bad reputation from the jump. Beyond the cheating that some of their counterparts engaged in, brewsters also had to deal with the bad rap their entire gender suffered because of original sin. […]
Brewsters’ bad reputation didn’t help their case when wealthier, more socially-connected men started taking up the trade. After the devastation of the Black Plague, people began drinking a lot more ale, doing so in public alehouses instead of at home. This also marked a shift in people’s relationship with beer, which moved from being just a necessity and occasional indulgence to something closer to what we have today. Men suddenly saw they could make a real profit off of what was once seen as a semi-lucrative side gig for women. So they built taverns that were bigger and cleaner than the makeshift ones that alewives provided, and people flocked to them to revel and conduct business alike. Over time, alewives grew to be seen not only as tricky, but also dirty and their beer unsanitary.
Women continued to make low-alcohol ale for their family’s daily consumption even after the Industrial Revolution scaled up production, which made buying beer cheaper and easier than making it at home. But that died in the 1950s and 1960s, when marketing campaigns branded beer as a “manly drink.” Companies such as Schlitz, Heineken, and Budweiser depicted beer as a means of unwinding after a long day of work, often featuring women serving their suited-up husbands cold bottles of brew.
For those of you still with me: here’s a quote on ale and beer from 1912, less than a century ago, from a book called Brewing, by Alfred Chaston Chapman:
“At the present day the two words are very largely synonymous, beer being used comprehensively to include all classes of malt liquor, whilst the word ale is applied to all beers other than stout and porter.”
Why weren’t stout and porter called ales? This is a reflection, 200 years on, of the origin of porter (and brown stout) in the brown beers made by the beer brewers of London, rivals of the ale brewers for 500 years, ever since immigrants from the Low Countries began brewing in England with hops.
“Obadiah Poundage”, the aged brewery worker who wrote a letter to the London Chronicle in 1760 about the tax on “malt liquors” (the general term used for ale and beer as a class in the 18th century), is usually mined for the light he threw on the history of porter, but he is also very revealing on the continuing difference between ale and beer. In Queen Anne’s reign, about 1710, Poundage said, the increase in taxes on malt (caused by the expense of the War of the Spanish Succession) caused brewers to look to make a drink with less malt and more hops: “Thus the drinking of beer became encouraged in preference to ale … but the people not easily weaned from their heavy sweet drink, in general drank ale mixed with beer.”
This ale seems to have been brown ale (and the beer brown beer), for Poundage says that it was the gentry, “now residing in London more than they had done in former times”, who “introduced the pale ale, and the pale small beer they were habituated to in the country; and either engaged some of their friends, or the London brewers to make for them these kinds of drinks.” The pale ale “was sold by the victualler at 4d per quart and under the name of two-penny.” It was the need to counter the success of this pale ale that “excited the brown beer trade to produce, if possible, a better sort of commodity, in their way, than heretofore had been made”, an effort that “succeeded beyond expectation” with the development of what became known as porter, because of its popularity with London’s many street porters. But while the “brown beer trade” developed into the porter brewers, the ale brewers continued to find a market.
Indeed, outside London and the south of England, beer does not seem to have been that popular until Queen Anne’s time at the earliest. Daniel Defoe, writing in his Tour through the Eastern Counties of England, published in 1722, about the great hop fair at Stourbridge, just outside Cambridge, on the banks of the Cam, said:
“As to the north of England, they formerly used but few hops there, their drink being chiefly pale smooth ale, which required no hops, and consequently they planted no hops in all that part of England, north of the Trent; nor did I ever see one acre of hop ground planted beyond Trent in my observation; but as for some years past, they not only brew great quantities of beer in the north, but also use hops in the brewing their ale much more than they did before; so they all come south of Trent to buy their hops; and here being vast quantities brought, it is great part of their back carriage into Yorkshire and Northamptonshire, Derbyshire, Lancashire, and all those counties; nay, of late since the Union, even to Scotland itself.”
It looks to have taken a century for the habit of putting hops in ale to spread north: in 1615, Gervase Markham published The English Huswife, in which he declared:
“The generall use is by no means to put any hops into ale, making that the difference betwixt it and beere, that the one hath hops the other none; but the wiser huswives do find an error in that opinion, and say the utter want of hops is the reason why ale lasteth so little a time, but either dyeth or soureth, and therefore they will to every barrell of the best ale allow halfe a pound of good hops.”
Fourteen years after Defoe’s report on North of England pale ale, the first edition of the London and Country Brewer, by the Hertfordshire farmer William Ellis, succinctly summed up the difference between ale and beer in the 1730s:
“For strong brown ale brewed in any of the winter months and boiled an hour, one pound is but barely sufficient for a hogshead, if it be tapped in three weeks or a month. If for pale ale brewed at that time, and for that age, one pound and a quarter of hops; but if these ales are brewed in any of the summer months there should be more hops allowed.
“For October or March brown beer, a hogshead made from eleven bushels of malt boiled an hour and a quarter, to be kept nine months, three pounds and a half ought to be boiled in such drink at the least. For October or March pale beer, a hogshead made from fourteen bushels, boiled an hour and a quarter and kept twelve months, six pounds ought to be allowed to a hogshead of such drink and more if the hops are shifted in two bags, and less time given the wort to boil.”
Going on Ellis’s figures, early 18th century ale contained up to 60 per cent more hops than Gervase Markham’s “huswives” used in ale brewing a century earlier, but still only around a quarter as much hops as the beer. This, Ellis said, was because “Ale … to preserve in its mild Aley Taste, will not admit of any great Quantity of Hops.”
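To make the arithmetic behind that comparison explicit, here is a minimal back-of-the-envelope sketch (my own illustration, not from the sources quoted above). The one assumption it adds is the usual conversion of one hogshead to one and a half barrels; the exact gallon counts shifted over the centuries, but the 1.5 ratio held for both ale and beer measures.

```python
# Rough check of the hop-rate comparison above.
# Assumption (mine): 1 hogshead = 1.5 barrels, the usual old English ratio.

markham_lb_per_barrel = 0.5                            # Markham (1615): half a pound of hops per barrel of ale
markham_lb_per_hogshead = markham_lb_per_barrel * 1.5  # ~0.75 lb per hogshead

ellis_brown_ale = 1.0                        # Ellis (1730s): strong brown ale, lb of hops per hogshead
ellis_pale_ale = 1.25                        # Ellis: pale ale, lb of hops per hogshead
ellis_beer_low, ellis_beer_high = 3.5, 6.0   # Ellis: brown and pale keeping beers, lb per hogshead

print(ellis_pale_ale / markham_lb_per_hogshead - 1)   # ~0.67, i.e. roughly two-thirds more hops than Markham's rate
print(ellis_brown_ale / markham_lb_per_hogshead - 1)  # ~0.33 for the brown ale
print(ellis_pale_ale / ellis_beer_high, ellis_pale_ale / ellis_beer_low)  # ~0.21 to ~0.36 of the beer's hopping rate
```

The exact percentage depends on which barrel and hogshead sizes one assumes, which is presumably why the figure is given as “up to 60 per cent”; either way, the ale of the 1730s sat between Markham’s lightly hopped ale and the heavily hopped keeping beers.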
Addiction, of food or drugs or anything else, is a powerful force. And it is complex in what it affects, not only physiologically and psychologically but also on a social level. Johann Hari offers a great analysis in Chasing the Scream. He makes the case that addiction is largely about isolation and that the addict is the ultimate individual. It stands out to me that addiction and addictive substances have increased over civilization. Growing of poppies, sugar, etc came later on in civilization, as did the production of beer and wine (by the way, alcohol releases endorphins, sugar causes a serotonin high, and both activate the hedonic pathway). Also, grain and dairy were slow to catch on as a large part of the diet. Until recent centuries, most populations remained dependent on animal foods, including wild game. Americans, for example, ate large amounts of meat, butter, and lard from the colonial era through the 19th century (see Nina Teicholz, The Big Fat Surprise; passage quoted in full at Malnourished Americans). In 1900, Americans on average were only getting 10% of their calories from carbohydrates and sugar was minimal.
Something else to consider is that low-carb diets can alter how the body and brain function. That is even more true if combined with intermittent fasting and restricted eating times that would have been more common in the past. Taken together, earlier humans would have spent more time in ketosis (fat-burning mode, as opposed to glucose-burning), which dramatically affects human biology. The further one goes back in history, the greater the amount of time people probably spent in ketosis. One difference with ketosis is that cravings and food addictions disappear. It’s a non-addictive or maybe even anti-addictive state of mind. Many hunter-gatherer tribes can go days without eating and it doesn’t appear to bother them, and that is typical of ketosis. This was also observed of Mongol warriors who could ride and fight for days on end without tiring or needing to stop for food. What is also different about hunter-gatherers and similar traditional societies is how communal they are or were and how much more expansive their identities are in belonging to a group. Anthropological research shows how hunter-gatherers often have a sense of personal space that extends into the environment around them. What if that isn’t merely cultural but something to do with how their bodies and brains operate? Maybe diet even plays a role. […]
It is an onslaught taxing our bodies and minds. And the consequences are worsening with each generation. What stands out to me about autism, in particular, is how isolating it is. The repetitive behavior and focus on objects resonate with extreme addiction. As with other conditions influenced by diet (schizophrenia, ADHD, etc), both autism and addiction block normal human relating by creating an obsessive mindset that, in the most extreme forms, blocks out all else. I wonder if all of us moderns are simply expressing milder varieties of this biological and neurological phenomenon. And this might be the underpinning of our hyper-individualistic society, with the earliest precursors showing up in the Axial Age following what Julian Jaynes hypothesized as the breakdown of the much more other-oriented bicameral mind. What if our egoic consciousness with its rigid psychological boundaries is the result of our food system, as part of the civilizational project of mass agriculture?
This person said a close comparison was being in the zone, sometimes referred to as runner’s high. That got me thinking about various factors that can shut down the normal functioning of the egoic mind. Extreme physical activity forces the mind into a mode that isn’t experienced that often and extensively by people in the modern world, a state of mind combining exhaustion, endorphins, and ketosis — a state of mind, on the other hand, that would have been far from uncommon before modernity, with some arguing ketosis was once the normal mode of neurocognitive functioning. Related to this, it has been argued that the abstractions of Enlightenment thought were fueled by the imperial sugar trade, maybe the first time a permanent non-ketogenic mindset was possible in the Western world. What sugar (i.e., glucose), especially when mixed with the other popular trade items of tea and coffee, makes possible is thinking and reading (i.e., inner experience) for long periods of time without mental tiredness. During the Enlightenment, the modern mind was born out of a drugged-up buzz. That is one interpretation. Whatever the cause, something changed.
Also, in the comment section of that article, I came across a perfect description of self-authorization. Carla said that, “There are almost always words inside my head. In fact, I’ve asked people I live with to not turn on the radio in the morning. When they asked why, they thought my answer was weird: because it’s louder than the voice in my head and I can’t perform my morning routine without that voice.” We are all like that to some extent. But for most of us, self-authorization has become so natural as to largely go unnoticed. Unlike Carla, the average person learns to hear their own inner voice despite external sounds. I’m willing to bet that, if tested, Carla would show results of having thin mental boundaries and probably an accordingly weaker egoic will to force her self-authorization onto situations. Some turn to sugar and caffeine (or else nicotine and other drugs) to help shore up rigid thick boundaries and maintain focus in this modern world filled with distractions — likely a contributing factor to drug addiction.
Prior to talk of neurasthenia, an exhaustion model of health, portrayed as waste and depletion, took hold in Europe centuries earlier (e.g., anti-masturbation panics) and had its roots in the humoral theory of bodily fluids. It has long been understood that food, specifically macronutrients (carbohydrate, protein, & fat), affects mood and behavior — see the early literature on melancholy. During feudalism, food laws were used as a means of social control, such that in one case meat was prohibited prior to Carnival because of its energizing effect, which it was thought could lead to rowdiness or even revolt (Ken Albala & Trudy Eden, Food and Faith in Christian Culture).
There does seem to be a connection between an increase of intellectual activity and an increase of carbohydrates and sugar, this connection first appearing during the early colonial era that set the stage for the Enlightenment. It was the agricultural mind taken to a whole new level. Indeed, a steady flow of glucose is one way to fuel extended periods of brain work, such as reading and writing for hours on end and late into the night — the reason college students to this day will down sugary drinks while studying. Because of trade networks, Enlightenment thinkers were buzzing on the suddenly much more available simple carbs and sugar, with an added boost from caffeine and nicotine. The modern intellectual mind was drugged-up right from the beginning, and over time it took its toll. Such dietary highs inevitably lead to ever greater crashes of mood and health. Interestingly, Dr. Silas Weir Mitchell, who advocated the ‘rest cure’ and ‘West cure’ in treating neurasthenia and other ailments, additionally used a “meat-rich diet” for his patients (Ann Stiles, Go rest, young man). Other doctors of that era were even more direct in using specifically low-carb diets for various health conditions, often for obesity, which was also a focus of Dr. Mitchell.
Chuck Pezeshki is a professor of engineering who has published in the fields of design theory and high-performance work teams. I can claim no specialty here, as I lack even a college degree. Still, Pezeshki and I have much in common. Like me, he prefers a systems view, as he summarizes his blog on his About page: “As we relate, so we think.” He states that, “My work exists at, and reaches far above the micro-neuroscience level, into larger systemic social organization.”
An area of focus we share is diet and health and we’ve come to similar conclusions. Like me, he sees a relationship between sugar, obesity, addiction, trauma, individuality, empathy issues, authoritarianism, etc (and inequality comes up as well; by the way, my favorite perspective on inequality in this context is Keith Payne’s The Broken Ladder). And like me, he is informed by a low-carb and ketogenic approach that was initially motivated by weight loss. Maybe these commonalities are unsurprising, as we do have some common intellectual interests.
Much of his blog is about what he calls “structural memetics” involving value memes (v-memes). Even though I haven’t focused as much on value memes recently, Ken Wilber’s version of spiral dynamics shaped my thought to some extent (that kind of thing being what brought me to Pezeshki’s blog in the first place). As important, we are both familiar with Bruce K. Alexander’s research on addiction, although my familiarity comes from Johann Hari’s writings (I learned of the rat park research in Chasing the Scream). A more basic link in our views comes from each of us having read the science journalism of Gary Taubes and Nina Teicholz, along with some influence from Dr. Jason Fung. He has also read Dr. Robert H. Lustig, a leading figure in this area who I know of through the work of others.
Related to diet, Pezeshki does bring up the issue of inflammation. As I originally came around to my present diet from a paleo viewpoint, I became familiar with the approach of functional medicine that treats inflammation as a central factor (Essentialism On the Decline). Inflammation is a bridge between the physiological and the psychological, the individual and the social. Where and how inflammation erupts within the individual determines how a disease condition, or rather a confluence of symptoms, gets labeled and treated, even if the fundamental cause originated elsewhere, maybe in the ‘external’ world (socioeconomic stress, transgenerational trauma, environmental toxins, parasites because of lack of public sanitation, etc.). Inflammation is linked to leaky gut, leaky brain, arthritis, autoimmune disorders, mood disorders, ADHD, autism, schizophrenia, impulsivity, short-term thinking, addiction, aggression, etc — and such problems increase under high inequality.
There are specific examples to point to. Diabetes and mood disorders co-occur. There is the connection of depression and anhedonia, involving the reward circuit and pleasure, which in turn can be affected by inflammation. Also, inflammation can lead to changes in glutamate in depression, similar to the glutamate alterations in autism from diet and microbes, and that is significant considering that glutamate is not only a major neurotransmitter but also a common food additive. Dr. Roger McIntyre writes that, “MRI scans have shown that if you make someone immune activated, the hypervigilance center is activated, activity in the motoric region is reduced, and the person becomes withdrawn and hypervigilant. And that’s what depression is. What’s the classic presentation of depression? People are anxious, agitated, and experience a lack of spontaneous activity and increased emotional withdrawal” (Inflammation, Mood Disorders, and Disease Model Convergence). Inflammation is a serious condition and, in the modern world, quite pervasive. The implications of this are not to be dismissed.
I’ve been thinking about this kind of thing for years now. But this is the first time I’ve come across someone else making these same connections, at least to this extent and with such a large context. The only thing I would add or further emphasize is that, from a functional medicine perspective (common among paleo, low-carb, and keto advocates), the body itself is a system as part of the larger systems of society and the environment — it is a web of connections in which we are not only enmeshed but which forms everything we are, that is to say we aren’t separate from it. Personal health is public health is environmental health, and think of that in relation to the world of hyperobjects overlapping with hypersubjectivity (as opposed to the isolating psychosis of hyper-individualism):
“We shouldn’t personally identify with our health problems and struggles. We aren’t alone nor isolated. The world is continuously affecting us, as we affect others. The world is built on relationships, not just between humans and other species but involving everything around us — what some describe as embodied, embedded, enacted, and extended (we are hypersubjects among hyperobjects). The world that we inhabit, that world inhabits us, our bodies and minds. There is no world “out there” for there is no possible way for us to be outside the world. Everything going on around us shapes who we are, how we think and feel, and what we do — most importantly, shapes us as members of a society and as parts of a living biosphere, a system of systems all the way down. The personal is always the public, the individual always the collective, the human always the more than human” (The World Around Us).
In its earliest meaning, diet meant a way of life, not merely an eating regimen. And for most of history, diet was rooted in cultural identity and communal experience. It reinforced a worldview and social order. This allows diet to be a perfect lens through which to study societal patterns and changes over time.
“It has become an overtly ideological fight, but maybe it always was. The politicization of diet goes back to the early formalized food laws that became widespread in the Axial Age and regained centrality in the Middle Ages, which for Europeans meant a revival of ancient Greek thought, specifically that of Galen. And it is utterly fascinating that pre-scientific Galenic dietary philosophy has since taken on scientific garb and gets peddled to this day, as a main current in conventional dietary thought (see Food and Faith in Christian Culture ed. by Ken Albala and Trudy Eden […]; I made this connection in realizing that Stephen Le, a biological anthropologist, was without awareness parroting Galenic thought in his book 100 Million Years of Food).”
Let me make an argument about (hyper-)individualism, rigid egoic boundaries, and hence Jaynesian consciousness (about Julian Jaynes, see other posts). But I’ll come at it from a less typical angle. I’ve been reading much about diet, nutrition, and health. With agriculture, the entire environment in which humans lived was fundamentally transformed, such as the rise of inequality and hierarchy, concentrated wealth and centralized power; not to mention the increase of parasites and diseases from urbanization and close cohabitation with farm animals (The World Around Us). We might be able to thank early agricultural societies, as an example, for introducing malaria to the world.
Maybe more importantly, there are significant links between what we eat and so much else: gut health, hormonal regulation, immune system, and neurocognitive functioning. There are multiple pathways, one of which is direct, connecting the gut and the brain: nervous system, immune system, hormonal system, etc. Regarding the effect of diet and nutrition on immune response, including leaky gut, consider the lymphatic-brain link (Neuroscience News, Researchers Find Missing Link Between the Brain and Immune System), with the immune system as what some refer to as the “mobile mind” (Susan L. Prescott & Alan C. Logan, The Secret Life of Your Microbiome, pp. 64-7, pp. 249-50). As for a direct and near instantaneous gut-brain link, there was a recent discovery of the involvement of the vagus nerve, a possible explanation for the ‘gut sense’, with the key neurotransmitter glutamate modulating the rate of transmission in synaptic communication between enteroendocrine cells and vagal nerve neurons (Rich Haridy, Fast and hardwired: Gut-brain connection could lead to a “new sense”), and this is implicated in “episodic and spatial working memory” that might assist in the relocation of food sources (Rich Haridy, Researchers reveal how disrupting gut-brain communication may affect learning and memory). The gut is sometimes called the second brain because it also has neuronal cells, but in evolutionary terms it is the first brain. To demonstrate one example of a connection, many are beginning to refer to Alzheimer’s as type 3 diabetes, and dietary interventions have reversed symptoms in clinical studies. Also, gut microbes and parasites have been shown to influence our neurocognition and psychology, even altering personality traits and behavior, such as with Toxoplasma gondii. [For more discussion, see Fasting, Calorie Restriction, and Ketosis.]
The gut-brain link explains why glutamate as a food additive might be so problematic for so many people. Much of the research has looked at other health areas, such as metabolism or liver functioning. It would make more sense to look at its effect on neurocognition, but as with many other compounds, many scientists have dismissed the possibility of glutamate passing the blood-brain barrier. Yet we now know that many things once thought to be kept out of the brain do, under some conditions, get in. After all, the same mechanisms that cause leaky gut (e.g., inflammation) can also cause permeability in the brain. So we know a mechanism by which this could happen. Evidence is pointing in this direction: “MSG acts on the glutamate receptors and releases neurotransmitters which play a vital role in normal physiological as well as pathological processes (Abdallah et al., 2014[1]). Glutamate receptors have three groups of metabotropic receptors (mGluR) and four classes of ionotropic receptors (NMDA, AMPA, delta and kainite receptors). All of these receptor types are present across the central nervous system. They are especially numerous in the hypothalamus, hippocampus and amygdala, where they control autonomic and metabolic activities (Zhu and Gouaux, 2017[22]). Results from both animal and human studies have demonstrated that administration of even the lowest dose of MSG has toxic effects. The average intake of MSG per day is estimated to be 0.3-1.0 g (Solomon et al., 2015[18]). These doses potentially disrupt neurons and might have adverse effects on behaviour” (Kamal Niaz, Extensive use of monosodium glutamate: A threat to public health?).
One possibility to consider is the role of exorphins, which are addictive and can be blocked in the same way as opioids. Exorphin, in fact, means external morphine-like substance, in the way that endorphin means indwelling morphine-like substance. Exorphins are found in milk and wheat. Milk, in particular, stands out. Even though exorphins are found in other foods, it’s been argued that they are insignificant because they theoretically can’t pass through the gut barrier, much less the blood-brain barrier. Yet exorphins have been measured elsewhere in the human body. One explanation is gut permeability (related to permeability throughout the body), which can be caused by many factors, such as stress, but also by milk itself. The purpose of milk is to get nutrients into the calf and this is done by widening the spaces in the gut lining to allow more nutrients through the protective barrier. Exorphins get in as well and create a pleasurable experience to motivate the calf to drink more. Along with exorphins, grains and dairy also contain dopaminergic peptides, and dopamine is the other major addictive substance. It feels good to consume dairy, as with wheat, whether you’re a calf or a human, and so one wants more. Think about that the next time you pour milk over cereal.
Addiction, of food or drugs or anything else, is a powerful force. And it is complex in what it affects, not only physiologically and psychologically but also on a social level. Johann Hari offers a great analysis in Chasing the Scream. He makes the case that addiction is largely about isolation and that the addict is the ultimate individual (see To Put the Rat Back in the Rat Park, Rationalizing the Rat Race, Imagining the Rat Park, & Individualism and Isolation), and by the way this connects to Jaynesian consciousness with its rigid egoic boundaries as opposed to the bundled and porous mind, the extended and enmeshed self of bicameralism and animism. It stands out to me that addiction and addictive substances have increased over civilization, and I’ve argued that this is about a totalizing cultural system and a fully encompassing ideological worldview, what some call a reality tunnel (see discussion of addiction and social control in Diets and Systems & Western Individuality Before the Enlightenment Age). Growing of poppies, sugar cane, etc came later on in civilization, as did the production of beer and wine — by the way, alcohol releases endorphins, sugar causes a serotonin high, and both activate the hedonic pathway. Also, grain and dairy were slow to catch on, as a large part of the diet. Until recent centuries, most populations remained dependent on animal foods, including wild game (I discuss this era of dietary transition and societal transformation in numerous posts with industrialization and technology pushing the already stressed agricultural mind to an extreme: Ancient Atherosclerosis?, To Be Fat And Have Bread, Autism and the Upper Crust, “Yes, tea banished the fairies.”, Voice and Perspective, Hubris of Nutritionism, Health From Generation To Generation, Dietary Health Across Generations, Moral Panic and Physical Degeneration, The Crisis of Identity, The Disease of Nostalgia, & Technological Fears and Media Panics). Americans, for example, ate large amounts of meat, butter, and lard from the colonial era through the 19th century (see Nina Teicholz, The Big Fat Surprise; passage quoted in full at Malnourished Americans). In 1900, Americans on average were only getting 10% of their calorie intake from carbohydrates and sugar was minimal, a potentially ketogenic diet considering how much lower calorie the average diet was back then.
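To put a number on that “potentially ketogenic” suggestion, here is a minimal back-of-the-envelope sketch (my own illustration, not the author’s). The daily calorie figures are assumptions chosen only to bracket a plausible range; the 10% carbohydrate share is the figure cited above, and 4 kcal per gram is the standard conversion for carbohydrate.

```python
# Back-of-the-envelope: what does "10% of calories from carbohydrate" mean in grams per day?
KCAL_PER_GRAM_CARB = 4  # standard Atwater factor for carbohydrate

def carb_grams_per_day(total_kcal, carb_fraction=0.10):
    """Grams of carbohydrate per day for a given calorie intake and carbohydrate share."""
    return total_kcal * carb_fraction / KCAL_PER_GRAM_CARB

# Assumed daily intakes (illustrative only):
for kcal in (2000, 2500, 3000):
    print(kcal, "kcal/day ->", round(carb_grams_per_day(kcal)), "g carbohydrate/day")

# Prints roughly 50, 62, and 75 g/day, near the ~50 g ceiling often cited for nutritional
# ketosis, which is why such a diet could plausibly have kept many people in or near ketosis.
```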
Something else to consider is that low-carb diets can alter how the body and brain function (the word ‘alter’ is inaccurate, though, since in evolutionary terms ketosis would’ve been the normal state; and so rather the modern high-carb diet is altered from the biological norm). That is even more true if combined with intermittent fasting and restricted eating times that would have been more common in the past (Past Views On One Meal A Day (OMAD)). Interestingly, this only applies to adults since we know that babies remain in ketosis during breastfeeding, there is evidence that they are already in ketosis in utero, and well into the teen years humans apparently remain in ketosis: “It is fascinating to see that every single child, so far through age 16, is in ketosis even after a breakfast containing fruits and milk” (Angela A. Stanton, Children in Ketosis: The Feared Fuel). “I have yet to see a blood ketone test of a child anywhere in this age group that is not showing ketosis both before and after a meal” (Angela A. Stanton, If Ketosis Is Only a Fad, Why Are Our Kids in Ketosis?). Ketosis is not only safe but necessary for humans (“Is keto safe for kids?”). Taken together, earlier humans would have spent more time in ketosis (fat-burning mode, as opposed to glucose-burning), which dramatically affects human biology. The further one goes back in history, the greater the amount of time people probably spent in ketosis. One difference with ketosis is that, for many people, cravings and food addictions disappear. [For more discussion of this topic, see previous posts: Fasting, Calorie Restriction, and Ketosis, Ketogenic Diet and Neurocognitive Health, Is Ketosis Normal?, & “Is keto safe for kids?”.] Ketosis is a non-addictive or maybe even anti-addictive state of mind (Francisco Ródenas-González et al., Effects of ketosis on cocaine-induced reinstatement in male mice), similar to how certain psychedelics can be used to break addiction — one might argue there is a historical connection over the millennia between a decrease of psychedelic use and an increase of addictive substances: sugar, caffeine, nicotine, opium, etc (Diets and Systems, “Yes, tea banished the fairies.”, & Wealth, Power, and Addiction). Many hunter-gatherer tribes can go days without eating and it doesn’t appear to bother them, such as Daniel Everett’s account of the Piraha, and that is typical of ketosis — fasting forces one into ketosis, if one isn’t already in ketosis, and so beginning a fast in ketosis makes it even easier. This was also observed of Mongol warriors who could ride and fight for days on end without tiring or needing to stop for food. What is also different about hunter-gatherers and similar traditional societies is how communal they are or were and how much more expansive their identities are in belonging to a group, the opposite of the addictive egoic mind of high-carb agricultural societies. Anthropological research shows how hunter-gatherers often have a sense of personal space that extends into the environment around them. What if that isn’t merely cultural but something to do with how their bodies and brains operate? Maybe diet even plays a role. Hold that thought for a moment.
Now go back to the two staples of the modern diet, grains and dairy. Besides exorphins and dopaminergic substances, they also have high levels of glutamate, as part of gluten and casein respectively. Dr. Katherine Reid is a biochemist whose daughter was diagnosed with autism and it was severe. She went into research mode and experimented with supplementation and then diet. Many things seemed to help, but the greatest result came from restriction of dietary glutamate, a difficult challenge as it is a common food additive (see her TED talk here and another talk here or, for a short and informal video, look here). This requires going on a largely whole foods diet, that is to say eliminating processed foods (also see Traditional Foods diet of Weston A. Price and Sally Fallon Morell, along with the GAPS diet of Natasha Campbell-McBride). But when dealing with a serious issue, it is worth the effort. Dr. Reid’s daughter showed immense improvement to such a degree that she was kicked out of the special needs school. After being on this diet for a while, she socialized and communicated normally like any other child, something she was previously incapable of. Keep in mind that glutamate, as mentioned above, is necessary as a foundational neurotransmitter in modulating communication between the gut and brain. But typically we only get small amounts of it, as opposed to the large doses found in the modern diet. In response to the TED Talk given by Reid, Georgia Ede commented that it’s, “Unclear if glutamate is main culprit, b/c a) little glutamate crosses blood-brain barrier; b) anything that triggers inflammation/oxidation (i.e. refined carbs) spikes brain glutamate production.” Either way, glutamate plays a powerful role in brain functioning. And no matter the exact line of causation, industrially processed foods in the modern diet would be involved. By the way, an exacerbating factor might be mercury in its relation to anxiety and adrenal fatigue, as it ramps up the fight or flight system via over-sensitizing the glutamate pathway — could this be involved in conditions like autism where emotional sensitivity is a symptom? Mercury and glutamate simultaneously increasing in the modern world demonstrates how industrialization can push the effects of the agricultural diet to ever further extremes.
Glutamate is also implicated in schizophrenia: “The most intriguing evidence came when the researchers gave germ-free mice fecal transplants from the schizophrenic patients. They found that “the mice behaved in a way that is reminiscent of the behavior of people with schizophrenia,” said Julio Licinio, who co-led the new work with Wong, his research partner and spouse. Mice given fecal transplants from healthy controls behaved normally. “The brains of the animals given microbes from patients with schizophrenia also showed changes in glutamate, a neurotransmitter that is thought to be dysregulated in schizophrenia,” he added. The discovery shows how altering the gut can influence an animals behavior” (Roni Dengler, Researchers Find Further Evidence That Schizophrenia is Connected to Our Guts; reporting on Peng Zheng et al, The gut microbiome from patients with schizophrenia modulates the glutamate-glutamine-GABA cycle and schizophrenia-relevant behaviors in mice, Science Advances journal). And glutamate is involved in other conditions as well, such as in relation to GABA: “But how do microbes in the gut affect [epileptic] seizures that occur in the brain? Researchers found that the microbe-mediated effects of the Ketogenic Diet decreased levels of enzymes required to produce the excitatory neurotransmitter glutamate. In turn, this increased the relative abundance of the inhibitory neurotransmitter GABA. Taken together, these results show that the microbe-mediated effects of the Ketogenic Diet have a direct effect on neural activity, further strengthening support for the emerging concept of the ‘gut-brain’ axis.” (Jason Bush, Important Ketogenic Diet Benefit is Dependent on the Gut Microbiome). Glutamate is one neurotransmitter among many that can be affected in a similar manner; e.g., serotonin is also produced in the gut.
That reminds me of propionate, a short-chain fatty acid and the conjugate base of propionic acid. It is another substance normally taken in at a low level. Certain foods, including grains and dairy, contain it. The problem is that, as a useful preservative, it has been generously added to the food supply. Research on rodents shows that injecting them with propionate causes autistic-like behaviors. And other rodent studies show how this stunts learning ability and causes repetitive behavior (both related to the autistic demand for the familiar), as too much propionate entrenches mental patterns through the mechanism that gut microbes use to communicate to the brain how to return to a needed food source, similar to the related function of glutamate. A recent study shows that propionate alters not only brain functioning but also brain development (L.S. Abdelli et al, Propionic Acid Induces Gliosis and Neuro-inflammation through Modulation of PTEN/AKT Pathway in Autism Spectrum Disorder), and this is a growing field of research (e.g., Hyosun Choi, Propionic acid induces dendritic spine loss by MAPK/ERK signaling and dysregulation of autophagic flux). As reported by Suhtling Wong-Vienneau at University of Central Florida, “when fetal-derived neural stem cells are exposed to high levels of Propionic Acid (PPA), an additive commonly found in processed foods, it decreases neuron development” (Processed Foods May Hold Key to Rise in Autism). This study “is the first to discover the molecular link between elevated levels of PPA, proliferation of glial cells, disturbed neural circuitry and autism.”
The impact is profound and permanent — Pedersen offers the details: “In the lab, the scientists discovered that exposing neural stem cells to excessive PPA damages brain cells in several ways: First, the acid disrupts the natural balance between brain cells by reducing the number of neurons and over-producing glial cells. And although glial cells help develop and protect neuron function, too many glia cells disturb connectivity between neurons. They also cause inflammation, which has been noted in the brains of autistic children. In addition, excessive amounts of the acid shorten and damage pathways that neurons use to communicate with the rest of the body. This combination of reduced neurons and damaged pathways hinder the brain’s ability to communicate, resulting in behaviors that are often found in children with autism, including repetitive behavior, mobility issues and inability to interact with others.” According to this study, “too much PPA also damaged the molecular pathways that normally enable neurons to send information to the rest of the body. The researchers suggest that such disruption in the brain’s ability to communicate may explain ASD-related characteristics such as repetitive behavior and difficulties with social interaction” (Ana Sandoiu, Could processed foods explain why autism is on the rise?).
So, the autistic brain develops in response to higher levels of propionate and maybe becomes accustomed to it. A state of dysfunction becomes what feels normal. Propionate causes inflammation and, as Dr. Ede points out, “anything that triggers inflammation/oxidation (i.e. refined carbs) spikes brain glutamate production”. High levels of propionate and glutamate become part of the state of mind the autistic becomes identified with. It all links together. Autistics, along with cravings for foods containing propionate (and glutamate), tend to have larger populations of a particular gut microbe that produces propionate. This might be why antibiotics, in killing those microbes, can help with autism. But in the case of depression, gut issues are associated instead with the lack of certain microbes that produce butyrate, another important substance that also is found in certain foods (Mireia Valles-Colomer et al, The neuroactive potential of the human gut microbiota in quality of life and depression). Depending on the specific gut dysbiosis, diverse neurocognitive conditions can result. And in affecting the microbiome, changes in autism can be achieved through a ketogenic diet, temporarily reducing the microbiome (similar to an antibiotic) — this presumably takes care of the problematic microbes and readjusts the gut from dysbiosis to a healthier balance. Also, ketosis would reduce the inflammation that is associated with glutamate production.
As with propionate, exorphins injected into rats will likewise elicit autistic-like behaviors. By two different pathways, the body produces exorphins and propionate from the consumption of grains and dairy, the former from the breakdown of proteins and the latter produced by gut bacteria in the breakdown of some grains and refined carbohydrates (combined with the propionate used as a food additive; and also, at least in rodents, artificial sweeteners increase propionate levels). [For related points and further discussion, see section below about vitamin B1 (thiamine/thiamin). Also covered are other B vitamins and nutrients.] This is part of the explanation for why many autistics have responded well to ketosis from carbohydrate restriction, specifically paleo diets that eliminate both wheat and dairy, but ketones themselves play a role in using the same transporters as propionate and so block their buildup in cells and, of course, ketones offer a different energy source for cells as a replacement for glucose which alters how cells function, specifically neurocognitive functioning and its attendant psychological effects.
There are some other factors to consider as well. With agriculture came a diet high in starchy carbohydrates and sugar. This inevitably leads to increased metabolic syndrome, including diabetes. And diabetes in pregnant women is associated with autism and attention deficit disorder in children. “Maternal diabetes, if not well treated, which means hyperglycemia in utero, that increases uterine inflammation, oxidative stress and hypoxia and may alter gene expression,” explained Anny H. Xiang. “This can disrupt fetal brain development, increasing the risk for neural behavior disorders, such as autism” (Maternal HbA1c influences autism risk in offspring); by the way, other factors such as getting more seed oils and less B vitamins are also contributing factors to metabolic syndrome and altered gene expression, including being inherited epigenetically, not to mention mutagenic changes to the genes themselves (Catherine Shanahan, Deep Nutrition). The increase of diabetes, not mere increase of diagnosis, could partly explain the greater prevalence of autism over time. Grain surpluses only became available in the 1800s, around the time when refined flour and sugar began to become common. It wasn’t until the following century that carbohydrates finally overtook animal foods as the mainstay of the diet, specifically in terms of what is most regularly eaten throughout the day in both meals and snacks — a constant influx of glucose into the system.
A further contributing factor in modern agriculture is that of pesticides, also associated with autism. Consider DDE, a product of DDT, which has been banned for decades but apparently it is still lingering in the environment. “The odds of autism among children were increased, by 32 percent, in mothers whose DDE levels were high (high was, comparatively, 75th percentile or greater),” one study found (Aditi Vyas & Richa Kalra, Long lingering pesticides may increase risk for autism: Study). “Researchers also found,” the article reports, “that the odds of having children on the autism spectrum who also had an intellectual disability were increased more than two-fold when the mother’s DDE levels were high.” A different study showed a broader effect in terms of 11 pesticides still in use:
“They found a 10 percent or more increase in rates of autism spectrum disorder, or ASD, in children whose mothers lived during pregnancy within about a mile and a quarter of a highly sprayed area. The rates varied depending on the specific pesticide sprayed, and glyphosate was associated with a 16 percent increase. Rates of autism spectrum disorders combined with intellectual disability increased by even more, about 30 percent. Exposure after birth, in the first year of life, showed the most dramatic impact, with rates of ASD with intellectual disability increasing by 50 percent on average for children who lived within the mile-and-a-quarter range. Those who lived near glyphosate spraying showed the most increased risk, at 60 percent” (Nicole Ferox, It’s Personal: Pesticide Exposures Come at a Cost).
An additional component to consider is plant anti-nutrients. For example, oxalates may be involved in autism spectrum disorder (Jerzy Konstantynowicz et al, A potential pathogenic role of oxalate in autism). With the end of the Ice Age, vegetation became more common and some of the animal foods less common. That increased plant foods as part of the human diet. But even then plant foods were limited and seasonal. The dying off of the megafauna was a greater blow, as it forced humans to rely both on less desirable lean meats from smaller prey and on more plant foods. And of course, the agricultural revolution followed shortly after that with its devastating effects. None of these changes were kind to human health and development, as the evidence shows in the human bones and mummies left behind. Yet they were minor compared to what was to come. The increase of plant foods was a slow process over millennia. All the way up to the 19th century, Americans were eating severely restricted amounts of plant foods and instead depending on fatty animal foods, from pasture-raised butter and lard to wild-caught fish and deer — the abundance of wilderness and pasturage made such foods widely available, convenient, and cheap, besides being delicious and nutritious. Grain crops and vegetable gardens were simply too hard to grow, as described by Nina Teicholz in The Big Fat Surprise (see quoted passage at Malnourished Americans).
Henry David Thoreau maintained a garden at Walden Pond, growing beans, peas, corn, turnips, and potatoes, and this plant-based diet (Jennie Richards, Henry David Thoreau Advocated “Leaving Off Eating Animals”) surely contributed to his declining health from tuberculosis by weakening his immune system through deficiency in the fat-soluble vitamins, although his nearby mother occasionally made him a fruit pie that would’ve had nutritious lard in the crust: one analysis points to the “lack of quality protein and excess of carbohydrate foods in Thoreau’s diet as probable causes behind his infection” (Dr. Benjamin P. Sandler, Thoreau, Pulmonary Tuberculosis and Dietary Deficiency). Likewise, Franz Kafka, who became a vegetarian, also died from tuberculosis (Old Debates Forgotten). Weston A. Price observed the link between deficiency of fat-soluble vitamins and high rates of tuberculosis, not that one causes the other but that a nutritious diet is key to a strong immune system (Dr. Kendrick On Vaccines & Moral Panic and Physical Degeneration). Besides, eliminating fatty animal foods typically means increasing starchy and sugary plant foods, which lessens the anti-inflammatory response from ketosis and autophagy and hence the capacity for healing.
The connection of physical health to mental health, another insight of Price, should be re-emphasized. Interestingly, Kafka suffered from psychological, presumably neurocognitive, issues long before tubercular symptoms showed up, and he came to see the link between them as causal, although he saw it the other way around, as psychosomatic. Even more intriguing, Kafka suggests that, as Sander L. Gilman put it, “all urban dwellers are tubercular,” as if it is a nervous condition of modern civilization akin to what used to be called neurasthenia (about Kafka’s case, see Sander L. Gilman’s Franz Kafka, the Jewish Patient). He even uses the popular economic model of energy and health: “For secretly I don’t believe this illness to be tuberculosis, at least primarily tuberculosis, but rather a sign of general bankruptcy” (for context, see The Crisis of Identity). Speaking of the eugenic, hygienic, sociological and aesthetic, Gilman further notes that, “For Kafka, that possibility is linked to the notion that illness and creativity are linked, that tuberculars are also creative geniuses,” indicating an interpretation of neurasthenia among the intellectual class, an interpretation that was more common in the United States than in Europe.
The upper classes were deemed the most civilized and so it was expected that they’d suffer the most from the diseases of civilization, and indeed the upper classes fully adopted the modern industrial diet before the rest of the population. In contrast, while staying at a sanatorium (a combination of the rest cure and the west cure), Kafka stated that “I am firmly convinced, now that I have been living here among consumptives, that healthy people run no danger of infection. Here, however, the healthy are only the woodcutters in the forest and the girls in the kitchen (who will simply pick uneaten food from the plates of patients and eat it—patients whom I shrink from sitting opposite) but not a single person from our town circles,” from a letter to Max Brod on March 11, 1921. It should be pointed out that tuberculosis sanatoriums were typically located in rural mountain areas where local populations were known to be healthy, the kinds of communities Weston A. Price studied in the 1930s; a similar reason for why in America tuberculosis patients were sometimes sent west (the west cure) for clean air and a healthy lifestyle, probably with an accompanying change toward a rural diet, with more wild-caught animal foods higher in omega-3s and lower in omega-6s, not to mention higher in fat-soluble vitamins.
The historical context of public health overlapped with racial hygiene, and indeed some of Kafka’s family members and lovers would later die at the hands of the Nazis. Eugenicists were obsessed with body types in relation to supposed racial features, but non-eugenicists also accepted that physical structure was useful information to be considered; and this insight is supported, if not the eugenicist ideology, by the more recent scientific measurements of stunted bone development in the early agricultural societies. Hermann Brehmer, a founder of the sanatorium movement, asserted that a particular body type (habitus phthisicus, equivalent to habitus asthenicus) was associated with tuberculosis, the kind of thinking that Weston A. Price would pick up in his observations of physical development, although Price saw the explanation as dietary and not racial. The other difference is that Price saw “body type” not as a cause but as a symptom of ill health, and so the focus on re-forming the body (through lung exercises, orthopedic corsets, etc) to improve health was not the most helpful advice. On the other hand, if re-forming the body involved something like the west cure in changing the entire lifestyle and environmental conditions, it might work by way of changing other factors of health; along with diet, exercise and sunshine and clean air and water would definitely improve immune function, lower inflammation, and much else (sanatoriums prioritized such things as getting plenty of sunshine and dairy, both of which would increase vitamin D3, which is necessary for immunological health). Improvements in physical health, of course, would go hand in hand with improvements in mental health. An example of this is that winter conceptions, when vitamin D3 production is low, result in higher rates later on of childhood learning disabilities and other problems in neurocognitive development (BBC, Learning difficulties linked with winter conception).
As a side note, physical development was tied up with gender issues and gender roles, especially for boys in becoming men. There became a fear that the newer generations of urban youth were failing to develop properly, physically and mentally, morally and socially. Fitness became a central concern for the civilizational project and it was feared that we modern humans might fail this challenge. Most galling of all was ‘feminization’, not only about loss of an athletic build but loss of something to the masculine psychology, involving the depression and anxiety, sensitivity and weakness of conditions like neurasthenia while also overlapping with tubercular consumption. Some of this could be projected onto racial inferiority, far from being limited to the distinction between those of European descent and all others for it also was used to divide humanity up in numerous ways (German vs French, English vs Irish, North vs South, rich vs poor, Protestants vs Catholics, Christians vs Jews, etc).
Gender norms were applied to all aspects of health and development, including perceived moral character and personality disposition. This is a danger to the individual, but also potentially a danger to society. “Here we can return for the moment to the notion that the male Jew is feminized like the male tubercular. The tubercular’s progressive feminization begins in the middle of the nineteenth century with the introduction of the term: infemminire, to feminize, which is supposedly a result of male castration. By the 1870s, the term is used to describe the feminisme of the male through the effects of other disease, such as tuberculosis. Henry Meige, at the Salpetriere, saw this feminization as an atavism, in which the male returns to the level of the “sexless” child. Feminization is therefore a loss, which can cause masturbation and thus illness in certain predisposed individuals. It is also the result of actual castration or its physiological equivalent, such as an intensely debilitating illness like tuberculosis, which reshapes the body” (Sander L. Gilman, Franz Kafka, the Jewish Patient). There was a fear that all of civilization was becoming effeminate, especially among the upper classes who were expected to be the leaders. That was the entire framework of neurasthenia-obsessed rhetoric in late nineteenth to early twentieth century America. The newer generations of boys, the argument went, were somehow deficient and inadequate. Looking back on that period, there is no doubt that physical and mental illness was increasing, while bone structure was becoming underdeveloped in a way one could perceive as effeminate; such bone development problems are particularly obvious among children raised on plant-based diets, especially veganism and near-vegan vegetarianism, but also anyone on a diet lacking nutritious animal foods.
Let me make one odd connection before moving on. The Seventh Day Adventist Dr. John Harvey Kellogg believed masturbation was not only a moral sin and a cause of ill health but also a sign of inferiority, and his advocacy of a high-fiber vegan diet including breakfast cereals was based on the Galenic theory that such foods decreased libido. Dr. Kellogg was also an influential eugenicist and operated a famous sanitarium. He wasn’t alone in blaming masturbation for disease. The British Dr. D. G. Macleod Munro treated masturbation as a contributing factor for tuberculosis: “the advent of the sexual appetite in normal adolescence has a profound effect upon the organism, and in many cases when uncontrolled, leads to excess about the age when tuberculosis most frequently delivers its first open assault upon the body,” as quoted by Gilman. This related to the ‘bankruptcy’ Kafka mentioned, the idea that one could waste one’s energy reserves. Maybe there is an insight in this belief, despite it being misguided and misinterpreted. The source of the ‘bankruptcy’ may have in part been a nutritional debt, and certainly a high-fiber vegan diet would not refill one’s energy and nutrient reserves as an investment in one’s health — hence, the public health risk of what one might call a hyper-agricultural diet as exemplified by the USDA dietary recommendations and corporate-backed dietary campaigns like EAT-Lancet (Dietary Dictocrats of EAT-Lancet; & Corporate Veganism), but maybe it’s finally reversing course (Slow, Quiet, and Reluctant Changes to Official Dietary Guidelines; American Diabetes Association Changes Its Tune; & Corporate Media Slowly Catching Up With Nutritional Studies).
So far, my focus has mostly been on what we ingest or are otherwise exposed to because of agriculture and the food system, in general and more specifically in industrialized society with its refined, processed, and adulterated foods, largely from plants. But the other side of the picture is what our diet is lacking, what we are deficient in. As I touched upon directly above, an agricultural diet hasn’t only increased certain foods and substances but simultaneously decreased others. What promoted optimal health throughout human evolution has, in many cases, been displaced or interrupted. Agriculture is highly destructive and has depleted the nutrient levels in the soil (Carnivore Is Vegan) and, along with this, even animal foods produced within the agricultural system are similarly depleted of nutrients as compared to animal foods from pastured or free-range animals. For example, fat-soluble vitamins (true vitamin A as retinol, vitamin D3, vitamin K2 not to be confused with K1, and vitamin E complex) are not found in plant foods and are found in far lower concentrations in foods from factory-farmed animals or from animals grazing on soil degraded by agriculture, especially through erosion and desertification. Rhonda Patrick points to deficiencies of vitamin D3, EPA and DHA, and hence insufficient serotonin levels, as being causally linked to autism, ADHD, bipolar disorder, schizophrenia, etc (TheIHMC, Rhonda Patrick on Diet-Gene Interactions, Epigenetics, the Vitamin D-Serotonin Link and DNA Damage). She also discusses inflammation, epigenetics, and DNA damage, which relates to the work of others (Dr. Catherine Shanahan On Dietary Epigenetics and Mutations).
One of the biggest changes with agriculture was the decrease of fatty animal foods that were nutrient-dense and nutrient-bioavailable. It’s in the fat that the fat-soluble vitamins are found, and fat is necessary for their absorption (hence, fat-soluble); and these key nutrients relate to almost everything else, such as the minerals calcium and magnesium that also are found in animal foods (Calcium: Nutrient Combination and Ratios); the relationship of seafood with the balance of sodium, magnesium, and potassium is central (On Salt: Sodium, Trace Minerals, and Electrolytes) and indeed populations that eat more seafood live longer. These animal foods used to hold the prized position in the human diet and the earlier hominid diet as well, as part of our evolutionary inheritance from millions of years of adaptation to a world where fatty animals once were abundant (J. Tyler Faith, John Rowan & Andrew Du, Early hominins evolved within non-analog ecosystems). That was definitely true in the paleolithic before the megafauna die-off, but even to this day hunter-gatherers, when they have access to traditional territory and prey, will seek out the fattest animals available, entirely ignoring lean animals because rabbit sickness is worse than hunger (humans can always fast for many days or weeks, if necessary, and as long as they have reserves of body fat they can remain perfectly healthy).
We’ve already discussed autism in terms of many other dietary factors, especially excesses of otherwise essential nutrients like glutamate, propionate, and butyrate. But like most modern people, those on the autistic spectrum can be nutritionally deficient in other ways and unsurprisingly that would involve fat-soluble vitamins. In a fascinating discussion in one of her more recent books, Nourishing Fats, Sally Fallon Morell offers a hypothesis of an indirect causal mechanism. First off, she notes that, “Dr. Mary Megson of Richmond, Virginia, had noticed that night blindness and thyroid conditions—both signs of vitamin A deficiency—were common in family members of autistic children” (p. 156), indicating a probable deficiency of the same in the affected child. This might be why supplementing cod liver oil, high in true vitamin A, helps with autistic issues. “As Dr. Megson explains, in genetically predisposed children, autism is linked to a G-alpha protein defect. G-alpha proteins form one of the most prevalent signaling systems in our cells, regulating processes as diverse as cell growth, hormonal regulation and sensory perception—like seeing” (p. 157).
The sensory issues common among autistics may seem to be neurocognitive in origin, but the perceptual and psychological effects may be secondary to an underlying cause in altered eye development. Because the rods in their eyes don’t function properly, they have distorted vision that is experienced as a blurry and divided visual field, like a magic-eye puzzle, which takes constant effort to piece into a coherent sense of the world around them. “According to Megson, the blocked visual pathways explain why children on the autism spectrum “melt down” when objects are moved or when you clean up their lines or piles of toys sorted by color. They work hard to piece together their world; it frightens and overwhelms them when the world as they are able to see it changes. It also might explain why children on the autism spectrum spend time organizing things so carefully. It’s the only way they can “see” what’s out there” (p. 157). The rods at the edge of their vision work better and so they prefer to not look directly at people.
The vitamin A link is not merely speculative. In other aspects seen in autism, studies have sussed out some of the proven and possible factors and mechanisms: “Decreased vitamin A, and its retinoic acid metabolites, lead to a decrease in CD38 and associated changes that underpin a wide array of data on the biological underpinnings of ASD, including decreased oxytocin, with relevance both prenatally and in the gut. Decreased sirtuins, poly-ADP ribose polymerase-driven decreases in nicotinamide adenine dinucleotide (NAD+), hyperserotonemia, decreased monoamine oxidase, alterations in 14-3-3 proteins, microRNA alterations, dysregulated aryl hydrocarbon receptor activity, suboptimal mitochondria functioning, and decreases in the melatonergic pathways are intimately linked to this. Many of the above processes may be modulating, or mediated by, alterations in mitochondria functioning. Other bodies of data associated with ASD may also be incorporated within these basic processes, including how ASD risk factors such as maternal obesity and preeclampsia, as well as more general prenatal stressors, modulate the likelihood of offspring ASD” (Michael Maes et al, Integrating Autism Spectrum Disorder Pathophysiology: Mitochondria, Vitamin A, CD38, Oxytocin, Serotonin and Melatonergic Alterations in the Placenta and Gut). By the way, some of the pathways involved are often discussed in terms of longevity, which indicates autistics might be at risk for a shortened lifespan. Autism, indeed, is comorbid with numerous other health issues and genetic syndromes. So autism isn’t just an atypical expression on a healthy spectrum of neurodiversity.
The agricultural diet, especially in its industrially-processed variety, has a powerful impact on numerous systems simultaneously, as autism demonstrates. As with most other health conditions, there is unlikely to be any single causal factor or mechanism. We can take this a step further. With historical changes in diet, it wasn’t only fat-soluble vitamins that were lost. Humans traditionally ate nose-to-tail and this brought with it a plethora of nutrients, even some thought of as being only sourced from plant foods. In its raw or lightly cooked form, meat has more than enough vitamin C for a low-carb diet; whereas a high-carb diet, since glucose competes with vitamin C, requires a higher intake of this antioxidant, which can lead to deficiencies at levels that otherwise would be adequate (Sailors’ Rations, a High-Carb Diet). Also, consider that prebiotics can be found in animal foods as well, and animal-based prebiotics likely feed a very different kind of microbiome that could shift so much else in the body, such as neurotransmitter production: “I found this list of prebiotic foods that were non-carbohydrate that included cellulose, cartilage, collagen, fructooligosaccharides, glucosamine, rabbit bone, hair, skin, glucose. There’s a bunch of things that are all — there’s also casein. But these tend to be some of the foods that actually have some of the highest prebiotic content” (from Vanessa Spina, as quoted in Fiber or Not: Short-Chain Fatty Acids and the Microbiome).
Let’s briefly mention fat-soluble vitamins again in making a point about other animal-based nutrients. Fat-soluble vitamins, similar to ketosis and autophagy, have a profound effect on human biological functioning, including that of the mind (see the work of Weston A. Price as discussed in Health From Generation To Generation; also see the work of those described in Physical Health, Mental Health). In many ways, they are closer to hormones than mere nutrients, as they orchestrate entire systems in the body and how other nutrients get used, particularly seen with vitamin K2, which Weston A. Price discovered and called “Activator X” (only found in animal and fermented foods, not in whole or industrially-processed plant foods). I bring this up because some other animal-based nutrients play a similarly important role. Consider glycine, the main amino acid in collagen. It is available in connective tissues and can be obtained through soups and broths made from bones, skin, ligaments, cartilage, and tendons. Glycine is right up there with the fat-soluble vitamins in being central to numerous systems, processes, and organs.
As I’ve already discussed glutamate at great length, let me further that discussion by pointing out a key link. “Glycine is found in the spinal cord and brainstem where it acts as an inhibitory neurotransmitter via its own system of receptors,” writes Afifah Hamilton. “Glycine receptors are ubiquitous throughout the nervous system and play important roles during brain development. [Ito, 2016] Glycine also interacts with the glutaminergic neurotransmission system via NMDA receptors, where both glycine and glutamate are required, again, chiefly exerting inhibitory effects” (10 Reasons To Supplement With Glycine). Hamilton elucidates the dozens of roles played by this master nutrient and the diverse conditions that follow from its deprivation or insufficiency — it’s implicated in obsessive compulsive disorder, schizophrenia, and alcohol use disorder, along with much else such as metabolic syndrome. But its being essential for dealing with glutamate is what really stands out for this discussion. “Glutathione is synthesised,” Hamilton further explains, “from the amino acids glutamate, cysteine, and glycine, but studies have shown that the rate of synthesis is primarily determined by levels of glycine in the tissue. If there is insufficient glycine available the glutathione precursor molecules are excreted in the urine. Vegetarians excrete 80% more of these precursors than their omnivore counterparts indicating a more limited ability to complete the synthesis process.” Did you catch what she is saying there? Autistics already have too much glutamate and, if they are deficient in glycine, they won’t be able to convert glutamate into the important glutathione. When the body is overwhelmed with unused glutamate, it does what it can to eliminate it, but when constantly flooded with a high-glutamate intake it can’t keep up. The excess glutamate then wreaks havoc on neurocognitive functioning.
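To make that limiting-factor logic concrete, here is a minimal sketch in Python (purely illustrative: the one-to-one stoichiometry of glutathione’s three precursors is real, but the molar amounts are made-up numbers, not measurements). It shows why a shortfall of glycine leaves glutamate unused even when glutamate and cysteine are plentiful.

```python
def glutathione_synthesis(glutamate_mmol, cysteine_mmol, glycine_mmol):
    """Toy model of glutathione (GSH) synthesis.

    One GSH molecule uses one glutamate, one cysteine, and one glycine,
    so output is capped by whichever precursor is scarcest (the
    limiting-precursor logic described above). All quantities here are
    illustrative placeholders, not physiological measurements.
    """
    gsh_made = min(glutamate_mmol, cysteine_mmol, glycine_mmol)
    leftover_glutamate = glutamate_mmol - gsh_made
    return gsh_made, leftover_glutamate

# Hypothetical numbers: plenty of glutamate, modest cysteine, scarce glycine.
gsh, excess_glu = glutathione_synthesis(glutamate_mmol=10.0,
                                        cysteine_mmol=6.0,
                                        glycine_mmol=2.0)
print(f"GSH produced: {gsh} mmol; unused glutamate: {excess_glu} mmol")
# -> GSH produced: 2.0 mmol; unused glutamate: 8.0 mmol
```

Under these assumptions, the only way to raise glutathione output, and to draw down the glutamate pool, is to supply more glycine; cutting dietary glutamate alone doesn't relieve the bottleneck.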
The whole mess of the agricultural diet, specifically in its modern industrialized form, has been a constant onslaught taxing our bodies and minds. And the consequences are worsening with each generation. What stands out to me about autism, in particular, is how isolating it is. The repetitive behavior and focus on objects to the exclusion of human relationships resonates with how addiction isolates the individual. As with other conditions influenced by diet (schizophrenia, ADHD, etc), both autism and addiction block normal human relating by creating an obsessive mindset that, in the most extreme forms, blocks out all else. I wonder if all of us moderns are simply expressing milder varieties of this biological and neurological phenomenon (Afifah Hamilton, Why No One Should Eat Grains. Part 3: Ten More Reasons to Avoid Wheat). And this might be the underpinning of our hyper-individualistic society, with the earliest precursors showing up in the Axial Age following what Julian Jaynes hypothesized as the breakdown of the much more other-oriented bicameral mind. What if our egoic consciousness with its rigid psychological boundaries is the result of our food system, as part of the civilizational project of mass agriculture?
* * *
Mongolian Diet and Fasting:
“Heaven grew weary of the excessive pride and luxury of China… I am from the Barbaric North. I wear the same clothing and eat the same food as the cowherds and horse-herders. We make the same sacrifices and we share our riches. I look upon the nation as a new-born child and I care for my soldiers as though they were my brothers.”
~Genghis Khan, letter of invitation to Ch’ang Ch’un
For anyone who is curious to learn more, the original point of interest was a quote by Jack Weatherford in his book Genghis Khan and the Making of the Modern World. He wrote that, “The Chinese noted with surprise and disgust the ability of the Mongol warriors to survive on little food and water for long periods; according to one, the entire army could camp without a single puff of smoke since they needed no fires to cook. Compared to the Jurched soldiers, the Mongols were much healthier and stronger. The Mongols consumed a steady diet of meat, milk, yogurt, and other dairy products, and they fought men who lived on gruel made from various grains. The grain diet of the peasant warriors stunted their bones, rotted their teeth, and left them weak and prone to disease. In contrast, the poorest Mongol soldier ate mostly protein, thereby giving him strong teeth and bones. Unlike the Jurched soldiers, who were dependent on a heavy carbohydrate diet, the Mongols could more easily go a day or two without food.” By the way, that biography was written by an anthropologist who lived among and studied the Mongols for years. It is about the historical Mongols, but filtered through the direct experience of present-day Mongols who have maintained a traditional diet and lifestyle longer than most other populations.
As nomadic herders living on arid grasslands with no option of farming, they had limited access to plant foods from foraging, and so their diet was well suited to horseback warfare, even over long distances when food stores ran out. That meant, when they had nothing else, on “occasion they will sustain themselves on the blood of their horses, opening a vein and letting the blood jet into their mouths, drinking till they have had enough, and then staunching it.” They could go on “quite ten days like this,” according to Marco Polo’s observations. “It wasn’t much,” explained Logan Nye, “but it allowed them to cross the grasses to the west and hit Russia and additional empires. […] On the even darker side, they also allegedly ate human flesh when necessary. Even killing the attached human if horses and already-dead people were in short supply” (How Mongol hordes drank horse blood and liquor to kill you). The claim of their situational cannibalism came from the writings of Giovanni da Pian del Carpini, who noted they’d eat anything, even lice. The specifics of what they ate were also determined by season: “Generally, the Mongols ate dairy in the summer, and meat and animal fat in the winter, when they needed the protein for energy and the fat to help keep them warm in the cold winters. In the summers, their animals produced a lot of milk so they switched the emphasis from meat to milk products” (from History on the Net, What Did the Mongols Eat?). In any case, animal foods were always the staple.
By the way, some have wondered how long humans have been consuming dairy, since the gene for lactose tolerance is fairly recent. In fact, “a great many Mongolians, both today and in Genghis Khan’s time are lactose intolerant. Fermentation breaks down the lactose, removing it almost entirely, making it entirely drinkable to the Mongols” (from Exploring History, Food That Conquered The World: The Mongols — Nomads And Chaos). Besides mare’s milk fermented into alcohol, they had a wide variety of other cultured dairy and aged cheese. Even then, much of the dairy would contain significant amounts of lactose. A better explanation is that many of the dairy-loving microbes have been incorporated into the Mongolian microbiome, and these microbes, acting in combination as a microbial ecosystem, do some combination of digesting lactose, moderating the effects of lactose intolerance, and/or somehow altering the body’s response to it. But looking at a single microbe might not tell us much. “Despite the dairy diversity she saw,” wrote Andrew Curry, “an estimated 95 percent of Mongolians are, genetically speaking, lactose intolerant. Yet, in the frost-free summer months, she believes they may be getting up to half their calories from milk products. […] Rather than a previously undiscovered strain of microbes, it might be a complex web of organisms and practices—the lovingly maintained starters, the milk-soaked felt of the yurts, the gut flora of individual herders, the way they stir their barrels of airag—that makes the Mongolian love affair with so many dairy products possible” (The answer to lactose intolerance might be in Mongolia).
Here is what is interesting. Based on study of ancient corpses, it’s been determined that lactose intolerant people in this region have been including dairy in their diet for 5,000 years. It’s not limited to the challenge of lactose intolerant people depending on a food staple that is abundant in lactose. The Mongolian population also has high rates of carrying the APOE4 gene variation that can make a diet high in saturated fat problematic (Helena Svobodová et al, Apolipoprotein E gene polymorphism in the Mongolian population). That is a significant detail, considering dairy fat is more saturated than that of almost any other food. These people should be keeling over with nearly every disease known to humanity, particularly as they commonly drink plenty of alcohol and smoke tobacco (as was likewise true of the heart-healthy and long-lived residents of mid-20th century Roseto, Pennsylvania with their love of meat, lard, alcohol, and tobacco; see Blue Zones Dietary Myth). Yet, it’s not the traditional Mongolians but the industrialized Mongolians who show all the health problems. A major difference between these two populations in Mongolia is diet, much of it coming down to the amount of low-carb animal foods eaten versus the amount of high-carb plant foods. Genetics are not deterministic, not in the slightest. As some others have noted, the traditional Mongolian diet would be accurately described as a low-carb paleo diet that, in the wintertime, would often have been a strict carnivore diet and ketogenic diet; although even rural Mongolians, unlike in the time of Genghis Khan, now get a bit more starchy agricultural foods. Maybe there is a protective health factor found in a diet that relies on nutrient-dense animal foods and leans toward the ketogenic.
It isn’t only that the Mongolian diet was likely ketogenic because of being low-carbohydrate, particularly on their meat-based winter diet, but also because it involved fasting. In Mongolia, the Tangut Country, and the Solitudes of Northern Tibet, Volume 1 (1876), Nikolaĭ Mikhaĭlovich Przhevalʹskiĭ writes in the second note on p. 65 under the section Calendar and Year-Cycle: “On the New Year’s Day, or White Feast of the Mongols, see ‘Marco Polo’, 2nd ed. i. p. 376-378, and ii. p. 543. The monthly festival days, properly for the Lamas days of fasting and worship, seem to differ locally. See note in same work, i. p. 224, and on the Year-cycle, i. p. 435.” This is alluded to in another text, which describes that such things as fasting were the norm of that time: “It is well known that both medieval European and traditional Mongolian cultures emphasized the importance of eating and drinking. In premodern societies these activities played a much more significant role in social intercourse as well as in religious rituals (e.g., in sacrificing and fasting) than nowadays” (Antti Ruotsala, Europeans and Mongols in the middle of the thirteenth century, 2001). A science journalist trained in biology, Dyna Rochmyaningsih, also mentions this: “As a spiritual practice, fasting has been employed by many religious groups since ancient times. Historically, ancient Egyptians, Greeks, Babylonians, and Mongolians believed that fasting was a healthy ritual that could detoxify the body and purify the mind” (Fasting and the Human Mind).
Mongol shamans and priests fasted, no different than in so many other religions, but so did other Mongols — more from Przhevalʹskiĭ’s 1876 account showing the standard feast and fast cycle of many traditional ketogenic diets: “The gluttony of this people exceeds all description. A Mongol will eat more than ten pounds of meat at one sitting, but some have been known to devour an average-sized sheep in twenty-four hours! On a journey, when provisions are economized, a leg of mutton is the ordinary daily ration for one man, and although he can live for days without food, yet, when once he gets it, he will eat enough for seven” (see more quoted material in Diet of Mongolia). Fasting was also noted of earlier Mongols, such as Genghis Khan: “In the spring of 1211, Jenghis Khan summoned his fighting forces […] For three days he fasted, neither eating nor drinking, but holding converse with the gods. On the fourth day the Khakan emerged from his tent and announced to the exultant multitude that Heaven had bestowed on him the boon of victory” (Michael Prawdin, The Mongol Empire, 1967). Even before he became Khan, this was his practice as was common among the Mongols, such that it became a communal ritual for the warriors:
“When he was still known as Temujin, without tribe and seeking to retake his kidnapped wife, Genghis Khan went to Burkhan Khaldun to pray. He stripped off his weapons, belt, and hat – the symbols of a man’s power and stature – and bowed to the sun, sky, and mountain, first offering thanks for their constancy and for the people and circumstances that sustained his life. Then, he prayed and fasted, contemplating his situation and formulating a strategy. It was only after days in prayer that he descended from the mountain with a clear purpose and plan that would result in his first victory in battle. When he was elected Khan of Khans, he again retreated into the mountains to seek blessing and guidance. Before every campaign against neighboring tribes and kingdoms, he would spend days in Burkhan Khaldun, fasting and praying. By then, the people of his tribe had joined in on his ritual at the foot of the mountain, awaiting his return” (Dr. Hyun Jin Preston Moon, Genghis Khan and His Personal Standard of Leadership).
As an interesting side note, the Mongol population has been studied to some extent in one area of relevance. In Down’s Anomaly (1976), Smith et al. write that, “The initial decrease in the fasting blood sugar was greater than that usually considered normal and the return to fasting blood sugar level was slow. The results suggested increased sensitivity to insulin. Benda reported the initial drop in fasting blood sugar to be normal but the absolute blood sugar level after 2 hours was lower for mongols than for controls.” That is probably the result of a traditional low-carb diet that had been maintained continuously since before history. For some further context, I noticed some discussion about the Mongolian keto diet (Reddit, r/keto, TIL that Ghenghis Khan and his Mongol Army ate a mostly keto based diet, consisting of lots of milk and cheese. The Mongols were specially adapted genetically to digest the lactase in milk and this made them easier to feed.) that was inspired by the scientific documentary “The Evolution of Us” (presently available on Netflix and elsewhere).
As a concluding thought, we may have the Mongols to thank for the modern American hamburger: “Because their cavalry was traveling so much, they would often eat while riding their horses towards their next battle. The Mongol soldiers would soften scraps of meat by placing it under their saddles while they rode. By the time the Mongols had time for a meal, the meat would be “tenderized” and consumed raw. […] By no means did the Mongols have the luxury of eating the kind of burgers we have today, but it was the first recorded time that meat was flattened into a patty-like shape” (Anna’s House, Brunch History: The Shocking Hamburger Origin Story You Never Heard; apparently based on the account of Jean de Joinville, who was born a few years before Genghis Khan’s death). The Mongols introduced it to Russia, in what was called steak tartare (Tartars being one of the ethnic groups in the Mongol army); the Russians introduced it to Germany, where it was most famously called Hamburg steak (because sailors were served it at the port of Hamburg), from which it was introduced to the United States by way of German immigrants sailing out of Hamburg. Another version of this is Salisbury steak, which was invented during the American Civil War by Dr. James Henry Salisbury (physician, chemist, and medical researcher) as part of a meat-based, low-carb diet in medically and nutritionally treating certain diseases and ailments.
* * *
3/30/19 – An additional comment: I briefly mentioned sugar, that it causes a serotonin high and activates the hedonic pathway. I also noted that it was late in civilization when sources of sugar were cultivated and, I could add, even later when sugar became cheap enough to be common. Even into the 1800s, sugar was minimal and still often considered more as medicine than food.
To extend this thought, it isn’t only sugar in general but specific forms of it (Yu Hue, Fructose and glucose can regulate mammalian target of rapamycin complex 1 and lipogenic gene expression via distinct pathways). Fructose, in particular, has become widespread because of the United States government subsidizing corn agriculture, which has created a greater corn yield than humans can consume. So, what doesn’t get fed to animals or turned into ethanol mostly gets made into high fructose corn syrup and then added to almost every processed food and beverage imaginable.
Fructose is not like other sugars. This was important for early hominid survival and so shaped human evolution. It might have played a role in fasting and feasting. In 100 Million Years of Food, Stephen Le writes that, “Many hypotheses regarding the function of uric acid have been proposed. One suggestion is that uric acid helped our primate ancestors store fat, particularly after eating fruit. It’s true that consumption of fructose induces production of uric acid, and uric acid accentuates the fat-accumulating effects of fructose. Our ancestors, when they stumbled on fruiting trees, could gorge until their fat stores were pleasantly plump and then survive for a few weeks until the next bounty of fruit was available” (p. 42).
That makes sense to me, but he goes on to argue against this possible explanation. “The problem with this theory is that it does not explain why only primates have this peculiar trait of triggering fat storage via uric acid. After all, bears, squirrels, and other mammals store fat without using uric acid as a trigger.” This is where Le’s knowledge is lacking, for he never discusses ketosis, which has been centrally important for humans in a way it hasn’t been for other animals. If uric acid increases fat production, that would be helpful for fattening up for the next starvation period when the body returned to ketosis. So, there would be a regular switching back and forth between formation of uric acid, which stores fat, and formation of ketones, which burn fat.
That is fine and dandy under natural conditions. Excess fructose on a continuous basis, however, is a whole other matter. It has been strongly associated with metabolic syndrome. One pathway of causation is the increased production of uric acid. This can lead to gout (wrongly blamed on meat), but to other things as well. It’s a mixed bag. “While it’s true that higher levels of uric acid have been found to protect against brain damage from Alzheimer’s, Parkinson’s, and multiple sclerosis, high uric acid unfortunately increases the risk of brain stroke and poor brain function” (Le, p. 43).
The potential side effects of uric acid overdose are related to other problems I’ve discussed in relation to the agricultural mind. “A recent study also observed that high uric acid levels are associated with greater excitement-seeking and impulsivity, which the researchers noted may be linked to attention deficit hyperactivity disorder (ADHD)” (Le, p. 43). The problems of sugar go far beyond mere physical disease. It’s one more factor in the drastic transformation of the human mind.
* * *
4/2/19 – More info: There are certain animal fats, the omega-3 fatty acids EPA and DHA, that are essential to human health (Georgia Ede, The Brain Needs Animal Fat). These were abundant in the hunter-gatherer diet. But over the history of agriculture, they have become less common.
This is associated with psychiatric disorders and general neurocognitive problems, including those already mentioned above in the post. Agriculture and industrialization have replaced these healthy lipids with industrially-processed seed oils that are high in linoleic acid (LA), an omega-6 fatty acid. LA interferes with the body’s use of omega-3 fatty acids. Worse still, these seed oils appear not only to alter gene expression (epigenetics) but also to be mutagenic, a possible causal factor behind conditions like autism (Dr. Catherine Shanahan On Dietary Epigenetics and Mutations).
“Biggest dietary change in the last 60 years has been avoidance of animal fat. Coincides with a huge uptick in autism incidence. The human brain is 60 percent fat by weight. Much more investigation needed on correspondence between autism and prenatal/child ingestion of dietary fat.”
~ Brad Lemley
The agricultural diet, along with a drop in animal foods, saw a loss of access to the high levels and full profile of B vitamins. As with the later industrial seed oils, this had a major impact on genetics:
“The phenomenon wherein specific traits are toggled up and down by variations in gene expression has recently been recognized as a result of the built-in architecture of DNA and dubbed “active adaptive evolution.” 44
“As further evidence of an underlying logic driving the development of these new autism-related mutations, it appears that epigenetic factors activate the hotspot, particularly a kind of epigenetic tagging called methylation. 45 In the absence of adequate B vitamins, specific areas of the gene lose these methylation tags, exposing sections of DNA to the factors that generate new mutations. In other words, factors missing from a parent’s diet trigger the genome to respond in ways that will hopefully enable the offspring to cope with the new nutritional environment. It doesn’t always work out, of course, but that seems to be the intent.”
~Catherine Shanahan, Deep Nutrition, p. 56
And one last piece of evidence on the essential nature of animal fats:
“Maternal intake of fish, a key source of fatty acids, has been investigated in association with child neurodevelopmental outcomes in several studies. […]
“Though speculative at this time, the inverse association seen for those in the highest quartiles of intake of ω-6 fatty acids could be due to biological effects of these fatty acids on brain development. PUFAs have been shown to be important in retinal and brain development in utero (37) and to play roles in signal transduction and gene expression and as components of cell membranes (38, 39). Maternal stores of fatty acids in adipose tissue are utilized by the fetus toward the end of pregnancy and are necessary for the first 2 months of life in a crucial period of development (37). The complex effects of fatty acids on inflammatory markers and immune responses could also mediate an association between PUFA and ASD. Activation of the maternal immune system and maternal immune aberrations have been previously associated with autism (5, 40, 41), and findings suggest that increased interleukin-6 could influence fetal brain development and increase risk of autism and other neuropsychiatric conditions (42–44). Although results for effects of ω-6 intake on interleukin-6 levels are inconsistent (45, 46), maternal immune factors potentially could be affected by PUFA intake (47). […]
“Our results provide preliminary evidence that increased maternal intake of ω-6 fatty acids could reduce risk of offspring ASD and that very low intakes of ω-3 fatty acids and linoleic acid could increase risk.”
~Kristen Lyall et al, Maternal Dietary Fat Intake in Association With Autism Spectrum Disorders
* * *
6/13/19 – About the bicameral mind, I saw some other evidence for it in relationship to fasting. In the following quote, it is described how, after ten days of ritual fasting, ancient humans would experience spirits. One thing that is certain is that one can be fully in ketosis within three days. This would be true even if it wasn’t total fasting, as the caloric restriction would achieve the same end.
The author, Michael Carr, doesn’t think fasting was the cause of the spirit visions, but he doesn’t explain the reason(s) for his doubt. There is a long history of fasting used to achieve this intended outcome. If fasting was ineffective for this purpose, why has nearly every known traditional society for millennia used such methods? These people knew what they were doing.
By the way, imbibing alcohol after the fast would really knock someone into an altered state. The body becomes even more sensitive to alcohol when in a ketogenic state during fasting. Combine this altered state with ritual, setting, cultural expectation, and archaic authorization. I don’t have any doubt that spirit visions could easily be induced.
Reflections on the Dawn of Consciousness
ed. by Marcel Kuijsten
Kindle Location 5699-5718
Chapter 13
The Shi ‘Corpse/ Personator’ Ceremony in Early China
by Michael Carr
“”Ritual Fasts and Spirit Visions in the Liji” 37 examined how the “Record of Rites” describes zhai 齋 ‘ritual fasting’ that supposedly resulted in seeing and hearing the dead. This text describes preparations for an ancestral sacrifice that included divination for a suitable day, ablution, contemplation, and a fasting ritual with seven days of sanzhai 散 齋 ‘relaxed fasting; vegetarian diet; abstinence (esp. from sex, meat, or wine)’ followed by three days of zhizhai 致 齋 ‘strict fasting; diet of grains (esp. gruel) and water’.
“Devoted fasting is inside; relaxed fasting is outside. During fast-days, one thinks about their [the ancestor’s] lifestyle, their jokes, their aspirations, their pleasures, and their affections. [After] fasting three days, then one sees those [spirits] for whom one fasted. On the day of the sacrifice, when one enters the temple, apparently one must see them at the spirit-tablet. When one returns to go out the door [after making sacrifices], solemnly one must hear sounds of their appearance. When one goes out the door and listens, emotionally one must hear sounds of their sighing breath. 38
“This context unequivocally uses biyou 必 有 ‘must be/ have; necessarily/ certainly have’ to describe events within the ancestral temple; the faster 必 有 見 “must have sight of, must see” and 必 有 聞 “must have hearing of, must hear” the deceased parent. Did 10 days of ritual fasting and mournful meditation necessarily cause visions or hallucinations? Perhaps the explanation is extreme or total fasting, except that several Liji passages specifically warn against any excessive fasts that could harm the faster’s health or sense perceptions. 39 Perhaps the explanation is inebriation from drinking sacrificial jiu 酒 ‘( millet) wine; alcohol’ after a 10-day fast. Based on measurements of bronze vessels and another Liji passage describing a shi personator drinking nine cups of wine, 40 York University professor of religious studies Jordan Paper calculates an alcohol equivalence of “between 5 and 8 bar shots of eighty-proof liquor.” 41 On the other hand, perhaps the best explanation is the bicameral hypothesis, which provides a far wider-reaching rationale for Chinese ritual hallucinations and personation of the dead.”
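As a quick arithmetic aside on Paper's estimate: the conversion is easy to sketch out. The nine cups and the eighty-proof comparison come from the quote above; the cup volume and the strength of early millet wine below are rough guesses of mine, chosen only to show how such a calculation lands in the quoted five-to-eight-shot range.

```python
# Back-of-the-envelope version of Jordan Paper's comparison (illustrative only).
# Assumed, not sourced: the bronze-vessel cup size and the millet wine strength.
CUPS = 9
SHOT_ML, SHOT_ABV = 44.0, 0.40          # a 1.5 oz bar shot of eighty-proof liquor
shot_ethanol_ml = SHOT_ML * SHOT_ABV    # about 17.6 mL of ethanol per shot

for cup_ml in (125.0, 175.0):           # guessed cup volumes
    for wine_abv in (0.08, 0.09):       # guessed millet wine strength
        ethanol_ml = CUPS * cup_ml * wine_abv
        shots = ethanol_ml / shot_ethanol_ml
        print(f"cup {cup_ml:.0f} mL at {wine_abv:.0%} ABV -> ~{shots:.1f} shots")
```

Whatever the exact figures, nine cups of even a weak brew on a fasted, ketotic body would be more than enough to produce an altered state.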
This made me immediately wonder how this all relates. Changes in diets alter hormonal functioning. Endocrinology, the study of hormones, has been a major part of the diet debate going back to European researchers from earlier last century (as discussed by Gary Taubes). Diet affects hormones and hormones in turn affect diet. But I had something more specific in mind.
What about propionate and glutamate? What might their relationship be to testosterone? In a brief search, I couldn’t find anything about propionate. But I did find some studies related to glutamate. There is an impact on the endocrine system, although these studies weren’t looking at the results in terms of autism specifically or neurocognitive development in general. It points to some possibilities, though.
One could extrapolate from one of these studies that increased glutamate in the pregnant mother’s diet could alter what testosterone does to the developing fetus, in that testosterone increases the toxicity of glutamate which might not be a problem under normal conditions of lower glutamate levels. This would be further exacerbated during breastfeeding and later on when the child began eating the same glutamate-rich diet as the mother.
11/28/21 – Here is some discussion of vitamin B1 (thiamin/thiamine). It couldn’t easily fit into the above post without revising and rewriting some of it. And it could’ve been made into a separate post by itself. But, for the moment, we’ll look at some of the info here, as relevant to the above survey and analysis. This section will be used as a holding place for some developing thoughts, although we’ll try to avoid getting off-topic in a post that is already too long. Nonetheless, we are going to have to trudge a bit into the weeds so as to see the requisite details more clearly.
Related to autism, consider this highly speculative hypothesis: “Thiamine deficiency is what made civilization. Grains deplete it, changing the gut flora to make more nervous and hyperfocused (mildly autistic) humans who are afraid to stand out. Conformity. Specialization in the division of labor” (JJ, Is Thiamine Deficiency Destroying Your Digestive Health? Why B1 Is ESSENTIAL For Gut Function, EONutrition). Thiamine deficiency is also associated with delirium and psychosis, such as schizophrenia (relevant scientific papers available are too numerous to be listed). By the way, psychosis, along with mania, has an established psychological and neurocognitive overlap with measures of modern conservatism; in opposition to the liberal link to mood disorders, addiction, and alcoholism (Uncomfortable Questions About Ideology; & Radical Moderates, Depressive Realism, & Visionary Pessimism). This is part of some brewing thoughts that won’t be further pursued here.
The point is simply to emphasize the argument that modern ideologies, as embodied worldviews and social identities, may partly originate in or be shaped by dietary and nutritional factors, among much else in modern environments and lifestyles. Nothing even comparable to conservatism and liberalism existed as such prior to the expansion and improvement of agriculture during the Axial Age (farm fields were made more uniform and well-managed, and hence with higher yields; e.g., systematic weeding became common as opposed to letting fields grow in semi-wild state); and over time there were also innovations in food processing (e.g., removing hulls from grains made them last longer in storage while having the unintended side effect of also removing a major source of vitamin B1 to help metabolize carbs).
In the original writing of this post, one focus was on addiction. Grains and dairy were noted as sources of exorphins and dopaminergic peptides, as well as propionate and glutamate. As already explained, this goes a long way to explain the addictive quality of these foods and their relationship to the repetitive behavior of obsessive-compulsive disorder. This is seen in many psychiatric illnesses and neurocognitive conditions, including autism (Derrick Lonsdale et al, Dysautonomia in Autism Spectrum Disorder: Case Reports of a Family with Review of the Literature):
“It has been hypothesized that autism is due to mitochondrial dysfunction [49], supported more recently [50]. Abnormal thiamine homeostasis has been reported in a number of neurological diseases and is thought to be part of their etiology [51]. Blaylock [52] has pointed out that glutamate and aspartate excitotoxicity is more relevant when there is neuron energy failure. Brain damage from this source might be expected in the very young child and the elderly when there is abnormal thiamine homeostasis. In thiamine-deficient neuroblastoma cells, oxygen consumption decreases, mitochondria are uncoupled, and glutamate, formed from glutamine, is no longer oxidized and accumulates [53]. Glutamate and aspartate are required for normal metabolism, so an excess or deficiency are both abnormal. Plaitakis and associates [54] studied the high-affinity uptake systems of aspartate/glutamate and taurine in synaptosomal preparations isolated from brains of thiamine-deficient rats. They concluded that thiamine deficiency could impair cerebellar function by inducing an imbalance in its neurotransmitter systems.”
We’ve previously spoken of glutamate, a key neurotransmitter; but let’s summarize it while adding in new info. Among those on the autistic spectrum, there is commonly a glutamate excess. This is caused by eating a lot of processed foods that use glutamate as an additive (e.g., MSG). And there is the contributing factor of many autistics being drawn to foods naturally high in glutamate, specifically dairy and wheat. A high-carb diet also promotes the body’s own production of glutamate, with carb-related inflammation spiking glutamate levels in the brain; and it downregulates the levels of the inhibitory neurotransmitter GABA that balances glutamate. GABA is important for sleep and much else.
Keep in mind that thiamine is required in the production of numerous other neurotransmitters and required in the balanced interaction between them. Another B vitamin, B12 (cobalamin), plays a similar role; and its deficiency is not uncommonly seen in these conditions as well. The B vitamins, by the way, are particularly concentrated in animal foods, as are other key nutrients. Think about choline, precursor of acetylcholine, which promotes sensory habituation, perceptual regulation, attentional focus, executive function, and selective responsiveness while supporting mental flexibility (thiamine is also needed in making acetylcholine, and notably choline has some similarities to B vitamins); while similarly the amino acid L-tyrosine further promotes mental flexibility — the two form a balance of neurocognitive functioning, both of which can be impaired in diverse psychiatric diseases, neurological conditions, speech/language issues, learning disabilities, etc.
There is way too much scientific evidence to be cited and surveyed here, but let’s briefly focus in on some examples involving choline, such an easily found nutrient in eggs, meat, liver, and seafood. Studies indicate choline prevents mental health issues like schizophrenia and ADHD that involve sensory inhibition and attention problems that can contribute to social withdrawal (Bret Stetka, Can Mental Illness Be Prevented In The Womb?). Autism spectrum disorders and mood disorders, in being linked to choline deficiency, likewise exhibit social withdrawal. In autism, the sensory inhibition challenge is experienced as sensory overload and hyper-sensitivity (Anuradha Varanasi, Hypersensitivity Might Be Linked To A Transporter Protein Deficiency In The Brain: Study).
Mental flexibility, specifically, seems less relevant to modern society; or rather, maybe its suppression has made possible the rise of modern society, as hyper-specialization has become central to most modern work that is narrowly focused and repetitive. Yet one might note that modern liberalism strongly correlates with mental flexibility; e.g., Ernest Hartmann’s fluid and thin boundaries of mind, Big Five’s trait of openness to experience, and Myers-Briggs intuition and perceiving — by the way, a liberal arts education is defined by its not being specialized, and that is precisely what makes it ‘liberal’ (i.e., generous, expansive, inclusive, diverse, tolerant, multiperspectival, etc).
Maybe this also relates to how modern liberalism, as an explicit socio-ideological identity, has typically been tied into the greater wealth of the middle-to-upper classes and hence involving greater access to nutritious foods and costly supplements, not to mention high quality healthcare that tests for nutritional deficiencies and treats them early on; along with higher status, more privileges, and less stress within the high inequality hierarchy of the American caste system. There is a significant amount of truth to the allegation about a ‘liberal elite’, which in some ways applies to the relatively more liberal-minded conservative elites as well. It would be interesting to know if malnutrition or specific nutritional deficiencies increase social conservatism, similar to studies that have shown a link between parasite load and authoritarianism (in this blog, it’s been pointed out that all authoritarianism is socially conservative, not only the likes of Nazis but also Soviets, Maoists, and others; all of which targeted social liberals and those under the protection of socially liberal society).
Many other factors can stress this delicate system. To return to glutamate: it is one of three precursors in producing the endogenous antioxidant glutathione. A major limit to this process is glycine, which primarily comes from the connective tissue of animal foods (tough meats, gristle, bone broths, etc). Without sufficient glycine, glutamate won’t get used up and so will accumulate. Plus, glycine directly interacts with the glutaminergic neurotransmission system and so is needed for healthy functioning of glutamate. Further complicating matters can be mercury toxicity, which over-excites the glutamate pathway. Then, as already described, the modern diet dumps even more glutamate on the fire. It’s a whole freaking mess, the complex and overlapping conditions of modernity. Altering any single factor would throw a wrench into the works, but what we’re talking about is nearly every major factor along with many minor factors all being tossed up in the air.
The standard American diet is high in refined carbs while low in certain animal-based nutrients that were more typical on a traditional nose-to-tail diet. About the first part: refined carbs are low in vitamin B1 (thiamin/thiamine), though governments have required fortification with such key nutrients. The problem is that thiamine is required for metabolism of carbs. The more carbs one eats, the more thiamine is needed. Carb intake has risen so vastly that, as some argue, the levels of fortification aren’t enough. To make matters worse, because thiamine deficiency causes carb metabolism disruption, there is an increasing craving for carbs as the body struggles to get the fuel it needs. Then, as those cravings lead to continued overeating of carbs, thiamine deficiency gets worse, which makes the carb cravings even stronger. It becomes a lifelong addiction, in some cases involving alcoholism as liquid carbs (the body treats alcohol the same as sugar).
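To illustrate the shape of that argument, here is a minimal sketch in Python. Every number in it is a placeholder of my own for demonstration, not an actual recommended intake or measured requirement; the point is only that a demand which scales with carbohydrate intake will eventually outrun a fixed, fortification-level supply.

```python
def thiamine_balance(carb_grams_per_day,
                     baseline_mg=1.2,                # hypothetical fixed intake from fortified foods
                     mg_needed_per_100g_carb=0.33):  # hypothetical demand per 100 g of carbs
    """Toy model: thiamine demand rises with carbohydrate intake while the
    fortified supply stays fixed. All numbers are illustrative placeholders,
    not nutritional recommendations."""
    demand_mg = carb_grams_per_day / 100.0 * mg_needed_per_100g_carb
    return baseline_mg - demand_mg   # positive = surplus, negative = shortfall

for carbs in (150, 300, 450, 600):
    print(f"{carbs} g carbs/day -> thiamine balance {thiamine_balance(carbs):+.2f} mg")
# Under these made-up parameters the balance flips negative somewhere past
# ~360 g of carbohydrate a day: the more carbs eaten, the more thiamine is
# burned just to metabolize them, which is the shape of the argument above.
```

The same structure applies to any nutrient whose demand scales with what is being metabolized while its supply is pegged to a fixed fortification level.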
The only alternative fuel for the body is fat. Here we get to another wrinkle. A high-carb diet also causes insulin resistance. The hormone insulin, like thiamine, is also needed in energy metabolism. This often leads to obesity where excess calories get stored as fat but, without insulin sensitivity, the body can’t easily access that stored energy. So, this is why fat people are constantly hungry, despite having immense stored energy. Their bodies can’t fully use that stored energy and neither can their bodies fully use the carbs they’re eating. Thiamine deficiency combined with insulin resistance is a spiral of metabolic dysfunction. This is why some experts in this field worry that thiamine insufficiency might be greater than acknowledged and that it might not show up on standard tests, as what is not being considered is the higher demand for thiamine that comes with a higher intake of carbs than has ever before existed. To further obscure this health crisis, it is irrelevant how much thiamine a test shows in one’s bloodstream if one lacks the cofactors (e.g., magnesium) to help the body process thiamine and transport it into cells.
Insulin resistance, along with the rest of metabolic syndrome, has many neurological consequences. Numerous neurocognitive conditions are directly linked to it and often involve thiamine deficiency — besides autism: mood disorders, obsessive-compulsive disorder, schizophrenia, etc. For example, consider Alzheimer’s, which some are now referring to as type III diabetes because there is insulin resistance in the brain; and the brain requires glucose, which in turn requires insulin and insulin sensitivity. All cells need energy and this goes to the centrality of the mitochondria, the powerhouses of cellular energy (each cell can have thousands of mitochondria). Besides autoimmune conditions like multiple sclerosis, mitochondrial dysfunction might also be involved in conditions like autism. That relates back to thiamine deficiency causing energy deficiency, which in turn affects the role of glutamate.
It’s a morass of intertwining mechanisms, pathways, and systems that are hard for a layman to comprehend. But it is serious stuff on so many levels, for individuals and society. For a moment, let’s step back and look again at the big picture. In The Crisis of Identity, public health was explained as a moral panic and existential crisis. One aspect that wasn’t explored in that post is cancer, but we did briefly note that, “in the mid-1800s, Stanislas Tanchou did a statistical analysis that correlated the rate of grain consumption with the rate of cancer; and he observed that cancer, like insanity, spread along with civilization.” We only bring this up now because we’ve been reading Sam Apple’s book Ravenous, which is about the Nazi obsession with cancer, pursued with the same mass hysteria that was going on elsewhere in the Western world around neurasthenia and tuberculosis; and with antisemitism brought up everywhere it was found.
Cancer, though, can help us understand an aspect of thiamine deficiency and insufficiency. It also has to do with neurological and mental health. In interfering with carb metabolism, insufficient thiamine also interferes with mitochondrial oxidation, and so cells turn to fermenting glucose for energy. This is what happens in cancer cells, as the German-Jewish scientist Otto Warburg, who kept working under the Nazi regime, thought so important. In general, mitochondrial dysfunction results and energy production goes down. Also, the mitochondria are closely related to immune functioning and so autoimmune disorders can follow: multiple sclerosis, Hashimoto’s, rheumatoid arthritis, etc. Along with causing gut issues and a diversity of other symptoms, this is why thiamine deficiency is known as a disease mimic, so often getting misdiagnosed as something else.
That is a problem with something like psychiatric categories and labels, as they are simply groupings of symptoms; but then again that is true for most conventional healthcare. We need to discern the underlying cause(s). To demonstrate this, we’ll now move on to the limbic system, closely tied to the primitive brain stem and having to do with emotional processing and control of the autonomic nervous system. Thiamine deficiency has a strong impact on limbic cells, similar to an oxygen deficiency, because of the aforementioned altered energy metabolism of mitochondria, which rely on oxygen for the production of ATP (the main fuel used by most cells). There is not only a loss of energy but eventually mitochondrial death and hence cell death, also from decreased glucose utilization in cells; or, in some cases, something worse happens when cells refuse to die (i.e., cancer), turning instead to glucose fermentation, which allows those cells to proliferate. In either case, the involvement of carbs and glucose becomes dramatically changed and imbalanced.
This points to how the same fundamental issues deep within our physiology can become expressed in numerous ways, such as the link between cancer and metabolic syndrome (particularly obesity). But, in terms of subjective experience, we can’t realize most of this is going on and even doctors often aren’t able to detect it with the crude tools at hand. Yet the individual might experience the consequences of what can’t be seen. If thiamine deficiency causes brain damage in the limbic system and elsewhere, the results can be depression, anxiety, irritability, fatigue, bipolar, emotional instability, moodiness, confusion, schizophrenia, cognitive decline, learning difficulties, inability to form memories, loss of memory recall, confabulation (making up stories), etc; with the worst symptoms corresponding to Wernicke-Korsakoff syndrome, which can ultimately (and very rapidly) turn fatal. Now multiply that across an entire society and no wonder the reactionary mind has taken hold and created such a powerful psychological undertow, not only for conservatives but for everyone.
* * *
6/2/22 – Let’s make yet another subsection to throw in some other info. This is an extension of what has already been said on the growing number of factors involved in autism spectrum disorder, not to mention often overlapping with numerous other physical and cognitive conditions. There are so many proven and potential factors (correlated, contributing, and causal) that it can give one a headache trying to piece it all together and figure out what it means. Writing about it here is nearly headache-inducing, and so empathy goes out to any readers trying to work their way through this material. Such diverse and wide-ranging evidence might imply that so-called autism spectrum disorder is not really a single disorder but a blanket label to cover up mass complexity and confusion. Okay. Take a deep breath.
An interesting substance is carnitine, which is needed for energy production by helping transport fatty acids into the mitochondria. Low carnitine levels are prevalent in certain neurocognitive conditions, from depression to autism. “Some tenuous links between carnitine and autism already exist. Defects in the mitochondria, which have previously been linked to autism, can sometimes lead to carnitine deficiency. And treating children with autism with valproic acid, an anti-seizure medicine that can lower carnitine levels, can have serious side effects” (Emily Singer, Defects in carnitine metabolism may underlie autism). It’s one of the many nutrients that is mostly found in or entirely exclusive to animal foods, and so it has much to do with the agricultural diet and even more so with modern industrial food production. For such an easily obtained substance, there is a significant number of Westerners who are not getting enough of it. But all they’d need to do to obtain it is eat some red meat, which is precisely the main food that health experts and public officials have been telling Americans to avoid.
Beef consumption is almost half of what it was at the beginning of the 19th century and has leveled out since then, whereas low-carnitine meats such as chicken and fish have increasingly replaced beef. About the agricultural angle, it might be noted that grain-fed animals have lower amounts of diverse nutrients (carnitine, choline, CoQ10, zinc, carotenoids, vitamin A3, E vitamins, omega-3s, etc) as compared to pasture-raised and wild-caught animals; except with certain nutrients that are typically added to animal feed — and this might partly explain why the agricultural revolution led to increased stunting and sickliness, many thousands of years before the modern industrialized diet of hyper-processed foods produced from industrial agriculture. So, it’s not only that modern Americans are eating less red meat but replacing such nutrient-density with lower quality animal foods from factory farming; while overall meat consumption has dropped since the 19th century, along with animal fat intake having drastically declined after being mostly replaced with industrial seed oils by the 1930s. It’s safe to say that the average American is consuming approximately zero fatty ruminant meat or any other animal foods from pasture-raised or wild-caught animals. Yet the intake of vegetables, fruits, nuts, seeds, and seed oils is greater than in past centuries.
To refocus, the human body has some capacity to produce carnitine de novo, but it’s limited and far from optimal. Autistics, in particular, can have carnitine-related genetic defects such as a deletion in the gene trimethyllysine hydroxylase epsilon (TMLHE), a genetic defect that is mostly found in families with multiple autistic boys. Also, as expected, vegans and vegetarians measure as having low plasma levels of this key nutrient. Such deficiencies are potentially a worse problem for certain modern populations than they were in the past, because “genetic deficiencies in carnitine synthesis were tolerated in the European population because their effects were nutritionally complemented by a carnitine-rich diet. In this manner, the selection pressures that would have otherwise eliminated such mutations from the population were effectively removed” (Vytas A. Bankaitis & Zhigang Xie, The neural stem cell/carnitine malnutrition hypothesis: new prospects for effective reduction of autism risk?). As for the present, the authors “estimate that some 20%–30% of pregnant women in the United States might be exposing the developing fetus to a suboptimal carnitine environment.”
Carnitine underpins many physiological factors and functions involving embryonic neural stem cells, long-chain fatty acids, mitochondrial function, ATP production, oxidative stress, inflammation, epigenetic regulation of gene expression, etc. As mediated by epigenetic control, carnitine promotes “the switch from solitary to gregarious social behavior” in other species and likely in humans as well (Rui Wu et al, Metabolomic analysis reveals that carnitines are key regulatory metabolites in phase transition of the locusts). Certainly, as Bankaitis and Xie explain, carnitine deficiency is directly correlated to language/speech delay, language weakness, or speech deficits, along with stunted motor development and common autistic behaviors that are causally linked by way of long-chain fatty acid (LCFA) β-oxidation deficits, medium-chain FAO deficits, etc. To emphasize this point, overlapping with the same deficiencies (carnitine, B vitamins, fat-soluble vitamins, choline, etc) and excesses (glutamate, propionate, etc) as found in autism, there are many other speech and language conditions: dyslexia, specific language impairment (SLI), developmental language disorder (DLD), etc; along with ADHD, learning disabilities, and much else (about all of this, approximately a million studies have been done and another million articles written) — these might not always be entirely distinct categories but imperfect labels for capturing a swarm of underlying issues, as has been suggested by some experts in the field.
To worsen these problems are toxins: “Exposure of a pregnant woman to high levels of heavy metals in drinking water or otherwise also carries the risk of impairing de novo carnitine biosynthesis.” In the main text of this post, there was much exploration of glutamate (e.g., MSG) as a neurotoxin. On a related note, acetyl-L-carnitine (ALCAR or LAC) “supplements ameliorate depressive symptoms in mice by reversing brain-cell impairment caused by an excess of glutamate” (Bruce S. McEwen, Lack of a single molecule may indicate severe and treatment-resistant depression; see: Carla Nasca et al, Acetyl-L-carnitine deficiency in patients with major depressive disorder). A similar protective role is found with other “compounds containing a trimethylamine group (carbachol, betaine, etc.)” (Marta Llansola et al, Prevention of ammonia and glutamate neurotoxicity by carnitine: molecular mechanisms). Furthermore, “L-carnitine can protect from Hepatotoxic, neurotoxic, renal impairment and genotoxic effects functionally, biochemically and histopathologically with a corresponding reduction of oxidative stress” (Krishna Murthy Meesala & Pratima Khandayataray, Monosodium Glutamate Toxicity and the Possible Protective Role of L–Carnitine). It’s fascinating that one set of toxins, heavy metals, would interfere with carnitine levels when carnitine is needed to deal with other toxins, glutamate and ammonia.
Bankaitis and Xie then conclude: “Finally, we are struck by the fact that two developments dominating public interest in contemporary news cycles detail the seemingly unrelated topics of the alarming rise of autism in young children and the damaging human health and planetary-scale environmental costs associated with cattle farming and consumption of red meat (86.). The meteoric rise of companies promoting adoption of meatless mimetics of beef and chicken at major fast food outlets testifies to the rapidly growing societal appetite for reducing meat consumption. This philosophy is even rising to the level of circulation of scientific petitions exhorting world governments to unite in adopting global measures to restrict meat consumption (87). We now pose the question whether such emerging societal attitudes regarding nutrition and its environmental impact are on collision course with increased ASD risk. Food for thought, indeed.” It’s been shown that mothers of autistic children ate less meat before conception, during pregnancy, or during the lactation period, and had lower levels of calcium (Ya-Min Li, Maternal dietary patterns, supplements intake and autism spectrum disorders). Sure, we could supplement carnitine and every other nutrient concentrated in meat. That certainly would help bring the autism rate back down again (David A. Geier et al, A prospective double-blind, randomized clinical trial of levocarnitine to treat autism spectrum disorders). But maybe, instead, we should simply emphasize a healthy diet of nutrient-dense animal foods, particularly as whole foods.
It might be about finding the right form in the right amount, maybe in the needed ratio with other nutrients — our partial knowledge and vast ignorance being the eternal problem (Hubris of Nutritionism); whereas animal foods, particularly pasture-raised and wild-caught, have all of the nutrients we need in the forms, amounts, and ratios we need them in. As clever monkeys, we’ve spent the past century failing in our endeavor to industrially and medically re-create the wheel that Mother Nature invented through evolution. To put this in the context of everything analyzed here in this unwieldy piece, if most modern people weren’t following a nutritionally-deficient agricultural diet largely consisting of industrially hyper-processed and fortified plant foods, nearly all of the scientific disagreement and debate would be irrelevant. We’ve painted ourselves into a corner. The fact of the matter is that we are a sickly people, and much of that is caused by diet. Nor is it limited to micronutrients, as the macronutrients play a particular role in metabolic health or the lack thereof, which in turn is another contributing factor to autism (Alison Jean Thomas, Is a Risk of Autism Related to Nutrition During Pregnancy?). And metabolic dysfunction and disease have much to do with addictive and/or harmful overconsumption of agricultural foods like grains, potatoes, sugar cane, high fructose corn syrup, seed oils, etc.
For vitamin B9, some speculate that increased risk of autism might have to do with methylation defects caused by mutations in the MTHFR gene (A1298C and C667T), or even possibly with something mimicking this phenomenon in those without such mutations (Karen E Christensen, High folic acid consumption leads to pseudo-MTHFR deficiency, altered lipid metabolism, and liver injury in mice). This relates to one reason behind recommendations for methylated forms of B vitamins, which are a good source of the methyl groups required for various physiological functions. For example, in demonstrating how one thing leads to another: “The methyl group from methyl folate is given to SAMe, whose job it is to deliver methyl to 200 essential pathways in the body. […] After receiving methyl donors, SAMe delivers methyl to 200 pathways in the body including ones needed to make carnitine, creatine and phosphotidylcholine. Carnitine supplementation improves delivery of omega 3 & 6 fatty acids needed to support language, social and cognitive development. Phosphatidylcholine is important in cell membrane health and repair. […] Repair of the cell membrane is an important part of improving sensory issues and motor planning issues in children with autism, ADHD and sensory integration disorder. Dimethylglycine (DMG) and trimethylglycine (TMG) donate methyl groups to the methylation cycle. TMG is needed to recycle homocysteine and help produce SAMe” (Treat Autism, Autism and Methylation – Are you helping to repair your child’s methylation cycle?).
Others dismiss these skeptical concerns and alternative theories as pseudo-scientific fear-mongering. The debate began with a preliminary study done in 2016; and, in the following year, a published review concurred that, “Based on the evidence evaluated, we conclude that caution regarding over supplementing is warranted” (Darrell Wiens & M. Catherine DeSoto, Is High Folic Acid Intake a Risk Factor for Autism?—A Review). There are other issues besides that. There has been a quarter century of mass supplementation of folate through fortified foods, but apparently no safety studies or analyses were ever done for the general population. On top of that, phthalate exposure from plastic contamination in water and elsewhere disrupts genetic signals for the processing of folate (Living On Earth, Plastics Linked to Rising Rates of Autism). But supplementation of folic acid might compensate for this (Nancy Lemieux, Study reports link between phthalates and autism, with protective effects of folic acid). The microplastics from the breakdown of plastic can accumulate in biological tissue that humans consume, though it’s unclear whether the same is true of plants and how much phthalates can accumulate up the food chain. So, it’s not clear how this may or may not be a problem specifically within present agriculture, but one suspects it might be an issue. Certainly, the majority of water in the world now is contaminated by microplastics and much else; and that water is used for livestock and agricultural goods. It’s hard to imagine how such things couldn’t be getting into everything or what it might mean for changes in the human body-mind, as compounded by all the rest (e.g., how various substances interact within the body). About pesticides in the water or from other sources, one might note that folic acid may have a protective effect against autism (Arkansas Folic Acid Coalition, Folic Acid May Reduce Autism Risk from Pesticides).
Whatever it all means, it’s obvious that the B vitamins are among the many super important nutrients mostly found in animal foods and concentrated in the highest amounts in the highest-quality sources from animals grown on pasture or in the wild. Much of the B vitamin debate about autism risk is too complex and murky to further analyze here, not to mention too mixed up with confounders and the replication crisis; with one potential confounder being the birth order effect or stoppage effect (Gideon Koren, High-Dose Gestational Folic Acid and the Risk for Autism? The Birth Order Effect). As one person noted, “If the literature is correct, and folic acid really causes a 42% reduction in autism, we should see a sharp decrease in autism diagnosis for births starting in 1997. Instead, autism rates continued to increase at exactly the same rate they had before. There is nothing in the data to suggest even a small drop in autism around the time of folic acid fortification” (Chris Said, Autism, folic acid, and the trend without a blip). And elsewhere it was recently stated that, “The overall evidence for all these claims remains inconclusive. While some meta-analyses have found a convincing pattern, a comprehensive 2021 Nutrients review failed to find a ‘robust’ statistical association — a more definitive outcome in the field of epidemiology” (Molly Glick, A Popular Supplement’s Confusing Links With Autism Development). That same assessment is repeated by others: “Studies have pointed out a potential beneficial effect of prenatal folic acid maternal supplementation (600 µg) on the risk of autism spectrum disorder onset, but opposite results have been reported as well” (Bianka Hoxha et al, Folic Acid and Autism: A Systematic Review of the Current State of Knowledge). It doesn’t add up, but we won’t attempt to solve that mystery.
To further muck up the works, it’s amusing that some suggest a distinction be made: “The signs and symptoms of pediatric B12 deficiency frequently mimic those of autism spectrum disorders. Both autistic and brain-injured B12-deficient children have obsessive-compulsive behaviors and difficulty with speech, language, writing, and comprehension. B12 deficiency can also cause aloofness and withdrawal. Sadly, very few children presenting with autistic symptoms receive adequate testing for B12 deficiency” (Sally M. Pacholok, Pediatric Vitamin B12 Deficiency: When Autism Isn’t Autism). Not being alone in that claim, someone else said, “A vitamin B12 deficiency can cause symptoms and behaviours that sometimes get wrongly diagnosed as autism” (). That second person’s motivation was to deny the culpability of veganism: “Vegans and vegetarians often struggle to get sufficient levels of B12 in their diets. Therefore the children of pregnant vegans may be more likely to have B12 deficiency.” But also that, “Early research shows that many genuinely autistic people have excessive levels of B12 in their systems. […] Vegans are more likely to take supplements to boost the vitamins they lack in their diet, including B12.” A deficiency in early life and a compensatory excess in later life could both be tied into vegan malnourishment — maybe or maybe not. Apparently, however explained or else rationalized away, just because something looks like a duck, walks like a duck, and quacks like a duck doesn’t necessarily mean it’s actually a duck. But has the autistic label ever been anything other than a constellation of factors, symptoms, behaviors, and traits? It’s like asking whether ‘depression’ variously caused by stress, overwork, sleep deprivation, trauma, nutritional deficiency, toxicity, parasitism, or physical disease is really all the same mental illness. Admittedly, that is a useful line of thinking, from the perspective of functional medicine that looks for underlying causes and not mere diagnoses for the sake of insurance companies, bureaucratic paperwork, and pharmaceutical prescriptions.
Anyway, let’s just drop a load of links for anyone interested in exploring it for themselves:
Health is a longtime interest of mine. My focus has been on the relationship between mental health and physical health. The personal component of this is my depression as it has connected, specifically in the past, to my junk food addiction and lack of exercise at times. When severely depressed, there isn’t motivation to do much about one’s health. But if one doesn’t do anything about one’s health, the symptoms of depression get worse.
It’s for this reason that I’ve sought to understand health. I’ve tried many diets. A big thing for me was restricting refined sugar and simple carbs. It’s become clear to me that sugar, in particular, is one of the most addictive drugs around. It boosts your serotonin, which makes you feel good, but then it drops your serotonin levels lower than before you ate the sugar. This creates an endless craving, once you get into the addictive cycle. On top of that, sugar is extremely harmful to your health in general, not only potentially resulting in diabetes but also suppressing your immune system.
Most addictive behavior, though, isn’t necessarily and primarily physical. The evidence shows that it’s largely based on social conditions. That has been shown with the rat park research, with inequality data, and with Portugal’s model of decriminalization and treatment. Humans, like rats, are social creatures. Those living in optimal social conditions have lower rates of addiction, even when drugs are easily available. I’m sure this same principle applies to food addictions as well. It also relates to other mental illnesses, which show higher rates in Western industrialized countries.
This occurred to me a while back while reading about the Piraha. Daniel Everett noted that they didn’t worry much about food. They ate food when it was there and they would eat it until it was gone, but they were fine when there was no food to eat. They live in an environment of great abundance. They don’t lack anything they need.
Yet it’s common for them to skip eating for a day because they have something better to do with their time, such as relaxing and socializing. Everett had seen Piraha individuals dance for several days straight with only occasional breaks and no food. Hunger didn’t seem to bother them because they knew at any moment they could go a short distance and find food. A few hours of a single person hunting, fishing, or gathering could feed the entire extended family for a day.
The same thing was seen with their sleep patterns. The Piraha rarely slept through the entire night. There were always people awake and talking. They didn’t worry about not getting enough sleep. They slept sporadically through the night and day, whenever they felt like it. According to Everett, the Piraha are a happy and relaxed people. They don’t seem to fear much, not even death, despite living in a dangerous environment. They have a low anxiety existence.
Modern Westerners also live amidst great abundance. But you wouldn’t know it from our behavior. We are constantly eating, as if we aren’t sure where our next meal is coming from. And we obsess over the idea of getting a full night’s rest. Our lives are driven by stress and anxiety. The average Westerner has a mindset of scarcity. We are constantly working, buying, consuming, and hoarding. The only time we typically relax is to escape all the stress and anxiety, by numbing ourselves with our addictions: food, sugar, alcohol, drugs, television, social media, etc.
That has been true of me. I’ve felt that constant background of unease. I’ve felt that addictive urge to escape. It’s not healthy. But it’s also not inevitable. We have chosen to create this kind of society. And we can choose to create a different one. Addiction makes us feel helpless, just as it makes us feel isolated. But we aren’t helpless.
As Thomas Paine wrote at the beginning of this country, “We have it in our power to begin the world over again.” Imagine a society where we could be at peace with ourselves, where we could have a sense of trust that our needs will be taken care of, to know that there is enough abundance to go around. A world where the hungry are fed, the homeless are housed, and the poor lifted up. All of that is within our means. We know how to do it, if only we could imagine it. That would mean creating a new mindset, a new way of being in the world, a new way of relating.
* * *
I was thinking about a particular connection to addiction, mental illness, and other health problems. This is part of the isolation and loneliness of a hyper-individualistic society. But American society adds another dynamic to this in also being highly conformist — for various reasons: the entrenched class hierarchy, the strictly oppressive racial order, the history of religiosity, the propagandistic nature of national media, the harsh Social Darwinism of capitalist realism, etc.
Right before this post, I was writing about authoritarian libertarianism. There is a weird, secret link between the extremes of individualism and the extremes of collectivism. There is a long history of libertarians praising individualism while supporting the collectivism of authoritarians.
Many right-wing libertarians are in love with corporatism which was a foundation of fascism. Corporations are collective entities that are created by the public institution of government through the public system of corporate charters. A corporate charter, by government fiat, doles out special privileges and protections. Business often does well under big government, at least big business does.
This dynamic might seem strange, but it has a certain logic. Carl Jung called it enantiodromia. That is a fancy word for saying that things taken to their extreme tend to become or produce their opposite. The opposite is never eliminated, even if temporarily suppressed into the shadow and projected onto others. It’s a state where balance is lacking and so the imbalance eventually tips the other direction.
That is the nature of the oppositional paradigm of any dualistic ideology. That is seen in the perceived divide of mind (or spirit) and matter, and this leads to Cartesian anxiety. The opposition is false and so psychologically and socially unsustainable. This false ideology strains the psyche in the futile effort to maintain it.
This has everything to do with health, addiction, and all of that. This condition creates a divide within the human psyche, a divide within awareness and thought, perception and behavior. Then this divide plays out in the real world, easily causing dissociation of experience and splintering of the self. Addiction is one of the ways we attempt to deal with this, the repetitive seeking of a reconnection that can’t be satisfied, for addiction can’t replace the human bond. We don’t really want the drug, sugar, or work we are addicted to, even as it feels like the best substitute available to us or at least better than nothing. The addiction eases the discomfort, temporarily fills the emptiness.
It is worth noting that the Piraha have little apparent depression and no known incidents of suicide. I would see this as related to the tight-knit community they live within. The dogmatic dualism of individual vs collective would make no sense to them, as this dualism depends on a rigidly defended sense of identity that they don’t share with modern people. Their psychic boundaries are thinner and more open. Social hierarchy and permanent social positions are foreign to them. There is no government or corporations, not even a revered class of wise elders. Inequality and segregation, and disconnection and division are not part of their world.
You might argue that the Piraha society can’t be translated into lessons applicable to Western countries. I would argue otherwise. They are human like the rest of us. Nothing makes them special. That is probably how most humans once lived. It is in our nature, no matter how hidden it has become. Countries that have avoided or remedied the worst divides such as inequality have found that problems are far fewer and less severe. We may not be able or willing to live like the Piraha, but much of what their lifestyle demonstrates is relevant to our own.
This can be seen in the Western world. Lower inequality states in the US have lower rates of mental illness, obesity, teen pregnancies, homicides, suicide, etc as compared to higher inequality states. Countries with less segregated populations have greater societal trust and political moderation than countries with highly segregated populations. In modern societies, it might be impossible to eliminate inequality and segregation, but we certainly can lessen them far below present conditions. And countries have shown when social conditions are made healthy the people living there are also more healthy.
The world of the Piraha isn’t so distant from our own. We’ve just forgotten our own history. In Dancing in the Streets, Barbara Ehrenreich discusses how depression became an increasing issue in texts over the centuries. If you go far enough back, anything akin to depression is rarely mentioned.
She puts this in the context of the loss of community, of communal lifestyle and experience. During feudal times, people lived cheek to jowl, almost never alone. As family and neighbors, they lived together, ate together, worked together, worshipped together, and like the Piraha they would wake up together in the night. They also celebrated and danced together. Festivals and holy days were a regular occurrence. This is because most of the work they did was seasonal, but even during the main work season they constantly put on communal events.
Like the Piraha, they worked to live, not lived to work. Early feudal villages were more like tribal villages than they were like modern towns. And early feudal lords very much lived among the people, even joining in their celebrations. For example, during a festival, a feudal lord might be seen wrestling a blacksmith or even playing along with role reversal. The feudal identity hadn’t yet solidified into modern individuality with its well partitioned social roles. That is partly just the way small-scale subsistence lifestyles operate, but obviously there is more going on than that. This involved the entire order and impacted every aspect of life.
Let’s consider again Paine’s suggestion that we begin over again. This was stated in the context of revolution, but revolution was understood differently at the time. It implied a return to what came before. He wasn’t only speaking to what might be gained for he had a clear sense of what had been lost. The last remnants of feudalism continued into the post-revolutionary world, even as they were disappearing quickly. Paine hoped to save, re-create, or somehow compensate for what was being lost. A major concern was inequality, as the commons were stolen and the public good was eroded.
Even though it wasn’t how it typically would’ve been framed at the time, the focus in this was public health. Paine on occasion did use the metaphor of health and sickness — such as when he wrote, “That the king is not to be trusted without being looked after, or in other words, that a thirst for absolute power is the natural disease of monarchy.” The monarchy wasn’t just about the ruler but about the whole social order that was ruled over, along with its attendant inequality of wealth and power. The sickness was systemic. As with the human body, the body politic could become sick and so it could also be healed.
It never occurred to the American revolutionaries that the problems they faced should be blamed on isolated individuals. It wasn’t limited to a few malcontents. A growing unease spread across colonial society. Even as we think of our society as having progressed much over the centuries, we can’t shake the mood of anxiety that continues to spread. Surrounded by abundance and with greater healthcare than our ancestors could have dreamed of, we manage to lead immensely unhealthy and unhappy lives. We are never fully content, nor do we feel like we fully belong.
As individuals, we hunger for our next fix. And as a society, we are rapacious and ravenous toward the world, as if our bountiful wealth and resources are never enough. Early colonial trade was strongly motivated by the demand for sugar and now we find present neo-colonial globalization being driven by the demand for oil. Sugar and oil, along with much else, have been the fuel of restless modernity. It’s an addictive social order.
The corrupt old order may have ended. But the disease is still with us and worsening. It’s going to require strong medicine.
I live and work in downtown Iowa City. I regularly walk through and spend time in the downtown area. Having lived here (with a few years spent elsewhere) since the 1980s, I’m always trying to get perspective about this city and where it is heading.
As I was meandering to work today, I went through the pedestrian mall and my mind was naturally drawn to the numerous bars. I’ve had a theory for a while about what drove out so many of the stores I used to like, the stores that the average person would want to shop at and could afford to shop at. There is a general gentrification going on that is being promoted and funded by TIFs (among, I’m sure, other causes), but there is more than just that going on. I’ve considered that maybe the bars have been so profitable that they’ve driven up rental costs downtown, driven them too high for the average small business owner.
This is problematic. Few things can compete with alcohol. All that have been able to compete are mostly high-end restaurants, art galleries, gift shops, jewelry stores, etc.
I was thinking about what this means. Why is it that it is so hard to compete with bars? The first thing that came to mind is that alcohol is an addictive substance. For a large number of people, the more alcohol they drink the more they want to drink. It guarantees repeat customers who are willing to pay high costs for their preferred drug. There is a reason the only mom and pop grocery store left in town is a major retailer of alcohol, and of course it is downtown.
I’m not for prohibition of addictive substances. But we have to get serious about the externalized costs, whether from legal or illegal markets. I’m in favor of making most addictive substances legal, but putting high sin taxes on them and providing the highest quality rehab centers (along with whatever else is beneficial). The sin taxes should go to deal with all the externalized costs, from rehab centers to homeless shelters… also to deal with the problems developing in the downtown and other impacted areas.
There is something telling about how gentrification and the sale of addictive substances act as twin forces in utterly transforming this town. I’m far from convinced that these changes are positive.
* * *
What is the relationship between gentrification, crony capitalism, and bars? Or to put it another way: What is the relationship between wealth, power, and addiction?
I wouldn’t be the first person to associate addiction with the consumerism of a capitalist society. Nor would I be the first to associate addiction with power relationships. I know William S. Burroughs had many interesting thoughts on the matter. Is it simply about social control? If so, to what end? Or is it as Burroughs suggests, just power serving power, like a disease?
I’m specifically thinking of the city I live in, but all of this applies more broadly. Also, the issue of alcohol should be widened to all addictions and everything related to them: drug wars, mass incarceration, etc. Part of my context here is the book “Chasing the Scream” by Johann Hari. That author sees addiction as a social failure, rather than a mere personal issue. It isn’t just the addict who is addicted, but the entire society addicted to the system. The alcoholic is addicted to alcohol, the bar owners are addicted to the profit they can make, and the local government is addicted to the tax money that is brought in.
The difference with alcohol, though, is that it is a socially acceptable addiction. The entire identity of a small college town like Iowa City is tied up with alcoholism. The UI is famous for being a party school. The town was well known as a drinking town going back more than a century. Generations of people have traveled from far away just to get drunk in this town.
What is at the heart of this? What is the driving force behind it all?
* * *
I originally posted these thoughts on Facebook.
It was on my mind for some reason. Several people commented and it led to a detailed discussion, but my mind was no more clear afterwards. I still don’t quite know what to make of this line of thought.
It’s complicated, as I’m always repeating. There is a much larger context involved (German immigration, Prohibition, TIFs, etc). No changes come out of nowhere. There are always underlying causes that go much deeper, often to historical roots.
Here are a few other things I’ve written before about related issues. Also, along with them, I’ll throw in some articles about the local area.