The Crisis of Identity

“Have we lived too fast?”
~Dr. Silas Weir Mitchell, 1871
Wear and Tear, or Hints for the Overworked

I’ve been following Scott Preston over at his blog, Chrysalis. He has been writing on the same set of issues for a long time now, longer than I’ve been reading his blog. He reads widely and so draws on many sources, most of which I’m not familiar with, which is part of the reason I appreciate the work he does to pull together such informed pieces. A recent post, A Brief History of Our Disintegration, would give you a good sense of his intellectual project, although the word ‘intellectual’ sounds rather paltry for what he is describing:

“Around the end of the 19th century (called the fin de siecle period), something uncanny began to emerge in the functioning of the modern mind, also called the “perspectival” or “the mental-rational structure of consciousness” (Jean Gebser). As usual, it first became evident in the arts — a portent of things to come, but most especially as a disintegration of the personality and character structure of Modern Man and mental-rational consciousness.”

That time period has been an interest of mine as well. There are two books that come to mind that I’ve mentioned before: Tom Lutz’s American Nervousness, 1903 and Jackson Lears’ Rebirth of a Nation (for a discussion of the latter, see: Juvenile Delinquents and Emasculated Males). Both talk about that turn-of-the-century crisis, the psychological projections and physical manifestations, the social movements and political actions. A major concern was neurasthenia which, according to the dominant economic paradigm, meant a deficit of ‘nervous energy’ or ‘nerve force’: reserves which, if wasted rather than reinvested wisely, would lead to physical and psychological bankruptcy, leaving one spent (the term ‘neurasthenia’ was first used in 1829 and popularized by George Miller Beard in 1869, the same period when the related medical condition of ‘nostalgia’ began being diagnosed).

This was mixed up with sexuality in what Theodore Dreiser called the ‘spermatic economy’ (by the way, the catalogue for Sears, Roebuck and Company offered an electrical device to replenish nerve force that came with a genital attachment). Obsession with sexuality was used to reinforce gender roles in how neurasthenic patients were treated, following the practice of Dr. Silas Weir Mitchell: men were advised to become more active (the ‘West cure’) and women more passive (the ‘rest cure’), although some women “used neurasthenia to challenge the status quo, rather than enforce it. They argued that traditional gender roles were causing women’s neurasthenia, and that housework was wasting their nervous energy. If they were allowed to do more useful work, they said, they’d be reinvesting and replenishing their energies, much as men were thought to do out in the wilderness” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast). That feminist-style argument, as I recall, came up in advertisements for Bernarr Macfadden’s fitness protocol in the early 1900s, encouraging (presumably middle-class) women to give up housework for exercise and so regain their vitality. Macfadden was also an advocate of living a fully sensuous life, going as far as free love.

Besides the gender wars, there was the ever-present bourgeois bigotry. Neurasthenia was the most civilized of the diseases of civilization since, in its original American conception, it was perceived as afflicting only middle-to-upper class whites, especially WASPs. As Lutz puts it, “if you were lower class, and you weren’t educated and you weren’t Anglo Saxon, you wouldn’t get neurasthenic because you just didn’t have what it took to be damaged by modernity” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast) and so, according to Lutz’s book, people made “claims to sickness as claims to privilege.” It was considered a sign of progress, though over time some came to see it as the greatest threat to civilization; in either case it offered much material for popular fictionalized portrayals. Being sick in this fashion was proof that one was a modern individual, an exemplar of advanced civilization, even if it came at immense cost. Julie Beck explains:

“The nature of this sickness was vague and all-encompassing. In his book Neurasthenic Nation, David Schuster, an associate professor of history at Indiana University-Purdue University Fort Wayne, outlines some of the possible symptoms of neurasthenia: headaches, muscle pain, weight loss, irritability, anxiety, impotence, depression, “a lack of ambition,” and both insomnia and lethargy. It was a bit of a grab bag of a diagnosis, a catch-all for nearly any kind of discomfort or unhappiness.

“This vagueness meant that the diagnosis was likely given to people suffering from a variety of mental and physical illnesses, as well as some people with no clinical conditions by modern standards, who were just dissatisfied or full of ennui. “It was really largely a quality-of-life issue,” Schuster says. “If you were feeling good and healthy, you were not neurasthenic, but if for some reason you were feeling run down, then you were neurasthenic.””

I’d point out that neurasthenia was seen as primarily caused by intellectual activity, as it became a descriptor of a common experience among the burgeoning middle class of often well-educated professionals and office workers. This relates to Weston A. Price’s work in the 1930s, as modern dietary changes first hit this demographic since they had the means to afford a fully industrialized Standard American Diet (SAD), long before others (within decades, though, SAD-caused malnourishment would wreck health at all levels of society). What this meant, in particular, was a diet high in processed carbs and sugar, which coincided with the early-1900s drop in consumption of meat and saturated fats that followed Upton Sinclair’s muckraking of the meat-packing industry in The Jungle (1906). As Price demonstrated, this was a vast change from the traditional diet found all over the world, including in rural Europe (and presumably in rural America, most Americans not being urbanized until the turn of the last century), which always included significant amounts of nutritious animal foods loaded with fat-soluble vitamins, not to mention plenty of healthy fats and cholesterol.

Prior to talk of neurasthenia, the exhaustion model of health, portrayed as waste and depletion, had taken hold in Europe centuries earlier (e.g., anti-masturbation panics) and had its roots in the humoral theory of bodily fluids. It has long been understood that food, specifically the macronutrients (carbohydrate, protein, and fat), affects mood and behavior — see the early literature on melancholy. During feudalism, food laws were used as a means of social control: in one case, meat was prohibited prior to Carnival because its energizing effect, it was thought, could lead to rowdiness or even revolt (Ken Albala & Trudy Eden, Food and Faith in Christian Culture).

There does seem to be a connection between an increase of intellectual activity and an increase of carbohydrates and sugar, a connection first appearing during the early colonial era that set the stage for the Enlightenment. It was the agricultural mind taken to a whole new level. Indeed, a steady flow of glucose is one way to fuel extended periods of brain work, such as reading and writing for hours on end and late into the night — the reason college students to this day will down sugary drinks while studying. Because of trade networks, Enlightenment thinkers were buzzing on the suddenly much more available simple carbs and sugar, with an added boost from caffeine and nicotine. The modern intellectual mind was drugged-up right from the beginning, and over time it took its toll. Such dietary highs inevitably lead to ever greater crashes of mood and health. Interestingly, Dr. Silas Weir Mitchell, who advocated the ‘rest cure’ and ‘West cure’ in treating neurasthenia and other ailments, additionally used a “meat-rich diet” for his patients (Ann Stiles, Go rest, young man). Other doctors of that era were even more direct in using specifically low-carb diets for various health conditions, often for obesity, which was also a focus of Dr. Mitchell’s.

Still, it goes far beyond diet. A diversity of stressors has continued to amass over the centuries of tumultuous change. The exhaustion of modern man (and typically the focus has been on men) had been building up for generations upon generations before it came to feel like a world-shaking crisis in the new industrialized world. The lens of neurasthenia was an attempt to grapple with what had changed, but the focus was too narrow. With the plague of neurasthenia, the atomization of commercialized man and woman couldn’t hold together. And so there was a temptation toward nationalistic projects, including wars, to revitalize the ailing soul and to suture the gash of social division and disarray. But this further wrenched out of alignment the traditional order that had once held society together, and what was lost mostly went without recognition. The individual was brought into the foreground of public thought, a lone protagonist in a social Darwinian world. In this melodramatic narrative of struggle and self-assertion, many individuals didn’t fare so well, and everything else suffered in the wake.

Tom Lutz writes that, “By 1903, neurasthenic language and representations of neurasthenia were everywhere: in magazine articles, fiction, poetry, medical journals and books, in scholarly journals and newspaper articles, in political rhetoric and religious discourse, and in advertisements for spas, cures, nostrums, and myriad other products in newspapers, magazines and mail-order catalogs” (American Nervousness, 1903, p. 2).

There was a sense of moral decline that was hard to grasp, although some people like Weston A. Price tried to dig down into concrete explanations of what had so gone wrong, the social and psychological changes observable during mass urbanization and industrialization. He was far from alone in his inquiries, having built on the prior observations of doctors, anthropologists, and missionaries. Other doctors and scientists were looking into the influences of diet in the mid-1800s and, by the 1880s, scientists were exploring a variety of biological theories. Their inability to pinpoint the cause maybe had more to do with their lack of a needed framework, as they touched upon numerous facets of biological functioning:

“Not surprisingly, laboratory experiments designed to uncover physiological changes in the nerve cell were inconclusive. European research on neurasthenics reported such findings as loss of elasticity of blood vessels, thickening of the cell wall, changes in the shape of nerve cells, or nerve cells that never advanced beyond an embryonic state. Another theory held that an overtaxed organism cannot keep up with metabolic requirements, leading to inadequate cell nutrition and waste excretion. The weakened cells cannot develop properly, while the resulting build-up of waste products effectively poisons the cells (so-called “autointoxication”). This theory was especially attractive because it seemed to explain the extreme diversity of neurasthenic symptoms: weakened or poisoned cells might affect the functioning of any organ in the body. Furthermore, “autointoxicants” could have a stimulatory effect, helping to account for the increased sensitivity and overexcitability characteristic of neurasthenics.” (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia)

This early scientific research could not lessen the mercurial sense of unease, as neurasthenia was from its inception a broad category that captured some greater shift in public mood, even as it so powerfully shaped the individual’s health. For all the effort, there were as many theories about neurasthenia as there were symptoms. Deeper insight was required. “[I]f a human being is a multiformity of mind, body, soul, and spirit,” writes Preston, “you don’t achieve wholeness or fulfillment by amputating or suppressing one or more of these aspects, but only by an effective integration of the four aspects.” But integration is easier said than done.

The modern human hasn’t been suffering from mere psychic wear and tear, for the individual body itself has been showing the signs of sickness, as the diseases of civilization have become harder and harder to ignore. On the societal level of human health, I’ve previously shared passages from Lears (see here) — he discusses the vitalist impulse that arose in response to the turmoil, and vitalism often was explored in terms of physical health as its most apparent manifestation, although social and spiritual health were just as often spoken of in the same breath. The whole person was under assault by an accumulation of stressors, and the increasingly isolated individual didn’t have the resources to fight them off.

By the way, this was far from being limited to America. Europeans picked up the discussion of neurasthenia and took it in other directions, often with less optimism about progress, and with some thinkers emphasizing social interpretations that placed specific blame on hyper-individualism (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia). Thoughts on neurasthenia became mixed up with earlier speculations on nostalgia and romanticized notions of rural life. More important, Russian thinkers in particular understood that the problems of modernity weren’t limited to the upper classes but extended across entire populations, as a result of how societies had been turned on their heads during that fractious century of revolutions.

In looking around, I came across some other interesting stuff. In the 1901 Nervous and Mental Diseases by Archibald Church and Frederick Peterson, the authors, in the chapter on “Mental Disease”, are keen to further the description, categorization, and labeling of ‘insanity’. And I noted their concern with physiological asymmetry, something shared later by Price, among many others going back to the prior century.

Maybe asymmetry was not only indicative of developmental issues but also symbolic of a deeper imbalance. The attempts at phrenological analysis of psychiatric, criminal, and anti-social behavior were off-base; and, despite the bigotry and proto-genetic determinism among racists using these kinds of ideas, there is a simple truth about health in relation to physiological development, most easily observed in bone structure, though it would take many generations to understand the deeper scientific causes: nutrition (e.g., Price’s discovery of vitamin K2, what he called Activator X), along with parasites, toxins, and epigenetics. Church and Peterson did acknowledge that this went beyond mere individual or even familial issues: “It is probable that the intemperate use of alcohol and drugs, the spreading of syphilis, and the overstimulation in many directions of modern civilization have determined an increase difficult to estimate, but nevertheless palpable, of insanity in the present century as compared with past centuries.”

Also, there is the 1902 The Journal of Nervous and Mental Disease: Volume 29, edited by William G. Spiller. There is much discussion in there about how anxiety was observed, diagnosed, and treated at the time. Some of the case studies make for a fascinating read — check out: “Report of a Case of Epilepsy Presenting as Symptoms Night Terrors, Impellant Ideas, Complicated Automatisms, with Subsequent Development of Convulsive Motor Seizures and Psychical Aberration” by W. K. Walker. This reminds me of the case that influenced Sigmund Freud and Carl Jung, Daniel Paul Schreber’s 1903 Memoirs of My Nervous Illness.

Talk about “a disintegration of the personality and character structure of Modern Man and mental-rational consciousness,” as Scott Preston put it. He goes on to say that, “The individual is not a natural thing. There is an incoherency in Margaret Thatcher’s view of things when she infamously declared ‘there is no such thing as society’ — that she saw only individuals and families, that is to say, atoms and molecules.” Her saying that really did capture the mood of the society she denied existing. Even the family was shrunk down to the ‘nuclear’. To state there is no society is to declare that there is also no extended family, no kinship, no community, that there is no larger human reality of any kind. Ironically, in this pseudo-libertarian sentiment, there is nothing holding the family together other than government laws imposing strict control of marriage and parenting, where common finances lock two individuals together under the rule of capitalist realism (the only larger realities involved are inhuman systems) — compared to high-trust societies such as the Nordic countries, where the definition and practice of family life is less legalistic (Nordic Theory of Love and Individualism).

The individual consumer-citizen as a legal member of a family unit has to be created and then controlled, as it is a rather unstable atomized identity. “The idea of the “individual”,” Preston says, “has become an unsustainable metaphor and moral ideal when the actual reality is “21st century schizoid man” — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in “the New Normal” of double-talk, double-think, double-standard, and double-bind.” That is partly the reason for the heavy focus on the body, an attempt to make concrete the individual in order to hold together the splintered self — great analysis of this can be found in Lewis Hyde’s Trickster Makes This World: “an unalterable fact about the body is linked to a place in the social order, and in both cases, to accept the link is to be caught in a kind of trap. Before anyone can be snared in this trap, an equation must be made between the body and the world (my skin color is my place as a Hispanic; menstruation is my place as a woman)” (see one of my posts about it: Lock Without a Key). Along with increasing authoritarianism, there was increasing medicalization and rationalization — to try to make sense of what was senseless.

A specific example of change can be found in Dr. Frederick Hollick (1818-1900), a popular writer and speaker on medicine and health — his “links were to the free-thinking tradition, not to Christianity” (Helen Lefkowitz Horowitz, Rewriting Sex). Influenced by Mesmerism and animal magnetism, he studied and wrote about what was given the more scientific-sounding names of electrotherapeutics, galvanism, and electro-galvanism. Hollick was an English follower of the Welsh-born industrialist and socialist Robert Owen, whom he literally followed to the United States, where Owen had started the utopian community New Harmony, a Southern Indiana village bought from the utopian German Harmonists and then filled with brilliant and innovative minds but lacking in practical know-how about running a self-sustaining community (Abraham Lincoln, who later became a friend to the Owen family, recalled seeing as a boy the boat full of books heading to New Harmony).

“As had Owen before him, Hollick argued for the positive value of sexual feeling. Not only was it neither immoral nor injurious, it was the basis for morality and society. […] In many ways, Hollick was a sexual enthusiast” (Horowitz). These were the social circles of Abraham Lincoln, as he personally knew free-love advocates; that is why early Republicans were often referred to as “Red Republicans”, the ‘Red’ indicating radicalism as it still does to this day. Hollick wasn’t the first to be a sexual advocate nor, of course, would he be the last — preceding him were Sarah Grimke (1837, Equality of the Sexes) and Charles Knowlton (1839, The Private Companion of Young Married People), Hollick having been “a student of Knowlton’s work” (Debran Rowland, The Boundaries of Her Body); and following him were two more well-known figures, the previously mentioned Bernarr Macfadden (1868-1955), the first major health and fitness guru, and Wilhelm Reich (1897–1957), the less respectable member of the trinity formed with Sigmund Freud and Carl Jung. Sexuality became a symbolic issue of politics and health, partly because of increasing scientific knowledge but also because of the increasing marketization of products such as birth control (with public discussion of contraceptives happening in the late 1700s and advances in contraceptive production in the early 1800s), the latter being quite significant as it meant individuals could control pregnancy, which is particularly relevant to women. It should be noted that Hollick promoted the ideal of female sexual autonomy, that sex should be assented to and enjoyed by both partners.

This growing concern with sexuality began with the growing middle class in the decades following the American Revolution. Among much else, it was related to the post-revolutionary focus on parenting and the perceived need for raising republican citizens — this formed an audience far beyond radical libertinism and free love. Expert advice was needed for the new bourgeois family life, as part of the “civilizing process” that increasingly took hold at that time, with not only sexual manuals but also parenting guides, health pamphlets, books of manners, cookbooks, diet books, etc. Cut off from the roots of traditional community and kinship, the modern individual no longer trusted inherited wisdom and so needed to be taught how to live, how to behave and relate. Along with the rise of science, this situation promoted the role of the public intellectual that Hollick effectively took advantage of; after the failure of Owen’s utopian experiment, he went on the lecture circuit, which brought on legal cases in the unsuccessful attempt to silence him, the kind of persecution that Reich also later endured.

To put it in perspective, this Antebellum era of public debate and public education on sexuality coincided with other changes. Following the revolutionary-era feminism (e.g., Mary Wollstonecraft), the “First Wave” of organized feminists emerged generations later with the Seneca Falls convention in 1848 and, in that movement, there was a strong abolitionist impulse. This was part of the rise of ideological -isms in the North that so concerned the Southern aristocrats who wanted to maintain their hierarchical control of the entire country, the control they were quickly losing with the shift of power in the Federal government. A few years before that, in 1844, a more effective condom was developed using vulcanized rubber, although condoms had been on the market since the previous decade; also in the 1840s, the vaginal sponge became available. Interestingly, many feminists were as against contraceptives as they were against abortions. These were far from being mere practical issues, as politics imbued every aspect, and some feminists worried that divorcing sexuality from pregnancy might lessen the role of women and motherhood in society.

This was at a time when the abortion rate was sky-rocketing, indicating that most women held other views. “Yet we also know that thousands of women were attending lectures in these years, lectures dealing, in part, with fertility control. And rates of abortion were escalating rapidly, especially, according to historian James Mohr, the rate for married women. Mohr estimates that in the period 1800-1830, perhaps one out of every twenty-five to thirty pregnancies was aborted. Between 1850 and 1860, he estimates, the ratio may have been one out of every five or six pregnancies. At mid-century, more than two hundred full-time abortionists reportedly worked in New York City” (Rickie Solinger, Pregnancy and Power, p. 61). In the unGodly and unChurched period of early America (“We forgot.”), organized religion was weak and “premarital sex was typical, many marriages following after pregnancy, but some people simply lived in sin. Single parents and ‘bastards’ were common” (A Vast Experiment). Early Americans, by today’s standards, were not good Christians — visiting Europeans often saw them as uncouth heathens and quite dangerous at that, what with the common American practice of toting around guns and knives, ever ready for a fight, whereas carrying weapons had been made illegal in England. In The Churching of America, Roger Finke and Rodney Stark write (pp. 25-26):

“Americans are burdened with more nostalgic illusions about the colonial era than about any other period in their history. Our conceptions of the time are dominated by a few powerful illustrations of Pilgrim scenes that most people over forty stared at year after year on classroom walls: the baptism of Pocahontas, the Pilgrims walking through the woods to church, and the first Thanksgiving. Had these classroom walls also been graced with colonial scenes of drunken revelry and barroom brawling, of women in risque ball-gowns, of gamblers and rakes, a better balance might have been struck. For the fact is that there never were all that many Puritans, even in New England, and non-Puritan behavior abounded. From 1761 through 1800 a third (33.7%) of all first births in New England occurred after less than nine months of marriage (D. S. Smith, 1985), despite harsh laws against fornication. Granted, some of these early births were simply premature and do not necessarily show that premarital intercourse had occurred, but offsetting this is the likelihood that not all women who engaged in premarital intercourse would have become pregnant. In any case, single women in New England during the colonial period were more likely to be sexually active than to belong to a church-in 1776 only about one out of five New Englanders had a religious affiliation. The lack of affiliation does not necessarily mean that most were irreligious (although some clearly were), but it does mean that their faith lacked public expression and organized influence.”

Though marriage remained important as an ideal in American culture, what changed was that procreative control became increasingly available — with fewer accidental pregnancies and more abortions, a powerful motivation for marriage disappeared. Unsurprisingly, at the same time, there were increasing worries about the breakdown of community and family, concerns that would turn into moral panic at various points. Antebellum America was in turmoil. This was concretely exemplified by the dropping birth rate that was already noticeable by mid-century (Timothy Crumrin, “Her Daily Concern:” Women’s Health Issues in Early 19th-Century Indiana) and had nearly halved from 1800 to 1900 (Debran Rowland, The Boundaries of Her Body). “The late 19th century and early 20th saw a huge increase in the country’s population (nearly 200 percent between 1860 and 1910) mostly due to immigration, and that population was becoming ever more urban as people moved to cities to seek their fortunes—including women, more of whom were getting college educations and jobs outside the home” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast). It was a period of crisis, not all that different from our present crisis, including the fear that the low birth rate of native-born white Americans, especially that endangered species the WASPs, would leave them overtaken by the supposed dirty hordes of blacks, ethnics, and immigrants.

The promotion of birth control was considered a genuine threat to American society, maybe to all of Western Civilization. It was most directly a threat to traditional gender roles. Women could better control when they got pregnant, a decisive factor in the phenomenon of larger numbers of women entering college and the workforce. And with an epidemic of neurasthenia, this dilemma was worsened by the crippling effeminacy that neutered masculine potency. Was modern man, specifically the white ruling elite, up for the task of carrying on Western Civilization?

“Indeed, civilization’s demands on men’s nerve force had left their bodies positively effeminate. According to Beard, neurasthenics had the organization of “women more than men.” They possessed “a muscular system comparatively small and feeble.” Their dainty frames and feeble musculature lacked the masculine vigor and nervous reserves of even their most recent forefathers. “It is much less than a century ago, that a man who could not [drink] many bottles of wine was thought of as effeminate—but a fraction of a man.” No more. With their dwindling reserves of nerve force, civilized men were becoming increasingly susceptible to the weakest stimulants until now, “like babes, we find no safe retreat, save in chocolate and milk and water.” Sex was as debilitating as alcohol for neurasthenics. For most men, sex in moderation was a tonic. Yet civilized neurasthenics could become ill if they attempted intercourse even once every three months. As Beard put it, “there is not force enough left in them to reproduce the species or go through the process of reproducing the species.” Lacking even the force “to reproduce the species,” their manhood was clearly in jeopardy.” (Gail Bederman, Manliness and Civilization, pp. 87-88)

This led to a backlash that began before the Civil War with the early obscenity laws and abortion laws, but went into high gear with the 1873 Comstock laws that effectively shut down the free market of both ideas and products related to sexuality, including sex toys. This made it nearly impossible for most women to learn about birth control or obtain contraceptives and abortifacients. There was a felt need to restore order, and that meant the white male order of the WASP middle-to-upper classes, especially with the end of slavery, mass immigration of ethnics, urbanization, and industrialization. The crisis wasn’t only ideological or political. The entire world had been falling apart for centuries with the ending of feudalism and the ancien regime, the last remnants of which in America were maintained through slavery. Motherhood being the backbone of civilization, it was believed that women’s sexuality had to be controlled and, unlike so much else that was out of control, it actually could be controlled through enforcement of laws.

Outlawing abortions is a particularly interesting example of social control. Even with laws in place, abortions remained commonly practiced by local doctors, even in many rural areas (American Christianity: History, Politics, & Social Issues). Corey Robin argues that the strategy hasn’t been to deny women’s agency but to assert their subordination (Denying the Agency of the Subordinate Class). This is why abortion laws were designed to target male doctors rather than their female patients, although such prosecutions were rare. Everything comes down to agency, its lack or loss, but our entire sense of agency is out of accord with our own human nature. We seek to control what is outside of us because our own sense of self is out of control. The legalistic worldview is inherently authoritarian, at the heart of what Julian Jaynes proposes as the post-bicameral project of consciousness, the contained self. But the container is weak and keeps leaking all over the place.

To bring it back to the original inspiration, Scott Preston wrote: “Quite obviously, our picture of the human being as an indivisible unit or monad of existence was quite wrong-headed, and is not adequate for the generation and re-generation of whole human beings. Our self-portrait or self-understanding of “human nature” was deficient and serves now only to produce and reproduce human caricatures. Many of us now understand that the authentic process of individuation hasn’t much in common at all with individualism and the supremacy of the self-interest.” The failure we face is that of identity, of our way of being in the world. As with neurasthenia in the past, we are now in a crisis of anxiety and depression, along with yet another moral panic about the declining white race. So, we get the likes of Steve Bannon, Donald Trump, and Jordan Peterson. We failed to resolve past conflicts and so they keep re-emerging.

“In retrospect, the omens of an impending crisis and disintegration of the individual were rather obvious,” Preston points out. “So, what we face today as “the crisis of identity” and the cognitive dissonance of “the New Normal” is not something really new — it’s an intensification of that disintegrative process that has been underway for over four generations now. It has now become acute. This is the paradox. The idea of the “individual” has become an unsustainable metaphor and moral ideal when the actual reality is “21st century schizoid man” — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in “the New Normal” of double-talk, double-think, double-standard, and double-bind.” We never were individuals. It was just a story we told ourselves, but there are others that could be told. Scott Preston offers an alternative narrative, that of individuation.

* * *

I found some potentially interesting books while skimming material on Google Books, in researching Frederick Hollick and related topics. Among the titles below, I’ll share some text from one of them because it offers a good summary of sexuality at the time, specifically women’s sexuality. Obviously, it went far beyond sexuality itself, and going by my own theorizing I’d say it is yet another example of symbolic conflation, considering its direct relationship to abortion.

The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland
p. 34

WOMEN AND THE WOMB: The Emerging Birth Control Debate

The twentieth century dawned in America on a falling white birth rate. In 1800, an average of seven children were born to each “American-born white wife,” historians report. 29 By 1900, that number had fallen to roughly half. 30 Though there may have been several factors, some historians suggest that this decline—occurring as it did among young white women—may have been due to the use of contraceptives or abstinence, though few talked openly about it. 31

“In spite of all the rhetoric against birth control, the birthrate plummeted in the late nineteenth century in America and Western Europe (as it had in France the century before); family size was halved by the time of World War I,” notes Shari Thurer in The Myth of Motherhood. 32

As issues go, the “plummeting birthrate” among whites was a powder keg, sparking outcry as the “failure” of the privileged class to have children was contrasted with the “failure” of poor immigrants and minorities to control the number of children they were having. Criticism was loud and rampant. “The upper classes started the trend, and by the 1880s the swarms of ragged children produced by the poor were regarded by the bourgeoisie, so Emile Zola’s novels inform us, as evidence of the lower order’s ignorance and brutality,” Thurer notes. 33

But the seeds of this then-still nearly invisible movement had been planted much earlier. In the late 1700s, British political theorists began disseminating information on contraceptives as concerns of overpopulation grew among some classes. 34 Despite the separation of an ocean, by the 1820s, this information was “seeping” into the United States.

“Before the introduction of the Comstock laws, contraceptive devices were openly advertised in newspapers, tabloids, pamphlets, and health magazines,” Yalom notes. “Condoms had become increasingly popular since the 1830s, when vulcanized rubber (the invention of Charles Goodyear) began to replace the earlier sheepskin models.” 35 Vaginal sponges also grew in popularity during the 1840s, as women traded letters and advice on contraceptives. 36 Of course, prosecutions under the Comstock Act went a long way toward chilling public discussion.

Though Margaret Sanger’s is often the first name associated with the dissemination of information on contraceptives in the early United States, in fact, a woman named Sarah Grimke preceded her by several decades. In 1837, Grimke published the Letters on the Equality of the Sexes, a pamphlet containing advice about sex, physiology, and the prevention of pregnancy. 37

Two years later, Charles Knowlton published The Private Companion of Young Married People, becoming the first physician in America to do so. 38 Near this time, Frederick Hollick, a student of Knowlton’s work, “popularized” the rhythm method and douching. And by the 1850s, a variety of material was being published providing men and women with information on the prevention of pregnancy. And the advances weren’t limited to paper.

“In 1846, a diaphragm-like article called The Wife’s Protector was patented in the United States,” according to Marilyn Yalom. 39 “By the 1850s dozens of patents for rubber pessaries ‘inflated to hold them in place’ were listed in the U.S. Patent Office records,” Janet Farrell Brodie reports in Contraception and Abortion in 19th Century America. 40 And, although many of these early devices were often more medical than prophylactic, by 1864 advertisements had begun to appear for “an India-rubber contrivance” similar in function and concept to the diaphragms of today. 41

“[B]y the 1860s and 1870s, a wide assortment of pessaries (vaginal rubber caps) could be purchased at two to six dollars each,” says Yalom. 42 And by 1860, following publication of James Ashton’s Book of Nature, the five most popular ways of avoiding pregnancy—“withdrawal, and the rhythm methods”—had become part of the public discussion. 43 But this early contraceptives movement in America would prove a victim of its own success. The openness and frank talk that characterized it would run afoul of the burgeoning “purity movement.”

“During the second half of the nineteenth century, American and European purity activists, determined to control other people’s sexuality, railed against male vice, prostitution, the spread of venereal disease, and the risks run by a chaste wife in the arms of a dissolute husband,” says Yalom. “They agitated against the availability of contraception under the assumption that such devices, because of their association with prostitution, would sully the home.” 44

Anthony Comstock, a “fanatical figure,” some historians suggest, was a charismatic “purist,” who, along with others in the movement, “acted like medieval Christians engaged in a holy war,” Yalom says. 45 It was a successful crusade. “Comstock’s dogged efforts resulted in the 1873 law passed by Congress that barred use of the postal system for the distribution of any ‘article or thing designed or intended for the prevention of contraception or procuring of abortion’,” Yalom notes.

Comstock’s zeal would also lead to his appointment as a special agent of the United States Post Office with the authority to track and destroy “illegal” mailing, i.e., mail deemed to be “obscene” or in violation of the Comstock Act. Until his death in 1915, Comstock is said to have been energetic in his pursuit of offenders, among them Dr. Edward Bliss Foote, whose articles on contraceptive devices and methods were widely published. 46 Foote was indicted in January of 1876 for dissemination of contraceptive information. He was tried, found guilty, and fined $3,000. Though donations of more than $300 were made to help defray costs, Foote was reportedly more cautious after the trial. 47 That “caution” spread to others, some historians suggest.

Disorderly Conduct: Visions of Gender in Victorian America
By Carroll Smith-Rosenberg

Riotous Flesh: Women, Physiology, and the Solitary Vice in Nineteenth-Century America
by April R. Haynes

The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland

Rereading Sex: Battles Over Sexual Knowledge and Suppression in Nineteenth-century America
by Helen Lefkowitz Horowitz

Rewriting Sex: Sexual Knowledge in Antebellum America, A Brief History with Documents
by Helen Lefkowitz Horowitz

Imperiled Innocents: Anthony Comstock and Family Reproduction in Victorian America
by Nicola Kay Beisel

Against Obscenity: Reform and the Politics of Womanhood in America, 1873–1935
by Leigh Ann Wheeler

Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age
by Paul S. Boyer

American Sexual Histories
edited by Elizabeth Reis

Wash and Be Healed: The Water-Cure Movement and Women’s Health
by Susan Cayleff

From Eve to Evolution: Darwin, Science, and Women’s Rights in Gilded Age America
by Kimberly A. Hamlin

Manliness and Civilization: A Cultural History of Gender and Race in the United States, 1880-1917
by Gail Bederman

One Nation Under Stress: The Trouble with Stress as an Idea
by Dana Becker

Moralizing Gods as Effect, Not Cause

There is a new study on moralizing gods and social complexity, specifically as populations grow large. The authors are critical of the Axial Age theory: “Although our results do not support the view that moralizing gods were necessary for the rise of complex societies, they also do not support a leading alternative hypothesis that moralizing gods only emerged as a byproduct of a sudden increase in affluence during a first millennium ‘Axial Age’. Instead, in three of our regions (Egypt, Mesopotamia and Anatolia), moralizing gods appeared before 1500 BCE.”

I don’t take this criticism as too significant, since it is mostly an issue of dating. Objectively, there are no such things as distinct historical periods. Sure, you’ll find precursors of the Axial Age in the late Bronze Age. Then again, you’ll find precursors of the Renaissance and Protestant Reformation in the Axial Age. And you’ll find the precursors of the Enlightenment in the Renaissance and Protestant Reformation. It turns out all of history is continuous. No big shocker there. Changes build up slowly, until they hit a breaking point. It’s that breaking point, often when it becomes widespread, that gets designated as the new historical period. But the dividing line from one era to the next is always somewhat arbitrary.

This is important to keep in mind. And it does have more than slight relevance. This reframing of what has been called the Axial Age accords perfectly with Julian Jaynes’ theories on the ending of the bicameral mind and the rise of egoic consciousness, along with the rise of the egoic gods with their jealousies, vengeance, and so forth. A half century ago, Jaynes was noting that aspects of moralizing social orders were appearing in the late Bronze Age and he speculated that it had to do with increasing complexity that set those societies up for collapse.

Religion itself, as a formal distinct institution with standardized practices, didn’t exist until well into the Axial Age. Before that, rituals and spiritual/supernatural experience were apparently inseparable from everyday life, as the archaic self was inseparable from the communal sense of the world. Religion as we now know it is what replaced that prior way of being in relationship to ‘gods’, but it wasn’t only a different sense of the divine, for the texts refer to early people hearing the voices of spirits, godmen, dead kings, and ancestors. Religion was only necessary, according to Jaynes, when the voices went silent (i.e., when they were no longer heard externally because a singular voice had become internalized). The pre-religious mentality is what Jaynes called the bicameral mind and it represents the earliest and largest portion of civilization, maybe lasting for millennia upon millennia going back to the first city-states.

The pressures on the bicameral mind began to stress the social order beyond what could be managed. Those late Bronze Age civilizations had barely begun to adapt to that complexity and weren’t successful. Only Egypt was left standing and, in its sudden isolation amidst a world of wreckage and refugees, it too was transformed. We speak of the Axial Age in the context of a later date because it took many centuries for empires to be rebuilt around moralizing religions (and other totalizing systems and often totalitarian institutions; e.g., large centralized governments with rigid hierarchies). The archaic civilizations had to be mostly razed to the ground before something else could more fully take their place.

There is something else to understand. To have moralizing big gods maintain social order, what is required is introspectable subjectivity (i.e., an individual to be controlled by morality). That is to say, you need a narratizing inner space where a conscience can operate in the voicing of morality tales and the imagining of narratized scenarios, such as considering alternate possible future actions, paths, and consequences. This is what Jaynes was arguing, and it wasn’t vague speculation, as he was working with the best evidence he could accrue. Building on Jaynes’ work with language, Brian J. McVeigh has analyzed early texts to determine how often mind-words were found. Going by language use, during the late Bronze Age there was an increased focus on psychological ways of speaking. Prior to that, morality as such wasn’t necessary, no more than were written laws, court systems, police forces, and standing armies — all of which appeared rather late in civilization.

What creates the introspectable subjectivity of the egoic self, i.e., Jaynesian ‘consciousness’? Jaynes suggests that writing was a prerequisite and it needed to be advanced beyond the stage of simple record-keeping. A literary canon likely developed first to prime the mind for a particular form of narratizing. The authors of the paper do note that written language generally came first:

“This megasociety threshold does not seem to correspond to the point at which societies develop writing, which might have suggested that moralizing gods were present earlier but were not preserved archaeologically. Although we cannot rule out this possibility, the fact that written records preceded the development of moralizing gods in 9 out of the 12 regions analysed (by an average period of 400 years; Supplementary Table 2)—combined with the fact that evidence for moralizing gods is lacking in the majority of non-literate societies — suggests that such beliefs were not widespread before the invention of writing. The few small-scale societies that did display precolonial evidence of moralizing gods came from regions that had previously been used to support the claim that moralizing gods contributed to the rise of social complexity (Austronesia and Iceland), which suggests that such regions are the exception rather than the rule.”

As for the exceptions, it’s possible they were influenced by the moralizing religions of societies they came in contact with. Scandinavians, long before they developed complex societies with large concentrated populations, were traveling and trading all over Eurasia, the Levant, and into North Africa. This was happening in the Bronze Age, during the period of rising big gods and moralizing religion: “The analysis showed that the blue beads buried with the [Nordic] women turned out to have originated from the same glass workshop in Amarna that adorned King Tutankhamun at his funeral in 1323 BCE. King Tut’s golden deathmask contains stripes of blue glass in the headdress, as well as in the inlay of his false beard.” (Philippe Bohstrom, Beads Found in 3,400-year-old Nordic Graves Were Made by King Tut’s Glassmaker). It would be best to not fall prey to notions of untouched primitives.

We can’t assume that these exceptions were actually exceptional, in supposedly being isolated examples contrary to the larger pattern. Even hunter-gatherers have been heavily shaped by the millennia of civilizations that surrounded them. Occasionally finding moralizing religions among simpler and smaller societies is no more remarkable than finding metal axes and t-shirts among tribal people today. All societies respond to changing conditions and adapt as necessary to survive. The appearance of moralizing religions and the empires that went with them transformed the world far beyond the borders of any given society, not that borders were all that defined back then anyway. The large-scale consequences spread across the earth these past three millennia, a tidal wave hitting some places sooner than others but in the end none remain untouched. We are all now under the watchful eye of big gods or else their secularized equivalent, big brother of the surveillance state.

* * *

Moralizing gods appear after, not before, the rise of social complexity, new research suggests
by Redazione Redazione

Professor Whitehouse said: ‘The original function of moralizing gods in world history may have been to hold together large but rather fragile, ethnically diverse societies. It raises the question as to how some of those functions could still be performed in today’s increasingly secular societies – and what the costs might be if they can’t. Even if world history cannot tell us how to live our lives, it could provide a more reliable way of estimating the probabilities of different futures.’

When Ancient Societies Hit a Million People, Vengeful Gods Appeared
by Charles Q. Choi

“For we know Him who said, ‘And I will execute great vengeance upon them with furious rebukes; and they shall know that I am the Lord, when I shall lay my vengeance upon them.'” Ezekiel 25:17.

The God depicted in the Old Testament may sometimes seem wrathful. And in that, he’s not alone; supernatural forces that punish evil play a central role in many modern religions.

But which came first: complex societies or the belief in a punishing god? […]

The researchers found that belief in moralizing gods usually followed increases in social complexity, generally appearing after the emergence of civilizations with populations of more than about 1 million people.

“It was particularly striking how consistent it was [that] this phenomenon emerged at the million-person level,” Savage said. “First, you get big societies, and these beliefs then come.”

All in all, “our research suggests that religion is playing a functional role throughout world history, helping stabilize societies and people cooperate overall,” Savage said. “In really small societies, like very small groups of hunter-gatherers, everyone knows everyone else, and everyone’s keeping an eye on everyone else to make sure they’re behaving well. Bigger societies are more anonymous, so you might not know who to trust.”

At those sizes, you see the rise of beliefs in an all-powerful, supernatural person watching and keeping things under control, Savage added.

Complex societies gave birth to big gods, not the other way around: study
from Complexity Science Hub Vienna

“It has been a debate for centuries why humans, unlike other animals, cooperate in large groups of genetically unrelated individuals,” says Seshat director and co-author Peter Turchin from the University of Connecticut and the Complexity Science Hub Vienna. Factors such as agriculture, warfare, or religion have been proposed as main driving forces.

One prominent theory, the big or moralizing gods hypothesis, assumes that religious beliefs were key. According to this theory, people are more likely to cooperate fairly if they believe in gods who will punish them if they don’t. “To our surprise, our data strongly contradict this hypothesis,” says lead author Harvey Whitehouse. “In almost every world region for which we have data, moralizing gods tended to follow, not precede, increases in social complexity.” Even more so, standardized rituals tended on average to appear hundreds of years before gods who cared about human morality.

Such rituals create a collective identity and feelings of belonging that act as social glue, making people behave more cooperatively. “Our results suggest that collective identities are more important to facilitate cooperation in societies than religious beliefs,” says Harvey Whitehouse.

Society Creates God, God Does Not Create Society
by  Razib Khan

What’s striking is how soon moralizing gods show up after the spike in social complexity.

In the ancient world, early Christian writers explicitly asserted that it was not a coincidence that their savior arrived with the rise of the Roman Empire. They contended that a universal religion, Christianity, required a universal empire, Rome. There are two ways you can look at this. First, that the causal arrow is such that social complexity leads to moralizing gods, and that’s that. The former is a necessary condition for the latter. Second, one could suggest that moralizing gods are a cultural adaptation to large complex societies, one of many, that dampen instability and allow for the persistence of those societies. That is, social complexity leads to moralistic gods, who maintain and sustain social complexity. To be frank, I suspect the answer will be closer to the second. But we’ll see.

Another result that was not anticipated I suspect is that ritual religion emerged before moralizing gods. In other words, instead of “Big Gods,” it might be “Big Rules.” With hindsight, I don’t think this is coincidental since cohesive generalizable rules are probably essential for social complexity and winning in inter-group competition. It’s not a surprise that legal codes emerge first in Mesopotamia, where you had the world’s first anonymous urban societies. And rituals lend themselves to mass social movements in public to bind groups. I think it will turn out that moralizing gods were grafted on top of these general rulesets, which allow for coordination, cooperation, and cohesion, so as to increase their import and solidify their necessity due to the connection with supernatural agents, which personalize the sets of rules from on high.

Complex societies precede moralizing gods throughout world history
by Harvey Whitehouse, Pieter François, Patrick E. Savage, Thomas E. Currie, Kevin C. Feeney, Enrico Cioni, Rosalind Purcell, Robert M. Ross, Jennifer Larson, John Baines, Barend ter Haar, Alan Covey, and Peter Turchin

The origins of religion and of complex societies represent evolutionary puzzles1–8. The ‘moralizing gods’ hypothesis offers a solution to both puzzles by proposing that belief in morally concerned supernatural agents culturally evolved to facilitate cooperation among strangers in large-scale societies9–13. Although previous research has suggested an association between the presence of moralizing gods and social complexity3,6,7,9–18, the relationship between the two is disputed9–13,19–24, and attempts to establish causality have been hampered by limitations in the availability of detailed global longitudinal data. To overcome these limitations, here we systematically coded records from 414 societies that span the past 10,000 years from 30 regions around the world, using 51 measures of social complexity and 4 measures of supernatural enforcement of morality. Our analyses not only confirm the association between moralizing gods and social complexity, but also reveal that moralizing gods follow—rather than precede—large increases in social complexity. Contrary to previous predictions9,12,16,18, powerful moralizing ‘big gods’ and prosocial supernatural punishment tend to appear only after the emergence of ‘megasocieties’ with populations of more than around one million people. Moralizing gods are not a prerequisite for the evolution of social complexity, but they may help to sustain and expand complex multi-ethnic empires after they have become established. By contrast, rituals that facilitate the standardization of religious traditions across large populations25,26 generally precede the appearance of moralizing gods. This suggests that ritual practices were more important than the particular content of religious belief to the initial rise of social complexity.

 

 

The World Around Us

What does it mean to be in the world? This world, this society, what kind is it? And how does that affect us? Let me begin with the personal and put it in the context of family. Then I’ll broaden out from there.

I’ve often talked about my own set of related issues. In childhood, I was diagnosed with a learning disability. I’ve also suspected I might be on the autistic spectrum, which could relate to the learning disability, but that kind of thing wasn’t being diagnosed much when I was in school. Another label to throw out is specific language impairment, something I only recently read about — it may fit my way of thinking better than autistic spectrum disorder. After high school, specifically after a suicide attempt, I was diagnosed with depression and thought disorder, although my memory of the latter label is hazy and I’m not sure exactly what the diagnosis was. With all of this in mind, I’ve thought that some of it could have been caused by simple brain damage, since I played soccer from early childhood on. Research has found that children regularly head-butting soccer balls causes repeated micro-concussions and micro-tears, which lead to brain inflammation and permanent brain damage, such as lower IQ (and could be a factor in depression as well). On the other hand, there is a clear possibility of genetic and/or epigenetic factors, or else some other kind of shared environmental conditions. There are simply too many overlapping issues in my family. It’s far from being limited to me.

My mother had difficulty learning when younger. One of her brothers had even more difficulty, probably from a learning disability like mine. My grandfather dropped out of school, not that such an action was too uncommon at the time. My mother’s side of the family has a ton of mood disorders and some alcoholism. In my immediate family, my oldest brother also seems like he could be somewhere on the autistic spectrum and, like our grandfather, has been drawn toward alcoholism. My other brother began stuttering in childhood and was diagnosed with an anxiety disorder; interestingly, I stuttered for a time as well, but in my case it was blamed on my learning disability involving word recall. There is also a lot of depression in the family, both immediate and extended. Much of it has been undiagnosed and untreated, specifically in the older generations. But besides myself, both of my brothers have been on antidepressants, along with my father and an uncle. Now my young niece and nephew are on antidepressants, that same niece has been diagnosed with Asperger’s, the other, even younger niece is probably also autistic and has been diagnosed with obsessive-compulsive disorder, and that is only what I know about.

I bring up these ailments among the generation following my own because they indicate something serious going on in the family or else in society as a whole. I do wonder what gets epigenetically passed on and worsens with each generation; even though my generation was the first to show the strongest symptoms, it may continue to get far worse before it gets better. And it may not have anything specifically to do with my family or our immediate environment, as many of these conditions are increasing among people all across this country and in many other countries as well. The point relevant here is that, whatever else may be going on in society, there definitely were factors specifically impacting my family that seemed to hit my brothers and me around the same time. I can understand my niece and nephew going on antidepressants after their parents divorced, but there was no obvious triggering condition for my brothers and me, well besides moving into a different house in a different community.

Growing up and going into adulthood, my own issues always seemed worse, though, or maybe just more obvious. Everyone who has known me knows that I’ve struggled for decades with depression, and my learning disability adds to this. Neither of my brothers loved school, but neither of them struggled as I did, neither of them had delayed reading or went to a special education teacher. Certainly, neither of them nearly flunked a grade, something that would’ve happened to me in 7th grade if my family hadn’t moved. My brothers’ conditions were less severe, or at least the outward signs were easier to hide — or maybe they are simply more talented at acting normal and conforming to social norms (unlike me, they both finished college, got married, had kids, bought houses, and got respectable professional jobs; basically the American Dream). My brother with the anxiety and stuttering learned how to manage it fairly early on, and it never seemed to have a particularly negative effect on his social life, other than making him slightly less confident and much more conflict-avoidant, sometimes passive-aggressive. I’m the only one in the family who attempted suicide and was put in a psychiatric ward for my effort, the only one to spend years in severe depressive funks of dysfunction.

This caused me to think about my own problems as different, but in recent years I’ve increasingly looked at the commonalities. It occurs to me that there is an extremely odd coincidence that brings together all of these conditions, at least for my immediate family. My father developed depression in combination with anxiety during a stressful period of his life, after we moved because he got a new job. He began having moments of rapid heartbeat and it worried him. My dad isn’t an overly psychologically-oriented person, though not lacking in self-awareness, and so it is unsurprising that it took a physical symptom to get his attention. It was a mid-life crisis. Added to his stress were all the problems developing in his children. It felt like everything was going wrong.

Here is the strange part. Almost all of this started happening specifically when we moved into that new house, my second childhood home. It was a normal house, not that old. The only thing that stood out, as my father told me, was that the electricity usage was much higher than it was at the previous house, and no explanation for this was ever discovered. Both that house and the one we lived in before were in the Lower Midwest and so there were no obvious environmental differences. It only now struck me, in talking to my father again about it, that all of the family’s major neurocognitive and psychological issues began or worsened while living in that house.

As for my oldest brother, he had immense behavioral issues from childhood onward: he refused to do what he was told, wouldn’t complete homework, and became passive-aggressive. He was irritable, angry, and sullen. Also, he was sick all the time, had a constant runny nose, and was tired. It turned out he had allergies that went undiagnosed for a long time, but once they were treated the worst symptoms went away. The thing about allergies is that they are an immune condition, the body’s defenses attacking harmless substances as if they were threats and keeping it in a state of chronic inflammation. During childhood, allergies can have a profound impact on human biology, including neurocognitive and psychological development, often leaving the individual with emotional sensitivity for the rest of their life, as if the body is stuck in permanent defensive mode. This was a traumatic time for my brother and he has never recovered from it — still seething with unresolved anger and still blaming my parents for what happened almost a half century ago.

One of his allergies was determined to be mold, which makes sense considering the house was on a shady lot. This reminds me of how some molds can produce mycotoxins. When mold is growing in a house, it can create a toxic environment, producing in the inhabitants numerous symptoms that can be challenging to understand and connect. Unsurprisingly, research does show that air quality is important for health and cognitive functioning. Doctors aren’t trained in diagnosing environmental risk factors, and that was even more true of doctors decades ago. It’s possible that something about that house was behind all of what was going on in my family. It could have been mold or it could have been some odd electromagnetic issue or else it could have been a combination of factors. This is what is called sick building syndrome.

Beyond buildings themselves, it can also involve something brought into a building. In one fascinating example, a scientific laboratory was known to have a spooky feeling that put people ill at ease. After a fan was turned off, this strange atmosphere went away. It was determined the fan had been vibrating at a level that affected the human nervous system or brain. There has been research into how vibrations and electromagnetic energy can cause stressful and disturbing symptoms (the human body is so sensitive that the brain can detect the weak magnetic field of the earth, something that was earlier thought to be impossible). Wind turbines, for example, can cause the eyeball to resonate in a way that leads people to see glimpses of things that aren’t there (i.e., hallucinations). So, it isn’t always limited to something directly in a building itself but can include what is in the nearby environment. I discuss all of this in an earlier post: Stress Is Real, As Are The Symptoms.

This goes along with the moral panic about violent crime in the early part of my life during the last several decades of the 20th century. It wasn’t an unfounded moral panic, not mere mass hysteria. There really was a major spike in the rate of homicides (not to mention suicides, child abuse, bullying, gang activity, etc). All across society, people were acting more aggressively (heck, aggression became idealized, as symbolized by the ruthless Wall Street broker who wins success through a social Darwinian battle of egoic will and no-holds-barred daring). Many of the perpetrators and victims of violence were in my generation. We were a bad generation, a new Lost Generation. It was the period when the Cold War was winding down and then finally ended. There was a sense of ennui in the air, as our collective purpose in fighting a shared enemy seemed less relevant and eventually disappeared altogether. But that was in the background and largely unacknowledged. Similar to the present mood, there was a vague sense of something being terribly wrong with society. Those caught up in the moral panic blamed it on all kinds of things: video games, mass media, moral decline, societal breakdown, loss of strict parenting, unsupervised latchkey kids, gangs, drugs, and on and on. With so many causes, many solutions were sought, not only in different cities and states across the United States but also around the world: increased incarceration or increased rehabilitation programs, drug wars or drug decriminalization, stop and frisk or gun control, broken windows policies or improved community relations, etc. No matter what was done or not done, violent crime went down over the decades in almost every population around the planet.

It turned out the strongest correlation was also one of the simplest. Lead toxicity drastically went up in the run-up to those violent decades and, depending on how quickly environmental regulations for lead control were implemented, lead toxicity dropped back down again. The decline of violent crime followed with a twenty-year lag in every society (twenty years being roughly the time for a new generation to reach adulthood). Even to this day, in any violent population from poor communities to prisons, you’ll regularly find higher rates of lead toxicity. It was environmental all along, and yet it’s so hard for us to grasp environmental conditions like this because they can’t be directly felt or seen. Most people still don’t know about lead toxicity, despite its being one of the most thoroughly researched areas of public health. So, it isn’t only buildings: entire societies can become sick. When my own family was going bonkers, it was right in the middle of this lead toxicity epidemic and we were living right outside of industrial Chicago and, prior to that, in a factory town. I have wondered about lead exposure, since my generation saw the highest lead exposure rate in the 20th century and probably one of the highest since the Roman Empire started using lead water pipes, which some consider to have been a cause of its decline and fall.
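To make the shape of that claim concrete, here is a minimal sketch of the kind of lagged-correlation check the lead-crime research describes: slide one series against the other and see at which offset the correlation peaks. The series below are made-up numbers for illustration only, not data from any of the studies.

```python
# A toy lagged-correlation check: does a crime series track a lead-exposure
# series shifted forward by roughly a generation? All values are invented
# placeholders; real analyses use actual exposure and crime data and
# control for many confounding factors.
import numpy as np

years = np.arange(1940, 2001)  # 61 hypothetical years
lead_exposure = np.concatenate([
    np.linspace(2, 10, 31),    # hypothetical rise, peaking around 1970
    np.linspace(10, 1, 30),    # hypothetical fall after regulation
])
violent_crime = np.concatenate([
    np.linspace(3, 4, 30),     # roughly flat before the lagged rise
    np.linspace(4, 12, 21),    # rise, peaking about twenty years after lead
    np.linspace(11, 6, 10),    # start of the decline
])

def lagged_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# If the hypothesis holds, the correlation should peak near a 20-year lag.
for lag in (0, 10, 20, 30):
    print(f"lag {lag:2d}: r = {lagged_correlation(lead_exposure, violent_crime, lag):.2f}")
```

With these invented series the peak falls at the twenty-year offset by construction; the point is only to show what “a twenty-year lag in every society” means operationally.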

There are other examples of this environmental impact. Parasite load in a population is correlated with a culture of distrust and violence (the parasite-stress theory of values, culture, and sociality, involving the behavioral immune system), among other problems. Parasite load is connected to diverse things, both individually and collectively: low extraversion, higher conscientiousness, authoritarianism (conformity, obedience), in-group loyalty (in situations of lower life expectancy and among populations with faster life histories), collectivism, income inequality, female oppression, conservatism, low openness to experience, support for barriers between social groups, adherence to local norms, traditionalism, religiosity, strength of family ties, in-group assortative sociality, perceived ‘ugliness’ of bodily abnormality, homicide, child abuse, etc. Specific parasites like Toxoplasma gondii have been shown to alter mood, personality, and behavior — this can be measured across entire populations, maybe altering the culture itself of entire regions where infection is common.

Or consider high inequality, which can cause widespread bizarre and aggressive behavior, as it mimics the fear and anxiety of poverty even among those who aren’t poor. Other social conditions have various kinds of effects, in some cases with repercussions that last for centuries. But in any of these examples, the actual cause is rarely understood by many people. The corporate media and politicians are generally uninterested in reporting on what scientists have discovered, assuming scientists can get the funding to do the needed research. Large problems requiring probing thought and careful analysis don’t sell advertising, nor do they sell political campaigns, and the corporations behind both would rather distract the public from public problems that would require public solutions, such as corporate regulations and higher taxation.

In our society, almost everything gets reduced to the individual. And so it is the individual who is blamed or treated or isolated, which is highly effective for social control. Put them in prison, give them a drug, scapegoat them in the media, or whatever. Anything so long as we don’t have to think about the larger conditions that shape individuals. The reality is that psychological conditions are never merely psychological. In fact, there is no psychology separate and distinct from all else. The same is true for many physical diseases as well, such as autoimmune disorders. Most mental and physical health concerns are simply sets of loosely associated symptoms with thousands of possible causal and contributing factors. Our categorizing diseases by which drugs treat them is simply a convenience for the drug companies. But if you look deeply enough, you’ll typically find basic things that are implicated: gut dysbiosis, mitochondrial dysfunction, and the like. Inflammation, for example, is found in numerous conditions, from depression and Alzheimer’s to heart disease and arthritis, the kinds of conditions that have been rapidly spreading over the past century (also, look at psychosis). Much of it is diet-related, since in this society we are all part of the same food system and so we are all hit by the same nutrient-deficient foods, the same macronutrient ratios, the same harmful hydrogenated and partially-hydrogenated vegetable oils/margarine, the same food additives, the same farm chemicals, the same plastic-originated hormone mimics, the same environmental toxins, etc. I’ve noticed significant changes in my own mood, energy, and focus since turning to a low-carb, high-fat diet based mostly on whole foods and traditional foods that are pasture-fed, organic, non-GMO, local, and in season, lessening the physiological stress load. It is yet another factor that I see as related to my childhood difficulties, as diverse research has shown how powerful diet is in every aspect of health, especially neurocognitive health.

This makes it difficult for individuals in a hyper-individualistic society. We each feel isolated in trying to solve our supposedly separate problems, an impossible task; one might call it Sisyphean. And we rarely appreciate how much childhood development shapes us for the rest of our lives and how much environmental factors continue to influence us. We inherit so much from the world around us and the larger society we are thrown into, from our parents and the many generations before them. A society is built up slowly, with the relationship between causes and consequences often not easily seen and, even when noticed, rarely appreciated. We are born and we grow up in conditions that we simply take for granted as our reality. But those conditions don’t have to be accepted fatalistically, for if we seek to understand them and embrace that understanding, we can change the very conditions that change us. This will require us first to get past our culture of blame and shame.

We shouldn’t personally identify with our health problems and struggles. We are neither alone nor isolated. The world is continuously affecting us, as we affect others. The world is built on relationships, not just between humans and other species but involving everything around us — what some describe as embodied, embedded, enacted, and extended (we are hypersubjects among hyperobjects). The world that we inhabit, that world inhabits us, our bodies and minds. There is no world “out there,” for there is no possible way for us to be outside the world. Everything going on around us shapes who we are, how we think and feel, and what we do — most importantly, shapes us as members of a society and as parts of a living biosphere, a system of systems all the way down. The personal is always the public, the individual always the collective, the human always the more than human.

* * *

When writing pieces like this, I should try to be more balanced. I focused solely on the harm that is caused by external factors. That is a rather lopsided assessment. But there is the other side of the equation implied in everything I wrote.

As higher inequality causes massive dysfunction and misery, greater equality brings immense benefit to society as a whole and to each member within it. All you have to do to understand this is look at cultures of trust such as the well-functioning social democracies, with the Nordic countries being the most famous examples (The Nordic Theory of Everything by Anu Partanen). Or consider how, no matter your intelligence, you are better off living in a society with a high average IQ than being the smartest person in a society with a low average IQ. Other people’s intelligence has a greater impact on your well-being and socioeconomic situation than does your own intelligence (see Hive Mind by Garett Jones).

This other side was partly pointed to in what I already wrote in the first section, even if not emphasized. For example, I pointed out how something so simple as regulating lead pollution could cause violent crime rates around the world to drop like a rock. And that was only looking at a small part of the picture. Besides impulsive behavior and aggression that can lead to violent crime, lead by itself is known to cause a wide array of problems: lowered IQ, ADHD, dyslexia, schizophrenia, Alzheimer’s, etc; and also general health issues, from asthma to cardiovascular disease. Lead is only one among many such serious toxins, with others including cadmium and mercury. The latter is strange. Mercury can actually increase IQ, even as it causes severe dysfunction in other ways. Toxoplasmosis also can do the same for the IQ of women, even as the opposite pattern is seen in men.

The point is that solving or even lessening major public health concerns can potentially benefit the entire society, maybe even transform society. We act fatalistically about these collective conditions, as if there is nothing to be done about inequality, whether the inequality of wealth, resources, and opportunities or the inequality of healthy food, clean water, and clean air. We created these problems and we can reverse them. It often doesn’t require much effort, and the costs of taking action are far less than the costs of allowing these societal wounds to fester. It’s not as if Americans lack the ability to tackle difficult challenges. Our history is filled with examples of public projects and programs that made vast improvements. Consider the sewer socialists who were the first to offer clean water to all citizens in their cities, something that, once demonstrated to be successful, was adopted by every other city in the United States (more or less adopted, if we ignore the continuing lead toxicity crisis).

There is no reason to give up in hopelessness, not quite yet. Let’s try some basic improvements first and see what happens. We can wait for environmental collapse, if and when it comes, before we resign ourselves to fatalism. It’s not a matter of whether we can absolutely save all of civilization from all suffering. Even if all we could accomplish is reducing some of the worst harm (e.g., aiming for less than half of the world’s population falling victim to environmental sickness and mortality), I’d call it a wild success. Those whose lives were made better would consider it worthwhile. And who knows, maybe you or your children and grandchildren will be among those who benefit.

Stress and Shittiness

What causes heart disease – Part 63
by Malcolm Kendrick

To keep this simple, and stripping terminology down to basics, the concept I am trying to capture, and the word that I am going to use here to describe the factor that can affect entire populations, is ‘psychosocial stress’. By which I mean an environment where there is breakdown of community and support structures, often poverty, with physical threats and suchlike. A place where you would not really want to walk down the road unaccompanied.

This can be a zip code in the US, known as a postcode in the UK. It can be a bigger physical area than that, such as a county, a town, or a whole community – which could be split across different parts of a country. Such as Native Americans living in areas that are called reservations.

On the largest scale it is fully possible for many countries to suffer from major psychosocial stress at the same time. […] Wherever you look, you can see that populations that have been exposed to significant social dislocation, and major psychosocial stressors, have extremely high rates of coronary heart disease/cardiovascular disease.

The bad news is we’re dying early in Britain – and it’s all down to ‘shit-life syndrome’
by Will Hutton

Britain and America are in the midst of a barely reported public health crisis. They are experiencing not merely a slowdown in life expectancy, which in many other rich countries is continuing to lengthen, but the start of an alarming increase in death rates across all our populations, men and women alike. We are needlessly allowing our people to die early.

In Britain, life expectancy, which increased steadily for a century, slowed dramatically between 2010 and 2016. The rate of increase dropped by 90% for women and 76% for men, to 82.8 years and 79.1 years respectively. Now, death rates among older people have so much increased over the last two years – with expectations that this will continue – that two major insurance companies, Aviva and Legal and General, are releasing hundreds of millions of pounds they had been holding as reserves to pay annuities to pay to shareholders instead. Society, once again, affecting the citadels of high finance.

Trends in the US are more serious and foretell what is likely to happen in Britain without an urgent change in course. Death rates of people in midlife (between 25 and 64) are increasing across the racial and ethnic divide. It has long been known that the mortality rates of midlife American black and Hispanic people have been worse than the non-Hispanic white population, but last week the British Medical Journal published an important study re-examining the trends for all racial groups between 1999 and 2016.

The malaises that have plagued the black population are extending to the non-Hispanic, midlife white population. As the report states: “All cause mortality increased… among non-Hispanic whites.” Why? “Drug overdoses were the leading cause of increased mortality in midlife, but mortality also increased for alcohol-related conditions, suicides and organ diseases involving multiple body systems” (notably liver, heart diseases and cancers).

US doctors coined a phrase for this condition: “shit-life syndrome”. Poor working-age Americans of all races are locked in a cycle of poverty and neglect, amid wider affluence. They are ill educated and ill trained. The jobs available are drudge work paying the minimum wage, with minimal or no job security. They are trapped in poor neighbourhoods where the prospect of owning a home is a distant dream. There is little social housing, scant income support and contingent access to healthcare. Finding meaning in life is close to impossible; the struggle to survive commands all intellectual and emotional resources. Yet turn on the TV or visit a middle-class shopping mall and a very different and unattainable world presents itself. Knowing that you are valueless, you resort to drugs, antidepressants and booze. You eat junk food and watch your ill-treated body balloon. It is not just poverty, but growing relative poverty in an era of rising inequality, with all its psychological side-effects, that is the killer.

The UK is not just suffering shit-life syndrome. We’re also suffering shit-politician syndrome.
by Richard Murphy

Will Hutton has an article in the Guardian in which he argues that the recent decline in the growth of life expectancy in the UK (and its decline in some parts) is down to what he describes as ‘shit-life syndrome’. This is the state where life is reduced to an exercise in mere survival as a result of the economic and social oppression lined up against those suffering the condition. And, as he points out, those suffering are not just those on the economic and social margins of society. In the UK, as in the US, the syndrome is spreading.

The reasons for this can be debated. I engaged in such argument in my book The Courageous State. In that book I argued that we live in a world where those with power do now, when they identify a problem, run as far as they might from it and say the market will find a solution. The market won’t do that. It is designed not to do so. Those suffering shit-life syndrome have, by default, little impact on the market. That’s one of the reasons why they are suffering the syndrome in the first place. That is why so much of current politics has turned a blind eye to this issue.

And they get away with it. That’s because the world of make-believe advertising which drives the myths that underpin the media, and in turn our politics, simply pretends such a syndrome does not exist whilst at the same time perpetually reinforcing the sense of dissatisfaction that is at its core.

With Brexit, It’s the Geography, Stupid
by Dawn Foster

One of the major irritations of public discourse after the United Kingdom’s Brexit vote has been the complete poverty of analysis on the reasons behind different demographics’ voting preferences. Endless time, energy, and media attention has been afforded to squabbling over the spending of each campaign for and against continued European Union membership — and now more on the role social media played in influencing the vote — mirroring the arguments in the United States that those who voted to Leave were, like Trump voters, unduly influenced by shady political actors, with little transparency behind political ads and social media tactics.

It’s a handy distraction from the root causes in the UK: widening inequality, but also an increasingly entrenched economic system that is geographically specific, meaning your place of birth and rearing has far more influence over how limited your life is than anything within your control: work, education and life choices.

Across Britain, territorial injustice is growing: for decades, London has boomed in comparison to the rest of the country, with more and more wealth being sucked towards the southeast and other regions being starved of resources, jobs and infrastructure as a result. A lack of secure and well-remunerated work doesn’t just determine whether you can get by each month without relying on social security to make ends meet, but also all aspects of your health, and the health of your children. A recent report by researchers at Cambridge University examined the disproportionate effect of central government cuts on local authorities and services: inner city areas with high rates of poverty, and former industrial areas were hardest hit. Mia Gray, one of the authors of the Cambridge report said: “Ever since vast sums of public money were used to bail out the banks a decade ago, the British people have been told that there is no other choice but austerity imposed at a fierce and relentless rate. We are now seeing austerity policies turn into a downward spiral of disinvestment in certain people and places. This could affect the life chances of entire generations born in the wrong part of the country.”

Life expectancy is perhaps the starkest example. In many other rich countries, life expectancy continues to grow. In the United Kingdom it is not only stalling, but in certain regions falling. The gap between the north and south of England reveals the starkest gap in deaths among young people: in 2015, 29.3 percent more 25-34-year-olds died in the north of England than the south. For those aged 35-44, the number of deaths in the north was 50 percent higher than the south.

In areas left behind economically, such as the ex-mining towns in the Welsh valleys, the post-industrial north of England, and former seaside holiday destinations that have been abandoned as people plump for cheap European breaks, doctors informally describe the myriad tangle of health, social and economic problems besieging people as “Shit Life Syndrome”. The term, brought to public attention by the Financial Times, sounds flippant, but it attempts to tease out the cumulative impact of strict and diminished life chances, poor health worsened by economic circumstances, and the effects of low paid work and unemployment on mental health, and lifestyle issues such as smoking, heavy drinking, and lack of exercise, factors worsened by a lack of agency in the lives of people in the most deprived areas. Similar to “deaths of despair” in the United States, Shit Life Syndrome leads to stark upticks in avoidable deaths due to suicide, accidents, and overdoses: several former classmates who remained in the depressed Welsh city I grew up in have taken their own lives, overdosed, or died as a result of accidents caused by alcohol or drugs. Their lives prior to death were predictably unhappy, but the opportunity to turn things around simply didn’t exist. To move away, you need money and therefore a job. The only vacancies that appear pay minimum wage, and usually you’re turned away without interview.

Simply put, it’s a waste of lives on an industrial scale, but few people notice or care. One of the effects of austerity is the death of public spaces where people can gather without being forced to spend money. Youth clubs no longer exist, and public health officials blame their demise on the rise in teenagers becoming involved in gangs and drug dealing in inner cities. Libraries are closing at a rate of knots, despite the government requiring all benefits claims to be submitted via computers. More and more public spaces and playgrounds are being sold off to land-hungry developers, forcing more and more people to shoulder their misery alone, depriving them of spaces and opportunities to meet people and socialise. Shame is key in perpetuating the sense that poverty is deserved, but isolation and loneliness help exacerbate the self-hatred that stops you fighting back against your circumstances.

“Shit-Life Syndrome” (Oxycontin Blues)
by Curtis Price

In narrowing drug use to a legal or public health problem, as many genuinely concerned about the legal and social consequences of addiction will argue, I believe a larger politics and political critique gets lost (This myopia is not confined to drug issues. From what I’ve seen, much of the “social justice” perspective in the professional care industry is deeply conservative; what gets argued for amounts to little more than increased funding for their own services and endless expansion of non-profits). Drug use, broadly speaking, doesn’t take place in a vacuum. It is a thermometer for social misery and the more social misery, the greater the use. In other words, it’s not just a matter of the properties of the drug or the psychological states of the individual user, but also of the social context in which such actions play out.

If we accept this as a yardstick, then it’s no accident that the loss of the 1984-1985 U.K. Miners’ Strike, with the follow-on closure of the pits and destruction of pit communities’ tight-knit ways of life, triggered widespread heroin use (2). What followed the defeat of the Miners’ Strike only telescoped into a few years the same social processes that in much of the U.S. were drawn out, more prolonged, insidious, and harder to detect. Until, that is, the mortality rates – that canary in the epidemiological coalmine – sharply rose to everyone’s shock.

US doctors have coined a phrase for the underlying condition of which drug use and alcoholism is just part: “shit-life syndrome.” As Will Hutton in the Guardian describes it,

“Poor working-age Americans of all races are locked in a cycle of poverty and neglect, amid wider affluence. They are ill educated and ill trained. The jobs available are drudge work paying the minimum wage, with minimal or no job security. They are trapped in poor neighborhoods where the prospect of owning a home is a distant dream. There is little social housing, scant income support and contingent access to healthcare. Finding meaning in life is close to impossible; the struggle to survive commands all intellectual and emotional resources. Yet turn on the TV or visit a middle-class shopping mall and a very different and unattainable world presents itself. Knowing that you are valueless, you resort to drugs, antidepressants and booze. You eat junk food and watch your ill-treated body balloon. It is not just poverty, but growing relative poverty in an era of rising inequality, with all its psychological side-effects, that is the killer”(3).

This accurately sums up “shit-life syndrome.” So, by all means, end locking up non-violent drug offenders and increase drug treatment options. But as worthwhile as these steps may be, they will do nothing to alter “shit-life syndrome.” “Shit-life syndrome” is just one more expression of the never-ending cruelty of capitalism, an underlying cruelty inherent in the way the system operates, that can’t be reformed out, and won’t disappear until new ways of living and social organization come into place.

The Human Kind, A Doctor’s Stories From The Heart Of Medicine
Peter Dorward
p. 155-157

It’s not like this for all kinds of illness, of course. Illness, by and large, is as solid and real as the chair I’m sitting on: and nothing I say or believe about it will change its nature. That’s what people mean when they describe an illness as ‘real’. You can see it and touch it, and if you can’t do that, then at least you can measure it. You can weigh a tumour; you can see on the screen the ragged outline of the plaque of atheroma in your coronary artery which is occluded and crushing the life out of you, and you would be mad to question the legitimacy of this condition that prompts the wiry cardiologist to feed the catheter down the long forks and bends of your clogged arterial tree in order to feed an expanding metal stent into the blocked artery and save you.

No one questions the reality and medical legitimacy of those things in the world that can be seen, felt, weighed, touched. That creates a deep bias in the patient; it creates a profound preference among us, the healers.

But a person is interactive. Minds can’t exist independently of other minds: that’s the nature of our kind. The names we have for things in the world and the way that we choose to talk about them affect how we experience them. Our minds are made of language, and grammar, intentions, emotions, perceptions and memory. We can only experience the world through the agency of our minds, and how our minds interact with others. Science is a great tool for talking about the external world: the world that is indifferent to what we think. Science doesn’t begin to touch the other, inner, social stuff. And that’s a challenge in medicine. You need other tools for that.

‘Shit-life syndrome,’ offers Becky, whose skin is so pale it looks translucent, who wears white blouses with little ruffs buttoned to the top and her blonde hair in plaits, whose voice is vicarage English and in whose mouth shit life sounds anomalous. Medicine can have this coarsening effect. ‘Shit-life syndrome provides the raw material. We doctors do all the rest.’

‘Go on…’

‘That’s all I ever seem to see in GP. People whose lives are non-specifically crap. Women single parenting too many children, doing three jobs which they hate, with kids on Ritalin, heads wrecked by smartphone and tablet parenting. Women who hate their bodies and have a new diagnosis of diabetes because they’re too fat. No wonder they want a better diagnosis! What am I meant to do?’

I like to keep this tutorial upbeat. I don’t like it to become a moan-fest, which is pointless and damaging. Yet, I don’t want to censor.

‘… Sometimes I feel like a big stone, dropped into a river of pain. I create a few eddies around me, the odd wave or ripple, but the torrent just goes on…’

‘… I see it different. It’s worse! I think half the time we actually cause the problems. Or at least we create our own little side channels in the torrent. Build dams. Deep pools of misery of our own creation!’

That’s Nadja. She’s my trainee. And I recognise something familiar in what she is saying – the echo of something that I have said to her. It’s flattering, and depressing.

‘For example, take the issuing of sick notes. They’re the worst. We have all of these people who say they’re depressed, or addicted, or stressed, who stay awake all night because they can’t sleep for worry, and sleep all day so they can’t work, and they say they’re depressed or anxious, or have backache or work-related stress, and we drug them up and sign them off, but what they’re really suffering from are the symptoms of chronic unemployment and the misery of poverty, which are the worst illnesses that there are! And every time I sign one of these sick notes, I feel another little flake chipped off my integrity. You’re asking about vectors for social illness? Sick notes! It’s like we’re … shitting in the river, and worrying about the cholera!’

Strong words. I need to speak to Nadja about her intemperate opinions…

‘At least, that’s what he keeps saying,’ says Nadja, nodding at me.

Nadja’s father was a Croatian doctor, who fled the war there. Brought up as she was, at her father’s knee, on his stories of war and torture, of driving his motorbike between Kiseljac and Sarajevo and all the villages in between with his medical bag perched on the back to do his house calls, she can never quite believe the sorts of things that pass for ‘suffering’ here. It doesn’t make Nadja a more compassionate doctor. She sips her coffee, with a smile.

Aly, the one training to be an anaesthetist-traumatologist, says, ‘We shouldn’t do it. Simple as that. It’s just not medicine. We should confine ourselves to the physical, and send the rest to a social worker, or a counsellor or a priest. No more sick notes, no more doing the dirty work of governments. If society has a problem with unemployment, that’s society’s problem, not mine. No more convincing people that they’re sick. No more prescriptions for crap drugs that don’t work. If you can’t see it or measure it, it isn’t real. We’re encouraging all this pseudo-illness with our sick notes and our crap drugs. What’s our first duty? Do no harm! End of.’

She’ll be a great trauma doctor, no doubt about it.

* * *

From Bad to Worse: Trends Across Generations
Rate And Duration of Despair
Trauma, Embodied and Extended
Facing Shared Trauma and Seeking Hope
Society: Precarious or Persistent?
Union Membership, Free Labor, and the Legacy of Slavery
The Desperate Acting Desperately
Social Disorder, Mental Disorder
Social Conditions of an Individual’s Condition
Society and Dysfunction
It’s All Your Fault, You Fat Loser!
To Grow Up Fast
Individualism and Isolation
To Put the Rat Back in the Rat Park
Rationalizing the Rat Race, Imagining the Rat Park
The Unimagined: Capitalism and Crappiness
Stress Is Real, As Are The Symptoms
On Conflict and Stupidity
Connecting the Dots of Violence
Inequality in the Anthropocene
Morality-Punishment Link

One Story or Another

In every period of history, there have been those who were nostalgic about a lost Golden Age, who believed we had reached a pinnacle and were now on the decline, who complained this was the worst generation ever and the problems we face are worse than anything that came before, who declared there were no new major discoveries or inventions left to be made, who concluded that it was the end of history or maybe even the End Times itself.

On the other side, there are those who see all of history as endless progress and the future bright and shiny with possibilities and utopian visions, who spin the present as the best time to be alive or at least not so bad if you keep a positive attitude, who state with conviction that we make our own reality.

But the fact of the matter is simply that the world continues on, no matter what we think or believe, hope or dread. Sure, the world can be shitty but it has its upsides as was also true in the past, just in different ways. And the future flickers with as many dark shadows to obscure our vision as bright flames to light the way.

We humans have always been in a permanent mode of survival and innovation, with brief periods of seeming stability and security, until the norm of drastic change returns to shake things up again. From one crisis to another, ever pushing humanity into new territory of the unknown, clever monkeys reacting to the next threat or opportunity. We never fully grasp either where we’ve come from or where we’re going. We aren’t captains of this ship.

We are but one species among many in a complex world beyond our ken, in a universe that stretches into infinity. We don’t understand a fraction of it and yet the world goes on just fine in our ignorance. Heck, we are barely conscious of our own actions, living mostly in a state of mindless momentum of habit. Entire civilizations rise and fall, again and again and again, with every generation feeling unique and special. Nonetheless, someday our species will go extinct, and no one will miss us nor will there be an empty space where we once existed, all traces disappearing with the incoming tide.

That is neither good nor bad. It just is. Not that this simple truth will stop us from getting excited about the next thing that comes along, whether real or imagined. If nothing else, we humans are great storytellers and there is no more attentive listener than the very person spinning their preferred tale of wonder or woe. So we will go on speaking to fill the silence, for as long as there is breath left in us. More than anything else, we fear the end of our own chatter, in love as we are with our own voices.

It’s the act of storytelling that matters. Not the specific story. For essentially it is the same story being told, with humanity at the center. The storytelling is our humanity. There is nothing else to us. At least, we are good at what we do. No other species, being, or object in the universe tells a story like us.

Is Adaptation to Collapse the Best Case Scenario?

A little over a decade ago, a report by David Pimentel from Cornell University came out about the health costs of pollution. “About 40 percent of deaths worldwide,” wrote Susan S. Lang, “are caused by water, air and soil pollution, concludes a Cornell researcher. Such environmental degradation, coupled with the growth in world population, are major causes behind the rapid increase in human diseases, which the World Health Organization has recently reported. Both factors contribute to the malnourishment and disease susceptibility of 3.7 billion people, he says” (Water, air and soil pollution causes 40 percent of deaths worldwide, Cornell research survey finds).

That is damning! It is powerful in showing the impact of our actions and the complicity of our indifference. It’s even worse than that. The harm touches upon every area of health. “Of the world population of about 6.5 billion, 57 percent is malnourished, compared with 20 percent of a world population of 2.5 billion in 1950, said Pimentel. Malnutrition is not only the direct cause of 6 million children’s deaths each year but also makes millions of people much more susceptible to such killers as acute respiratory infections, malaria and a host of other life-threatening diseases, according to the research.” This is billions of people who lack the basic resources of clean water and air along with nutritious food, something that was a human birthright for most of human existence.

It’s worse still. This data, as bad as it is, may have been an underestimation. Another report just came out, Cardiovascular disease burden from ambient air pollution in Europe reassessed using novel hazard ratio functions by Jos Lelieveld et al. As summarized by Hurn Publications: “The number of early deaths caused by air pollution is double previous estimates, according to research, meaning toxic air is killing more people than tobacco smoking. The scientists used new data to estimate that nearly 800,000 people die prematurely each year in Europe because of dirty air, and that each life is cut short by an average of more than two years” (Air pollution deaths are double previous estimates, finds research). This isn’t limited to poor, dark-skinned people in faraway countries, for it also affects the Western world: “The health damage caused by air pollution in Europe is higher than the global average.” And that doesn’t even include the “effects of air pollution on infant deaths”.

Think about that. It was a decade ago that around 40% of deaths could be linked to pollution and environmental problems. Since then, these problems have only grown worse, as the world’s population continues to grow and industrialization along with it. Now it is determined that air pollution is at least twice as fatal as previously calculated. The same is probably true more generally for other forms of pollution along with environmental degradation. Our data was incomplete in the past and, even if improved, it remains incomplete. Also, keep in mind that this isn’t only about deaths. Increasing numbers of sick days, healthcare costs, and disabilities add up to an incalculable toll. Our entire global economy is being dragged down at the very moment we need all our resources to deal with these problems, not merely to pay for the outcomes but to begin reversing course if we hope to avoid the worst.

This barely touches upon the larger health problems. As I’ve written about before, we are beginning to realize how much diet impacts health, not only in terms of malnourishment but also all the problems related to a diet of processed foods with lots of toxins such as farm chemicals, hormone mimics, food additives, starchy carbs, added sugars, artificial sweeteners, and hydrogenated or partially hydrogenated vegetable oils. Most of our healthcare costs go to a few diseases, all of them preventable. And the rates of major diseases are skyrocketing: neurocognitive conditions (mood disorders, personality disorders, autistic spectrum disorders, ADHD, etc), autoimmune disorders (type 1 diabetes, Alzheimer’s, multiple sclerosis, Hashimoto’s disease, many forms of arthritis, etc), metabolic syndrome (type 2 diabetes, heart disease, stroke, etc), and much else. This all relates to industrialized farming and food production that has, among much else, caused the soil to become depleted of nutrients while eroding what is left of the topsoil. At this rate, we have less than a century of topsoil left. And monocrops have been devastating to ecological diversity and have set us up for famines when crops fail.

There is pretty much no one who isn’t being harmed. And increasingly the harm is coming at younger ages, with diseases of older age now being seen among children and young adults. More of the population is becoming sick and disabled before they even get old enough to enter the workforce. For example, schizophrenia is on the rise among urban youth for reasons not entirely certain — in a summary of one study, it was noted that “young city-dwellers also have 40% more chance of suffering from psychosis (hearing voices, paranoia or becoming schizophrenic in adulthood)”, something that “is perhaps less common knowledge” (see Urban Weirdness). So, it isn’t only that more people are dying younger. The quality of people’s lives is worsening. And with ever more people disabled and struggling, who is going to help them? Or are large swaths of the world’s population simply going to become unwanted and uncared for? And will we allow billions of people to fall further into poverty? If not becoming homeless, is it a better fate that we simply institutionalize these people so that we of the comfortable classes don’t have to see them? Or will we put these useless eaters into ghettos and internment camps to isolate them like a plague to be contained? The externalized costs of modern industrialized capitalism are beyond imagining and they’re quickly becoming worse.

Modernity is a death cult, as I’ve previously concluded. Besides mass extinction on a level never before experienced in all of hominid existence (the last mass extinction was 66 million years ago), we are already feeling the results of climate change with increased super-storms, floods, droughts, wildfires, etc. Recent heatwaves have been unprecedented, including in the Arctic — far from being a mere annoyance, since the heat speeds up the melting of glaciers, sea ice, and permafrost, which in turn releases greenhouse gases (possibly pathogens as well), speeds up the warming (Arctic Amplification), and will alter ocean currents and the polar jet stream. These environmental changes are largely what is behind the refugee crises numerous countries are facing, which is also connected to terrorism. Inequality within and between societies will exacerbate the problems further with increased conflicts and wars, with endless crisis after crisis coming from every direction until the available resources are pushed to the limit and beyond — as I wrote last year:

“As economic and environmental conditions worsen, there are some symptoms that will become increasingly apparent and problematic. Based on the inequality and climatology research, we should expect increased stress, anxiety, fear, xenophobia, bigotry, suicide, homicide, aggressive behavior, short-term thinking, reactionary politics, and generally crazy and bizarre behavior. This will likely result in civil unrest, violent conflict, race wars, genocides, terrorism, militarization, civil wars, revolutions, international conflict, resource-based wars, world wars, authoritarianism, ethno-nationalism, right-wing populism, etc.”

If you really want to be depressed, might I suggest reading Deep Adaptation: A Map for Navigating Climate Tragedy by Jem Bendell, a full Professor of Sustainability Leadership and Founder of the Institute for Leadership and Sustainability (IFLAS) at the University of Cumbria (UK): “When I say starvation, destruction, migration, disease, and war, I mean in your own life. With the power down, soon you won’t have water coming out of your tap. You will depend on your neighbors for food and some warmth. You will become malnourished. You won’t know whether to stay or go. You will fear being violently killed before starving to death.” Here is what a Vice piece had to say about it:

“You only needed to step outside during the record-breaking heatwave last year to acknowledge that 17 of the 18 hottest years on the planet have occurred since 2000. Scientists already believe we are soon on course for an ice-free Arctic, which will only accelerate global warming. Back in 2017, even Fox News reported scientists’ warnings that the Earth’s sixth mass extinction was underway. Erik Buitenhuis, a senior researcher at the Tyndall Centre for Climate Change Research, tells me that Bendell’s conclusions may sound extreme, but he agrees with the report’s overall assessment. “I think societal collapse is indeed inevitable,” he says, though adds that “the process is likely to take decades to centuries” ” (Geoff Dembicki, The Climate Change Paper So Depressing It’s Sending People to Therapy).

What are governments and other major institutions doing in response? Very little, despite the consensus among experts and a majority of Americans supporting environmental policies, although the Pentagon and the Department of Homeland Security are concerned with maintaining their own power: “Their preparation, however, is not aimed at preventing or slowing down climate change, nor is it principally aimed at relieving distress. Rather it is in protecting the U.S. homeland and American business interests from the desperate masses” (Phil Ebersole, Climate, migration and border militarization). There are many courses of action we could take. And we know what needs to be done to prevent or mitigate what will otherwise follow. Will we do it? Of course not. The problem is too large, too incomprehensible, and too depressing. We will go on denying it until well into the global crisis, if not the civilizational collapse. At that point, it probably will no longer matter what we do or don’t do. But until then, we can begin to imagine the unimaginable, if only to prepare for it psychologically.

Then again, maybe we’ll find some way to pull out of this death spiral at the last moment. It’s unlikely, but humans can be innovative under pressure, and no doubt there will be plenty of people attempting to create new technologies and adapt to new conditions. Even if there is only minimal success, some of the population could be saved as we shift to smaller-scale societies in the areas that are still viable for farming or else escape into ecodomes. One way or another, the world as we know it will not continue on as before and, setting aside all the suffering and death, I don’t know that this will be an entirely bad thing, at least for the earth if not for humanity. Even in that best case scenario, we would still be facing possibly thousands of years of climate disruption, maybe a new ice age, and on top of that it would take millions of years for the biosphere and ecosystems to recover from mass extinction and find a new balance. So, if we humans plan on surviving, it will be a very long struggle. Thousands of future generations will inherit our mistakes and our mess.

Conceptual Spaces

In a Nautilus piece, New Evidence for the Strange Geometry of Thought, Adithya Rajagopalan reports on the fascinating topic of conceptual or cognitive spaces. He begins with the work of the philosopher and cognitive scientist Peter Gärdenfors, who wrote about this in a 2000 book, Conceptual Spaces. Then last year, a Science paper was published by several neuroscientists: Jacob Bellmund, Christian Doeller, and Edvard Moser. It has to do with the brain’s “inner GPS.”

Anyone who has followed my blog for a while should see the interest this has for me. There is Julian Jaynes’ thought on consciousness, of course. And there are all kinds of other thinkers as well. I could throw out Iain McGilchrist and James L. Kugel who, though critical of Jaynes, make similar points about identity and the divided mind.

The work of Gärdenfors and the above neuroscientists helps explain numerous phenomena, specifically the ways splintering and dissociation operate. How a Nazi doctor could torture Jewish children at work and then go home to play with his own children. How the typical person can be pious at church on Sunday and yet act in complete contradiction to this for the rest of the week. How we can know that the world is being destroyed through climate change and still go on about our lives as if everything remains the same. How we can simultaneously know and not know so many things. Et cetera.

It might begin to give us some more details in explaining the differences between the bicameral mind and Jaynesian consciousness, between Ernest Hartmann’s thin and thick boundaries of the mind, and much else. Also, in light of Lynne Kelly’s work on traditional mnemonic systems, we might be in a better position to understand the phenomenal memory feats humans are capable of, why they are so often spatial in organization (e.g., the Songlines of Australian Aborigines), and why they often involve shifts in mental states. It might also clarify how people can temporarily or permanently change personalities and identities, how people can compartmentalize parts of themselves such as their childhood selves, and maybe why others fail at compartmentalizing.

The potential significance is immense. Our minds are mansions with many rooms. Below is the meat of Rajagopalan’s article.

* * *

“Cognitive spaces are a way of thinking about how our brain might organize our knowledge of the world,” Bellmund said. It’s an approach that concerns not only geographical data, but also relationships between objects and experience. “We were intrigued by evidence from many different groups that suggested that the principles of spatial coding in the hippocampus seem to be relevant beyond the realms of just spatial navigation,” Bellmund said. The hippocampus’ place and grid cells, in other words, map not only physical space but conceptual space. It appears that our representation of objects and concepts is very tightly linked with our representation of space.

Work spanning decades has found that regions in the brain—the hippocampus and entorhinal cortex—act like a GPS. Their cells form a grid-like representation of the brain’s surroundings and keep track of its location on it. Specifically, neurons in the entorhinal cortex activate at evenly distributed locations in space: If you drew lines between each location in the environment where these cells activate, you would end up sketching a triangular grid, or a hexagonal lattice. The activity of these aptly named “grid” cells contains information that another kind of cell uses to locate your body in a particular place. The explanation of how these “place” cells work was stunning enough to award scientists John O’Keefe, May-Britt Moser, and Edvard Moser, the 2014 Nobel Prize in Physiology or Medicine. These cells activate only when you are in one particular location in space, or the grid, represented by your grid cells. Meanwhile, head-direction cells define which direction your head is pointing. Yet other cells indicate when you’re at the border of your environment—a wall or cliff. Rodent models have elucidated the nature of the brain’s spatial grids, but, with functional magnetic resonance imaging, they have also been validated in humans.

Recent fMRI studies show that cognitive spaces reside in the hippocampal network—supporting the idea that these spaces lie at the heart of much subconscious processing. For example, subjects of a 2016 study—headed by neuroscientists at Oxford—were shown a video of a bird’s neck and legs morph in size. Previously they had learned to associate a particular bird shape with a Christmas symbol, such as Santa or a Gingerbread man. The researchers discovered the subjects made the connections with a “mental picture” that could not be described spatially, on a two-dimensional map. Yet grid-cell responses in the fMRI data resembled what one would see if subjects were imagining themselves walking in a physical environment. This kind of mental processing might also apply to how we think about our family and friends. We might picture them “on the basis of their height, humor, or income, coding them as tall or short, humorous or humorless, or more or less wealthy,” Doeller said. And, depending on whichever of these dimensions matters in the moment, the brain would store one friend mentally closer to, or farther from, another friend.
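
(A quick aside of my own, not from the article: the dimensional coding Doeller describes can be sketched with a toy example. The friends, dimensions, and numbers below are all invented; the point is only to show how “distance” in a conceptual space can flip depending on which dimension matters in the moment.)

```python
import numpy as np

# Invented "conceptual space": each friend is a point along three dimensions
# (height in cm, humor rating 0-10, income in $1000s). Purely illustrative.
friends = {
    "Ana":   np.array([180.0, 8.0, 40.0]),
    "Boris": np.array([165.0, 7.0, 150.0]),
    "Chloe": np.array([178.0, 2.0, 45.0]),
}

def conceptual_distance(a, b, weights):
    """Weighted Euclidean distance; the weights stand in for whichever
    dimension matters in the current context."""
    return float(np.sqrt(np.sum(weights * (a - b) ** 2)))

contexts = {
    "humor matters":  np.array([0.0, 1.0, 0.0]),
    "income matters": np.array([0.0, 0.0, 1.0]),
}

for label, w in contexts.items():
    d_boris = conceptual_distance(friends["Ana"], friends["Boris"], w)
    d_chloe = conceptual_distance(friends["Ana"], friends["Chloe"], w)
    closer = "Boris" if d_boris < d_chloe else "Chloe"
    print(f"{label}: Ana-Boris={d_boris:.1f}, Ana-Chloe={d_chloe:.1f} -> {closer} is 'closer'")
```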

But the usefulness of a cognitive space isn’t just restricted to already familiar object comparisons. “One of the ways these cognitive spaces can benefit our behavior is when we encounter something we have never seen before,” Bellmund said. “Based on the features of the new object we can position it in our cognitive space. We can then use our old knowledge to infer how to behave in this novel situation.” Representing knowledge in this structured way allows us to make sense of how we should behave in new circumstances.

Data also suggests that this region may represent information with different levels of abstraction. If you imagine moving through the hippocampus, from the top of the head toward the chin, you will find many different groups of place cells that completely map the entire environment but with different degrees of magnification. Put another way, moving through the hippocampus is like zooming in and out on your phone’s map app. The area in space represented by a single place cell gets larger. Such size differences could be the basis for how humans are able to move between lower and higher levels of abstraction—from “dog” to “pet” to “sentient being,” for example. In this cognitive space, more zoomed-out place cells would represent a relatively broad category consisting of many types, while zoomed-in place cells would be more narrow.

Yet the mind is not just capable of conceptual abstraction but also flexibility—it can represent a wide range of concepts. To be able to do this, the regions of the brain involved need to be able to switch between concepts without any informational cross-contamination: It wouldn’t be ideal if our concept for bird, for example, were affected by our concept for car. Rodent studies have shown that when animals move from one environment to another—from a blue-walled cage to a black-walled experiment room, for example—place-cell firing is unrelated between the environments. Researchers looked at where cells were active in one environment and compared it to where they were active in the other. If a cell fired in the corner of the blue cage as well as the black room, there might be some cross-contamination between environments. The researchers didn’t see any such correlation in the place-cell activity. It appears that the hippocampus is able to represent two environments without confounding the two. This property of place cells could be useful for constructing cognitive spaces, where avoiding cross-contamination would be essential. “By connecting all these previous discoveries,” Bellmund said, “we came to the assumption that the brain stores a mental map, regardless of whether we are thinking about a real space or the space between dimensions of our thoughts.”
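
(Another aside of my own, not from the article: the remapping result can be mimicked with a toy simulation, assuming Gaussian firing fields drawn independently for each environment. With independent fields, each cell’s firing map in one environment tells you essentially nothing about its map in the other, so the per-cell map-to-map correlations average out near zero.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_bins = 200, 50  # simulated place cells; spatial bins along a 1-D track

def firing_maps(centers, width=3.0):
    """Gaussian firing-rate map for each cell over the binned environment."""
    bins = np.arange(n_bins)
    return np.exp(-((bins[None, :] - centers[:, None]) ** 2) / (2 * width**2))

# Each cell gets an independently drawn place-field center in each environment,
# standing in for the remapping seen between, say, a blue cage and a black room.
maps_a = firing_maps(rng.uniform(0, n_bins, n_cells))
maps_b = firing_maps(rng.uniform(0, n_bins, n_cells))

# Correlate each cell's map in environment A with its map in environment B.
corrs = [np.corrcoef(a, b)[0, 1] for a, b in zip(maps_a, maps_b)]
print(f"mean map-to-map correlation: {np.mean(corrs):+.3f}")  # close to zero on average
```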

Reckoning With Violence

The crime debate is another example of how the ruling elite is disconnected from the American majority. Most Americans support rehabilitation rather than punishment. This support is even stronger among victims of crime, because they understand how tough-on-crime policies have destroyed their communities and harmed the people they care about.

But the ruling elite makes massive profits from privatized prisons and, if nothing else, mass incarceration is a highly effective form of social control, keeping the population in a permanent state of anxiety and fear. The purpose was never to make the world a better place or to help the average American, much less those struggling near the bottom.

The system works perfectly for its intended purpose. The problem is that its intended purpose is psychopathic and evil. And, I might add, the ruling elite promoting it is bipartisan. It’s time that we, the American people, demand justice for our families and communities and refuse to accept anything less from those who would get in our way. Let’s save our righteous wrath for those most deserving of it.

* * *

Reckoning With Violence
by Michelle Alexander

As Ms. [Danielle] Sered explains in her book [Until We Reckon], drawing on her experience working with hundreds of survivors and perpetrators of violence in Brooklyn and the Bronx, imprisonment isn’t just an inadequate tool; it’s often enormously counterproductive — leaving survivors and their communities worse off.

Survivors themselves know this. That’s why fully 90 percent of survivors in New York City, when given the chance to choose whether they want the person who harmed them incarcerated or in a restorative justice process — one that offers support to survivors while empowering them to help decide how perpetrators of violence can repair the damage they’ve done — choose the latter and opt to use the services of Ms. Sered’s nonprofit organization, Common Justice. […]

Ninety percent is a stunning figure considering everything we’ve been led to believe that survivors actually want. For years, we’ve been told that victims of violence want nothing more than for the people who hurt them to be locked up and treated harshly. It is true that some survivors do want revenge or retribution, especially in the immediate aftermath of the crime. Ms. Sered is emphatic that rage is not pathological and a desire for revenge is not blameworthy; both are normal and can be important to the healing process, much as denial and anger are normal stages of grief.

But she also stresses that the number of people who are interested only in revenge or punishment is greatly exaggerated. After all, survivors are almost never offered real choices. Usually when we ask victims “Do you want incarceration?” what we’re really asking is “Do you want something or nothing?” And when any of us are hurt, and when our families and communities are hurting, we want something rather than nothing. In many oppressed communities, drug treatment, good schools, economic investment, job training, trauma and grief support are not available options. Restorative justice is not an option. The only thing on offer is prisons, prosecutors and police.

But what happens, Ms. Sered wondered, if instead of asking, “Do you want something or nothing?” we started asking “Do you want this intervention or that prison?” It turns out, when given a real choice, very few survivors choose prison as their preferred response.

This is not because survivors, as a group, are especially merciful. To the contrary, they’re pragmatic. They know the criminal justice system will almost certainly fail to deliver what they want and need most to overcome their pain and trauma. More than 95 percent of cases end in plea bargains negotiated by lawyers behind the scenes. Given the system’s design, survivors know the system cannot be trusted to validate their suffering, give them answers or even a meaningful opportunity to be heard. Nor can it be trusted to keep them or others safe.

In fact, many victims find that incarceration actually makes them feel less safe. They worry that others will be angry with them for reporting the crime and retaliate, or fear what will happen when the person eventually returns home. Many believe, for good reason, that incarceration will likely make the person worse, not better — a frightening prospect when they’re likely to encounter the person again when they’re back in the neighborhood. […]

A growing body of research strongly supports the anecdotal evidence that restorative justice programs increase the odds of safety, reduce recidivism and alleviate trauma. “Until We Reckon” cites studies showing that survivors report 80 to 90 percent rates of satisfaction with restorative processes, as compared to 30 percent for traditional court systems.

Common Justice’s success rate is high: Only 7 percent of responsible parties have been terminated from the program for a new crime. And it’s not alone in successfully applying restorative justice principles. Numerous organizations — such as Community Justice for Youth Institute and Project NIA in Chicago; the Insight Prison Project in San Quentin; the Community Conferencing Center in Baltimore; and Restorative Justice for Oakland Youth — are doing so in communities, schools, and criminal justice settings from coast-to-coast.

In 2016, the Alliance for Safety and Justice conducted the first national poll of crime survivors and the results are consistent with the emerging trend toward restorative justice. The majority said they “believe that time in prison makes people more likely to commit another crime rather than less likely.” Sixty-nine percent preferred holding people accountable through options beyond prison, such as mental health treatment, substance abuse treatment, rehabilitation, community supervision and public service. Survivors’ support for alternatives to incarceration was even higher than among the general public.

Survivors are right to question incarceration as a strategy for violence reduction. Violence is driven by shame, exposure to violence, isolation and an inability to meet one’s economic needs — all of which are core features of imprisonment. Perhaps most importantly, according to Ms. Sered, “Nearly everyone who has committed violence first survived it,” and studies indicate that experiencing violence is the greater predictor of committing it. Caging and isolating a person who’s already been damaged by violence is hardly a recipe for positive transformation.

The Court of Public Opinion: Part 1

This is about public opinion and public perception as they relate to public policy (see previous posts). I also include some analysis of the opinions of politicians as they relate to public opinion, or rather to politicians’ perceptions of what they think or want to believe about the public (for background, see here and here).

I’ll begin with a problematic example of a poll. Here is an article that someone offered as proving the public supports tough-on-crime policies:

There were stunning findings in a new poll released Monday on crime in New York City. Keeping crime down is way more important to voters than reforming the NYPD’s controversial stop-and-frisk program…

a new Quinnipiac University poll…reveals that public safety is uppermost on the minds of voters…

Asked which was more important, keeping crime rates down or reforming stop and frisk, 62 percent said keeping crime rates low and 30 percent said reforming stop and frisk.

The article itself isn’t important. There are thousands like it, but I wanted to use it for the polling data it cites.

I don’t know of any particular bias from Quinnipiac, beyond a basic mainstream bias, so maybe the wording of the question was simply intellectual laziness. It was phrased as a forced-choice question, implying that choosing one option negated the possibility of the other and that those were the only available policy choices.

I looked further into the data related to stop and frisk. It isn’t as simple as the forced choice presents it. For one thing, a number of studies don’t show that stop and frisk actually keeps crime rates low, as the question assumes. For another, when given more information and more options, Americans tend to support funding programs that either help prevent crime or help rehabilitate criminals.

The general public will favor punishment when no other good choices are offered. Still, that doesn’t say much about the fundamental values of most Americans. I’m not just interested in the answers given, but also in the questions asked: how they are framed and how they are worded.

The Court of Public Opinion: Part 2

I’ll highlight one issue. It is a chicken-or-egg scenario.

The political elites are fairly clueless about the views of the general public, including their own constituents. At the same time, the average American is clueless about what those in government are actually doing. This disconnection is what one expects from a society built on a class-based hierarchy with growing extremes of inequality. In countries that have lower inequality, there is far less disconnection between political elites and the citizenry.

It isn’t clear who is leading whom. How could politicians simply be doing what the public wants when they don’t know what the public wants? So, what impact does public opinion even have? There is strong evidence that public opinion might simply be following elite opinion, reacting to the rhetoric heard in the mainstream media.

Populations are easily manipulated by propaganda, as history shows. That seems to be the case with the United States as well.

As such, it isn’t clear how punitive most Americans actually are. When given more and better information, when given more and better options, most Americans tend to focus away from straightforward punitive policies. Imagine what the public might support if we ever had an open and honest debate based on the facts.

The Literal Metaphor of Sickness

I’ve written about Lenore Skenazy before. She is one of my mom’s favorite writers, and so she likes to share the articles with me. Skenazy has another piece on her usual topic, helicopter parents and their captive children. Today’s column, in the local newspaper (The Gazette), has the title “The irony of overprotection” (you can find it on the Creators website or at GazetteXtra). She begins with a metaphor. In studying how leukemia is contracted, the scientist Mel Greaves found that two conditions are required. The first is a genetic susceptibility, which exists in only a certain number of kids, although it is far from uncommon. But that alone isn’t sufficient without the second factor.

There has to be an underdeveloped or compromised immune system. And sadly this, too, has become far from uncommon. Further evidence for the hygiene hypothesis keeps accumulating (it should be called the hygiene theory at this point). Basically, it is only by being exposed to germs that a child’s immune system experiences the healthy stress that activates normal development. Without this, many are left plagued by ongoing sickness, allergies, and autoimmune conditions for the rest of their lives.

Parents have not only protected their children from the larger dangers and infinite risks of normal childhood: skinned knees from roughhousing, broken limbs from falling out of trees, hurt feelings from bullies, trauma from child molesters, murder from the roving bands of psychotic kidnappers who will sell your children on the black market, etc. Beyond such everyday fears, parents have also protected their kids from minor infections, endlessly applying anti-bacterial products and cocooning them in sterile spaces liberally doused with chemicals that kill all known microbial life forms. That is not a good thing, for the consequences are dire.

This is where the metaphor kicks in. Skenazy writes:

The long-term effects? Regarding leukemia, “when such a baby is eventually exposed to common infections, his or her unprimed immune system reacts in a grossly abnormal way,” says Greaves. “It overreacts and triggers chronic inflammation.”

Regarding plain old emotional resilience, what we might call “psychological inflammation” occurs when kids overreact to an unfamiliar or uncomfortable situation because they have been so sheltered from these. They feel unsafe, when actually they are only unprepared, because they haven’t been allowed the chance to develop a tolerance for some fears and frustrations. That means a minor issue can be enough to set a kid off — something we are seeing at college, where young people are at last on their own. There has been a surge in mental health issues on campuses.

It’s no surprise that anxiety would be spiking in an era when kids have had less chance to deal with minor risks from childhood on up.

There is only a minor point of disagreement I’d throw out: there is nothing metaphorical about this. Because of an antiseptic world and other causes (leaky gut, high-carb diet, sugar addiction, food additives, chemical exposure, etc.), the immune systems of so many modern Americans are so dysfunctional and overreactive that they wreak havoc on the body. Chronic inflammation has been directly linked to, or otherwise associated with, nearly every major health issue you can think of.

This includes, by the way, neurocognitive conditions such as depression and anxiety, but much worse as well: schizophrenia, Alzheimer’s, and other conditions also often involve inflammation. When inflammation gets into the brain, the gut-brain axis, and/or the nervous system, major problems follow, with a diversity of symptoms that can be severe and life-threatening, and that can also be damaging on a social and psychological level. This new generation of children is literally being brain-damaged, psychologically maimed, and left in a fragile state. For many of them, their bodies and minds are not fully prepared to deal with the real world with normal, healthy responses. It is hard to manage the stresses of life when one is in a constant state of low-grade sickness that permanently sets the immune system on high, when even the most minor risks could endanger one’s well-being.

The least of our worries is that diseases like type 2 diabetes, once called adult-onset diabetes because it was unknown among children, are now increasing among children. Sure, adult illnesses will find their way earlier and earlier into young adulthood and childhood, and the diseases of the elderly will hit people in middle age or younger. This will be a health crisis that could bankrupt and cripple our society. But worse than that is the human cost of sickness and pain, struggle and suffering. We are forcing this fate onto the young generations. That is cruel beyond comprehension. We can barely imagine what it will mean across the entire society when it finally erupts as a crisis.

We’ve done this out of the ignorant good intention of wanting to protect our children from anything that could touch them. It makes us feel better to have created a bubble world of innocence where children won’t have to learn from the mistakes and failures, harms and difficulties we experienced growing up. So instead, we’ve created something far worse for them.

Neolithic Troubles

Born Expecting the Pleistocene
by Mark Seely
p. 31

Not our natural habitat

The mismatch hypothesis

Our bodies including our brains—and thus our behavioral predispositions—have evolved in response to very specific environmental and social conditions. Many of those environmental and social conditions no longer exist for most of us. Our physiology and our psychology, all of our instincts and in-born social tendencies, are based on life in small semi-nomadic tribal groups of rarely more than 50 people. There is a dramatic mismatch between life in a crowded, frenetic, technology-based global civilization and the kind of life our biology and our psychology expects [14].

And we suffer serious negative consequences of this mismatch. A clear example can be seen in the obesity epidemic that has swept through developed nations in recent decades: our bodies evolved to meet energy demands in circumstances where the presence of food was less predictable and periods of abundance more variable. Because of this, we have a preference for calorie-dense food, we have a tendency to eat far more than we need, and our bodies are quick to hoard extra calories in the form of body fat. This approach works quite well during a Pleistocene ice age, but it is maladaptive in our present food-saturated society—and so we have an obesity epidemic because of the mismatch between the current situation and our evolution-derived behavioral propensities with respect to food. Studies on Australian aborigines conducted in the 1980s, evaluating the health effects of the transition from traditional hunter-gatherer lifestyle to urban living, found clear evidence of the health advantages associated with a lifestyle consistent with our biological design [15]. More recent research on the increasingly popular Paleo-diet [16] has since confirmed wide-ranging health benefits associated with selecting food from a pre-agriculture menu, including cancer resistance, reduction in the prevalence of autoimmune disease, and improved mental health.

[14] Ornstein, R. & Ehrlich, P. (1989). New World, New Mind. New York: Simon & Schuster.
[15] O’Dea, K., Spargo, R., & Akerman, K. (1980). The effect of transition from traditional to urban life-style on the insulin secretory response in Australian Aborigines. Diabetes Care, 3(1), 31-37; O’Dea, K., White, N., & Sinclair, A. (1988). An investigation of nutrition-related risk factors in an isolated Aboriginal community in northern Australia: advantages of a traditionally-orientated life-style. The Medical Journal of Australia, 148(4), 177-80.
[16] E.g., Frassetto, L. A., Schloetter, M., Mietus-Snyder, M., Morris, R. C., & Sebastian, A. (2009). Metabolic and physiological improvements from consuming a Paleolithic, hunter-gatherer type diet. European Journal of Clinical Nutrition, 63, 947-955.

pp. 71-73

The mechanisms of cultural evolution can be seen in the changing patterns of foraging behavior in response to changes in food availability and changes in population density. Archaeological analyses suggest that there is a predictable pattern of dietary choice that emerges from the interaction among population density, relative abundance of preferred food sources, and factors that relate to the search and handling of various foods. [56] In general, diets become more varied, or broaden, as population increases and the preferred food becomes more difficult to obtain. When a preferred food source is abundant, the calories in the diet may consist largely of that one particular food. But as the food source becomes more difficult to obtain, less preferable foods will be included and the diet will broaden. Such dietary changes imply changes in patterns of behavior within the community—changes of culture.

Behavior ecologists and anthropologists have partitioned the foraging process into two components with respect to the cost-benefit analysis associated with dietary decisions: search and handling. [57] The search component of the cost-benefit ledger refers to the amount of work per calorie payoff (and other benefits such as the potential for enhanced social standing) associated with a food item’s abundance, distance, terrain, proximity of another group’s territory, water sources, etc. The handling component refers to the work per calorie payoff associated with getting the food into a state (location, form, etc.) in which it can be consumed. Search and handling considerations can be largely independent of each other. The residential permanence involved with the incorporation of agriculture reduces the search consideration greatly, and makes handling the primary consideration. Global industrial food economies change entirely the nature of both search and handling: handling in industrial society—from the perspective of the individual and the individual’s decision processes—is reduced largely to considerations of speed and convenience. The search component has been re-appropriated and refocused by corporate marketing, and reduced to something called shopping.

Domestication, hands down the most dramatic and far-reaching example of cultural evolution, emerges originally as a response to scarcity that is tied to a lack of mobility and an increase in population density. Domestication is a way of further broadening the diet when other local sources of food are already being maximally exploited. Initial experimentation with animal domestication “occurred in situations where forager diets were already quite broad and where the principle goal of domestication was the production of milk, an exercise that made otherwise unusable plants or plant parts available for human consumption. . . .” [58] The transition to life-ways based even partially on domestication has some counter-intuitive technological ramifications as well.

This leads to a further point about efficiency. It is often said that the adoption of more expensive subsistence technology marks an improvement in this aspect of food procurement: better tools make the process more efficient. This is true in the sense that such technology often enables its users to extract more nutrients per unit weight of resource processed or area of land harvested. If, on the other hand, the key criterion is the cost/benefit ratio, the rate of nutrient gained relative to the effort needed to acquire it, then the use of more expensive tools will often be associated with declines in subsistence efficiency. Increased investment in handling associated with the use of high-cost projectile weapons, in plant foods that require extensive tech-related processing, and in more intensive agriculture all illustrate this point. [59]

In modern times, thanks to the advent of—and supportive propaganda associated with—factory industrial agriculture, farming is coupled with ideas of plentitude and caloric abundance. However, in the absence of fossil energy and petroleum-based chemical fortification, farming is expensive in terms of the calories produced as a function of the amount of work involved. For example, “farmers grinding corn with hand-held stone tools can earn no more than about 1800 kcal per hour of total effort devoted to farming, and this from the least expensive cultivation technique.” [60] A successful fishing or bison hunting expedition is orders of magnitude more efficient in terms of the ratio of calories expended to calories obtained.

[56] Bird & O’Connell [Bird, D. W., & O’Connell, J. F. (2006). Behavioral ecology and archaeology. Journal of Archaeological Research, 14, 143-188]
[57] Ibid.
[58] Ibid, p. 152.
[59] Ibid, p. 153.
[60] Ibid, p. 151, italics in original.
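
The search/handling trade-off Seely describes is usually formalized in behavioral ecology as the diet-breadth (prey-choice) model. Below is a minimal sketch of that logic in Python, with entirely invented encounter rates, calorie values, and handling times: resources are included in order of calories per handling hour for as long as each one raises the overall return rate, and the diet broadens once the preferred resource becomes scarce.

```python
# Diet-breadth (prey-choice) logic: rank resources by calories per hour of
# handling, then add them to the diet as long as a resource's handling return
# exceeds the overall return rate of the diet chosen so far.
# All numbers are invented for illustration only.

def overall_return(diet):
    """kcal gained per total hour (search + handling), Holling's disc equation."""
    energy = sum(rate * kcal for _, rate, kcal, _ in diet)       # kcal per search-hour
    handling = sum(rate * hours for _, rate, _, hours in diet)   # handling hrs per search-hour
    return energy / (1.0 + handling)

def optimal_diet(resources):
    """Add resources in rank order while each one raises the overall return rate."""
    ranked = sorted(resources, key=lambda r: r[2] / r[3], reverse=True)
    diet = []
    for name, rate, kcal, hours in ranked:
        if not diet or kcal / hours > overall_return(diet):
            diet.append((name, rate, kcal, hours))
        else:
            break   # lower-ranked items would only dilute the return rate
    return diet

# (name, encounters per search-hour, kcal per item, handling hours per item)
abundant = [
    ("bison",      0.020, 200_000, 30.0),
    ("salmon",     0.500,   4_000,  1.0),
    ("tubers",     2.000,     800,  0.5),
    ("grass seed", 5.000,     150,  0.4),
]
# Same landscape, but the preferred large game is now ten times harder to find.
depleted = [("bison", 0.002, 200_000, 30.0)] + abundant[1:]

for label, world in [("abundant", abundant), ("depleted", depleted)]:
    diet = optimal_diet(world)
    print(label, "->", [name for name, *_ in diet],
          f"({overall_return(diet):,.0f} kcal/hr)")
```

With these made-up numbers, the optimal diet widens from large game plus fish to include tubers once the large game becomes ten times harder to encounter, which is the broadening pattern the archaeological analyses above describe.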

pp. 122-123

The birth of the machine

The domestication frame

The Neolithic marks the beginnings of large scale domestication, what is typically referred to as the agricultural revolution. It was not really a revolution in that it occurred over an extended period of time (several thousand years) and in a mosaic piecemeal fashion, both in terms of the adoption of specific agrarian practices and in terms of specific groups of people who practiced them. Foraging lifestyles continue today, and represented the dominant lifestyle on the planet until relatively recently. The agricultural revolution was a true revolution, however, in terms of its consequences for the humans who adopted domestication-based life-ways, and for the rest of the natural world. The transition from nomadic and seminomadic hunting and gathering to sedentary agriculture is the most significant chapter in the chronicle of the human species. But it is clearly not a story of unmitigated success. Jared Diamond, who acknowledges somewhat the self-negating double-edge of technological “progress,” has called domestication the biggest mistake humans ever made.

That transition from hunting and gathering to agriculture is generally considered a decisive step in our progress, when we at last acquired the stable food supply and leisure time prerequisite to the great accomplishments of modern civilization. In fact, careful examination of that transition suggests another conclusion: for most people the transition brought infectious disease, malnutrition, and a shorter lifespan. For human society in general it worsened the relative lot of women and introduced class-based inequality. More than any other milestone along the path from chimpanzeehood to humanity, agriculture inextricably combines causes of our rise and our fall. [143]

The agricultural revolution had profoundly negative consequences for human physical, psychological, and social well being, as well as a wide-ranging negative impact on the planet.

For humans, malnutrition and the emergence of infectious disease are the most salient physiological results of an agrarian lifestyle. A large variety of foodstuffs and the inclusion of a substantial amount of meat make malnutrition an unlikely problem for hunter gatherers, even during times of relative food scarcity. Once the diet is based on a few select mono-cropped grains supplemented by milk and meat from nutritionally-inferior domesticated animals, the stage is set for nutritional deficit. As a result, humans are not as tall or broad in stature today as they were 25,000 years ago; and the mean age of death is lower today as well. [144] In addition, both the sedentism and population density associated with agriculture create the preconditions for degenerative and infectious disease. “Among the human diseases directly attributable to our sedentary lives in villages and cities are heart and vascular disorders, diabetes, stroke, emphysema, hypertension, and cirrhoses [sic.] of the liver, which together cause 75 percent of the deaths in the industrial nations.” [145] The diet and activity level of a foraging lifestyle serve as a potent prophylactic against all of these common modern-day afflictions. Nomadic hunter-gatherers are by no means immune to parasitic infection and disease. But the spread of disease is greatly limited by low population density and by a regular change of habitation which reduced exposure to accumulated wastes. Both hunter-gatherers and agriculturalists are susceptible to zoonotic diseases carried by animals, but domestication reduces an animal’s natural immunity to disease and infection, creates crowded conditions that support the spread of disease among animal populations, and increases the opportunity for transmission to humans. In addition, permanent dwellings provide a niche for a new kind of disease-carrying animal specialized for symbiotic parasitic cohabitation with humans, the rat being among the most infamous. Plagues and epidemic outbreaks were not a problem in the Pleistocene.

There is a significant psychological dimension to the agricultural revolution as well. A foraging hunter-gatherer lifestyle frames natural systems in terms of symbiosis and interrelationship. Understanding subtle connections among plants, animals, geography, and seasonal climate change is an important requisite of survival. Human agents are intimately bound to these natural systems and contemplate themselves in terms of these systems, drawing easy analogy between themselves and the natural communities around them, using animals, plants, and other natural phenomena as metaphor. The manipulative focus of domestication frames natural systems in antagonistic terms of control and resistance. “Agriculture removed the means by which men [sic.] could contemplate themselves in any other than terms of themselves (or machines). It reflected back upon nature an image of human conflict and competition . . . .” [146] The domestication frame changed our perceived relationship with the natural world, and lies at the heart of our modern-day environmental woes. According to Paul Shepard, with animal domestication we lost contact with an essential component of our human nature, the “otherness within,” that part of ourselves that grounds us to the rest of nature:

The transformation of animals through domestication was the first step in remaking them into subordinate images of ourselves—altering them to fit human modes and purposes. Our perception of not only ourselves but also of the whole of animal life was subverted, for we mistook the purpose of those few domesticates as the purpose of all. Plants never had for us the same heightened symbolic representation of purpose itself. Once we had turned animals into the means of power among ourselves and over the rest of nature, their uses made possible the economy of husbandry that would, with the addition of the agrarian impulse, produce those motives and designs on the earth contrary to respecting it. Animals would become “The Others.” Purposes of their own were not allowable, not even comprehensible. [147]

Domestication had a profound impact on human psychological development. Development—both physiological and psychological—is organized around a series of stages and punctuated by critical periods, windows of time in which the development and functional integration of specific systems are dependent upon external input of a designated type and quality. If the necessary environmental input for a given system is absent or of a sufficiently reduced quality, the system does not mature appropriately. This can have a snowball effect because the future development of other systems is almost always critically dependent on the successful maturation of previously developed systems. The change in focus toward the natural world along with the emergence of a new kind of social order interfered with epigenetic programs that evolved to anticipate the environmental input associated with a foraging lifestyle. The result was arrested development and a culture-wide immaturity:

Politically, agriculture required a society composed of members with the acumen of children. Empirically, it set about amputating and replacing certain signals and experiences central to early epigenesis. Agriculture not only infantilized animals by domestication, but exploited the infantile human traits of normal individual neoteny. The obedience demanded by the organization necessary for anything larger than the earliest village life, associated with the rise of a military caste, is essentially juvenile and submissive . . . . [148]

[143] Diamond (1992), p. 139. [Diamond, J. (1992). The Third Chimpanzee. New York: HarperCollins.]
[144] Shepard (1998) [Shepard, P. (1998). Coming Home to the Pleistocene. Washington, D.C.: Island Press]
[145] Ibid, p. 99.
[146] Shepard (1982), p. 114. [Shepard, P. (1982). Nature and Madness. Athens, Georgia: University of Georgia Press]
[147] Shepard (1998), p. 128.
[148] Shepard (1982), pp. 113-114.