The Crisis of Identity

“Have we lived too fast?”
~Dr. Silas Weir Mitchell, 1871
Wear and Tear, or Hints for the Overworked

I’ve been following Scott Preston over at his blog, Chrysalis. He has been writing on the same set of issues for a long time now, longer than I’ve been reading him. He reads widely and so draws on many sources, most of which I’m not familiar with, which is part of the reason I appreciate the work he does to pull together such informed pieces. A recent post, A Brief History of Our Disintegration, would give you a good sense of his intellectual project, although the word ‘intellectual’ sounds rather paltry for what he is describing:

“Around the end of the 19th century (called the fin de siecle period), something uncanny began to emerge in the functioning of the modern mind, also called the “perspectival” or “the mental-rational structure of consciousness” (Jean Gebser). As usual, it first became evident in the arts — a portent of things to come, but most especially as a disintegration of the personality and character structure of Modern Man and mental-rational consciousness.”

That time period has been an interest of mine as well. Two books come to mind that I’ve mentioned before: Tom Lutz’s American Nervousness, 1903 and Jackson Lears’ Rebirth of a Nation (for a discussion of the latter, see: Juvenile Delinquents and Emasculated Males). Both talk about that turn-of-the-century crisis, the psychological projections and physical manifestations, the social movements and political actions. A major concern was neurasthenia which, according to the dominant economic paradigm, meant a deficit of ‘nervous energy’ or ‘nerve force’, the reserves of which, if wasted rather than reinvested wisely, would lead to physical and psychological bankruptcy, leaving one ‘spent’ (the term ‘neurasthenia’ was first used in 1829 and popularized by George Miller Beard in 1869, the same period when the related medical condition of ‘nostalgia’ began being diagnosed).

This was mixed up with sexuality in what Theodore Dreiser called the ‘spermatic economy’ (by the way, the catalogue for Sears, Roebuck and Company offered an electrical device to replenish nerve force that came with a genital attachment). Obsession with sexuality reinforced gender roles in the treatment of neurasthenic patients, following the practice of Dr. Silas Weir Mitchell: men were recommended to become more active (the ‘West cure’) and women more passive (the ‘rest cure’), although some women “used neurasthenia to challenge the status quo, rather than enforce it. They argued that traditional gender roles were causing women’s neurasthenia, and that housework was wasting their nervous energy. If they were allowed to do more useful work, they said, they’d be reinvesting and replenishing their energies, much as men were thought to do out in the wilderness” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast). That feminist-style argument, as I recall, came up in advertisements for Bernarr Macfadden’s fitness protocol in the early 1900s, encouraging (presumably middle-class) women to give up housework for exercise and so regain their vitality. Macfadden was also an advocate of living a fully sensuous life, going as far as free love.

Besides the gender wars, there was the ever-present bourgeois bigotry. Neurasthenia is the most civilized of the diseases of civilization since, in its original American conception, it was perceived as only afflicting middle-to-upper class whites, especially WASPs. As Lutz says, “if you were lower class, and you weren’t educated and you weren’t Anglo Saxon, you wouldn’t get neurasthenic because you just didn’t have what it took to be damaged by modernity” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast) and so, according to Lutz’s book, people would make “claims to sickness as claims to privilege.” It was considered a sign of progress, but over time it came to be seen by some as the greatest threat to civilization, in either case offering much material for the fictionalized portrayals that were popular. Being sick in this fashion was proof that one was a modern individual, an exemplar of advanced civilization, if coming at immense cost. Julie Beck explains:

“The nature of this sickness was vague and all-encompassing. In his book Neurasthenic Nation, David Schuster, an associate professor of history at Indiana University-Purdue University Fort Wayne, outlines some of the possible symptoms of neurasthenia: headaches, muscle pain, weight loss, irritability, anxiety, impotence, depression, “a lack of ambition,” and both insomnia and lethargy. It was a bit of a grab bag of a diagnosis, a catch-all for nearly any kind of discomfort or unhappiness.

“This vagueness meant that the diagnosis was likely given to people suffering from a variety of mental and physical illnesses, as well as some people with no clinical conditions by modern standards, who were just dissatisfied or full of ennui. “It was really largely a quality-of-life issue,” Schuster says. “If you were feeling good and healthy, you were not neurasthenic, but if for some reason you were feeling run down, then you were neurasthenic.””

I’d point out how neurasthenia was seen as primarily caused by intellectual activity, as it became a descriptor of a common experience among the burgeoning middle class of often well-educated professionals and office workers. This relates to Weston A. Price’s work in the 1930s, as modern dietary changes first hit this demographic since they had the means to afford eating a fully industrialized Standard American Diet (SAD) long before others (within decades, though, SAD-caused malnourishment would wreck health at all levels of society). What this meant, in particular, was a diet high in processed carbs and sugar that coincided, because of Upton Sinclair’s 1906 muckraking novel about the meat-packing industry, The Jungle, with the early-1900s decreased consumption of meat and saturated fats. As Price demonstrated, this was a vast change from the traditional diet found all over the world, including in rural Europe (and presumably in rural America, as most Americans were not urbanized until the turn of the last century), that always included significant amounts of nutritious animal foods loaded up with fat-soluble vitamins, not to mention lots of healthy fats and cholesterol.

Prior to talk of neurasthenia, the exhaustion model of health, portrayed as waste and depletion, took hold in Europe centuries earlier (e.g., anti-masturbation panics) and had its roots in the humoral theory of bodily fluids. It has long been understood that food, specifically its macronutrients (carbohydrate, protein, & fat), affects mood and behavior — see the early literature on melancholy. During feudalism, food laws were used as a means of social control, such that in one case meat was prohibited prior to Carnival because its energizing effect was thought capable of leading to rowdiness or even revolt (Ken Albala & Trudy Eden, Food and Faith in Christian Culture).

There does seem to be a connection between an increase of intellectual activity and an increase in carbohydrates and sugar, a connection first appearing during the early colonial era that set the stage for the Enlightenment. It was the agricultural mind taken to a whole new level. Indeed, a steady flow of glucose is one way to fuel extended periods of brain work, such as reading and writing for hours on end and late into the night — the reason college students to this day will down sugary drinks while studying. Because of trade networks, Enlightenment thinkers were buzzing on the suddenly much more available simple carbs and sugar, with an added boost from caffeine and nicotine. The modern intellectual mind was drugged-up right from the beginning, and over time it took its toll. Such dietary highs inevitably lead to ever greater crashes of mood and health. Interestingly, Dr. Silas Weir Mitchell, who advocated the ‘rest cure’ and ‘West cure’ in treating neurasthenia and other ailments, additionally used a “meat-rich diet” for his patients (Ann Stiles, Go rest, young man). Other doctors of that era were even more direct in using specifically low-carb diets for various health conditions, often for obesity, which was also a focus of Dr. Mitchell’s.

Still, it goes far beyond diet. There has been a diversity of stressors that have continued to amass over the centuries of tumultuous change. The exhaustion of modern man (and typically the focus has been on men) had been building up for generations before it came to feel like a world-shaking crisis in the new industrialized world. The lens of neurasthenia was an attempt to grapple with what had changed, but the focus was too narrow. With the plague of neurasthenia, the atomization of commercialized man and woman couldn’t hold together. And so there was a temptation toward nationalistic projects, including wars, to revitalize the ailing soul and to suture the gash of social division and disarray. But this further wrenched out of alignment the traditional order that had once held society together, and what was lost mostly went without recognition. The individual was brought into the foreground of public thought, a lone protagonist in a social Darwinian world. In this melodramatic narrative of struggle and self-assertion, many individuals didn’t fare so well and everything else suffered in the wake.

Tom Lutz writes that, “By 1903, neurasthenic language and representations of neurasthenia were everywhere: in magazine articles, fiction, poetry, medical journals and books, in scholarly journals and newspaper articles, in political rhetoric and religious discourse, and in advertisements for spas, cures, nostrums, and myriad other products in newspapers, magazines and mail-order catalogs” (American Nervousness, 1903, p. 2).

There was a sense of moral decline that was hard to grasp, although some people like Weston A. Price tried to dig down into concrete explanations of what had so gone wrong, the social and psychological changes observable during mass urbanization and industrialization. He was far from alone in his inquiries, having built on the prior observations of doctors, anthropologists, and missionaries. Other doctors and scientists were looking into the influences of diet in the mid-1800s and, by the 1880s, scientists were exploring a variety of biological theories. Their inability to pinpoint the cause maybe had more to do with their lack of a needed framework, as they touched upon numerous facets of biological functioning:

“Not surprisingly, laboratory experiments designed to uncover physiological changes in the nerve cell were inconclusive. European research on neurasthenics reported such findings as loss of elasticity of blood vessels, thickening of the cell wall, changes in the shape of nerve cells, or nerve cells that never advanced beyond an embryonic state. Another theory held that an overtaxed organism cannot keep up with metabolic requirements, leading to inadequate cell nutrition and waste excretion. The weakened cells cannot develop properly, while the resulting build-up of waste products effectively poisons the cells (so-called “autointoxication”). This theory was especially attractive because it seemed to explain the extreme diversity of neurasthenic symptoms: weakened or poisoned cells might affect the functioning of any organ in the body. Furthermore, “autointoxicants” could have a stimulatory effect, helping to account for the increased sensitivity and overexcitability characteristic of neurasthenics.” (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia)

This early scientific research could not lessen the mercurial sense of unease, as neurasthenia was from its inception a broad category that captured some greater shift in public mood, even as it so powerfully shaped the individual’s health. For all the effort, there were as many theories about neurasthenia as there were symptoms. Deeper insight was required. “[I]f a human being is a multiformity of mind, body, soul, and spirit,” writes Preston, “you don’t achieve wholeness or fulfillment by amputating or suppressing one or more of these aspects, but only by an effective integration of the four aspects.” But integration is easier said than done.

The modern human hasn’t been suffering from mere psychic wear and tear, for the individual body itself has been showing the signs of sickness, as the diseases of civilization have become harder and harder to ignore. On the societal level of human health, I’ve previously shared passages from Lears (see here) — he discusses the vitalist impulse that arose in response to the turmoil, and vitalism often was explored in terms of physical health as its most apparent manifestation, although social and spiritual health were just as often spoken of in the same breath. The whole person was under assault by an accumulation of stressors, and the increasingly isolated individual didn’t have the resources to fight them off.

By the way, this was far from being limited to America. Europeans picked up the discussion of neurasthenia and took it in other directions, often with less optimism about progress, but also some thinkers emphasizing social interpretations with specific blame on hyper-individualism (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia). Thoughts on neurasthenia became mixed up with earlier speculations on nostalgia and romanticized notions of rural life. More important, Russian thinkers in particular understood that the problems of modernity weren’t limited to the upper classes, instead extending across entire populations, as a result of how societies had been turned on their heads during that fractious century of revolutions.

In looking around, I came across some other interesting stuff. In the 1901 Nervous and Mental Diseases by Archibald Church and Frederick Peterson, the authors in the chapter on “Mental Disease” are keen to further the description, categorization, and labeling of ‘insanity’. And I noted their concern with physiological asymmetry, something shared later with Price, among many others going back to the prior century.

Maybe asymmetry was not only indicative of developmental issues but also symbolic of a deeper imbalance. The attempts at phrenological analysis of psychiatric, criminal, and anti-social behavior were off-base; and yet, despite the bigotry and proto-genetic determinism among racists using these kinds of ideas, there is a simple truth about health in relationship to physiological development, most easily observed in bone structure. It would take many generations to understand the deeper scientific causes: nutrition (e.g., Price’s discovery of vitamin K2, what he called Activator X), along with parasites, toxins, and epigenetics. Church and Peterson did acknowledge that this went beyond mere individual or even familial issues: “It is probable that the intemperate use of alcohol and drugs, the spreading of syphilis, and the overstimulation in many directions of modern civilization have determined an increase difficult to estimate, but nevertheless palpable, of insanity in the present century as compared with past centuries.”

Also, there is the 1902 The Journal of Nervous and Mental Disease: Volume 29 edited by William G. Spiller. There is much discussion in there about how anxiety was observed, diagnosed, and treated at the time. Some of the case studies make for a fascinating read — check out: “Report of a Case of Epilepsy Presenting as Symptoms Night Terrors, Impellant Ideas, Complicated Automatisms, with Subsequent Development of Convulsive Motor Seizures and Psychical Aberration” by W. K. Walker. This reminds me of the case that influenced Sigmund Freud and Carl Jung, Daniel Paul Schreber’s 1903 Memoirs of My Nervous Illness.

Talk about “a disintegration of the personality and character structure of Modern Man and mental-rational consciousness,” as Scott Preston put it. He goes on to say that, “The individual is not a natural thing. There is an incoherency in Margaret Thatcher’s view of things when she infamously declared ‘there is no such thing as society’ — that she saw only individuals and families, that is to say, atoms and molecules.” Her saying that really did capture the mood of the society she denied existing. Even the family was shrunk down to the ‘nuclear’. To state there is no society is to declare that there is also no extended family, no kinship, no community, that there is no larger human reality of any kind. Ironically, in this pseudo-libertarian sentiment, there is nothing holding the family together other than government laws imposing strict control of marriage and parenting, where common finances lock two individuals together under the rule of capitalist realism (the only larger realities involved are inhuman systems) — compare high-trust societies such as the Nordic countries, where the definition and practice of family life is less legalistic (Nordic Theory of Love and Individualism).

The individual consumer-citizen, as a legal member of a family unit, has to be created and then controlled, as it is a rather unstable atomized identity. “The idea of the ‘individual’,” Preston says, “has become an unsustainable metaphor and moral ideal when the actual reality is ‘21st century schizoid man’ — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in ‘the New Normal’ of double-talk, double-think, double-standard, and double-bind.” That is partly the reason for the heavy focus on the body, an attempt to make the individual concrete in order to hold together the splintered self — great analysis of this can be found in Lewis Hyde’s Trickster Makes This World: “an unalterable fact about the body is linked to a place in the social order, and in both cases, to accept the link is to be caught in a kind of trap. Before anyone can be snared in this trap, an equation must be made between the body and the world (my skin color is my place as a Hispanic; menstruation is my place as a woman)” (see one of my posts about it: Lock Without a Key). Along with increasing authoritarianism, there was increasing medicalization and rationalization — attempts to make sense of what was senseless.

A specific example of a change can be found in Dr. Frederick Hollick (1818-1900), a popular writer and speaker on medicine and health — his “links were to the free-thinking tradition, not to Christianity” (Helen Lefkowitz Horowitz, Rewriting Sex). Influenced by Mesmerism and animal magnetism, he studied and wrote about what was variously called, in more scientific-sounding terms, electrotherapeutics, galvanism, and electro-galvanism. Hollick was an English follower of the Welsh-born industrialist and socialist Robert Owen, whom he literally followed to the United States, where Owen started the utopian community New Harmony, a Southern Indiana village bought from the utopian German Harmonists and then filled with brilliant and innovative minds but lacking in practical know-how about running a self-sustaining community (Abraham Lincoln, later a friend to the Owen family, recalled as a boy seeing the boat full of books heading to New Harmony).

“As had Owen before him, Hollick argued for the positive value of sexual feeling. Not only was it neither immoral nor injurious, it was the basis for morality and society. […] In many ways, Hollick was a sexual enthusiast” (Horowitz). These were the social circles of Abraham Lincoln, who personally knew free-love advocates; that is why early Republicans were often referred to as “Red Republicans”, the ‘Red’ indicating radicalism as it still does to this day. Hollick wasn’t the first sexual advocate nor, of course, would he be the last — preceding him were Sarah Grimke (1837, Equality of the Sexes) and Charles Knowlton (1839, The Private Companion of Young Married People), Hollick having been “a student of Knowlton’s work” (Debran Rowland, The Boundaries of Her Body); and following him were two better-known figures: the previously mentioned Bernarr Macfadden (1868-1955), the first major health and fitness guru, and Wilhelm Reich (1897-1957), the less respectable member of the trinity he formed with Sigmund Freud and Carl Jung. Sexuality became a symbolic issue of politics and health, partly because of increasing scientific knowledge but also because of the increasing marketization of products such as birth control (with public discussion of contraceptives happening in the late 1700s and advances in contraceptive production in the early 1800s), the latter being quite significant as it meant individuals could control pregnancy, which is particularly relevant to women. It should be noted that Hollick promoted the ideal of female sexual autonomy, that sex should be assented to and enjoyed by both partners.

This growing concern with sexuality began with the growing middle class in the decades following the American Revolution. Among much else, it was related to the post-revolutionary focus on parenting and the perceived need for raising republican citizens — this formed an audience far beyond radical libertinism and free love. Expert advice was needed for the new bourgeois family life, as part of the “civilizing process” that increasingly took hold at that time, with not only sexual manuals but also parenting guides, health pamphlets, books of manners, cookbooks, diet books, etc. Cut off from the roots of traditional community and kinship, the modern individual no longer trusted inherited wisdom and so needed to be taught how to live, how to behave and relate. Along with the rise of science, this situation promoted the role of the public intellectual, which Hollick effectively took advantage of: after the failure of Owen’s utopian experiment, he went on the lecture circuit, which brought on legal cases in the unsuccessful attempt to silence him, the kind of persecution that Reich also later endured.

To put it in perspective, this Antebellum era of public debate and public education on sexuality coincided with other changes. Following revolutionary-era feminism (e.g., Mary Wollstonecraft), the “First Wave” of organized feminists emerged generations later with the Seneca Falls convention in 1848 and, in that movement, there was a strong abolitionist impulse. This was part of the rise of ideological -isms in the North that so concerned the Southern aristocrats who wanted to maintain their hierarchical control of the entire country, control they were quickly losing with the shift of power in the Federal government. A few years before that, in 1844, a more effective condom was developed using vulcanized rubber, although condoms had been on the market since the previous decade; also in the 1840s, the vaginal sponge became available. Interestingly, many feminists were as against contraceptives as they were against abortions. These were far from being mere practical issues, as politics imbued every aspect, and some feminists worried about how this might lessen the role of women and motherhood in society if sexuality was divorced from pregnancy.

This was at a time when the abortion rate was skyrocketing, indicating most women held other views. “Yet we also know that thousands of women were attending lectures in these years, lectures dealing, in part, with fertility control. And rates of abortion were escalating rapidly, especially, according to historian James Mohr, the rate for married women. Mohr estimates that in the period 1800-1830, perhaps one out of every twenty-five to thirty pregnancies was aborted. Between 1850 and 1860, he estimates, the ratio may have been one out of every five or six pregnancies. At mid-century, more than two hundred full-time abortionists reportedly worked in New York City” (Rickie Solinger, Pregnancy and Power, p. 61). In the unGodly and unChurched period of early America (“We forgot.”), organized religion was weak and “premarital sex was typical, many marriages following after pregnancy, but some people simply lived in sin. Single parents and ‘bastards’ were common” (A Vast Experiment). Early Americans, by today’s standards, were not good Christians — visiting Europeans often saw them as uncouth heathens, and quite dangerous at that, given the common American practice of toting around guns and knives, ever ready for a fight, whereas carrying weapons had been made illegal in England. In The Churching of America, Roger Finke and Rodney Stark write (pp. 25-26):

“Americans are burdened with more nostalgic illusions about the colonial era than about any other period in their history. Our conceptions of the time are dominated by a few powerful illustrations of Pilgrim scenes that most people over forty stared at year after year on classroom walls: the baptism of Pocahontas, the Pilgrims walking through the woods to church, and the first Thanksgiving. Had these classroom walls also been graced with colonial scenes of drunken revelry and barroom brawling, of women in risque ball-gowns, of gamblers and rakes, a better balance might have been struck. For the fact is that there never were all that many Puritans, even in New England, and non-Puritan behavior abounded. From 1761 through 1800 a third (33.7%) of all first births in New England occurred after less than nine months of marriage (D. S. Smith, 1985), despite harsh laws against fornication. Granted, some of these early births were simply premature and do not necessarily show that premarital intercourse had occurred, but offsetting this is the likelihood that not all women who engaged in premarital intercourse would have become pregnant. In any case, single women in New England during the colonial period were more likely to be sexually active than to belong to a church — in 1776 only about one out of five New Englanders had a religious affiliation. The lack of affiliation does not necessarily mean that most were irreligious (although some clearly were), but it does mean that their faith lacked public expression and organized influence.”

Though marriage remained important as an ideal in American culture, what changed was that procreative control became increasingly available — with fewer accidental pregnancies and more abortions, a powerful motivation for marriage disappeared. Unsurprisingly, at the same time, there were increasing worries about the breakdown of community and family, concerns that would turn into moral panic at various points. Antebellum America was in turmoil. This was concretely exemplified by the dropping birth rate that was already noticeable by mid-century (Timothy Crumrin, “Her Daily Concern:” Women’s Health Issues in Early 19th-Century Indiana) and was nearly halved from 1800 to 1900 (Debran Rowland, The Boundaries of Her Body). “The late 19th century and early 20th saw a huge increase in the country’s population (nearly 200 percent between 1860 and 1910) mostly due to immigration, and that population was becoming ever more urban as people moved to cities to seek their fortunes—including women, more of whom were getting college educations and jobs outside the home” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast). It was a period of crisis, not all that different from our present crisis, including the fear that the low birth rate of native-born white Americans, especially the endangered species of WASPs, meant being overtaken by the supposed dirty hordes of blacks, ethnics, and immigrants.

The promotion of birth control was considered a genuine threat to American society, maybe to all of Western Civilization. It was most directly a threat to traditional gender roles. Women could better control when they got pregnant, a decisive factor in the phenomenon of larger numbers of women entering college and the workforce. And with an epidemic of neurasthenia, this dilemma was worsened by the crippling effeminacy that neutered masculine potency. Was modern man, specifically the white ruling elite, up for the task of carrying on Western Civilization?

“Indeed, civilization’s demands on men’s nerve force had left their bodies positively effeminate. According to Beard, neurasthenics had the organization of “women more than men.” They possessed “a muscular system comparatively small and feeble.” Their dainty frames and feeble musculature lacked the masculine vigor and nervous reserves of even their most recent forefathers. “It is much less than a century ago, that a man who could not [drink] many bottles of wine was thought of as effeminate—but a fraction of a man.” No more. With their dwindling reserves of nerve force, civilized men were becoming increasingly susceptible to the weakest stimulants until now, “like babes, we find no safe retreat, save in chocolate and milk and water.” Sex was as debilitating as alcohol for neurasthenics. For most men, sex in moderation was a tonic. Yet civilized neurasthenics could become ill if they attempted intercourse even once every three months. As Beard put it, “there is not force enough left in them to reproduce the species or go through the process of reproducing the species.” Lacking even the force “to reproduce the species,” their manhood was clearly in jeopardy.” (Gail Bederman, Manliness and Civilization, pp. 87-88)

This led to a backlash that began before the Civil War with the early obscenity laws and abortion laws, but went into high gear with the 1873 Comstock laws that effectively shut down the free market of both ideas and products related to sexuality, including sex toys. This made it near impossible for most women to learn about birth control or obtain contraceptives and abortifacients. There was a felt need to restore order and that meant white male order of the WASP middle-to-upper classes, especially with the end of slavery, mass immigration of ethnics, urbanization and industrialization. The crisis wasn’t only ideological or political. The entire world had been falling apart for centuries with the ending of feudalism and the ancien regime, the last remnants of it in America being maintained through slavery. Motherhood being the backbone of civilization, it was believed that women’s sexuality had to be controlled and, unlike so much else that was out of control, it actually could be controlled through enforcement of laws.

Outlawing abortion is a particularly interesting example of social control. Even with laws in place, abortions remained commonly practiced by local doctors, even in many rural areas (American Christianity: History, Politics, & Social Issues). Corey Robin argues that the strategy hasn’t been to deny women’s agency but to assert their subordination (Denying the Agency of the Subordinate Class). This is why abortion laws were designed to target male doctors, although they rarely did, and not their female patients. Everything comes down to agency or its lack or loss, but our entire sense of agency is out of accord with our own human nature. We seek to control what is outside of us because our own sense of self is out of control. The legalistic worldview is inherently authoritarian, at the heart of what Julian Jaynes proposes as the post-bicameral project of consciousness, the contained self. But the container is weak and keeps leaking all over the place.

To bring it back to the original inspiration, Scott Preston wrote: “Quite obviously, our picture of the human being as an indivisible unit or monad of existence was quite wrong-headed, and is not adequate for the generation and re-generation of whole human beings. Our self-portrait or self-understanding of “human nature” was deficient and serves now only to produce and reproduce human caricatures. Many of us now understand that the authentic process of individuation hasn’t much in common at all with individualism and the supremacy of the self-interest.” The failure we face is one of identity, of our way of being in the world. As with neurasthenia in the past, we are now in a crisis of anxiety and depression, along with yet another moral panic about the declining white race. So, we get the likes of Steve Bannon, Donald Trump, and Jordan Peterson. We failed to resolve past conflicts and so they keep re-emerging.

“In retrospect, the omens of an impending crisis and disintegration of the individual were rather obvious,” Preston points out. “So, what we face today as “the crisis of identity” and the cognitive dissonance of “the New Normal” is not something really new — it’s an intensification of that disintegrative process that has been underway for over four generations now. It has now become acute. This is the paradox. The idea of the “individual” has become an unsustainable metaphor and moral ideal when the actual reality is “21st century schizoid man” — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in “the New Normal” of double-talk, double-think, double-standard, and double-bind.” We never were individuals. It was just a story we told ourselves, but there are others that could be told. Scott Preston offers an alternative narrative, that of individuation.

* * *

I found some potentially interesting books while skimming material on Google Books, in the course of researching Frederick Hollick and related topics. Among the titles below, I’ll share some text from one of them because it offers a good summary of sexuality at the time, specifically women’s sexuality. Obviously, the issue went far beyond sexuality itself, and going by my own theorizing I’d say it is yet another example of symbolic conflation, considering its direct relationship to abortion.

The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland
p. 34

WOMEN AND THE WOMB: The Emerging Birth Control Debate

The twentieth century dawned in America on a falling white birth rate. In 1800, an average of seven children were born to each “American-born white wife,” historians report. 29 By 1900, that number had fallen to roughly half. 30 Though there may have been several factors, some historians suggest that this decline—occurring as it did among young white women—may have been due to the use of contraceptives or abstinence, though few talked openly about it. 31

“In spite of all the rhetoric against birth control, the birthrate plummeted in the late nineteenth century in America and Western Europe (as it had in France the century before); family size was halved by the time of World War I,” notes Shari Thurer in The Myth of Motherhood. 32

As issues go, the “plummeting birthrate” among whites was a powder keg, sparking outcry as the “failure” of the privileged class to have children was contrasted with the “failure” of poor immigrants and minorities to control the number of children they were having. Criticism was loud and rampant. “The upper classes started the trend, and by the 1880s the swarms of ragged children produced by the poor were regarded by the bourgeoisie, so Emile Zola’s novels inform us, as evidence of the lower order’s ignorance and brutality,” Thurer notes. 33

But the seeds of this then-still nearly invisible movement had been planted much earlier. In the late 1700s, British political theorists began disseminating information on contraceptives as concerns of overpopulation grew among some classes. 34 Despite the separation of an ocean, by the 1820s, this information was “seeping” into the United States.

“Before the introduction of the Comstock laws, contraceptive devices were openly advertised in newspapers, tabloids, pamphlets, and health magazines,” Yalom notes. “Condoms had become increasingly popular since the 1830s, when vulcanized rubber (the invention of Charles Goodyear) began to replace the earlier sheepskin models.” 35 Vaginal sponges also grew in popularity during the 1840s, as women traded letters and advice on contraceptives. 36 Of course, prosecutions under the Comstock Act went a long way toward chilling public discussion.

Though Margaret Sanger’s is often the first name associated with the dissemination of information on contraceptives in the early United States, in fact, a woman named Sarah Grimke preceded her by several decades. In 1837, Grimke published the Letters on the Equality of the Sexes, a pamphlet containing advice about sex, physiology, and the prevention of pregnancy. 37

Two years later, Charles Knowlton published The Private Companion of Young Married People, becoming the first physician in America to do so. 38 Near this time, Frederick Hollick, a student of Knowlton’s work, “popularized” the rhythm method and douching. And by the 1850s, a variety of material was being published providing men and women with information on the prevention of pregnancy. And the advances weren’t limited to paper.

“In 1846, a diaphragm-like article called The Wife’s Protector was patented in the United States,” according to Marilyn Yalom. 39 “By the 1850s dozens of patents for rubber pessaries ‘inflated to hold them in place’ were listed in the U.S. Patent Office records,” Janet Farrell Brodie reports in Contraception and Abortion in 19th-Century America. 40 And, although many of these early devices were often more medical than prophylactic, by 1864 advertisements had begun to appear for “an India-rubber contrivance” similar in function and concept to the diaphragms of today. 41

“[B]y the 1860s and 1870s, a wide assortment of pessaries (vaginal rubber caps) could be purchased at two to six dollars each,” says Yalom. 42 And by 1860, following publication of James Ashton’s Book of Nature, the five most popular ways of avoiding pregnancy—“withdrawal, and the rhythm methods”—had become part of the public discussion. 43 But this early contraceptives movement in America would prove a victim of its own success. The openness and frank talk that characterized it would run afoul of the burgeoning “purity movement.”

“During the second half of the nineteenth century, American and European purity activists, determined to control other people’s sexuality, railed against male vice, prostitution, the spread of venereal disease, and the risks run by a chaste wife in the arms of a dissolute husband,” says Yalom. “They agitated against the availability of contraception under the assumption that such devices, because of their association with prostitution, would sully the home.” 44

Anthony Comstock, a “fanatical figure,” some historians suggest, was a charismatic “purist,” who, along with others in the movement, “acted like medieval Christians engaged in a holy war,” Yalom says. 45 It was a successful crusade. “Comstock’s dogged efforts resulted in the 1873 law passed by Congress that barred use of the postal system for the distribution of any ‘article or thing designed or intended for the prevention of conception or procuring of abortion’,” Yalom notes.

Comstock’s zeal would also lead to his appointment as a special agent of the United States Post Office with the authority to track and destroy “illegal” mailings, i.e., mail deemed to be “obscene” or in violation of the Comstock Act. Until his death in 1915, Comstock is said to have been energetic in his pursuit of offenders, among them Dr. Edward Bliss Foote, whose articles on contraceptive devices and methods were widely published. 46 Foote was indicted in January of 1876 for dissemination of contraceptive information. He was tried, found guilty, and fined $3,000. Though donations of more than $300 were made to help defray costs, Foote was reportedly more cautious after the trial. 47 That “caution” spread to others, some historians suggest.

Disorderly Conduct: Visions of Gender in Victorian America
By Carroll Smith-Rosenberg

Riotous Flesh: Women, Physiology, and the Solitary Vice in Nineteenth-Century America
by April R. Haynes

The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland

Rereading Sex: Battles Over Sexual Knowledge and Suppression in Nineteenth-century America
by Helen Lefkowitz Horowitz

Rewriting Sex: Sexual Knowledge in Antebellum America, A Brief History with Documents
by Helen Lefkowitz Horowitz

Imperiled Innocents: Anthony Comstock and Family Reproduction in Victorian America
by Nicola Kay Beisel

Against Obscenity: Reform and the Politics of Womanhood in America, 1873–1935
by Leigh Ann Wheeler

Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age
by Paul S. Boyer

American Sexual Histories
edited by Elizabeth Reis

Wash and Be Healed: The Water-Cure Movement and Women’s Health
by Susan Cayleff

From Eve to Evolution: Darwin, Science, and Women’s Rights in Gilded Age America
by Kimberly A. Hamlin

Manliness and Civilization: A Cultural History of Gender and Race in the United States, 1880-1917
by Gail Bederman

One Nation Under Stress: The Trouble with Stress as an Idea
by Dana Becker

The Agricultural Mind

Let me make an argument about individualism, rigid egoic boundaries, and hence Jaynesian consciousness. But I’ll come at it from a less typical angle. I’ve been reading much about diet, nutrition, and health. There are significant links between what we eat and so much else: gut health, hormonal regulation, immune system, and neurocognitive functioning. There are multiple pathways, one of which is direct, connecting the gut and the brain. The gut is sometimes called the second brain, but in evolutionary terms it is the first brain. To demonstrate one example of a connection, many are beginning to refer to Alzheimer’s as type 3 diabetes, and dietary interventions have reversed symptoms in clinical studies. Also, microbes and parasites have been shown to influence our neurocognition and psychology, even altering personality traits and behavior (e.g., Toxoplasma gondii).

One possibility to consider is the role of exorphins, which are addictive and can be blocked in the same way as opioids. Exorphin, in fact, means external morphine-like substance, in the way that endorphin means indwelling morphine-like substance. Exorphins are found in milk and wheat. Milk, in particular, stands out. Even though exorphins are found in other foods, it’s been argued that they are insignificant because they theoretically can’t pass through the gut barrier, much less the blood-brain barrier. Yet exorphins have been measured elsewhere in the human body. One explanation is gut permeability, which can be caused by many factors such as stress but also by milk itself. The purpose of milk is to get nutrients into the calf, and this is done by widening the spaces in the gut surface to allow more nutrients through the protective barrier. Exorphins get in as well and create a pleasurable experience to motivate the calf to drink more. Along with exorphins, grains and dairy also contain dopaminergic peptides, and dopamine is the other major addictive substance. It feels good to consume dairy, as with wheat, whether you’re a calf or a human, and so one wants more.

Addiction, to food or drugs or anything else, is a powerful force. And it is complex in what it affects, not only physiologically and psychologically but also on a social level. Johann Hari offers a great analysis in Chasing the Scream. He makes the case that addiction is largely about isolation and that the addict is the ultimate individual. It stands out to me that addiction and addictive substances have increased over the course of civilization. The growing of poppies, sugar cane, etc. came later on in civilization, as did the production of beer and wine (by the way, alcohol releases endorphins, sugar causes a serotonin high, and both activate the hedonic pathway). Also, grain and dairy were slow to catch on as a large part of the diet. Until recent centuries, most populations remained dependent on animal foods, including wild game. Americans, for example, ate large amounts of meat, butter, and lard from the colonial era through the 19th century. In 1900, Americans on average were getting only about 10% of their diet as carbohydrates, and sugar intake was minimal.

Another factor to consider is that low-carb diets can alter how the body and brain function. That is even more true if combined with intermittent fasting and the restricted eating times that would have been more common in the past. Taken together, earlier humans would have spent more time in ketosis (fat-burning mode, as opposed to glucose-burning), which dramatically affects human biology. The further back in history one goes, the more time people probably spent in ketosis. One difference with ketosis is that cravings and food addictions disappear. It’s a non-addictive or maybe even anti-addictive state of mind. Many hunter-gatherer tribes can go days without eating and it doesn’t appear to bother them, and that is typical of ketosis. This was also observed of Mongol warriors who could ride and fight for days on end without tiring or needing to stop for food. What is also different about hunter-gatherers and similar traditional societies is how communal they are or were and how much more expansive their identities are in belonging to a group. Anthropological research shows how hunter-gatherers often have a sense of personal space that extends into the environment around them. What if that isn’t merely cultural but something to do with how their bodies and brains operate? Maybe diet even plays a role. Hold that thought for a moment.

Now go back to the two staples of the modern diet, grains and dairy. Besides exorphins and dopaminergic substances, they also have high levels of glutamate, as part of gluten and casein respectively. Dr. Katherine Reid is a biochemist whose daughter was diagnosed with severe autism. She went into research mode and experimented with supplementation and then diet. Many things seemed to help, but the greatest result came from restriction of glutamate, a difficult challenge as it is a common food additive. This requires going on a largely whole foods diet, that is to say eliminating processed foods. But when dealing with a serious issue, it is worth the effort. Dr. Reid’s daughter showed improvement to such a degree that she was kicked out of the special needs school. After being on this diet for a while, she socialized and communicated normally like any other child, something she was previously incapable of. Keep in mind that glutamate is necessary as a foundational neurotransmitter in modulating communication between the gut and brain. But typically we only get small amounts of it, as opposed to the large doses found in the modern diet.

Glutamate is also implicated in schizophrenia: “The most intriguing evidence came when the researchers gave germ-free mice fecal transplants from the schizophrenic patients. They found that ‘the mice behaved in a way that is reminiscent of the behavior of people with schizophrenia,’ said Julio Licinio, who co-led the new work with Wong, his research partner and spouse. Mice given fecal transplants from healthy controls behaved normally. ‘The brains of the animals given microbes from patients with schizophrenia also showed changes in glutamate, a neurotransmitter that is thought to be dysregulated in schizophrenia,’ he added. The discovery shows how altering the gut can influence an animal’s behavior” (Roni Dengler, Researchers Find Further Evidence That Schizophrenia is Connected to Our Guts; reporting on Peng Zheng et al., The gut microbiome from patients with schizophrenia modulates the glutamate-glutamine-GABA cycle and schizophrenia-relevant behaviors in mice, Science Advances journal). And glutamate is involved in other conditions as well, such as in relation to GABA: “But how do microbes in the gut affect [epileptic] seizures that occur in the brain? Researchers found that the microbe-mediated effects of the Ketogenic Diet decreased levels of enzymes required to produce the excitatory neurotransmitter glutamate. In turn, this increased the relative abundance of the inhibitory neurotransmitter GABA. Taken together, these results show that the microbe-mediated effects of the Ketogenic Diet have a direct effect on neural activity, further strengthening support for the emerging concept of the ‘gut-brain’ axis” (Jason Bush, Important Ketogenic Diet Benefit is Dependent on the Gut Microbiome). Glutamate is one neurotransmitter among many that can be affected in a similar manner; e.g., serotonin is also produced in the gut.

That reminds me of propionate, a short-chain fatty acid. It is another substance normally taken in at a low level. Certain foods, including grains and dairy, contain it. The problem is that, as a useful preservative, it has been generously added to the food supply. Research on rodents shows injecting them with propionate causes autistic-like behaviors. And other rodent studies show how this stunts learning ability and causes repetitive behavior (both related to the autistic demand for the familiar), as too much propionate entrenches mental patterns through the mechanism that gut microbes use to communicate to the brain how to return to a needed food source. Autistics, along with cravings for propionate-containing foods, tend to have larger populations of a particular gut microbe that produces propionate. Since antibiotics kill off microbes, this might be why they can help with autism. Depression, by contrast, is associated with the lack of certain microbes that produce butyrate, another important substance that also is found in certain foods (Mireia Valles-Colomer et al., The neuroactive potential of the human gut microbiota in quality of life and depression). Depending on the specific gut dysbiosis, diverse neurocognitive conditions can result. And in affecting the microbiome, changes in autism can be achieved through a ketogenic diet, which reduces the microbiome (similar to an antibiotic); this presumably takes care of the problematic microbes and readjusts the gut from dysbiosis to a healthier balance.

As with propionate, exorphins injected into rats will likewise elicit autistic-like behaviors. By two different pathways, the body produces exorphins and propionate from the consumption of grains and dairy: the former from the breakdown of proteins, the latter produced by gut bacteria in the breakdown of some grains and refined carbohydrates (on top of the propionate used as a food additive, which is added to other foods as well; also, at least in rodents, artificial sweeteners increase propionate levels). This is part of the explanation for why many autistics have responded well to low-carb ketosis, specifically paleo diets that restrict both wheat and dairy. Ketones themselves play a role as well: they use the same transporters as propionate and so block its buildup in cells, and, of course, ketones offer cells a different energy source as a replacement for glucose, which alters how cells function, specifically neurocognitive functioning and its attendant psychological effects.

What stands out to me about autism is how isolating it is. The repetitive behavior and focus on objects resonates with extreme addiction. Both conditions block normal human relating and create an obsessive mindset that, in the most extreme forms, blocks out all else. I wonder if all of us moderns are simply expressing milder varieties of this biological and neurological phenomenon. And this might be the underpinning of our hyper-individualistic society, with the earliest precursors showing up in the Axial Age following what Julian Jaynes hypothesized as the breakdown of the much more other-oriented bicameral mind. What if our egoic individuality is the result of our food system, as part of the civilizational project of mass agriculture?

* * *

Mongolian Diet and Fasting

For anyone who is curious to learn more, the original point of interest for me was a quote by Jack Weatherford in his book Genghis Khan and the Making of the Modern World: “The Chinese noted with surprise and disgust the ability of the Mongol warriors to survive on little food and water for long periods; according to one, the entire army could camp without a single puff of smoke since they needed no fires to cook. Compared to the Jurched soldiers, the Mongols were much healthier and stronger. The Mongols consumed a steady diet of meat, milk, yogurt, and other dairy products, and they fought men who lived on gruel made from various grains. The grain diet of the peasant warriors stunted their bones, rotted their teeth, and left them weak and prone to disease. In contrast, the poorest Mongol soldier ate mostly protein, thereby giving him strong teeth and bones. Unlike the Jurched soldiers, who were dependent on a heavy carbohydrate diet, the Mongols could more easily go a day or two without food.” By the way, that biography was written by an anthropologist who lived among and studied the Mongols for years. It is about the historical Mongols, but filtered through the direct experience of still existing Mongol people who have maintained a traditional diet and lifestyle longer than most other populations. Their diet wasn’t ketogenic only because it was low-carb; it also involved fasting.

In Mongolia, the Tangut Country, and the Solitudes of Northern Tibet, vol. 1 (1876), Nikolaĭ Mikhaĭlovich Przhevalʹskiĭ writes in the second note on p. 65, under the section Calendar and Year-Cycle: “On the New Year’s Day, or White Feast of the Mongols, see ‘Marco Polo’, 2nd ed. i. p. 376-378, and ii. p. 543. The monthly festival days, properly for the Lamas days of fasting and worship, seem to differ locally. See note in same work, i. p. 224, and on the Year-cycle, i. p. 435.” This is alluded to in another text, which describes how such things as fasting were the norm of that time: “It is well known that both medieval European and traditional Mongolian cultures emphasized the importance of eating and drinking. In premodern societies these activities played a much more significant role in social intercourse as well as in religious rituals (e.g., in sacrificing and fasting) than nowadays” (Antti Ruotsala, Europeans and Mongols in the middle of the thirteenth century, 2001). A science journalist trained in biology, Dyna Rochmyaningsih, also mentions this: “As a spiritual practice, fasting has been employed by many religious groups since ancient times. Historically, ancient Egyptians, Greeks, Babylonians, and Mongolians believed that fasting was a healthy ritual that could detoxify the body and purify the mind” (Fasting and the Human Mind).

Mongol shamans and priests fasted, no different than in so many other religions, but so did other Mongols — more from Przhevalʹskiĭ’s 1876 account showing the standard feast and fast cycle of many traditional ketogenic diets: “The gluttony of this people exceeds all description. A Mongol will eat more than ten pounds of meat at one sitting, but some have been known to devour an average-sized sheep in twenty-four hours! On a journey, when provisions are economized, a leg of mutton is the ordinary daily ration for one man, and although he can live for days without food, yet, when once he gets it, he will eat enough for seven” (see more quoted material in Diet of Mongolia). Fasting was also noted of earlier Mongols, such as Genghis Khan: “In the spring of 1211, Jenghis Khan summoned his fighting forces […] For three days he fasted, neither eating nor drinking, but holding converse with the gods. On the fourth day the Khakan emerged from his tent and announced to the exultant multitude that Heaven had bestowed on him the boon of victory” (Michael Prawdin, The Mongol Empire, 1967). Even before he became Khan, this was his practice, as was common among the Mongols, such that it became a communal ritual for the warriors:

“When he was still known as Temujin, without tribe and seeking to retake his kidnapped wife, Genghis Khan went to Burkhan Khaldun to pray. He stripped off his weapons, belt, and hat – the symbols of a man’s power and stature – and bowed to the sun, sky, and mountain, first offering thanks for their constancy and for the people and circumstances that sustained his life. Then, he prayed and fasted, contemplating his situation and formulating a strategy. It was only after days in prayer that he descended from the mountain with a clear purpose and plan that would result in his first victory in battle. When he was elected Khan of Khans, he again retreated into the mountains to seek blessing and guidance. Before every campaign against neighboring tribes and kingdoms, he would spend days in Burkhan Khaldun, fasting and praying. By then, the people of his tribe had joined in on his ritual at the foot of the mountain, awaiting his return” (Dr. Hyun Jin Preston Moon, Genghis Khan and His Personal Standard of Leadership).

As an interesting side note, the Mongol population has been studied to some extent in one area of relevance. In Down’s Anomaly (1976), Smith et al. write that, “The initial decrease in the fasting blood sugar was greater than that usually considered normal and the return to fasting blood sugar level was slow. The results suggested increased sensitivity to insulin. Benda reported the initial drop in fasting blood sugar to be normal but the absolute blood sugar level after 2 hours was lower for mongols than for controls.” That is probably the result of a traditional low-carb diet that had been maintained continuously since before history. For some further context, I noticed some discussion about the Mongolian keto diet (Reddit, r/keto, TIL that Ghenghis Khan and his Mongol Army ate a mostly keto based diet, consisting of lots of milk and cheese. The Mongols were specially adapted genetically to digest the lactase in milk and this made them easier to feed.) that was inspired by the scientific documentary “The Evolution of Us” (presently available on Netflix and elsewhere).

* * *

3/30/19 – An additional comment: I briefly mentioned sugar, that it causes a serotonin high and activates the hedonic pathway. I also noted that it was late in civilization when sources of sugar were cultivated and, I could add, even later when sugar became cheap enough to be common. Even into the 1800s, sugar was minimal and still often considered more as medicine than food.

To extend this thought, it isn’t only sugar in general but specific forms of it. Fructose, in particular, has become widespread because of the United States government subsidizing corn agriculture, which has created a greater corn yield than humans can consume. So, what doesn’t get fed to animals or turned into ethanol mostly is made into high-fructose corn syrup, which is then added to almost every processed food and beverage imaginable.

Fructose is not like other sugars; it is metabolized differently, and this was important for early hominid survival and so shaped human evolution. It might have played a role in fasting and feasting. In 100 Million Years of Food, Stephen Le writes that, “Many hypotheses regarding the function of uric acid have been proposed. One suggestion is that uric acid helped our primate ancestors store fat, particularly after eating fruit. It’s true that consumption of fructose induces production of uric acid, and uric acid accentuates the fat-accumulating effects of fructose. Our ancestors, when they stumbled on fruiting trees, could gorge until their fat stores were pleasantly plump and then survive for a few weeks until the next bounty of fruit was available” (p. 42).

That makes sense to me, but he goes on to argue against this possible explanation. “The problem with this theory is that it does not explain why only primates have this peculiar trait of triggering fat storage via uric acid. After all, bears, squirrels, and other mammals store fat without using uric acid as a trigger.” This is where Le’s knowledge is lacking, for he never discusses ketosis, which has been centrally important for humans, unlike for other animals. If uric acid increases fat production, that would be helpful for fattening up for the next starvation period when the body returned to ketosis. So, it would be a regular switching back and forth between formation of uric acid that stores fat and formation of ketones that burns fat.

That is fine and dandy under natural conditions. Excess fructose, however, is a whole other matter. It has been strongly associated with metabolic syndrome. One pathway of causation is the increased production of uric acid. This can lead to gout but other things as well. It’s a mixed bag. “While it’s true that higher levels of uric acid have been found to protect against brain damage from Alzheimer’s, Parkinson’s, and multiple sclerosis, high uric acid unfortunately increases the risk of brain stroke and poor brain function” (p. 43).

The potential side effects of uric acid overdose are related to other problems I’ve discussed in relation to the agricultural mind. “A recent study also observed that high uric acid levels are associated with greater excitement-seeking and impulsivity, which the researchers noted may be linked to attention deficit hyperactivity disorder (ADHD)” (p. 43). The problems of sugar go far beyond mere physical disease. It’s one more factor in the drastic transformation of the human mind.

* * *

4/2/19 – More info: There are certain animal fats, the omega-3 fatty acids EPA and DHA, that are essential to human health. These were abundant in the hunter-gatherer diet. But over the history of agriculture, they have become less common.

This is associated with psychiatric disorders and general neurocognitive problems, including those already mentioned above in the post. Agriculture and industrialization have replaced these healthy oils with overly processed oils that are high in linoleic acid (LA), an omega-6 fatty acid. LA interferes with the body’s use of omega-3 fatty acids.

The Brain Needs Animal Fat
by Georgia Ede

Western Individuality Before the Enlightenment Age

The Culture Wars of the Late Renaissance: Skeptics, Libertines, and Opera
by Edward Muir
Introduction
pp. 5-7

One of the most disturbing sources of late-Renaissance anxiety was the collapse of the traditional hierarchic notion of the human self. Ancient and medieval thought depicted reason as governing the lower faculties of the will, the passions, and the body. Renaissance thought did not so much promote “individualism” as it cut away the intellectual props that presented humanity as the embodiment of a single divine idea, thereby forcing a desperate search for identity in many. John Martin has argued that during the Renaissance, individuals formed their sense of selfhood through a difficult negotiation between inner promptings and outer social roles. Individuals during the Renaissance looked both inward for emotional sustenance and outward for social assurance, and the friction between the inner and outer selves could sharpen anxieties.2 The fragmentation of the self seems to have been especially acute in Venice, where the collapse of aristocratic marriage structures led to the formation of what Virginia Cox has called the single self, most clearly manifest in the works of several women writers who argued for the moral and intellectual equality of women with men.3 As a consequence of the fragmented understanding of the self, such thinkers as Montaigne became obsessed with what was then the new concept of human psychology, a term in fact coined in this period.4 A crucial problem in the new psychology was to define the relation between the body and the soul, in particular to determine whether the soul died with the body or was immortal. With its tradition of Averroist readings of Aristotle, some members of the philosophy faculty at the University of Padua recurrently questioned the Christian doctrine of the immortality of the soul as unsound philosophically. Other hierarchies of the human self came into question.
Once reason was dethroned, the passions were given a higher value, so that the heart could be understood as a greater force than the mind in determining human conduct. When the body itself slipped out of its long-despised position, the sexual drives of the lower body were liberated and thinkers were allowed to consider sex, independent of its role in reproduction, a worthy manifestation of nature. The Paduan philosopher Cesare Cremonini’s personal motto, “Intus ut libet, foris ut moris est,” does not quite translate to “If it feels good, do it,” but it comes very close. The collapse of the hierarchies of human psychology even altered the understanding of the human senses. The sense of sight lost its primacy as the superior faculty, the source of “enlightenment”; the Venetian theorists of opera gave that place in the hierarchy to the sense of hearing, the faculty that most directly channeled sensory impressions to the heart and passions.

Historical and Philosophical Issues in the Conservation of Cultural Heritage
edited by Nicholas Price, M. Kirby Talley, and Alessandra Melucco Vaccaro
Reading 5: “The History of Art as a Humanistic Discipline”
by Erwin Panofsky
pp. 83-85

Nine days before his death Immanuel Kant was visited by his physician. Old, ill and nearly blind, he rose from his chair and stood trembling with weakness and muttering unintelligible words. Finally his faithful companion realized that he would not sit down again until the visitor had taken a seat. This he did, and Kant then permitted himself to be helped to his chair and, after having regained some of his strength, said, ‘Das Gefühl für Humanität hat mich noch nicht verlassen’—’The sense of humanity has not yet left me’. The two men were moved almost to tears. For, though the word Humanität had come, in the eighteenth century, to mean little more than politeness and civility, it had, for Kant, a much deeper significance, which the circumstances of the moment served to emphasize: man’s proud and tragic consciousness of self-approved and self-imposed principles, contrasting with his utter subjection to illness, decay and all that implied in the word ‘mortality.’

Historically the word humanitas has had two clearly distinguishable meanings, the first arising from a contrast between man and what is less than man; the second between man and what is more. In the first case humanitas means a value, in the second a limitation.

The concept of humanitas as a value was formulated in the circle around the younger Scipio, with Cicero as its belated, yet most explicit spokesman. It meant the quality which distinguishes man, not only from animals, but also, and even more so, from him who belongs to the species homo without deserving the name of homo humanus; from the barbarian or vulgarian who lacks pietas and παιδεία, that is, respect for moral values and that gracious blend of learning and urbanity which we can only circumscribe by the discredited word “culture.”

In the Middle Ages this concept was displaced by the consideration of humanity as being opposed to divinity rather than to animality or barbarism. The qualities commonly associated with it were therefore those of frailty and transience: humanitas fragilis, humanitas caduca.

Thus the Renaissance conception of humanitas had a two-fold aspect from the outset. The new interest in the human being was based both on a revival of the classical antithesis between humanitas and barbaritas, or feritas, and on a survival of the mediaeval antithesis between humanitas and divinitas. When Marsilio Ficino defines man as a “rational soul participating in the intellect of God, but operating in a body,” he defines him as the one being that is both autonomous and finite. And Pico’s famous ‘speech’ ‘On the Dignity of Man’ is anything but a document of paganism. Pico says that God placed man in the center of the universe so that he might be conscious of where he stands, and therefore free to decide ‘where to turn.’ He does not say that man is the center of the universe, not even in the sense commonly attributed to the classical phrase, “man the measure of all things.”

It is from this ambivalent conception of humanitas that humanism was born. It is not so much a movement as an attitude which can be defined as the conviction of the dignity of man, based on both the insistence on human values (rationality and freedom) and the acceptance of human limitations (fallibility and frailty); from this, two postulates result: responsibility and tolerance.

Small wonder that this attitude has been attacked from two opposite camps whose common aversion to the ideas of responsibility and tolerance has recently aligned them in a united front. Entrenched in one of these camps are those who deny human values: the determinists, whether they believe in divine, physical or social predestination, the authoritarians, and those “insectolatrists” who profess the all-importance of the hive, whether the hive be called group, class, nation or race. In the other camp are those who deny human limitations in favor of some sort of intellectual or political libertinism, such as aestheticists, vitalists, intuitionists and hero-worshipers. From the point of view of determinism, the humanist is either a lost soul or an ideologist. From the point of view of authoritarianism, he is either a heretic or a revolutionary (or a counterrevolutionary). From the point of view of “insectolatry,” he is a useless individualist. And from the point of view of libertinism he is a timid bourgeois.

Erasmus of Rotterdam, the humanist par excellence, is a typical case in point. The church suspected and ultimately rejected the writings of this man who had said: “Perhaps the spirit of Christ is more largely diffused than we think, and there are many in the community of saints who are not in our calendar.” The adventurer Ulrich von Hutten despised his ironical skepticism and his unheroic love of tranquillity. And Luther, who insisted that “no man has power to think anything good or evil, but everything occurs in him by absolute necessity,” was incensed by a belief which manifested itself in the famous phrase: “What is the use of man as a totality [that is, of man endowed with both a body and a soul], if God would work in him as a sculptor works in clay, and might just as well work in stone?”

Food and Faith in Christian Culture
edited by Ken Albala and Trudy Eden
Chapter 3: “The Food Police”
Sumptuary Prohibitions On Food In The Reformation
by Johanna B. Moyer
pp. 80-83

Protestants too employed a disease model to explain the dangers of luxury consumption. Luxury damaged the body politic, leading to “most incurable sickness of the universal body” (33). Protestant authors also employed Galenic humor theory, arguing that “continuous superfluous expense” unbalanced the humors, leading to fever and illness (191). However, Protestants used this model less often than Catholic authors who attacked luxury. Moreover, those Protestants who did employ the Galenic model used it in a different manner than their Catholic counterparts.

Protestants also drew parallels between the damage caused by luxury to the human body and the damage excess inflicted on the French nation. Rather than a disease metaphor, however, many Protestant authors saw luxury more as a “wound” to the body politic. For Protestants the danger of luxury was not only the buildup of humors within the body politic of France but the constant “bleeding out” of humor from the body politic in the form of cash to pay for imported luxuries. The flow of cash mimicked the flow of blood from a wound in the body. Most Protestants did not see luxury foodstuffs as the problem; indeed, most saw food in moderation as healthy for the body. Even luxury apparel could be healthy for the body politic in moderation, if it was domestically produced and consumed. Such luxuries circulated the “blood” of the body politic, creating employment and feeding the lower orders. 72 De La Noue made this distinction clear. He dismissed the need to individually discuss the damage done by each kind of luxury that was rampant in France in his time as being as pointless “as those who have invented auricular confession have divided mortal and venial sins into infinity of roots and branches.” Rather, he argued, the damage done by luxury was in its “entire bulk” to the patrimonies of those who purchased luxuries and to the kingdom of France (116). For the Protestants, luxury did not pose an internal threat to the body and salvation of the individual. Rather, the use of luxury posed an external threat to the group, to the body politic of France.

The Reformation And Sumptuary Legislation

Catholics, as we have seen, called for antiluxury regulations on food and banqueting, hoping to curb overeating and the damage done by gluttony to the body politic. Although some Protestants also wanted to restrict food and banqueting, more often French Protestants called for restrictions on clothing and foreign luxuries. These differing views of luxury during and after the French Wars of Religion not only give insight into the theological differences between these two branches of Christianity but also provide insight into the larger pattern of the sumptuary regulation of food in Europe in this period. Sumptuary restrictions were one means by which Catholics and Protestants enforced their theology in the post-Reformation era.

Although Catholicism is often correctly cast as the branch of Reformation Christianity that gave the individual the least control over their salvation, it was also true that the individual Catholic’s path to salvation depended heavily on ascetic practices. The responsibility for following these practices fell on the individual believer. Sumptuary laws on food in Catholic areas reinforced this responsibility by emphasizing what foods should and should not be eaten and mirrored the central theological practice of fasting for the atonement of sin. Perhaps the historiographical cliché that it was only Protestantism which gave the individual believer control of his or her salvation needs to be qualified. The arithmetical piety of Catholicism ultimately placed the onus on the individual to atone for each sin. Moreover, sumptuary legislation tried to steer the Catholic believer away from the more serious sins that were associated with overeating, including gluttony, lust, anger, and pride.

Catholic theology meshed nicely with the revival of Galenism that swept through Europe in this period. Galenists preached that meat eating, overeating, and the imbalance in humors which accompanied these practices, led to behavioral changes, including an increased sex drive and increased aggression. These physical problems mirrored the spiritual problems that luxury caused, including fornication and violence. This is why so many authors blamed the French nobility for the luxury problem in France. Nobles were seen not only as more likely to bear the expense of overeating but also as more prone to violence. 73

Galenism also meshed nicely with Catholicism because Catholicism was a very physical religion in which the control of the physical body figured prominently in the believer’s path to salvation. Not surprisingly, by the seventeenth century, Protestants gravitated away from Galenism toward the chemical view of the body offered by Paracelsus. 74 Catholic sumptuary law embodied a Galenic view of the body where sin and disease were equated and therefore pushed regulations that advocated each person’s control of his or her own body.

Protestant legislators, conversely, were not interested in the individual diner. Sumptuary legislation in Protestant areas ran the gamut from control of communal displays of eating, in places like Switzerland and Germany, to little or no concern with restrictions on luxury foods, as in England. For Protestants, it was the communal role of food and luxury use that was important. Hence the laws in Protestant areas targeted food in the context of weddings, baptisms, and even funerals. The English did not even bother to enact sumptuary restrictions on food after their break with Catholicism. The French Protestants who wrote on luxury glossed over the deleterious effects of meat eating, even proclaiming it to be healthful for the body while producing diatribes against the evils of imported luxury apparel. The use of Galenism in the French Reformed treatises suggests that Protestants too were concerned with a “body,” but it was not the individual body of the believer that worried Protestant legislators. Sumptuary restrictions were designed to safeguard the mystical body of believers, or the “Elect” in the language of Calvinism. French Protestants used the Galenic model of the body to discuss the damage that luxury did to the body of believers in France, but ultimately to safeguard the economic welfare of all French subjects. The Calvinists of Switzerland used sumptuary legislation on food to protect those predestined for salvation from the dangerous eating practices of members of the community whose overeating suggested they might not be saved.

Ultimately, sumptuary regulations in the Reformation spoke to the Christian practice of fasting. Fasting served very different functions in Protestant and Catholic theology. Raymond Mentzer has suggested that Protestants “modified” the Catholic practice of fasting during the Reformation. The major reformers, including Luther, Calvin, and Zwingli, all rejected fasting as a path to salvation. 75 For Protestants, fasting was a “liturgical rite,” part of the cycle of worship and a practice that served to “bind the community.” Fasting was often a response to adversity, as during the French Wars of Religion. For Catholics, fasting was an individual act, just as sumptuary legislation in Catholic areas targeted individual diners. However, for Protestants, fasting was a communal act, “calling attention to the body of believers.” 76 The symbolic nature of fasting, Mentzer argues, reflected Protestant rejection of transubstantiation. Catholics continued to believe that God was physically present in the host, but Protestants believed His presence was only spiritual. When Catholics took Communion, they fasted to cleanse their own bodies so as to receive the real, physical body of Christ. Protestants, on the other hand, fasted as spiritual preparation because it was their spirits that connected with the spirit of Christ in the Eucharist. 77

“…some deeper area of the being.”

Alec Nevala-Lee shares a passage from Colin Wilson’s Mysteries (see Magic and the art of will). It elicits many thoughts, but I want to focus on the two main related aspects: the self and the will.

The main thing Wilson is talking about is hyper-individualism — the falseness and superficiality, constraint and limitation of anxiety-driven ‘consciousness’, the conscious personality of the ego-self. This is what denies the bundled self and the extended self, the vaster sense of being that challenges the socio-psychological structure of the modern mind. We defend our thick boundaries with great care for fear of what might get in, but this locks us in a prison cell of our own making. In not allowing ourselves to be affected, we make ourselves ineffective or at best only partly effective toward paltry ends. It’s not only a matter of doing “something really well” for we don’t really know what we want to do, as we’ve become disconnected from deeper impulses and broader experience.

For about as long as I can remember, the notion of ‘free will’ has never made sense to me. It isn’t a philosophical disagreement. Rather, in my own experience and in my observation of others, it simply offers no compelling explanation or valid meaning, much less deep insight. It intuitively makes no sense, which is to say it can only make sense if we never carefully think about it with probing awareness and open-minded inquiry. To the degree there is a ‘will’ is to the degree it is inseparable from the self. That is to say the self never wills anything for the self is and can only be known through the process of willing, which is simply to say through impulse and action. We are what we do, but we never know why we do what we do. We are who we are and we don’t know how to be otherwise.

There is no way to step back from the self in order to objectively see and act upon the self. That would require yet another self. The attempt to impose a will upon the self would lead to an infinite regress of selves. That would be a pointless preoccupation, although as entertainments go it is popular these days. A more worthy activity, and maybe a greater achievement, is to stop trying to contain ourselves and instead to align with a greater sense of self. Will wills itself. And the only freedom that the will possesses is to be itself. That is what some might consider purpose or telos, one’s reason for being or rather one’s reason in being.

No freedom exists in isolation. To believe otherwise is a trap. The precise trap involved is addiction, which is the will driven by compulsion. After all, the addict is the ultimate individual, so disconnected within a repeating pattern of behavior as to be unable to affect or be affected. Complete autonomy is impotence. The only freedom is in relationship, both to the larger world and the larger sense of self. It is in the ‘other’ that we know ourselves. We can only be free in not trying to impose freedom, in not struggling to control and manipulate. True will, if we are to speak of such a thing, is the opposite of willfulness. We are only free to the extent we don’t think in the explicit terms of freedom. It is not a thought in the mind but a way of being in the world.

We know that the conscious will is connected to the narrow, conscious part of the personality. One of the paradoxes observed by [Pierre] Janet is that as the hysteric becomes increasingly obsessed with anxiety—and the need to exert his will—he also becomes increasingly ineffective. The narrower and more obsessive the consciousness, the weaker the will. Every one of us is familiar with the phenomenon. The more we become racked with anxiety to do something well, the more we are likely to botch it. It is [Viktor] Frankl’s “law of reversed effort.” If you want to do something really well, you have to get into the “right mood.” And the right mood involves a sense of relaxation, of feeling “wide open” instead of narrow and enclosed…

As William James remarked, we all have a lifelong habit of “inferiority to our full self.” We are all hysterics; it is the endemic disease of the human race, which clearly implies that, outside our “everyday personality,” there is a wider “self” that possesses greater powers than the everyday self. And this is not the Freudian subconscious. Like the “wider self” of Janet’s patients, it is as conscious as the “contracted self.” We are, in fact, partially aware of this “other self.” When a man “unwinds” by pouring himself a drink and kicking off his shoes, he is adopting an elementary method of relaxing into the other self. When an overworked housewife decides to buy herself a new hat, she is doing the same thing. But we seldom relax far enough; habit—and anxiety—are too strong…Magic is the art and science of using the will. Not the ordinary will of the contracted ego but the “true will” that seems to spring from some deeper area of the being.

Colin Wilson, Mysteries

Delirium of Hyper-Individualism

Individualism is a strange thing. For anyone who has spent much time meditating, it’s obvious that there is no there there. It slips through one’s grasp like an ancient philosopher trying to study aether. The individual self is the modernization of the soul. Like the ghost in the machine and the god in the gaps, it is a theological belief defined by its absence in the world. It’s a social construct, a statement that is easily misunderstood.

In modern society, individualism has been raised up to an entire ideological worldview. It is all-encompassing, having infiltrated nearly every aspect of our social lives and become internalized as a cognitive frame. Traditional societies didn’t have this obsession with an idealized self as isolated and autonomous. Go back far enough and the records seem to show societies that didn’t even have a concept, much less an experience, of individuality.

Yet for all its dominance, the ideology of individualism is superficial. It doesn’t explain much of our social order and personal behavior. We don’t act as if we actually believe in it. It’s a convenient fiction that we so easily disregard when inconvenient, as if it isn’t all that important after all. In our most direct experience, individuality simply makes no sense. We are social creatures through and through. We don’t know how to be anything else, no matter what stories we tell ourselves.

The ultimate value of this individualistic ideology is, ironically, as social control and social justification.

The wealthy, the powerful and privileged, even the mere middle class to a lesser degree — they get to be individuals when everything goes right. They get all the credit and all the benefits. All of society serves them because they deserve it. But when anything goes wrong, they hire lawyers who threaten anyone who challenges them or they settle out of court, they use their crony connections and regulatory capture to avoid consequences, they declare bankruptcy when one of their business ventures fails, and they endlessly scapegoat those far below them in the social hierarchy.

The profits and benefits are privatized while the costs are externalized. This is socialism for the rich and capitalism for the poor, with the middle class getting some combination of the two. This is why democratic rhetoric justifies plutocracy while authoritarianism keeps the masses in line. This stark reality is hidden behind the utopian ideal of individualism with its claims of meritocracy and a just world.

The fact of the matter is that no individual ever became successful. Let’s do an experiment. Take an individual baby, let’s say the little white male baby of wealthy parents with their superior genetics. Now leave that baby in the woods to raise himself into adulthood and bootstrap himself into a self-made man. I wonder how well that would work for his survival and future prospects. If privilege and power, if opportunity and resources, if social capital and collective inheritance, if public goods and the commons have no major role to play such that the individual is solely responsible to himself, we should expect great things from this self-raised wild baby.

But if it turns out that hyper-individualism is total bullshit, we should instead expect that baby to die of exposure and starvation or become the prey of a predator feeding its own baby without any concerns for individuality. Even simply leaving a baby untouched and neglected in an orphanage will cause failure to thrive and death. Without social support, our very will to live disappears. Social science research has proven the immense social and environmental influences on humans. For a long time now there has been no real debate about this social reality of our shared humanity.

So why does this false belief and false idol persist? What horrible result do we fear if we were ever to be honest with ourselves? I get that the ruling elite are ruled by their own egotistic pride and narcissism. I get that the comfortable classes are attached to their comforting lies. But why do the rest of us go along with their self-serving delusions? It is the strangest thing in the world for a society to deny it is a society.

“illusion of a completed, unitary self”

The Voices Within:
The History and Science of How We Talk to Ourselves
by Charles Fernyhough
Kindle Locations 3337-3342

And we are all fragmented. There is no unitary self. We are all in pieces, struggling to create the illusion of a coherent “me” from moment to moment. We are all more or less dissociated. Our selves are constantly constructed and reconstructed in ways that often work well, but often break down. Stuff happens, and the center cannot hold. Some of us have more fragmentation going on, because of those things that have happened; those people face a tougher challenge of pulling it all together. But no one ever slots in the last piece and makes it whole. As human beings, we seem to want that illusion of a completed, unitary self, but getting there is hard work. And anyway, we never get there.

Kindle Locations 3357-3362

This is not an uncommon story among people whose voices go away. Someone is there, and then they’re not there anymore. I was reminded of what I had been told about the initial onset of voice-hearing: how it can be like dialing into a transmission that has always been present. “Once you hear the voices,” wrote Mark Vonnegut of his experiences, “you realise they’ve always been there. It’s just a matter of being tuned to them.” If you can tune in to something, then perhaps you can also tune out.

Kindle Locations 3568-3570

It is also important to bear in mind that for many voice-hearers the distinction between voices and thoughts is not always clear-cut. In our survey, a third of the sample reported either a combination of auditory and thought-like voices, or experiences that fell somewhere between auditory voices and thoughts.

* * *

Charles Fernyhough recommends the best books on Streams of Consciousness
from Five Books

Charles Fernyhough Listens in on Thought Itself in ‘The Voices Within’
by Raymond Tallis, WSJ

Neuroscience: Listening in on yourself
by Douwe Draaisma, Nature

The Song Of The Psyche
by Megan Sherman, Huffington Post

Choral Singing and Self-Identity

I haven’t previously given any thought to choral singing. I’ve never been much of a singer, not even in private. My desire to sing in public with a group of others is next to non-existent. So, it never occurred to me what might be the social experience and psychological result of being in a choir.

It appears something odd goes on in such a situation. Maybe it’s not odd, but it has come to seem odd to us moderns. This kind of group activity has become uncommon. In our hyper-individualistic society, we forget how much we are social animals. Our individualism is dependent on highly unusual conditions that wouldn’t have existed for most of civilization.

I was reminded a while back of this social aspect when reading about Galen in the Roman Empire. Individualism is not the normal state or, one might argue, the healthy state of humanity. It is rather difficult to create individuals and, even then, our individuality is superficial and tenuous. Humans so quickly lump themselves into groups.

This evidence about choral singing makes me wonder about earlier societies. What role did music play, specifically group singing (along with dancing and ritual), in creating particular kinds of cultures and social identities? And how might that have related to pre-literate memory systems that rooted people in a concrete sense of the world, such as Aboriginal songlines?

I’ve been meaning to write about Lynne Kelly’s book, Knowledge and Power in Prehistoric Societies. This is part of my long term focus on what sometimes is called the bicameral mind and the issue of its breakdown. Maybe choral singing touches upon the bicameral mind.

* * *

It’s better together: The psychological benefits of singing in a choir
by N.A. Stewart & A.J. Lonsdale, Psychology of Music

Previous research has suggested that singing in a choir might be beneficial for an individual’s psychological well-being. However, it is unclear whether this effect is unique to choral singing, and little is known about the factors that could be responsible for it. To address this, the present study compared choral singing to two other relevant leisure activities, solo singing and playing a team sport, using measures of self-reported well-being, entitativity, need fulfilment and motivation. Questionnaire data from 375 participants indicated that choral singers and team sport players reported significantly higher psychological well-being than solo singers. Choral singers also reported that they considered their choirs to be a more coherent or ‘meaningful’ social group than team sport players considered their teams. Together these findings might be interpreted to suggest that membership of a group may be a more important influence on the psychological well-being experienced by choral singers than singing. These findings may have practical implications for the use of choral singing as an intervention for improving psychological well-being.

More Evidence of the Psychological Benefits of Choral Singing
by Tom Jacobs, Pacific Standard

The synchronistic physical activity of choristers appears to create an unusually strong bond, giving members the emotionally satisfying experience of temporarily “disappearing” into a meaningful, coherent body. […]

The first finding was that choral singers and team sports players “reported significantly higher levels of well-being than solo singers.” While this difference was found on only one of the three measures of well-being, it does suggest that activities “pursued as part of a group” are associated with greater self-reported well-being.

Second, they found choral singers appear to “experience a greater sense of being part of a meaningful, or ‘real’ group, than team sports players.” This perception, which is known as “entitativity,” significantly predicted participants’ scores on all three measures of well-being. […]

The researchers suspect this feeling arises naturally from choral singers’ “non-conscious mimicry of others’ actions.” This form of physical synchrony “has been shown to lead to self-other merging,” they noted, “which may encourage choral singers to adopt a ‘we perspective’ rather than an egocentric perspective.”

Not surprisingly, choral singers experienced the lowest autonomy of the three groups. Given that autonomy can be very satisfying, this may explain why overall life-satisfaction scores were similar for choral singers (who reported little autonomy but strong bonding), and sports team members (who experienced moderate levels of both bonding and autonomy).

“How awful for you! By the looks of it, you’ve developed a soul.”

From the boundless green ocean behind the Wall, a wild tidal surge of roots, flowers, twigs, leaves was rolling toward me, standing on its hind legs, and had it flowed over me I would have been transformed from a person—from the finest and most precise of mechanisms—into . . .

But, happily, between me and this wild, green ocean was the glass of the Wall. Oh, the great, divinely bounding wisdom of walls and barriers! They may just be the greatest of all inventions. Mankind ceased to be wild beast when it built its first wall. Mankind ceased to be savage when we built the Green Wall, when we isolated our perfect, machined world, by means of the Wall, from the irrational, chaotic world of the trees, birds, animals . . .

That is from We by Yevgeny Zamyatin (pp. 82-83). It’s a strange novel, an early example of a futuristic dystopia. The author finished writing it in 1921, but couldn’t get it published in Russia. An English translation was published years later.

In the above quote, the description is of the Green Wall that encloses the society called One State. It is an entirely independent and self-sustaining city-state, outside of which its citizens do not venture. The government is authoritarian and social control is absolute. The collective is everything, such that people have no sense of individuality and have no souls. They don’t even dream, not normally, for to do so is considered a symptom of mental illness.

There was a global war that killed most of the human population. The citizens of One State are aware that human society used to be different. They have no experience of freedom, neither social freedom nor free will. But they have a historical memory of what freedom once meant and, of course, it is seen as having been a bad thing and the source of unhappiness.

Like Julian Jaynes’s theory of bicameralism, these people are able to maintain a complex society without an internalized sense of self. The locus of identity is completely outside. But unlike bicameralism, there are no command voices telling them what to do. The society is so ordered that few moments of the day are not occupied by scheduled activity. Even sex is determined by the system of authority. Mathematical order is everything, including how citizens are named.

What fascinated me about the novel is when the protagonist, D-503, gains a soul. It is an imagining of what this experience would be like, to gain a sense of individuality and interiority, for long dormant imagination to be awakened. Having a self means the ability to imagine other selves, to empathize with what others are thinking and feeling, to be able to enter into the world of another.

If bicameral societies did exist, this would have been similar to how they came to an end. A bicameral society would have required a tightly ordered world where the social reality was ever-present and all-consuming. For that to break down would have been traumatic. It would have felt like madness.

* * *

We
by Yevgeny Zamyatin
pp. 79-81

With effort, a kind of spiral torque, I finally tore my eyes away from the glass below my feet—and suddenly the golden letters “MEDICINE” splashed in front of me . . . Why had he led me here and not to the Operation Room, and why had he spared me? But, at that moment, I wasn’t thinking of these things: with one leap over the steps, the door solidly banged behind me and I exhaled. Yes: it was as though I hadn’t breathed since dawn, as though my heart hadn’t struck a single beat—and it was only now that I exhaled for the first time, only now that the floodgates of my chest opened . . .

There were two of them: one was shortish, with cinder-block legs, and his eyes, like horns, tossed up the patients; the other one was skinny and sparkling with scissor-lips and a blade for a nose . . . I had met him before.

I flung myself at him as though he were kin, straight into the blade, and said something about being an insomniac, the dreams, the shadows, the yellow world. The scissor-lips flashed and smiled. “How awful for you! By the looks of it, you’ve developed a soul.”

A soul? That strange, ancient, long-forgotten word. We sometimes said “heart and soul,” “soulful,” “lost souls,” but a “soul”

“This is . . . this is very grave,” I babbled.

“It’s incurable,” incised the scissors.

“But . . . what specifically is at the source of all this? I can’t even begin . . . to imagine.”

“Well, you see . . . it is as if you . . . you’re a mathematician, right?”

“Yes.”

“Well, take, for instance, a plane, a surface, like this mirror here. And on this surface—here, look—are you and I, and we are squinting at the sun, and see the blue electrical spark in that tube. And watch—the shadow of an aero is flashing past. But it is only on this surface for just a second. Now imagine that some heat source causes this impenetrable surface to suddenly grow soft, and nothing slides across it anymore but everything penetrates inside it, into that mirror world, which we used to look into as curious children—children aren’t all that silly, I assure you. The plane has now become a volume, a body, a world, and that which is inside the mirror is inside you: the sun, the whirlwind from the aero’s propeller, your trembling lips, and someone else’s trembling lips. So you see: a cold mirror reflects and rejects but this absorbs every footprint, forever. One day you see a barely noticeable wrinkle on someone’s face—and then it is forever inside you; one day you hear a droplet falling in the silence—and you can hear it again now . . .”

“Yes, yes, exactly . . .” I grabbed his hand. I could hear the faucet in the sink slowly dripping its droplets in the silence. And I knew that they would be inside me forever. But still, why—all of a sudden—a soul? I never had one—never had one—and then suddenly . . . Why doesn’t anyone else have one, but me?

I squeezed harder on the skinny hand: I was terrified to let go of this lifeline.

“Why? And why don’t we have feathers or wings but just scapulas, the foundation of wings? It’s because we don’t need wings anymore—we have the aero; wings would only be extraneous. Wings are for flying, but we don’t need to get anywhere: we have landed, we have found what we were seeking. Isn’t that so?”

I nodded my head in dismay. He looked at me, laughed sharply, javelinishly. The other one, hearing this, stumpily stamped through from out of his office, tossed the skinny doctor up with his hornlike eyes, and then tossed me up, too.

“What’s the problem? What, a soul? A soul, you say? Damn it! We’ll soon get as far as cholera. I told you”—the skinny one was horn-tossed again—“I told you, we must, everyone’s imagination— everyone’s imagination must be . . . excised. The only answer is surgery, surgery alone . . .”

He struggled to put on some enormous X-ray glasses, and walked around for a long time looking through my skull bone at my brain, and making notes in a notebook.

“Extraordinary, extraordinarily curious! Listen to me, would you agree . . . to be preserved in alcohol? This would be, for the One State, an extraordinarily . . . this would aid us in averting an epidemic . . . If you, of course, don’t have any particular reasons not . . .”

“You see,” the other one said, “cipher D-503 is the Builder of the Integral, and I am certain that this would interfere . . .”

“Ah,” he mumbled and lumped off to his cabinet.

And then we were two. A paper-hand lightly, tenderly lay on my hand. A face in profile bent toward me and he whispered: “I’ll tell you a secret—it isn’t only you. My colleague is talking about an epidemic for good reason. Think about it; perhaps you yourself have noticed something similar in someone else—something very similar, very close to . . .” He looked at me intently. What is he hinting at—at whom? It can’t be that—

“Listen . . .” I leapt from my chair. But he had already loudly begun to talk about something else: “. . . And for the insomnia, and your dreams, I can give you one recommendation: walk more. Like, for instance, tomorrow morning, go for a walk . . . to the Ancient House, for example.”

He punctured me again with his eyes and smiled thinly. And it seemed to me: I could clearly and distinctly see something wrapped up in the fine fabric of his smile—a word—a letter—a name, a particular name . . . Or is this again that same imagination?

I was only barely able to wait while he wrote out my certification of illness for today and tomorrow, then I shook his hand once more and ran outside.

My heart was light, quick, like an aero, and carrying, carrying me upward. I knew: tomorrow held some sort of joy. But what would it be?

 

Views of the Self

A Rant: The Brief Discussion of the Birth of An Error
by Skepoet2

Collectivism vs. Individualism is the primary and fundamental misreading of human self-formation out of the Enlightenment and picking sides in that dumb-ass binary has been the primary driver of bad politics left and right for the last 250 years.

The Culture Wars of the Late Renaissance: Skeptics, Libertines, and Opera
by Edward Muir
Kindle Locations 80-95

One of the most disturbing sources of late-Renaissance anxiety was the collapse of the traditional hierarchic notion of the human self. Ancient and medieval thought depicted reason as governing the lower faculties of the will, the passions, and the body. Renaissance thought did not so much promote “individualism” as it cut away the intellectual props that presented humanity as the embodiment of a single divine idea, thereby forcing a desperate search for identity in many. John Martin has argued that during the Renaissance, individuals formed their sense of selfhood through a difficult negotiation between inner promptings and outer social roles. Individuals during the Renaissance looked both inward for emotional sustenance and outward for social assurance, and the friction between the inner and outer selves could sharpen anxieties.2 The fragmentation of the self seems to have been especially acute in Venice, where the collapse of aristocratic marriage structures led to the formation of what Virginia Cox has called the single self, most clearly manifest in the works of several women writers who argued for the moral and intellectual equality of women with men.3 As a consequence of the fragmented understanding of the self, such thinkers as Montaigne became obsessed with what was then the new concept of human psychology, a term in fact coined in this period.4 A crucial problem in the new psychology was to define the relation between the body and the soul, in particular to determine whether the soul died with the body or was immortal. With its tradition of Averroist readings of Aristotle, some members of the philosophy faculty at the University of Padua recurrently questioned the Christian doctrine of the immortality of the soul as unsound philosophically. Other hierarchies of the human self came into question.
Once reason was dethroned, the passions were given a higher value, so that the heart could be understood as a greater force than the mind in determining human conduct. When the body itself slipped out of its long-despised position, the sexual drives of the lower body were liberated and thinkers were allowed to consider sex, independent of its role in reproduction, a worthy manifestation of nature. The Paduan philosopher Cesare Cremonini’s personal motto, “Intus ut libet, foris ut moris est,” does not quite translate to “If it feels good, do it,” but it comes very close. The collapse of the hierarchies of human psychology even altered the understanding of the human senses. The sense of sight lost its primacy as the superior faculty, the source of “enlightenment”; the Venetian theorists of opera gave that place in the hierarchy to the sense of hearing, the faculty that most directly channeled sensory impressions to the heart and passions.

Amusing Ourselves to Death: Public Discourse in the Age of Show Business
by Neil Postman

That does not say much unless one connects it to the more important idea that form will determine the nature of content. For those readers who may believe that this idea is too “McLuhanesque” for their taste, I offer Karl Marx from The German Ideology. “Is the Iliad possible,” he asks rhetorically, “when the printing press and even printing machines exist? Is it not inevitable that with the emergence of the press, the singing and the telling and the muse cease; that is, the conditions necessary for epic poetry disappear?”

Meta-Theory
by bcooney

When I read Jaynes’s book for the first time last year I was struck by the opportunities his theory affords for marrying materialism to psychology, linguistics, and philosophy. The idea that mentality is dependent on social relations, that power structures in society are related to the structure of mentality and language, and the idea that we can only understand mentality historically and socially are all ideas that appeal to me as a historical materialist.

Consciousness: Breakdown Or Breakthrough?
by ignosympathnoramus

The “Alpha version” of consciousness involved memory having authority over the man, instead of the man having authority over his memory. Bicameral man could remember some powerful admonishment from his father, but he could not recall it at will. He experienced this recollection as an external event; namely a visitation from either his father or a god. It was a sort of third-person-perspective group-think where communication was not intentional or conscious but, just like our “blush response,” unconscious and betraying of our deepest being. You can see this in the older versions of the Iliad, where, for instance, we do not learn about Achilles’ suicidal impulse by his internal feelings, thoughts, or his speaking, but instead, by the empathic understanding of his friend. Do you have to “think” in order to empathize, or does it just come on its own, in a rush of feeling? Well, that used to be consciousness. Think about it, whether you watch your friend blush or you blush yourself, the experience is remarkably similar, and seems to be nearly third-person in orientation. What you are recognizing in your friend’s blush the Greeks would have recognized as possession by a god, but it is important to notice that you have no more control over it than the Greeks did. They all used the same name for the same god (emotion) and this led to a relatively stable way of viewing human volition, that is, until it came into contact with other cultures with other “gods.” When this happens, you either have war, or you have conversion. That is, unless you can develop an operating system better than Alpha. We have done so, but at the cost of making us all homeless or orphaned. How ironic that in the modern world the biggest problem is that there are entirely too many individuals in the world, and yet their biggest problem is somehow having too few people to give each individual the support and family-type-structure that humans need to feel secure and thrive. 
We simply don’t have a shared themis that would allow each of us to view the other as “another self,” to use Aristotle’s phrase, or if we do, we realize that “another self” means “another broken and lost orphan like me.” It is in the nature of self-consciousness to not trust yourself, to remain skeptical, to resist immediate impulse. You cannot order your Will if you simply trust it and cave to every inclination. However, this paranoia is hardly conducive to social trust or to loving another as if he were “another self,” for that would only amount to him being another system of forces that we have to interpret, organize or buffer ourselves from. How much easier it is to empathize and care about your fellow citizens when they are not individuals, but vehicles for the very same muses, daimons, and gods that animate you! The matter is rather a bit worse than this, though. Each child discovers and secures his “inner self” by the discovery of his ability to lie, which further undermines social trust!

Marx’s theory of human nature
by Wikipedia

Marx’s theory of human nature has an important place in his critique of capitalism, his conception of communism, and his ‘materialist conception of history’. Karl Marx, however, does not refer to “human nature” as such, but to Gattungswesen, which is generally translated as ‘species-being’ or ‘species-essence’. What Marx meant by this is that humans are capable of making or shaping their own nature to some extent. According to a note from the young Marx in the Manuscripts of 1844, the term is derived from Ludwig Feuerbach’s philosophy, in which it refers both to the nature of each human and of humanity as a whole.[1] However, in the sixth Thesis on Feuerbach (1845), Marx criticizes the traditional conception of “human nature” as “species” which incarnates itself in each individual, on behalf of a conception of human nature as formed by the totality of “social relations”. Thus, the whole of human nature is not understood, as in classical idealist philosophy, as permanent and universal: the species-being is always determined in a specific social and historical formation, with some aspects being biological.

The strange case of my personal marxism (archive 2012)
by Skepoet2

It is the production capacity within a community that allows a community to exist, but communities are more than their productive capacities and subjectivities are different from subjects. Therefore, it is best to think of the schema we have given societies in terms of integrated wholes, and societies are produced by their histories both ecological and cultural. The separation of the ecological and the cultural is what Ken Wilber would call the “right-hand” and “left-hand” distinctions: that is, the empirical experience of what is outside of us but limits us (the subject here being collective) and what is within us that limits us.

THE KOSMOS TRILOGY VOL. II: EXCERPT A
AN INTEGRAL AGE AT THE LEADING EDGE
by Ken Wilber

One of the easiest ways to get a sense of the important ideas that Marx was advancing is to look at more recent research (such as Lenski’s) on the relation of techno-economic modes of production (foraging, horticultural, herding, maritime, agrarian, industrial, informational) to cultural practices such as slavery, bride price, warfare, patrifocality, matrifocality, gender of prevailing deities, and so on. With frightening uniformity, similar techno-economic modes have similar probabilities of those cultural practices (showing just how strongly the particular probability waves are tetra-meshed).

For example, over 90% of societies that have female-only deities are horticultural societies. 97% of herding societies, on the other hand, are strongly patriarchal. 37% of foraging tribes have bride price, but 86% of advanced horticultural do. 58% of known foraging tribes engaged in frequent or intermittent warfare, but an astonishing 100% of simple horticultural did so.

The existence of slavery is perhaps most telling. Around 10% of foraging tribes have slavery, but 83% of advanced horticultural do. The only societal type to completely outlaw slavery was patriarchal industrial societies, 0% of which sanction slavery.

Who’s correct about human nature, the left or the right?
by Ed Rooksby

So what, if anything, is human nature? Marx provides a much richer account. He is often said to have argued that there is no such thing as human nature. This is not true. Though he did think that human behaviour was deeply informed by social environment, this is not to say that human nature does not exist. In fact it is our capacity to adapt and transform in terms of social practices and behaviours that makes us distinctive as a species and in which our specifically human nature is to be located.

For Marx, we are essentially creative and producing beings. It is not just that we produce for our means of survival, it is also that we engage in creative and productive activity over and above what is strictly necessary for survival and find fulfilment in this activity. This activity is inherently social – most of what we produce is produced collectively in some sense or another. In opposition to the individualist basis of liberal thought, then, we are fundamentally social creatures.

Indeed, for Marx, human consciousness and thus our very notion of individual identity is collectively generated. We become consciously aware of ourselves as a discrete entity only through language – and language is inherently inter-subjective; it is a social practice. What we think – including what we think about ourselves – is governed by what we do and what we do is always done socially and collectively. It is for this reason that Marx refers to our “species-being” – what we are can only be understood properly in social terms because what we are is a property and function of the human species as a whole.

Marx, then, has a fairly expansive view of human nature – it is in our nature to be creatively adaptable and for our understanding of what is normal in terms of behaviour to be shaped by the social relations around us. This is not to say that any social system is as preferable as any other. We are best able to flourish in conditions that allow us to express our sociability and creativity.

Marx’s Critique of Religion
by Cris Campbell

Alienated consciousness makes sense only in contrast to un-alienated consciousness. Marx’s conception of the latter, though somewhat vague, derives from his understanding of primitive communism. It is here that Marx’s debt to anthropology is most clear. In foraging or “primitive” societies, people are whole – they are un-alienated because resources are freely available and work directly transforms those resources into useable goods. This directness and immediateness – with no interventions or distortions between the resource, work, and result – makes for creative, fulfilled, and unified people. Society is, as a consequence, tightly bound. There are no class divisions which pit one person or group against another. Because social relations are always reflected back into people’s lives, unified societies make for unified individuals. People are not alienated because they have direct, productive, and creative relationships with resources, work, things, and others. This communalism is, for Marx, most conducive to human happiness and well-being.

This unity is shattered when people begin claiming ownership of resources. Private property introduces division into formerly unified societies and classes develop. When this occurs people are no longer free to appropriate and produce as they please. Creativity and fulfillment are crushed when labor is separated from life and becomes an isolated commodity. Humans who labor for something other than their needs, or for someone else, become alienated from resources, work, things, and others. When these divided social relations are reflected back into peoples’ lives, the result is discord and disharmony. People, in other words, feel alienated. As economies develop and become more complex, life becomes progressively more specialized and splintered. The alienation becomes so intense that something is required to soothe it; otherwise, life becomes unbearable.

It is at this point (which anthropologists recognize as the Neolithic transition) that religion arises. But religion is not, Marx asserts, merely a soothing palliative – it also masks the economically and socially stratified conditions that cause alienation:

“Precisely as a consequence of man’s loss of spontaneous self-activity, religion arises as a compensatory mechanism for explaining what alienated man cannot explain and for promising him elsewhere what he cannot achieve here. Thus because man does not create himself through his productive labor, he supposes that he is created by a power beyond. Because man lacks power, he attributes power to something beyond himself. Like all forms of man’s self-alienation, religion displaces reality with illusion. The reason is that man, the alienated being, requires an ideology that will simultaneously conceal his situation from him and confer upon it significance. Religion is man’s oblique and doomed effort at humanization, a search for divine meaning in the face of human meaninglessness.”

Related posts from my blog:

Facing Shared Trauma and Seeking Hope

Society: Precarious or Persistent?

Plowing the Furrows of the Mind

Démos, The People

Making Gods, Making Individuals

On Being Strange

To Put the Rat Back in the Rat Park

Rationalizing the Rat Race, Imagining the Rat Park

Making Gods, Making Individuals

I’ve been reading about bicameralism and the Axial Age. It is all very fascinating.

It’s strange to look back at that era of transformation. The modern sense of self-conscious, introspective, autonomous individuality (as moral agent and rational actor) was just emerging after the breakdown of the bicameral mind. What came before that is almost incomprehensible to us.

One interesting factor is that civilization didn’t create organized religion, but the other way around. Or so it seems, according to the archaeological evidence. When humans were still wandering hunter-gatherers, they began building structures for worship. It was only later that people started settling down around these worship centers. So, humans built permanent houses for the gods before they built permanent houses for themselves.

These God Houses often originated as tombs and burial mounds of revered leaders. The first deities seem to have been god-kings. The leader either was considered a god while alive or spoke for the god. In either case, death made concrete the deification of the former leader. In doing so, the corpse or some part of it, such as the skull, would become the worshipped idol. Later on it became more common to carve a statue, which allowed for a longer-lasting god less prone to decay.

God(s) didn’t make humans. Rather, humans in a very literal sense made god(s). They made the form of the god or used the already available form of a corpse or skull. It was sort of like trapping the dead king’s soul and forcing it to play the role of god.

These bicameral people didn’t make the distinctions we make. There was no clear separation between the divine and the human, between the individual and the group. It was all a singular pre-individuated experience. These ancient humans heard voices, but they had no internal space for their own voice. The voices were heard in the world all around them. The king was or spoke for the high god, and that voice continued speaking even after the king died. We moderns would call that a hallucination, but to them it was just their daily reality.

With the breakdown of the bicameral mind, there was a crisis of community and identity. The entire social order broke down because of large-scale environmental catastrophes that killed or displaced most of the human population of the time. In a short period, nearly all the great civilizations collapsed in close succession, each collapse sending refugees outward in waves of chaos and destruction. Nothing like it has been seen before or since in recorded history.

People were desperate to make sense of what happened. But the voices of the gods had grown distant or were silenced. The temples were destroyed, the idols gone, traditions lost, and communities splintered. The bicameral societies had been extremely stable and were utterly dependent on that stability. They couldn’t deal with change at that level. The bicameral mind itself could no longer function. These societies never recovered from this mass tragedy.

An innovation that became useful in this era was improved forms of writing. With alphabets and scrolls, people wrote down the ancient oral traditions, altering them in the process. New literary traditions also increasingly took hold. Epics and canons were formed to bring new order. What formed from this was a sense of the past as different from the present, some basic understanding that humanity had changed and that the world used to be different.

A corollary innovation was that, instead of idol worship, people began to worship these new texts, first as scrolls and then later as books. They found a more portable way of trapping a god. But the loss of the more concrete forms of worship led to the gods becoming more distant. People less often heard the voices of the gods for themselves and instead turned to the texts that recorded the cultural memory of the last people who heard the divine speaking (e.g., Moses) or even the last person who spoke as the divine (e.g., Jesus Christ).

The divine was increasingly brought down to the human level and yet at the same time increasingly made more separate from daily experience. It wasn’t just that the voices of the gods went silent. Rather, the voices that used to be heard externally were being internalized. What once was recognized as divine and as other became the groundwork upon which the individuated self was built. God became a still, small voice and slowly lost its divine quality altogether. People stopped hearing voices of non-human entities. Instead, they developed a thinking mind. The gods became trapped in the human skull and, you could say, forgot they were gods.

The process of making gods eventually transitioned into the process of making individuals. We revere individuality as strongly as people once revered the divine. That is an odd thing.