The Crisis of Identity

“Have we lived too fast?”
~Dr. Silas Weir Mitchell, 1871
Wear and Tear, or Hints for the Overworked

I’ve been following Scott Preston over at his blog, Chrysalis. He has been writing on the same set of issues for a long time now, longer than I’ve been reading his blog. He reads widely and so draws on many sources, most of which I’m not familiar with, which is part of the reason I appreciate the work he does in pulling together such informed pieces. A recent post, A Brief History of Our Disintegration, would give you a good sense of his intellectual project, although the word ‘intellectual’ sounds rather paltry for what he is describing:

“Around the end of the 19th century (called the fin de siecle period), something uncanny began to emerge in the functioning of the modern mind, also called the “perspectival” or “the mental-rational structure of consciousness” (Jean Gebser). As usual, it first became evident in the arts — a portent of things to come, but most especially as a disintegration of the personality and character structure of Modern Man and mental-rational consciousness.”

That time period has been an interest of mine as well. There are two books that come to mind that I’ve mentioned before: Tom Lutz’s American Nervousness, 1903 and Jackson Lears’ Rebirth of a Nation (for a discussion of the latter, see: Juvenile Delinquents and Emasculated Males). Both talk about that turn-of-the-century crisis: the psychological projections and physical manifestations, the social movements and political actions. A major concern was neurasthenia which, according to the dominant economic paradigm, meant a deficit of ‘nervous energy’ or ‘nerve force’; if those reserves were wasted rather than wisely reinvested, the result was physical and psychological bankruptcy, and so one became spent (the term ‘neurasthenia’ was first used in 1829 and popularized by George Miller Beard in 1869, the same period when the related medical condition of ‘nostalgia’ began being diagnosed).

This was mixed up with sexuality in what Theodore Dreiser called the ‘spermatic economy’ (by the way, the catalogue for Sears, Roebuck and Company offered an electrical device to replenish nerve force that came with a genital attachment). Obsession with sexuality reinforced gender roles in how neurasthenic patients were treated, following the practice of Dr. Silas Weir Mitchell: men were advised to become more active (the ‘West cure’) and women more passive (the ‘rest cure’), although some women “used neurasthenia to challenge the status quo, rather than enforce it. They argued that traditional gender roles were causing women’s neurasthenia, and that housework was wasting their nervous energy. If they were allowed to do more useful work, they said, they’d be reinvesting and replenishing their energies, much as men were thought to do out in the wilderness” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast). That feminist-style argument, as I recall, came up in advertisements for Bernarr Macfadden’s fitness protocol in the early 1900s, encouraging (presumably middle-class) women to give up housework for exercise and so regain their vitality. Macfadden was also an advocate of living a fully sensuous life, going as far as free love.

Besides the gender wars, there was the ever-present bourgeois bigotry. Neurasthenia is the most civilized of the diseases of civilization since, in its original American conception, it was perceived as afflicting only middle-to-upper class whites, especially WASPs. As Lutz says, “if you were lower class, and you weren’t educated and you weren’t Anglo Saxon, you wouldn’t get neurasthenic because you just didn’t have what it took to be damaged by modernity” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast) and so, according to Lutz’s book, people would make “claims to sickness as claims to privilege.” It was considered a sign of progress, though over time some came to see it as the greatest threat to civilization; in either case it offered much material for popular fictionalized portrayals. Being sick in this fashion was proof that one was a modern individual, an exemplar of advanced civilization, if at immense cost. Julie Beck explains:

“The nature of this sickness was vague and all-encompassing. In his book Neurasthenic Nation, David Schuster, an associate professor of history at Indiana University-Purdue University Fort Wayne, outlines some of the possible symptoms of neurasthenia: headaches, muscle pain, weight loss, irritability, anxiety, impotence, depression, “a lack of ambition,” and both insomnia and lethargy. It was a bit of a grab bag of a diagnosis, a catch-all for nearly any kind of discomfort or unhappiness.

“This vagueness meant that the diagnosis was likely given to people suffering from a variety of mental and physical illnesses, as well as some people with no clinical conditions by modern standards, who were just dissatisfied or full of ennui. “It was really largely a quality-of-life issue,” Schuster says. “If you were feeling good and healthy, you were not neurasthenic, but if for some reason you were feeling run down, then you were neurasthenic.””

I’d point out how neurasthenia was seen as primarily caused by intellectual activity, as it became a descriptor of a common experience among the burgeoning middle class of often well-educated professionals and office workers. This relates to Weston A. Price’s work in the 1930s, as modern dietary changes first hit this demographic since they had the means to afford eating a fully industrialized Standard American Diet (SAD), long before others (within decades, though, SAD-caused malnourishment would wreck health at all levels of society). What this meant, in particular, was a diet high in processed carbs and sugar that coincided, because of the public reaction to Upton Sinclair’s muckraking of the meat-packing industry in The Jungle (1906), with the early-1900s decrease in consumption of meat and saturated fats. As Price demonstrated, this was a vast change from the traditional diet found all over the world, including in rural Europe (and presumably in rural America, with most Americans not urbanized until the turn of the last century), a diet that always included significant amounts of nutritious animal foods loaded with fat-soluble vitamins, not to mention plenty of healthy fats and cholesterol.

Prior to talk of neurasthenia, the exhaustion model of health, portrayed as waste and depletion, took hold in Europe centuries earlier (e.g., anti-masturbation panics) and had its roots in the humoral theory of bodily fluids. It has long been understood that food, specifically the macronutrients (carbohydrate, protein, and fat), affects mood and behavior — see the early literature on melancholy. During feudalism, food laws were used as a means of social control; in one case, meat was prohibited prior to Carnival because its energizing effect, it was thought, could lead to rowdiness or even revolt (Ken Albala & Trudy Eden, Food and Faith in Christian Culture).

There does seem to be a connection between an increase of intellectual activity and an increase of carbohydrates and sugar, a connection first appearing during the early colonial era that set the stage for the Enlightenment. It was the agricultural mind taken to a whole new level. Indeed, a steady flow of glucose is one way to fuel extended periods of brain work, such as reading and writing for hours on end and late into the night — the reason college students to this day will down sugary drinks while studying. Because of trade networks, Enlightenment thinkers were buzzing on the suddenly much more available simple carbs and sugar, with an added boost from caffeine and nicotine. The modern intellectual mind was drugged-up right from the beginning, and over time it took its toll. Such dietary highs inevitably lead to ever greater crashes of mood and health. Interestingly, Dr. Silas Weir Mitchell, who advocated the ‘rest cure’ and ‘West cure’ in treating neurasthenia and other ailments, additionally used a “meat-rich diet” for his patients (Anne Stiles, Go rest, young man). Other doctors of that era were even more direct in using specifically low-carb diets for various health conditions, often for obesity, which was also a focus of Dr. Mitchell.

Still, it goes far beyond diet. There has been a diversity of stressors that have continued to amass over the centuries of tumultuous change. The exhaustion of modern man (and typically the focus has been on men) had been building up for generations upon generations before it came to feel like a world-shaking crisis in the new industrialized world. The lens of neurasthenia was an attempt to grapple with what had changed, but the focus was too narrow. With the plague of neurasthenia, the atomization of commercialized man and woman couldn’t hold together. And so there was a temptation toward nationalistic projects, including wars, to revitalize the ailing soul and to suture the gash of social division and disarray. But this further wrenched out of alignment the traditional order that had once held society together, and what was lost mostly went without recognition. The individual was brought into the foreground of public thought, a lone protagonist in a social Darwinian world. In this melodramatic narrative of struggle and self-assertion, many individuals didn’t fare so well and everything else suffered in the wake.

Tom Lutz writes that, “By 1903, neurasthenic language and representations of neurasthenia were everywhere: in magazine articles, fiction, poetry, medical journals and books, in scholarly journals and newspaper articles, in political rhetoric and religious discourse, and in advertisements for spas, cures, nostrums, and myriad other products in newspapers, magazines and mail-order catalogs” (American Nervousness, 1903, p. 2).

There was a sense of moral decline that was hard to grasp, although some people like Weston A. Price tried to dig down into concrete explanations of what had so gone wrong, the social and psychological changes observable during mass urbanization and industrialization. He was far from alone in his inquiries, having built on the prior observations of doctors, anthropologists, and missionaries. Other doctors and scientists were looking into the influences of diet in the mid-1800s and, by the 1880s, scientists were exploring a variety of biological theories. Their inability to pinpoint the cause maybe had more to do with their lack of a needed framework, as they touched upon numerous facets of biological functioning:

“Not surprisingly, laboratory experiments designed to uncover physiological changes in the nerve cell were inconclusive. European research on neurasthenics reported such findings as loss of elasticity of blood vessels, thickening of the cell wall, changes in the shape of nerve cells, or nerve cells that never advanced beyond an embryonic state. Another theory held that an overtaxed organism cannot keep up with metabolic requirements, leading to inadequate cell nutrition and waste excretion. The weakened cells cannot develop properly, while the resulting build-up of waste products effectively poisons the cells (so-called “autointoxication”). This theory was especially attractive because it seemed to explain the extreme diversity of neurasthenic symptoms: weakened or poisoned cells might affect the functioning of any organ in the body. Furthermore, “autointoxicants” could have a stimulatory effect, helping to account for the increased sensitivity and overexcitability characteristic of neurasthenics.” (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia)

This early scientific research could not lessen the mercurial sense of unease, as neurasthenia was from its inception a broad category that captured some greater shift in public mood, even as it so powerfully shaped the individual’s health. For all the effort, there were as many theories about neurasthenia as there were symptoms. Deeper insight was required. “[I]f a human being is a multiformity of mind, body, soul, and spirit,” writes Preston, “you don’t achieve wholeness or fulfillment by amputating or suppressing one or more of these aspects, but only by an effective integration of the four aspects.” But integration is easier said than done.

The modern human hasn’t been suffering from mere psychic wear and tear, for the individual body itself has been showing the signs of sickness, as the diseases of civilization have become harder and harder to ignore. On the societal level of human health, I’ve previously shared passages from Lears (see here) — he discusses the vitalist impulse that arose in response to the turmoil, and vitalism often was explored in terms of physical health as the most apparent manifestation, although social and spiritual health were just as often spoken of in the same breath. The whole person was under assault by an accumulation of stressors, and the increasingly isolated individual didn’t have the resources to fight them off.

By the way, this was far from limited to America. Europeans picked up the discussion of neurasthenia and took it in other directions, often with less optimism about progress, and with some thinkers emphasizing social interpretations that placed specific blame on hyper-individualism (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia). Thoughts on neurasthenia became mixed up with earlier speculations on nostalgia and romanticized notions of rural life. More importantly, Russian thinkers in particular understood that the problems of modernity weren’t limited to the upper classes but extended across entire populations, a result of how societies had been turned on their heads during that fractious century of revolutions.

In looking around, I came across some other interesting stuff. In the 1901 Nervous and Mental Diseases by Archibald Church and Frederick Peterson, the authors, in the chapter on “Mental Disease”, are keen to further the description, categorization, and labeling of ‘insanity’. And I noted their concern with physiological asymmetry, a concern Price would later share, among many others going back to the prior century.

Maybe asymmetry was not only indicative of developmental issues but also symbolic of a deeper imbalance. The attempts at phrenological analysis of psychiatric, criminal, and anti-social behavior were off-base; and, despite the bigotry and proto-genetic determinism among racists using these kinds of ideas, there is a simple truth about health in relationship to physiological development, most easily observed in bone structure. But it would take many generations to understand the deeper scientific causes, including nutrition (e.g., Price’s discovery of vitamin K2, what he called Activator X), parasites, toxins, and epigenetics. Church and Peterson did acknowledge that this went beyond mere individual or even familial issues: “It is probable that the intemperate use of alcohol and drugs, the spreading of syphilis, and the overstimulation in many directions of modern civilization have determined an increase difficult to estimate, but nevertheless palpable, of insanity in the present century as compared with past centuries.”

Also, there is the 1902 The Journal of Nervous and Mental Disease: Volume 29, edited by William G. Spiller. There is much discussion in there about how anxiety was observed, diagnosed, and treated at the time. Some of the case studies make for fascinating reading — check out: “Report of a Case of Epilepsy Presenting as Symptoms Night Terrors, Impellant Ideas, Complicated Automatisms, with Subsequent Development of Convulsive Motor Seizures and Psychical Aberration” by W. K. Walker. This reminds me of the case that influenced Sigmund Freud and Carl Jung, Daniel Paul Schreber’s 1903 Memoirs of My Nervous Illness.

Talk about “a disintegration of the personality and character structure of Modern Man and mental-rational consciousness,” as Scott Preston put it. He goes on to say: “The individual is not a natural thing. There is an incoherency in Margaret Thatcher’s view of things when she infamously declared ‘there is no such thing as society’ — that she saw only individuals and families, that is to say, atoms and molecules.” Her saying that really did capture the mood of the society she denied existing. Even the family was shrunk down to the ‘nuclear’. To state there is no society is to declare that there is also no extended family, no kinship, no community, that there is no larger human reality of any kind. Ironically, in this pseudo-libertarian sentiment, there is nothing holding the family together other than government laws imposing strict control of marriage and parenting, where common finances lock two individuals together under the rule of capitalist realism (the only larger realities involved are inhuman systems) — compare high-trust societies such as the Nordic countries, where the definition and practice of family life is less legalistic (Nordic Theory of Love and Individualism).

The individual consumer-citizen as a legal member of a family unit has to be created and then controlled, as it is a rather unstable atomized identity. “The idea of the “individual”,” Preston says, “has become an unsustainable metaphor and moral ideal when the actual reality is “21st century schizoid man” — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in “the New Normal” of double-talk, double-think, double-standard, and double-bind.” That is partly the reason for the heavy focus on the body, an attempt to make concrete the individual in order to hold together the splintered self — great analysis of this can be found in Lewis Hyde’s Trickster Makes This World: “an unalterable fact about the body is linked to a place in the social order, and in both cases, to accept the link is to be caught in a kind of trap. Before anyone can be snared in this trap, an equation must be made between the body and the world (my skin color is my place as a Hispanic; menstruation is my place as a woman)” (see one of my posts about it: Lock Without a Key). Along with increasing authoritarianism, there was increasing medicalization and rationalization — to try to make sense of what was senseless.

A specific example of a change can be found in Dr. Frederick Hollick (1818-1900), a popular writer and speaker on medicine and health — his “links were to the free-thinking tradition, not to Christianity” (Helen Lefkowitz Horowitz, Rereading Sex). With the influence of Mesmerism and animal magnetism, he studied and wrote about what was variously called, in more scientific-sounding terms, electrotherapeutics, galvanism, and electro-galvanism. Hollick was an English follower of the Welsh-born industrialist and socialist Robert Owen, whom he literally followed to the United States, where Owen started the utopian community New Harmony, a Southern Indiana village bought from the utopian German Harmonists and then filled with brilliant and innovative minds but lacking in practical know-how about running a self-sustaining community (Abraham Lincoln, who later became a friend to the Owen family, recalled seeing as a boy the boat full of books heading to New Harmony).

“As had Owen before him, Hollick argued for the positive value of sexual feeling. Not only was it neither immoral nor injurious, it was the basis for morality and society. […] In many ways, Hollick was a sexual enthusiast” (Horowitz). These were the social circles of Abraham Lincoln, who personally knew free-love advocates; that is why early Republicans were often referred to as “Red Republicans”, the ‘Red’ indicating radicalism as it still does to this day. Hollick wasn’t the first sexual advocate nor, of course, would he be the last — preceding him were Sarah Grimke (1837, Letters on the Equality of the Sexes) and Charles Knowlton (1839, The Private Companion of Young Married People), Hollick having been “a student of Knowlton’s work” (Debran Rowland, The Boundaries of Her Body); and following him were two more well-known figures: the previously mentioned Bernarr Macfadden (1868-1955), the first major health and fitness guru, and Wilhelm Reich (1897-1957), the less respectable member of the trinity formed with Sigmund Freud and Carl Jung. Sexuality became a symbolic issue of politics and health, partly because of increasing scientific knowledge but also because of the increasing marketization of products such as birth control (public discussion of contraceptives had begun in the late 1700s, with advances in contraceptive production in the early 1800s), the latter being quite significant as it meant individuals could control pregnancy, which is particularly relevant to women. It should be noted that Hollick promoted the ideal of female sexual autonomy, that sex should be assented to and enjoyed by both partners.

This growing concern with sexuality began with the growing middle class in the decades following the American Revolution. Among much else, it was related to the post-revolutionary focus on parenting and the perceived need for raising republican citizens — this formed an audience far beyond radical libertinism and free love. Expert advice was needed for the new bourgeois family life, as part of the “civilizing process” that increasingly took hold at that time, with not only sexual manuals but also parenting guides, health pamphlets, books of manners, cookbooks, diet books, etc. — cut off from the roots of traditional community and kinship, the modern individual no longer trusted inherited wisdom and so needed to be taught how to live, how to behave and relate. Along with the rise of science, this situation promoted the role of the public intellectual, which Hollick effectively took advantage of; after the failure of Owen’s utopian experiment, he went on the lecture circuit, which brought legal cases in unsuccessful attempts to silence him, the kind of persecution that Reich also later endured.

To put it in perspective, this Antebellum era of public debate and public education on sexuality coincided with other changes. Following revolutionary-era feminism (e.g., Mary Wollstonecraft), the “First Wave” of organized feminists emerged generations later with the Seneca Falls convention in 1848 and, in that movement, there was a strong abolitionist impulse. This was part of the rise of ideological -isms in the North that so concerned the Southern aristocrats, who wanted to maintain their hierarchical control of the entire country, control they were quickly losing with the shift of power in the Federal government. A few years before that, in 1844, a more effective condom was developed using vulcanized rubber, although condoms had been on the market since the previous decade; also in the 1840s, the vaginal sponge became available. Interestingly, many feminists were as against contraceptives as they were against abortions. These were far from mere practical issues, as politics imbued every aspect, and some feminists worried that divorcing sexuality from pregnancy might lessen the role of women and motherhood in society.

This was at a time when the abortion rate was sky-rocketing, indicating most women held other views. “Yet we also know that thousands of women were attending lectures in these years, lectures dealing, in part, with fertility control. And rates of abortion were escalating rapidly, especially, according to historian James Mohr, the rate for married women. Mohr estimates that in the period 1800-1830, perhaps one out of every twenty-five to thirty pregnancies was aborted. Between 1850 and 1860, he estimates, the ratio may have been one out of every five or six pregnancies. At mid-century, more than two hundred full-time abortionists reportedly worked in New York City” (Rickie Solinger, Pregnancy and Power, p. 61). In the unGodly and unChurched period of early America (“We forgot.”), organized religion was weak and “premarital sex was typical, many marriages following after pregnancy, but some people simply lived in sin. Single parents and ‘bastards’ were common” (A Vast Experiment). Early Americans, by today’s standards, were not good Christians — visiting Europeans often saw them as uncouth heathens, and quite dangerous at that, given the common American practice of toting around guns and knives, ever ready for a fight, whereas carrying weapons had been made illegal in England. In The Churching of America, Roger Finke and Rodney Stark write (pp. 25-26):

“Americans are burdened with more nostalgic illusions about the colonial era than about any other period in their history. Our conceptions of the time are dominated by a few powerful illustrations of Pilgrim scenes that most people over forty stared at year after year on classroom walls: the baptism of Pocahontas, the Pilgrims walking through the woods to church, and the first Thanksgiving. Had these classroom walls also been graced with colonial scenes of drunken revelry and barroom brawling, of women in risque ball-gowns, of gamblers and rakes, a better balance might have been struck. For the fact is that there never were all that many Puritans, even in New England, and non-Puritan behavior abounded. From 1761 through 1800 a third (33.7%) of all first births in New England occurred after less than nine months of marriage (D. S. Smith, 1985), despite harsh laws against fornication. Granted, some of these early births were simply premature and do not necessarily show that premarital intercourse had occurred, but offsetting this is the likelihood that not all women who engaged in premarital intercourse would have become pregnant. In any case, single women in New England during the colonial period were more likely to be sexually active than to belong to a church—in 1776 only about one out of five New Englanders had a religious affiliation. The lack of affiliation does not necessarily mean that most were irreligious (although some clearly were), but it does mean that their faith lacked public expression and organized influence.”

Though marriage remained important as an ideal in American culture, what changed was that procreative control became increasingly available — with fewer accidental pregnancies and more abortions, a powerful motivation for marriage disappeared. Unsurprisingly, at the same time, there were increasing worries about the breakdown of community and family, concerns that would turn into moral panic at various points. Antebellum America was in turmoil. This was concretely exemplified by the dropping birth rate, already noticeable by mid-century (Timothy Crumrin, “Her Daily Concern:” Women’s Health Issues in Early 19th-Century Indiana), which was nearly halved from 1800 to 1900 (Debran Rowland, The Boundaries of Her Body). “The late 19th century and early 20th saw a huge increase in the country’s population (nearly 200 percent between 1860 and 1910) mostly due to immigration, and that population was becoming ever more urban as people moved to cities to seek their fortunes—including women, more of whom were getting college educations and jobs outside the home” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast). It was a period of crisis, not all that different from our present crisis, including the fear that the low birth rate of native-born white Americans, especially the endangered species of WASPs, meant being overtaken by the supposed dirty hordes of blacks, ethnics, and immigrants.

The promotion of birth control was considered a genuine threat to American society, maybe to all of Western Civilization. It was most directly a threat to traditional gender roles. Women could better control when they got pregnant, a decisive factor in the phenomenon of larger numbers of women entering college and the workforce. And with an epidemic of neurasthenia, this dilemma was worsened by the crippling effeminacy that neutered masculine potency. Was modern man, specifically the white ruling elite, up for the task of carrying on Western Civilization?

“Indeed, civilization’s demands on men’s nerve force had left their bodies positively effeminate. According to Beard, neurasthenics had the organization of “women more than men.” They possessed “a muscular system comparatively small and feeble.” Their dainty frames and feeble musculature lacked the masculine vigor and nervous reserves of even their most recent forefathers. “It is much less than a century ago, that a man who could not [drink] many bottles of wine was thought of as effeminate—but a fraction of a man.” No more. With their dwindling reserves of nerve force, civilized men were becoming increasingly susceptible to the weakest stimulants until now, “like babes, we find no safe retreat, save in chocolate and milk and water.” Sex was as debilitating as alcohol for neurasthenics. For most men, sex in moderation was a tonic. Yet civilized neurasthenics could become ill if they attempted intercourse even once every three months. As Beard put it, “there is not force enough left in them to reproduce the species or go through the process of reproducing the species.” Lacking even the force “to reproduce the species,” their manhood was clearly in jeopardy.” (Gail Bederman, Manliness and Civilization, pp. 87-88)

This led to a backlash that began before the Civil War with the early obscenity laws and abortion laws, but went into high gear with the 1873 Comstock laws that effectively shut down the free market of both ideas and products related to sexuality, including sex toys. This made it near impossible for most women to learn about birth control or obtain contraceptives and abortifacients. There was a felt need to restore order, and that meant the white male order of the WASP middle-to-upper classes, especially with the end of slavery, mass immigration of ethnics, urbanization, and industrialization. The crisis wasn’t only ideological or political. The entire world had been falling apart for centuries with the ending of feudalism and the ancien régime, the last remnants of which in America were maintained through slavery. Motherhood being the backbone of civilization, it was believed that women’s sexuality had to be controlled and, unlike so much else that was out of control, it actually could be controlled through enforcement of laws.

Outlawing abortions is a particularly interesting example of social control. Even with laws in place, abortions remained commonly practiced by local doctors, even in many rural areas (American Christianity: History, Politics, & Social Issues). Corey Robin argues that the strategy hasn’t been to deny women’s agency but to assert their subordination (Denying the Agency of the Subordinate Class). This is why abortion laws were designed to target male doctors rather than their female patients, although they rarely did even that. Everything comes down to agency, its lack or loss, but our entire sense of agency is out of accord with our own human nature. We seek to control what is outside of us because our own sense of self is out of control. The legalistic worldview is inherently authoritarian, at the heart of what Julian Jaynes proposes as the post-bicameral project of consciousness, the contained self. But the container is weak and keeps leaking all over the place.

To bring it back to the original inspiration, Scott Preston wrote: “Quite obviously, our picture of the human being as an indivisible unit or monad of existence was quite wrong-headed, and is not adequate for the generation and re-generation of whole human beings. Our self-portrait or self-understanding of “human nature” was deficient and serves now only to produce and reproduce human caricatures. Many of us now understand that the authentic process of individuation hasn’t much in common at all with individualism and the supremacy of the self-interest.” The failure we face is that of identity, of our way of being in the world. As with neurasthenia in the past, we are now in a crisis of anxiety and depression, along with yet another moral panic about the declining white race. So, we get the likes of Steve Bannon, Donald Trump, and Jordan Peterson. We failed to resolve past conflicts and so they keep re-emerging.

“In retrospect, the omens of an impending crisis and disintegration of the individual were rather obvious,” Preston points out. “So, what we face today as “the crisis of identity” and the cognitive dissonance of “the New Normal” is not something really new — it’s an intensification of that disintegrative process that has been underway for over four generations now. It has now become acute. This is the paradox. The idea of the “individual” has become an unsustainable metaphor and moral ideal when the actual reality is “21st century schizoid man” — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in “the New Normal” of double-talk, double-think, double-standard, and double-bind.” We never were individuals. It was just a story we told ourselves, but there are others that could be told. Scott Preston offers an alternative narrative, that of individuation.

* * *

I found some potentially interesting books while skimming material on Google Books, in researching Frederick Hollick and other topics. Among the titles below, I’ll share some text from one of them because it offers a good summary of sexuality at the time, specifically women’s sexuality. Obviously, it went far beyond sexuality itself and, going by my own theorizing, I’d say it is yet another example of symbolic conflation, considering its direct relationship to abortion.

The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland
p. 34

WOMEN AND THE WOMB: The Emerging Birth Control Debate

The twentieth century dawned in America on a falling white birth rate. In 1800, an average of seven children were born to each “American-born white wife,” historians report. 29 By 1900, that number had fallen to roughly half. 30 Though there may have been several factors, some historians suggest that this decline—occurring as it did among young white women—may have been due to the use of contraceptives or abstinence, though few talked openly about it. 31

“In spite of all the rhetoric against birth control, the birthrate plummeted in the late nineteenth century in America and Western Europe (as it had in France the century before); family size was halved by the time of World War I,” notes Shari Thurer in The Myth of Motherhood. 32

As issues go, the “plummeting birthrate” among whites was a powder keg, sparking outcry as the “failure” of the privileged class to have children was contrasted with the “failure” of poor immigrants and minorities to control the number of children they were having. Criticism was loud and rampant. “The upper classes started the trend, and by the 1880s the swarms of ragged children produced by the poor were regarded by the bourgeoisie, so Emile Zola’s novels inform us, as evidence of the lower order’s ignorance and brutality,” Thurer notes. 33

But the seeds of this then-still nearly invisible movement had been planted much earlier. In the late 1700s, British political theorists began disseminating information on contraceptives as concerns of overpopulation grew among some classes. 34 Despite the separation of an ocean, by the 1820s, this information was “seeping” into the United States.

“Before the introduction of the Comstock laws, contraceptive devices were openly advertised in newspapers, tabloids, pamphlets, and health magazines,” Yalom notes. “Condoms had become increasingly popular since the 1830s, when vulcanized rubber (the invention of Charles Goodyear) began to replace the earlier sheepskin models.” 35 Vaginal sponges also grew in popularity during the 1840s, as women traded letters and advice on contraceptives. 36 Of course, prosecutions under the Comstock Act went a long way toward chilling public discussion.

Though Margaret Sanger’s is often the first name associated with the dissemination of information on contraceptives in the early United States, in fact, a woman named Sarah Grimke preceded her by several decades. In 1837, Grimke published the Letters on the Equality of the Sexes, a pamphlet containing advice about sex, physiology, and the prevention of pregnancy. 37

Two years later, Charles Knowlton published The Private Companion of Young Married People, becoming the first physician in America to do so. 38 Near this time, Frederick Hollick, a student of Knowlton’s work, “popularized” the rhythm method and douching. And by the 1850s, a variety of material was being published providing men and women with information on the prevention of pregnancy. And the advances weren’t limited to paper.

“In 1846, a diaphragm-like article called The Wife’s Protector was patented in the United States,” according to Marilyn Yalom. 39 “By the 1850s dozens of patents for rubber pessaries ‘inflated to hold them in place’ were listed in the U.S. Patent Office records,” Janet Farrell Brodie reports in Contraception and Abortion in 19th Century America. 40 And, although many of these early devices were often more medical than prophylactic, by 1864 advertisements had begun to appear for “an India-rubber contrivance” similar in function and concept to the diaphragms of today. 41

“[B]y the 1860s and 1870s, a wide assortment of pessaries (vaginal rubber caps) could be purchased at two to six dollars each,” says Yalom. 42 And by 1860, following publication of James Ashton’s Book of Nature, the five most popular ways of avoiding pregnancy—“withdrawal, and the rhythm methods”—had become part of the public discussion. 43 But this early contraceptives movement in America would prove a victim of its own success. The openness and frank talk that characterized it would run afoul of the burgeoning “purity movement.”

“During the second half of the nineteenth century, American and European purity activists, determined to control other people’s sexuality, railed against male vice, prostitution, the spread of venereal disease, and the risks run by a chaste wife in the arms of a dissolute husband,” says Yalom. “They agitated against the availability of contraception under the assumption that such devices, because of their association with prostitution, would sully the home.” 44

Anthony Comstock, a “fanatical figure,” some historians suggest, was a charismatic “purist,” who, along with others in the movement, “acted like medieval Christians engaged in a holy war,” Yalom says. 45 It was a successful crusade. “Comstock’s dogged efforts resulted in the 1873 law passed by Congress that barred use of the postal system for the distribution of any ‘article or thing designed or intended for the prevention of contraception or procuring of abortion’,” Yalom notes.

Comstock’s zeal would also lead to his appointment as a special agent of the United States Post Office with the authority to track and destroy “illegal” mailing, i.e., mail deemed to be “obscene” or in violation of the Comstock Act. Until his death in 1915, Comstock is said to have been energetic in his pursuit of offenders, among them Dr. Edward Bliss Foote, whose articles on contraceptive devices and methods were widely published. 46 Foote was indicted in January of 1876 for dissemination of contraceptive information. He was tried, found guilty, and fined $3,000. Though donations of more than $300 were made to help defray costs, Foote was reportedly more cautious after the trial. 47 That “caution” spread to others, some historians suggest.

Disorderly Conduct: Visions of Gender in Victorian America
By Carroll Smith-Rosenberg

Riotous Flesh: Women, Physiology, and the Solitary Vice in Nineteenth-Century America
by April R. Haynes

The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland

Rereading Sex: Battles Over Sexual Knowledge and Suppression in Nineteenth-century America
by Helen Lefkowitz Horowitz

Attitudes toward Sex in Antebellum America: A Brief History with Documents
by Helen Lefkowitz Horowitz

Imperiled Innocents: Anthony Comstock and Family Reproduction in Victorian America
by Nicola Kay Beisel

Against Obscenity: Reform and the Politics of Womanhood in America, 1873–1935
by Leigh Ann Wheeler

Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age
by Paul S. Boyer

American Sexual Histories
edited by Elizabeth Reis

Wash and Be Healed: The Water-Cure Movement and Women’s Health
by Susan Cayleff

From Eve to Evolution: Darwin, Science, and Women’s Rights in Gilded Age America
by Kimberly A. Hamlin

Manliness and Civilization: A Cultural History of Gender and Race in the United States, 1880-1917
by Gail Bederman

One Nation Under Stress: The Trouble with Stress as an Idea
by Dana Becker

Western Individuality Before the Enlightenment Age

The Culture Wars of the Late Renaissance: Skeptics, Libertines, and Opera
by Edward Muir
Introduction
pp. 5-7

One of the most disturbing sources of late-Renaissance anxiety was the collapse of the traditional hierarchic notion of the human self. Ancient and medieval thought depicted reason as governing the lower faculties of the will, the passions, and the body. Renaissance thought did not so much promote “individualism” as it cut away the intellectual props that presented humanity as the embodiment of a single divine idea, thereby forcing a desperate search for identity in many. John Martin has argued that during the Renaissance, individuals formed their sense of selfhood through a difficult negotiation between inner promptings and outer social roles. Individuals during the Renaissance looked both inward for emotional sustenance and outward for social assurance, and the friction between the inner and outer selves could sharpen anxieties. 2 The fragmentation of the self seems to have been especially acute in Venice, where the collapse of aristocratic marriage structures led to the formation of what Virginia Cox has called the single self, most clearly manifest in the works of several women writers who argued for the moral and intellectual equality of women with men. 3 As a consequence of the fragmented understanding of the self, such thinkers as Montaigne became obsessed with what was then the new concept of human psychology, a term in fact coined in this period. 4 A crucial problem in the new psychology was to define the relation between the body and the soul, in particular to determine whether the soul died with the body or was immortal. With its tradition of Averroist readings of Aristotle, some members of the philosophy faculty at the University of Padua recurrently questioned the Christian doctrine of the immortality of the soul as unsound philosophically. Other hierarchies of the human self came into question. Once reason was dethroned, the passions were given a higher value, so that the heart could be understood as a greater force than the mind in determining human conduct. When the body itself slipped out of its long-despised position, the sexual drives of the lower body were liberated and thinkers were allowed to consider sex, independent of its role in reproduction, a worthy manifestation of nature. The Paduan philosopher Cesare Cremonini’s personal motto, “Intus ut libet, foris ut moris est,” does not quite translate to “If it feels good, do it,” but it comes very close. The collapse of the hierarchies of human psychology even altered the understanding of the human senses. The sense of sight lost its primacy as the superior faculty, the source of “enlightenment”; the Venetian theorists of opera gave that place in the hierarchy to the sense of hearing, the faculty that most directly channeled sensory impressions to the heart and passions.

Historical and Philosophical Issues in the Conservation of Cultural Heritage
edited by Nicholas Price, M. Kirby Talley, and Alessandra Melucco Vaccaro
Reading 5: “The History of Art as a Humanistic Discipline”
by Erwin Panofsky
pp. 83-85

Nine days before his death Immanuel Kant was visited by his physician. Old, ill and nearly blind, he rose from his chair and stood trembling with weakness and muttering unintelligible words. Finally his faithful companion realized that he would not sit down again until the visitor had taken a seat. This he did, and Kant then permitted himself to be helped to his chair and, after having regained some of his strength, said, ‘Das Gefühl für Humanität hat mich noch nicht verlassen’—’The sense of humanity has not yet left me’. The two men were moved almost to tears. For, though the word Humanität had come, in the eighteenth century, to mean little more than politeness and civility, it had, for Kant, a much deeper significance, which the circumstances of the moment served to emphasize: man’s proud and tragic consciousness of self-approved and self-imposed principles, contrasting with his utter subjection to illness, decay and all that is implied in the word ‘mortality.’

Historically the word humanitas has had two clearly distinguishable meanings, the first arising from a contrast between man and what is less than man; the second between man and what is more. In the first case humanitas means a value, in the second a limitation.

The concept of humanitas as a value was formulated in the circle around the younger Scipio, with Cicero as its belated, yet most explicit spokesman. It meant the quality which distinguishes man, not only from animals, but also, and even more so, from him who belongs to the species homo without deserving the name of homo humanus; from the barbarian or vulgarian who lacks pietas and παιδεία, that is, respect for moral values and that gracious blend of learning and urbanity which we can only circumscribe by the discredited word “culture.”

In the Middle Ages this concept was displaced by the consideration of humanity as being opposed to divinity rather than to animality or barbarism. The qualities commonly associated with it were therefore those of frailty and transience: humanitas fragilis, humanitas caduca.

Thus the Renaissance conception of humanitas had a two-fold aspect from the outset. The new interest in the human being was based both on a revival of the classical antithesis between humanitas and barbaritas, or feritas, and on a survival of the mediaeval antithesis between humanitas and divinitas. When Marsilio Ficino defines man as a “rational soul participating in the intellect of God, but operating in a body,” he defines him as the one being that is both autonomous and finite. And Pico’s famous ‘speech’ ‘On the Dignity of Man’ is anything but a document of paganism. Pico says that God placed man in the center of the universe so that he might be conscious of where he stands, and therefore free to decide ‘where to turn.’ He does not say that man is the center of the universe, not even in the sense commonly attributed to the classical phrase, “man the measure of all things.”

It is from this ambivalent conception of humanitas that humanism was born. It is not so much a movement as an attitude which can be defined as the conviction of the dignity of man, based on both the insistence on human values (rationality and freedom) and the acceptance of human limitations (fallibility and frailty); from this, two postulates result: responsibility and tolerance.

Small wonder that this attitude has been attacked from two opposite camps whose common aversion to the ideas of responsibility and tolerance has recently aligned them in a united front. Entrenched in one of these camps are those who deny human values: the determinists, whether they believe in divine, physical or social predestination, the authoritarians, and those “insectolatrists” who profess the all-importance of the hive, whether the hive be called group, class, nation or race. In the other camp are those who deny human limitations in favor of some sort of intellectual or political libertinism, such as aestheticists, vitalists, intuitionists and hero-worshipers. From the point of view of determinism, the humanist is either a lost soul or an ideologist. From the point of view of authoritarianism, he is either a heretic or a revolutionary (or a counterrevolutionary). From the point of view of “insectolatry,” he is a useless individualist. And from the point of view of libertinism he is a timid bourgeois.

Erasmus of Rotterdam, the humanist par excellence, is a typical case in point. The church suspected and ultimately rejected the writings of this man who had said: “Perhaps the spirit of Christ is more largely diffused than we think, and there are many in the community of saints who are not in our calendar.” The adventurer Ulrich von Hutten despised his ironical skepticism and his unheroic love of tranquillity. And Luther, who insisted that “no man has power to think anything good or evil, but everything occurs in him by absolute necessity,” was incensed by a belief which manifested itself in the famous phrase: “What is the use of man as a totality [that is, of man endowed with both a body and a soul], if God would work in him as a sculptor works in clay, and might just as well work in stone?”

Food and Faith in Christian Culture
edited by Ken Albala and Trudy Eden
Chapter 3: “The Food Police”
Sumptuary Prohibitions On Food In The Reformation
by Johanna B. Moyer
pp. 80-83

Protestants too employed a disease model to explain the dangers of luxury consumption. Luxury damaged the body politic leading to “most incurable sickness of the universal body” (33). Protestant authors also employed Galenic humor theory, arguing that “continuous superfluous expense” unbalanced the humors leading to fever and illness (191). However, Protestants used this model less often than Catholic authors who attacked luxury. Moreover, those Protestants who did employ the Galenic model used it in a different manner than their Catholic counterparts.

Protestants also drew parallels between the damage caused by luxury to the human body and the damage excess inflicted on the French nation. Rather than a disease metaphor, however, many Protestant authors saw luxury more as a “wound” to the body politic. For Protestants the danger of luxury was not only the buildup of humors within the body politic of France but the constant “bleeding out” of humor from the body politic in the form of cash to pay for imported luxuries. The flow of cash mimicked the flow of blood from a wound in the body. Most Protestants did not see luxury foodstuffs as the problem, indeed most saw food in moderation as healthy for the body. Even luxury apparel could be healthy for the body politic in moderation, if it was domestically produced and consumed. Such luxuries circulated the “blood” of the body politic creating employment and feeding the lower orders. 72 De La Noue made this distinction clear. He dismissed the need to individually discuss the damage done by each kind of luxury that was rampant in France in his time as being as pointless “as those who have invented auricular confession have divided mortal and venal sins into infinity of roots and branches.” Rather, he argued, the damage done by luxury was in its “entire bulk” to the patrimonies of those who purchased luxuries and to the kingdom of France (116). For the Protestants, luxury did not pose an internal threat to the body and salvation of the individual. Rather, the use of luxury posed an external threat to the group, to the body politic of France.

The Reformation And Sumptuary Legislation

Catholics, as we have seen, called for antiluxury regulations on food and banqueting, hoping to curb overeating and the damage done by gluttony to the body politic. Although some Protestants also wanted to restrict food and banqueting, more often French Protestants called for restrictions on clothing and foreign luxuries. These differing views of luxury during and after the French Wars of Religion not only give insight into the theological differences between these two branches of Christianity but also provide insight into the larger pattern of the sumptuary regulation of food in Europe in this period. Sumptuary restrictions were one means by which Catholics and Protestants enforced their theology in the post-Reformation era.

Although Catholicism is often correctly cast as the branch of Reformation Christianity that gave the individual the least control over their salvation, it was also true that the individual Catholic’s path to salvation depended heavily on ascetic practices. The responsibility for following these practices fell on the individual believer. Sumptuary laws on food in Catholic areas reinforced this responsibility by emphasizing what foods should and should not be eaten and mirrored the central theological practice of fasting for the atonement of sin. Perhaps the historiographical cliché that it was only Protestantism which gave the individual believer control of his or her salvation needs to be qualified. The arithmetical piety of Catholicism ultimately placed the onus on the individual to atone for each sin. Moreover, sumptuary legislation tried to steer the Catholic believer away from the more serious sins that were associated with overeating, including gluttony, lust, anger, and pride.

Catholic theology meshed nicely with the revival of Galenism that swept through Europe in this period. Galenists preached that meat eating, overeating, and the imbalance in humors which accompanied these practices, led to behavioral changes, including an increased sex drive and increased aggression. These physical problems mirrored the spiritual problems that luxury caused, including fornication and violence. This is why so many authors blamed the French nobility for the luxury problem in France. Nobles were seen not only as more likely to bear the expense of overeating but also as more prone to violence. 73

Galenism also meshed nicely with Catholicism because it was a very physical religion in which the control of the physical body figured prominently in the believer’s path to salvation. Not surprisingly, by the seventeenth century, Protestants gravitated away from Galenism toward the chemical view of the body offered by Paracelsus. 74 Catholic sumptuary law embodied a Galenic view of the body where sin and disease were equated and therefore pushed regulations that advocated each person’s control of his or her own body.

Protestant legislators, conversely, were not interested in the individual diner. Sumptuary legislation in Protestant areas ran the gamut from control of communal displays of eating, in places like Switzerland and Germany, to little or no concern with restrictions on luxury foods, as in England. For Protestants, it was the communal role of food and luxury use that was important. Hence the laws in Protestant areas targeted food in the context of weddings, baptisms, and even funerals. The English did not even bother to enact sumptuary restrictions on food after their break with Catholicism. The French Protestants who wrote on luxury glossed over the deleterious effects of meat eating, even proclaiming it to be healthful for the body while producing diatribes against the evils of imported luxury apparel. The use of Galenism in the French Reformed treatises suggests that Protestants too were concerned with a “body,” but it was not the individual body of the believer that worried Protestant legislators. Sumptuary restrictions were designed to safeguard the mystical body of believers, or the “Elect” in the language of Calvinism. French Protestants used the Galenic model of the body to discuss the damage that luxury did to the body of believers in France, but ultimately to safeguard the economic welfare of all French subjects. The Calvinists of Switzerland used sumptuary legislation on food to protect those predestined for salvation from the dangerous eating practices of members of the community whose overeating suggested they might not be saved.

Ultimately, sumptuary regulations in the Reformation spoke to the Christian practice of fasting. Fasting served very different functions in Protestant and Catholic theology. Raymond Mentzer has suggested that Protestants “modified” the Catholic practice of fasting during the Reformation. The major reformers, including Luther, Calvin, and Zwingli, all rejected fasting as a path to salvation. 75 For Protestants, fasting was a “liturgical rite,” part of the cycle of worship and a practice that served to “bind the community.” Fasting was often a response to adversity, as during the French Wars of Religion. For Catholics, fasting was an individual act, just as sumptuary legislation in Catholic areas targeted individual diners. However, for Protestants, fasting was a communal act, “calling attention to the body of believers.” 76 The symbolic nature of fasting, Mentzer argues, reflected Protestant rejection of transubstantiation. Catholics continued to believe that God was physically present in the host, but Protestants believed His was only a spiritual presence. When Catholics took Communion, they fasted to cleanse their own bodies so as to receive the real, physical body of Christ. Protestants, on the other hand, fasted as spiritual preparation because it was their spirits that connected with the spirit of Christ in the Eucharist. 77

Capitalist Realism and Fake Fakes

“This is where ‘we’ are now: not Harawayesque cyborgs affirming our ontological hybridity but replicant-puppets (of Capital) dreaming kitsch dreams of being restored to full humanity but “without any Gepettos or Good Fairies on the horizon”.”

~ Mark Fisher (k-punk), 2009
Honeymoon in Disneyland

* * *

“Where does that leave us? I’m not sure the solution is to seek out some pre-Inversion authenticity — to red-pill ourselves back to “reality.” What’s gone from the internet, after all, isn’t “truth,” but trust: the sense that the people and things we encounter are what they represent themselves to be. Years of metrics-driven growth, lucrative manipulative systems, and unregulated platform marketplaces have created an environment where it makes more sense to be fake online — to be disingenuous and cynical, to lie and cheat, to misrepresent and distort — than it does to be real. Fixing that would require cultural and political reform in Silicon Valley and around the world, but it’s our only choice. Otherwise we’ll all end up on the bot internet of fake people, fake clicks, fake sites, and fake computers, where the only real thing is the ads.”

~ Max Read, 2018
How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually.

* * *

“In my writing I got so interested in fakes that I finally came up with the concept of fake fakes. For example, in Disneyland there are fake birds worked by electric motors which emit caws and shrieks as you pass by them. Suppose some night all of us sneaked into the park with real birds and substituted them for the artificial ones. Imagine the horror the Disneyland officials would feel when they discovered the cruel hoax. Real birds! And perhaps someday even real hippos and lions. Consternation. The park being cunningly transmuted from the unreal to the real, by sinister forces. For instance, suppose the Matterhorn turned into a genuine snow-covered mountain? What if the entire place, by a miracle of God’s power and wisdom, was changed, in a moment, in the blink of an eye, into something incorruptible? They would have to close down.

“In Plato’s Timaeus, God does not create the universe, as does the Christian God; He simply finds it one day. It is in a state of total chaos. God sets to work to transform the chaos into order. That idea appeals to me, and I have adapted it to fit my own intellectual needs: What if our universe started out as not quite real, a sort of illusion, as the Hindu religion teaches, and God, out of love and kindness for us, is slowly transmuting it, slowly and secretly, into something real?”

~ Philip K. Dick, 1978
How to Build a Universe That Doesn’t Fall Apart Two Days Later

Essentialism On the Decline

Before getting to the topic of essentialism, let me take an indirect approach. In reading about paleolithic diets and traditional foods, a recurring theme is inflammation, specifically as it relates to the health of the gut-brain network and immune system.

The paradigm change this signifies is that seemingly separate diseases with different diagnostic labels often have underlying commonalities. They share overlapping sets of causal and contributing factors, biological processes, and symptoms. This is why simple dietary changes can have a profound effect on numerous health conditions. For some, the diseased state expresses as mood disorders, for others as autoimmune disorders, and for still others as something else entirely, but there are immense commonalities between them all. The differences have more to do with how dysbiosis and dysfunction happen to develop, where they take hold in the body, and so what symptoms are experienced.

Treating both her patients and her own multiple sclerosis from a paleo diet perspective, Terry Wahls gets at this point in a straightforward manner (p. 47): “In a very real sense, we all have the same disease because all disease begins with broken, incorrect biochemistry and disordered communication within and between our cells. […] Inside, the distinction between these autoimmune diseases is, frankly, fairly arbitrary”. In How Emotions Are Made, Lisa Feldman Barrett wrote (Kindle Locations 3834-3850):

“Inflammation has been a game-changer for our understanding of mental illness. For many years, scientists and clinicians held a classical view of mental illnesses like chronic stress, chronic pain, anxiety, and depression. Each ailment was believed to have a biological fingerprint that distinguished it from all others. Researchers would ask essentialist questions that assume each disorder is distinct: “How does depression impact your body? How does emotion influence pain? Why do anxiety and depression frequently co-occur?” 9

“More recently, the dividing lines between these illnesses have been evaporating. People who are diagnosed with the same-named disorder may have greatly diverse symptoms—variation is the norm. At the same time, different disorders overlap: they share symptoms, they cause atrophy in the same brain regions, their sufferers exhibit low emotional granularity, and some of the same medications are prescribed as effective.

“As a result of these findings, researchers are moving away from a classical view of different illnesses with distinct essences. They instead focus on a set of common ingredients that leave people vulnerable to these various disorders, such as genetic factors, insomnia, and damage to the interoceptive network or key hubs in the brain (chapter 6). If these areas become damaged, the brain is in big trouble: depression, panic disorder, schizophrenia, autism, dyslexia, chronic pain, dementia, Parkinson’s disease, and attention deficit hyperactivity disorder are all associated with hub damage. 10

“My view is that some major illnesses considered distinct and “mental” are all rooted in a chronically unbalanced body budget and unbridled inflammation. We categorize and name them as different disorders, based on context, much like we categorize and name the same bodily changes as different emotions. If I’m correct, then questions like, “Why do anxiety and depression frequently co-occur?” are no longer mysteries because, like emotions, these illnesses do not have firm boundaries in nature.”

What jumped out at me was the conventional view of disease as essentialist, and hence the related essentialism in biology and psychology. This is exemplified by genetic determinism, such as it informs race realism. It’s easy for most well-informed people to dismiss race realists, but essentialism takes on much more insidious forms that are harder to detect and root out. When scientists claimed to find a gay gene, some gay men quickly took this genetic determinism as a defense against the fundamentalist view that homosexuality is a choice and a sin. It turned out that there was no gay gene (by the way, this incident demonstrated how, in reacting to reactionaries, even leftist activists can be drawn into the reactionary mind). Not only is there no gay gene but also no simple and absolute gender divisions at all — as I previously explained (Is the Tide Starting to Turn on Genetics and Culture?):

“Recent research has taken this even further in showing that neither sex nor gender is binary (1, 2, 3, 4, & 5), as genetics and its relationship to environment, epigenetics, and culture is more complex than was previously realized. It’s far from uncommon for people to carry genetics of both sexes, even multiple DNA. It has to do with diverse interlinking and overlapping causal relationships. We aren’t all that certain at this point what ultimately determines the precise process of conditions, factors, and influences in how and why any given gene expresses or not and how and why it expresses in a particular way.”

The attraction of essentialism is powerful. And as shown in numerous cases, the attraction can be found across the political spectrum, as it offers a seemingly strong defense in diverting attention away from other factors. Similar to the gay gene, many people defend neurodiversity as if some people are simply born a particular way, and that therefore we can’t and shouldn’t seek to do anything to change or improve their condition, much less cure it or prevent it in future generations.

For example, those on the high-functioning end of the autism spectrum will occasionally defend their condition as a gift in their ability to think and perceive differently. That is fine as far as it goes, but from a scientific perspective we should still find it concerning that conditions like this are on a drastic rise that can’t be explained merely by greater rates of diagnosis. Whether or not one believes the world would be a better place with more people with autism, this shouldn’t be left as a fatalistic vision of an evolutionary leap, especially considering most on the autism spectrum aren’t high functioning — instead, we should try to understand why it is happening and what it means.

Researchers have found that there are prospective causes to be studied. Consider propionate, a substance discussed by Alanna Collen (10% Human, p. 83): “although propionate was an important compound in the body, it was also used as a preservative in bread products – the very foods many autistic children crave. To top it all off, clostridia species are known to produce propionate. In itself, propionate is not ‘bad’, but MacFabe began to wonder whether autistic children were getting an overdose.” This might explain why antibiotics helped many with autism, as it would have been knocking off the clostridia population that was boosting propionate. To emphasize this point, when rodents were injected with propionate, they exhibited the precise behaviors of autism and they too showed inflammation in the brain. The fact that autistics often have brain inflammation, an unhealthy condition, is strong evidence that autism shouldn’t be taken as mere neurodiversity (and, among autistics, the commonality of inflammation-related gut issues emphasizes this point).

There is no doubt that genetic determinism, like the belief in an eternal soul, can be comforting. We identify with our genes, as we inherit them and are born with them. But to speak of inflammation or propionate or whatever makes it seem like we are victims of externalities. And it means we aren’t isolated individuals to be blamed or to take credit for who we are. To return to Collen (pp. 88-89):

“In health, we like to think we are the products of our genes and experiences. Most of us credit our virtues to the hurdles we have jumped, the pits we have climbed out of, and the triumphs we have fought for. We see our underlying personalities as fixed entities – ‘I am just not a risk-taker’, or ‘I like things to be organised’ – as if these are a result of something intrinsic to us. Our achievements are down to determination, and our relationships reflect the strength of our characters. Or so we like to think.

“But what does it mean for free will and accomplishment, if we are not our own masters? What does it mean for human nature, and for our sense of self? The idea that Toxoplasma, or any other microbe inhabiting your body, might contribute to your feelings, decisions and actions, is quite bewildering. But if that’s not mind-bending enough for you, consider this: microbes are transmissible. Just as a cold virus or a bacterial throat infection can be passed from one person to another, so can the microbiota. The idea that the make-up of your microbial community might be influenced by the people you meet and the places you go lends new meaning to the idea of cultural mind-expansion. At its simplest, sharing food and toilets with other people could provide opportunity for microbial exchange, for better or worse. Whether it might be possible to pick up microbes that encourage entrepreneurship at a business school, or a thrill-seeking love of motorbiking at a race track, is anyone’s guess for now, but the idea of personality traits being passed from person to person truly is mind-expanding.”

This goes beyond the personal level, which makes the proposal all the more unsettling. Our respective societies, communities, etc. might be heavily influenced by environmental factors that we can’t see. A ton of research shows the tremendous impact of parasites, heavy metal toxins, food additives, farm chemicals, hormones, hormone mimics, hormone disruptors, et cetera. Entire regions might be shaped by even a single species of parasite, such as how higher rates of Toxoplasma gondii infection in New England are directly correlated with higher rates of neuroticism (see What do we inherit? And from whom? & Uncomfortable Questions About Ideology).

Essentialism, though still popular, has taken numerous major hits in recent years. It once was the dominant paradigm and went largely unquestioned. Consider how, early last century, respectable fields of study such as anthropology, linguistic relativity, and behaviorism suggested that humans were largely products of environmental and cultural factors. This was the original basis of the attack on racism and race realism. In linguistics, Noam Chomsky overturned this view in positing the essentialist belief that, though not observed, much less proven, there must exist within the human brain a language module with a universal grammar. His view was able to defeat and replace the non-essentialist theories because it was more satisfying to the WEIRD ideologies that were becoming a greater force in an increasingly WEIRD society.

Ever since Plato, Western civilization has been drawn toward the extremes of essentialism (as part of the larger Axial Age shift toward abstraction and idealism). Yet there has also long been a countervailing force (even among the ancients, non-essentialist interpretations were common; consider group identity: here, here, here, here, and here). It wasn’t predetermined that essentialism would be so victorious as to have nearly obliterated the memory of all alternatives. It fit the spirit of the times for this past century, but now the public mood is shifting again. It’s no accident that, as social democracy and socialism regains favor, environmentalist explanations are making a comeback. But this is merely the revival of a particular Western tradition of thought, a tradition that is centuries old.

I was reminded of this in reading Liberty in America’s Founding Moment by Howard Schwartz. It’s an interesting shift of gears, since Schwartz doesn’t write about anything related to biology, health, or science. But he does indirectly get at an environmentalist critique in his analysis of David Hume (1711-1776). I’ve mostly thought of Hume in terms of his bundle theory of self, which he possibly borrowed from Buddhism, perhaps learned of through Christian missionaries returning from the East. However he came to it, the bundle theory argued that there is no singular coherent self, contrary to a central tenet of traditional Christian theology. Still, heretical views of the self were hardly new — some detect a possible Western precursor of Humean bundle theory in the ideas of Baruch Spinoza (1632-1677).

Whatever its origins in Western thought, environmentalism has been challenging essentialism since the Enlightenment. And in the case of Hume, there is an early social constructionist view of society and politics, that what motivates people isn’t essentialism. This puts a different spin on things, as Hume’s writings were widely read during the revolutionary era when the United States was founded. Thomas Jefferson, among others, was familiar with Hume and highly recommended his work. Hume represented the opposite position to John Locke. We are now returning to this old battle of ideas.

“…some deeper area of the being.”

Alec Nevala-Lee shares a passage from Colin Wilson’s Mysteries (see Magic and the art of will). It elicits many thoughts, but I want to focus on the two main related aspects: the self and the will.

The main thing Wilson is talking about is hyper-individualism — the falseness and superficiality, constraint and limitation of anxiety-driven ‘consciousness’, the conscious personality of the ego-self. This is what denies the bundled self and the extended self, the vaster sense of being that challenges the socio-psychological structure of the modern mind. We defend our thick boundaries with great care for fear of what might get in, but this locks us in a prison cell of our own making. In not allowing ourselves to be affected, we make ourselves ineffective or at best only partly effective toward paltry ends. It’s not only a matter of doing “something really well” for we don’t really know what we want to do, as we’ve become disconnected from deeper impulses and broader experience.

For about as long as I can remember, the notion of ‘free will’ has never made sense to me. It isn’t a philosophical disagreement. Rather, in my own experience and in my observation of others, it simply offers no compelling explanation or valid meaning, much less deep insight. It intuitively makes no sense, which is to say it can only make sense if we never carefully think about it with probing awareness and open-minded inquiry. To the degree there is a ‘will’ is to the degree it is inseparable from the self. That is to say the self never wills anything for the self is and can only be known through the process of willing, which is simply to say through impulse and action. We are what we do, but we never know why we do what we do. We are who we are and we don’t know how to be otherwise.

There is no way to step back from the self in order to objectively see and act upon the self. That would require yet another self. The attempt to impose a will upon the self would lead to an infinite regress of selves. That would be a pointless preoccupation, although as entertainments go it is popular these days. A more worthy activity and maybe a greater achievement is to stop trying to contain ourselves and instead to align with a greater sense of self. Will wills itself. And the only freedom that the will possesses is to be itself. That is what some might consider purpose or telos, one’s reason for being or rather one’s reason in being.

No freedom exists in isolation. To believe otherwise is a trap. The precise trap involved is addiction, which is the will driven by compulsion. After all, the addict is the ultimate individual, so disconnected within a repeating pattern of behavior as to be unable to affect or be affected. Complete autonomy is impotence. The only freedom is in relationship, both to the larger world and the larger sense of self. It is in the ‘other’ that we know ourselves. We can only be free in not trying to impose freedom, in not struggling to control and manipulate. True will, if we are to speak of such a thing, is the opposite of willfulness. We are only free to the extent we don’t think in the explicit terms of freedom. It is not a thought in the mind but a way of being in the world.

We know that the conscious will is connected to the narrow, conscious part of the personality. One of the paradoxes observed by [Pierre] Janet is that as the hysteric becomes increasingly obsessed with anxiety—and the need to exert his will—he also becomes increasingly ineffective. The narrower and more obsessive the consciousness, the weaker the will. Every one of us is familiar with the phenomenon. The more we become racked with anxiety to do something well, the more we are likely to botch it. It is [Viktor] Frankl’s “law of reversed effort.” If you want to do something really well, you have to get into the “right mood.” And the right mood involves a sense of relaxation, of feeling “wide open” instead of narrow and enclosed…

As William James remarked, we all have a lifelong habit of “inferiority to our full self.” We are all hysterics; it is the endemic disease of the human race, which clearly implies that, outside our “everyday personality,” there is a wider “self” that possesses greater powers than the everyday self. And this is not the Freudian subconscious. Like the “wider self” of Janet’s patients, it is as conscious as the “contracted self.” We are, in fact, partially aware of this “other self.” When a man “unwinds” by pouring himself a drink and kicking off his shoes, he is adopting an elementary method of relaxing into the other self. When an overworked housewife decides to buy herself a new hat, she is doing the same thing. But we seldom relax far enough; habit—and anxiety—are too strong…Magic is the art and science of using the will. Not the ordinary will of the contracted ego but the “true will” that seems to spring from some deeper area of the being.

~ Colin Wilson, Mysteries

The Madness of Reason

Commenting online brings one in contact with odd people. It is often merely irritating, but at times it can be fascinating to see all the strange ways humanity gets expressed.

I met a guy, Naj Ziad, on the Facebook page for a discussion group about Julian Jaynes’ book, The Origin of Consciousness in the Breakdown of the Bicameral Mind. He posted something expressing his obsession with logical coherence and consistency. We dialogued for quite a bit, including in a post on his own Facebook page, and he seemed like a nice enough guy. He came across as being genuine in his intentions and worldview, but there was simply something off about him.

It’s likely he has Asperger’s of a high-IQ, high-functioning variety. Or it could be that he has some kind of personality disorder. Either way, my sense is that he is severely lacking in cognitive empathy, although I doubt he is deficient in affective empathy. He just doesn’t seem to get that other people can perceive and experience the world differently than he does, or that others even exist as separate entities apart from his own existence.

When I claimed that my worldview was simply different than his and that neither of our personal realities could be reduced to the other, he called me a dualist. I came to the conclusion that this guy was a solipsist, although he doesn’t identify that way. Solipsism was the only philosophy that made sense of his idiosyncratic ramblings, entirely logical ramblings I might add. He is obviously intelligent, clever, and reasonably well read. His verbal intelligence is particularly high.

In fact, he is so obsessed with his verbal intelligence that he has come to the conclusion that all of reality is language. Of course, he has his own private definition of language which asserts that language is everything. This leaves his argument as a tautology and he freely admitted this was the case, but he kept returning to his defense that his argument was logically consistent and coherent. Sure.

It was endlessly amusing. He really could not grasp that the way his mind operates isn’t how everyone’s mind operates, and so he couldn’t escape the hermetically-sealed reality tunnel of his own clever monkey mind. His worldview is so perfectly constructed and orderly that there isn’t a single crack to let in fresh air or a beam of light.

He was doing a wondrous impression of Spock in being entirely logical within his narrow psychological and ideological framework. He kept falling back on his being logical and also his use of idiosyncratic jargon. He defined his terms to fit his ideological worldview so entirely that those terms had no meaning outside of it. It all made perfect sense within itself.

His life philosophy is a well-rehearsed script that he goes on repeating. It is an amazing thing to observe as an outsider, especially considering he stubbornly refused to acknowledge that anything could be outside of his own mind for he couldn’t imagine the world being different than his own mind. He wouldn’t let go of his beliefs about reality, like the monkey with his hand trapped in a jar because he won’t let go of the banana.

If this guy was just insane or a troll, I would dismiss him out of hand. But that isn’t the case. Obviously, he is neuroatypical and I won’t hold that against anyone. And I freely admit that his ideological worldview is logically consistent and coherent, for whatever that is worth.

What made it so fascinating to my mind is that solipsism has always been a speculative philosophy, to be considered only as a thought experiment. It never occurred to me that there would be a highly intelligent and rational person who would seriously uphold it as an entire self-contained worldview and lifestyle. His arguments for it were equally fascinating and he had interesting thoughts and insights, some of which I even agreed with. He is a brilliant guy who, as sometimes happens, has gone a bit off the deep end.

He built an ideology that perfectly expresses and conforms to his highly unusual neurocognitive profile. And of course, when I point this out to him, he dismisses it as ‘psychologizing’. His arguments are so perfectly patched together that he never refers to any factual evidence as support for his ideological commitments, as it is entirely unnecessary in his own mind. External facts in the external world, what he calls ‘scientism’, are as meaningless as others claiming to have existence independent of his own. From his perspective, there is only what he calls the ‘now’ and there can only be one ‘now’ to rule them all, which just so happens to coincide with his own ego-mind.

If you challenge him on any of this, he is highly articulate in defending why he is being entirely reasonable. Ah, the madness of reason!

* * *

On a personal note, I should make clear that I sympathize with this guy. I have my own psychological issues (depression, anxiety, thought disorder, strong introversion, etc) that can make me feel isolated and cause me to retreat further into myself. Along with a tendency to over-intellectualize everything, my psychological issues have at times led me to get lost in my own head.

I can even understand the attraction of solipsism and, as a thought experiment, I’ve entertained it. But somehow I’ve always known that I’m not ‘normal’, which is to say that others are not like me. I have never actually doubted that others not only exist but exist in a wide variety of differences. It hasn’t occurred to me to deny all otherness by reducing all others to my own psychological experience and ideological worldview. I’ve never quite been that lost in myself, although I could imagine how it might happen. There have been moments in my life where my mind could have gone off the deep end.

Yet my sympathy only goes so far. It is hard to sympathize with someone who refuses to acknowledge your independent existence as a unique human being with your own identity and views. There is an element of frustration in dealing with a solipsist, but in this case my fascination drew me in. Before I ascertained he was a solipsist, it was obvious something about him was highly unusual. I kept poking and prodding him until the shape of his worldview became apparent. At that point, my fascination ended. Any further engagement would have continued to go around in circles, which means watching this guy’s mind go around in circles like a dog chasing its own tail.

Of all the contortions the human mind can put itself into, solipsism has to be one of the greatest feats to accomplish. I have to give this guy credit where it’s due. Not many people could keep up such a mindset for long.

Hyperballad and Hyperobjects

Morton’s use of the term ‘hyperobjects’ was inspired by Björk’s 1996 single ‘Hyperballad’
(Wikipedia)

Björk
by Timothy Morton

Björk and I think that there is a major cultural shift going on around the world towards something beyond cynical reason and nihilism, as more and more it becomes impossible not to have consideration for nonhumans in everything we do. Hopefully this piece we made contributes to that somehow.

I was so lucky to be doing this while she was mixing her album with some of the nicest and most incredible musicians/producers I’ve ever met…great examples of this shift beyond cynical reason…

Here is something I think is so so amazing, the Subtle Abuse mix of “Hyperballad.” Car parts, bottles, cutlery–all the objects, right? Not to mention Björk’s body “slamming against those rocks.” It’s a veritable Latour Litany… And the haunting repetition…

Dark Ecological Chocolate
by Timothy Morton

This being-an-object is intimately related with the Kantian beauty experience, wherein I find experiential evidence without metaphysical positing that at least one other being exists. The Sadness is the attunement of coexistence stripped of its conceptual content. Since the rigid anthropocentric standard of taste with its refined distances has collapsed, it becomes at this level impossible to rebuild the distinction we lost in The Ethereal between being interested or concerned with (this painting, this polar bear) and being fascinated by… Being interested means I am in charge. Being fascinated means that something else is. Beauty starts to show the subscendent wiring under the board.

Take Björk. Her song “Hyperballad” is a classic example of what I’m trying to talk about here. She shows you the wiring under the board of an emotion, the way a straightforward feeling like I love you is obviously not straightforward at all, so don’t write a love song like that, write one that says you’re sitting on top of this cliff, and you’re dropping bits and pieces of the edge like car parts, bottles and cutlery, all kinds of not-you nonhuman prosthetic bits that we take to be extensions of our totally integrated up to date shiny religious holistic selves, and then you picture throwing yourself off, and what would you look like—to the you who’s watching you still on the edge of the cliff—as you fell, and when you hit the bottom would you be alive or dead, would you look awake or asleep, would your eyes be closed, or open?

When you experience beauty you experience evidence in your inner space that at least one thing that isn’t you exists. An evanescent footprint in your inner space—you don’t need to prove that things are real by hitting them or eating them. A nonviolent coexisting without coercion. There is an undecidability between two entities—me and not-me, the thing. Beauty is sad because it is ungraspable; there is an elegiac quality to it. When we grasp it withdraws, like putting my hand into water. Yet it appears.

Beauty is virtual: I am unable to tell whether the beauty resides in me or in the thing—it is as if it were in the thing, but impossible to pin down there. The subjunctive, floating “as if” virtual reality of beauty is a little queasy—the thing emits a tractor beam in whose vortex I find myself; I veer towards it. The aesthetic dimension says something true about causality in a modern age: I can’t tell for sure what the causes and effects are without resorting to illegal metaphysical moves.[14] Something slightly sinister is afoot—there is a basic entanglement such that I can’t tell who or what started it.

Beauty is the givenness of data. A thing impinges on me before I can contain it or use it or think it. It is as if I hear the thing breathing right next to me. From the standpoint of agricultural white patriarchy, something slightly “evil” is happening: something already has a grip on us, and this is demonic insofar as it is “from elsewhere.” This “saturated” demonic proximity is the essential ingredient of ecological being and ecological awareness, not some Nature over yonder.[15]

Interdependence, which is ecology, is sad and contingent. Because of interdependence, when I’m nice to a bunny rabbit I’m not being nice to bunny rabbit parasites. Amazing violence would be required to try to fit a form over everything all at once. If you try then you basically undermine the bunnies and everything else into components of a machine, replaceable components whose only important aspect is their existence

Can Volition Save Us?

I follow a blog by Massimo Pigliucci, Footnotes to Plato. He is a professor of philosophy and an author. But I must admit until a few moments ago I had no idea who he was. It was a random blog I happened to be following. Otherwise, I’ve been completely unfamiliar with him and his work. I haven’t even read his blog closely. As far as that goes, I can’t recall when I started following his blog or why. My point is that I have no grand opinion about him as an academic philosopher or public intellectual. I occasionally read posts by him and that is all.

Massimo (on his blog, he goes only by his first name) posted a piece about the Stoics and neuroscience. I’ve had a casual interest in the Stoics for a long time. They helped shape Western ideas of natural law and liberty, partly by way of Christians leaning heavily on Stoic foundations of thought — early Christians and Stoics were often confused with each other, as they acted and dressed in a similar fashion. Stoics were the originators of martyrdom as a practice and they took it to a level far beyond Christians.

My interest in the Stoics has been more historical, in terms of influences on later thought and social changes.  Natural law and inward liberty became central justifications for radicalism and revolution. But Massimo’s focus goes in a different direction, although there is some overlap. He is talking about free will and volition, and obviously this would line up with Stoic thought on natural law and inward liberty. All of this is about how the world operates, specifically in relation to human nature as part of the world. Even as I’ve given much thought to the free will debates over the decades, I can’t say I’ve ever thought about it in terms of Stoic philosophy.

Massimo makes some good points. But I wonder about the issue considered from a different level or from a different angle. There is an almost uncontrollable impulse to want to know more than can actually be known. It’s not only that free will is ultimately a metaphysical concept, akin to theological constructs such as the soul. Even talking of ‘volition’ doesn’t save us from this dilemma.  It becomes a secularized equivalent of the search for a god in the gaps. As synred noted: “I don’t see why volition has to be conscious. A good many of our actions are not; some we think about more; others not so much (fast vs. slow?). But even an unconscious decision is a decision we make.” Massimo, in responding to synred, acknowledges the problem: “Right, both conscious and unconscious processes contribute to volition. But there really isn’t a sharp distinction between the two, which are related by continuous feedback loops.” In that case, what exactly is ‘consciousness’ that it is so hard to clearly differentiate from the unconscious? What might be the self, conscious or unconscious, making decisions and imposing volition upon the world?

In the end, science can directly say nothing about consciousness or anything involving consciousness as a cause. Consciousness simply is a non-scientific experience, in that it precedes all intellectual endeavor. And unconsciousness, as normally used, isn’t an experience at all. I love speculating about consciousness and unconsciousness, including the possible relationship to scientific understanding. But I wish more people would be more realistic, self-aware, and humble toward the stark situation of overwhelming human ignorance. I’m not saying that Massimo has fallen into this trap. It’s more about my being doubtful that any useful conclusion can be made, although I’m not sure I’m exactly offering an alternative conclusion in its place.

Massimo distinguishes free will and volition, in expressing what he considers a key point. He writes: “Once more, to preempt distracting discussions: I do not think we should talk about “free will,” which is a hopelessly metaphysically confused concept. We are talking about what psychologists themselves call volition, i.e., the ability of human beings to make complex decisions informed by conscious thought. Hopefully no one will deny that we do have such ability.”

I don’t deny it. Yet neither would I affirm it, at least not as a scientific claim. Rather, I’d argue that maybe ‘volition’ simply isn’t constructive as a scientific concept, although it could be used as part of scientific interpretation — the point being that, as a scientific hypothesis, it seems to be nonfalsifiable. Libet’s veto power doesn’t avoid this criticism, as proposing a volitional actor is merely one of many possible interpretations, for we can’t directly determine the causal force behind stopping the action. We can talk about volition from the perspective of human experience, though. There is nothing wrong with that. Consciousness is fascinating. All of us, even the philosophically and scientifically illiterate, are compelled to take a position. It cuts to the heart of identity, the reality we are mired in: personal and social, psychological and biological.

This reality, however, offers little solid ground. There are some things we don’t know and probably never will know. Or to the degree we can know something about consciousness, it might only be from within consciousness itself, not by trying to scientifically or philosophically stand outside of consciousness by studying it as an object or by way of proxies. And it very well might be consciousness all the way down, as far as any of it is relevant to our existence as conscious beings.

Massimo goes on to say that, “Interestingly, studies have found very good experimental evidence for the veto power Libet is talking about. But that is “interesting” from within the language game of neuroscience. It makes no difference at all in terms of the language game in which the Stoics — and most of us — are engaged, that of improving ourselves as individuals and of making society a better place for everyone to live.” I’m already familiar with that position. Almost a couple of decades ago, Tor Norretranders wrote about Libet’s veto power in his book The User Illusion.

This ‘volition’ is severely constrained, as I presume Massimo would agree, but an argument can be made that it is nonetheless real. The difficulty then is who supposedly is wielding this volition. All that asserting ‘volition’ accomplishes is to push the problem back a step. Maybe consciousness can never be a cause since it is the very ground of our being. We don’t control consciousness, either through free will or volition. The problem goes deeper to the level of identity itself, which, as Eastern philosophers and bundle theorists have noted, tends to fall apart when observed closely and experienced directly. Pull apart the strands of the psyche and nothing else can be found hidden in the gaps, the space in between — no soul or free will, not even a volition. Words can never capture the experience itself. Consciousness simply is what it is and can be nothing else nor be understood on any other terms.

In a post from another blog, Massimo concludes that, “Finally, not only there is no contradiction between modern cognitive science and the Stoic idea that some things (namely, our judgments) are “up to us.”” That is fine, as far as it goes. But who are we? Identity remains a confused morass, tangled up in the hard problem of consciousness itself. Our strong intuitive sense of self doesn’t hold up to introspective inquiry. It’s not clear exactly what we are, that we have a self or that a self has us. This involves the distinction that some make between persons and selves, the topic having come up in my recent reading of Richard S. Hallam’s Virtual Selves, Real Persons. And this involves self-consciousness as a perceived actor, such as related to Julian Jaynes’ theory of bicameralism.

In a comment, Patrice Ayme added that: ““Free will” or more exactly, volition, is not free: it is a prisoner of our own brain, its neural networks, its experiences, associations, theories and emotions. All those, in turn, were built progressively, over years and even decades, nonlinearly feeding on themselves, and back to the environment they evolved from and modified in turn (in that environment, typically, one’s family). Volition is a house we helped built, and also a robot we inhabit.” It goes further than that.

Events and responses from generations prior can epigenetically influence present experience and behavior (consider the mice that several generations on were still responding to the conditions neither they nor their parents had experienced because the response had become built into the genetic expression of behavior; or consider the mice that expressed different behavior based on unknown and undetectable differences in controlled laboratory conditions). This should be understood by Massimo, considering how his Wikipedia page describes his educational background: “Pigliucci was formerly a professor of ecology and evolution at Stony Brook University. He explored phenotypic plasticity, genotype-environment interactions, natural selection, and the constraints imposed on natural selection by the genetic and developmental makeup of organisms.” So, why doesn’t Massimo bring any of this up in his discussion? This leaves me unclear about where he fully stands.

Anyway, take epigenetics and such and then combine it with extended self, embodied mind, and linguistic relativity; social construction, habitus, and hyperobjects; intergenerational trauma, historical legacies, and institutionalized systems; et cetera. Our sense of self is powerfully contained and shaped by all that we inherit, both within and beyond our bodies. There is no place and position by which to act separately from what has created the very conditions of perceived self-identity as an acting agent.

None of that, of course, can say anything specific about either free will or volition. Such philosophical debates are simply outside the bounds of what we can personally know. All we can speak of is what we experience. And from that we can make assertions that lead to disagreements. But no genuine debate can be had beyond the clash of interpreted experience. It’s not so much that one position on free will and volition is exactly right, for no position can ever absolutely prove all other positions wrong. The seeming conflict between worldviews is more of a conflict occurring within the splintered human mind — consciousness as an experience of identity doesn’t require careful distinctions and consistency. Our words fail us. And stumbling over our own minds, we throw ourselves into mental contortions.

Where does that leave us? Well, none of the debaters involved are doubting that most modern people intuitively sense something akin to individualistic and autonomous agency. But there is evidence strongly indicating that this isn’t always the case, evidence not only from abnormal psychology but also from anthropology and ancient texts. And so we can’t fall back on claims of common sense. We are stranger than we can comprehend, whatever we may ultimately be.

One possible response is to choose existentialism instead of Stoicism. Ronnie de Sousa confidently states: “What biology teaches us about human nature is that, in a very real sense, there is no such thing as human nature. The only coherent attitude to that fact is that of the existentialist: if there is any guidance to be found in nature, it is that there is nothing there to follow. Instead, we should aspire to create it.”

I don’t know if that is any more satisfying, but from the perspective of some people’s experience it is a reasonable attitude. In the end, one person’s experience is as valid as another’s, at least on the level of experience itself (and what other level is there for humans to exist?), which I suppose is somewhat of an existentialist conclusion. Personally, I’ve felt committed to the idea that human nature does exist in some sense or another, as that intuitively resonates in my personal experience, although I can’t prove that is the case — I can’t even ultimately prove it in my own experience when I delve deeply into my own psyche, causing me to argue with myself.

It is a conundrum.

* * *

I have some thoughts to add, but let me leave this as a placeholder for the moment. I learned from Lewis Hyde that the moral order, as part of social norms, is always told through a story that plays out upon the embodied self. As such, what story is being etched into the human body by way of this debate? It stands out to my mind that the Stoics’ libertas, maybe in relation to Massimo’s volition, is about the slave asserting inner liberty in opposition to outer oppression. Just a thought I wanted to note. With this in mind, here are some parts from the initial post that led me to think about all of this:

Ethics is another language game, or, rather, a multiplicity of language games, since there are a number of ways to conceive, talk about, and actually do, ethics. Within the human community, we talk about “good,” “bad,” “moral,” “immoral,” “ought,” and so forth, and any competent language user understands what others mean by those words. Moreover, just like the words of the builder’s language actually help building things, so the words of ethical language actually help regulate our actions within a given community. The fact that science comes in and, say, tells us that “bricks” are really mostly empty space is interesting from within the science language game, but it is utterly useless, and indeed a distraction, to the builder. Analogously, that a neuroscientist may be able to tell us which parts of the human brain are involved in the production of ethical judgments, and by which cellular means, is interesting within the language game of neuroscience, but it is a useless distraction if we are concerned with improving social justice, or becoming a better person.

And:

Because, according to Sellars, the manifest, but not the scientific, image deals with things like reasons and values. This is not a call to reject science. On the contrary. Sellars was quite clear that whenever the scientific and the manifest images of the world are in conflict (as in “the Sun rises” vs “the Earth rotates” case), then the sensible thing is for us to yield to science. But science simply isn’t in the business of doing a number of other things for which we have developed different tools: philosophy, literature, history, and so forth. These tools are complementary with, not opposed to, scientific ones. Ideally, says Sellars, we want to develop a conceptual stereoscopic vision, whereby we are capable of integrating the manifest and scientific images.

* * *

It occurs to me that this goes back to symbolic conflation. It’s a theory I’ve been developing for years. About this theory, here are some recent posts that are particularly relevant to the problems of ideologically constrained and culturally biased debate: Race Realism and Symbolic Conflation, Sleepwalking Through Our Dreams, and Symbolic Dissociation of Nature/Nurture Debate.

There is always a debate that is being framed in a particular way in order to control what is debated and how. By doing this, public opinion is manipulated as the public is kept divided — hence, social control is enforced and the social order maintained. The debate is a distraction. The framing is false, deceptive, superficial, constrained. And the real issue(s) is hidden or obscured. The point is that agonistic conflicts are designed to be endless, never to be won or resolved.

I get the feeling that this debate is maybe an example of symbolic conflation. That makes me wonder what might be underlying it. My sense isn’t that Massimo is an ideologue seeking to manipulate and deceive. But we all get caught up in ideologies as worldviews, somewhat in the sense used by Louis Althusser. We are the first casualties of our own rhetoric, as the successful con man has to first con himself. It’s the fate of being human, getting trapped in our own ideas and words — that are in turn reinforced by the social structure, the stage upon which ideological spectacle is played out.

There are numerous examples of symbolic conflation within public debate. The original example that helped shape the theory was abortion in terms of pro-choice vs pro-life, which demonstrated that the apparent positions held (at least by one side) had nothing to do with the actual positions being promoted. Two other examples are nature vs nurture and the related race realism vs social construction.

There is yet another example that has been on my mind lately. It is relevant to this post. The Stoics inherited the Roman idea of libertas, meaning that one wasn’t a slave within that slave society. Libertas indicated nothing more than a lack of direct and active oppression; it didn’t require a free society or even the freedom to act within society. Many other forms of oppression besides slavery existed in Roman society.

What the Stoics inspired was to make libertas into a philosophical and spiritual ideal. As such, libertas symbolized an inward freedom of the self that everyone possessed by birthright, an expression of natural law. Yet it still maintained the quality of negative freedom, at least in this world for it didn’t imply any outward freedom on any level: personal, social, political, or economic. Christians took up this Stoic libertas, eventually becoming liberty in English thought, and later revolutionaries used it as a rallying cry.

There was another term the English incorporated, this one from a Germanic root, the same root behind the German Freiheit. We know it now in its form as freedom, etymologically related to friend. It means being a free member of a free people in a free society. This is the source of positive freedom. But the real kicker is that, etymologically speaking, there can only be positive freedom. Latin liberty with its negative connotations has nothing to do with Germanic freedom. To speak of negative and positive freedom is to entirely miss the point. The confusion comes because the two words have to some degree become conflated, in the attempt to seal the ideological cracks at the foundation of our society.

This creates the ground for much confusion and endless antagonism. And where there is divisive and combative debate, one will likely find symbolic conflation at its root. The frame distracts from the reality of there only being one freedom. Either you are free or you are not. That radical and revolutionary understanding is not allowed within public debate and political discourse, not to be portrayed in establishment media. This is part of what underlies the arguments about free will, even in shifting the terminology to volition instead. Using ‘volition’ as the preferred term doesn’t fundamentally alter anything, as the ideological baggage remains along with the framed confusion.

Massimo is looking for the more detached and intellectual Stoic libertas, not the blood and bone kinship of Anglo-Saxon freedom. Maybe he intuitively, if not clearly, realizes this in discarding free will with its etymological roots. This doesn’t help since it doesn’t change the context and the conflict, much less the confusion. It’s unclear what ‘volition’ could possibly mean in carrying all these millennia of baggage and how asserting it could help shift thinking in a new direction. That is where we find ourselves now. There doesn’t appear to be any way for this debate to move forward on these terms and in this context. We need an entirely different debate with some other frame.

With this in mind, I noticed Massimo agreed with the following comment by ejwinner: “Responsibility is not – and, despite arguments on all sides, never has been – a matter of free will or determinism. It is a matter of social obligation, which takes the debate outside of the realm of science or metaphysics, and into the province of social discussion, expectation, and the institutions formed out of these. No one lives free; and no one lives as automaton. We are always embedded in a social web, and maneuver within as response to the maneuvering of others. Lose the sense of volition, and you became a helpless pawn; assume total freedom and you become a monster.” Massimo doesn’t seem to realize that this undermines his own defense of Stoicism.

Libertas is the source of modern hyper-individualism. It pits the individual against social bonds and it can do nothing else, as the earliest Stoic martyrs understood when they confronted the oppressive state, the oppressive social order and social norms. A society of individuals is no society at all and that was a price the most radical of Stoics were willing to pay in exchange for the inner liberty of their soul. As for the other tradition of thought, there is no freedom separate from healthy and supportive relationship with others. Massimo likely would agree with this, but his commitment to Stoicism has muddied the water and weakened his argument, that in turn has led to a conclusion of uncertain value and validity.

This isn’t a mere theoretical issue. The social problems we are dealing with go straight to libertas, having become the dominant ideology despite all the talk of ‘freedom’. We respond to the failings of liberty with demands of more liberty or else simply ever greater declarations of liberty rhetoric: “More cowbell!” Maybe we should look at the social, rather than the political, foundations of the successful social democracies in Northern Europe. Liberty is about civil rights, but civil rights is less central in Northern Europe than in the United States. The tradition of freedom there is primarily social, that is to say preceding the political. Our American obsession with politics puts the cart before the horse. This has led to great failures such as the war on drugs and mass incarceration, among so much else. We need to create a happy and healthy commons (i.e., a rat park) expressed through common bond and common vision so as to support the common good.

Sure, liberty maybe is a necessary first step when one finds oneself in an oppressive society such as the Roman Empire of the past or the American Empire of the present. But it can’t end there. There is no debate between liberty and freedom for, from an Anglo-American perspective, the latter would be a step beyond into an entirely different kind of society. So, Massimo is correct in stating that there is no free will. Where he goes astray is in not realizing that freedom never was about individuality. The debate of free will was always mired in the ideology of individualistic liberty, having originated from Stoic libertas. Freedom isn’t the problem, rather its lack — or so one could reasonably argue.

What could ‘volition’ possibly mean as an expression and embodiment of our shared humanity within our shared society? What does Massimo think it means? After all of this analysis, I remain confused about what is being proposed under the terminology of volition. Either the discussion itself is confused or it’s just me.

* * *

As a side note, it only now occurred to me that the seeming disagreement or divergence here maybe is more cultural than I realized. I didn’t initially make the connection between Massimo’s philosophical views and his personal background, specifically his upbringing from childhood to adulthood. He was raised in Rome, Italy. And he was educated in Italy where he received his PhD.

It makes sense that he might not grasp the ideological confusion and conflict that gets evoked in such a debate within Anglo-American society. He is speaking as an Italian, not an Anglo-American. He came to the United States long after the early basis of his intellect and philosophy had formed.

Germanic freedom is maybe irrelevant or simply not central to his Italian cultural worldview. Maybe even his later move to and work in the United States hasn’t altered that. It would be interesting to hear his perspective on cultural influences and ideological traditions. Other than in relation to Stoic libertas, what might ‘volition’ mean within the context of Italian culture or American culture? Who is the presumed volitional self and where did he come from?

* * *

Free will, atheism, dualism, Massimo Pigliucci, Jerry Coyne and Sam Harris
by Ken Ammi

Massimo Pigliucci concludes:

In the end, skepticism about free will seems to me to be akin to radical skepticism about reality in general (the idea that all of reality is an illusion, or a computer simulation, or something along those lines): it denies what we all think is self-evident, it cannot be defeated logically (though it is not based on empirical evidence), and it is completely irrelevant to our lives…we should then proceed by ignoring the radical skeptic in order to get back to the business of navigating reality, making willful decisions about our lives…and assign moral responsibility to our and other people’s actions.

So, is Pigliucci nearing dualism? Only time will tell but here, may be, a clue. His statement, that the denial of free will “denies what we all think is self-evident” mirrors what this Examiner wrote as a guest author on the statistician William Briggs’ website in an article titled, To Be, Or Not To Be…Free: Sam Harris & Jerry Coyne On Free Will, the relevant portion of which is:

What reason, really, is there to deny our common knowledge, our common experience and well, our common sense conclusion that we have free will? In this case, it is that some Atheists are interpreting lights flashing on a screen [this is referring to neuroscience]. Moreover, their interpretations are based upon materialism, mechanism, reductionism in short: based upon their particular, and peculiar, Atheistic world-views. But why should we believe that their world-view is accurate? After all, they claim that it cannot be proven and since they are making extraordinary claims they must provide evidence that is more extraordinary than expecting us to believe their personal interpretations of “data.”

The Illusion of Will, Self, and Time
by Jonathan Bricklin
pp. 38-39

Another possible objection to James’s paradigm is that it has a design flaw: if you are trying to witness an act of will, “you” are occupied by the “trying to witness,” and thus miss the role of “you” in the act of will. Such objection, however, begs the question that any meditation on will ultimately poses—namely, whether an active, agent “I” exists in the first place. The only proof of an agent “I” is what can be inferred from the experience of agency. But what if, as Nietzsche says, “will” is not an afterbirth of “I,” an autonomous agent; “I” is an afterbirth of will, the experience of autonomy? 14 “Trying to witness” is, itself, ostensibly, an act of will. Thus, referring the action of “trying to witness” to an “I” assumes what needs to be proven. The experience of will, as we said, is not in question; the question is: What does this experience entail? To answer this question it matters not whether the experience be of trying to do something (such as getting out of bed on a cold morning) or trying to witness the trying. What matters is that some moment of trying be revealed for what it is, stripped of assumptions.

THE GAP BETWEEN THOUGHTS

Many years ago, I was working with Nisargadatta Maharaj, an Indian teacher. He asked a woman who was audio taping for a new book, “What will be the name of my next book?” She replied, “Beyond Consciousness.” He said, “No, Prior to Consciousness. Find out who you are prior to your last thought and stay there.”
—Stephen Wolinsky, Quantum Consciousness

At the turn of the last century, Karl Marbe, of the University of Würzburg, devised an experiment in which subjects attempted to “catch themselves” in the act of choosing between two impressions. The experiment was concerned with judgment, not will, but, like James’s meditation, it was, at bottom, an attempt to detect the onset of a decision between two options. The subjects were asked to lift two small weights, which had been placed on a table in front of them, and decide which one was heavier. They indicated their choice by placing the heavier object down. The results startled both Marbe and his subjects, all of whom were trained in introspective psychology. For, contrary to their own expectation, they discovered that while the feeling of the two weights was conscious, as well as placing the heavier one down, the moment of decision was not. Julian Jaynes, in his The Origin of Consciousness in the Breakdown of the Bicameral Mind, offers a home-kit version of this experiment:

Take any two unequal objects, such as a pen and pencil or two unequally filled glasses of water, and place them on the desk in front of you. Then, partly closing your eyes to increase your attention to the task, pick up each one with the thumb and forefinger and judge which is heavier. Now introspect on everything you are doing. You will find yourself conscious of the feel of the objects against the skin of your fingers, conscious of the slight downward pressure as you feel the weight of each, conscious of any protuberances on the sides of the objects, and so forth. And now the actual judging of which is heavier. Where is that? Lo! the very act of judgement that one object is heavier than the other is not conscious. It is somehow given to you by your nervous system. 15

Marbe’s experiment thus corroborated James’s meditation on will. The gap before the “deciding” thought exists.

This gap, which both Marbe and James discovered before the “deciding” thought, meditation reveals to exist before all thoughts. Indeed, “the leading idea of Buddhism,” a religion based on meditation, “is that there is no other ultimate reality than separate, instantaneous bits of existence.” 16 James had introspected experience into “small enough pulses” to realize that the discontinuity between passing thoughts is mediated by the passing thoughts themselves (PU, 129). The “minimal fact” of experience, for James, was a “passing” moment experienced as difference (ibid., 128). But had his introspection deepened into even smaller pulses, he might have realized one more minimal fact about passing, differing moments: they do not go “indissolubly” into each other, in a continuous stream or “sheet” (ibid., 130), but, rather, they are separated by a space of non-thought, a space he himself had called the “darkness” “out of” which “the rush of our thought” comes (ibid., 128, 130). In ordinary experience, the space between departing and arriving thoughts is so fleeting as to be an “apparition.” 17 In meditation, however, the apparition is real: “If you watch very carefully,” says Krishnamurti, “you will see that, though the response, the movement of thought, seems so swift, there are gaps, there are intervals between thoughts. Between two thoughts there is a period of silence which is not related to the thought process.” 18 According to Eckhart Tolle such a “gap in the stream of the mind” is the key to enlightenment, insofar as it allows you to “disidentify” from the “voice in your head.” 19 In Tibetan Buddhism, where meditation is a widespread daily practice, this gap has a special name: “bardo,” literally “in between.” 20 Some formal practitioners of meditation have even tried to quantify the frequency of the movements/moments of thought (the word “moment” is derived from the Latin word for “movement,” momentum): 6,460,000 such moments in twenty-four hours (an average of one arising moment per 13.3 milliseconds), according to the Buddhist Sarvaastivaadins; a sect of Chinese Buddhists puts it at one thought per twenty milliseconds. 21

James, as we shall see, found other reasons to question the seamless continuity of the stream of thought. But in his meditation on will, the gap he discovered between “deciding” thoughts corroborated his “minimum of assumption” for all thoughts: “it thinks” is more accurate than “I think.” Even “deciding” thoughts, thoughts of apparent “I” assertion, do not emerge from an “I,” but from a gap.

“Lack of the historical sense is the traditional defect in all philosophers.”

Human, All Too Human: A Book for Free Spirits
by Friedrich Wilhelm Nietzsche

The Traditional Error of Philosophers.—All philosophers make the common mistake of taking contemporary man as their starting point and of trying, through an analysis of him, to reach a conclusion. “Man” involuntarily presents himself to them as an aeterna veritas, as a passive element in every hurly-burly, as a fixed standard of things. Yet everything uttered by the philosopher on the subject of man is, in the last resort, nothing more than a piece of testimony concerning man during a very limited period of time. Lack of the historical sense is the traditional defect in all philosophers. Many innocently take man in his most childish state as fashioned through the influence of certain religious and even of certain political developments, as the permanent form under which man must be viewed. They will not learn that man has evolved,4 that the intellectual faculty itself is an evolution, whereas some philosophers make the whole cosmos out of this intellectual faculty. But everything essential in human evolution took place aeons ago, long before the four thousand years or so of which we know anything: during these man may not have changed very much. However, the philosopher ascribes “instinct” to contemporary man and assumes that this is one of the unalterable facts regarding man himself, and hence affords a clue to the understanding of the universe in general. The whole teleology is so planned that man during the last four thousand years shall be spoken of as a being existing from all eternity, and with reference to whom everything in the cosmos from its very inception is naturally ordered. Yet everything evolved: there are no eternal facts as there are no absolute truths. Accordingly, historical philosophising is henceforth indispensable, and with it honesty of judgment.

What Locke Lacked
by Louise Mabille

Locke is indeed a Colossus of modernity, but one whose twin projects of providing a concept of human understanding and political foundation undermine each other. The specificity of the experience of perception alone undermines the universality and uniformity necessary to create the subject required for a justifiable liberalism. Since mere physical perspective can generate so much difference, it is only to be expected that political differences would be even more glaring. However, no political order would ever come to pass without obliterating essential differences. The birth of liberalism was as violent as the Empire that would later be justified in its name, even if its political traces are not so obvious. To interpret is to see in a particular way, at the expense of all other possibilities of interpretation. Perspectives that do not fit are simply ignored, or as that other great resurrectionist of modernity, Freud, would concur, simply driven underground. We ourselves are the source of this interpretative injustice, or more correctly, our need for a world in which it is possible to live, is. To a certain extent, then, man is the measure of the world, but only his world. Man is thus a contingent measure and our measurements do not refer to an original, underlying reality. What we call reality is the result not only of our limited perspectives upon the world, but the interplay of those perspectives themselves. The liberal subject is thus a result of, and not a foundation for, the experience of reality. The subject is identified as origin of meaning only through a process of differentiation and reduction, a course through which the will is designated as a psychological property.

Locke takes the existence of the subject of free will – free to exercise political choice such as rising against a tyrant, choosing representatives, or deciding upon political direction – simply for granted. Furthermore, he seems to think that everyone should agree as to what the rules are according to which these events should happen. For him, the liberal subject underlying these choices is clearly fundamental and universal.

Locke’s philosophy of individualism posits the existence of a discrete and isolated individual, with private interests and rights, independent of his linguistic or socio-historical context. C. B. Macpherson identifies a distinctly possessive quality to Locke’s individualist ethic, notably in the way in which the individual is conceived as proprietor of his own personhood, possessing capacities such as self-reflection and free will. Freedom becomes associated with possession, which the Greeks would associate with slavery, and society conceived in terms of a collection of free and equal individuals who are related to each other through their means of achieving material success – which Nietzsche, too, would associate with slave morality. […]

There is a central tenet to John Locke’s thinking that, as conventional as it has become, remains a strange strategy. Like Thomas Hobbes, he justifies modern society by contrasting it with an original state of nature. For Hobbes, as we have seen, the state of nature is but a hypothesis, a conceptual tool in order to elucidate a point. For Locke, however, the state of nature is a very real historical event, although not a condition of a state of war. Man was social by nature, rational and free. Locke drew this inspiration from Richard Hooker’s Laws of Ecclesiastical Polity, notably from his idea that church government should be based upon human nature, and not the Bible, which, according to Hooker, told us nothing about human nature. The social contract is a means to escape from nature, friendlier though it be on the Lockean account. For Nietzsche, however, we have never made the escape: we are still holus-bolus in it: ‘being conscious is in no decisive sense the opposite of the instinctive – most of the philosopher’s conscious thinking is secretly directed and compelled into definite channels by his instincts. Behind all logic too, and its apparent autonomy there stand evaluations’ (BGE, 3). Locke makes a singular mistake in thinking the state of nature a distant event. In fact, Nietzsche tells us, we have never left it. We now only wield more sophisticated weapons, such as the guilty conscience […]

Truth originates when humans forget that they are ‘artistically creating subjects’ or products of law or stasis and begin to attach ‘invincible faith’ to their perceptions, thereby creating truth itself. For Nietzsche, the key to understanding the ethic of the concept, the ethic of representation, is conviction […]

Few convictions have proven to be as strong as the conviction of the existence of a fundamental subjectivity. For Nietzsche, it is an illusion, a bundle of drives loosely collected under the name of ‘subject’ — indeed, it is nothing but these drives, willing, and actions in themselves—and it cannot appear as anything else except through the seduction of language (and the fundamental errors of reason petrified in it), which understands and misunderstands all action as conditioned by something which causes actions, by a ‘Subject’ (GM I 13). Subjectivity is a form of linguistic reductionism, and when using language, ‘[w]e enter a realm of crude fetishism when we summon before consciousness the basic presuppositions of the metaphysics of language — in plain talk, the presuppositions of reason. Everywhere reason sees a doer and doing; it believes in will as the cause; it believes in the ego, in the ego as being, in the ego as substance, and it projects this faith in the ego-substance upon all things — only thereby does it first create the concept of “thing”’ (TI, ‘Reason in Philosophy’ 5). As Nietzsche also states in WP 484, the habit of adding a doer to a deed is a Cartesian leftover that begs more questions than it solves. It is indeed nothing more than an inference according to habit: ‘There is activity, every activity requires an agent, consequently –’ (BGE, 17). Locke himself found the continuous existence of the self problematic, but did not go as far as Hume’s dissolution of the self into a number of ‘bundles’. After all, even if identity shifts occurred behind the scenes, he required a subject with enough unity to be able to enter into the Social Contract. This subject had to be something more than merely an ‘eternal grammatical blunder’ (D, 120), and willing had to be understood as something simple. For Nietzsche, it is ‘above all complicated, something that is a unit only as a word, a word in which the popular prejudice lurks, which has defeated the always inadequate caution of philosophers’ (BGE, 19).

Nietzsche’s critique of past philosophers
by Michael Lacewing

Nietzsche is questioning the very foundations of philosophy. To accept his claims means being a new kind of philosopher, one whose ‘taste and inclination’, whose values, are quite different. Throughout his philosophy, Nietzsche is concerned with origins, both psychological and historical. Much of philosophy is usually thought of as an a priori investigation. But if Nietzsche can show, as he thinks he can, that philosophical theories and arguments have a specific historical basis, then they are not, in fact, a priori. What is known a priori should not change from one historical era to the next, nor should it depend on someone’s psychology. Plato’s aim, the aim that defines much of philosophy, is to be able to give complete definitions of ideas – ‘what is justice?’, ‘what is knowledge?’. For Plato, we understand an idea when we have direct knowledge of the Form, which is unchanging and has no history. If our ideas have a history, then the philosophical project of trying to give definitions of our concepts, rather than histories, is radically mistaken. For example, in §186, Nietzsche argues that philosophers have consulted their ‘intuitions’ to try to justify this or that moral principle. But they have only been aware of their own morality, of which their ‘justifications’ are in fact only expressions. Morality and moral intuitions have a history, and are not a priori. There is no one definition of justice or good, and the ‘intuitions’ that we use to defend this or that theory are themselves as historical, as contentious as the theories we give – so they offer no real support. The usual ways philosophers discuss morality misunderstand morality from the very outset. The real issues of understanding morality only emerge when we look at the relation between this particular morality and that. There is no world of unchanging ideas, no truths beyond the truths of the world we experience, nothing that stands outside or beyond nature and history.

GENEALOGY AND PHILOSOPHY

Nietzsche develops a new way of philosophizing, which he calls a ‘morphology and evolutionary theory’ (§23), and later calls ‘genealogy’. (‘Morphology’ means the study of the forms something, e.g. morality, can take; ‘genealogy’ means the historical line of descent traced from an ancestor.) He aims to locate the historical origin of philosophical and religious ideas and show how they have changed over time to the present day. His investigation brings together history, psychology, the interpretation of concepts, and a keen sense of what it is like to live with particular ideas and values. In order to best understand which of our ideas and values are particular to us, not a priori or universal, we need to look at real alternatives. In order to understand these alternatives, we need to understand the psychology of the people who lived with them. And so Nietzsche argues that traditional ways of doing philosophy fail – our intuitions are not a reliable guide to the ‘truth’, to the ‘real’ nature of this or that idea or value. And not just our intuitions, but the arguments, and style of arguing, that philosophers have used are unreliable. Philosophy needs to become, or be informed by, genealogy. A lack of any historical sense, says Nietzsche, is the ‘hereditary defect’ of all philosophers.

MOTIVATIONAL ANALYSIS

Having long kept a strict eye on the philosophers, and having looked between their lines, I say to myself… most of a philosopher’s conscious thinking is secretly guided and channelled into particular tracks by his instincts. Behind all logic, too, and its apparent tyranny of movement there are value judgements, or to speak more clearly, physiological demands for the preservation of a particular kind of life. (§3)

A person’s theoretical beliefs are best explained, Nietzsche thinks, by evaluative beliefs, particular interpretations of certain values, e.g. that goodness is this and the opposite of badness. These values are best explained as ‘physiological demands for the preservation of a particular kind of life’. Nietzsche holds that each person has a particular psychophysical constitution, formed by both heredity and culture. […] Different values, and different interpretations of these values, support different ways of life, and so people are instinctively drawn to particular values and ways of understanding them. On the basis of these interpretations of values, people come to hold particular philosophical views. §2 has given us an illustration of this: philosophers come to hold metaphysical beliefs about a transcendent world, the ‘true’ and ‘good’ world, because they cannot believe that truth and goodness could originate in the world of normal experience, which is full of illusion, error, and selfishness. Therefore, there ‘must’ be a pure, spiritual world and a spiritual part of human beings, which is the origin of truth and goodness.

PHILOSOPHY AND VALUES

But ‘must’ there be a transcendent world? Or is this just what the philosopher wants to be true? Every great philosophy, claims Nietzsche, is ‘the personal confession of its author’ (§6). The moral aims of a philosophy are the ‘seed’ from which the whole theory grows. Philosophers pretend that their opinions have been reached by ‘cold, pure, divinely unhampered dialectic’ when in fact, they are seeking reasons to support their pre-existing commitment to ‘a rarefied and abstract version of their heart’s desire’ (§5), viz. that there is a transcendent world, and that good and bad, true and false are opposites. Consider: Many philosophical systems are of doubtful coherence, e.g. how could there be Forms, and if there were, how could we know about them? Or again, in §11, Nietzsche asks ‘how are synthetic a priori judgments possible?’. The term ‘synthetic a priori’ was invented by Kant. According to Nietzsche, Kant says that such judgments are possible, because we have a ‘faculty’ that makes them possible. What kind of answer is this?? Furthermore, no philosopher has ever been proved right (§25). Given the great difficulty of believing either in a transcendent world or in human cognitive abilities necessary to know about it, we should look elsewhere for an explanation of why someone would hold those beliefs. We can find an answer in their values. There is an interesting structural similarity between Nietzsche’s argument and Hume’s. Both argue that there is no rational explanation of many of our beliefs, and so they try to find the source of these beliefs outside or beyond reason. Hume appeals to imagination and the principle of ‘Custom’. Nietzsche appeals instead to motivation and ‘the bewitchment of language’ (see below). So Nietzsche argues that philosophy is not driven by a pure ‘will to truth’ (§1), to discover the truth whatever it may be. Instead, a philosophy interprets the world in terms of the philosopher’s values.
For example, the Stoics argued that we should live ‘according to nature’ (§9). But they interpret nature by their own values, as an embodiment of rationality. They do not see the senselessness, the purposelessness, the indifference of nature to our lives […]

THE BEWITCHMENT OF LANGUAGE

We said above that Nietzsche criticizes past philosophers on two grounds. We have looked at the role of motivation; the second ground is the seduction of grammar. Nietzsche is concerned with the subject-predicate structure of language, and with it the notion of a ‘substance’ (picked out by the grammatical ‘subject’) to which we attribute ‘properties’ (identified by the predicate). This structure leads us into a mistaken metaphysics of ‘substances’. In particular, Nietzsche is concerned with the grammar of ‘I’. We tend to think that ‘I’ refers to some thing, e.g. the soul. Descartes makes this mistake in his cogito – ‘I think’, he argues, refers to a substance engaged in an activity. But Nietzsche repeats the old objection that this is an illegitimate inference (§16) that rests on many unproven assumptions – that I am thinking, that some thing is thinking, that thinking is an activity (the result of a cause, viz. I), that an ‘I’ exists, that we know what it is to think. So the simple sentence ‘I think’ is misleading. In fact, ‘a thought comes when ‘it’ wants to, and not when ‘I’ want it to’ (§17). Even ‘there is thinking’ isn’t right: ‘even this ‘there’ contains an interpretation of the process and is not part of the process itself. People are concluding here according to grammatical habit’. But our language does not allow us just to say ‘thinking’ – this is not a whole sentence. We have to say ‘there is thinking’; so grammar constrains our understanding. Furthermore, Kant shows that rather than the ‘I’ being the basis of thinking, thinking is the basis out of which the appearance of an ‘I’ is created (§54). Once we recognise that there is no soul in a traditional sense, no ‘substance’, something constant through change, something unitary and immortal, ‘the way is clear for new and refined versions of the hypothesis about the soul’ (§12), that it is mortal, that it is multiplicity rather than identical over time, even that it is a social construct and a society of drives.

Nietzsche makes a similar argument about the will (§19). Because we have this one word ‘will’, we think that what it refers to must also be one thing. But the act of willing is highly complicated. First, there is an emotion of command, for willing is commanding oneself to do something, and with it a feeling of superiority over that which obeys. Second, there is the expectation that the mere commanding on its own is enough for the action to follow, which increases our sense of power. Third, there is obedience to the command, from which we also derive pleasure. But we ignore the feeling of compulsion, identifying the ‘I’ with the commanding ‘will’.

Nietzsche links the seduction of language to the issue of motivation in §20, arguing that ‘the spell of certain grammatical functions is the spell of physiological value judgements’. So even the grammatical structure of language originates in our instincts, different grammars contributing to the creation of favourable conditions for different types of life. So what values are served by these notions of the ‘I’ and the ‘will’? The ‘I’ relates to the idea that we have a soul, which participates in a transcendent world. It functions in support of the ascetic ideal. The ‘will’, and in particular our inherited conception of ‘free will’, serves a particular moral aim.

Hume and Nietzsche: Moral Psychology (short essay)
by epictetus_rex

1. Metaphilosophical Motivation

Both Hume and Nietzsche 1 advocate a kind of naturalism. This is a weak naturalism, for it does not seek to give science authority over philosophical inquiry, nor does it commit itself to a specific ontological or metaphysical picture. Rather, it seeks to (a) place the human mind firmly in the realm of nature, as subject to the same mechanisms that drive all other natural events, and (b) investigate the world in a way that is roughly congruent with our best current conception(s) of nature […]

Furthermore, the motivation for this general position is common to both thinkers. Hume and Nietzsche saw old rationalist/dualist philosophies as both absurd and harmful: such systems were committed to extravagant and contradictory metaphysical claims which hinder philosophical progress. Furthermore, they alienated humanity from its position in nature—an effect Hume referred to as “anxiety”—and underpinned religious or “monkish” practises which greatly accentuated this alienation. Both Nietzsche and Hume believe quite strongly that coming to see ourselves as we really are will banish these bugbears from human life.

To this end, both thinkers ask us to engage in honest, realistic psychology. “Psychology is once more the path to the fundamental problems,” writes Nietzsche (BGE 23), and Hume agrees:

“the only expedient, from which we can hope for success in our philosophical researches, is to leave the tedious lingering method, which we have hitherto followed, and instead of taking now and then a castle or village on the frontier, to march up directly to the capital or center of these sciences, to human nature itself.” (T Intro)

2. Selfhood

Hume and Nietzsche militate against the notion of a unified self, both at-a-time and, a fortiori, over time.

Hume’s quest for a Newtonian “science of the mind” led him to classify all mental events as either impressions (sensory) or ideas (copies of sensory impressions, distinguished from the former by diminished vivacity or force). The self, or ego, as he says, is just “a kind of theatre, where several perceptions successively make their appearance; pass, re-pass, glide away, and mingle in an infinite variety of postures and situations. There is properly no simplicity in it at one time, nor identity in different; whatever natural propension we may have to imagine that simplicity and identity.” (Treatise 4.6) […]

For Nietzsche, the experience of willing lies in a certain kind of pleasure, a feeling of self-mastery and increase of power that comes with all success. This experience leads us to mistakenly posit a simple, unitary cause, the ego. (BGE 19)

The similarities here are manifest: our minds do not have any intrinsic unity to which the term “self” can properly refer, rather, they are collections or “bundles” of events (drives) which may align with or struggle against one another in a myriad of ways. Both thinkers use political models to describe what a person really is. Hume tells us we should “more properly compare the soul to a republic or commonwealth, in which the several members [impressions and ideas] are united by ties of government and subordination, and give rise to persons, who propagate the same republic in the incessant change of its parts” (T 261).

3. Action and The Will

Nietzsche and Hume attack the old Platonic conception of a “free will” in lock-step with one another. This picture, roughly, involves a rational intellect which sits above the appetites and ultimately chooses which appetites will express themselves in action. This will is usually not considered to be part of the natural/empirical order, and it is this consequence which irks both Hume and Nietzsche, who offer two seamlessly interchangeable refutations […]

Since we are nothing above and beyond events, there is nothing for this “free will” to be: it is a causa sui, “a sort of rape and perversion of logic… the extravagant pride of man has managed to entangle itself profoundly and frightfully with just this nonsense” (BGE 21).

When they discover an erroneous or empty concept such as “free will” or “the self”, Nietzsche and Hume engage in a sort of error-theorizing which is structurally the same. Peter Kail (2006) has called this a “projective explanation”, whereby belief in those concepts is “explained by appeal to independently intelligible features of psychology”, rather than by reference to the way the world really is. 1

The Philosophy of Mind
INSTRUCTOR: Larry Hauser
Chapter 7: Egos, bundles, and multiple selves

  • Who dat?  “I”
    • Locke: “something, I know not what”
    • Hume: the no-self view … “bundle theory”
    • Kant’s transcendental ego: a formal (nonempirical) condition of thought that the “I” must accompany every perception.
      • Intentional mental state: I think that snow is white.
        • to think: a relation between
          • a subject = “I”
          • a propositional content thought =  snow is white
      • Sensations: I feel the coldness of the snow.
        • to feel: a relation between
          • a subject = “I”
          • a quale = the cold-feeling
    • Friedrich Nietzsche
      • A thought comes when “it” will and not when “I” will. Thus it is a falsification of the evidence to say that the subject “I” conditions the predicate “think.”
      • It is thought, to be sure, but that this “it” should be that old famous “I” is, to put it mildly, only a supposition, an assertion. Above all it is not an “immediate certainty.” … Our conclusion is here formulated out of our grammatical custom: “Thinking is an activity; every activity presumes something which is active, hence ….” 
    • Lichtenberg: “it’s thinking” a la “it’s raining”
      • a mere grammatical requirement
      • no proof of a thinking self

[…]

  • Ego vs. bundle theories (Derek Parfit (1987))
    • Ego: “there really is some kind of continuous self that is the subject of my experiences, that makes decisions, and so on.” (95)
      • Religions: Christianity, Islam, Hinduism
      • Philosophers: Descartes, Locke, Kant & many others (the majority view)
    • Bundle: “there is no underlying continuous and unitary self.” (95)
      • Religion: Buddhism
      • Philosophers: Hume, Nietzsche, Lichtenberg, Wittgenstein, Kripke(?), Parfit, Dennett {a stellar minority}
  • Hume v. Reid
    • David Hume: For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure.  I never can catch myself at any time without a perception, and never can observe anything but the perception.  (Hume 1739, Treatise I, VI, iv)
    • Thomas Reid: I am not thought, I am not action, I am not feeling: I am something which thinks and acts and feels. (1785)

Arete: History and Etymology

Arete (moral virtue)
Wikipedia

Arete (Greek: ἀρετή), in its basic sense, means “excellence of any kind”.[1] The term may also mean “moral virtue”.[1] In its earliest appearance in Greek, this notion of excellence was ultimately bound up with the notion of the fulfillment of purpose or function: the act of living up to one’s full potential.

The term from Homeric times onwards is not gender specific. Homer applies the term to both the Greek and Trojan heroes as well as to major female figures, such as Penelope, the wife of the Greek hero Odysseus. In the Homeric poems, Arete is frequently associated with bravery, but more often with effectiveness. The man or woman of Arete is a person of the highest effectiveness; they use all their faculties—strength, bravery and wit—to achieve real results. In the Homeric world, then, Arete involves all of the abilities and potentialities available to humans.

In some contexts, Arete is explicitly linked with human knowledge, where the expressions “virtue is knowledge” and “Arete is knowledge” are used interchangeably. The highest human potential is knowledge and all other human abilities are derived from this central capacity. If Arete is knowledge and study, the highest human knowledge is knowledge about knowledge itself; in this light, the theoretical study of human knowledge, which Aristotle called “contemplation”, is the highest human ability and happiness.[2]

History

The Ancient Greeks applied the term to anything: for example, the excellence of a chimney, the excellence of a bull to be bred and the excellence of a man. The meaning of the word changes depending on what it describes, since everything has its own peculiar excellence; the arete of a man is different from the arete of a horse. This way of thinking comes first from Plato, where it can be seen in the Allegory of the Cave.[3] In particular, the aristocratic class was presumed, essentially by definition, to be exemplary of arete: “The root of the word is the same as aristos, the word which shows superlative ability and superiority, and aristos was constantly used in the plural to denote the nobility.”[4]

By the 5th and 4th centuries BC, arete as applied to men had developed to include quieter virtues, such as dikaiosyne (justice) and sophrosyne (self-restraint). Plato attempted to produce a moral philosophy that incorporated this new usage,[5] but it was in the work of Aristotle that the doctrine of arete found its fullest flowering. Aristotle’s Doctrine of the Mean is a paradigm example of his thinking.

Arete has also been used by Plato when talking about athletic training and also the education of young boys. Stephen G. Miller delves into this usage in his book “Ancient Greek Athletics”. Aristotle is quoted as deliberating between education towards arete “…or those that are theoretical”.[6] Educating towards arete in this sense means that the boy would be educated towards things that are useful in life. However, even Plato himself says that arete is not something that can be agreed upon. He says, “Nor is there even an agreement about what constitutes arete, something that leads logically to a disagreement about the appropriate training for arete.”[7] To say that arete has a common definition of excellence or fulfillment may be an overstatement simply because it was very difficult to pinpoint arete, much less the proper ways to go about obtaining it. […]

Homer

In Homer’s Iliad and Odyssey, “arete” is used mainly to describe heroes and nobles and their mobile dexterity, with special reference to strength and courage, but it is not limited to this. Penelope’s arete, for example, relates to co-operation, for which she is praised by Agamemnon. The excellence of the gods generally included their power, but, in the Odyssey (13.42), the gods can grant excellence to a life, which is contextually understood to mean prosperity. Arete was also the name of King Alcinous’s wife.

According to Bernard Knox’s notes found in the Robert Fagles translation of The Odyssey, “arete” is also associated with the Greek word for “pray”, araomai.[8]

All Things Shining
by Hubert Dreyfus
pp. 61-63

Homer’s epic poems brought into focus a notion of arete, or excellence in life, that was at the center of the Greek understanding of human being.6 Many admirers of Greek culture have attempted to define this notion, but success here requires avoiding two prominent temptations. There is the temptation to patronize that we have already mentioned. But there is also a temptation to read a modern sensibility into Homer’s time. One standard translation of the Greek word arete as “virtue” runs the risk of this kind of retroactive reading: for any attempt to interpret the Homeric Greek notion of human excellence in terms of “virtue”—especially if one hears in this word its typical Christian or even Roman overtones—is bound to go astray. Excellence in the Greek sense involves neither the Christian notion of humility and love nor the Roman ideal of stoic adherence to one’s duty.7 Instead, excellence in the Homeric world depends crucially on one’s sense of gratitude and wonder.

Nietzsche was one of the first to understand that Homeric excellence bears little resemblance to modern moral agency. His view was that the Homeric world understood nobility in terms of the overpowering strength of noble warriors. The effect of the ensuing Judeo-Christian tradition, on this Nietzschean reading, was to enfeeble the Homeric understanding of excellence by substituting the meekness of the lamb for the strength and power of the noble warrior.8

Nietzsche was certainly right that the Homeric tradition valorizes the strong, noble hero; and he was right, too, that in some important sense the Homeric account of excellence is foreign to our basic moralizing assumptions. But there is something that the Nietzschean account leaves out. As Bernard Knox emphasizes, the Greek word arete is etymologically related to the Greek verb “to pray” (araomai).9 It follows that Homer’s basic account of human excellence involves the necessity of being in an appropriate relationship to whatever is understood to be sacred in the culture. Helen’s greatness, on this interpretation, is not properly measured in terms of the degree to which she is morally responsible for her actions.

What makes Helen great in Homer’s world is her ability to live a life that is constantly responsive to golden Aphrodite, the shining example of the sacred erotic dimension of existence. Likewise, Achilles had a special kind of receptivity to Ares and his warlike way of life; Odysseus had Athena, with her wisdom and cultural adaptability, to look out for him. Presumably, the master craftsmen of Homer’s world worked in the light of Hephaestus’s shining. In order to engage with this understanding of human excellence, we will have to think clearly about how the Homeric Greeks understood themselves. Why would it make sense to describe their lives in relation to the presence and absence of the gods?

Several questions focus this kind of approach. What is the phenomenon that Homer is responding to when he says that a god intervened or in some way took part in an action or event? Is this phenomenon recognizable to us, even if only marginally? And if Homer’s reference to the gods is something other than an attempt to pass off moral responsibility for one’s actions, then what exactly is it? Only by facing these questions head on can we understand whether it is possible—or desirable—to lure back Homer’s polytheistic gods.

The gods are essential to the Homeric Greek understanding of what it is to be a human being at all. As Peisistratus—the son of wise old Nestor—says toward the beginning of the Odyssey, “All men need the gods.”10 The Greeks were deeply aware of the ways in which our successes and our failures—indeed, our very actions themselves—are never completely under our control. They were constantly sensitive to, amazed by, and grateful for those actions that one cannot perform on one’s own simply by trying harder: going to sleep, waking up, fitting in, standing out, gathering crowds together, holding their attention with a speech, changing their mood, or indeed being filled with longing, desire, courage, wisdom, and so on. Homer sees each of these achievements as a particular god’s gift. To say that all men need the gods therefore is to say, in part at least, that we are the kinds of beings who are at our best when we find ourselves acting in ways that we cannot—and ought not—entirely take credit for.

The Discovery of the Mind
by Bruno Snell
pp. 158-160

The words for virtue and good, arete and agathos, are at first by no means clearly distinguished from the area of profit. In the early period they are not as palpably moral in content as might be supposed; we may compare the German terms Tugend and gut which originally stood for the ‘suitable’ (taugende) and the ‘fitting’ (cf. Gatte). When Homer says that a man is good, agathos, he does not mean thereby that he is morally unobjectionable, much less good-hearted, but rather that he is useful, proficient, and capable of vigorous action. We also speak of a good warrior or a good instrument. Similarly arete, virtue, does not denote a moral property but nobility, achievement, success and reputation. And yet these words have an unmistakable tendency toward the moral because, unlike ‘happiness’ or ‘profit’, they designate qualities for which a man may win the respect of his whole community. Arete is ‘ability’ and ‘achievement’, characteristics which are expected of a ‘good’, an ‘able’ man, an aner agathos. From Homer to Plato and beyond these words spell out the worth of a man and his work. Any change in their meaning, therefore, would indicate a reassessment of values. It is possible to show how at various times the formation and consolidation of social groups and even of states was connected with people’s ideas about the ‘good’. But that would be tantamount to writing a history of Greek culture. In Homer, to possess ‘virtue’ or to be ‘good’ means to realize one’s nature, and one’s wishes, to perfection. Frequently happiness and profit form the reward, but it is no such extrinsic prospect which leads men to virtue and goodness. The expressions contain a germ of the notion of entelechy. A Homeric hero, for instance, is capable of ‘reminding himself’, or of ‘experiencing’, that he is noble. ‘Use your experience to become what you are’ advises Pindar, who adheres to this image of arete. The ‘good’ man fulfils his proper function, prattei ta heautou, as Plato demands it; he achieves his own perfection. And in the early period this also entails that he is good in the eyes of others, for the notions and definitions of goodness are plain and uniform: a man appears to others as he is.

In the Iliad (11.404–410) Odysseus reminds himself that he is an aristocrat, and thereby resolves his doubts how he should conduct himself in a critical situation. He does it by concentrating on the thought that he belongs to a certain social order, and that it is his duty to fulfill the ‘virtue’ of that order. The universal which underlies the predication ‘I am a noble’ is the group; he does not reflect on an abstract ‘good’ but upon the circle of which he claims membership. It is the same as if an officer were to say: ‘As an officer I must do this or that,’ thus gauging his action by the rigid conception of honour peculiar to his caste.

Aretan is ‘to thrive’; arete is the objective which the early nobles attach to achievement and success. By means of arete the aristocrat implements the ideal of his order—and at the same time distinguishes himself above his fellow nobles. With his arete the individual subjects himself to the judgment of his community, but he also surpasses it as an individual. Since the days of Jacob Burckhardt the competitive character of the great Greek achievements has rightly been stressed. Well into the classical period, those who compete for arete are remunerated with glory and honour. The community puts its stamp of approval on the value which the individual sets on himself. Thus honour, timē, is even more significant than arete for the growth of the moral consciousness, because it is more evident, more palpable to all. From his earliest boyhood the young nobleman is urged to think of his glory and his honour; he must look out for his good name, and he must see to it that he commands the necessary respect. For honour is a very sensitive plant; wherever it is destroyed the moral existence of the loser collapses. Its importance is greater even than that of life itself; for the sake of glory and honour the knight is prepared to sacrifice his life.

pp. 169-172

The truth of the matter is that it was not the concept of justice but that of arete which gave rise to the call for positive individual achievement, the moral imperative which the early Greek community enjoins upon its members who in turn acknowledge it for themselves. A man may have purely egotistical motives for desiring virtue and achievement, but his group gives him considerably more credit for these ideals than if he were to desire profit or happiness. The community expects, and even demands, arete. Conversely a man who accomplishes a high purpose may convince himself so thoroughly that his deed serves the interests of a supra-personal, a universal cause that the alternative of egotism or altruism becomes irrelevant. What does the community require of the individual? What does the individual regard as universal, as eternal? These, in the archaic age, are the questions about which the speculations on arete revolve.

The problem remains simple as long as the individual cherishes the same values as the rest of his group. Given this condition, even the ordinary things in life are suffused with an air of dignity, because they are part of custom and tradition. The various daily functions, such as rising in the morning and the eating of meals, are sanctified by prayer and sacrifice, and the crucial events in the life of man—birth, marriage, burial—are for ever fixed and rooted in the rigid forms of cult. Life bears the imprint of a permanent authority which is divine, and all activity is, therefore, more than just personal striving. No one doubts the meaning of life; the hallowed tradition is carried on with implicit trust in the holy wisdom of its rules. In such a society, if a man shows unusual capacity he is rewarded as a matter of course. In Homer a signal achievement is, as one would expect, also honoured with a special permanence, through the song of the bard which outlasts the deed celebrated and preserves it for posterity. This simple concept is still to be found in Pindar’s Epinicians. The problem of virtue becomes more complex when the ancient and universally recognized ideal of chivalry breaks down. Already in Homeric times a differentiation sets in. As we have seen in the story of the quarrel over the arms of Achilles, the aretai become a subject for controversy. The word arete itself contains a tendency toward the differentiation of values, since it is possible to speak of the virtues of various men and various things. As more sections of society become aware of their own merit, they are less willing to conform to the ideal of the once-dominant class. It is discovered that the ways of men are diverse, and that arete may be attained in all sorts of professions. Whereas aristocratic society had been held together, not to say made possible by a uniform notion of arete, people now begin to ask what true virtue is. The crisis of the social system is at the same time the crisis of an ideal, and thus of morality. Archilochus says (fr. 41) that different men have their hearts quickened in various ways. But he also states, elaborating a thought which first crops up in the Odyssey: the mind of men is as Zeus ushers in each day, and they think whatever they happen to hit upon (fr. 68). One result of this splitting up of the various forms of life is a certain failure of nerve. Man begins to feel that he is changeable and exposed to many variable forces. This insight deepens the moral reflexions of the archaic period; the search for the good becomes a search for the permanent.

The topic of the virtues is especially prominent in the elegy. Several elegiac poets furnish lists of the various aretai which they exemplify by means of well-known myths. Their purpose is to clarify for themselves their own attitudes toward the conflicting standards of life. Theognis (699 ff.) stands at the end of this development; with righteous indignation he complains that the masses no longer have eyes for anything except wealth. For him material gain has, in contrast with earlier views, become an enemy of virtue.

The first to deal with this general issue is Tyrtaeus. His call to arms pronounces the Spartan ideal; perhaps he was the one to formulate that ideal for the first time. Nothing matters but the bravery of the soldier fighting for his country. Emphatically he rejects all other accomplishments and virtues as secondary: the swiftness of the runner in the arena, or the strength of the wrestler, or again physical beauty, wealth, royal power, and eloquence, are as nothing before bravery. In the Iliad also a hero best proves his virtue by standing firm against the enemy, but that is not his only proof; the heroic figures of Homer dazzle us precisely because of their richness in human qualities. Achilles is not only brave but also beautiful, ‘swift of foot’, he knows how to sing, and so forth. Tyrtaeus sharply reduces the scope of the older arete; what is more, he goes far beyond Homer in magnifying the fame of fortitude and the ignominy which awaits the coward. Of the fallen he actually says that they acquire immortality (9.32). This one-sidedness is due to the fact that the community has redoubled its claim on the individual; Sparta in particular taxed the energies of its citizenry to the utmost during the calamitous period of the Messenian wars. The community is a thing of permanence for whose sake the individual mortal has to lay down his life, and in whose memory lies his only chance for any kind of survival. Even in Tyrtaeus, however, these claims of the group do not lead to a termite morality. Far from prescribing a blind and unthinking service to the whole, or a spirit of slavish self-sacrifice, Tyrtaeus esteems the performance of the individual as a deed worthy of fame. This is a basic ingredient of arete which, in spite of countless shifts and variations, is never wholly lost.

Philosophy Before Socrates
by Richard D. McKirahan
pp. 366-369

Aretē and Agathos These two basic concepts of Greek morality are closely related and not straightforwardly translatable into English. As an approximation, aretē can be rendered “excellence” or “goodness” (sometimes “virtue”), and agathos as “excellent” or “good.” The terms are related in that a thing or person is agathos if and only if it has aretē and just because it has aretē. The concepts apply to objects, conditions, and actions as well as to humans. They are connected with the concept of ergon (plural, erga), which may be rendered as “function” or “characteristic activity.” A good (agathos) person is one who performs human erga well, and similarly a good knife is a knife that performs the ergon of a knife well. The ergon of a knife is cutting, and an agathos knife is one that cuts well. Thus, the aretē of a knife is the qualities or characteristics a knife must have in order to cut well. Likewise, if a human ergon can be identified, an agathos human is one who can and on appropriate occasions does perform that ergon well, and human aretē is the qualities or characteristics that enable him or her to do so. The classical discussion of these concepts occurs after our period, in Aristotle,6 but he is only making explicit ideas that go back to Homer and which throw light on much of the pre-philosophical ethical thought of the Greeks.

This connection of concepts makes it automatic, virtually an analytic truth, that the right goal for a person—any person—is to be or become agathos. Even if that goal is unreachable for someone, the aretē–agathos standard still stands as an ideal against which to measure one’s successes and failures. However, there is room for debate over the nature of human erga, both whether there is a set of erga applicable to all humans and relevant to aretē and, supposing that there is such a set of erga, what those erga are. The existence of the aretē–agathos standard makes it vitally important to settle these issues, for otherwise human life is left adrift with no standards of conduct. […]

The moral scene Homer presents is appropriate to the society it represents and quite alien to our own. It is the starting point for subsequent moral speculation which no one in the later Greek tradition could quite forget. The development of Greek moral thought through the Archaic and Classical periods can be seen as the gradual replacement of the competitive by the cooperative virtues as the primary virtues of conduct and as the increasing recognition of the significance of people’s intentions as well as their actions.7

Rapid change in Greek society in the Archaic and Classical periods called for new conceptions of the ideal human and the ideal human life and activities. The Archaic period saw different kinds of rulers from the Homeric kings, and individual combat gave way to the united front of a phalanx of hoplites (heavily armed warriors). Even though the Homeric warrior-king was no longer a possible role in society, the qualities of good birth, beauty, courage, honor, and the abilities to give good counsel and rule well remained. Nevertheless, the various strands of the Homeric heroic ideal began to unravel. In particular, good birth, wealth, and fighting ability no longer automatically went together. This situation forced the issue: what are the best qualities we can possess? What constitutes human aretē? The literary sources contain conflicting claims about the best life for a person, the best kind of person to be, and the relative merits of qualities thought to be ingredients of human happiness. In one way or another these different conceptions of human excellence have Homeric origins, though they diverge from Homer’s conception and from one another.

Lack of space makes it impossible to present the wealth of materials that bear on this subject.8 I will confine discussion to two representatives of the aristocratic tradition who wrote at the end of the Archaic period. Pindar shows how the aristocratic ideal had survived and been transformed from the Homeric conception and how vital it remained as late as the early fifth century, and Theognis reveals how social, political, and economic reality was undermining that ideal.

p. 374

The increase in wealth and the shift in its distribution which had begun by the seventh century led to profound changes in the social and political scenes in the sixth and forced a wedge in among the complex of qualities which traditionally constituted aristocratic aretē. Pindar’s unified picture in which wealth, power, and noble birth tend to go together became ever less true to contemporary reality.

The aristocratic response to this changed situation receives its clearest expression in the poems attributed to Theognis and composed in the sixth and early fifth centuries. Even less than with Pindar can we find a consistent set of views advocated in these poems, but among the most frequently recurring themes are the view that money does not make the man, that many undeserving people are now rich and many deserving people (deserving because of their birth and social background) are now poor. It is noteworthy how Theognis plays on the different connotations of uses of the primary terms of value, agathos and aretē, and their opposites kakos and kakia: morally good vs. evil; well-born, noble vs. low-born; and politically and socially powerful vs. powerless. Since the traditional positive attributes no longer regularly all went together, it was important to decide which are most important, indeed which are the essential ingredients of human aretē.

pp. 379-382

In short, Protagoras taught his students how to succeed in public and private life. What he claimed to teach is, in a word, aretē. That this was his boast follows from the intimate connection between agathos and aretē as well as from the fact that a person with aretē is one who enjoys success, as measured by current standards. Anyone with the abilities Protagoras claimed to teach had the keys to a successful life in fifth-century Athens.

In fact, the key to success was rhetoric, the art of public speaking, which has a precedent in the heroic conception of aretē, which included excellence in counsel. But the Sophists’ emphasis on rhetoric must not be understood as hearkening back to Homeric values. Clear reasons why success in life depended on the ability to speak well in public can be found in fifth-century politics and society. […]

That is not to say that every kind of success depended on rhetoric. It could not make you successful in a craft like carpentry and would not on its own make you a successful military commander. Nor is it plausible that every student of Protagoras could have become another Pericles. Protagoras acknowledged that natural aptitude was required over and above diligence. […] Protagoras recognized that he could not make a silk purse out of a sow’s ear, but he claimed to be able to develop a (sufficiently young) person’s abilities to the greatest extent possible.

Pericles was an effective counselor in part because he could speak well but also by dint of his personality, experience, and intelligence. To a large extent these last three factors cannot be taught, but rhetoric can be offered as a tekhnē, a technical art or skill which has rules of its own and which can be instilled through training and practice. In these ways rhetoric is like medicine, carpentry, and other technical arts, but it is different in its seemingly universal applicability. Debates can arise on any conceivable subject, including technical ones, and rhetorical skill can be turned to the topic at hand whatever it may be. The story goes that Gorgias used his rhetorical skill to convince medical patients to undergo surgery when physicians failed to persuade them. Socrates turned the tables on the Sophists, arguing that if rhetoric has no specific subject matter, then so far from being a universal art, it should not be considered an art at all. And even if we grant that rhetoric is an art that can be taught, it remains controversial whether aretē can be taught and in what aretē consists. […]

The main charges against the Sophists are of two different sorts. The first is the charge of prostituting themselves. Plato emphasizes the money-making aspect of the Sophists’ work, which he uses as one of his chief criteria for determining that Socrates was not a Sophist. This charge contains two elements: the Sophists teach aretē for money, and they teach it to anyone who pays. Both elements have aristocratic origins. Traditionally aretē was learned from one’s family and friends and came as the result of a long process of socialization beginning in infancy. Such training and background can hardly be bought. Further, according to the aristocratic mentality most people are not of the right type, the appropriate social background, to aspire to aretē.

Lila
by Robert Pirsig
pp. 436-442

Digging back into ancient Greek history, to the time when this mythos-to-logos transition was taking place, Phædrus noted that the ancient rhetoricians of Greece, the Sophists, had taught what they called aretê, which was a synonym for Quality. Victorians had translated aretê as “virtue” but Victorian “virtue” connoted sexual abstinence, prissiness and a holier-than-thou snobbery. This was a long way from what the ancient Greeks meant. The early Greek literature, particularly the poetry of Homer, showed that aretê had been a central and vital term.

With Homer Phædrus was certain he’d gone back as far as anyone could go, but one day he came across some information that startled him. It said that by following linguistic analysis you could go even further back into the mythos than Homer. Ancient Greek was not an original language. It was descended from a much earlier one, now called the Proto-Indo-European language. This language has left no fragments but has been derived by scholars from similarities between such languages as Sanskrit, Greek and English which have indicated that these languages were fallouts from a common prehistoric tongue. After thousands of years of separation from Greek and English the Hindi word for “mother” is still “Ma.” Yoga both looks like and is translated as “yoke.” The reason an Indian rajah’s title sounds like “regent” is because both terms are fallouts from Proto-Indo-European. Today a Proto-Indo-European dictionary contains more than a thousand entries with derivations extending into more than one hundred languages.

Just for curiosity’s sake Phædrus decided to see if aretê was in it. He looked under the “a” words and was disappointed to find it was not. Then he noted a statement that said that the Greeks were not the most faithful to the Proto-Indo-European spelling. Among other sins, the Greeks added the prefix “a” to many of the Proto-Indo-European roots. He checked this out by looking for aretê under “r.” This time a door opened.

The Proto-Indo-European root of aretê was the morpheme rt. There, beside aretê, was a treasure room of other derived “rt” words: “arithmetic,” “aristocrat,” “art,” “rhetoric,” “worth,” “rite,” “ritual,” “wright,” “right (handed)” and “right (correct).” All of these words except arithmetic seemed to have a vague thesaurus-like similarity to Quality. Phædrus studied them carefully, letting them soak in, trying to guess what sort of concept, what sort of way of seeing the world, could give rise to such a collection.

When the morpheme appeared in aristocrat and arithmetic the reference was to “firstness.” Rt meant first. When it appeared in art and wright it seemed to mean “created” and “of beauty.” “Ritual” suggested repetitive order. And the word right has two meanings: “right-handed” and “moral and esthetic correctness.” When all these meanings were strung together a fuller picture of the rt morpheme emerged. Rt referred to the “first, created, beautiful repetitive order of moral and esthetic correctness.” […]

There was just one thing wrong with this Proto-Indo-European discovery, something Phædrus had tried to sweep under the carpet at first, but which kept creeping out again. The meanings, grouped together, suggested something different from his interpretation of aretê. They suggested “importance” but it was an importance that was formal and social and procedural and manufactured, almost an antonym to the Quality he was talking about. Rt meant “quality” all right but the quality it meant was static, not Dynamic. He had wanted it to come out the other way, but it looked as though it wasn’t going to do it. Ritual. That was the last thing he wanted aretê to turn out to be. Bad news. It looked as though the Victorian translation of aretê as “virtue” might be better after all since “virtue” implies ritualistic conformity to social protocol. […]

Rta. It was a Sanskrit word, and Phædrus remembered what it meant: Rta was the “cosmic order of things.” Then he remembered he had read that the Sanskrit language was considered the most faithful to the Proto-Indo-European root, probably because the linguistic patterns had been so carefully preserved by the Hindu priests. […]

Rta, from the oldest portion of the Rg Veda, which was the oldest known writing of the Indo-Aryan language. The sun god, Sūrya, began his chariot ride across the heavens from the abode of rta. Varuna, the god for whom the city in which Phædrus was studying was named, was the chief support of rta.

Varuna was omniscient and was described as ever witnessing the truth and falsehood of men—as being “the third whenever two plot in secret.” He was essentially a god of righteousness and a guardian of all that is worthy and good. The texts had said that the distinctive feature of Varuna was his unswerving adherence to high principles. Later he was overshadowed by Indra who was a thunder god and destroyer of the enemies of the Indo-Aryans. But all the gods were conceived as “guardians of rta,” willing the right and making sure it was carried out.

One of Phædrus’s old school texts, written by M. Hiriyanna, contained a good summary: “Rta, which etymologically stands for ‘course,’ originally meant ‘cosmic order,’ the maintenance of which was the purpose of all the gods; and later it also came to mean ‘right,’ so that the gods were conceived as preserving the world not merely from physical disorder but also from moral chaos. The one idea is implicit in the other: and there is order in the universe because its control is in righteous hands.…”

The physical order of the universe is also the moral order of the universe. Rta is both. This was exactly what the Metaphysics of Quality was claiming. It was not a new idea. It was the oldest idea known to man.

This identification of rta and aretê was enormously valuable, Phædrus thought, because it provided a huge historical panorama in which the fundamental conflict between static and Dynamic Quality had been worked out. It answered the question of why aretê meant ritual. Rta also meant ritual. But unlike the Greeks, the Hindus in their many thousands of years of cultural evolution had paid enormous attention to the conflict between ritual and freedom. Their resolution of this conflict in the Buddhist and Vedantist philosophies is one of the profound achievements of the human mind.

Pagan Ethics: Paganism as a World Religion
by Michael York
pp. 59-60

Pirsig contends that Plato incorporated the arete of the Sophists into his dichotomy between ideas and appearances — where it was subordinated to Truth. Once Plato identifies the True with the Good, arete’s position is usurped by “dialectically determined truth.” This, in turn, allows Plato to demote the Good to a lower order and minor branch of knowledge. For Pirsig, the Sophists were those Greek philosophers who exalted quality over truth; they were the true champions of arete or excellence. For a pagan quest for the ethical that develops from an idolatrous understanding of the physical, while Aristotle remains an important consideration, it is to the Sophists (particularly Protagoras and Prodicus, and Pirsig’s understanding of them) and to a reconstruction of their underlying humanist position that the most important answers are perhaps to be framed, if not found as well.

A basic pagan position is an acceptance of the appetites — in fact, their celebration rather than their condemnation. We find the most unbridled expression of the appetites in the actions of the young. Youth may engage in binge-drinking, vandalism, theft, promiscuity and profligate experimentation. Pagan perspectives may recognize the inherent dangers in these, as there are dangers in life itself. But they also trust the overall process of learning. In paganism, morality has a much greater latitude than it does in the transcendental philosophy of a Pythagoras, Plato, or Plotinus: it may veer toward a form of relativism, but its ultimate check is always the sanctity of other animate individuals. An it harm none, do what ye will. The pagan ethic must be found within the appetites and not in their denial.

In fact, paganism is part of a protest against Platonic assertion. The wider denial is that of nature herself. Nature denies the Platonic by refusing to conform to the Platonic ideal. It insists on moments of chaos, the epagomenae, the carnival, that overlap between the real and the ideal which is itself a metaphor for reality. The actual year is a refusal to cooperate with the mathematically ideal year of 360 days: tantalizingly close, but only that.

In addition, pagans have always loved asking: what is arete? This is the fundamental question we encounter with the Sophists, Plato and Aristotle. It is the question that is before us still. The classics variously considered both happiness and the good as alternative answers. The Hedonists pick happiness — but a particular kind of happiness. The underlying principle recognized behind all these possibilities is arete, ‘excellence, the best,’ however it is embodied — whether god, goddess, goods, the good, gods, virtue, happiness, pleasure or all of these together. Arete is that to which both individual and community aspire. Each wants one’s own individual way of putting it together in excellent fashion — but at the same time wants some commensurable overlap of the individual way with the community way.

What is the truth of the historical claims about Greek philosophy in Zen and the Art of Motorcycle Maintenance?
answer by Ammon Allred

Arete is usually translated as “virtue,” which is certainly connected up with the good, “agathon” — but in Plato an impersonal Good is probably more important than aletheia or truth. See, for instance, the central images at the end of Book VI of the Republic, where the Good is called the “Father of the Sun.” The same holds in the Philebus. And it wouldn’t be right to say that Plato (or Aristotle) thought virtue was part of some small branch called “ethics” (Plato doesn’t divide his philosophy up this way; Aristotle does — although then we get into the fact that we don’t have the dialogues he wrote — but still, what he means by ethics is far broader than what we mean).

Certainly the Sophists pushed for a humanistic account of the Good, whereas Plato’s was far more impersonal. And Plato himself had a complex relationship to the Sophists (consider the dialogue Protagoras, where Socrates and Protagoras both end up about equally triumphant).

That said, Pirsig is almost certainly right about Platonism — that is to say, the approach to philosophy that has been taught as though it were Plato’s philosophy. Certainly, the sophists have gotten a bad rap because of the view that Socrates and Plato were taken to have of them; but even there, many philosophers have tried to rehabilitate them: most famously, Nietzsche.