Battle of Voices of Authorization in the World and in Ourselves

New Feelings: Podcast Passivity
by Suzannah Showler

My concern is that on some level, I’m prone to mistake any voice that pours so convincingly into my brain for my own. And maybe it’s not even a mistake, per se, so much as a calculated strategy on the part of my ego to maintain its primacy, targeting and claiming any foreign object that would stray so far into the inner-sanctum of my consciousness. Whether the medium is insidious, my mind a greedy assimilation machine, or both, it seems that at least some of the time, podcasts don’t just drown out my inner-monologue — they actually overwrite it. When I listen to a podcast, I think some part of me believes I’m only hearing myself think.

Twentieth-century critics worried about this, too. Writing sometime around the late 1930s, Theodor Adorno theorized that a solitary listener under the influence of radio is vulnerable to persuasion by an anonymous authority. He writes: “The deeper this [radio] voice is involved within his own privacy, the more it appears to pour out of the cells of his more intimate life; the more he gets the impression that his own cupboard, his own photography, his own bedroom speaks to him in a personal way, devoid of the intermediary stage of the printed words; the more perfectly he is ready to accept wholesale whatever he hears. It is just this privacy which fosters the authority of the radio voice and helps to hide it by making it no longer appear to come from outside.”

I’ll admit that I have occasionally been gripped by false memories as a result of podcasts — been briefly sure that I’d seen a TV show I’d never watched, or convinced that it was a friend, not a professional producer, who told me some great anecdote. But on the whole, my concern is less that I am being brainwashed and more that I’m indulging in something deeply avoidant: filling my head with ideas without actually having to do the messy, repetitive, boring, or anxious work of making meaning for myself. It’s like downloading a prefabbed stream of consciousness and then insisting it’s DIY. The effect is twofold: a podcast distracts me from the tedium of being alone with myself, while also convincingly building a rich, highly-produced version of my inner life. Of course that’s addictive — it’s one of the most effective answers to loneliness and self-importance I can imagine.

Being Your Selves: Identity R&D on alt Twitter
by Aaron Z. Lewis

Digital masks are making the static and immortal soul of the Renaissance seem increasingly out of touch. In an environment of info overload, it’s easy to lose track of where “my” ideas come from. My brain is filled with free-floating thoughts that are totally untethered from the humans who came up with them. I speak and think in memes — a language that’s more like the anonymous manuscript culture of medieval times than the individualist Renaissance era. Everything is a remix, including our identities. We wear our brains outside of our skulls and our nerves outside our skin. We walk around with other people’s voices in our heads. The self is in the network rather than a node.

The ability to play multiple characters online means that the project of crafting your identity now extends far beyond your physical body. In his later years, McLuhan predicted that this newfound ability would lead to a society-wide identity crisis:

The instant nature of electric-information movement is decentralizing — rather than enlarging — the family of man into a new state of multitudinous tribal existences. Particularly in countries where literate values are deeply institutionalized, this is a highly traumatic process, since the clash of old segmented visual culture and the new integral electronic culture creates a crisis of identity, a vacuum of the self, which generates tremendous violence — violence that is simply an identity quest, private or corporate, social or commercial.

As I survey the cultural landscape of 2020, it seems that McLuhan’s predictions have unfortunately come true. More than ever before, people are exposed to a daily onslaught of world views and belief systems that threaten their identities. Social media has become the battlefield for a modern-day Hobbesian war of all-against-all. And this conflict has leaked into the allegedly “offline” world.

Voice and Perspective

“No man should [refer to himself in the third person] unless he is the King of England — or has a tapeworm.”
~ Mark Twain

“Love him or hate him, Trump is a man who is certain about what he wants and sets out to get it, no holds barred. Women find his power almost as much of a turn-on as his money.”
~ Donald Trump

The self is a confusing matter. As always, the question is who is speaking and who is listening. Clues can come from the language that is used, and the language we use shapes human experience, as studied under linguistic relativity. Speaking in the first person may be a relatively recent innovation of human society and psyche:

“An unmistakable individual voice, using the first person singular “I,” first appeared in the works of lyric poets. Archilochus, who lived in the first half of the seventh century B.C., sang his own unhappy love rather than assume the role of a spectator describing the frustrations of love in others. . . [H]e had in mind an immaterial sort of soul, with which Homer was not acquainted” (Yi-Fu Tuan, Segmented Worlds and Self, p. 152).

The autobiographical self requires the self-authorization of Jaynesian narrative consciousness. The emergence of the egoic self is the fall into historical time, an issue too complex for discussion here (see Julian Jaynes’ classic work or the diverse Jaynesian scholarship it inspired, or look at some of my previous posts on the topic).

Consider the mirror effect. When hunter-gatherers encounter a mirror for the first time there is what is called “the tribal terror of self-recognition” (Edmund Carpenter as quoted by Philippe Rochat, from Others in Mind, p. 31). “After a frightening reaction,” Carpenter wrote about the Biamis of Papua New Guinea, “they become paralyzed, covering their mouths and hiding their heads — they stood transfixed looking at their own images, only their stomach muscles betraying great tension.”

Research has shown that heavy use of the first person is associated with depression, anxiety, and other distressing emotions. Oddly, this full immersion in subjectivity can lead to depressive depersonalization and depressive realism — the individual sometimes passes through the self and into some other state. In that other state, I’ve noticed, silence befalls the mind: the ‘I’ is lost and the inner dialogue goes silent. One sees the world as if coldly detached, as if outside of it all.

Third person is stranger, and it has a much more ancient pedigree. In the modern mind, third person is often taken as an effect of narcissistic inflation of the ego, as seen with celebrities speaking of themselves in terms of their media identities. But in other countries and at other times, it has been an indication of religious humility or a spiritual shifting of perspective (possibly expressing the belief that only God can speak of Himself as ‘I’).

There is also the Batman effect. Children act more capably and with greater perseverance when speaking of themselves in the third person, specifically as a superhero character. As with religious practice, this serves the purpose of distancing from emotion. Yet a sense of self can simultaneously be strengthened when the individual becomes identified with a character. This is similar to celebrities who turn their social identities into something akin to mythological figures. Or, just as a child can be encouraged to invoke a favorite superhero to stand in for their underdeveloped ego-self, a religious true believer can speak of God or the Holy Spirit working through them. There is immense power in this.

This might point to the Jaynesian bicameral mind. When an Australian Aborigine ritually sings a Songline, he is invoking a god-spirit-personality. That third person of the mythological story shifts the Aboriginal experience of self and reality. The Aborigine has as many selves as he has Songlines, each a self-contained worldview and way of being. This could be a more natural expression of human nature… or at least an easier and less taxing mode of being (Hunger for Connection). Jaynes noted that schizophrenics with their weakened and loosened egoic boundaries have seemingly inexhaustible energy.

He suspected this might explain why archaic humans could do seemingly impossible tasks such as building pyramids, something moderns could only accomplish through the use of our largest and most powerful cranes. Yet the early Egyptians managed it with a small, impoverished, and malnourished population that lacked even the basic infrastructure of roads and bridges. Similarly, this might explain how many tribal people can dance for days on end with little rest and no food. And maybe it is also how armies can collectively march for days on end in a way no individual could (Music and Dance on the Mind).

Upholding rigid egoic boundaries is tiresome work. This might be why, when individuals reach exhaustion under stress (mourning a death, getting lost in the wilderness, etc.), they can experience what John Geiger called the third man factor, the appearance of another self, often with its own separate voice. Apparently, when all else fails, this is the state of mind we fall back on, and it’s a common experience at that. Furthermore, what Jaynes describes as a negatory experience can lead to negatory possession: the re-emergence of a bicameral-like mind in which a third-person identity becomes a fully expressed personality of its own, something that can happen through trauma-induced dissociation and splitting:

“Like schizophrenia, negatory possession usually begins with some kind of an hallucination. 11 It is often a castigating ‘voice’ of a ‘demon’ or other being which is ‘heard’ after a considerable stressful period. But then, unlike schizophrenia, probably because of the strong collective cognitive imperative of a particular group or religion, the voice develops into a secondary system of personality, the subject then losing control and periodically entering into trance states in which consciousness is lost, and the ‘demon’ side of the personality takes over.”

Jaynes noted that those who are abused in childhood are more easily hypnotized. Their egoic boundaries never fully develop, or else large gaps are left in this self-construction, gaps through which other voices can slip in. This relates to what has variously been referred to as the porous self, the thin boundary type, fantasy proneness, etc. Compared to those who have never experienced trauma, I bet such people would find it easier to speak in the third person and, when doing so, would show a greater shift in personality and behavior.

As for first person subjectivity, it has its own peculiarities. I think of the association between addiction and individuality, as explored by Johann Hari and as elaborated in my own writings (Individualism and Isolation; To Put the Rat Back in the Rat Park; & The Agricultural Mind). As the ego is a tiresome project that depletes one’s reserves, maybe it’s the energy drain that causes the depression, irritability, and such. A person with such a guarded sense of self would be resistant to speaking in the third person, finding it hard to escape the trap of ego they’ve so carefully constructed. So many of us have fallen under its sway and can’t imagine anything else (The Spell of Inner Speech). That is probably why it so often requires trauma to break open our psychological defenses.

Besides trauma, many moderns have sought to escape the egoic prison through religious practices. Ancient methods include fasting, meditation, and prayer — these are common across the world. Fasting, by the way, fundamentally alters the functioning of the body and mind through ketosis (also the result of a very low-carb diet), something I’ve speculated may have been a supporting factor for the bicameral mind and related to the much earlier cultural preference for psychedelics over addictive stimulants, an entirely different discussion (“Yes, tea banished the fairies.”; & Autism and the Upper Crust). The simplest method of all is using third person language until it becomes a new habit of mind, something that might require a long period of practice to feel natural.

The modern mind has always been under stress. That is because it is the source of that stress. It’s not a stable and sustainable way of being in the world (The Crisis of Identity). Rather, it’s a transitional state, and all of modernity has been a centuries-long stage of transformation into something else. There is an impulse hidden within, if we could only trigger the release of the locking mechanism (Lock Without a Key). The language of perspectives, as Scott Preston explores (The Three Gems and The Cross of Reality), tells us something important about our predicament. Words such as ‘I’, ‘you’, and the rest aren’t merely words. In language, we discover our humanity as we come to know the other.

* * *

Are Very Young Children Stuck in the Perpetual Present?
by Jesse Bering

Interestingly, however, the authors found that the three-year-olds were significantly more likely to refer to themselves in the third person (using their first names rather than pronouns, and saying that the sticker is on “his” or “her” head) than were the four-year-olds, who used first-person pronouns (“me” and “my head”) almost exclusively. […]

Povinelli has pointed out the relevancy of these findings to the phenomenon of “infantile amnesia,” which tidily sums up the curious case of most people being unable to recall events from their first three years of life. (I spent my first three years in New Jersey, but for all I know I could have spontaneously appeared as a four-year-old in my parents’ bedroom in Virginia, which is where I have my first memory.) Although the precise neurocognitive mechanisms underlying infantile amnesia are still not very well understood, escaping such a state of the perpetual present would indeed seemingly require a sense of the temporally enduring, autobiographical self.

5 Reasons Shaq and Other Athletes Refer to Themselves in the Third Person
by Amelia Ahlgren

“Illeism,” or the act of referring to oneself in the third person, is an epidemic in the sports world.

Unfortunately for humanity, the cure is still unknown.

But if we’re forced to listen to these guys drone on about an embodiment of themselves, we might as well guess why they do it.

Here are five reasons some athletes are allergic to using the word “I.”

  1. Lag in Linguistic Development (Immaturity)
  2. Reflection of Egomania
  3. Amp-Up Technique
  4. Pure Intimidation
  5. Goofiness

Rene Thinks, Therefore He Is. You?
by Richard Sandomir

Some strange, grammatical, mind-body affliction is making some well-known folks in sports and politics refer to themselves in the third person. It is as if they have stepped outside their bodies. Is this detachment? Modesty? Schizophrenia? If this loopy verbal quirk were simple egomania, then Louis XIV might have said, “L’etat, c’est Lou.” He did not. And if it were merely a sign of one’s overweening power, then Queen Victoria would not have invented the royal we (“we are not amused”) but rather the royal she. She did not.

Lately, though, some third persons have been talking in a kind of royal he:

* Accepting the New York Jets’ $25 million salary and bonus offer, the quarterback Neil O’Donnell said of his former team, “The Pittsburgh Steelers had plenty of opportunities to sign Neil O’Donnell.”

* As he pushed to be traded from the Los Angeles Kings, Wayne Gretzky said he did not want to wait for the Kings to rebuild “because that doesn’t do a whole lot of good for Wayne Gretzky.”

* After his humiliating loss in the New Hampshire primary, Senator Bob Dole proclaimed: “You’re going to see the real Bob Dole out there from now on.”

These people give you the creepy sense that they’re not talking to you but to themselves. To a first, second or third person’s ear, there’s just something missing. What if, instead of “I am what I am,” we had “Popeye is what Popeye is”?

Vocative self-address, from ancient Greece to Donald Trump
by Ben Zimmer

Earlier this week on Twitter, Donald Trump took credit for a surge in the Consumer Confidence Index, and with characteristic humility, concluded the tweet with “Thanks Donald!”

The “Thanks Donald!” capper led many to muse about whether Trump was referring to himself in the second person, the third person, or perhaps both.

Since English only marks grammatical person on pronouns, it’s not surprising that there is confusion over what is happening with the proper name “Donald” in “Thanks, Donald!” We associate proper names with third-person reference (“Donald Trump is the president-elect”), but a name can also be used as a vocative expression associated with second-person address (“Pleased to meet you, Donald Trump”). For more on how proper names and noun phrases in general get used as vocatives in English, see two conference papers from Arnold Zwicky: “Hey, Whatsyourname!” (CLS 10, 1974) and “Isolated NPs” (Semantics Fest 5, 2004).

The use of one’s own name in third-person reference is called illeism. Arnold Zwicky’s 2007 Language Log post, “Illeism and its relatives” rounds up many examples, including from politicians like Bob Dole, a notorious illeist. But what Trump is doing in tweeting “Thanks, Donald!” isn’t exactly illeism, since the vocative construction implies second-person address rather than third-person reference. We can call this a form of vocative self-address, wherein Trump treats himself as an addressee and uses his own name as a vocative to create something of an imagined interior dialogue.

Give me that Prime Time religion
by Mark Schone

Around the time football players realized end zones were for dancing, they also decided that the pronouns “I” and “me,” which they used an awful lot, had worn out. As if to endorse the view that they were commodities, cartoons or royalty — or just immune to introspection — athletes began to refer to themselves in the third person.

It makes sense, therefore, that when the most marketed personality in the NFL gets religion, he announces it in the weirdly detached grammar of football-speak. “Deion Sanders is covered by the blood of Jesus now,” writes Deion Sanders. “He loves the Lord with all his heart.” And in Deion’s new autobiography, the Lord loves Deion right back, though the salvation he offers third-person types seems different from what mere mortals can expect.

Referring to yourself in the third person
by Tetsuo

It does seem to be a stylistic thing in formal Chinese. I’ve come across a couple of articles about artists by the artist in question where they’ve referred to themselves in the third person throughout. And quite a number of politicians do the same, I’ve been told.

Illeism
from Wikipedia

Illeism in everyday speech can have a variety of intentions depending on context. One common usage is to impart humility, a common practice in feudal societies and other societies where honorifics are important to observe (“Your servant awaits your orders”), as well as in master–slave relationships (“This slave needs to be punished”). Recruits in the military, mostly United States Marine Corps recruits, are also often made to refer to themselves in the third person, such as “the recruit,” in order to reduce the sense of individuality and enforce the idea of the group being more important than the self.[citation needed] The use of illeism in this context imparts a sense of lack of self, implying a diminished importance of the speaker in relation to the addressee or to a larger whole.

Conversely, in different contexts, illeism can be used to reinforce self-promotion, as used to sometimes comic effect by Bob Dole throughout his political career.[2] This was made particularly notable during the 1996 United States presidential election and was lampooned broadly in popular media for years afterwards.

Deepanjana Pal of Firstpost noted that speaking in the third person “is a classic technique used by generations of Bollywood scriptwriters to establish a character’s aristocracy, power and gravitas.”[3] Conversely, third person self referral can be associated with self-irony and not taking oneself too seriously (since the excessive use of pronoun “I” is often seen as a sign of narcissism and egocentrism[4]), as well as with eccentricity in general.

In certain Eastern religions, like Hinduism or Buddhism, this is sometimes seen as a sign of enlightenment, since by doing so, an individual detaches his eternal self (atman) from the body related one (maya). Known illeists of that sort include Swami Ramdas,[5] Ma Yoga Laxmi,[6] Anandamayi Ma,[7] and Mata Amritanandamayi.[8] Jnana yoga actually encourages its practitioners to refer to themselves in the third person.[9]

Young children in Japan commonly refer to themselves by their own name (a habit probably picked up from their elders, who would normally refer to them by name). This is due to the normal Japanese way of speaking, where referring to another in the third person is considered more polite than using the Japanese words for “you”, like Omae. More explanation is given in Japanese pronouns, though as the children grow older they normally switch over to using first person references. Japanese idols also may refer to themselves in the third person so as to give off the feeling of childlike cuteness.

Four Paths to the Goal
from Sheber Hinduism

Jnana yoga is a concise practice made for intellectual people. It is the quickest path to the top but it is the steepest. The key to jnana yoga is to contemplate the inner self and find who our self is. Our self is Atman and by finding this we have found Brahman. Thinking in third person helps move us along the path because it helps us consider who we are from an objective point of view. As stated in the Upanishads, “In truth, who knows Brahman becomes Brahman.” (Novak 17).

Non-Reactivity: The Supreme Practice of Everyday Life
by Martin Schmidt

Respond with non-reactive awareness: consider yourself a third-person observer who watches your own emotional responses arise and then dissipate. Don’t judge, don’t try to change yourself; just observe! In time this practice will begin to cultivate a third-person perspective inside yourself that sometimes is called the Inner Witness.[4]

Frequent ‘I-Talk’ may signal proneness to emotional distress
from Science Daily

Researchers at the University of Arizona found in a 2015 study that frequent use of first-person singular pronouns — I, me and my — is not, in fact, an indicator of narcissism.

Instead, this so-called “I-talk” may signal that someone is prone to emotional distress, according to a new, follow-up UA study forthcoming in the Journal of Personality and Social Psychology.

Research at other institutions has suggested that I-talk, though not an indicator of narcissism, may be a marker for depression. While the new study confirms that link, UA researchers found an even greater connection between high levels of I-talk and a psychological disposition of negative emotionality in general.

Negative emotionality refers to a tendency to easily become upset or emotionally distressed, whether that means experiencing depression, anxiety, worry, tension, anger or other negative emotions, said Allison Tackman, a research scientist in the UA Department of Psychology and lead author of the new study.

Tackman and her co-authors found that when people talk a lot about themselves, it could point to depression, but it could just as easily indicate that they are prone to anxiety or any number of other negative emotions. Therefore, I-talk shouldn’t be considered a marker for depression alone.

Talking to yourself in the third person can help you control emotions
from Science Daily

The simple act of silently talking to yourself in the third person during stressful times may help you control emotions without any additional mental effort than what you would use for first-person self-talk — the way people normally talk to themselves.

A first-of-its-kind study led by psychology researchers at Michigan State University and the University of Michigan indicates that such third-person self-talk may constitute a relatively effortless form of self-control. The findings are published online in Scientific Reports, a Nature journal.

Say a man named John is upset about recently being dumped. By simply reflecting on his feelings in the third person (“Why is John upset?”), John is less emotionally reactive than when he addresses himself in the first person (“Why am I upset?”).

“Essentially, we think referring to yourself in the third person leads people to think about themselves more similar to how they think about others, and you can see evidence for this in the brain,” said Jason Moser, MSU associate professor of psychology. “That helps people gain a tiny bit of psychological distance from their experiences, which can often be useful for regulating emotions.”

Pretending to be Batman helps kids stay on task
by Christian Jarrett

Some of the children were assigned to a “self-immersed condition”, akin to a control group, and before and during the task were told to reflect on how they were doing, asking themselves “Am I working hard?”. Other children were asked to reflect from a third-person perspective, asking themselves “Is James [insert child’s actual name] working hard?” Finally, the rest of the kids were in the Batman condition, in which they were asked to imagine they were either Batman, Bob The Builder, Rapunzel or Dora the Explorer and to ask themselves “Is Batman [or whichever character they were] working hard?”. Children in this last condition were given a relevant prop to help, such as Batman’s cape. Once every minute through the task, a recorded voice asked the question appropriate for the condition each child was in [Are you working hard? or Is James working hard? or Is Batman working hard?].

The six-year-olds spent more time on task than the four-year-olds (half the time versus about a quarter of the time). No surprise there. But across age groups, and apparently unrelated to their personal scores on mental control, memory, or empathy, those in the Batman condition spent the most time on task (about 55 per cent for the six-year-olds; about 32 per cent for the four-year-olds). The children in the self-immersed condition spent the least time on task (about 35 per cent of the time for the six-year-olds; just over 20 per cent for the four-year-olds) and those in the third-person condition performed in between.

Dressing up as a superhero might actually give your kid grit
by Jenny Anderson

In other words, the more the child could distance him or herself from the temptation, the better the focus. “Children who were asked to reflect on the task as if they were another person were less likely to indulge in immediate gratification and more likely to work toward a relatively long-term goal,” the authors wrote in the study called “The “Batman Effect”: Improving Perseverance in Young Children,” published in Child Development.

Curmudgucation: Don’t Be Batman
by Peter Greene

This underlines the problem we see with more and more of what passes for early childhood education these days– we’re not worried about whether the school is ready to appropriately handle the students, but instead are busy trying to beat three-, four- and five-year-olds into developmentally inappropriate states to get them “ready” for their early years of education. It is precisely and absolutely backwards. I can’t say this hard enough– if early childhood programs are requiring “increased demands” on the self-regulatory skills of kids, it is the programs that are wrong, not the kids. Full stop.

What this study offers is a solution that is more damning than the “problem” that it addresses. If a four-year-old child has to disassociate, to pretend that she is someone else, in order to cope with the demands of your program, your program needs to stop, today.

Because you know where else you hear this kind of behavior described? In accounts of victims of intense, repeated trauma. In victims of torture who talk about dealing by just pretending they aren’t even there, that someone else is occupying their body while they float away from the horror.

That should not be a description of How To Cope With Preschool.

Nor should the primary lesson of early childhood education be, “You can’t really cut it as yourself. You’ll need to be somebody else to get ahead in life.” I cannot even begin to wrap my head around what a destructive message that is for a small child.

Can You Live With the Voices in Your Head?
by Daniel B. Smith

And though psychiatrists acknowledge that almost anyone is capable of hallucinating a voice under certain circumstances, they maintain that the hallucinations that occur with psychoses are qualitatively different. “One shouldn’t place too much emphasis on the content of hallucinations,” says Jeffrey Lieberman, chairman of the psychiatry department at Columbia University. “When establishing a correct diagnosis, it’s important to focus on the signs or symptoms” of a particular disorder. That is, it’s crucial to determine how the voices manifest themselves. Voices that speak in the third person, echo a patient’s thoughts or provide a running commentary on his actions are considered classically indicative of schizophrenia.

Auditory hallucinations: Psychotic symptom or dissociative experience?
by Andrew Moskowitz & Dirk Corstens

While auditory hallucinations are considered a core psychotic symptom, central to the diagnosis of schizophrenia, it has long been recognized that persons who are not psychotic may also hear voices. There is an entrenched clinical belief that distinctions can be made between these groups, typically on the basis of the perceived location or the ‘third-person’ perspective of the voices. While it is generally believed that such characteristics of voices have significant clinical implications, and are important in the differential diagnosis between dissociative and psychotic disorders, there is no research evidence in support of this. Voices heard by persons diagnosed schizophrenic appear to be indistinguishable, on the basis of their experienced characteristics, from voices heard by persons with dissociative disorders or with no mental disorder at all. On this and other bases outlined below, we argue that hearing voices should be considered a dissociative experience, which under some conditions may have pathological consequences. In other words, we believe that, while voices may occur in the context of a psychotic disorder, they should not be considered a psychotic symptom.

Hallucinations and Sensory Overrides
by T. M. Luhrmann

The psychiatric and psychological literature has reached no settled consensus about why hallucinations occur and whether all perceptual “mistakes” arise from the same processes (for a general review, see Aleman & Laroi 2008). For example, many researchers have found that when people hear hallucinated voices, some of these people have actually been subvocalizing: They have been using muscles used in speech, but below the level of their awareness (Gould 1949, 1950). Other researchers have not found this inner speech effect; moreover, this hypothesis does not explain many of the odd features of the hallucinations associated with psychosis, such as hearing voices that speak in the second or third person (Hoffman 1986). But many scientists now seem to agree that hallucinations are the result of judgments associated with what psychologists call “reality monitoring” (Bentall 2003). This is not the process Freud described with the term reality testing, which for the most part he treated as a cognitive higher-level decision: the ability to distinguish between fantasy and the world as it is (e.g., he loves me versus he’s just not that into me). Reality monitoring refers to the much more basic decision about whether the source of an experience is internal to the mind or external in the world.

Originally, psychologists used the term to refer to judgments about memories: Did I really have that conversation with my boyfriend back in college, or did I just think I did? The work that gave the process its name asked what it was about memories that led someone to infer that these memories were records of something that had taken place in the world or in the mind (Johnson & Raye 1981). Johnson & Raye’s elegant experiments suggested that these memories differ in predictable ways and that people use those differences to judge what has actually taken place. Memories of an external event typically have more sensory details and more details in general. By contrast, memories of thoughts are more likely to include the memory of cognitive effort, such as composing sentences in one’s mind.

Self-Monitoring and Auditory Verbal Hallucinations in Schizophrenia
by Wayne Wu

It’s worth pointing out that a significant portion of the non-clinical population experiences auditory hallucinations. Such hallucinations need not be negative in content, though as I understand it, the preponderance of AVH in schizophrenia is or becomes negative. […]

I’ve certainly experienced the “third man”, in a moment of vivid stress when I was younger. At the time, I thought it was God speaking to me in an encouraging and authoritative way! (I was raised in a very strict religious household.) But I wouldn’t be surprised if many of us have had similar experiences. These days, I have more often the cell-phone buzzing in my pocket illusion.

There are, I suspect, many reasons why the auditory system might be activated to give rise to auditory experiences that philosophers would define as hallucinations: recalling things in an auditory way, thinking in inner speech where this might be auditory in structure, etc. These can have positive influences on our ability to adapt to situations.

What continues to puzzle me about AVH in schizophrenia are some of its fairly consistent phenomenal properties: second or third-person voice, typical internal localization (though plenty of external localization) and negative content.

The Digital God, How Technology Will Reshape Spirituality
by William Indick
pp. 74-75

Doubled Consciousness

Who is this third who always walks beside you?
When I count, there are only you and I together.
But when I look ahead up the white road
There is always another one walking beside you
Gliding wrapt in a brown mantle, hooded.
—T.S. Eliot, The Waste Land

The feeling of “doubled consciousness” 81 has been reported by numerous epileptics. It is the feeling of being outside of one’s self. The feeling that you are observing yourself as if you were outside of your own body, like an outsider looking in on yourself. Consciousness is “doubled” because you are aware of the existence of both selves simultaneously—the observer and the observed. It is as if the two halves of the brain temporarily cease to function as a single mechanism; but rather, each half identifies itself separately as its own self. 82 The doubling effect that occurs as a result of some temporal lobe epileptic seizures may lead to drastic personality changes. In particular, epileptics following seizures often become much more spiritual, artistic, poetic, and musical. 83 Art and music, of course, are processed primarily in the right hemisphere, as is poetry and the more lyrical, metaphorical aspects of language. In any artistic endeavor, one must engage in “doubled consciousness,” creating the art with one “I,” while simultaneously observing the art and the artist with a critically objective “other-I.” In The Great Gatsby, Fitzgerald expressed the feeling of “doubled consciousness” in a scene in which Nick Carraway, in the throes of profound drunkenness, looks out of a city window and ponders:

Yet high over the city our line of yellow windows must have contributed their share of human secrecy to the casual watcher in the darkening streets, and I was him too, looking up and wondering. I was within and without, simultaneously enchanted and repelled by the inexhaustible variety of life.

Doubled-consciousness, the sense of being both “within and without” of one’s self, is a moment of disconnection and disassociation between the two hemispheres of the brain, a moment when left looks independently at right and right looks independently at left, each recognizing each other as an uncanny mirror reflection of himself, but at the same time not recognizing the other as “I.”

The sense of doubled consciousness also arises quite frequently in situations of extreme physical and psychological duress. 84 In his book, The Third Man Factor, John Geiger delineates the conditions associated with the perception of the “sensed presence”: darkness, monotony, barrenness, isolation, cold, hunger, thirst, injury, fatigue, and fear. 85 Shermer added sleep deprivation to this list, noting that Charles Lindbergh, on his famous cross-Atlantic flight, recorded the perception of “ghostly presences” in the cockpit, that “spoke with authority and clearness … giving me messages of importance unattainable in ordinary life.” 86 Sacks noted that doubled consciousness is not necessarily an alien or abnormal sensation; we all feel it, especially when we are alone, in the dark, in a scary place. 87 We all can recall a memory from childhood when we could palpably feel the presence of the monster hiding in the closet, or that indefinable thing in the dark space beneath our bed. The experience of the “sensed other” is common in schizophrenia, can be induced by certain drugs, is a central aspect of the “near death experience,” and is also associated with certain neurological disorders. 88

To speak of oneself in the third person; to express the wish to “find myself,” is to presuppose a plurality within one’s own mind. 89 There is consciousness, and then there is something else … an Other … who is nonetheless a part of our own mind, though separate from our moment-to-moment consciousness. When I make a statement such as: “I’m disappointed with myself because I let myself gain weight,” it is quite clear that there are at least two wills at work within one mind—one will that dictates weight loss and is disappointed—and another will that defies the former and allows the body to binge or laze. One cannot point at one will and say: “This is the real me and the other is not me.” They’re both me. Within each “I” there exists a distinct Other that is also “I.” In the mind of the believer—this double-I, this other-I, this sentient other, this sensed presence who is me but also, somehow, not me—how could this be anyone other than an angel, a spirit, my own soul, or God? Sacks recalls an incident in which he broke his leg while mountain climbing alone and had to descend the mountain despite his injury and the immense pain it was causing him. Sacks heard “an inner voice” that was “wholly unlike” his normal “inner speech”—a “strong, clear, commanding voice” that told him exactly what he had to do to survive the predicament, and how to do it. “This good voice, this Life voice, braced and resolved me.” Sacks relates the story of Joe Simpson, author of Touching the Void, who had a similar experience during a climbing mishap in the Andes. For days, Simpson trudged along with a distinctly dual sense of self. There was a distracted self that jumped from one random thought to the next, and then a clearly separate focused self that spoke to him in a commanding voice, giving specific instructions and making logical deductions. 90 Sacks also reports the experience of a distraught friend who, at the moment she was about to commit suicide, heard a “voice” tell her: “No, you don’t want to do that…” The male voice, which seemed to come from outside of her, convinced her not to throw her life away. She speaks of it as her “guardian angel.” Sacks suggested that this other voice may always be there, but it is usually inhibited. When it is heard, it’s usually as an inner voice, rather than an external one. 91 Sacks also reports that the “persistent feeling” of a “presence” or a “companion” that is not actually there is a common hallucination, especially among people suffering from Parkinson’s disease. Sacks is unsure if this is a side-effect of L-DOPA, the drug used to treat the disease, or if the hallucinations are symptoms of the neurological disease itself. He also noted that some patients were able to control the hallucinations to varying degrees. One elderly patient hallucinated a handsome and debonair gentleman caller who provided “love, attention, and invisible presents … faithfully each evening.” 92

Part III: Off to the Asylum – Rational Anti-psychiatry
by Veronika Nasamoto

The ancients were also clued up: they, too, saw the origins of mental instability as spiritual, but they perceived it differently. In The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes presents a startling thesis, based on an analysis of the language of the Iliad: that the ancient Greeks were not conscious in the same way that modern humans are. The ancient Greeks had no sense of “I” (Victorian England, too, would sometimes speak in the third person rather than say I, because the eternal God, YHWH, was known as the great “I AM”) with which to locate their mental processes. To them their inner thoughts were perceived as coming from the gods, which is why the characters in the Iliad find themselves in frequent communication with supernatural entities.

The Shadows of Consciousness in the Breakdown of the Bicameral Mirror
by Chris Savia

Jaynes’s description of consciousness, in relation to memory, proposes that what people believe to be rote recollection is actually a set of concepts: the platonic ideals of their office, the view out of the window, et al. These contribute to one’s mental sense of place and position in the world, the memories enabling one to see oneself in the third person.

Language, consciousness and the bicameral mind
by Andreas van Cranenburgh

Consciousness is not a copy of experience. Since Locke’s tabula rasa it has been thought that consciousness records our experiences, to save them for possible later reflection. However, this is clearly false: most details of our experience are immediately lost when not given special notice. Recalling an arbitrary past event requires a reconstruction of memories. Interestingly, memories are often from a third-person perspective, which proves that they could not be a mere copy of experience.

The Origin of Consciousness in the Breakdown of the Bicameral Mind
by Julian Jaynes
pp. 347-350

Negatory Possession

There is another side to this vigorously strange vestige of the bicameral mind. And it is different from other topics in this chapter. For it is not a response to a ritual induction for the purpose of retrieving the bicameral mind. It is an illness in response to stress. In effect, emotional stress takes the place of the induction in the general bicameral paradigm just as in antiquity. And when it does, the authorization is of a different kind.

The difference presents a fascinating problem. In the New Testament, where we first hear of such spontaneous possession, it is called in Greek daemonizomai, or demonization. 10 And from that time to the present, instances of the phenomenon most often have that negatory quality connoted by the term. The why of the negatory quality is at present unclear. In an earlier chapter (II. 4) I have tried to suggest the origin of ‘evil’ in the volitional emptiness of the silent bicameral voices. And that this took place in Mesopotamia and particularly in Babylon, to which the Jews were exiled in the sixth century B.C., might account for the prevalence of this quality in the world of Jesus at the start of this syndrome.

But whatever the reasons, they must in the individual be similar to the reasons behind the predominantly negatory quality of schizophrenic hallucinations. And indeed the relationship of this type of possession to schizophrenia seems obvious.

Like schizophrenia, negatory possession usually begins with some kind of an hallucination. 11 It is often a castigating ‘voice’ of a ‘demon’ or other being which is ‘heard’ after a considerable stressful period. But then, unlike schizophrenia, probably because of the strong collective cognitive imperative of a particular group or religion, the voice develops into a secondary system of personality, the subject then losing control and periodically entering into trance states in which consciousness is lost, and the ‘demon’ side of the personality takes over.

Always the patients are uneducated, usually illiterate, and all believe heartily in spirits or demons or similar beings and live in a society which does. The attacks usually last from several minutes to an hour or two, the patient being relatively normal between attacks and recalling little of them. Contrary to horror fiction stories, negatory possession is chiefly a linguistic phenomenon, not one of actual conduct. In all the cases I have studied, it is rare to find one of criminal behavior against other persons. The stricken individual does not run off and behave like a demon; he just talks like one.

Such episodes are usually accompanied by twistings and writhings as in induced possession. The voice is distorted, often guttural, full of cries, groans, and vulgarity, and usually railing against the institutionalized gods of the period. Almost always, there is a loss of consciousness as the person seems the opposite of his or her usual self. ‘He’ may name himself a god, demon, spirit, ghost, or animal (in the Orient it is often ‘the fox’), may demand a shrine or to be worshiped, throwing the patient into convulsions if these are withheld. ‘He’ commonly describes his natural self in the third person as a despised stranger, even as Yahweh sometimes despised his prophets or the Muses sneered at their poets. 12 And ‘he’ often seems far more intelligent and alert than the patient in his normal state, even as Yahweh and the Muses were more intelligent and alert than prophet or poet.

As in schizophrenia, the patient may act out the suggestions of others, and, even more curiously, may be interested in contracts or treaties with observers, such as a promise that ‘he’ will leave the patient if such and such is done, bargains which are carried out as faithfully by the ‘demon’ as the sometimes similar covenants of Yahweh in the Old Testament. Somehow related to this suggestibility and contract interest is the fact that the cure for spontaneous stress-produced possession, exorcism, has never varied from New Testament days to the present. It is simply by the command of an authoritative person often following an induction ritual, speaking in the name of a more powerful god. The exorcist can be said to fit into the authorization element of the general bicameral paradigm, replacing the ‘demon.’ The cognitive imperatives of the belief system that determined the form of the illness in the first place determine the form of its cure.

The phenomenon does not depend on age, but sex differences, depending on the historical epoch, are pronounced, demonstrating its cultural expectancy basis. Of those possessed by ‘demons’ whom Jesus or his disciples cured in the New Testament, the overwhelming majority were men. In the Middle Ages and thereafter, however, the overwhelming majority were women. Also evidence for its basis in a collective cognitive imperative are its occasional epidemics, as in convents of nuns during the Middle Ages, in Salem, Massachusetts, in the eighteenth century, or those reported in the nineteenth century at Savoy in the Alps. And occasionally today.

The Emergence of Reflexivity in Greek Language and Thought
by Edward T. Jeremiah
p. 3

Modernity’s tendency to understand the human being in terms of abstract grammatical relations, namely the subject and self, and also the ‘I’—and, conversely, the relative indifference of Greece to such categories—creates some of the most important semantic contrasts between our and Greek notions of the self.

p. 52

Reflexivisations such as the last, as well as those like ‘Know yourself’ which reconstitute the nature of the person, are entirely absent in Homer. So too are uses of the reflexive which reference some psychological aspect of the subject. Indeed the reference of reflexives directly governed by verbs in Homer is overwhelmingly bodily: ‘adorning oneself’, ‘covering oneself’, ‘defending oneself’, ‘debasing oneself physically’, ‘arranging themselves in a certain formation’, ‘stirring oneself’, and all the prepositional phrases. The usual reference for indirect arguments is the self interested in its own advantage. We do not find in Homer any of the psychological models of self-relation discussed by Lakoff.

Use of the Third Person for Self-Reference by Jesus and Yahweh
by Rod Elledge
pp. 11-13

Viswanathan addresses illeism in Shakespeare’s works, designating it as “illeism with a difference.” He writes: “It [‘illeism with a difference’] is one by which the dramatist makes a character, speaking in the first person, refer to himself in the third person, not simply as a ‘he’, which would be illeism proper, a traditional grammatical mode, but by name.” He adds that the device is extensively used in Julius Caesar and Troilus and Cressida, and occasionally in Hamlet and Othello. Viswanathan notes the device, prior to Shakespeare, was used in the medieval theater simply to allow a character to announce himself and clarify his identity. Yet, he argues that, in the hands of Shakespeare, the device becomes “a masterstroke of dramatic artistry.” He notes four uses of this “illeism with a difference.” First, it highlights the character using it and his inner self. He notes that it provides a way of “making the character momentarily detach himself from himself, achieve a measure of dramatic (and philosophical) depersonalization, and create a kind of aesthetic distance from which he can contemplate himself.” Second, it reflects the tension between the character’s public and private selves. Third, the device “raises the question of the way in which the character is seen to behave and to order his very modes of feeling and thought in accordance with a rightly or wrongly conceived image or idea of himself.” Lastly, he notes the device tends to point toward the larger philosophical problem of man’s search for identity. Speaking of the use of illeism within Julius Caesar, Spevak writes that “in addition to the psychological and other implications, the overall effect is a certain stateliness, a classical look, a consciousness on the part of the actors that they are acting in a not so everyday context.”

Modern linguistic scholarship

Otto Jespersen notes various examples of the third-person self-reference including those seeking to reflect deference or politeness, adults talking to children as “papa” or “Aunt Mary” to be more easily understood, as well as the case of some writers who write “the author” or “this present writer” in order to avoid the mention of “I.” He notes Caesar as a famous example of “self-effacement [used to] produce the impression of absolute objectivity.” Yet, Head writes, in response to Jespersen, that since the use of the third person for self-reference

is typical of important personages, whether in autobiography (e.g. Caesar in De Bello Gallico and Captain John Smith in his memoirs) or in literature (Marlowe’s Faustus, Shakespeare’s Julius Caesar, Cordelia and Richard II, Lessing’s Saladin, etc.), it is actually an indication of special status and hence implies greater social distance than does the more commonly used first person singular.

Land and Kitzinger argue that “very often—but not always . . . the use of a third-person reference form in self-reference is designed to display that the speaker is talking about themselves as if from the perspective of another—either the addressee(s) . . . or a non-present other.” The linguist Laurence Horn, noting the use of illeism by various athletic and political celebrities, notes that “the celeb is viewing himself . . . from the outside.” Addressing what he refers to as “the dissociative third person,” he notes that an athlete or politician “may establish distance between himself (virtually never herself) and his public persona, but only by the use of his name, never a 3rd person pronoun.”

pp. 15-17

Illeism in Classical Antiquity

As referenced in the history of research, Kostenberger writes: “It may strike the modern reader as curious that Jesus should call himself ‘Jesus Christ’; however, self-reference in the third person was common in antiquity.” While Kostenberger’s statement is a brief comment in the context of a commentary and not a monographic study on the issue, his comment raises a critical question. Does a survey of the evidence reveal that Jesus’s use of illeism in this verse (and by implication elsewhere in the Gospels) reflects simply another example of a common mannerism in antiquity? […]

Early Evidence

From the fifth century BC to the time of Jesus the following historians refer to themselves in the third person in their historical accounts: Hecataeus (though the evidence is fragmentary), Herodotus, Thucydides, Xenophon, Polybius, Caesar, and Josephus. For the scope of this study this point in history (from the fifth century BC to the first century AD) is the primary focus. Yet, this feature was adopted from the earlier tendency in literature in which an author states his name as a seal or sphragis for his work. Herkommer notes the “self-introduction” (Selbstvorstellung) in the Homeric Hymn to Apollo, in choral poetry (Chorlyrik) such as that by the Greek poet Alkman (seventh century BC), and in poetic maxims (Spruchdichtung) such as those of the Greek poet Phokylides (seventh century BC). Yet, from the fifth century onward, this feature appears primarily in the works of Greek historians. In addition to early evidence (prior to the fifth century) of an author’s self-reference in his historiographic work, the survey of evidence also noted an early example of illeism within Homer’s Iliad. Because this ancient Greek epic poem reflects an early use of the third-person self-reference in a narrative context and offers a point of comparison to its use in later Greek historiography, this early example of the use of illeism is briefly addressed.

Marincola notes that the style of historical narrative that first appears in Herodotus is a legacy from Homer (ca. 850 BC). He notes that “as the writer of the most ‘authoritative’ third-person narrative, [Homer] provided a model not only for later poets, epic and otherwise, but also to the prose historians who, by way of Herodotus, saw him as their model and rival.” While Homer provided the authoritative example of third-person narrative, he also, centuries before the development of Greek historiography, used illeism in his epic poem the Iliad. Illeism occurs in the direct speech of Zeus (the king of the gods), Achilles (the “god-like” son of a king and goddess), and Hector (the mighty Trojan prince).

Zeus, addressing the assembled gods on Mt. Olympus, refers to himself as “Zeus, the supreme Master” […] and states how superior he is above all gods and men. Hector’s use of illeism occurs as he addresses the Greeks and challenges the best of them to fight against “good Hector” […]. Muellner notes in these instances of third person for self-reference (Zeus twice and Hector once) that “the personage at the top and center of the social hierarchy is asserting his superiority over the group . . . . In other words, these are self-aggrandizing third-person references, like those in the war memoirs of Xenophon, Julius Caesar, and Napoleon.” He adds that “the primary goal of this kind of third-person self-reference is to assert the status accruing to exceptional excellence.” Achilles refers to himself in the context of an oath (examples of which are reflected in the OT), yet his self-reference serves to emphasize his status in relation to the Greeks, and especially to King Agamemnon. Addressing Agamemnon, the general of the Greek armies, Achilles swears by his scepter and states that the day will come when the Greeks will long for Achilles […].

Homer’s choice to use illeism within the direct speech of these three characters contributes to an understanding of its potential rhetorical implications. In each case the character’s use of illeism serves to set him apart by highlighting his innate authority and superior status. Also, all three characters reflect divine and/or royal aspects (Zeus, king of gods; Achilles, son of a king and a goddess, and referred to as “god-like”; and Hector, son of a king). The examples of illeism in the Iliad, among the earliest evidence of illeism, reflect a usage that shares similarities with the illeism used by Jesus and Yahweh. The biblical and Homeric examples each reflect illeism in direct speech within narrative discourse, and the self-reference serves to emphasize authority or status as well as possible associated royal and/or divine aspects. Yet, the examples stand in contrast to the use of illeism by later historians. As will be addressed next, these ancient historians used the third-person self-reference as a literary device to give their historical accounts a sense of objectivity.

Women and Gender in Medieval Europe: An Encyclopedia
edited by Margaret C. Schaus
“Mystics’ Writings”

by Patricia Dailey
p. 600

The question of scribal mediation is further complicated in that the mystic’s text is, in essence, a message transmitted through her, which must be transmitted to her surrounding community. Thus, the denuding of the text’s voice, of a first-person narrative, goes hand in hand with the status of the mystic as “transcriber” of a divine message that does not bear the mystic’s signature, but rather God’s. In addition, the tendency to write in the third person in visionary narratives may draw from a longstanding tradition, stemming from Paul in 2 Cor., of communicating visions in the third person; at the same time, it presents a means for women to negotiate conflicts with regard to authority or immediacy of the divine through a veiled distance or humility that conformed to a narrative tradition.

Romantic Confession: Jean-Jacques Rousseau and Thomas de Quincey
by Martina Domines Veliki

It is no accident that the term ‘autobiography’, entailing a special amalgam of ‘autos’, ‘bios’ and ‘graphe’ (oneself, life and writing), was first used in 1797 in the Monthly Review by a well-known essayist and polyglot, translator of German romantic literature, William Taylor of Norwich. However, the term ‘autobiographer’ was first extensively used by an English Romantic poet, one of the Lake Poets, Robert Southey[1]. This does not mean that no autobiographies were written before the beginning of the nineteenth century. The classical writers wrote about famous figures of public life, the Middle Ages produced educated writers who wrote about saints’ lives, and from the Renaissance onward people wrote about their own lives. However, autobiography, as an auto-reflexive telling of one’s own life’s story, presupposes a special understanding of one’s ‘self’; therefore, biographies and legends of Antiquity and the Middle Ages are fundamentally different from ‘modern’ autobiography, which postulates a truly autonomous subject, fully conscious of his/her own uniqueness[2]. Life-writing, whether in the form of biography or autobiography, occupied the central place in Romanticism. Autobiography would also often appear in disguise. One would immediately think of S. T. Coleridge’s Biographia Literaria (1817), which combines literary criticism and sketches from the author’s life and opinions, and Mary Wollstonecraft’s Short Residence in Sweden, Norway and Denmark (1796), which combines travel narrative with the author’s own difficulties of travelling as a woman.

When one thinks about the first ‘modern’ secular autobiography, it is impossible to avoid the name of Jean-Jacques Rousseau. He calls his first autobiography The Confessions, thus aligning himself with the long Western tradition of confessional writing inaugurated by St. Augustine (354–430 AD). Though St. Augustine confesses to the almighty God and does not really perceive his own life as significant, there is another dimension of Augustine’s legacy which is important for his Romantic inheritors: the dichotomies inherent in the Christian way of perceiving the world, namely the oppositions of spirit/matter, higher/lower, eternal/temporal, immutable/changing, become ultimately emanations of a single binary opposition, that of inner and outer (Taylor 1989: 128). The substance of St. Augustine’s piety is summed up by a single sentence from his Confessions:

“And how shall I call upon my God – my God and my Lord? For when I call on Him, I ask Him to come into me. And what place is there in me into which my God can come? (…) I could not therefore exist, could not exist at all, O my God, unless Thou wert in me.” (Confessions, book I, chapter 2, p.2, emphasis mine)

The step towards inwardness was for Augustine the step towards Truth, i.e. God, and as Charles Taylor explains, this turn inward was a decisive one in the Western tradition of thought. The ‘I’, or the first-person standpoint, becomes unavoidable thereafter. It was a long way from Augustine’s seeing these sources as residing in God to Rousseau’s pivotal turn to inwardness without recourse to God. Of course, one must not lose sight of the developments in continental philosophy pre-dating Rousseau’s work. René Descartes was the first to embrace Augustinian thinking at the beginning of the modern era, and he was responsible for the articulation of the disengaged subject: the subject asserting that the real locus of all experience is in his own mind[3]. With the empiricist philosophy of John Locke and David Hume, who claimed that we reach knowledge of the surrounding world through disengagement and procedural reason, there is a further development towards the idea of the autonomous subject. Although their teachings seemed to leave no place for subjectivity as we know it today, they were still a vital step in redirecting the human gaze from the heavens to man’s own existence.

[2] Furthermore, the Middle Ages would not speak of such concepts as ‘the author’ and one’s ‘individuality’, and it is futile to seek in such texts the appertaining subject. When a Croatian fourteenth-century author, Paulus de Paulo, writes about his life in a short text (preserved in De regno Croatiae et Dalmatiae), the last words indicate that the author perceives his life as insignificant and of little value. The nuns of the fourteenth century writing their own confessions had to use the third-person pronoun to refer to themselves, and the ‘I’ was reserved for God only. (See Zlatar 2000)

Return to Childhood by Leila Abouzeid
by Geoff Wisner

In addition, autobiography has the pejorative connotation in Arabic of madihu nafsihi wa muzakkiha (he or she who praises and recommends him- or herself). This phrase denotes all sorts of defects in a person or a writer: selfishness versus altruism, individualism versus the spirit of the group, arrogance versus modesty. That is why Arabs usually refer to themselves in formal speech in the third person plural, to avoid the use of the embarrassing ‘I.’ In autobiography, of course, one uses ‘I’ frequently.

Becoming Abraham Lincoln
by Richard Kigel
Preface, XI

A note about the quotations and sources: most of the statements were collected by William Herndon, Lincoln’s law partner and friend, in the years following Lincoln’s death. The responses came in original handwritten letters and transcribed interviews. Because of the low literacy levels of many of his subjects, these statements are sometimes difficult to understand. Often they used no punctuation and wrote in fragments of thoughts. Misspellings were common, and names and places were often confused. “Lincoln” was sometimes spelled “Linkhorn” or “Linkern.” Lincoln’s grandmother “Lucy” was sometimes “Lucey.” Some respondents referred to themselves in the third person. Lincoln himself did so in his biographical writings.

p. 35

“From this place,” wrote Abe, referring to himself in the third person, “he removed to what is now Spencer County, Indiana, in the autumn of 1816, Abraham then being in his eighth [actually seventh] year. This removal was partly on account of slavery, but chiefly on account of the difficulty in land titles in Kentucky.”

Ritual and the Consciousness Monoculture
by Sarah Perry

Mirrors only became common in the nineteenth century; before, they were luxury items owned only by the rich. Access to mirrors is a novelty, and likely a harmful one.

In Others In Mind: Social Origins of Self-Consciousness, Philippe Rochat describes an essential and tragic feature of our experience as humans: an irreconcilable gap between the beloved, special self as experienced in the first person, and the neutrally-evaluated self as experienced in the third person, imagined through the eyes of others. One’s first-person self image tends to be inflated and idealized, whereas the third-person self image tends to be deflated; reminders of this distance are demoralizing.

When people without access to mirrors (or clear water in which to view their reflections) are first exposed to them, their reaction tends to be very negative. Rochat quotes the anthropologist Edmund Carpenter’s description of showing mirrors to the Biamis of Papua New Guinea for the first time, a phenomenon Carpenter calls “the tribal terror of self-recognition”:

After a first frightening reaction, they became paralyzed, covering their mouths and hiding their heads – they stood transfixed looking at their own images, only their stomach muscles betraying great tension.

Why is their reaction negative, and not positive? It is because the first-person perspective of the self tends to be idealized compared to accurate, objective information; the more of this kind of information that becomes available (or unavoidable), the more each person will feel the shame and embarrassment that come from awareness of the irreconcilable gap between his first-person specialness and his third-person averageness.

There are many “mirrors”—novel sources of accurate information about the self—in our twenty-first century world. School is one such mirror; grades and test scores measure one’s intelligence and capacity for self-inhibition, but just as importantly, peers determine one’s “erotic ranking” in the social hierarchy, as the sociologist Randall Collins terms it. […]

There are many more “mirrors” available to us today; photography in all its forms is a mirror, and internet social networks are mirrors. Our modern selves are very exposed to third-person, deflating information about the idealized self. At the same time, says Rochat, “Rich contemporary cultures promote individual development, the individual expression and management of self-presentation. They foster self-idealization.”

My Beef With Ken Wilber
by Scott Preston (also posted on Integral World)

We see immediately from this schema why the persons of grammar are minimally four and not three. It’s because we are fourfold beings and our reality is a fourfold structure, too, being constituted of two times and two spaces — past and future, inner and outer. The fourfold human and the fourfold cosmos grew up together. Wilber’s model can’t account for that at all.

So, what’s the problem here? Wilber seems to have omitted time and our experience of time as an irrelevancy. Time isn’t even represented in Wilber’s AQAL model. Only subject and object spaces. Therefore, the human form cannot be properly interpreted, for we have four faces, like some representations of the god Janus, that face backwards, forwards, inwards, and outwards, and we have attendant faculties and consciousness functions organised accordingly for mastery of these dimensions — Jung’s feeling, thinking, sensing, willing functions are attuned to a reality that is fourfold in terms of two times and two spaces. And the four basic persons of grammar — You, I, We, He or She — are the representation in grammar of that reality and that consciousness, that we are fourfold beings just as our reality is a fourfold cosmos.

Comparing Wilber’s model to Rosenstock-Huessy’s, I would have to conclude that Wilber’s model is “deficient integral” owing to its apparent omission of time and subsequently of the “I-thou” relationship in which the time factor is really pronounced. For the “I-It” (or “We-Its”) relation is a relation of spaces — inner and outer, while the “I-Thou” (or “We-thou”) relation is a relation of times.

It is perhaps not so apparent to English speakers especially that the “thou” or “you” form is connected with time future. Other languages, like German, still preserve the formal aspects of this. In old English you had to say “go thou!” or “be thou loving!”, and so on. In other words, the “thou” or “you” is most closely associated with the imperative form and that is the future addressing the past. It is a call to change one’s personal or collective state — what we call the “vocation” or “calling” is time future in dialogue with time past. Time past is represented in the “we” form. We is not plural “I’s”. It is constituted by some historical act, like a marriage or union or congregation of peoples or the sexes in which “the two shall become one flesh”. We is the collective person, historically established by some act. The people in “We the People” is a singularity and a unity, an historically constituted entity called “nation”. A bunch of autonomous “I’s” or egos never yet formed a tribe or a nation — or a commune for that matter. Nor a successful marriage.

Though “I-It” (or “We-Its”) might be permissible in referring to the relation of subject and object spaces, “we-thou” is the relation in which the time element is outstanding.

“Not with a bang but with a whimper.”

As Scott Preston writes in Panem et Circenses, “a large group of people who feel that they no longer have any effective stake or a just share in a particular system of economic or social relations aren’t going to feel any obligation to defend that system when it faces a crisis, or any sense even of belonging within it.”

When people feel disenfranchised and disinvested, disengaged and divided, they are forced to fall back on other identities or else to feel isolated. In either case, radicalization can follow. And if not radicalization, people can simply begin acting strangely, desperately, and aggressively, sometimes violently, as they react to each new stressor and lash out at perceived threats (see Keith Payne’s The Broken Ladder, as written about in my posts Inequality Means No Center to Moderate Toward, On Conflict and Stupidity, Class Anxiety of Privilege Denied, Connecting the Dots of Violence, & Inequality in the Anthropocene).

We see this loss of trust in so many ways. As inequality goes up, so do the rates of social problems, from homicides to child abuse, from censorship to police brutality. The public becomes more outraged and the ruling elite become more authoritarian. But it’s the public that concerns me, as I’m not part of the ruling elite nor do I aspire to be. The public has lost faith in government, corporations, media, and increasingly the healthcare system as well — nearly all of the major institutions that hold together the social fabric. A society can’t survive long under these conditions. Sure, a society can turn toward the overtly authoritarian, as China is doing, but even that requires public trust that the government in some basic sense has the public good or national interest in mind.

Then again, American society has been resilient up to this point. This isn’t the first time that the social order began fracturing. On more than one occasion, the ruling elite lost control of the narrative and almost entirely lost control of the reins of power. The US has a long history of revolts, often large-scale and violent, that started as soon as the country was founded (Shays’ Rebellion, Whiskey Rebellion, etc.; see Spirit of ’76 & The Fight For Freedom Is the Fight To Exist: Independence and Interdependence). In their abject fear, look at how the ruling elite treated the Bonus Army. And veterans were to be feared. Black veterans came back from WWI with their guns still in their possession and violently fought back against their oppressors. And after WWII, veterans rose up against corrupt local governments, in one case using military weapons to shoot up the courthouse (e.g., the 1946 Battle of Athens).

The public losing trust in authority figures and institutions of power is not to be taken lightly. That is even more true in a country founded on revolution that soon after fell into civil war. As with Shays’ Rebellion, the American Civil War was a continuation of the American Revolution. The cracks in the foundation remain, the issues unresolved. This has been a particular concern for the American oligarchs and plutocrats this past century, as mass uprisings and coups overturned numerous societies around the world. The key factor, though, is what Americans will do. Patriotic indoctrination can only go so far. Where will people turn for meaningful identities that are relevant to survival in an ever more harsh and punishing society, as stress and uncertainty continue to rise?

Even if the American public doesn’t turn against the system any time soon, when it comes under attack they might not feel in the mood to sacrifice themselves to defend it. Societies can collapse from revolt, but they can more easily collapse from indifference and apathy, a slow erosion of trust. “Not with a bang but with a whimper.” But maybe that wouldn’t be such a bad thing. It’s better than some of the alternatives. And it would be an opportunity for reinvention, for new identities.

* * *

7/26/19 – An interesting thing is that the oligarchs are so unconcerned. They see this situation as a good thing, as an opportunity. Everything is an opportunity to the opportunist and disaster capitalism is one endless opportunity for those who lack a soul.

They aren’t seeking to re-create early 20th century fascism. The loss of national identities is not an issue for them, even as they exploit and manipulate patriotism. Meanwhile, their own identities and sources of power have been offshored in a new form of extra-national governance, a deep state beyond all states. They are citizens of nowhere, and the rest of us are left holding the bag.

“The oligarch’s interests always lie offshore: in tax havens and secrecy regimes. Paradoxically, these interests are best promoted by nationalists and nativists. The politicians who most loudly proclaim their patriotism and defence of sovereignty are always the first to sell their nations down the river. It is no coincidence that most of the newspapers promoting the nativist agenda, whipping up hatred against immigrants and thundering about sovereignty, are owned by billionaire tax exiles, living offshore” (George Monbiot, From Trump to Johnson, nationalists are on the rise – backed by billionaire oligarchs).

It’s not that old identities are merely dying. There are those seeking to snuff them out, to put them out of their misery. The ruling elite are decimating what holds society together. But in their own demented way, maybe they are unintentionally freeing us to become something else. They have the upper hand for the moment, but moments don’t last long. Even they realize disaster capitalism can’t be maintained. It’s why they constantly dream of somewhere to escape, whether on international waters or space colonies. Everyone is looking for a new identity. That isn’t to say all potential new identities will serve us well.

All of this is a strange scenario. And most people are simply lost. As old identities loosen, we lose our bearings, even or especially among the best of us. It is disorienting, another thing Scott Preston has been writing a lot about lately (Our Mental Meltdown: Mind in Dissolution). The modern self is splintering, and this creates all kinds of self-deception and self-contradiction. As Preston often puts it, “the duplicity of our times — the double-think, the double-speak, the double-standards, and the double-bind” (Age of Revolutions). But don’t worry. The oligarchs too will find themselves caught in their own traps. Their fantasies of control are only that, fantasies.

The Crisis of Identity

“Besides real diseases we are subject to many that are only imaginary, for which the physicians have invented imaginary cures; these have then several names, and so have the drugs that are proper to them.”
~Jonathan Swift, 1726
Gulliver’s Travels

“The alarming increase in Insanity, as might naturally be expected, has incited many persons to an investigation of this disease.”
~John Haslam, 1809
On Madness and Melancholy: Including Practical Remarks on those Diseases

“Cancer, like insanity, seems to increase with the progress of civilization.”
~Stanislas Tanchou, 1843
Paper presented to the Paris Medical Society

I’ve been following Scott Preston over at his blog, Chrysalis. He has been writing on the same set of issues for a long time now, longer than I’ve been reading his blog. He reads widely and so draws on many sources, most of which I’m not familiar with, part of the reason I appreciate the work he does to pull together such informed pieces. A recent post, A Brief History of Our Disintegration, would give you a good sense of his intellectual project, although the word ‘intellectual’ sounds rather paltry for what he is exploring: “Around the end of the 19th century (called the fin de siecle period), something uncanny began to emerge in the functioning of the modern mind, also called the “perspectival” or “the mental-rational structure of consciousness” (Jean Gebser). As usual, it first became evident in the arts — a portent of things to come, but most especially as a disintegration of the personality and character structure of Modern Man and mental-rational consciousness.”

That time period has been an interest of mine as well. There are two books that come to mind that I’ve mentioned before: Tom Lutz’s American Nervousness, 1903 and Jackson Lears’ Rebirth of a Nation (for a discussion of the latter, see: Juvenile Delinquents and Emasculated Males). Both talk about that turn-of-the-century crisis, the psychological projections and physical manifestations, the social movements and political actions. A major concern was neurasthenia which, according to the dominant economic paradigm, meant a deficit of ‘nervous energy’ or ‘nerve force’, the reserves of which, if not reinvested wisely but instead wasted, would lead to physical and psychological bankruptcy, and so one became spent. (The term ‘neurasthenia’ was first used in 1829 and popularized by George Miller Beard in 1869, the same period when the related medical condition of ‘nostalgia’ became a more common diagnosis, although ‘nostalgia’ was first referred to in the 17th century; the Swiss doctor Johannes Hofer coined the term, also using it interchangeably with nosomania and philopatridomania — see: Michael S. Roth, Memory, Trauma, and History; David Lowenthal, The Past Is a Foreign Country; Thomas Dodman, What Nostalgia Was; Susan J. Matt, Homesickness; Linda Marilyn Austin, Nostalgia in Transition, 1780-1917; Svetlana Boym, The Future of Nostalgia; Gary S. Meltzer, Euripides and the Poetics of Nostalgia; see The Disease of Nostalgia.) Today, we might speak of ‘neurasthenia’ as stress and, even earlier, people had other ways of talking about it — as Bryan Kozlowski explained in The Jane Austen Diet (p. 231): “A multitude of Regency terms like “flutterings,” “fidgets,” “agitations,” “vexations,” and, above all, “nerves” are the historical equivalents to what we would now recognize as physiological stress.” It was the stress of falling into history, a new sense of time as linear progression that made the past a lost world — as Peter Fritzsche wrote in Stranded in the Present:

“On that August day on the way to Mainz, Boisseree reported one of the startling consequences of the French Revolution. This was that more and more people began to visualize history as a process that affected their lives in knowable, comprehensible ways, connected them to strangers on a market boat, and thus allowed them to offer their own versions and opinions to a wider public. The emerging historical consciousness was not restricted to an elite, or a small literate stratum, but was the shared cultural good of ordinary travelers, soldiers, and artisans. In many ways history had become a mass medium connecting people and their stories all over Europe and beyond. Moreover, the drama of history was construed in such a way as to put emphasis on displacement, whether because customary business routines had been upset by the unexpected demands of headquartered Prussian troops, as the innkeepers protested, or because so many demobilized soldiers were on the move as they returned home or pressed on to seek their fortune, or because restrictive legislation against Jews and other religious minorities had been lifted, which would explain the keen interest of “the black-bearded Jew” in Napoleon and of Boisseree in the Jew. History was not simply unsettlement, though. The exchange of opinion “in the front cabin” and “in the back” hinted at the contested nature of well-defined political visions: the role of the French, of Jacobins, of Napoleon. The travelers were describing a world knocked off the feet of tradition and reworked and rearranged by various ideological protagonists and conspirators (Napoleon, Talleyrand, Blucher) who sought to create new social communities. Journeying together to Mainz, Boisseree and his companions were bound together by their common understanding of moving toward a world that was new and strange, a place more dangerous and more wonderful than the one they left behind.”

That excitement was mixed with the feeling of being spent, the reserves having been fully tapped. This was mixed up with sexuality in what Theodore Dreiser called the ‘spermatic economy’, the management of libido as psychic energy, a modernization of Galenic thought (by the way, the catalogue for Sears, Roebuck and Company offered an electrical device to replenish nerve force that came with a genital attachment). Obsession with sexuality was used to reinforce gender roles in how neurasthenic patients were treated, following the practice of Dr. Silas Weir Mitchell: men were recommended to become more active (the ‘West cure’) and women more passive (the ‘rest cure’), although some women “used neurasthenia to challenge the status quo, rather than enforce it. They argued that traditional gender roles were causing women’s neurasthenia, and that housework was wasting their nervous energy. If they were allowed to do more useful work, they said, they’d be reinvesting and replenishing their energies, much as men were thought to do out in the wilderness” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast). That feminist-style argument, as I recall, came up in advertisements for Bernarr Macfadden’s fitness protocol in the early 1900s, encouraging (presumably middle class) women to give up housework for exercise and so regain their vitality. Macfadden was also an advocate of living a fully sensuous life, going as far as free love.

Besides the gender wars, there was the ever-present bourgeois bigotry. Neurasthenia is the most civilized of the diseases of civilization since, in its original American conception, it was perceived as afflicting only middle-to-upper class whites, especially WASPs — as Lutz says, “if you were lower class, and you weren’t educated and you weren’t Anglo Saxon, you wouldn’t get neurasthenic because you just didn’t have what it took to be damaged by modernity” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast), and so, according to Lutz’s book, people would make “claims to sickness as claims to privilege.” This class bias goes back even earlier, to Robert Burton’s melancholia with its element of what later would be understood as the Cartesian anxiety of mind-body dualism, a common ailment of the intellectual elite (mind-body dualism goes back to the Axial Age; see Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind). The class bias was different for nostalgia, as written about by Svetlana Boym in The Future of Nostalgia (p. 5):

“For Robert Burton, melancholia, far from being a mere physical or psychological condition, had a philosophical dimension. The melancholic saw the world as a theater ruled by capricious fate and demonic play. Often mistaken for a mere misanthrope, the melancholic was in fact a utopian dreamer who had higher hopes for humanity. In this respect, melancholia was an affect and an ailment of intellectuals, a Hamletian doubt, a side effect of critical reason; in melancholia, thinking and feeling, spirit and matter, soul and body were perpetually in conflict. Unlike melancholia, which was regarded as an ailment of monks and philosophers, nostalgia was a more “democratic” disease that threatened to affect soldiers and sailors displaced far from home as well as many country people who began to move to the cities. Nostalgia was not merely an individual anxiety but a public threat that revealed the contradictions of modernity and acquired a greater importance.”

Like diabetes, melancholia and neurasthenia were first seen among the elite, and so they were taken as demonstrating one’s elite nature. Prior to neurasthenic diagnoses but in the post-revolutionary era, a similar phenomenon went by other names. This is explored by Bryan Kozlowski in one chapter of The Jane Austen Diet (pp. 232-233):

“Yet the idea that this was acceptable—nay, encouraged—behavior was rampant throughout the late 18th century. Ever since Jane was young, stress itself was viewed as the right and prerogative of the rich and well-off. The more stress you felt, the more you demonstrated to the world how truly delicate and sensitive your wealthy, softly pampered body actually was. The common catchword for this was having a heightened sensibility—one of the most fashionable afflictions in England at the time. Mainly affecting the “nerves,” a Regency woman who caught the sensibility but “disdains to be strong minded,” wrote a cultural observer in 1799, “she trembles at every breeze, faints at every peril and yields to every assailant.” Austen knew real-life strutters of this sensibility, writing about one acquaintance who rather enjoys “her spasms and nervousness and the consequence they give her.” It’s the same “sensibility” Marianne wallows in throughout the novel that bears its name, “feeding and encouraging” her anxiety “as a duty.” Readers of the era would have found nothing out of the ordinary in Marianne’s high-strung embrace of stress.”

This condition was considered a sign of progress, but over time it came to be seen by some as the greatest threat to civilization, in either case offering much material for the fictionalized portrayals that were popular. Being sick in this fashion was proof that one was a modern individual, an exemplar of advanced civilization, even if it came at immense cost, as Julie Beck explains (‘Americanitis’: The Disease of Living Too Fast):

“The nature of this sickness was vague and all-encompassing. In his book Neurasthenic Nation, David Schuster, an associate professor of history at Indiana University-Purdue University Fort Wayne, outlines some of the possible symptoms of neurasthenia: headaches, muscle pain, weight loss, irritability, anxiety, impotence, depression, “a lack of ambition,” and both insomnia and lethargy. It was a bit of a grab bag of a diagnosis, a catch-all for nearly any kind of discomfort or unhappiness.

“This vagueness meant that the diagnosis was likely given to people suffering from a variety of mental and physical illnesses, as well as some people with no clinical conditions by modern standards, who were just dissatisfied or full of ennui. “It was really largely a quality-of-life issue,” Schuster says. “If you were feeling good and healthy, you were not neurasthenic, but if for some reason you were feeling run down, then you were neurasthenic.””

I’d point out that neurasthenia was seen as primarily caused by intellectual activity, as it became a descriptor of a common experience among the burgeoning middle class of often well-educated professionals and office workers. This relates to Weston A. Price’s work in the 1930s, as modern dietary changes first hit this demographic since they had the means to afford eating a fully industrialized Standard American Diet (SAD) long before others (within decades, though, SAD-caused malnourishment would wreck health at all levels of society). What this meant, in particular, was a diet high in processed carbs and sugar that coincided, because of Upton Sinclair’s 1906 The Jungle: Muckraking the Meat-Packing Industry, with the early-1900s decrease in consumption of meat and saturated fats. As Price demonstrated, this was a vast change from the traditional diet found all over the world, including in rural Europe (and presumably in rural America, with most Americans not urbanized until the turn of the last century), which always included significant amounts of nutritious animal foods loaded with fat-soluble vitamins, not to mention lots of healthy fats and cholesterol.

Prior to talk of neurasthenia, the exhaustion model of health, portrayed as waste and depletion, took hold in Europe centuries earlier (e.g., anti-masturbation panics) and had its roots in the humoral theory of bodily fluids. It has long been understood that food, specifically macronutrients (carbohydrate, protein, & fat) and food groups, affects mood and behavior — see the early literature on melancholy. During feudalism, food laws were used as a means of social control, such that in one case meat was prohibited prior to Carnival because of its energizing effect, which it was thought could lead to rowdiness or even revolt, as sometimes did happen (Ken Albala & Trudy Eden, Food and Faith in Christian Culture). Red meat, in particular, was thought to heat up blood (warm, wet) and yellow bile (warm, dry), promoting the sanguine and choleric personalities of masculinity. Like women, peasants were supposed to be submissive and hence not too masculine — they were to be socially controlled, not self-controlled. Anyone who was too strong-willed and strong-minded, other than the (ruling, economic, clerical, and intellectual) elite, was considered problematic; and one of the solutions was an enforced change of diet to create the proper humoral disposition for their appointed social role within the social order (i.e., withholding nutrient-dense meat until an individual or group was too malnourished, weak, anemic, sickly, docile, and effeminate to be assertive, aggressive, and confrontational toward their ‘betters’).

There does seem to be a correlation (causal link?) between an increase of intellectual activity and abstract thought and an increase of carbohydrates and sugar, with this connection first appearing during the early colonial era that set the stage for the Enlightenment. It was the agricultural mind taken to a whole new level. Indeed, a steady flow of glucose is one way to fuel extended periods of brain work, such as reading and writing for hours on end and late into the night — the reason college students to this day will down sugary drinks while studying. Because of trade networks, Enlightenment thinkers were buzzing on the suddenly much more available simple carbs and sugar, with an added boost from caffeine and nicotine. The modern intellectual mind was drugged-up right from the beginning, and over time it took its toll. Such dietary highs inevitably lead to ever greater crashes of mood and health. Interestingly, Dr. Silas Weir Mitchell, who advocated the ‘rest cure’ and ‘West cure’ in treating neurasthenia and other ailments, additionally used a “meat-rich diet” for his patients (Ann Stiles, Go rest, young man). Other doctors of that era were even more direct in using specifically low-carb diets for various health conditions, often for obesity, which was also a focus of Dr. Mitchell.

As a side note, the gendering of diet was seen as important for constructing, maintaining, and enforcing gender roles; that has carried over into the modern bias that masculine men eat steak and effeminate women eat salad. According to humoralism, men are well contained while women are leaky vessels. One can immediately see the fears of neurasthenia, emasculation, and excessive ejaculation. The ideal man was supposed to hold onto and control his bodily fluids, from urine to semen, by using and investing them carefully. With neurasthenia, though, men were seen as having become effeminate and leaky, dissipating and draining away their reserves of vital fluids and psychic energies. So, a neurasthenic man needed a strengthening of the boundaries that held everything in. The leakiness of women was also a problem, but women couldn’t and shouldn’t be expected to contain themselves. The rest cure designed for women was to isolate them in a bedroom where they’d be contained by the architectural structure of the home, which was owned and ruled over by the male master. A husband and, by extension, the husband’s property were to contain the wife, since she too was property of the man’s propertied self. This made a weak man of the upper classes even more dangerous to the social order, because he couldn’t play his needed gender role of husband and patriarch, upon which all of Western civilization was dependent.

All of this was based on an economic model of physiological scarcity. With neurasthenia arising in late modernity, the public debate was overtly framed by an economic metaphor. But the perceived need of economic containment of the self, be it self-containment or enforced containment, went back to early modernity. The enclosure movement was part of a larger reform movement, not only of land but also of society and identity.

* * *

“It cannot be denied that civilization, in its progress, is rife with causes which over-excite individuals, and result in the loss of mental equilibrium.”
~Edward Jarvis, 1843
“What shall we do with the Insane?”
The North American Review, Volume 56, Issue 118

“Have we lived too fast?”
~Dr. Silas Weir Mitchell, 1871
Wear and Tear, or Hints for the Overworked

It goes far beyond diet or any other single factor. There has been a diversity of stressors that continued to amass over the centuries of tumultuous change. The exhaustion of modern man (and typically the focus has been on men) had been building up for generations upon generations before it came to feel like a world-shaking crisis in the new industrialized world. The lens of neurasthenia was an attempt to grapple with what had changed, but the focus was too narrow. With the plague of neurasthenia, the atomized, commercialized man and woman couldn’t hold together. And so there was a temptation toward nationalistic projects, including wars, to revitalize the ailing soul and to suture the gash of social division and disarray. But this further wrenched out of alignment the traditional order that had once held society together, and what was lost mostly went without recognition. The individual was brought into the foreground of public thought, a lone protagonist in a social Darwinian world. In this melodramatic narrative of struggle and self-assertion, many individuals didn’t fare so well and everything else suffered in the wake.

Tom Lutz writes that, “By 1903, neurasthenic language and representations of neurasthenia were everywhere: in magazine articles, fiction, poetry, medical journals and books, in scholarly journals and newspaper articles, in political rhetoric and religious discourse, and in advertisements for spas, cures, nostrums, and myriad other products in newspapers, magazines and mail-order catalogs” (American Nervousness, 1903, p. 2).

There was a sense of moral decline that was hard to grasp, although some people like Weston A. Price tried to dig down into concrete explanations of what had so gone wrong, the social and psychological changes observable during mass urbanization and industrialization. He was far from alone in his inquiries, having built on the prior observations of doctors, anthropologists, and missionaries. Other doctors and scientists were looking into the influences of diet in the mid-1800s and, by the 1880s, scientists were exploring a variety of biological theories. Their inability to pinpoint the cause maybe had more to do with their lack of a needed framework, as they touched upon numerous facets of biological functioning:

“Not surprisingly, laboratory experiments designed to uncover physiological changes in the nerve cell were inconclusive. European research on neurasthenics reported such findings as loss of elasticity of blood vessels, thickening of the cell wall, changes in the shape of nerve cells, or nerve cells that never advanced beyond an embryonic state. Another theory held that an overtaxed organism cannot keep up with metabolic requirements, leading to inadequate cell nutrition and waste excretion. The weakened cells cannot develop properly, while the resulting build-up of waste products effectively poisons the cells (so-called “autointoxication”). This theory was especially attractive because it seemed to explain the extreme diversity of neurasthenic symptoms: weakened or poisoned cells might affect the functioning of any organ in the body. Furthermore, “autointoxicants” could have a stimulatory effect, helping to account for the increased sensitivity and overexcitability characteristic of neurasthenics.” (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia)

This early scientific research could not lessen the mercurial sense of unease, as neurasthenia was from its inception a broad category that captured some greater shift in public mood, even as it so powerfully shaped the individual’s health. For all the effort, there were as many theories about neurasthenia as there were symptoms. Deeper insight was required. “[I]f a human being is a multiformity of mind, body, soul, and spirit,” writes Preston, “you don’t achieve wholeness or fulfillment by amputating or suppressing one or more of these aspects, but only by an effective integration of the four aspects.” But integration is easier said than done.

The modern human hasn’t been suffering from mere psychic wear and tear, for the individual body itself has been showing the signs of sickness, as the diseases of civilization have become harder and harder to ignore. On a societal level of human health, I’ve previously shared passages from Lears (see here) — he discusses the vitalist impulse that was the response to the turmoil, and vitalism often was explored in terms of physical health as the most apparent manifestation, although social and spiritual health were just as often spoken of in the same breath. The whole person was under assault by an accumulation of stressors, and the increasingly isolated individual didn’t have the resources to fight them off.

By the way, this was far from being limited to America. Europeans picked up the discussion of neurasthenia and took it in other directions, often with less optimism about progress, and with some thinkers emphasizing social interpretations that placed specific blame on hyper-individualism (Laura Goering, “Russian Nervousness”: Neurasthenia and National Identity in Nineteenth-Century Russia). Thoughts on neurasthenia became mixed up with earlier speculations on nostalgia and romanticized notions of rural life. More important, Russian thinkers in particular understood that the problems of modernity weren’t limited to the upper classes, instead extending across entire populations, as a result of how societies had been turned on their heads during that fractious century of revolutions.

In looking around, I came across some other interesting stuff. From 1901 Nervous and Mental Diseases by Archibald Church and Frederick Peterson, the authors in the chapter on “Mental Disease” are keen to further the description, categorization, and labeling of ‘insanity’. And I noted their concern with physiological asymmetry, something shared later with Price, among many others going back to the prior century.

Maybe asymmetry was not only indicative of developmental issues but also symbolic of a deeper imbalance. The attempts at phrenological analysis of psychiatric, criminal, and anti-social behavior were off-base; and, despite the bigotry and proto-genetic determinism among racists using these kinds of ideas, there is a simple truth about health in relation to physiological development, most easily observed in bone structure, though it would take many generations to understand the deeper scientific causes, from nutrition (e.g., Price’s discovery of vitamin K2, what he called Activator X) to parasites, toxins, and epigenetics. Church and Peterson did acknowledge that this went beyond mere individual or even familial issues: “The proportion of the insane to normal individuals may be stated to be about 1 to 300 of the population, though this proportion varies somewhat within narrow limits among different races and countries. It is probable that the intemperate use of alcohol and drugs, the spreading of syphilis, and the overstimulation in many directions of modern civilization have determined an increase difficult to estimate, but nevertheless palpable, of insanity in the present century as compared with past centuries.”

Also, there is the 1902 The Journal of Nervous and Mental Disease: Volume 29, edited by William G. Spiller. There is much discussion in there about how anxiety was observed, diagnosed, and treated at the time. Some of the case studies make for a fascinating read — check out “Report of a Case of Epilepsy Presenting as Symptoms Night Terrors, Impellant Ideas, Complicated Automatisms, with Subsequent Development of Convulsive Motor Seizures and Psychical Aberration” by W. K. Walker. This reminds me of the case that influenced Sigmund Freud and Carl Jung, Daniel Paul Schreber’s 1903 Memoirs of My Nervous Illness.

Talk about “a disintegration of the personality and character structure of Modern Man and mental-rational consciousness,” as Scott Preston put it. He goes on to say that “The individual is not a natural thing. There is an incoherency in Margaret Thatcher’s view of things when she infamously declared “there is no such thing as society” — that she saw only individuals and families, that is to say, atoms and molecules.” Her saying that really did capture the mood of the society she denied existing. Even the family was shrunk down to the ‘nuclear’. To state there is no society is to declare that there is also no extended family, no kinship, no community, that there is no larger human reality of any kind. Ironically, in this pseudo-libertarian sentiment there is nothing holding the family together other than government laws imposing strict control of marriage and parenting, where common finances lock two individuals together under the rule of capitalist realism (the only larger realities involved are inhuman systems) — compare high-trust societies such as the Nordic countries, where the definition and practice of family life is less legalistic (Nordic Theory of Love and Individualism).

* * *

“It is easy, as we can see, for a barbarian to be healthy; for a civilized man the task is hard. The desire for a powerful and uninhibited ego may seem to us intelligible, but, as is shown by the times we live in, it is in the profoundest sense antagonistic to civilization.”
~Sigmund Freud, 1938
An Outline of Psychoanalysis

“Consciousness is a very recent acquisition of nature, and it is still in an “experimental” state. It is frail, menaced by specific dangers, and easily injured.”
~Carl Jung, 1961
Man and His Symbols
Part 1: Approaching the Unconscious
The importance of dreams

The individual consumer-citizen as a legal member of a family unit has to be created and then controlled, as it is a rather unstable atomized identity. “The idea of the ‘individual’,” Preston says, “has become an unsustainable metaphor and moral ideal when the actual reality is ‘21st century schizoid man’ — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in ‘the New Normal’ of double-talk, double-think, double-standard, and double-bind.” That is partly the reason for the heavy focus on the body, an attempt to make concrete the individual in order to hold together the splintered self — great analysis of this can be found in Lewis Hyde’s Trickster Makes This World: “an unalterable fact about the body is linked to a place in the social order, and in both cases, to accept the link is to be caught in a kind of trap. Before anyone can be snared in this trap, an equation must be made between the body and the world (my skin color is my place as a Hispanic; menstruation is my place as a woman)” (see one of my posts about it: Lock Without a Key). Along with increasing authoritarianism, there was increasing medicalization and rationalization — to try to make sense of what was senseless.

A specific example of a change can be found in Dr. Frederick Hollick (1818-1900), a popular writer and speaker on medicine and health — his “links were to the free-thinking tradition, not to Christianity” (Helen Lefkowitz Horowitz, Rewriting Sex). Under the influence of Mesmerism and animal magnetism, he studied and wrote about what was variously called, in more scientific-sounding terms, electrotherapeutics, galvanism, and electro-galvanism. Hollick was an English follower of the Scottish industrialist and socialist Robert Dale Owen, whom he literally followed to the United States, where Owen started the utopian community New Harmony, a Southern Indiana village bought from the utopian German Harmonists and then filled with brilliant and innovative minds but lacking in practical know-how about running a self-sustaining community (Abraham Lincoln, who later became a friend to the Owen family, recalled as a boy seeing the boat full of books heading to New Harmony).

“As had Owen before him, Hollick argued for the positive value of sexual feeling. Not only was it neither immoral nor injurious, it was the basis for morality and society. […] In many ways, Hollick was a sexual enthusiast” (Horowitz). These were the social circles of Abraham Lincoln, who personally knew free-love advocates; that is why early Republicans were often referred to as ‘Red Republicans’, the ‘Red’ indicating radicalism, as it still does to this day. Hollick wasn’t the first to be a sexual advocate nor, of course, would he be the last — preceding him were Sarah Grimke (1837, Equality of the Sexes) and Charles Knowlton (1839, The Private Companion of Young Married People), Hollick having been “a student of Knowlton’s work” (Debran Rowland, The Boundaries of Her Body); and following him were two more well known figures, the previously mentioned Bernarr Macfadden (1868-1955), who was the first major health and fitness guru, and Wilhelm Reich (1897–1957), who was the less respectable member of the trinity formed with Sigmund Freud and Carl Jung. Sexuality became a symbolic issue of politics and health, partly because of increasing scientific knowledge but also because of the increasing marketization of products such as birth control (with public discussion of contraceptives happening in the late 1700s and advances in contraceptive production in the early 1800s), the latter being quite significant as it meant individuals could control pregnancy, which is particularly relevant to women. It should be noted that Hollick promoted the ideal of female sexual autonomy, that sex should be assented to and enjoyed by both partners.

This growing concern with sexuality began with the growing middle class in the decades following the American Revolution. Among much else, it was related to the post-revolutionary focus on parenting and the perceived need for raising republican citizens — this formed an audience far beyond radical libertinism and free love. Expert advice was needed for the new bourgeois family life, as part of the ‘civilizing process’ that increasingly took hold at that time, with not only sexual manuals but also parenting guides, health pamphlets, books of manners, cookbooks, diet books, etc. — cut off from the roots of traditional community and kinship, the modern individual no longer trusted inherited wisdom and so needed to be taught how to live, how to behave and relate (Norbert Elias, The Civilizing Process, & Society of Individuals; Bruce Mazlish, Civilization and Its Contents; Keith Thomas, In Pursuit of Civility; Stephen Mennell, The American Civilizing Process; Cas Wouters, Informalization; Jonathan Fletcher, Violence and Civilization; François Dépelteau & T. Landini, Norbert Elias and Social Theory; Rob Watts, States of Violence and the Civilising Process; Pieter Spierenburg, Violence and Punishment; Steven Pinker, The Better Angels of Our Nature; Eric Dunning & Chris Rojek, Sport and Leisure in the Civilizing Process; D. E. Thiery, Polluting the Sacred; Helmut Kuzmics & Roland Axtmann, Authority, State and National Character; Mary Fulbrook, Un-Civilizing Processes?; John Zerzan, Against Civilization; Michel Foucault, Madness and Civilization; Dennis Smith, Norbert Elias and Modern Social Theory; Stjepan Mestrovic, The Barbarian Temperament; Thomas Salumets, Norbert Elias and Human Interdependencies). Along with the rise of science, this situation promoted the role of the public intellectual, which Hollick effectively took advantage of; after the failure of Owen’s utopian experiment, he went on the lecture circuit, which brought legal cases in unsuccessful attempts to silence him, the kind of persecution that Reich also later endured.

To put it in perspective, this Antebellum era of public debate and public education on sexuality coincided with other changes. Following the revolutionary-era feminism (e.g., Mary Wollstonecraft), the ‘First Wave’ of organized feminists emerged generations later with the Seneca Falls convention in 1848 and, in that movement, there was a strong abolitionist impulse. This was part of the rise of ideological -isms in the North that so concerned the Southern aristocrats who wanted to maintain their hierarchical control of the entire country, control they were quickly losing with the shift of power in the Federal government. A few years before that, in 1844, a more effective condom was developed using vulcanized rubber, although condoms had been on the market since the previous decade; also in the 1840s, the vaginal sponge became available. Interestingly, many feminists were as against contraceptives as they were against abortions. These were far from mere practical issues, as politics imbued every aspect; some feminists worried that divorcing sexuality from pregnancy might lessen the role of women and motherhood in society.

This was at a time when the abortion rate was sky-rocketing, indicating most women held other views, since large farm families were less needed with the increase of both industrialized urbanization and industrialized farming. “Yet we also know that thousands of women were attending lectures in these years, lectures dealing, in part, with fertility control. And rates of abortion were escalating rapidly, especially, according to historian James Mohr, the rate for married women. Mohr estimates that in the period 1800-1830, perhaps one out of every twenty-five to thirty pregnancies was aborted. Between 1850 and 1860, he estimates, the ratio may have been one out of every five or six pregnancies. At mid-century, more than two hundred full-time abortionists reportedly worked in New York City.” Other sources concur and extend this pattern of high abortion rates into the early 20th century: “Some have estimated that between 20-35 percent of 19th century pregnancies were terminated as a means of restoring “menstrual regularity” (Luker, 1984, p. 18-21). About 20 percent of pregnancies were aborted as late as in the 1930s (Tolnai, 1939, p. 425)” (Rickie Solinger, Pregnancy and Power, p. 61; Polly F. Radosh, “Abortion: A Sociological Perspective”, from Interdisciplinary Views on Abortion, ed. by Susan A. Martinelli-Fernandez, Lori Baker-Sperry, & Heather McIlvaine-Newsad).

In the unGodly and unChurched period of early America (“We forgot.”), organized religion was weak and “premarital sex was typical, many marriages following after pregnancy, but some people simply lived in sin. Single parents and ‘bastards’ were common” (A Vast Experiment). Abortions were so common at the time that abortifacients were advertised in major American newspapers, something that is never seen today. “Abortifacients were hawked in store fronts and even door to door. Vendors openly advertised their willingness to end women’s pregnancies” (Erin Blakemore, The Criminalization of Abortion Began as a Business Tactic). By the way, the oldest of the founding fathers, Benjamin Franklin, published in 1748 material about traditional methods of at-home abortions, what he referred to as ‘suppression of the courses’ (Molly Farrell, Ben Franklin Put an Abortion Recipe in His Math Textbook; Emily Feng, Benjamin Franklin gave instructions on at-home abortions in a book in the 1700s). It was a reprinting of material from 1734. “While “suppression of the courses” can apply to any medical condition that results in the suspension of one’s menstrual cycle, the entry specifically refers to “unmarried women.” Described as a “misfortune,” it recommends a number of known abortifacients from that time, like pennyroyal water and bellyache root, also known as angelica” (Nur Ibrahim, Did Ben Franklin Publish a Recipe in a Math Textbook on How to Induce Abortion?).

This is unsurprising, as abortifacients had been known for millennia, recorded in ancient texts from diverse societies, and were probably common knowledge prior to any written language, considering that abortifacients are used by many hunter-gatherer tribes who need birth control to space out pregnancies in order to avoid malnourished babies, among other reasons. This is true within the Judeo-Christian tradition as well, as when the Old Testament gives an abortion recipe for a wife who gets pregnant from an affair (Numbers 5:11-31). Patriarchal social dominators sought to further control women not necessarily for religious reasons, but more because medical practice was becoming professionalized by men who wanted to eliminate the business competition of female doctors, midwives, and herbalists. “To do so, they challenged common perceptions that a fetus was not a person until the pregnant mother felt it “quicken,” or move, inside their womb. In a time before sonograms, this was often the only way to definitively prove that a pregnancy was underway. Quickening was both a medical and legal concept, and abortions were considered immoral or illegal only after quickening. Churches discouraged the practice, but made a distinction between a woman who terminated her pregnancy pre- or post-quickening” (Erin Blakemore). Yet these conservative authoritarians claimed, and still claim, to speak on behalf of some vague and amorphous concept of Western Civilization and Christendom.

This is a great example of how, through the power of charismatic demagogues and Machiavellian social dominators, modern reactionary ideology obscures the past with deceptive nostalgia and replaces the traditional with historical revisionism. The thing is, until the modern era, abortifacients and other forms of birth control weren’t politicized, much less under the purview of judges. They were practical concerns, largely settled privately and personally or else handled informally within communities and families. “Prior to the formation of the AMA, decisions related to pregnancy and abortion were made primarily within the domain and control of women. Midwives and the pregnant women they served decided the best course of action within extant knowledge of pregnancy. Most people did not view what would currently be called first trimester abortion as a significant moral issue. […] A woman’s awareness of quickening indicated a real pregnancy” (Polly F. Radosh). Yet something did change as birth control improved in efficacy and became ever more common, or at least more out in the open, making it a much more public and politicized issue, one further exacerbated by capitalist markets and mass media.

Premarital sex or, heck, even marital sex no longer inevitably meant birth; and with contraceptives, unwanted pregnancies often could be prevented entirely. Maybe this is why fertility had been declining for so long, and it was definitely the reason for the mid-19th century moral panic. “Extending the analysis back further, the White fertility rate declined from 7.04 in 1800 to 5.42 in 1850, to 3.56 in 1900, and 2.98 in 1950. Thus, the White fertility declined for nearly all of American history but may have bottomed out in the 1980s. Black fertility has also been declining for well over 150 years, but it may very well continue to do so in the coming decades” (Ideas and Data, Sex, Marriage, and Children: Trends Among Millennial Women). If this is a crisis, it started pretty much at the founding of the country. And if we had reliable data before that, we might see the trend having originated in the colonial era or maybe back in late feudalism during the enclosure movement that destroyed traditional rural communities and kinship groups. Early Americans, by today’s standards of the culture wars, were not good Christians — many visiting Europeans at the time saw them as uncouth heathens and quite dangerous at that, given the common American practice of toting around guns and knives, ever ready for a fight, whereas carrying weapons had been made illegal in England. In The Churching of America, Roger Finke and Rodney Stark write (pp. 25-26):

“Americans are burdened with more nostalgic illusions about the colonial era than about any other period in their history. Our conceptions of the time are dominated by a few powerful illustrations of Pilgrim scenes that most people over forty stared at year after year on classroom walls: the baptism of Pocahontas, the Pilgrims walking through the woods to church, and the first Thanksgiving. Had these classroom walls also been graced with colonial scenes of drunken revelry and barroom brawling, of women in risque ball-gowns, of gamblers and rakes, a better balance might have been struck. For the fact is that there never were all that many Puritans, even in New England, and non-Puritan behavior abounded. From 1761 through 1800 a third (33.7%) of all first births in New England occurred after less than nine months of marriage (D. S. Smith, 1985), despite harsh laws against fornication. Granted, some of these early births were simply premature and do not necessarily show that premarital intercourse had occurred, but offsetting this is the likelihood that not all women who engaged in premarital intercourse would have become pregnant. In any case, single women in New England during the colonial period were more likely to be sexually active than to belong to a church-in 1776 only about one out of five New Englanders had a religious affiliation. The lack of affiliation does not necessarily mean that most were irreligious (although some clearly were), but it does mean that their faith lacked public expression and organized influence.”

Though marriage remained important as an ideal in American culture, what changed was that procreative control became increasingly available — with fewer accidental pregnancies and more abortions, a powerful motivation for marriage disappeared. Unsurprisingly, at the same time, there were increasing worries about the breakdown of community and family, concerns that would turn into moral panic at various points. Antebellum America was in turmoil. This was concretely exemplified by the dropping birth rate that was already noticeable by the mid-19th century (Timothy Crumrin, “Her Daily Concern:” Women’s Health Issues in Early 19th-Century Indiana) and was nearly halved from 1800 to 1900 (Debran Rowland, The Boundaries of Her Body). “The late 19th century and early 20th saw a huge increase in the country’s population (nearly 200 percent between 1860 and 1910) mostly due to immigration, and that population was becoming ever more urban as people moved to cities to seek their fortunes—including women, more of whom were getting college educations and jobs outside the home” (Julie Beck, ‘Americanitis’: The Disease of Living Too Fast). It was a period of crisis, not all that different from our present crisis, including the fear about the low birth rate of native-born white Americans, especially the endangered species of whites/WASPs being overtaken by the supposed dirty hordes of blacks, ethnics, and immigrants (i.e., replacement theory); at a time when Southern and Eastern Europeans, and even the Irish, were questionable in their whiteness, particularly if Catholic (Aren’t Irish White?).

The promotion of birth control was considered a genuine threat to American society, maybe to all of Western Civilization. It was most directly a threat to traditional gender roles. Women could better control when they got pregnant, a decisive factor in the phenomenon of larger numbers of women entering college and the workforce. And with an epidemic of neurasthenia, this dilemma was worsened by the crippling effeminacy that neutered masculine potency. Was modern man, specifically the white ruling elite, up for the task of carrying on Western Civilization?

“Indeed, civilization’s demands on men’s nerve force had left their bodies positively effeminate. According to Beard, neurasthenics had the organization of “women more than men.” They possessed “a muscular system comparatively small and feeble.” Their dainty frames and feeble musculature lacked the masculine vigor and nervous reserves of even their most recent forefathers. “It is much less than a century ago, that a man who could not [drink] many bottles of wine was thought of as effeminate—but a fraction of a man.” No more. With their dwindling reserves of nerve force, civilized men were becoming increasingly susceptible to the weakest stimulants until now, “like babes, we find no safe retreat, save in chocolate and milk and water.” Sex was as debilitating as alcohol for neurasthenics. For most men, sex in moderation was a tonic. Yet civilized neurasthenics could become ill if they attempted intercourse even once every three months. As Beard put it, “there is not force enough left in them to reproduce the species or go through the process of reproducing the species.” Lacking even the force “to reproduce the species,” their manhood was clearly in jeopardy.” (Gail Bederman, Manliness and Civilization, pp. 87-88)

This led to a backlash that began before the Civil War with the early obscenity laws and abortion laws, but went into high gear with the 1873 Comstock laws that effectively shut down the free market of both ideas and products related to sexuality, including sex toys. This made it nearly impossible for most women to learn about birth control or obtain contraceptives and abortifacients. There was a felt need to restore order, and that meant the white male order of the WASP middle-to-upper classes, especially with the end of slavery, mass immigration of ethnics, urbanization, and industrialization. The crisis wasn’t only ideological or political. The entire world had been falling apart for centuries with the ending of feudalism and the ancien regime, the last remnants of which in America were maintained through slavery. Motherhood being seen as the backbone of civilization, it was believed that women’s sexuality had to be controlled and, unlike so much else that was out of control, it actually could be controlled through enforcement of laws.

Outlawing abortions is a particularly interesting example of social control. Even with laws in place, abortions remained commonly practiced by local doctors, even in many rural areas (American Christianity: History, Politics, & Social Issues). Corey Robin argues that the strategy hasn’t been to deny women’s agency but to assert their subordination (Denying the Agency of the Subordinate Class). This is why, according to Robin, abortion laws were designed primarily to target male doctors rather than their female patients, although such prosecutions were rare (at least once women had been largely removed from medical and healthcare practice, beyond the role of nurses who assisted male doctors). Everything comes down to agency or its lack or loss, but our entire sense of agency is out of accord with our own human nature. We seek to control what is outside of us, including control of others, for our own sense of self is out of control. The legalistic worldview is inherently authoritarian, at the heart of what Julian Jaynes proposes as the post-bicameral project of consciousness, the metaphorically contained self. But this psychic container is weak and keeps leaking all over the place.

* * *

“It is clear that if it goes on with the same ruthless speed for the next half century . . . the sane people will be in a minority at no very distant day.”
~Henry Maudsley, 1877
“The Alleged Increase of Insanity”
Journal of Mental Science, Volume 23, Issue 101

“If this increase was real, we have argued, then we are now in the midst of an epidemic of insanity so insidious that most people are even unaware of its existence.”
~Edwin Fuller Torrey & Judy Miller, 2001
The Invisible Plague: The Rise of Mental Illness from 1750 to the Present

To bring it back to the original inspiration, Scott Preston wrote: “Quite obviously, our picture of the human being as an indivisible unit or monad of existence was quite wrong-headed, and is not adequate for the generation and re-generation of whole human beings. Our self-portrait or self-understanding of “human nature” was deficient and serves now only to produce and reproduce human caricatures. Many of us now understand that the authentic process of individuation hasn’t much in common at all with individualism and the supremacy of the self-interest.” The failure we face is one of identity, of our way of being in the world. As with neurasthenia in the past, we are now in a crisis of anxiety and depression, along with yet another moral panic about the declining white race. So, we get the likes of Steve Bannon, Donald Trump, and Jordan Peterson. We failed to resolve past conflicts and so they keep re-emerging. Over this past century, we have continued to be in a crisis of identity (Mark Greif, The Age of the Crisis of Man).

“In retrospect, the omens of an impending crisis and disintegration of the individual were rather obvious,” Preston points out. “So, what we face today as “the crisis of identity” and the cognitive dissonance of “the New Normal” is not something really new — it’s an intensification of that disintegrative process that has been underway for over four generations now. It has now become acute. This is the paradox. The idea of the “individual” has become an unsustainable metaphor and moral ideal when the actual reality is “21st century schizoid man” — a being who, far from being individual, is falling to pieces and riven with self-contradiction, duplicity, and cognitive dissonance, as reflects life in “the New Normal” of double-talk, double-think, double-standard, and double-bind.” We never were individuals. It was just a story we told ourselves, but there are others that could be told. Scott Preston offers an alternative narrative, that of individuation.

* * *

I found some potentially interesting books while skimming material on Google Books, in researching Frederick Hollick and related topics. From among the titles below, I’ll share some text from one of them because it offers a good summary of sexuality at the time, specifically women’s sexuality. Obviously, the issue went far beyond sexuality itself, and going by my own theorizing I’d say it is yet another example of symbolic conflation, considering its direct relationship to abortion.

The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland
p. 34

WOMEN AND THE WOMB: The Emerging Birth Control Debate

The twentieth century dawned in America on a falling white birth rate. In 1800, an average of seven children were born to each “American-born white wife,” historians report. 29 By 1900, that number had fallen to roughly half. 30 Though there may have been several factors, some historians suggest that this decline—occurring as it did among young white women—may have been due to the use of contraceptives or abstinence, though few talked openly about it. 31

“In spite of all the rhetoric against birth control, the birthrate plummeted in the late nineteenth century in America and Western Europe (as it had in France the century before); family size was halved by the time of World War I,” notes Shari Thurer in The Myth of Motherhood. 32

As issues go, the “plummeting birthrate” among whites was a powder keg, sparking outcry as the “failure” of the privileged class to have children was contrasted with the “failure” of poor immigrants and minorities to control the number of children they were having. Criticism was loud and rampant. “The upper classes started the trend, and by the 1880s the swarms of ragged children produced by the poor were regarded by the bourgeoisie, so Emile Zola’s novels inform us, as evidence of the lower order’s ignorance and brutality,” Thurer notes. 33

But the seeds of this then-still nearly invisible movement had been planted much earlier. In the late 1700s, British political theorists began disseminating information on contraceptives as concerns of overpopulation grew among some classes. 34 Despite the separation of an ocean, by the 1820s, this information was “seeping” into the United States.

“Before the introduction of the Comstock laws, contraceptive devices were openly advertised in newspapers, tabloids, pamphlets, and health magazines,” Yalom notes. “Condoms had become increasingly popular since the 1830s, when vulcanized rubber (the invention of Charles Goodyear) began to replace the earlier sheepskin models.” 35 Vaginal sponges also grew in popularity during the 1840s, as women traded letters and advice on contraceptives. 36 Of course, prosecutions under the Comstock Act went a long way toward chilling public discussion.

Though Margaret Sanger’s is often the first name associated with the dissemination of information on contraceptives in the early United States, in fact, a woman named Sarah Grimke preceded her by several decades. In 1837, Grimke published the Letters on the Equality of the Sexes, a pamphlet containing advice about sex, physiology, and the prevention of pregnancy. 37

Two years later, Charles Knowlton published The Private Companion of Young Married People, becoming the first physician in America to do so. 38 Near this time, Frederick Hollick, a student of Knowlton’s work, “popularized” the rhythm method and douching. And by the 1850s, a variety of material was being published providing men and women with information on the prevention of pregnancy. And the advances weren’t limited to paper.

“In 1846, a diaphragm-like article called The Wife’s Protector was patented in the United States,” according to Marilyn Yalom. 39 “By the 1850s dozens of patents for rubber pessaries ‘inflated to hold them in place’ were listed in the U.S. Patent Office records,” Janet Farrell Brodie reports in Contraception and Abortion in 19th Century America. 40 And, although many of these early devices were often more medical than prophylactic, by 1864 advertisements had begun to appear for “an India-rubber contrivance” similar in function and concept to the diaphragms of today. 41

“[B]y the 1860s and 1870s, a wide assortment of pessaries (vaginal rubber caps) could be purchased at two to six dollars each,” says Yalom. 42 And by 1860, following publication of James Ashton’s Book of Nature, the five most popular ways of avoiding pregnancy—“withdrawal, and the rhythm methods”—had become part of the public discussion. 43 But this early contraceptives movement in America would prove a victim of its own success. The openness and frank talk that characterized it would run afoul of the burgeoning “purity movement.”

“During the second half of the nineteenth century, American and European purity activists, determined to control other people’s sexuality, railed against male vice, prostitution, the spread of venereal disease, and the risks run by a chaste wife in the arms of a dissolute husband,” says Yalom. “They agitated against the availability of contraception under the assumption that such devices, because of their association with prostitution, would sully the home.” 44

Anthony Comstock, a “fanatical figure,” some historians suggest, was a charismatic “purist,” who, along with others in the movement, “acted like medieval Christians engaged in a holy war,” Yalom says. 45 It was a successful crusade. “Comstock’s dogged efforts resulted in the 1873 law passed by Congress that barred use of the postal system for the distribution of any ‘article or thing designed or intended for the prevention of contraception or procuring of abortion’,” Yalom notes.

Comstock’s zeal would also lead to his appointment as a special agent of the United States Post Office with the authority to track and destroy “illegal” mailing, i.e., mail deemed to be “obscene” or in violation of the Comstock Act. Until his death in 1915, Comstock is said to have been energetic in his pursuit of offenders, among them Dr. Edward Bliss Foote, whose articles on contraceptive devices and methods were widely published. 46 Foote was indicted in January of 1876 for dissemination of contraceptive information. He was tried, found guilty, and fined $3,000. Though donations of more than $300 were made to help defray costs, Foote was reportedly more cautious after the trial. 47 That “caution” spread to others, some historians suggest.

Disorderly Conduct: Visions of Gender in Victorian America
By Carroll Smith-Rosenberg

Riotous Flesh: Women, Physiology, and the Solitary Vice in Nineteenth-Century America
by April R. Haynes

The Boundaries of Her Body: The Troubling History of Women’s Rights in America
by Debran Rowland

Rereading Sex: Battles Over Sexual Knowledge and Suppression in Nineteenth-century America
by Helen Lefkowitz Horowitz

Rewriting Sex: Sexual Knowledge in Antebellum America, A Brief History with Documents
by Helen Lefkowitz Horowitz

Imperiled Innocents: Anthony Comstock and Family Reproduction in Victorian America
by Nicola Kay Beisel

Against Obscenity: Reform and the Politics of Womanhood in America, 1873–1935
by Leigh Ann Wheeler

Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age
by Paul S. Boyer

American Sexual Histories
edited by Elizabeth Reis

Wash and Be Healed: The Water-Cure Movement and Women’s Health
by Susan Cayleff

From Eve to Evolution: Darwin, Science, and Women’s Rights in Gilded Age America
by Kimberly A. Hamlin

Manliness and Civilization: A Cultural History of Gender and Race in the United States, 1880-1917
by Gail Bederman

One Nation Under Stress: The Trouble with Stress as an Idea
by Dana Becker

* * *

8/18/19 – Looking back at this piece, I realize there is so much that could be added to it. And it already is long. It’s a topic that would require writing a book to do it justice. And it is such a fascinating area of study with lines of thought going in numerous directions. But I’ll limit myself by adding only a few thoughts that point toward some of those other directions.

The topic of this post goes back to the Renaissance (Western Individuality Before the Enlightenment Age) and even earlier to the Axial Age (Hunger for Connection), a thread that can be traced back through history following the collapse of what Julian Jaynes called bicameral civilization in the Bronze Age. At the beginning of modernity, the psychic tension erupted in many ways that were increasingly dramatic and sometimes disturbing, from revolution to media panics (Technological Fears and Media Panics). I see all of this as having to do with the isolating and anxiety-inducing effects of hyper-individualism. The rigid egoic boundaries required by our social order are simply tiresome (Music and Dance on the Mind), as Julian Jaynes conjectured:

“Another advantage of schizophrenia, perhaps evolutionary, is tirelessness. While a few schizophrenics complain of generalized fatigue, particularly in the early stages of the illness, most patients do not. In fact, they show less fatigue than normal persons and are capable of tremendous feats of endurance. They are not fatigued by examinations lasting many hours. They may move about day and night, or work endlessly without any sign of being tired. Catatonics may hold an awkward position for days that the reader could not hold for more than a few minutes. This suggests that much fatigue is a product of the subjective conscious mind, and that bicameral man, building the pyramids of Egypt, the ziggurats of Sumer, or the gigantic temples at Teotihuacan with only hand labor, could do so far more easily than could conscious self-reflective men.”

On the Facebook page for Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind, Luciano Imoto made the same basic point in speaking about hyper-individualism. He stated that “In my point of view the constant use of memory (and the hippocampus) to sustain a fictitious identity of “self/I” could be deleterious to the brain’s health at long range (considering that the brain consumes about 20 percent of the body’s energy).” I’m sure others have made similar observations. This strain on the psyche has been building up for a long time, but it became particularly apparent in the 19th century, to such an extent it was deemed necessary to build special institutions to house and care for the broken and deficient humans who couldn’t handle modern life or else couldn’t appropriately conform to the ever more oppressive social norms (Mark Jackson, The Borderland of Imbecility). As radical as some consider Jaynes to be, insights like this were hardly new — in 1867, Henry Maudsley offered insight laced with bigotry, from The Physiology and Pathology of Mind:

“There are general causes, such as the state of civilization in a country, the form of its government and its religion, the occupation, habits, and condition of its inhabitants, which are not without influence in determining the proportion of mental diseases amongst them. Reliable statistical data respecting the prevalence of insanity in different countries are not yet to be had; even the question whether it has increased with the progress of civilization has not been positively settled. Travellers are certainly agreed that it is a rare disease amongst barbarous people, while, in the different civilized nations of the world, there is, so far as can be ascertained, an average of about one insane person in five hundred inhabitants. Theoretical considerations would lead to the expectation of an increased liability to mental disorder with an increase in the complexity of the mental organization: as there are a greater liability to disease, and the possibility of many more diseases, in a complex organism like the human body, where there are many kinds of tissues and an orderly subordination of parts, than in a simple organism with less differentiation of tissue and less complexity of structure; so in the complex mental organization, with its manifold, special, and complex relations with the external, which a state of civilization implies, there is plainly the favourable occasion of many derangements. The feverish activity of life, the eager interests, the numerous passions, and the great strain of mental work incident to the multiplied industries and eager competition of an active civilization, can scarcely fail, one may suppose, to augment the liability to mental disease. On the other hand, it may be presumed that mental sufferings will be as rare in an infant state of society as they are in the infancy of the individual. That degenerate nervous function in young children is displayed, not in mental disorder, but in convulsions; that animals very seldom suffer from insanity; that insanity is of comparatively rare occurrence among savages; all these are circumstances that arise from one and the same fact—a want of development of the mental organization. There seems, therefore, good reason to believe that, with the progress of mental development through the ages, there is, as is the case with other forms of organic development, a correlative degeneration going on, and that an increase of insanity is a penalty which an increase of our present civilization necessarily pays. […]

“If we admit such an increase of insanity with our present civilization, we shall be at no loss to indicate causes for it. Some would no doubt easily find in over-population the prolific parent of this as of numerous other ills to mankind. In the fierce and active struggle for existence which there necessarily is where the claimants are many and the supplies are limited, and where the competition therefore is severe, the weakest must suffer, and some of them, breaking down into madness, fall by the wayside. As it is the distinctly manifested aim of mental development to bring man into more intimate, special, and complex relations with the rest of nature by means of patient investigations of physical laws, and a corresponding internal adaptation to external relations, it is no marvel, it appears indeed inevitable, that those who, either from inherited weakness or some other debilitating causes, have been rendered unequal to the struggle of life, should be ruthlessly crushed out as abortive beings in nature. They are the waste thrown up by the silent but strong current of progress; they are the weak crushed out by the strong in the mortal struggle for development; they are examples of decaying reason thrown off by vigorous mental growth, the energy of which they testify. Everywhere and always “to be weak is to be miserable.”

As civilization became complex, so did the human mind in having to adapt to it, and sometimes that hit a breaking point in individuals; or else what was previously considered normal behavior was now judged unacceptable, the latter explanation favored by Michel Foucault and Thomas Szasz (also see Bruce Levine’s article, Societies With Little Coercion Have Little Mental Illness). Whatever the explanation, something that once was severely abnormal had become normalized and, as it happened with insidious gradualism, few noticed or would accept what had changed: “Living amid an ongoing epidemic that nobody notices is surreal. It is like viewing a mighty river that has risen slowly over two centuries, imperceptibly claiming the surrounding land, millimeter by millimeter. . . . Humans adapt remarkably well to a disaster as long as the disaster occurs over a long period of time” (E. Fuller Torrey & Judy Miller, Invisible Plague; also see Torrey’s Schizophrenia and Civilization); “At the end of the seventeenth century, insanity was of little significance and was little discussed. At the end of the eighteenth century, it was perceived as probably increasing and was of some concern. At the end of the nineteenth century, it was perceived as an epidemic and was a major concern. And at the end of the twentieth century, insanity was simply accepted as part of the fabric of life. It is a remarkable history.” All of the changes were mostly happening over generations and centuries, which left little if any living memory from when the changes began. Many thinkers like Torrey and Miller would be useful for fleshing this out, but here is a small sampling of authors and their books: Harold D. Foster’s What Really Causes Schizophrenia, Andrew Scull’s Madness in Civilization, Alain Ehrenberg’s Weariness of the Self, etc.; and I shouldn’t ignore the growing field of Jaynesian scholarship such as found in the books put out by the Julian Jaynes Society.

Besides social stress and societal complexity, there was much else that was changing. For example, increasingly concentrated urbanization and close proximity with other species meant ever more spread of infectious diseases and parasites (consider Toxoplasma gondii from domesticated cats; see E. Fuller Torrey’s Beasts of Earth). Also, the 18th century saw the beginnings of industrialization with the related rise of toxins (Dan Olmsted & Mark Blaxill, The Age of Autism: Mercury, Medicine, and a Man-Made Epidemic). That worsened over the following century. Industrialization also transformed the Western diet. Sugar, having been introduced in the early colonial era, now was affordable and available to the general population. And wheat, once hard to grow and limited to the rich, also was becoming a widespread ingredient with new milling methods allowing highly refined white flour which made white bread popular (in the mid-1800s, Stanislas Tanchou did a statistical analysis that correlated the rate of grain consumption with the rate of cancer; and he observed that cancer, like insanity, spread along with civilization). For the first time in history, most Westerners were eating a very high-carb diet. This diet is addictive for a number of reasons and it was combined with the introduction of addictive stimulants. As I argue, this profoundly altered neurocognitive functioning and behavior (The Agricultural Mind, “Yes, tea banished the fairies.”, Autism and the Upper Crust, & Diets and Systems).

This represents an ongoing project for me. And I’m in good company.

“…some deeper area of the being.”

Alec Nevala-Lee shares a passage from Colin Wilson’s Mysteries (see Magic and the art of will). It elicits many thoughts, but I want to focus on the two main related aspects: the self and the will.

The main thing Wilson is talking about is hyper-individualism — the falseness and superficiality, constraint and limitation of anxiety-driven ‘consciousness’, the conscious personality of the ego-self. This is what denies the bundled self and the extended self, the vaster sense of being that challenges the socio-psychological structure of the modern mind. We defend our thick boundaries with great care for fear of what might get in, but this locks us in a prison cell of our own making. In not allowing ourselves to be affected, we make ourselves ineffective or at best only partly effective toward paltry ends. It’s not only a matter of doing “something really well” for we don’t really know what we want to do, as we’ve become disconnected from deeper impulses and broader experience.

For about as long as I can remember, the notion of ‘free will’ has never made sense to me. It isn’t a philosophical disagreement. Rather, in my own experience and in my observation of others, it simply offers no compelling explanation or valid meaning, much less deep insight. It intuitively makes no sense, which is to say it can only make sense if we never carefully think about it with probing awareness and open-minded inquiry. To the degree there is a ‘will’ is to the degree it is inseparable from the self. That is to say the self never wills anything for the self is and can only be known through the process of willing, which is simply to say through impulse and action. We are what we do, but we never know why we do what we do. We are who we are and we don’t know how to be otherwise.

There is no way to step back from the self in order to objectively see and act upon the self. That would require yet another self. The attempt to impose a will upon the self would lead to an infinite regress of selves. That would be a pointless preoccupation, although as entertainments go it is popular these days. A more worthy activity and maybe a greater achievement is to stop trying to contain ourselves and instead to align with a greater sense of self. Will wills itself. And the only freedom that the will possesses is to be itself. That is what some might consider purpose or telos, one’s reason for being or rather one’s reason in being.

No freedom exists in isolation. To believe otherwise is a trap. The precise trap involved is addiction, which is the will driven by compulsion. After all, the addict is the ultimate individual, so disconnected within a repeating pattern of behavior as to be unable to affect or be affected. Complete autonomy is impotence. The only freedom is in relationship, both to the larger world and the larger sense of self. It is in the ‘other’ that we know ourselves. We can only be free in not trying to impose freedom, in not struggling to control and manipulate. True will, if we are to speak of such a thing, is the opposite of willfulness. We are only free to the extent we don’t think in the explicit terms of freedom. It is not a thought in the mind but a way of being in the world.

We know that the conscious will is connected to the narrow, conscious part of the personality. One of the paradoxes observed by [Pierre] Janet is that as the hysteric becomes increasingly obsessed with anxiety—and the need to exert his will—he also becomes increasingly ineffective. The narrower and more obsessive the consciousness, the weaker the will. Every one of us is familiar with the phenomenon. The more we become racked with anxiety to do something well, the more we are likely to botch it. It is [Viktor] Frankl’s “law of reversed effort.” If you want to do something really well, you have to get into the “right mood.” And the right mood involves a sense of relaxation, of feeling “wide open” instead of narrow and enclosed…

As William James remarked, we all have a lifelong habit of “inferiority to our full self.” We are all hysterics; it is the endemic disease of the human race, which clearly implies that, outside our “everyday personality,” there is a wider “self” that possesses greater powers than the everyday self. And this is not the Freudian subconscious. Like the “wider self” of Janet’s patients, it is as conscious as the “contracted self.” We are, in fact, partially aware of this “other self.” When a man “unwinds” by pouring himself a drink and kicking off his shoes, he is adopting an elementary method of relaxing into the other self. When an overworked housewife decides to buy herself a new hat, she is doing the same thing. But we seldom relax far enough; habit—and anxiety—are too strong…Magic is the art and science of using the will. Not the ordinary will of the contracted ego but the “true will” that seems to spring from some deeper area of the being.

Colin Wilson, Mysteries

“…consciousness is itself the result of learning.”

As above, so below
by Axel Cleeremans

A central aspect of the entire hierarchical predictive coding approach, though this is not readily apparent in the corresponding literature, is the emphasis it puts on learning mechanisms. In other works (Cleeremans, 2008, 2011), I have defended the idea that consciousness is itself the result of learning. From this perspective, agents become conscious in virtue of learning to redescribe their own activity to themselves. Taking the proposal that consciousness is inherently dynamical seriously opens up the mesmerizing possibility that conscious awareness is itself a product of plasticity-driven dynamics. In other words, from this perspective, we learn to be conscious. To dispel possible misunderstandings of this proposal right away, I am not suggesting that consciousness is something that one learns like one would learn about the Hundred Years War, that is, as an academic endeavour, but rather that consciousness is the result (vs. the starting point) of continuous and extended interaction with the world, with ourselves, and with others. The brain, from this perspective, continuously (and unconsciously) learns to anticipate the consequences of its own activity on itself, on the environment, and on other brains, and it is from the practical knowledge that accrues in such interactions that conscious experience is rooted. This perspective, in short, endorses the enactive approach introduced by O’Regan and Noë (2001), but extends it both inwards (the brain learning about itself) and further outwards (the brain learning about other brains), so connecting with the central ideas put forward by the predictive coding approach to cognition. In this light, the conscious mind is the brain’s (implicit, enacted) theory about itself, expressed in a language that other minds can understand.

The theory rests on several assumptions and is articulated over three core ideas. A first assumption is that information processing as carried out by neurons is intrinsically unconscious. There is nothing in the activity of individual neurons that make it so that their activity should produce conscious experience. Important consequences of this assumption are (1) that conscious and unconscious processing must be rooted in the same set of representational systems and neural processes, and (2) that tasks in general will always involve both conscious and unconscious influences, for awareness cannot be “turned off” in normal participants.

A second assumption is that information processing as carried out by the brain is graded and cascades (McClelland, 1979) in a continuous flow (Eriksen & Schultz, 1979) over the multiple levels of a heterarchy (Fuster, 2008) extending from posterior to anterior cortex as evidence accumulates during an information processing episode. An implication of this assumption is that consciousness takes time.

The third assumption is that plasticity is mandatory: The brain learns all the time, whether we intend to or not. Each experience leaves a trace in the brain (Kreiman, Fried, & Koch, 2002).

The social roots of consciousness
by Axel Cleeremans

How does this ability to represent the mental states of other agents get going? While there is considerable debate about this issue, it is probably fair to say that one crucial mechanism involves learning about the consequences of the actions that one directs towards other agents. In this respect, interactions with the natural world are fundamentally different from interactions with other agents, precisely because other agents are endowed with unobservable internal states. If I let a spoon drop on a hard floor, the sound that results will always be the same, within certain parameters that only vary in a limited range. The consequences of my action are thus more or less entirely predictable. But if I smile to someone, the consequences that may result are many. Perhaps the person will smile back to me, but it may also be the case that the person will ignore me or that she will display puzzlement, or even that she will be angry at me. It all depends on the context and on the unobservable mental states that the person currently entertains. Of course, there is a lot I can learn about the space of possible responses based on my knowledge of the person, my history of prior interactions with her, and on the context in which my interactions take place. But the point is simply to say that in order to successfully predict the consequences of the actions that I direct towards other agents, I have to build a model of how these agents work. And this is complex because, unlike what is the case for interactions with the natural world, it is an inverse problem: The same action may result in many different reactions, and those different reactions can themselves be caused by many different internal states.

Based on these observations, one provocative claim about the relationships between self-awareness and one’s ability to represent the mental states of other agents (“theory of mind”, as it is called) is thus that theory of mind comes first, as the philosopher Peter Caruthers has defended. That is, it is in virtue of my learning to correctly anticipate the consequences of the actions that I direct towards other agents that I end up developing models of the internal states of such agents, and it is in virtue of the existence of such models that I become able to gain insight about myself (more specifically: about my self). Thus, by this view, self-awareness, and perhaps subjective experience itself, is a consequence of theory of mind as it develops over extended periods of social intercourse.

Proteus Effect and Mediated Experience

The Proteus effect is the way our appearance in media ends up mediating our experience, perception, and identity. It also shapes how we relate to others and how others relate to us. Most of this happens unconsciously.

There are many ways this might relate to other psychological phenomena. And there are real-world equivalents. Consider how the way we dress influences how we act: wearing a black uniform, for instance, increases aggressive behavior. Another powerful example is that children who imagine themselves as a superhero while doing a task will exceed the ability they would otherwise have.

Most interesting is how the Proteus effect might begin to overlap with so much else as immersive media comes to dominate our lives. We already see the power of such influences by way of the placebo effect, the Pygmalion/Rosenthal effect, the golem effect, stereotype threat, and much else. I’ve been particularly interested in the placebo effect, as the efficacy of antidepressants for most people is not statistically distinguishable from that of a placebo, demonstrating how something can allow us to imagine ourselves into a different state of mind. Or consider how simply interacting with a doctor, or someone acting like a doctor, brings relief without any actual procedure having been involved.

Our imaginations are powerful: our imagination of ourselves and of others, along with others’ imagination of us. Tell a doctor or a teacher something about a patient or student, even if untrue, and they will respond as if it were true, with measurable real-world effects. New media could have similar effects, even when we know it isn’t ‘real’ but merely virtual. Imagination doesn’t necessarily concern itself with the constraints of supposed rationality, as shown by how people viscerally react to a fake arm being cut once they’ve come to identify with it, despite consciously knowing it is not actually their arm.

Our minds are highly plastic and our experience easily influenced. The implications are immense, from education to mental health, from advertising to propaganda. The Proteus effect could play a transformative role in the further development of the modern mind, either through potential greater self-control or greater social control.

* * *

Virtual Worlds Are Real
by Nick Yee

Meet Virtual You: How Your VR Self Influences Your Real-Life Self
by Amy Cuddy

The Proteus Effect: How Our Avatar Changes Online Behavior
by John M. Grohol

Enhancing Our Lives with Immersive Virtual Reality
by Mel Slater & Maria V. Sanchez-Vives

Virtual Reality and Social Networks Will Be a Powerful Combination
by Jeremy N. Bailenson and Jim Blascovich

Promoting motivation with virtual agents and avatars
by Amy L. Baylor

Avatars and the Mirrorbox: Can Humans Hack Empathy?
by Danna Staaf

The Proteus effect: How gaming may revolutionise peacekeeping
by Gordon Hunt

Can virtual reality convince Americans to save for retirement?
by The Week Staff

When Reason Falters, It’s Age-Morphing Apps and Virtual Reality to the Rescue
by David Berreby

Give Someone a Virtual Avatar and They Adopt Stereotype Behavior
by Colin Schultz

Wii, Myself, and Size
by Li BJ, Lwin MO, & Jung Y

Would Being Forced to Use This ‘Obese’ Avatar Affect Your Physical Fitness?
by Esther Inglis-Arkell

The Proteus Effect and Self-Objectification via Avatars
by Big Think editors

The Proteus Effect in Dyadic Communication: Examining the Effect of Avatar Appearance in Computer-Mediated Dyadic Interaction
by Brandon Van Der Heide, Erin M. Schumaker, Ashley M. Peterson, & Elizabeth B. Jones

Verbal Behavior

There is a somewhat interesting discussion of the friendship between B.F. Skinner and W.V.O. Quine. The piece explores their shared interests and possible influences on one another. It’s not exactly an area of personal interest, but it got me thinking about Julian Jaynes.

Skinner is famous for his behaviorist research. When behaviorism is mentioned, what immediately comes to mind for most people is Pavlov’s dog. But behaviorism wasn’t limited to animals and simple responses to stimuli. Skinner developed his theory toward verbal behavior as well. As Michael Karson explains,

“Skinner called his behaviorism “radical,” (i.e., thorough or complete) because he rejected then-behaviorism’s lack of interest in private events. Just as Galileo insisted that the laws of physics would apply in the sky just as much as on the ground, Skinner insisted that the laws of psychology would apply just as much to the psychologist’s inner life as to the rat’s observable life.

“Consciousness has nothing to do with the so-called and now-solved philosophical problem of mind-body duality, or in current terms, how the physical brain can give rise to immaterial thought. The answer to this pseudo-problem is that even though thought seems to be immaterial, it is not. Thought is no more immaterial than sound, light, or odor. Even educated people used to believe, a long time ago, that these things were immaterial, but now we know that sound requires a material medium to transmit waves, light is made up of photons, and odor consists of molecules. Thus, hearing, seeing, and smelling are not immaterial activities, and there is nothing in so-called consciousness besides hearing, seeing, and smelling (and tasting and feeling). Once you learn how to see and hear things that are there, you can also see and hear things that are not there, just as you can kick a ball that is not there once you have learned to kick a ball that is there. Engaging in the behavior of seeing and hearing things that are not there is called imagination. Its survival value is obvious, since it allows trial and error learning in the safe space of imagination. There is nothing in so-called consciousness that is not some version of the five senses operating on their own. Once you have learned to hear words spoken in a way that makes sense, you can have thoughts; thinking is hearing yourself make language; it is verbal behavior and nothing more. It’s not private speech, as once was believed; thinking is private hearing.”

It’s amazing how much this resonates with Jaynes’ bicameral theory. This maybe shouldn’t be surprising. After all, Jaynes was trained in behaviorism and early on did animal research. He was mentored by the behaviorist Frank A. Beach and was friends with Edwin Boring, who wrote a book about consciousness in relation to behaviorism. Reading about Skinner’s ideas about verbal behavior, I was reminded of Jaynes’ view of authorization as it relates to linguistic commands and how they become internalized to form an interiorized mind-space (i.e., Jaynesian consciousness).

I’m not the only person to think along these lines. On Reddit, someone wrote: “It is possible that before there were verbal communities that reinforced the basic verbal operants in full, people didn’t have complete “thinking” and really ran on operant auto-pilot since they didn’t have a full covert verbal repertoire and internal reinforcement/shaping process for verbal responses covert or overt, but this would be aeons before 2-3 kya. Wonder if Jaynes ever encountered Skinner’s “Verbal Behavior”…” Jaynes only references Skinner once in his book on bicameralism and consciousness. But he discusses behaviorism in general to some extent.

In the introduction, he describes behaviorism in this way: “From the outside, this revolt against consciousness seemed to storm the ancient citadels of human thought and set its arrogant banners up in one university after another. But having once been a part of its major school, I confess it was not really what it seemed. Off the printed page, behaviorism was only a refusal to talk about consciousness. Nobody really believed he was not conscious. And there was a very real hypocrisy abroad, as those interested in its problems were forcibly excluded from academic psychology, as text after text tried to smother the unwanted problem from student view. In essence, behaviorism was a method, not the theory that it tried to be. And as a method, it exorcised old ghosts. It gave psychology a thorough house cleaning. And now the closets have been swept out and the cupboards washed and aired, and we are ready to examine the problem again.” As dissatisfying as animal research was for Jaynes, it nonetheless set the stage for deeper questioning by way of a broader approach. It made possible new understanding.

Like Skinner, he wanted to take the next step, shifting from behavior to experience. Even their strategies to accomplish this appear to have been similar. Sensory experience itself becomes internalized, according to both of their theories. For Jaynes, perception of external space becomes the metaphorical model for a sense of internal space. When Karson says of Skinner’s view that “thinking is hearing yourself make language,” that seems close to Jaynes’ discussion of hearing voices as it develops into an ‘I’ and a ‘me’, the sense of identity split into subject and object which he asserted was required for one to hear one’s own thoughts.

I don’t know Skinner’s thinking in detail or how it changed over time. He too pushed beyond the bounds of behavioral research. It’s not clear that Jaynes ever acknowledged this commonality. In his 1990 afterword to his book, Jaynes makes his one mention of Skinner without pointing out Skinner’s work on verbal behavior:

“This conclusion is incorrect. Self-awareness usually means the consciousness of our own persona over time, a sense of who we are, our hopes and fears, as we daydream about ourselves in relation to others. We do not see our conscious selves in mirrors, even though that image may become the emblem of the self in many cases. The chimpanzees in this experiment and the two-year old child learned a point-to-point relation between a mirror image and the body, wonderful as that is. Rubbing a spot noticed in the mirror is not essentially different from rubbing a spot noticed on the body without a mirror. The animal is not shown to be imagining himself anywhere else, or thinking of his life over time, or introspecting in any sense — all signs of a conscious life.

“This less interesting, more primitive interpretation was made even clearer by an ingenious experiment done in Skinner’s laboratory (Epstein, 1981). Essentially the same paradigm was followed with pigeons, except that it required a series of specific trainings with the mirror, whereas the chimpanzee or child in the earlier experiments was, of course, self-trained. But after about fifteen hours of such training when the contingencies were carefully controlled, it was found that a pigeon also could use a mirror to locate a blue spot on its body which it could not see directly, though it had never been explicitly trained to do so. I do not think that a pigeon because it can be so trained has a self-concept.”

Jaynes was making the simple, if oft overlooked, point that perception of body is not the same thing as consciousness of mind. A behavioral response to one’s own body isn’t fundamentally different than a behavioral response to anything else. Behavioral responses are found in every species. This isn’t helpful in exploring consciousness itself. Skinner too wanted to get beyond this level of basic behavioral research, so it seems. Interestingly, without any mention of Skinner, Jaynes does use the exact phrasing of Skinner in speaking about the unconscious learning of ‘verbal behavior’ (Book One, Chapter 1):

“Another simple experiment can demonstrate this. Ask someone to sit opposite you and to say words, as many words as he can think of, pausing two or three seconds after each of them for you to write them down. If after every plural noun (or adjective, or abstract word, or whatever you choose) you say “good” or “right” as you write it down, or simply “mmm-hmm” or smile, or repeat the plural word pleasantly, the frequency of plural nouns (or whatever) will increase significantly as he goes on saying words. The important thing here is that the subject is not aware that he is learning anything at all. [13] He is not conscious that he is trying to find a way to make you increase your encouraging remarks, or even of his solution to that problem. Every day, in all our conversations, we are constantly training and being trained by each other in this manner, and yet we are never conscious of it.”

This is just a passing comment, one example among many, and he states that “Such unconscious learning is not confined to verbal behavior.” He doesn’t further explore language in this immediate section or repeat the phrase ‘verbal behavior’ in any other section, although the notion of verbal behavior is central to the entire book. But a decade after the original publication of his book, Jaynes wrote a paper in which he does talk about Skinner’s ideas about language:

“One needs language for consciousness. We think consciousness is learned by children between two and a half and five or six years in what we can call the verbal surround, or the verbal community as B.F Skinner calls it. It is an aspect of learning to speak. Mental words are out there as part of the culture and part of the family. A child fits himself into these words and uses them even before he knows the meaning of them. A mother is constantly instilling the seeds of consciousness in a two- and three-year-old, telling the child to stop and think, asking him “What shall we do today?” or “Do you remember when we did such and such or were somewhere?” And all this while metaphor and analogy are hard at work. There are many different ways that different children come to this, but indeed I would say that children without some kind of language are not conscious.”
(Jaynes, J. 1986. “Consciousness and the Voices of the Mind.” Canadian Psychology, 27, 128– 148.)

I don’t have access to that paper. That quote comes from an article by John E. Limber: “Language and consciousness: Jaynes’s “Preposterous idea” reconsidered.” It is found in Reflections on the Dawn of Consciousness edited by Marcel Kuijsten (pp. 169-202).

Anyway, the point Jaynes makes is that language is required for consciousness as an inner sense of self because language is required to hear ourselves think. So verbal behavior is a necessary, if not sufficient, condition for the emergence of consciousness as we know it. As long as verbal behavior remains an external event, conscious experience won’t follow. Humans have to learn to hear themselves as they hear others, to split themselves into a speaker and a listener.

This relates to what makes possible the differentiation between hearing a voice spoken by someone in the external world and hearing a voice as a memory of someone in one’s internal mind-space. Without this distinction, imagination isn’t possible, for anything imagined would become a hallucination in which internal and external hearing are conflated, or rather never separated. Jaynes proposes this is why ancient texts regularly describe people as hearing the voices of deities and deified kings, spirits and ancestors. The bicameral person, according to the theory, hears their own voice without being conscious that it is their own thought.

All of that emerges from those early studies of animal behavior. Behaviorism plays a key role simply in placing the emphasis on behavior. From there, one can come to the insight that consciousness is a neurocognitive behavior modeled on physical and verbal behavior. The self is a metaphor built on embodied experience in the world. This relates to many similar views, such as the idea that humans learn a theory of mind about themselves by first developing a theory of mind in perceiving others. It goes along with attention schema theory and the attribution of consciousness to others. And some have pointed out what is called the ‘double subject fallacy,’ a hidden form of dualism that infects neuroscience. However it is described, it all gets at the same issue.

It all comes down to our being both social animals and inhabitants of the world. Human development begins with a focus outward, culture and language determining what kind of identity forms. How we learn to behave is who we become.

Vestiges of an Earlier Mentality: Different Psychologies

“The Self as Interiorized Social Relations: Applying a Jaynesian Approach to Problems of Agency and Volition”
By Brian J. McVeigh

(II) Vestiges of an Earlier Mentality: Different Psychologies

If what Jaynes has proposed about bicamerality is correct, we should expect to find remnants of this extinct mentality. In any case, an examination of the ethnopsychologies of other societies should at least challenge our assumptions. What kinds of metaphors do they employ to discuss the self? Where is agency localized? To what extent do they even “psychologize” the individual, positing an “interior space” within the person? If agency is a socio-historical construction (rather than a bio-evolutionary product), we should expect some cultural variability in how it is conceived. At the same time, we should also expect certain parameters within which different theories of agency are built.

Ethnographies are filled with descriptions of very different psychologies. For example, about the Maori, Jean Smith writes that

it would appear that generally it was not the “self” which encompassed the experience, but experience which encompassed the “self” … Because the “self” was not in control of experience, a man’s experience was not felt to be integral to him; it happened in him but was not of him. A Maori individual was not so much the experiencer of his experience as the observer of it. 22

Furthermore, “bodily organs were endowed with independent volition.” 23 Renato Rosaldo states that the Ilongots of the Philippines rarely concern themselves with what we refer to as an “inner self” and see no major differences between public presentation and private feeling. 24

Perhaps the most intriguing picture of just how radically different mental concepts can be is found in anthropologist Maurice Leenhardt’s intriguing book Do Kamo, about the Canaque of New Caledonia, who are “unaware” of their own existence: the “psychic or psychological aspect of man’s actions are events in nature. The Canaque sees them as outside of himself, as externalized. He handles his existence similarly: he places it in an object — a yam, for instance — and through the yam he gains some knowledge of his existence, by identifying himself with it.” 25

Speaking of the Dinka, anthropologist Godfrey Lienhardt writes that “the man is the object acted upon,” and “we often find a reversal of European expressions which assume the human self, or mind, as subject in relation to what happens to it.” 26 Concerning the mind itself,

The Dinka have no conception which at all closely corresponds to our popular modern conception of the “mind,” as mediating and, as it were, storing up the experiences of the self. There is for them no such interior entity to appear, on reflection, to stand between the experiencing self at any given moment and what is or has been an exterior influence upon the self. So it seems that what we should call in some cases the memories of experiences, and regard therefore as in some way intrinsic and interior to the remembering person and modified in their effect upon him by that interiority, appear to the Dinka as exteriority acting upon him, as were the sources from which they derived. 27

The above mentioned ethnographic examples may be interpreted as merely colorful descriptions, as exotic and poetic folk psychologies. Or, we may take a more literal view, and entertain the idea that these ethnopsychological accounts are vestiges of a distant past when individuals possessed radically different mentalities. For example, if it is possible to be a person lacking interiority in which a self moves about making conscious decisions, then we must at least entertain the idea that entire civilizations existed whose members had a radically different mentality. The notion of a “person without a self” is admittedly controversial and open to misinterpretation. Here allow me to stress that I am not suggesting that in today’s world there are groups of people whose mentality is distinct from our own. However, I am suggesting that remnants of an earlier mentality are evident in extant ethnopsychologies, including our own. 28

* * *

Text from:

Reflections on the Dawn of Consciousness:
Julian Jaynes’s Bicameral Mind Theory Revisited
Edited by Marcel Kuijsten
Chapter 7, Kindle Locations 3604-3636

See also:

Survival and Persistence of Bicameralism
Piraha and Bicameralism

“Lack of the historical sense is the traditional defect in all philosophers.”

Human, All Too Human: A Book for Free Spirits
by Friedrich Wilhelm Nietzsche

The Traditional Error of Philosophers.—All philosophers make the common mistake of taking contemporary man as their starting point and of trying, through an analysis of him, to reach a conclusion. “Man” involuntarily presents himself to them as an aeterna veritas, as a passive element in every hurly-burly, as a fixed standard of things. Yet everything uttered by the philosopher on the subject of man is, in the last resort, nothing more than a piece of testimony concerning man during a very limited period of time. Lack of the historical sense is the traditional defect in all philosophers. Many innocently take man in his most childish state as fashioned through the influence of certain religious and even of certain political developments, as the permanent form under which man must be viewed. They will not learn that man has evolved, that the intellectual faculty itself is an evolution, whereas some philosophers make the whole cosmos out of this intellectual faculty. But everything essential in human evolution took place aeons ago, long before the four thousand years or so of which we know anything: during these man may not have changed very much. However, the philosopher ascribes “instinct” to contemporary man and assumes that this is one of the unalterable facts regarding man himself, and hence affords a clue to the understanding of the universe in general. The whole teleology is so planned that man during the last four thousand years shall be spoken of as a being existing from all eternity, and with reference to whom everything in the cosmos from its very inception is naturally ordered. Yet everything evolved: there are no eternal facts as there are no absolute truths. Accordingly, historical philosophising is henceforth indispensable, and with it honesty of judgment.

What Locke Lacked
by Louise Mabille

Locke is indeed a Colossus of modernity, but one whose twin projects of providing a concept of human understanding and political foundation undermine each other. The specificity of the experience of perception alone undermines the universality and uniformity necessary to create the subject required for a justifiable liberalism. Since mere physical perspective can generate so much difference, it is only to be expected that political differences would be even more glaring. However, no political order would ever come to pass without obliterating essential differences. The birth of liberalism was as violent as the Empire that would later be justified in its name, even if its political traces are not so obvious. To interpret is to see in a particular way, at the expense of all other possibilities of interpretation. Perspectives that do not fit are simply ignored, or as that other great resurrectionist of modernity, Freud, would concur, simply driven underground. We ourselves are the source of this interpretative injustice, or more correctly, our need for a world in which it is possible to live, is. To a certain extent, then, man is the measure of the world, but only his world. Man is thus a contingent measure and our measurements do not refer to an original, underlying reality. What we call reality is the result not only of our limited perspectives upon the world, but the interplay of those perspectives themselves. The liberal subject is thus a result of, and not a foundation for, the experience of reality. The subject is identified as origin of meaning only through a process of differentiation and reduction, a course through which the will is designated as a psychological property.

Locke takes the existence of the subject of free will – free to exercise political choice such as rising against a tyrant, choosing representatives, or deciding upon political direction – simply for granted. Furthermore, he seems to think that everyone should agree as to what the rules are according to which these events should happen. For him, the liberal subject underlying these choices is clearly fundamental and universal.

Locke’s philosophy of individualism posits the existence of a discrete and isolated individual, with private interests and rights, independent of his linguistic or socio-historical context. C. B. Macpherson identifies a distinctly possessive quality to Locke’s individualist ethic, notably in the way in which the individual is conceived as proprietor of his own personhood, possessing capacities such as self-reflection and free will. Freedom becomes associated with possession, which the Greeks would associate with slavery, and society conceived in terms of a collection of free and equal individuals who are related to each other through their means of achieving material success – which Nietzsche, too, would associate with slave morality. […]

There is a central tenet to John Locke’s thinking that, as conventional as it has become, remains a strange strategy. Like Thomas Hobbes, he justifies modern society by contrasting it with an original state of nature. For Hobbes, as we have seen, the state of nature is but a hypothesis, a conceptual tool in order to elucidate a point. For Locke, however, the state of nature is a very real historical event, although not a condition of a state of war. Man was social by nature, rational and free. Locke drew this inspiration from Richard Hooker’s Laws of Ecclesiastical Polity, notably from his idea that church government should be based upon human nature, and not the Bible, which, according to Hooker, told us nothing about human nature. The social contract is a means to escape from nature, friendlier though it be on the Lockean account. For Nietzsche, however, we have never made the escape: we are still holus-bolus in it: ‘being conscious is in no decisive sense the opposite of the instinctive – most of the philosopher’s conscious thinking is secretly directed and compelled into definite channels by his instincts. Behind all logic too, and its apparent autonomy there stand evaluations’ (BGE, 3). Locke makes a singular mistake in thinking the state of nature a distant event. In fact, Nietzsche tells us, we have never left it. We now only wield more sophisticated weapons, such as the guilty conscience […]

Truth originates when humans forget that they are ‘artistically creating subjects’ or products of law or stasis and begin to attach ‘invincible faith’ to their perceptions, thereby creating truth itself. For Nietzsche, the key to understanding the ethic of the concept, the ethic of representation, is conviction […]

Few convictions have proven to be as strong as the conviction of the existence of a fundamental subjectivity. For Nietzsche, it is an illusion, a bundle of drives loosely collected under the name of ‘subject’ — indeed, it is nothing but these drives, willing, and actions in themselves — and it cannot appear as anything else except through the seduction of language (and the fundamental errors of reason petrified in it), which understands and misunderstands all action as conditioned by something which causes actions, by a ‘Subject’ (GM I 13). Subjectivity is a form of linguistic reductionism, and when using language, ‘[w]e enter a realm of crude fetishism when we summon before consciousness the basic presuppositions of the metaphysics of language — in plain talk, the presuppositions of reason. Everywhere reason sees a doer and doing; it believes in will as the cause; it believes in the ego, in the ego as being, in the ego as substance, and it projects this faith in the ego-substance upon all things — only thereby does it first create the concept of “thing”’ (TI, ‘Reason in Philosophy’ 5). As Nietzsche also states in WP 484, the habit of adding a doer to a deed is a Cartesian leftover that begs more questions than it solves. It is indeed nothing more than an inference according to habit: ‘There is activity, every activity requires an agent, consequently –’ (BGE, 17). Locke himself found the continuous existence of the self problematic, but did not go as far as Hume’s dissolution of the self into a number of ‘bundles’. After all, even if identity shifts occurred behind the scenes, he required a subject with enough unity to be able to enter into the Social Contract. This subject had to be something more than merely an ‘eternal grammatical blunder’ (D, 120), and willing had to be understood as something simple. For Nietzsche, it is ‘above all complicated, something that is a unit only as a word, a word in which the popular prejudice lurks, which has defeated the always inadequate caution of philosophers’ (BGE, 19).

Nietzsche’s critique of past philosophers
by Michael Lacewing

Nietzsche is questioning the very foundations of philosophy. To accept his claims means being a new kind of philosopher, one whose ‘taste and inclination’, whose values, are quite different. Throughout his philosophy, Nietzsche is concerned with origins, both psychological and historical. Much of philosophy is usually thought of as an a priori investigation. But if Nietzsche can show, as he thinks he can, that philosophical theories and arguments have a specific historical basis, then they are not, in fact, a priori. What is known a priori should not change from one historical era to the next, nor should it depend on someone’s psychology. Plato’s aim, the aim that defines much of philosophy, is to be able to give complete definitions of ideas – ‘what is justice?’, ‘what is knowledge?’. For Plato, we understand an idea when we have direct knowledge of the Form, which is unchanging and has no history. If our ideas have a history, then the philosophical project of trying to give definitions of our concepts, rather than histories, is radically mistaken. For example, in §186, Nietzsche argues that philosophers have consulted their ‘intuitions’ to try to justify this or that moral principle. But they have only been aware of their own morality, of which their ‘justifications’ are in fact only expressions. Morality and moral intuitions have a history, and are not a priori. There is no one definition of justice or good, and the ‘intuitions’ that we use to defend this or that theory are themselves as historical, as contentious as the theories we give – so they offer no real support. The usual ways philosophers discuss morality misunderstand morality from the very outset. The real issues of understanding morality only emerge when we look at the relation between this particular morality and that. There is no world of unchanging ideas, no truths beyond the truths of the world we experience, nothing that stands outside or beyond nature and history.

GENEALOGY AND PHILOSOPHY

Nietzsche develops a new way of philosophizing, which he calls a ‘morphology and evolutionary theory’ (§23), and later calls ‘genealogy’. (‘Morphology’ means the study of the forms something, e.g. morality, can take; ‘genealogy’ means the historical line of descent traced from an ancestor.) He aims to locate the historical origin of philosophical and religious ideas and show how they have changed over time to the present day. His investigation brings together history, psychology, the interpretation of concepts, and a keen sense of what it is like to live with particular ideas and values. In order to best understand which of our ideas and values are particular to us, not a priori or universal, we need to look at real alternatives. In order to understand these alternatives, we need to understand the psychology of the people who lived with them. And so Nietzsche argues that traditional ways of doing philosophy fail – our intuitions are not a reliable guide to the ‘truth’, to the ‘real’ nature of this or that idea or value. And not just our intuitions, but the arguments, and style of arguing, that philosophers have used are unreliable. Philosophy needs to become, or be informed by, genealogy. A lack of any historical sense, says Nietzsche, is the ‘hereditary defect’ of all philosophers.

MOTIVATIONAL ANALYSIS

Having long kept a strict eye on the philosophers, and having looked between their lines, I say to myself… most of a philosopher’s conscious thinking is secretly guided and channelled into particular tracks by his instincts. Behind all logic, too, and its apparent tyranny of movement there are value judgements, or to speak more clearly, physiological demands for the preservation of a particular kind of life. (§3)

A person’s theoretical beliefs are best explained, Nietzsche thinks, by evaluative beliefs, particular interpretations of certain values, e.g. that goodness is this and the opposite of badness. These values are best explained as ‘physiological demands for the preservation of a particular kind of life’. Nietzsche holds that each person has a particular psychophysical constitution, formed by both heredity and culture. […] Different values, and different interpretations of these values, support different ways of life, and so people are instinctively drawn to particular values and ways of understanding them. On the basis of these interpretations of values, people come to hold particular philosophical views. §2 has given us an illustration of this: philosophers come to hold metaphysical beliefs about a transcendent world, the ‘true’ and ‘good’ world, because they cannot believe that truth and goodness could originate in the world of normal experience, which is full of illusion, error, and selfishness. Therefore, there ‘must’ be a pure, spiritual world and a spiritual part of human beings, which is the origin of truth and goodness.

PHILOSOPHY AND VALUES

But ‘must’ there be a transcendent world? Or is this just what the philosopher wants to be true? Every great philosophy, claims Nietzsche, is ‘the personal confession of its author’ (§6). The moral aims of a philosophy are the ‘seed’ from which the whole theory grows. Philosophers pretend that their opinions have been reached by ‘cold, pure, divinely unhampered dialectic’ when in fact they are seeking reasons to support their pre-existing commitment to ‘a rarefied and abstract version of their heart’s desire’ (§5), viz. that there is a transcendent world, and that good and bad, true and false are opposites. Consider: many philosophical systems are of doubtful coherence, e.g. how could there be Forms, and if there were, how could we know about them? Or again, in §11, Nietzsche asks ‘how are synthetic a priori judgments possible?’. The term ‘synthetic a priori’ was invented by Kant. According to Nietzsche, Kant says that such judgments are possible because we have a ‘faculty’ that makes them possible. What kind of answer is this? Furthermore, no philosopher has ever been proved right (§25). Given the great difficulty of believing either in a transcendent world or in the human cognitive abilities necessary to know about it, we should look elsewhere for an explanation of why someone would hold those beliefs. We can find an answer in their values. There is an interesting structural similarity between Nietzsche’s argument and Hume’s. Both argue that there is no rational explanation of many of our beliefs, and so they try to find the source of these beliefs outside or beyond reason. Hume appeals to imagination and the principle of ‘Custom’. Nietzsche appeals instead to motivation and ‘the bewitchment of language’ (see below). So Nietzsche argues that philosophy is not driven by a pure ‘will to truth’ (§1), to discover the truth whatever it may be. Instead, a philosophy interprets the world in terms of the philosopher’s values.
For example, the Stoics argued that we should live ‘according to nature’ (§9). But they interpret nature by their own values, as an embodiment of rationality. They do not see the senselessness, the purposelessness, the indifference of nature to our lives […]

THE BEWITCHMENT OF LANGUAGE

We said above that Nietzsche criticizes past philosophers on two grounds. We have looked at the role of motivation; the second ground is the seduction of grammar. Nietzsche is concerned with the subject-predicate structure of language, and with it the notion of a ‘substance’ (picked out by the grammatical ‘subject’) to which we attribute ‘properties’ (identified by the predicate). This structure leads us into a mistaken metaphysics of ‘substances’. In particular, Nietzsche is concerned with the grammar of ‘I’. We tend to think that ‘I’ refers to some thing, e.g. the soul. Descartes makes this mistake in his cogito – ‘I think’, he argues, refers to a substance engaged in an activity. But Nietzsche repeats the old objection that this is an illegitimate inference (§16) that rests on many unproven assumptions – that I am thinking, that some thing is thinking, that thinking is an activity (the result of a cause, viz. I), that an ‘I’ exists, that we know what it is to think. So the simple sentence ‘I think’ is misleading. In fact, ‘a thought comes when “it” wants to, and not when “I” want it to’ (§17). Even ‘there is thinking’ isn’t right: ‘even this “there” contains an interpretation of the process and is not part of the process itself. People are concluding here according to grammatical habit’. But our language does not allow us just to say ‘thinking’ – this is not a whole sentence. We have to say ‘there is thinking’; so grammar constrains our understanding. Furthermore, Kant shows that rather than the ‘I’ being the basis of thinking, thinking is the basis out of which the appearance of an ‘I’ is created (§54). Once we recognise that there is no soul in a traditional sense, no ‘substance’, something constant through change, something unitary and immortal, ‘the way is clear for new and refined versions of the hypothesis about the soul’ (§12), that it is mortal, that it is multiplicity rather than identical over time, even that it is a social construct and a society of drives.

Nietzsche makes a similar argument about the will (§19). Because we have this one word ‘will’, we think that what it refers to must also be one thing. But the act of willing is highly complicated. First, there is an emotion of command, for willing is commanding oneself to do something, and with it a feeling of superiority over that which obeys. Second, there is the expectation that the mere commanding on its own is enough for the action to follow, which increases our sense of power. Third, there is obedience to the command, from which we also derive pleasure. But we ignore the feeling of compulsion, identifying the ‘I’ with the commanding ‘will’.

Nietzsche links the seduction of language to the issue of motivation in §20, arguing that ‘the spell of certain grammatical functions is the spell of physiological value judgements’. So even the grammatical structure of language originates in our instincts, different grammars contributing to the creation of favourable conditions for different types of life. So what values are served by these notions of the ‘I’ and the ‘will’? The ‘I’ relates to the idea that we have a soul, which participates in a transcendent world. It functions in support of the ascetic ideal. The ‘will’, and in particular our inherited conception of ‘free will’, serves a particular moral aim.

Hume and Nietzsche: Moral Psychology (short essay)
by epictetus_rex

1. Metaphilosophical Motivation

Both Hume and Nietzsche advocate a kind of naturalism. This is a weak naturalism, for it does not seek to give science authority over philosophical inquiry, nor does it commit itself to a specific ontological or metaphysical picture. Rather, it seeks to (a) place the human mind firmly in the realm of nature, as subject to the same mechanisms that drive all other natural events, and (b) investigate the world in a way that is roughly congruent with our best current conception(s) of nature […]

Furthermore, the motivation for this general position is common to both thinkers. Hume and Nietzsche saw old rationalist/dualist philosophies as both absurd and harmful: such systems were committed to extravagant and contradictory metaphysical claims which hindered philosophical progress. Furthermore, they alienated humanity from its position in nature—an effect Hume referred to as “anxiety”—and underpinned religious or “monkish” practices which greatly accentuated this alienation. Both Nietzsche and Hume believe quite strongly that coming to see ourselves as we really are will banish these bugbears from human life.

To this end, both thinkers ask us to engage in honest, realistic psychology. “Psychology is once more the path to the fundamental problems,” writes Nietzsche (BGE 23), and Hume agrees:

“the only expedient, from which we can hope for success in our philosophical researches, is to leave the tedious lingering method, which we have hitherto followed, and instead of taking now and then a castle or village on the frontier, to march up directly to the capital or center of these sciences, to human nature itself.” (T Intro)

2. Selfhood

Hume and Nietzsche militate against the notion of a unified self, both at-a-time and, a fortiori, over time.

Hume’s quest for a Newtonian “science of the mind” led him to classify all mental events as either impressions (sensory) or ideas (copies of sensory impressions, distinguished from the former by diminished vivacity or force). The self, or ego, as he says, is just “a kind of theatre, where several perceptions successively make their appearance; pass, re-pass, glide away, and mingle in an infinite variety of postures and situations. There is properly no simplicity in it at one time, nor identity in different; whatever natural propension we may have to imagine that simplicity and identity.” (Treatise 4.6) […]

For Nietzsche, the experience of willing lies in a certain kind of pleasure, a feeling of self-mastery and increase of power that comes with all success. This experience leads us to mistakenly posit a simple, unitary cause, the ego. (BGE 19)

The similarities here are manifest: our minds do not have any intrinsic unity to which the term “self” can properly refer; rather, they are collections or “bundles” of events (drives) which may align with or struggle against one another in a myriad of ways. Both thinkers use political models to describe what a person really is. Hume tells us we should “more properly compare the soul to a republic or commonwealth, in which the several members [impressions and ideas] are united by ties of government and subordination, and give rise to persons, who propagate the same republic in the incessant change of its parts” (T 261).

3. Action and The Will

Nietzsche and Hume attack the old platonic conception of a “free will” in lock-step with one another. This picture, roughly, involves a rational intellect which sits above the appetites and ultimately chooses which appetites will express themselves in action. This will is usually not considered to be part of the natural/empirical order, and it is this consequence which irks both Hume and Nietzsche, who offer two seamlessly interchangeable refutations […]

Since we are nothing above and beyond events, there is nothing for this “free will” to be: it is a causa sui, “a sort of rape and perversion of logic… the extravagant pride of man has managed to entangle itself profoundly and frightfully with just this nonsense” (BGE 21).

When they discover an erroneous or empty concept such as “free will” or “the self”, Nietzsche and Hume engage in a sort of error-theorizing which is structurally the same. Peter Kail (2006) has called this a “projective explanation”, whereby belief in those concepts is “explained by appeal to independently intelligible features of psychology”, rather than by reference to the way the world really is.

The Philosophy of Mind
INSTRUCTOR: Larry Hauser
Chapter 7: Egos, bundles, and multiple selves

  • Who dat?  “I”
    • Locke: “something, I know not what”
    • Hume: the no-self view … “bundle theory”
    • Kant’s transcendental ego: a formal (nonempirical) condition of thought that the “I” must accompany every perception.
      • Intentional mental state: I think that snow is white.
        • to think: a relation between
          • a subject = “I”
          • a propositional content thought =  snow is white
      • Sensations: I feel the coldness of the snow.
        • to feel: a relation between
          • a subject = “I”
          • a quale = the cold-feeling
    • Friedrich Nietzsche
      • A thought comes when “it” will and not when “I” will. Thus it is a falsification of the evidence to say that the subject “I” conditions the predicate “think.”
      • It is thought, to be sure, but that this “it” should be that old famous “I” is, to put it mildly, only a supposition, an assertion. Above all it is not an “immediate certainty.” … Our conclusion is here formulated out of our grammatical custom: “Thinking is an activity; every activity presumes something which is active, hence ….” 
    • Lichtenberg: “it’s thinking” a la “it’s raining”
      • a mere grammatical requirement
      • no proof of a thinking self

[…]

  • Ego vs. bundle theories (Derek Parfit (1987))
    • Ego: “there really is some kind of continuous self that is the subject of my experiences, that makes decisions, and so on.” (95)
      • Religions: Christianity, Islam, Hinduism
      • Philosophers: Descartes, Locke, Kant & many others (the majority view)
    • Bundle: “there is no underlying continuous and unitary self.” (95)
      • Religion: Buddhism
      • Philosophers: Hume, Nietzsche, Lichtenberg, Wittgenstein, Kripke(?), Parfit, Dennett {a stellar minority}
  • Hume v. Reid
    • David Hume: For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure.  I never can catch myself at any time without a perception, and never can observe anything but the perception.  (Hume 1739, Treatise I, VI, iv)
    • Thomas Reid: I am not thought, I am not action, I am not feeling: I am something which thinks and acts and feels. (1785)