Voice and Perspective

“No man should [refer to himself in the third person] unless he is the King of England — or has a tapeworm.”
~ Mark Twain

“Love him or hate him, Trump is a man who is certain about what he wants and sets out to get it, no holds barred. Women find his power almost as much of a turn-on as his money.”
~ Donald Trump

The self is a confusing matter. As always, the question is who is speaking and who is listening. Clues can come from the language that is used. And the language we use shapes human experience, as studied in linguistic relativity. Speaking in the first person may be a relatively recent innovation of human society and psyche:

“An unmistakable individual voice, using the first person singular “I,” first appeared in the works of lyric poets. Archilochus, who lived in the first half of the seventh century B.C., sang his own unhappy love rather than assume the role of a spectator describing the frustrations of love in others. . . [H]e had in mind an immaterial sort of soul, with which Homer was not acquainted” (Yi-Fu Tuan, Segmented Worlds and Self, p. 152).

The autobiographical self requires the self-authorization of Jaynesian narrative consciousness. The emergence of the egoic self is the fall into historical time, an issue too complex for discussion here (see Julian Jaynes’ classic work or the diverse Jaynesian scholarship it inspired, or look at some of my previous posts on the topic).

Consider the mirror effect. When hunter-gatherers encounter a mirror for the first time, there is what has been called “the tribal terror of self-recognition” (Edmund Carpenter, as quoted by Philippe Rochat in Others in Mind, p. 31). “After a frightening reaction,” Carpenter wrote about the Biamis of Papua New Guinea, “they become paralyzed, covering their mouths and hiding their heads — they stood transfixed looking at their own images, only their stomach muscles betraying great tension.”

Research has shown that heavy use of the first person is associated with depression, anxiety, and other distressing emotions. Oddly, this full immersion into subjectivity can lead to depressive depersonalization and depressive realism — the individual sometimes passes through the self and into some other state. In that other state, I’ve noticed, silence befalls the mind: the ‘I’ is lost and the inner dialogue goes quiet. One sees the world as if coldly detached, as if outside of it all.

Third person is stranger and has a much more ancient pedigree. In the modern mind, third person is often taken as an effect of narcissistic inflation of the ego, as seen with celebrities speaking of themselves in terms of their media identities. But in other countries and at other times, it has been an indication of religious humility or a spiritual shifting of perspective (possibly expressing the belief that only God can speak of Himself as ‘I’).

There is also the Batman effect. Children act more capably and with greater perseverance when speaking of themselves in the third person, specifically as a superhero character. As with religious practice, this serves the purpose of distancing from emotion. Yet a sense of self can simultaneously be strengthened when the individual becomes identified with a character. This is similar to celebrities who turn their social identities into something akin to mythological figures. Or just as a child can be encouraged to invoke a favorite superhero to stand in for an underdeveloped ego-self, a religious true believer can speak of God or the Holy Spirit working through them. There is immense power in this.

This might point to the Jaynesian bicameral mind. When an Australian Aborigine ritually sings a Songline, he is invoking a god-spirit-personality. That third person of the mythological story shifts the Aboriginal experience of self and reality. The Aborigine has as many selves as he has Songlines, each a self-contained worldview and way of being. This could be a more natural expression of human nature… or at least an easier and less taxing mode of being (Hunger for Connection). Jaynes noted that schizophrenics with their weakened and loosened egoic boundaries have seemingly inexhaustible energy.

He suspected this might explain why archaic humans could accomplish seemingly impossible tasks such as building pyramids, something moderns could only manage with our largest and most powerful cranes. Yet the early Egyptians did it with a small, impoverished, and malnourished population that lacked even the basic infrastructure of roads and bridges. Similarly, this might explain how many tribal people can dance for days on end with little rest and no food. And maybe it also explains how armies can collectively march for days on end in a way no individual could (Music and Dance on the Mind).

Upholding rigid egoic boundaries is tiresome work. This might be why, when individuals reach exhaustion under stress (mourning a death, getting lost in the wilderness, etc.), they can experience what John Geiger called the third man factor, the appearance of another self, often with its own separate voice. Apparently, when all else fails, this is the state of mind we fall back on, and it’s a common experience at that. Furthermore, what Jaynes describes as a negatory experience can lead to negatory possession, the re-emergence of a bicameral-like mind in which a third-person identity becomes a fully expressed personality of its own, something that can happen through trauma-induced dissociation and splitting:

“Like schizophrenia, negatory possession usually begins with some kind of an hallucination. 11 It is often a castigating ‘voice’ of a ‘demon’ or other being which is ‘heard’ after a considerable stressful period. But then, unlike schizophrenia, probably because of the strong collective cognitive imperative of a particular group or religion, the voice develops into a secondary system of personality, the subject then losing control and periodically entering into trance states in which consciousness is lost, and the ‘demon’ side of the personality takes over.”

Jaynes noted that those who are abused in childhood are more easily hypnotized. Their egoic boundaries never fully develop, or else large gaps are left in this self-construction, gaps through which other voices can slip in. This relates to what has variously been referred to as the porous self, the thin boundary type, fantasy proneness, etc. Compared to those who have never experienced trauma, I bet such people would find it easier to speak in the third person and, when doing so, would show a greater shift in personality and behavior.

As for first person subjectivity, it has its own peculiarities. I think of the association of addiction and individuality, as explored by Johann Hari and as elaborated in my own writings (Individualism and Isolation; To Put the Rat Back in the Rat Park; & The Agricultural Mind). As the ego is a tiresome project that depletes one’s reserves, maybe it is the energy drain that causes the depression, irritability, and such. A person with such a guarded sense of self would be resistant to speaking in the third person, finding it hard to escape the trap of ego they’ve so carefully constructed. So many of us have fallen under its sway and can’t imagine anything else (The Spell of Inner Speech). That is probably why it so often requires trauma to break open our psychological defenses.

Besides trauma, many moderns have sought to escape the egoic prison through religious practices. Ancient methods include fasting, meditation, and prayer — these are common across the world. Fasting, by the way, fundamentally alters the functioning of the body and mind through ketosis (also the result of a very low-carb diet), something I’ve speculated may have been a supporting factor for the bicameral mind and related to the much earlier cultural preference for psychedelics over addictive stimulants, an entirely different discussion (“Yes, tea banished the fairies.”; & Autism and the Upper Crust). The simplest method of all is using third person language until it becomes a new habit of mind, something that might require a long period of practice to feel natural.

The modern mind has always been under stress. That is because it is the source of that stress. It’s not a stable and sustainable way of being in the world (The Crisis of Identity). Rather, it’s a transitional state, and all of modernity has been a centuries-long stage of transformation into something else. There is an impulse hidden within, if we could only trigger the release of the locking mechanism (Lock Without a Key). The language of perspectives, as Scott Preston explores (The Three Gems and The Cross of Reality), tells us something important about our predicament. Words such as ‘I’, ‘you’, etc. aren’t merely words. In language, we discover our humanity as we come to know the other.

* * *

Are Very Young Children Stuck in the Perpetual Present?
by Jesse Bering

Interestingly, however, the authors found that the three-year-olds were significantly more likely to refer to themselves in the third person (using their first names and saying that the sticker is on “his” or “her” head) than were the four-year-olds, who used first-person pronouns (“me” and “my head”) almost exclusively. […]

Povinelli has pointed out the relevancy of these findings to the phenomenon of “infantile amnesia,” which tidily sums up the curious case of most people being unable to recall events from their first three years of life. (I spent my first three years in New Jersey, but for all I know I could have spontaneously appeared as a four-year-old in my parent’s bedroom in Virginia, which is where I have my first memory.) Although the precise neurocognitive mechanisms underlying infantile amnesia are still not very well-understood, escaping such a state of the perpetual present would indeed seemingly require a sense of the temporally enduring, autobiographical self.

5 Reasons Shaq and Other Athletes Refer to Themselves in the Third Person
by Amelia Ahlgren

“Illeism,” or the act of referring to oneself in the third person, is an epidemic in the sports world.

Unfortunately for humanity, the cure is still unknown.

But if we’re forced to listen to these guys drone on about an embodiment of themselves, we might as well guess why they do it.

Here are five reasons some athletes are allergic to using the word “I.”

  1. Lag in Linguistic Development (Immaturity)
  2. Reflection of Egomania
  3. Amp-Up Technique
  4. Pure Intimidation
  5. Goofiness

Rene Thinks, Therefore He Is. You?
by Richard Sandomir

Some strange, grammatical, mind-body affliction is making some well-known folks in sports and politics refer to themselves in the third person. It is as if they have stepped outside their bodies. Is this detachment? Modesty? Schizophrenia? If this loopy verbal quirk were simple egomania, then Louis XIV might have said, “L’etat, c’est Lou.” He did not. And if it were merely a sign of one’s overweening power, then Queen Victoria would not have invented the royal we (“we are not amused”) but rather the royal she. She did not.

Lately, though, some third persons have been talking in a kind of royal he:

* Accepting the New York Jets’ $25 million salary and bonus offer, the quarterback Neil O’Donnell said of his former team, “The Pittsburgh Steelers had plenty of opportunities to sign Neil O’Donnell.”

* As he pushed to be traded from the Los Angeles Kings, Wayne Gretzky said he did not want to wait for the Kings to rebuild “because that doesn’t do a whole lot of good for Wayne Gretzky.”

* After his humiliating loss in the New Hampshire primary, Senator Bob Dole proclaimed: “You’re going to see the real Bob Dole out there from now on.”

These people give you the creepy sense that they’re not talking to you but to themselves. To a first, second or third person’s ear, there’s just something missing. What if, instead of “I am what I am,” we had “Popeye is what Popeye is”?

Vocative self-address, from ancient Greece to Donald Trump
by Ben Zimmer

Earlier this week on Twitter, Donald Trump took credit for a surge in the Consumer Confidence Index, and with characteristic humility, concluded the tweet with “Thanks Donald!”

The “Thanks Donald!” capper led many to muse about whether Trump was referring to himself in the second person, the third person, or perhaps both.

Since English only marks grammatical person on pronouns, it’s not surprising that there is confusion over what is happening with the proper name “Donald” in “Thanks, Donald!” We associate proper names with third-person reference (“Donald Trump is the president-elect”), but a name can also be used as a vocative expression associated with second-person address (“Pleased to meet you, Donald Trump”). For more on how proper names and noun phrases in general get used as vocatives in English, see two conference papers from Arnold Zwicky: “Hey, Whatsyourname!” (CLS 10, 1974) and “Isolated NPs” (Semantics Fest 5, 2004).

The use of one’s own name in third-person reference is called illeism. Arnold Zwicky’s 2007 Language Log post, “Illeism and its relatives” rounds up many examples, including from politicians like Bob Dole, a notorious illeist. But what Trump is doing in tweeting “Thanks, Donald!” isn’t exactly illeism, since the vocative construction implies second-person address rather than third-person reference. We can call this a form of vocative self-address, wherein Trump treats himself as an addressee and uses his own name as a vocative to create something of an imagined interior dialogue.

Give me that Prime Time religion
by Mark Schone

Around the time football players realized end zones were for dancing, they also decided that the pronouns “I” and “me,” which they used an awful lot, had worn out. As if to endorse the view that they were commodities, cartoons or royalty — or just immune to introspection — athletes began to refer to themselves in the third person.

It makes sense, therefore, that when the most marketed personality in the NFL gets religion, he announces it in the weirdly detached grammar of football-speak. “Deion Sanders is covered by the blood of Jesus now,” writes Deion Sanders. “He loves the Lord with all his heart.” And in Deion’s new autobiography, the Lord loves Deion right back, though the salvation he offers third-person types seems different from what mere mortals can expect.

Referring to yourself in the third person
by Tetsuo

It does seem to be a stylistic thing in formal Chinese. I’ve come across a couple of articles about artists by the artist in question where they’ve referred to themselves in the third person throughout. And quite a number of politicians do the same, I’ve been told.

Illeism
from Wikipedia

Illeism in everyday speech can have a variety of intentions depending on context. One common usage is to impart humility, a common practice in feudal societies and other societies where honorifics are important to observe (“Your servant awaits your orders”), as well as in master–slave relationships (“This slave needs to be punished”). Recruits in the military, mostly United States Marine Corps recruits, are also often made to refer to themselves in the third person, such as “the recruit,” in order to reduce the sense of individuality and enforce the idea of the group being more important than the self.[citation needed] The use of illeism in this context imparts a sense of lack of self, implying a diminished importance of the speaker in relation to the addressee or to a larger whole.

Conversely, in different contexts, illeism can be used to reinforce self-promotion, as used to sometimes comic effect by Bob Dole throughout his political career.[2] This was particularly made notable during the United States presidential election, 1996 and lampooned broadly in popular media for years afterwards.

Deepanjana Pal of Firstpost noted that speaking in the third person “is a classic technique used by generations of Bollywood scriptwriters to establish a character’s aristocracy, power and gravitas.”[3] Conversely, third person self referral can be associated with self-irony and not taking oneself too seriously (since the excessive use of pronoun “I” is often seen as a sign of narcissism and egocentrism[4]), as well as with eccentricity in general.

In certain Eastern religions, like Hinduism or Buddhism, this is sometimes seen as a sign of enlightenment, since by doing so, an individual detaches his eternal self (atman) from the body related one (maya). Known illeists of that sort include Swami Ramdas,[5] Ma Yoga Laxmi,[6] Anandamayi Ma,[7] and Mata Amritanandamayi.[8] Jnana yoga actually encourages its practitioners to refer to themselves in the third person.[9]

Young children in Japan commonly refer to themselves by their own name (a habit probably picked up from their elders, who would normally refer to them by name). This is due to the normal Japanese way of speaking, where referring to another in the third person is considered more polite than using the Japanese words for “you”, like Omae. More explanation is given in Japanese pronouns, though as the children grow older they normally switch over to using first person references. Japanese idols also may refer to themselves in the third person so as to give off the feeling of childlike cuteness.

Four Paths to the Goal
from Sheber Hinduism

Jnana yoga is a concise practice made for intellectual people. It is the quickest path to the top but it is the steepest. The key to jnana yoga is to contemplate the inner self and find who our self is. Our self is Atman and by finding this we have found Brahman. Thinking in third person helps move us along the path because it helps us consider who we are from an objective point of view. As stated in the Upanishads, “In truth, who knows Brahman becomes Brahman.” (Novak 17).

Non-Reactivity: The Supreme Practice of Everyday Life
by Martin Schmidt

Respond with non-reactive awareness: consider yourself a third-person observer who watches your own emotional responses arise and then dissipate. Don’t judge, don’t try to change yourself; just observe! In time this practice will begin to cultivate a third-person perspective inside yourself that sometimes is called the Inner Witness.[4]

Frequent ‘I-Talk’ may signal proneness to emotional distress
from Science Daily

Researchers at the University of Arizona found in a 2015 study that frequent use of first-person singular pronouns — I, me and my — is not, in fact, an indicator of narcissism.

Instead, this so-called “I-talk” may signal that someone is prone to emotional distress, according to a new, follow-up UA study forthcoming in the Journal of Personality and Social Psychology.

Research at other institutions has suggested that I-talk, though not an indicator of narcissism, may be a marker for depression. While the new study confirms that link, UA researchers found an even greater connection between high levels of I-talk and a psychological disposition of negative emotionality in general.

Negative emotionality refers to a tendency to easily become upset or emotionally distressed, whether that means experiencing depression, anxiety, worry, tension, anger or other negative emotions, said Allison Tackman, a research scientist in the UA Department of Psychology and lead author of the new study.

Tackman and her co-authors found that when people talk a lot about themselves, it could point to depression, but it could just as easily indicate that they are prone to anxiety or any number of other negative emotions. Therefore, I-talk shouldn’t be considered a marker for depression alone.

Talking to yourself in the third person can help you control emotions
from Science Daily

The simple act of silently talking to yourself in the third person during stressful times may help you control emotions without any additional mental effort than what you would use for first-person self-talk — the way people normally talk to themselves.

A first-of-its-kind study led by psychology researchers at Michigan State University and the University of Michigan indicates that such third-person self-talk may constitute a relatively effortless form of self-control. The findings are published online in Scientific Reports, a Nature journal.

Say a man named John is upset about recently being dumped. By simply reflecting on his feelings in the third person (“Why is John upset?”), John is less emotionally reactive than when he addresses himself in the first person (“Why am I upset?”).

“Essentially, we think referring to yourself in the third person leads people to think about themselves more similar to how they think about others, and you can see evidence for this in the brain,” said Jason Moser, MSU associate professor of psychology. “That helps people gain a tiny bit of psychological distance from their experiences, which can often be useful for regulating emotions.”

Pretending to be Batman helps kids stay on task
by Christian Jarrett

Some of the children were assigned to a “self-immersed condition”, akin to a control group, and before and during the task were told to reflect on how they were doing, asking themselves “Am I working hard?”. Other children were asked to reflect from a third-person perspective, asking themselves “Is James [insert child’s actual name] working hard?” Finally, the rest of the kids were in the Batman condition, in which they were asked to imagine they were either Batman, Bob The Builder, Rapunzel or Dora the Explorer and to ask themselves “Is Batman [or whichever character they were] working hard?”. Children in this last condition were given a relevant prop to help, such as Batman’s cape. Once every minute through the task, a recorded voice asked the question appropriate for the condition each child was in [Are you working hard? or Is James working hard? or Is Batman working hard?].

The six-year-olds spent more time on task than the four-year-olds (half the time versus about a quarter of the time). No surprise there. But across age groups, and apparently unrelated to their personal scores on mental control, memory, or empathy, those in the Batman condition spent the most time on task (about 55 per cent for the six-year-olds; about 32 per cent for the four-year-olds). The children in the self-immersed condition spent the least time on task (about 35 per cent of the time for the six-year-olds; just over 20 per cent for the four-year-olds) and those in the third-person condition performed in between.

Dressing up as a superhero might actually give your kid grit
by Jenny Anderson

In other words, the more the child could distance him or herself from the temptation, the better the focus. “Children who were asked to reflect on the task as if they were another person were less likely to indulge in immediate gratification and more likely to work toward a relatively long-term goal,” the authors wrote in the study called “The “Batman Effect”: Improving Perseverance in Young Children,” published in Child Development.

Curmudgucation: Don’t Be Batman
by Peter Greene

This underlines the problem we see with more and more of what passes for early childhood education these days– we’re not worried about whether the school is ready to appropriately handle the students, but instead are busy trying to beat three-, four- and five-year-olds into developmentally inappropriate states to get them “ready” for their early years of education. It is precisely and absolutely backwards. I can’t say this hard enough– if early childhood programs are requiring “increased demands” on the self-regulatory skills of kids, it is the programs that are wrong, not the kids. Full stop.

What this study offers is a solution that is more damning than the “problem” that it addresses. If a four-year-old child has to disassociate, to pretend that she is someone else, in order to cope with the demands of your program, your program needs to stop, today.

Because you know where else you hear this kind of behavior described? In accounts of victims of intense, repeated trauma. In victims of torture who talk about dealing by just pretending they aren’t even there, that someone else is occupying their body while they float away from the horror.

That should not be a description of How To Cope With Preschool.

Nor should the primary lesson of early childhood education be, “You can’t really cut it as yourself. You’ll need to be somebody else to get ahead in life.” I cannot even begin to wrap my head around what a destructive message that is for a small child.

Can You Live With the Voices in Your Head?
by Daniel B. Smith

And though psychiatrists acknowledge that almost anyone is capable of hallucinating a voice under certain circumstances, they maintain that the hallucinations that occur with psychoses are qualitatively different. “One shouldn’t place too much emphasis on the content of hallucinations,” says Jeffrey Lieberman, chairman of the psychiatry department at Columbia University. “When establishing a correct diagnosis, it’s important to focus on the signs or symptoms” of a particular disorder. That is, it’s crucial to determine how the voices manifest themselves. Voices that speak in the third person, echo a patient’s thoughts or provide a running commentary on his actions are considered classically indicative of schizophrenia.

Auditory hallucinations: Psychotic symptom or dissociative experience?
by Andrew Moskowitz & Dirk Corstens

While auditory hallucinations are considered a core psychotic symptom, central to the diagnosis of schizophrenia, it has long been recognized that persons who are not psychotic may also hear voices. There is an entrenched clinical belief that distinctions can be made between these groups, typically on the basis of the perceived location or the ‘third-person’ perspective of the voices. While it is generally believed that such characteristics of voices have significant clinical implications, and are important in the differential diagnosis between dissociative and psychotic disorders, there is no research evidence in support of this. Voices heard by persons diagnosed schizophrenic appear to be indistinguishable, on the basis of their experienced characteristics, from voices heard by persons with dissociative disorders or with no mental disorder at all. On this and other bases outlined below, we argue that hearing voices should be considered a dissociative experience, which under some conditions may have pathological consequences. In other words, we believe that, while voices may occur in the context of a psychotic disorder, they should not be considered a psychotic symptom.

Hallucinations and Sensory Overrides
by T. M. Luhrmann

The psychiatric and psychological literature has reached no settled consensus about why hallucinations occur and whether all perceptual “mistakes” arise from the same processes (for a general review, see Aleman & Laroi 2008). For example, many researchers have found that when people hear hallucinated voices, some of these people have actually been subvocalizing: They have been using muscles used in speech, but below the level of their awareness (Gould 1949, 1950). Other researchers have not found this inner speech effect; moreover, this hypothesis does not explain many of the odd features of the hallucinations associated with psychosis, such as hearing voices that speak in the second or third person (Hoffman 1986). But many scientists now seem to agree that hallucinations are the result of judgments associated with what psychologists call “reality monitoring” (Bentall 2003). This is not the process Freud described with the term reality testing, which for the most part he treated as a cognitive higher-level decision: the ability to distinguish between fantasy and the world as it is (e.g., he loves me versus he’s just not that into me). Reality monitoring refers to the much more basic decision about whether the source of an experience is internal to the mind or external in the world.

Originally, psychologists used the term to refer to judgments about memories: Did I really have that conversation with my boyfriend back in college, or did I just think I did? The work that gave the process its name asked what it was about memories that led someone to infer that these memories were records of something that had taken place in the world or in the mind (Johnson & Raye 1981). Johnson & Raye’s elegant experiments suggested that these memories differ in predictable ways and that people use those differences to judge what has actually taken place. Memories of an external event typically have more sensory details and more details in general. By contrast, memories of thoughts are more likely to include the memory of cognitive effort, such as composing sentences in one’s mind.

Self-Monitoring and Auditory Verbal Hallucinations in Schizophrenia
by Wayne Wu

It’s worth pointing out that a significant portion of the non-clinical population experiences auditory hallucinations. Such hallucinations need not be negative in content, though as I understand it, the preponderance of AVH in schizophrenia is or becomes negative. […]

I’ve certainly experienced the “third man”, in a moment of vivid stress when I was younger. At the time, I thought it was God speaking to me in an encouraging and authoritative way! (I was raised in a very strict religious household.) But I wouldn’t be surprised if many of us have had similar experiences. These days, I have more often the cell-phone buzzing in my pocket illusion.

There are, I suspect, many reasons why the auditory system might be activated to give rise to auditory experiences that philosophers would define as hallucinations: recalling things in an auditory way, thinking in inner speech where this might be auditory in structure, etc. These can have positive influences on our ability to adapt to situations.

What continues to puzzle me about AVH in schizophrenia are some of its fairly consistent phenomenal properties: second or third-person voice, typical internal localization (though plenty of external localization) and negative content.

The Digital God, How Technology Will Reshape Spirituality
by William Indick
pp. 74-75

Doubled Consciousness

Who is this third who always walks beside you?
When I count, there are only you and I together.
But when I look ahead up the white road
There is always another one walking beside you
Gliding wrapt in a brown mantle, hooded.
—T.S. Eliot, The Waste Land

The feeling of “doubled consciousness” 81 has been reported by numerous epileptics. It is the feeling of being outside of one’s self. The feeling that you are observing yourself as if you were outside of your own body, like an outsider looking in on yourself. Consciousness is “doubled” because you are aware of the existence of both selves simultaneously—the observer and the observed. It is as if the two halves of the brain temporarily cease to function as a single mechanism; but rather, each half identifies itself separately as its own self. 82 The doubling effect that occurs as a result of some temporal lobe epileptic seizures may lead to drastic personality changes. In particular, epileptics following seizures often become much more spiritual, artistic, poetic, and musical. 83 Art and music, of course, are processed primarily in the right hemisphere, as is poetry and the more lyrical, metaphorical aspects of language. In any artistic endeavor, one must engage in “doubled consciousness,” creating the art with one “I,” while simultaneously observing the art and the artist with a critically objective “other-I.” In The Great Gatsby, Fitzgerald expressed the feeling of “doubled consciousness” in a scene in which Nick Carraway, in the throes of profound drunkenness, looks out of a city window and ponders:

Yet high over the city our line of yellow windows must have contributed their share of human secrecy to the casual watcher in the darkening streets, and I was him too, looking up and wondering. I was within and without, simultaneously enchanted and repelled by the inexhaustible variety of life.

Doubled-consciousness, the sense of being both “within and without” of one’s self, is a moment of disconnection and disassociation between the two hemispheres of the brain, a moment when left looks independently at right and right looks independently at left, each recognizing each other as an uncanny mirror reflection of himself, but at the same time not recognizing the other as “I.”

The sense of doubled consciousness also arises quite frequently in situations of extreme physical and psychological duress. 84 In his book The Third Man Factor, John Geiger delineates the conditions associated with the perception of the “sensed presence”: darkness, monotony, barrenness, isolation, cold, hunger, thirst, injury, fatigue, and fear. 85 Shermer added sleep deprivation to this list, noting that Charles Lindbergh, on his famous cross-Atlantic flight, recorded the perception of “ghostly presences” in the cockpit, that “spoke with authority and clearness … giving me messages of importance unattainable in ordinary life.” 86 Sacks noted that doubled consciousness is not necessarily an alien or abnormal sensation; we all feel it, especially when we are alone, in the dark, in a scary place. 87 We all can recall a memory from childhood when we could palpably feel the presence of the monster hiding in the closet, or that indefinable thing in the dark space beneath our bed. The experience of the “sensed other” is common in schizophrenia, can be induced by certain drugs, is a central aspect of the “near death experience,” and is also associated with certain neurological disorders. 88

To speak of oneself in the third person; to express the wish to “find myself,” is to presuppose a plurality within one’s own mind. 89 There is consciousness, and then there is something else … an Other … who is nonetheless a part of our own mind, though separate from our moment-to-moment consciousness. When I make a statement such as: “I’m disappointed with myself because I let myself gain weight,” it is quite clear that there are at least two wills at work within one mind—one will that dictates weight loss and is disappointed—and another will that defies the former and allows the body to binge or laze. One cannot point at one will and say: “This is the real me and the other is not me.” They’re both me. Within each “I” there exists a distinct Other that is also “I.” In the mind of the believer—this double-I, this other-I, this sentient other, this sensed presence who is me but also, somehow, not me—how could this be anyone other than an angel, a spirit, my own soul, or God? Sacks recalls an incident in which he broke his leg while mountain climbing alone and had to descend the mountain despite his injury and the immense pain it was causing him. Sacks heard “an inner voice” that was “wholly unlike” his normal “inner speech”—a “strong, clear, commanding voice” that told him exactly what he had to do to survive the predicament, and how to do it. “This good voice, this Life voice, braced and resolved me.” Sacks relates the story of Joe Simpson, author of Touching the Void , who had a similar experience during a climbing mishap in the Andes. For days, Simpson trudged along with a distinctly dual sense of self. There was a distracted self that jumped from one random thought to the next, and then a clearly separate focused self that spoke to him in a commanding voice, giving specific instructions and making logical deductions. 90 Sacks also reports the experience of a distraught friend who, at the moment she was about to commit suicide, heard a “voice” tell her: “No, you don’t want to do that…” The male voice, which seemed to come from outside of her, convinced her not to throw her life away. She speaks of it as her “guardian angel.” Sacks suggested that this other voice may always be there, but it is usually inhibited. When it is heard, it’s usually as an inner voice, rather than an external one. 91 Sacks also reports that the “persistent feeling” of a “presence” or a “companion” that is not actually there is a common hallucination, especially among people suffering from Parkinson’s disease. Sacks is unsure if this is a side-effect of L-DOPA, the drug used to treat the disease, or if the hallucinations are symptoms of the neurological disease itself. He also noted that some patients were able to control the hallucinations to varying degrees. One elderly patient hallucinated a handsome and debonair gentleman caller who provided “love, attention, and invisible presents … faithfully each evening.” 92

Part III: Off to the Asylum – Rational Anti-psychiatry
by Veronika Nasamoto

The ancients were also clued up in that they saw the origins of mental instability as spiritual, but they perceived it differently. In The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes presents a startling thesis, based on an analysis of the language of the Iliad, that the ancient Greeks were not conscious in the same way that modern humans are. The ancient Greeks had no sense of “I” with which to locate their mental processes (people in Victorian England would also sometimes speak in the third person rather than say I, because the eternal God, YHWH, was known as the great “I AM”). To them their inner thoughts were perceived as coming from the gods, which is why the characters in the Iliad find themselves in frequent communication with supernatural entities.

The Shadows of Consciousness in the Breakdown of the Bicameral Mirror
by Chris Savia

Jaynes’s description of consciousness, in relation to memory, proposes that what people believe to be rote recollection are concepts: the platonic ideals of their office, the view out of the window, et al. These contribute to one’s mental sense of place and position in the world. These memories enable one to see themselves in the third person.

Language, consciousness and the bicameral mind
by Andreas van Cranenburgh

Consciousness not a copy of experience

Since Locke’s tabula rasa it has been thought that consciousness records our experiences, to save them for possible later reflection. However, this is clearly false: most details of our experience are immediately lost when not given special notice. Recalling an arbitrary past event requires a reconstruction of memories. Interestingly, memories are often from a third-person perspective, which proves that they could not be a mere copy of experience.

The Origin of Consciousness in the Breakdown of the Bicameral Mind
by Julian Jaynes
pp. 347-350

Negatory Possession

There is another side to this vigorously strange vestige of the bicameral mind. And it is different from other topics in this chapter. For it is not a response to a ritual induction for the purpose of retrieving the bicameral mind. It is an illness in response to stress. In effect, emotional stress takes the place of the induction in the general bicameral paradigm just as in antiquity. And when it does, the authorization is of a different kind.

The difference presents a fascinating problem. In the New Testament, where we first hear of such spontaneous possession, it is called in Greek daemonizomai, or demonization. 10 And from that time to the present, instances of the phenomenon most often have that negatory quality connoted by the term. The why of the negatory quality is at present unclear. In an earlier chapter (II. 4) I have tried to suggest the origin of ‘evil’ in the volitional emptiness of the silent bicameral voices. And that this took place in Mesopotamia and particularly in Babylon, to which the Jews were exiled in the sixth century B.C., might account for the prevalence of this quality in the world of Jesus at the start of this syndrome.

But whatever the reasons, they must in the individual be similar to the reasons behind the predominantly negatory quality of schizophrenic hallucinations. And indeed the relationship of this type of possession to schizophrenia seems obvious.

Like schizophrenia, negatory possession usually begins with some kind of an hallucination. 11 It is often a castigating ‘voice’ of a ‘demon’ or other being which is ‘heard’ after a considerable stressful period. But then, unlike schizophrenia, probably because of the strong collective cognitive imperative of a particular group or religion, the voice develops into a secondary system of personality, the subject then losing control and periodically entering into trance states in which consciousness is lost, and the ‘demon’ side of the personality takes over.

Always the patients are uneducated, usually illiterate, and all believe heartily in spirits or demons or similar beings and live in a society which does. The attacks usually last from several minutes to an hour or two, the patient being relatively normal between attacks and recalling little of them. Contrary to horror fiction stories, negatory possession is chiefly a linguistic phenomenon, not one of actual conduct. In all the cases I have studied, it is rare to find one of criminal behavior against other persons. The stricken individual does not run off and behave like a demon; he just talks like one.

Such episodes are usually accompanied by twistings and writhings as in induced possession. The voice is distorted, often guttural, full of cries, groans, and vulgarity, and usually railing against the institutionalized gods of the period. Almost always, there is a loss of consciousness as the person seems the opposite of his or her usual self. ‘He’ may name himself a god, demon, spirit, ghost, or animal (in the Orient it is often ‘the fox’), may demand a shrine or to be worshiped, throwing the patient into convulsions if these are withheld. ‘He’ commonly describes his natural self in the third person as a despised stranger, even as Yahweh sometimes despised his prophets or the Muses sneered at their poets. 12 And ‘he’ often seems far more intelligent and alert than the patient in his normal state, even as Yahweh and the Muses were more intelligent and alert than prophet or poet.

As in schizophrenia, the patient may act out the suggestions of others, and, even more curiously, may be interested in contracts or treaties with observers, such as a promise that ‘he’ will leave the patient if such and such is done, bargains which are carried out as faithfully by the ‘demon’ as the sometimes similar covenants of Yahweh in the Old Testament. Somehow related to this suggestibility and contract interest is the fact that the cure for spontaneous stress-produced possession, exorcism, has never varied from New Testament days to the present. It is simply by the command of an authoritative person often following an induction ritual, speaking in the name of a more powerful god. The exorcist can be said to fit into the authorization element of the general bicameral paradigm, replacing the ‘demon.’ The cognitive imperatives of the belief system that determined the form of the illness in the first place determine the form of its cure.

The phenomenon does not depend on age, but sex differences, depending on the historical epoch, are pronounced, demonstrating its cultural expectancy basis. Of those possessed by ‘demons’ whom Jesus or his disciples cured in the New Testament, the overwhelming majority were men. In the Middle Ages and thereafter, however, the overwhelming majority were women. Also evidence for its basis in a collective cognitive imperative are its occasional epidemics, as in convents of nuns during the Middle Ages, in Salem, Massachusetts, in the eighteenth century, or those reported in the nineteenth century at Savoy in the Alps. And occasionally today.

The Emergence of Reflexivity in Greek Language and Thought
by Edward T. Jeremiah
p. 3

Modernity’s tendency to understand the human being in terms of abstract grammatical relations, namely the subject and self, and also the ‘I’—and, conversely, the relative indifference of Greece to such categories—creates some of the most important semantic contrasts between our and Greek notions of the self.

p. 52

Reflexivisations such as the last, as well as those like ‘Know yourself’ which reconstitute the nature of the person, are entirely absent in Homer. So too are uses of the reflexive which reference some psychological aspect of the subject. Indeed the reference of reflexives directly governed by verbs in Homer is overwhelmingly bodily: ‘adorning oneself’, ‘covering oneself’, ‘defending oneself’, ‘debasing oneself physically’, ‘arranging themselves in a certain formation’, ‘stirring oneself’, and all the prepositional phrases. The usual reference for indirect arguments is the self interested in its own advantage. We do not find in Homer any of the psychological models of self-relation discussed by Lakoff.

Use of the Third Person for Self-Reference by Jesus and Yahweh
by Rod Elledge
pp. 11-13

Viswanathan addresses illeism in Shakespeare’s works, designating it as “illeism with a difference.” He writes: “It [‘illeism with a difference’] is one by which the dramatist makes a character, speaking in the first person, refer to himself in the third person, not simply as a ‘he’, which would be illeism proper, a traditional grammatical mode, but by name.” He adds that the device is extensively used in Julius Caesar and Troilus and Cressida, and occasionally in Hamlet and Othello. Viswanathan notes the device, prior to Shakespeare, was used in the medieval theater simply to allow a character to announce himself and clarify his identity. Yet, he argues that, in the hands of Shakespeare, the device becomes “a masterstroke of dramatic artistry.” He notes four uses of this “illeism with a difference.” First, it highlights the character using it and his inner self. He notes that it provides a way of “making the character momentarily detach himself from himself, achieve a measure of dramatic (and philosophical) depersonalization, and create a kind of aesthetic distance from which he can contemplate himself.” Second, it reflects the tension between the character’s public and private selves. Third, the device “raises the question of the way in which the character is seen to behave and to order his very modes of feeling and thought in accordance with a rightly or wrongly conceived image or idea of himself.” Lastly, he notes the device tends to point toward the larger philosophical problem of man’s search for identity. Speaking of the use of illeism within Julius Caesar, Spevak writes that “in addition to the psychological and other implications, the overall effect is a certain stateliness, a classical look, a consciousness on the part of the actors that they are acting in a not so everyday context.”

Modern linguistic scholarship

Otto Jespersen notes various examples of the third-person self-reference including those seeking to reflect deference or politeness, adults talking to children as “papa” or “Aunt Mary” to be more easily understood, as well as the case of some writers who write “the author” or “this present writer” in order to avoid the mention of “I.” He notes Caesar as a famous example of “self-effacement [used to] produce the impression of absolute objectivity.” Yet, Head writes, in response to Jespersen, that since the use of the third person for self-reference

is typical of important personages, whether in autobiography (e.g. Caesar in De Bello Gallico and Captain John Smith in his memoirs) or in literature (Marlowe’s Faustus, Shakespeare’s Julius Caesar, Cordelia and Richard II, Lessing’s Saladin, etc.), it is actually an indication of special status and hence implies greater social distance than does the more commonly used first person singular.

Land and Kitzinger argue that “very often—but not always . . . the use of a third-person reference form in self-reference is designed to display that the speaker is talking about themselves as if from the perspective of another—either the addressee(s) . . . or a non-present other.” The linguist Laurence Horn, noting the use of illeism by various athlete and political celebrities, notes that “the celeb is viewing himself . . . from the outside.” Addressing what he refers to as “the dissociative third person,” he notes that an athlete or politician “may establish distance between himself (virtually never herself) and his public persona, but only by the use of his name, never a 3rd person pronoun.”

pp. 15-17

Illeism in Classical Antiquity

As referenced in the history of research, Kostenberger writes: “It may strike the modern reader as curious that Jesus should call himself ‘Jesus Christ’; however, self-reference in the third person was common in antiquity.” While Kostenberger’s statement is a brief comment in the context of a commentary and not a monographic study on the issue, his comment raises a critical question. Does a survey of the evidence reveal that Jesus’s use of illeism in this verse (and by implication elsewhere in the Gospels) reflects simply another example of a common mannerism in antiquity? […]

Early Evidence

From the fifth century BC to the time of Jesus the following historians refer to themselves in the third person in their historical accounts: Hecataeus (though the evidence is fragmentary), Herodotus, Thucydides, Xenophon, Polybius, Caesar, and Josephus. For the scope of this study this point in history (from fifth century BC to first century AD) is the primary focus. Yet, this feature was adopted from the earlier tendency in literature in which an author states his name as a seal or sphragis for their work. Herkommer notes the “self-introduction” (Selbstvorstellung) in the Homeric Hymn to Apollo, in choral poetry (Chorlyrik) such as that by the Greek poet Alkman (seventh century BC), and in poetic maxims (Spruchdichtung) such as those of the Greek poet Phokylides (seventh century BC). Yet, from the fifth century onward, this feature appears primarily in the works of Greek historians. In addition to early evidence (prior to the fifth century) of an author’s self-reference in his historiographic work, the survey of evidence also noted an early example of illeism within Homer’s Iliad. Because this ancient Greek epic poem reflects an early use of the third-person self-reference in a narrative context and offers a point of comparison to its use in later Greek historiography, this early example of the use of illeism is briefly addressed.

Maricola notes that the style of historical narrative that first appears in Herodotus is a legacy from Homer (ca. 850 BC). He notes that “as the writer of the most ‘authoritative’ third-person narrative, [Homer] provided a model not only for later poets, epic and otherwise, but also to the prose historians who, by way of Herodotus, saw him as their model and rival.” While Homer provided the authoritative example of third-person narrative, he also, centuries before the development of Greek historiography, used illeism in his epic poem the Iliad. Illeism occurs in the direct speech of Zeus (the king of the gods), Achilles (the “god-like” son of a king and goddess), and Hector (the mighty Trojan prince).

Zeus, addressing the assembled gods on Mt. Olympus, refers to himself as “Zeus, the supreme Master” […] and states how superior he is above all gods and men. Hector’s use of illeism occurs as he addresses the Greeks and challenges the best of them to fight against “good Hector” […]. Muellner notes in these instances of third person for self-reference (Zeus twice and Hector once) that “the personage at the top and center of the social hierarchy is asserting his superiority over the group . . . . In other words, these are self-aggrandizing third-person references, like those in the war memoirs of Xenophon, Julius Caesar, and Napoleon.” He adds that “the primary goal of this kind of third-person self-reference is to assert the status accruing to exceptional excellence.” Achilles refers to himself in the context of an oath (examples of which are reflected in the OT), yet his self-reference serves to emphasize his status in relation to the Greeks, and especially to King Agamemnon. Addressing Agamemnon, the general of the Greek armies, Achilles swears by his scepter and states that the day will come when the Greeks will long for Achilles […].

Homer’s choice to use illeism within the direct speech of these three characters contributes to an understanding of its potential rhetorical implications. In each case the character’s use of illeism serves to set him apart by highlighting his innate authority and superior status. Also, all three characters reflect divine and/or royal aspects (Zeus, king of gods; Achilles, son of a king and a goddess, and referred to as “god-like”; and Hector, son of a king). The examples of illeism in the Iliad, among the earliest evidence of illeism, reflect a usage that shares similarities with the illeism as used by Jesus and Yahweh. The biblical and Homeric examples each reflect illeism in direct speech within narrative discourse, and the self-reference serves to emphasize authority or status as well as a possible associated royal and/or divine aspect(s). Yet, the examples stand in contrast to the use of illeism by later historians. As will be addressed next, these ancient historians used the third-person self-reference as a literary device to give their historical accounts a sense of objectivity.

Women and Gender in Medieval Europe: An Encyclopedia
edited by Margaret C. Schaus
“Mystics’ Writings”

by Patricia Dailey
p. 600

The question of scribal mediation is further complicated in that the mystic’s text is, in essence, a message transmitted through her, which must be transmitted to her surrounding community. Thus, the denuding of voice of the text, of a first-person narrative, goes hand in hand with the status of the mystic as “transcriber” of a divine message that does not bear the mystic’s signature, but rather God’s. In addition, the tendency to write in the third person in visionary narratives may draw from a longstanding tradition that stems from Paul in 2 Cor. of communicating visions in the third person, but at the same time, it presents a means for women to negotiate with conflicts with regard to authority or immediacy of the divine through a veiled distance or humility that conformed to a narrative tradition.

Romantic Confession: Jean-Jacques Rousseau and Thomas de Quincey
by Martina Domines Veliki

It is no accident that the term ‘autobiography’, entailing a special amalgam of ‘autos’, ‘bios’ and ‘graphe’ (oneself, life and writing), was first used in 1797 in the Monthly Review by a well-known essayist and polyglot, translator of German romantic literature, William Taylor of Norwich. However, the term ‘autobiographer’ was first extensively used by an English Romantic poet, one of the Lake Poets, Robert Southey.1 This does not mean that no autobiographies were written before the beginning of the nineteenth century. The classical writers wrote about famous figures of public life, the Middle Ages produced educated writers who wrote about saints’ lives, and from the Renaissance onward people wrote about their own lives. However, autobiography, as an auto-reflexive telling of one’s own life’s story, presupposes a special understanding of one’s ‘self’ and therefore, biographies and legends of Antiquity and the Middle Ages are fundamentally different from ‘modern’ autobiography, which postulates a truly autonomous subject, fully conscious of his/her own uniqueness.2 Life-writing, whether in the form of biography or autobiography, occupied the central place in Romanticism. Autobiography would also often appear in disguise. One would immediately think of S. T. Coleridge’s Biographia Literaria (1817), which combines literary criticism and sketches from the author’s life and opinions, and Mary Wollstonecraft’s Short Residence in Sweden, Norway and Denmark (1796), which combines travel narrative and the author’s own difficulties of travelling as a woman.

When one thinks about the first ‘modern’ secular autobiography, it is impossible to avoid the name of Jean-Jacques Rousseau. He calls his first autobiography The Confessions, thus aligning himself in the long Western tradition of confessional writings inaugurated by St. Augustine (354 – 430 AD). Though St. Augustine confesses to the almighty God and does not really perceive his own life as significant, there is another dimension of Augustine’s legacy which is important for his Romantic inheritors: the dichotomies inherent in the Christian way of perceiving the world, namely the opposition of spirit/matter, higher/lower, eternal/temporal, immutable/changing become ultimately emanations of a single binary opposition, that of inner and outer (Taylor 1989: 128). The substance of St. Augustine’s piety is summed up by a single sentence from his Confessions:

“And how shall I call upon my God – my God and my Lord? For when I call on Him, I ask Him to come into me. And what place is there in me into which my God can come? (…) I could not therefore exist, could not exist at all, O my God, unless Thou wert in me.” (Confessions, book I, chapter 2, p.2, emphasis mine)

The step towards inwardness was for Augustine the step towards Truth, i.e. God, and as Charles Taylor explains, this turn inward was a decisive one in the Western tradition of thought. The ‘I’ or the first person standpoint becomes unavoidable thereafter. It was a long way from Augustine’s seeing these sources as residing in God to Rousseau’s pivotal turn to inwardness without recourse to God. Of course, one must not lose sight of the developments in continental philosophy pre-dating Rousseau’s work. René Descartes was the first to embrace Augustinian thinking at the beginning of the modern era, and he was responsible for the articulation of the disengaged subject: the subject asserting that the real locus of all experience is in his own mind [3]. With the empiricist philosophy of John Locke and David Hume, who claimed that we reach the knowledge of the surrounding world through disengagement and procedural reason, there is further development towards an idea of the autonomous subject. Although their teachings seemed to leave no place for subjectivity as we know it today, still they were a vital step in redirecting the human gaze from the heavens to man’s own existence.

[2] Furthermore, the Middle Ages would not speak about such concepts as ‘the author’ and one’s ‘individuality’, and it is futile to seek in such texts the appertaining subject. When a Croatian fourteenth-century author, Hanibal Lucić, writes about his life in a short text called De regno Croatiae et Dalmatiae? Paulus de Paulo, the last words indicate that the author perceives his life as being insignificant and of little value. The nuns of the fourteenth century writing their own confessions had to use the third person pronoun to refer to themselves and the ‘I’ was reserved for God only. (See Zlatar 2000)

Return to Childhood by Leila Abouzeid
by Geoff Wisner

In addition, autobiography has the pejorative connotation in Arabic of madihu nafsihi wa muzakkiha (he or she who praises and recommends him- or herself). This phrase denotes all sorts of defects in a person or a writer: selfishness versus altruism, individualism versus the spirit of the group, arrogance versus modesty. That is why Arabs usually refer to themselves in formal speech in the third person plural, to avoid the use of the embarrassing ‘I.’ In autobiography, of course, one uses ‘I’ frequently.

Becoming Abraham Lincoln
by Richard Kigel
Preface, XI

A note about the quotations and sources: most of the statements were collected by William Herndon, Lincoln’s law partner and friend, in the years following Lincoln’s death. The responses came in original handwritten letters and transcribed interviews. Because of the low literacy levels of many of his subjects, sometimes these statements are difficult to understand. Often they used no punctuation and wrote in fragments of thoughts. Misspellings were common and names and places were often confused. “Lincoln” was sometimes spelled “Linkhorn” or “Linkern.” Lincoln’s grandmother “Lucy” was sometimes “Lucey.” Some respondents referred to themselves in third person. Lincoln himself did so in his biographical writings.

p. 35

“From this place,” wrote Abe, referring to himself in the third person, “he removed to what is now Spencer County, Indiana, in the autumn of 1816, Abraham then being in his eighth [actually seventh] year. This removal was partly on account of slavery, but chiefly on account of the difficulty in land titles in Kentucky.”

Ritual and the Consciousness Monoculture
by Sarah Perry

Mirrors only became common in the nineteenth century; before, they were luxury items owned only by the rich. Access to mirrors is a novelty, and likely a harmful one.

In Others In Mind: Social Origins of Self-Consciousness, Philippe Rochat describes an essential and tragic feature of our experience as humans: an irreconcilable gap between the beloved, special self as experienced in the first person, and the neutrally-evaluated self as experienced in the third person, imagined through the eyes of others. One’s first-person self image tends to be inflated and idealized, whereas the third-person self image tends to be deflated; reminders of this distance are demoralizing.

When people without access to mirrors (or clear water in which to view their reflections) are first exposed to them, their reaction tends to be very negative. Rochat quotes the anthropologist Edmund Carpenter’s description of showing mirrors to the Biamis of Papua New Guinea for the first time, a phenomenon Carpenter calls “the tribal terror of self-recognition”:

After a first frightening reaction, they became paralyzed, covering their mouths and hiding their heads – they stood transfixed looking at their own images, only their stomach muscles betraying great tension.

Why is their reaction negative, and not positive? It is that the first-person perspective of the self tends to be idealized compared to accurate, objective information; the more of this kind of information that becomes available (or unavoidable), the more each person will feel the shame and embarrassment from awareness of the irreconcilable gap between his first-person specialness and his third-person averageness.

There are many “mirrors”—novel sources of accurate information about the self—in our twenty-first century world. School is one such mirror; grades and test scores measure one’s intelligence and capacity for self-inhibition, but just as importantly, peers determine one’s “erotic ranking” in the social hierarchy, as the sociologist Randall Collins terms it. […]

There are many more “mirrors” available to us today; photography in all its forms is a mirror, and internet social networks are mirrors. Our modern selves are very exposed to third-person, deflating information about the idealized self. At the same time, says Rochat, “Rich contemporary cultures promote individual development, the individual expression and management of self-presentation. They foster self-idealization.”

My Beef With Ken Wilber
by Scott Preston (also posted on Integral World)

We see immediately from this schema why the persons of grammar are minimally four and not three. It’s because we are fourfold beings and our reality is a fourfold structure, too, being constituted of two times and two spaces — past and future, inner and outer. The fourfold human and the fourfold cosmos grew up together. Wilber’s model can’t account for that at all.

So, what’s the problem here? Wilber seems to have omitted time and our experience of time as an irrelevancy. Time isn’t even represented in Wilber’s AQAL model. Only subject and object spaces. Therefore, the human form cannot be properly interpreted, for we have four faces, like some representations of the god Janus, that face backwards, forwards, inwards, and outwards, and we have attendant faculties and consciousness functions organised accordingly for mastery of these dimensions — Jung’s feeling, thinking, sensing, willing functions are attuned to a reality that is fourfold in terms of two times and two spaces. And the four basic persons of grammar — You, I, We, He or She — are the representation in grammar of that reality and that consciousness, that we are fourfold beings just as our reality is a fourfold cosmos.

Comparing Wilber’s model to Rosenstock-Huessy’s, I would have to conclude that Wilber’s model is “deficient integral” owing to its apparent omission of time and consequently of the “I-thou” relationship in which the time factor is really pronounced. For the “I-It” (or “We-Its”) relation is a relation of spaces — inner and outer, while the “I-Thou” (or “We-thou”) relation is a relation of times.

It is perhaps not so apparent to English speakers especially that the “thou” or “you” form is connected with time future. Other languages, like German, still preserve the formal aspects of this. In old English you had to say “go thou!” or “be thou loving!”, and so on. In other words, the “thou” or “you” is most closely associated with the imperative form and that is the future addressing the past. It is a call to change one’s personal or collective state — what we call the “vocation” or “calling” is time future in dialogue with time past. Time past is represented in the “we” form. We is not plural “I’s”. It is constituted by some historical act, like a marriage or union or congregation of peoples or the sexes in which “the two shall become one flesh”. We is the collective person, historically established by some act. The people in “We the People” is a singularity and a unity, an historically constituted entity called “nation”. A bunch of autonomous “I’s” or egos never yet formed a tribe or a nation — or a commune for that matter. Nor a successful marriage.

Though “I-It” (or “We-Its”) might be permissible in referring to the relation of subject and object spaces, “we-thou” is the relation in which the time element is outstanding.

TFA and Perspective of Perspectives

marmalade
I think the best integral model is about perspectives because a model itself offers a perspective and it’s only through our individual perspective (and our cultural-historical perspective) that we can understand a model’s perspective. Another type of integral model creates descriptive categories, which at its best has practical use, and at its worst creates unhelpful or unclear distinctions.

I think Spiral Dynamics is more of the former, and Wilber’s quadrants are more of the latter; but there is much cross-over. Spiral Dynamics can be used to categorize, but I think this is a wrong use of it. Wilber’s quadrants can be used to represent perspectives, which is how Wilber has tried to refine them, but still I find the quadrants dissatisfying in this manner.

A perspective of perspectives is privileged because it subsumes all else. Any model we create is created by humans on the planet earth during a very short span of time. I haven’t yet had a transcending vision of God’s view, and I’m not convinced anyone else has either. This notion of a perspective of perspectives is postmodern in the sense that it isn’t an objective framework that allows us to see outside of it, but neither can we separate it from what we are trying to explain by it. We are our perspectives, meaning we change as our perspectives change.

Despite what to some may seem like subjective relativism, any model of perspectives isn’t separate from the context of the larger world that informs our perspective, even if we don’t or can’t entirely comprehend it. We are part of the world and so our perspectives aren’t constrained by limited notions of individuality. We can infer that this perspective of perspectives somehow reflects a larger world context because, after all, it is this that our perspectives have arisen or evolved from. There is no necessity to make any metaphysical interpretations, but speculating might be useful if it leads to new perspectives that we can then verify in our own experience.

Everything is a perspective, including all aspects of an ITP. It’s not having a balanced life that creates an integral perspective. There must be something within our awareness that connects it all, even if only on a vague level of intention.

Basically, what I’m speaking about here is a TFA (Theory For Everything) rather than a TOE. A TFA doesn’t need to explain everything. It only needs to explain how we go about explaining and the constraints thereof, and there is no reason to assume that everything that we can explain is everything that exists. We don’t need to create a cosmological model of all reality nor a grand scientific synthesis nor theorize beyond our direct experience. A perspective of perspectives is a much less grand goal, but also much more subtle.

I have many thoughts on this matter, but I’m still thinking it all out. I’m pointing towards an archetypal explanation of model-making. The content can be anything, but there remain basic tendencies of how all models are made. And this inherent cognitive functioning of the human psyche affects the content that is modelled. We never see reality directly for what it is because we always are modelling, whether consciously or unconsciously.

As an example of what I’m thinking about, check out C. J. Lofting’s theory:
http://members.iimetro.com.au/~lofting/myweb/idm001.html

Does anyone have any ideas relating to the idea of a “perspective of perspectives” or of a TFA?

Does anyone know of any interesting forum discussions about this or any interesting websites?


Replies to This Discussion

Reply by marmalade on November 30, 2007 at 7:16pm
What I’m bringing up here also relates to the criticism of Wilber’s model dismissing the Western occult tradition. Fundamentally, the occult is about experience. Some people don’t like models such as Wilber presents because they seem too abstract. This is a challenge of studying Wilber. He covers so much material that it is difficult for anyone to research in depth in order to verify all of his sources. We just have to trust Wilber.

This is fine up to a point, but I want a basic framework that can be verified in my everyday and not-so-everyday experiences. For instance, I don’t simply accept spiral dynamics. I’ve looked at the world through this lens and it made sense of much of my experience. And hopefully science will further clarify its veracity or not.

Also, spiral dynamics appears to generally fit the modelling pattern of the chakra system. They aren’t the same, but maybe the same patterns in the human psyche have influenced both to create similar structures of meaning. As far as I know, Clare Graves wasn’t basing his research on the theory of the chakras. It doesn’t matter that the two theories are referring to different views of reality. If there is an archetypal patterning process, then similar models will create similar connections between ideas even when those ideas seem in disagreement.

I don’t know if that was a good example, but it’s an obvious comparison. What I’m interested in is similar to what Campbell was looking for in comparing myths from entirely separate cultures. So, I’m wondering whether there is a monomyth of models. Lofting’s basic idea is that all models begin with some basic duality that is then fed back into itself to create further distinctions. The clearest example he presents for how this occurs is the I Ching. And the I Ching could be seen as a model of perspectives.

Beebe is a Jungian theorist who proposed that archetype, complex, and type are getting at the same notion. I take Wilber’s criticism seriously that there is a pre/trans confusion in Jung’s archetypes. Jung did imply a hierarchy of archetypes (see James Whitlark’s explanation of individuation as it relates to Spiral Dynamics), but he left this unclear. Similarly, how do memes, holons, and morphic fields relate? All of these kinds of ideas put forth that there is something that creates coherence in our experience in a predictable way.

I found an interesting thread discussion related to all of this at Integral Review Forums:

Thinking postformally about “theory building”
http://global-arina.org/phpBB/viewtopic.php?t=24

[quote=”jgidley”]I see the link between architecture and thinking as one of the important contributions of postmodernism. Although it is emphasized by Steiner’s and Sri Aurobindo’s integral lineages, it is essentially overlooked in much other integral theory.[/quote]

This reminds me of how mnemonics was intimately connected with architecture.

[quote=”bonnittaroy”]My feeling is that it depends upon one’s relationship to one’s theory-making. Some people (like Schopenhauer) build theories to try to get at what reality really is. They expect that the intellect is the portal to answer that question. Others, like Whitehead and Guenther, are theory-making to tease out what is implicit in their view, that may be hidden or unformed and as yet to be articulated. In the process, what is brewing there becomes disclosed and “known” in a more conventional way. The theory can serve as a “marker” for other people to discuss synergistically implicit views, and move the understanding forward.

Theory making in the second sense is more like thought-experimenting. Or creating an aesthetic. This is not limited to philosophy. I love the way physicists do thought experiments like Schrodinger’s Cat and Wheeler’s Daemon.

The above comment pertains to people who actively see themselves as doing theory. But I would like also to explore how there is a kind of theory-building that is implicit in cognizing reality at all. Everyone has, at bottom, certain fundamental assumptions about reality – it is consummate with how reality arises at all. Reality arises such that I feel I am an individual being. But that is certainly just one view – based in a kind of implicit theory — the set of conditions of my cognizing mind.

Above that very fundamental level, there are the basic beliefs about reality that we hold — implicitly or explicitly — that also are a kind of theory-building that is going on all the time. In the integral community, for example, there are fundamental beliefs that few people would consider “theory building” and many would consider a description of how reality really is, namely, the holarchic organization of reality, the hierarchic organization of reality, the notion of development and evolution. These are tenets that underwrite our more obvious practices of theory building, but they are, in themselves, the product of implicit theory building.

I believe that an example of post-formal operations is to be able to “hold” each of these kinds of cognitive processes very very lightly — to be able to see them as processes that are going on all the time in a very intimate and implicit way. And to be able to make what is implicit, explicit — so we can be liberated from their limitations, while at the same time, expand our choice field as to what set or sets of theories (thought experiments) might be more helpful/useful.[/quote]

Reply by marmalade on November 30, 2007 at 11:22pm
I can’t claim to entirely understand what Lofting is getting at here. I have a high tolerance for abstraction and I get the gist of his theory, but I wish he used more grounded examples. He speaks a lot about mathematics and he loses me.
http://members.iimetro.com.au/~lofting/myweb/NeuroMaths3.htm#Recursion

The WHAT
The emphasis on WHAT is an emphasis on an object, a bounded ‘thing’ that can be tangible (as in a ball) or intangible (as in a marriage). Note how the intangible reflects what we call nominalisation where a process (and so a relationship, ’.. getting married’) has been converted into a noun, a thing (‘this marriage…’).

Although the term ‘what’ has a general nature about it, it still has a ‘point’ or ‘dot’ emphasis and we can refine this emphasis further by introducing additional terms such as WHO and WHICH. These terms act to particularise the general in that the ‘what’ realm is strongly ‘dot’ oriented and as such favours clear, precise, identifications and so a more LOCAL, discrete perspective.

This emphasis on ‘dot’ precision forces a degree of focus that can distort all considerations of the context in which the dot exists in that the precision requires a dependence on a universal context to support it.

The WHERE
The emphasis on WHERE is an emphasis on a relationship, there is a coordinates bias ‘relative’ to something else. There is a more intangible element here in that a set of relationships can go towards identifying an object by implications; there is an intuitive emphasis where a pattern based on linking a set of coordinates is ‘suddenly’ recognised as implying ‘something’; in other words there is a ‘constellations’ emphasis where objects are linked together to form a pattern that is then itself objectified; for example there is a strong emphasis here to geometric forms –e. g. ‘triangles’, ‘cubes’ etc. which in basic mathematics come out of joining coordinates.

This emphasis on constellation formation means that, when compared to the realm of the ‘what’, the ‘where’ reflects a LACK in precision where (!) the identification of something is made by identifying a pattern of landmarks ‘around’ the something. There is thus a strong context-sensitivity in the ‘where’ analysis when compared to the more precise, almost context-free (or local context-ignored) emphasis in the ‘what’ analysis. Thus the transference of a ‘where’ to a ‘what’ through the process of nominalisation acts to de-contextualise or more so encapsulate the context with the text. (See figure 1).

In general the term ‘where’ is as general as the term ‘what’ and as such we can introduce additional terms such as WHEN and HOW to aid in particularising the general. When compared to the distinctions of WHO and WHICH, the WHEN and HOW terms are highly dependent on coordinates (space and/or time), on establishing specific ‘begin-end’ positions rather than emphasis on a point free of any extensions.>>

Recursion and Emerging Numeracy
The recursion process is where an element is applied to itself, thus the identification of an object causes us to zoom in on that object for details. This process leads to the recognition of such concepts as an object’s negation, which at the general level relates to the entire universe exclusive of the object, and at the particular level to the object’s direct opposite (e.g. positive/negative, earth/sky, etc.).

Analysis of the patterns that emerge from applying the what/where dichotomy to itself leads to the identification of four fundamental distinctions which we can tie to feelings and so tie to pre-linguistic understandings of reality. These distinctions are:

Objects:

Wholes

Parts

Relationships:

Static

Dynamic

Note that a ‘part’ is the term we use for the combination of (a) an object and (b) a relationship to a greater object and it is the word ‘part’ that reflects what we can call the superposition of two distinctions – the distinction of ‘wholeness’ combined with the distinction of ‘relatedness’.>>
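
To make the recursion Lofting describes a little more concrete, here is a minimal sketch of my own (an illustration under my reading of him, not Lofting’s code, and the function name is my invention): a two-valued distinction applied back onto its own outputs doubles the number of categories with each pass, which is how a single yin/yang dichotomy can generate the I Ching’s 8 trigrams and 64 hexagrams.

from itertools import product

def recurse_dichotomy(poles, depth):
    """Apply a two-valued distinction to its own outputs `depth` times,
    returning every ordered combination of the two poles."""
    return ["".join(combo) for combo in product(poles, repeat=depth)]

# 1 = an unbroken (yang) line, 0 = a broken (yin) line.
trigrams = recurse_dichotomy(("1", "0"), 3)    # 2**3 = 8 trigrams
hexagrams = recurse_dichotomy(("1", "0"), 6)   # 2**6 = 64 hexagrams
print(len(trigrams), len(hexagrams))           # prints: 8 64

Whether this bare doubling captures anything essential about Lofting’s what/where analysis is open to question; it is only meant to show the generative mechanism he keeps pointing to.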

 

Reply by BrightAbyss~ on December 1, 2007 at 1:03am
YOU: I think the best integral model is about perspectives

ME: we gotta start there. It’s all about perspectives. Descartes tried to build up from the cogito (1PP), and then Husserl, but Merleau-Ponty did it better… A naturalist (integral) philosophy emerges out of a deep understanding of human knowledge-making and perspectives…

YOU: This notion of a perspective of perspectives is postmodern in the sense that it isn’t an objective framework that allows us to see outside of it, but neither can we separate it from what we are trying to explain by it.

ME: You gotta read Pierre Bourdieu’s work. Especially “The Logic of Practice” – where he talks about “objectifying objectification”. I think you’d find a kindred spirit with re: to perspectives.

YOU: Despite what to some may seem like subjective relativism, any model of perspectives isn’t separate from the context of the larger world that informs our perspective even if we don’t or can’t entirely comprehend it. We are part of the world and so our perspectives aren’t constrained by limited notions of individuality.

ME: This is why we have to be happy finding our way (carving out an existence) & dwelling in ‘worldspaces’ – with very HUMAN knowledges generated out of our Life Conditions (cf. Wittgenstein’s ‘Forms of Life’) and articulated through the rich tapestry of experience, being and relating.

I think ‘contingency’ is a key concept for understanding embodied human knowing… But just remember our perspectives are not so totally divorced from the Real, because there is an intimacy and immediacy to our being-in-the-world that necessarily encounters actual life conditions.

YOU: We never see reality directly for what it is because we always are modelling whether consciously or unconsciously.

ME: Idealism is a very sick joke played on us by our own abstractions… We are OF the world so we can directly ‘know’ it in so many practical and meaningful ways… Don’t fall into the trap of believing a priori that “we can never really know the thing-in-itself”. Human knowledge is grounded in human kinds of knowing, with its many faults, but – like you say – it is the ONLY kind of (contextual) knowing we have, so let’s get over ourselves and get to the actual work of putting our models/worldviews/discourse in the service of HEALTH & ADAPTATION.

YOU: This is fine up to a point, but I want a basic framework that can be verified in my everyday and not-so-everyday experiences.

ME: Then I believe you came to the right place. For instance, if AQAL is Wilber’s metaphoric ‘Integral Operating System’ (IOS), then what this network/forum wants to facilitate is allowing people at higher ‘altitudes’ of consciousness to evolve IOS’s of their OWN – developing and operating “applications” relevant to specific individuals, groups, projects, and always in context.

In other words, this forum wants to help people develop their own “integral operating systems” but with OPEN “sources” – i.e., drawing from various theories and traditions, and data.

So if Wilber’s AQAL can be compared to Microsoft’s Windows operating system, then our project (at the Integral Research Group) can be compared to Linux – in that we want to help create alternative OPEN SOURCE integral (OSI) operating “systems” (theories and practical applications).

You see, the door is wide OPEN to collaborative innovation re: discourse dynamics, integral thinking, healthy being and adaptive relations.

What say you?

cheers~

Reply by marmalade on December 3, 2007 at 10:33pm
Bright Abyss: “we gotta start there. It’s all about perspectives. Descartes tried to build up from the cogito (1PP), and then Husserl, but Merleau-Ponty did it better… A naturalist (integral) philosophy emerges out of a deep understanding of human knowledge-making and perspectives…”

Basically, I believe the most useful integral perspective is the one that precedes the seeking for an integral theory. Integral isn’t an unnatural or even new phenomenon. In terms of Spiral Dynamics, the higher levels are somehow already implied by the lower levels. In terms of Jung, archetypes precede specific manifestations of them as symbols or whatever.

BA: “You gotta read Pierre Bourdieu’s work. Especially “The Logic of Practice” – where he talks about “objectifying objectification”. I think you’d find a kindred spirit with re: to perspectives.”

Thanks for the suggestion.

BA: “This is why we have to be happy finding our way (carving out an existence) & dwelling in ‘worldspaces’ – with very HUMAN knowledges generated out of our Life Conditions (cf. Wittgenstein’s ‘Forms of Life’) and articulated through the rich tapestry of experience, being and relating.

I think ‘contingency’ is a key concept for understanding embodied human knowing… But just remember our perspectives are not so totally divorced from the Real, because there is an intimacy and immediacy to our being-in-the-world that necessarily encounters actual life conditions.”

I’m intrigued by what you said here. Could you explain some more? How do you relate this to an integral view? Did I seem to imply that I thought perspectives are somehow divorced from the REAL? What do you mean by the REAL? What do you mean by ‘contingency’?

BA: “Idealism is a very sick joke played on us by our own abstractions… We are OF the world so we can directly ‘know’ it in so many practical and meaningful ways… Don’t fall into the trap of believing a priori that “we can never really know the thing-in-itself”. Human knowledge is grounded in human kinds of knowing, with its many faults, but – like you say – it is the ONLY kind of (contextual) knowing we have, so let’s get over ourselves and get to the actual work of putting our models/worldviews/discourse in the service of HEALTH & ADAPTATION.”

When I spoke of not being able to directly know reality, I was referring to mental knowing. I agree there are many other kinds of knowing besides this. I had to laugh at your last sentence. I’m far from getting over myself and I’m no model of HEALTH & ADAPTATION. I’m a seeker and that is the best I can claim for myself.

Reply by marmalade on December 3, 2007 at 10:41pm
Chiron posted this at Lightmind:

By Colin McGinn

The Stuff of Thought: Language as a Window into Human Nature
by Steven Pinker

The Stuff of Thought is Steven Pinker’s fifth popular book in thirteen years, and by now we know what to expect. It is long, packed with information, clear, witty, attractively written, and generally persuasive. The topic, as earlier, is language and the mind—specifically, how language reflects human psychological nature. What can we learn about the mind by examining, with the help of linguistics and experimental psychology, the language we use to express ourselves?

Pinker ranges widely, from the verb system of English, to the idea of an innate language of thought, to metaphor, to naming, obscenity, and politeness. He is unfailingly engaging to read, with his aptly chosen cartoons, his amusing examples, and his bracing theoretical rigor. Yet there are signs of fatigue, not so much in the energy and enthusiasm he has put into the book as in the sometimes less than satisfying quality of the underlying ideas. I don’t blame the author for this: it is very hard to write anything deep, surprising, and true in psychology—especially when it comes to the most interesting aspects of our nature (such as our use of metaphor). A popular book on biology or physics will reliably deliver well-grounded information about things you don’t already know; in psychology the risk of banality dressed up as science is far greater. Sometimes in Pinker’s book the ratio of solid ideas to sparkling formulations is uncomfortably low (I found this particularly in the lively and amusing chapter on obscenity). He has decided to be ambitious, and there is no doubt of his ability to keep the show on the road, but it is possible to finish a long chapter of The Stuff of Thought and wonder what you have really learned—enjoyable as the experience of reading it may have been.

To my mind, by far the most interesting chapter of the book is the lengthy discussion of verbs—which may well appear the driest to some readers. Verbs are the linguistic keyhole to the mind’s secrets, it turns out. When children learn verbs they are confronted with a problem of induction: Can the syntactic rules that govern one verb be projected to another verb that has a similar meaning? Suppose you have already learned how to use the verb “load” in various syntactic combinations; you know that you can say both Hal loaded the wagon with hay and Hal loaded hay into the wagon. Linguists call the first kind of sentence a “container locative” and the second a “content locative,” because of the way they focus attention on certain aspects of the event reported—the wagon (container) or the hay (content), respectively (the word “locative” referring here to the way words express location). The two sentences seem very close in meaning, and the verb load slots naturally into the sentence frame surrounding it. So, can other verbs like fill and pour enter into the same combinations? The child learning English verbs might well suppose that they can, thus instantiating a rule of grammar that licenses certain syntactic transformations—to the effect that you can always rewrite a content locative as a container locative and vice versa. But if we look at how pour and fill actually work we quickly see that they violate any such rule. You can say John poured water into the glass (content locative) but you can’t say John poured the glass with water (container locative); whereas you can say John filled the glass with water (container locative) but you can’t say John filled water into the glass (content locative).

Somehow a child has to learn these syntactic facts about the verbs load, pour, and fill—and the rules governing them are very different. Why does one verb figure in one kind of construction but not in another? They all look like verbs that specify the movement of a type of stuff into a type of container, and yet they behave differently with respect to the syntactic structures in question. It’s puzzling.

The answer Pinker favors to this and similar puzzles is that the different verbs subtly vary in the way they construe the event they report: pour focuses on the type of movement that is involved in the transfer of the stuff, while neglecting the end result; fill by contrast specifies the final state and omits to say how that state precisely came about (and it might not have been by pouring). But load tells you both things: the type of movement and what it led to. Hence the verbs combine differently with constructions that focus on the state of the container and constructions that focus on the manner by which the container was affected.

The syntactic rules that control the verbs are thus sensitive to the precise meaning of the specific verb and how it depicts a certain event. And this means that someone who understands these verbs must tacitly grasp how this meaning plays out in the construction of sentences; thus the child has to pick up on just such subtle differences of meaning if she is to infer the right syntactic rule for the verb in question. Not consciously, of course; her brain must perform this work below the level of conscious awareness. She must implicitly analyze the verb—exposing its deep semantic structure. Moreover, these verbs form natural families, united by the way they conceive of actions—whether by their manner or by their end result. In the same class as pour, for example, we have dribble, drip, funnel, slosh, spill, and spoon.

This kind of example—and there is a considerable range of them—leads Pinker to a general hypothesis about the verb system of English (as well as other languages): the speaker must possess a language of thought that represents the world according to basic abstract categories like space, time, substance, and motion, and these categories constitute the meaning of the verb. When we use a particular verb in a sentence, we bring to bear this abstract system to “frame” reality in certain ways, thus imposing an optional grid on the flux of experience. We observe some liquid moving into a container and we describe it either as an act of pouring or as the state of being filled: a single event is construed in different ways, each reflecting the aspect we choose to focus on. None of this is conscious or explicit; indeed, it took linguists a long time to figure out why some verbs work one way and some another (Pinker credits the MIT linguists Malka Rappaport Hovav and Beth Levin). We are born with an implicit set of innate categories that organize events according to a kind of primitive physics, dealing with substance, motion, causality, and purpose, and we combine these to generate a meaning for a particular verb that we understand. The grammar of our language reflects this innate system of concepts.

As Pinker is aware, this is a very Kantian picture of human cognition. Kant regarded the mind as innately stocked with the basic concepts that make up Newtonian mechanics—though he didn’t reach that conclusion from a consideration of the syntax of verbs. And the view is not in itself terribly surprising: many philosophers have observed that the human conceptual scheme is essentially a matter of substances in space and time, causally interacting, moving and changing, obeying laws and subject to forces—with some of those substances being agents—i.e., conscious, acting human beings—with intentions and desires. What else might compose it? Here is a case where the conclusion reached by the dedicated psycholinguist is perhaps less revolutionary than he would like to think. The chief interest of Pinker’s discussion is the kind of evidence he adduces to justify such a hypothesis, rather than the hypothesis itself—evidence leading from syntax to cosmology, we might say. Of course the mind must stock basic concepts for the general structure of the universe if it is to grasp the nature of particular things within it; but it is still striking to learn that this intuitive physics shapes the very syntax of our language.

Not that everyone will agree with the general hypothesis itself—and Pinker has a whole chapter on innateness and the language of thought. Here he steers deftly between the extreme nativism of Jerry Fodor, according to which virtually every concept is innate, including trombone and opera (despite the fact that the concepts must therefore have preceded the invention of what they denote, being merely triggered into consciousness by experience of trombones and operas), and the kind of pragmatism that refuses to assign a fixed meaning to any word. Pinker sees that something conceptual has to be innate if language learning is to be possible at all, but he doesn’t believe it can be anything parochial and specific; so he concludes that only the most general categories of the world are present in the genes—the categories that any human being (or animal) needs to use if he or she is to survive at all. Among such categories, for example, are: event, thing, path, place, manner, acting, going, having, animate, rigid, flexible, past, present and future, causality, enabling and preventing, means and ends.

The picture then is that these innate abstract concepts mesh with the individual’s experience to yield the specific conceptual scheme that eventually flowers in the mind. The innate concepts pre-date language acquisition and make it possible; they are not the products of language. Thus Pinker rejects the doctrine of “linguistic determinism,” which holds that thought is nothing other than the result of the language we happen to speak—as in the infamous hypothesis of the linguists Benjamin Whorf and Edward Sapir that our thoughts are puppets of our words (as with the Eskimos who use many different words for snow). The point Pinker makes here—and it is a good one—is that we mustn’t mistake correlation for causation, assuming that because concepts and words go together the latter are the causes of the former. Indeed, it is far more plausible to suppose that our language is caused by our thoughts—that we can only introduce words for which we already have concepts. Words express concepts; they don’t create them.

Let’s suppose, then, that Pinker and others are right to credit the mind with an original system of basic physical concepts, supplemented with some concepts for number, agency, logic, and the like. We innately conceive of the world as containing what he calls “force dynamics”—substances moving through space, under forces, and impinging on other objects, changing their state. How do we get from this to the full panoply of human thought? How do we get to science, art, politics, economics, ethics, and so on? His answer is that we do it by judicious use of metaphor and the combinatorial power of language, as when words combine to produce the unlimited expressions of a human language. Language has infinite potential, because of its ability to combine words and phrases into sentences without limit: this is by now a well-worn point.

More controversial is the suggestion that metaphor is the way we transcend the merely mechanical—the bridge by which physics leads us to more abstract domains. Pinker notes, as many have before, that we routinely use spatial expressions to describe time (“he moved the meeting to Tuesday,” “don’t look backward”), as well as employ words like rise, fall, went, and send to capture events that are not literally spatial (prices rising, messages sent, and so on). Science itself is often powered by analogies, as when heat was conceived as a fluid and its laws derived accordingly. Our language is transparently shot through with metaphors of one kind or another. But it is far from clear that everything we do with concepts and language can be accounted for in this way; consider how we think and talk about consciousness and the mind, or our moral thinking. The concept of pain, say, is not explicable as a metaphorical variation on some sort of physical concept.

It just doesn’t seem true that everything nonphysical that we think about is metaphorical; for example, our legal concepts such as “rights” are surely not all mere metaphors, introduced on the shoulders of the concepts of intuitive physics. So there is a question how Pinker’s alleged language of thought, restricted as it is, can suffice to generate our total conceptual scheme; in which case we will need to count more concepts as innate (what about contract or punishment?)—or else rethink the whole innateness question. Not that I have any good suggestions about how human concepts come to be; my point is just that Pinker’s set of basic Kantian concepts seems too exiguous to do the job.

If the Kantian categories are supposed to make thought and language possible, then they also, for Pinker, impose limits on our mental functioning. This is a second main theme of his book: the human mind, for all its rich innate endowment, is fallible, prone to confusion, easily foiled. The very concepts that enable us to think coherently about the world can lead us astray when we try to extend them beyond their natural domain. Pinker discusses the concepts of space and time, exposing the paradoxes that result from asking whether these are finite or infinite; either way, human thought reels. As he says, we can’t think without these concepts, but we can’t make sense of them—not when we start to think hard about what they involve. For example, if space is bounded, what lies on the other side of the boundary? But if it’s not bounded, we seem saddled with an infinite amount of matter—which implies multiple identical universes.

The concept of free will poses similar paradoxes: either human choices are caused or they are not, but either way we can’t seem to make sense of free will. A lot of philosophy is like that; a familiar concept we use all the time turns puzzling and paradoxical once we try to make systematic sense of it. Pinker has fun detailing the natural errors to which the human mind is prone when trying to reason statistically or economically; human specimens are notoriously poor at reasoning in these matters. Even more mortifying, our prized intuitive physics, foundation of all our thought, is pretty bad as physics: projectiles don’t need impetus to keep them in steady motion, no matter what Aristotle and common sense may say. As Newton taught us, motion, once it begins, is preserved without the pressure of a continuously applied force—as when a meteor keeps moving in a straight line, though no force maintains this motion. And relativity and quantum theory violate commonsense physics at every turn.

Our natural concepts are as much a hindrance to thought as they are a springboard for it. When we try to turn our minds away from their primitive biological tasks toward modern science and industrial-electronic society we struggle and fall into fallacies; it’s an uphill battle to keep our concepts on track. Our innate “common sense” is riddled with error and confusion—not all of it harmless (as with the economically naive ideas about what constitutes a “fair price”).

Pinker also has three bulky chapters on the social aspects of language, dealing with naming and linguistic innovation in general, with obscenity and taboo words, and with politeness and authority relations in speech. The chapter on naming achieves something I thought was impossible: it gives an accurate exposition of the philosopher Saul Kripke’s classic discussion of proper names by a nonphilosopher—the gist of which is that the reference of a name is fixed not by the descriptive information in the mind of the speaker but by a chain of uses stretching back to an initial identification. For example, I refer to a certain Greek philosopher with the name “Plato” in virtue of the chain of uses that link my present use with that of ancient Greeks who knew him, not in virtue of having in my mind some description that picks him out uniquely from every other Greek philosopher.

Apart from this, Pinker worries at the question of fashions in names and how they change. He refutes such popular theories as that names are taken from public figures or celebrities; usually, the trend is already in place—and anyway the name “Humphrey” never took off, despite the star of Casablanca. It is fascinating to read that in the early part of the twentieth century the following names were reserved primarily for men: Beverly, Dana, Evelyn, Gail, Leslie, Meredith, Robin, and Shirley. But not much emerges about why names change as they do, besides some platitudes about the need for elites to stand out by adopting fashions different from the common herd.

I very much enjoyed the chapter on obscenity, which asks the difficult question of how words deemed taboo differ from their inoffensive synonyms (e.g., shit and feces). It can’t obviously be the referent of the term, since that is the same, and it isn’t merely that the taboo words are more accurately descriptive (excrement is equally accurate, but it isn’t taboo). Pinker reports, no doubt correctly, that swearing forces the hearer to entertain thoughts he’d rather not, but that too fails to distinguish taboo words from their nontaboo synonyms. The phenomenon is especially puzzling when we note that words can vary over time in their taboo value: damn used to be unutterable in polite society, while cunt was once quite inoffensive (Pinker reports a fifteenth-century medical textbook that reads “in women the neck of the bladder is short, and is made fast to the cunt”).

Of particular interest to the grammarian is the fact that in English all the impolite words for the sexual act are transitive verbs, while all the polite forms involve intransitive verbs: fuck, screw, hump, shag, bang versus have sex, make love, sleep together, go to bed, copulate. As Pinker astutely observes, the transitive sexual verbs, like other verbs in English, bluntly connote the nature of the motion involved in the reported action with an agent and a receiver of that motion, whereas the intransitive forms are discreetly silent about exactly how the engaged objects move in space. The physical forcefulness of the act is thus underlined in the transitive forms but not in the intransitive ones. None of this explains why some verbs for intercourse are offensive while others are not, but it’s surely significant that different physical images are conjured up by the different sexual locutions—with fuck semantically and syntactically like stab and have sex like have lunch.

Pinker’s discussion of politeness verges closest to platitude—noting, for example, that bribes cannot usually afford to be overt and that authority relations are sometimes encoded in speech acts, as with tu and vous in French. Here he relies heavily on lively examples and pop culture references, but the ideas at play are thin and rather forced. But, as I say, he has a tough assignment here—trying to extract theoretical substance from something both familiar and unsystematic. Laying out a game theory matrix, with its rows and columns of payoffs, for a potential bribe to a traffic cop adds little to the obvious description of such a situation.

The book returns to its core themes in the final chapter, “Escaping the Cave.” Pinker sums up:

Quote:
Human characterizations of reality are built out of a recognizable inventory of thoughts. The inventory begins with some basic units, like events, states, things, substances, places, and goals. It specifies the basic ways in which these units can do things: acting, going, changing, being, having. One event may be seen as impinging on another, by causing or enabling or preventing it. An action can be initiated with a goal in mind, in particular, the destination of a motion (as in loading hay) or the state resulting from a change (as in loading a wagon). Objects are differentiated by whether they are human or nonhuman, animate or inanimate, solid or aggregate, and how they are laid out along the three dimensions of space. Events are conceived as taking up stretches of time and as being ordered with respect to one another.

If that strikes you as a bit platitudinous, then such is the lot of much psychology—usually the good sort. What is interesting is the kind of evidence that can be given for these claims and the way they play out in language and behavior—not the content of the claims themselves.

But Pinker is also anxious to reiterate his thesis that our conceptual scheme is like Plato’s cave, in giving us only a partial and distorted vision of reality. We need to escape our natural way of seeing things, as well as appreciate its (limited) scope. Plato himself regarded a philosophical education as the only way to escape the illusions and errors of common sense—the cave in which we naturally dwell. Pinker too believes that education is necessary in order to correct and transcend our innate cognitive slant on the world. This means, unavoidably, using a part of our mind to get beyond the rest of our mind, so that there must be a part that is capable of distancing itself from the rest. He says little about how this might be possible—how that liberating part might operate—beyond what he has said about metaphors and the infinity of language. And the question is indeed difficult: How could the mind ever have the ability to step outside of itself? Aren’t we always trapped inside our given conceptual scheme? How do we bootstrap ourselves to real wisdom from the morass of innate confusion?

One reason it is hard to answer this question is that it is obscure what a concept is to start with. And here there is a real lacuna in Pinker’s book: no account is given of the nature of the basic concepts that are held to constitute the mind’s powers. He tells us at one point that the theory of conceptual semantics “proposes that word senses are mentally represented as expressions in a richer and more abstract language of thought,” as if concepts could literally be symbols in the language of thought. The idea then is that when we understand a verb like pour we translate it into a complex of symbols in the brain’s innate code (rather like the code used by a computer), mental counterparts of public words like move, cause, change. But that leaves wide open the question of how those inner words have meaning; they can’t just be bits of code, devoid of semantic content. We need to credit people with full-blown concepts at the foundation of their conceptual scheme—not just words for concepts.

Pinker has listed the types of concepts that may be supposed to lie at the foundation, but he hasn’t told us what those concepts consist in—what they are. So we don’t yet know what the stuff of thought is—only that it must have a certain form and content. Nowhere in the course of a long book on concepts does Pinker ever confront the really hard question of what a concept might be. Some theorists have supposed concepts to be mental images, others that they are capacities to discriminate objects, others dispositions to use words, others that they are mythical entities.

The problem is not just that this is a question Pinker fails to answer or even acknowledge; it is that without an answer it is difficult to see how we can make headway with questions about what our concepts do and do not permit. Is it our concepts themselves that shackle us in the cave or is it rather our interpretations of them, or maybe our associated theories of what they denote? Where exactly might a concept end and its interpretation begin? Is our concept of something identical to our conception of it—the things we believe about it? Do our concepts intrinsically blind us or is it just what we do with them in thought and speech that causes us to fail to grasp them? Concepts are the material that constitutes thought and makes language meaningful, but we are very far from understanding what kind of thing they are—and Pinker’s otherwise admirable book takes us no further with this fundamental question.

The New York Review of Books
Volume 54, Number 14 · September 27, 2007

Reply by marmalade on December 7, 2007 at 11:36pm
I just came across Gerry Goddard’s writings. His theory is based on archetypes and he references Richard Tarnas throughout his book. Tarnas wrote the book ‘Cosmos and Psyche’ which is an analysis of history using astrological patterns. He is the first writer who made astrology meaningfully accessible to me.

Here is Goddard’s book on-line that he finished right before he died:
http://www.islandastrology.net/contents.htm

Here is a quote from his article on postmodernism that seemed appropriate:
http://www.islandastrology.net/mut-post.html

Is astrology just another dish to choose from the endless buffet of competing delights, just another Wittgensteinian language game, another social ‘form of life,’ or is it truly an ancient parchment that charts a way through this particularly difficult though fascinating terrain? In one sense, astrology is quintessentially postmodern in that it is entirely constructed of symbols reflecting and referring to other symbols within a multidimensional hologram or Indra’s net amenable to seemingly endless patterns of interpretation that richly resonate to the soul yet appear to lack any clearly identifiable concrete referents. As such, like other postmodern disciplines, it generally resists the attempts of science to connect specific symbols with specific facts, yet at the same time the astrological language appears to open and reveal soul dimensions that are more than the ‘mere’ play of socially constructed imaginations projected upon an unknowable objective world.

In this sense, astrology avoids the most radical postmodern conclusion — that words, concepts, and texts refer endlessly to other words and texts lacking any ultimate reference to objective facts, universal truths, or the ‘way things really are.’ I would like to suggest that astrology is indeed, as Richard Tarnas has described it, the ‘philosopher’s stone,’ a metaphysical or metapsychological map completely friendly to postmodern ideas yet charting them within a larger and ‘perennial’ perspective (a la Schumacher, Smith, Wilber), one that embraces the premodern, modern, postmodern, and transpersonal dimensions, pointing beyond both the old absolutisms and the current radical relativism to a higher resolution. The astrological perspective, as well as the now general perspective of postmodernism, reveals the cultural and historical relativity of these paradigms or beliefs, although astrology identifies and maps the whole process in a way that transcends the particular linguistic cul-de-sac of contemporary critical thought by revealing a more inclusive and holistic archetypal structure.

Upon the basis of the generally agreed-on meanings of the four mutable principles and their archetypal correspondence to the essential features of the postmodern mind, a case can be made which not only deepens our understanding of the principles but, through astrology’s capacity to map postmodernism (in relation to other historical stages and consciousness structures) within a larger historical and developmental perspective, may establish the astrological mandala as an effective key to understanding the greater evolution and structure of consciousness.

Reply by anemone on December 8, 2007 at 7:19pm
Hello marmalade

I have been an avid, though amateur student of astrology for years. I have read Tarnas’ Cosmos and Psyche, and attended a few workshops given by him last year.

I’ve never heard anyone state a connection between astrology and postmodernism quite the way you have above. It is illuminating to me to read your post. A more typical analysis involving astrology, on one hand, and postmodernism on the other, usually involves some remark on how the former is steeped in archaic and mythical lore and projection while the latter dismisses all narrative as textual constructs.

I like the way you have pointed out the flexibility and multi-faceted dimensions inherent in both world-views.

In the last year, however, I have come to feel that as powerful a tool as astrology may be for psychological analysis, involving both individuals and group dynamics, it is not such a powerful tool at the macro, sociocultural or sociopolitical level of analysis, for example in making a useful comment on the larger forces shaping foreign policies of various governments, or the way public opinion is formed in a given society. This may, of course, be more a symptom of astrology’s current marginalization as an epistemological tool. If there were to be any dialogue with other established disciplines it is quite possible that it would prove instrumentally useful.
Any ideas?
I will check out your Goddard post.

Reply by marmalade on December 8, 2007 at 8:54pm
Hello anemone

Thanks for the response.

A: “I have been an avid, though amateur student of astrology for years. I have read Tarnas’ Cosmos and Psyche, and attended a few workshops given by him last year.”

You probably understand astrology and Tarnas better than I. For whatever reason, astrology never quite clicked together for me in the past.

A: “I’ve never heard anyone state a connection between astrology and postmodernism quite the way you have above. ”

As much as I wish I had written that, I can’t take credit for it. Everything below the link is Goddard’s own words. I should’ve made that clearer, but this forum doesn’t allow the way of quoting that I’m used to.

I agree with your assessment. As I said, Tarnas was the first writer to present astrology so that it felt deeply meaningful to me… such that I could connect with the symbolism. And Goddard has presented astrology and Tarnas in a way that gives me a better grasp of the system as a whole and how it might relate to my life.

A: “In the last year, however, I have come to feel that as powerful a tool astrology may be for psychological analysis, involving both individuals and group dynamics, it is not such a powerful tool of analysis at the macro, sociocultural or sociopolitical level of analysis, for example in making a useful comment on the larger forces shaping foreign policies of various governments, or the way public opinion is formed in a given society.”

What happened in the last year that has changed your perspective?

Are you saying that you have doubts about whether Tarnas’ analysis of history is meaningful?

“Any ideas?”

Not yet. I just discovered Goddard and have only barely begun to read the work of his that I linked to in my previous post. I really haven’t a clue what all of this means. I’m merely a curious fellow and am overjoyed when I happen upon someone like Goddard. I feel that I’ve stumbled upon a treasure trove of insight.

I’m better at thinking out ideas when I have feedback, and so I’d be happy for whatever little nuggets you’d like to throw my way.
