The Embodied Spider

There is more to embodied cognition than that neurocognition happens within and inseparably from the body. We are bodies. And our bodies are of the world, one might say they are the world, the only world we can comprehend (com- ‘together’ + prehendere ‘grasp’). That is simple enough. But what kind of embodied beings are we with what kind of embodied experience?

How we exist within our bodies… how we hold our physical form… how we position ourselves in relation to the world… how we inhabit our extended selves… All of this and more determines our way of being, what we perceive, think, and do, what we can imagine. It is through our bodies that we manage our lived reality. And it is through our bodies that we are managed by the forces and patterns of society and environment, the affordances of structured existence forming our habitus and worldview, perhaps carried epigenetically across generations and centuries.

We are spiders in webs of our own making, but webs we don’t so much see as perceive through, strands connecting us to the world to such an extent that it is unclear who is the puppet and who the puppetmaster. Social constructivism points toward a greater truth of webbed realism, of what we sense and know in our entanglement. As we are embodied, so we are embedded. Our identities extend into the world, which means the other extends back into us. One part shifts and the rest follows.

* * *

The World Shifts When a Black Widow Squats
by Ed Yong

“The widow’s abilities are part of a concept called “embodied cognition,” which argues that a creature’s ability to sense and think involves its entire body, not just its brain and sense organs. Octopus arms, for example, can grab and manipulate food without ever calling on the central brain. Female crickets can start turning toward the sound of a male using only the ears and neurons in their legs, well before their central nervous system even has a chance to process the noise. In the case of the black widow, the information provided by the sense organs in the legs depends on the position of the entire animal.

“Earlier, I described this as a postural squint. That’s close, but the analogy isn’t quite right, since squinting helps us focus on particular parts of space. Here, the spider is focusing on different parts of information space. It’s as if a human could focus on red colors by squatting, or single out high-pitched sounds by going into downward dog (or downward spider).

“The ability to sense vibrations that move through solid surfaces, as distinct from sounds that travel through air, is “an often overlooked aspect of animal communication,” says Beth Mortimer from the University of Oxford, who studies it in creatures from elephants to spiders. It’s likely, then, that the widow’s ability to control perception through posture “almost certainly [exists in] other spiders and web types, too, and other arthropods, including insects, that detect vibrations along surfaces through their legs.” Scientists just need to tune in.”

“…there resides in every language a characteristic world-view”

“Via the latter, qua character of a speech-sound, a pervasive analogy necessarily prevails in the same language; and since a like subjectivity also affects language in the same notion, there resides in every language a characteristic world-view. As the individual sound stands between man and the object, so the entire language steps in between him and the nature that operates, both inwardly and outwardly, upon him. He surrounds himself with a world of sounds, so as to take up and process within himself the world of objects. These expressions in no way outstrip the measure of the simple truth. Man lives primarily with objects, indeed, since feeling and acting in him depend on his presentations, he actually does so exclusively, as language presents them to him. By the same act whereby he spins language out of himself, he spins himself into it, and every language draws about the people that possesses it a circle whence it is possible to exit only by stepping over at once into the circle of another one. To learn a foreign language should therefore be to acquire a new standpoint in the world-view hitherto possessed, and in fact to a certain extent is so, since every language contains the whole conceptual fabric and mode of presentation of a portion of mankind.”

Wilhelm von Humboldt
On Language (1836)

* * *

Wilhelm von Humboldt
from Wikipedia

Wilhelm von Humboldt
from Stanford Encyclopedia of Philosophy

Wilhelm von Humboldt lectures
from Université de Rouen

Wilhelm von Humboldt and the World of Languages
by Ian F. McNeely

Wilhelm von Humboldt: A Critical Review On His Philosophy of Language, Theory and Practice of Education
by Dr Arlini Alias

The theory of linguistic relativity from the historical perspective
by Iaroslav

Ideasthesia

Ideasthesia
from Wikipedia

Ideasthesia (alternative spelling ideaesthesia) is defined as a phenomenon in which activations of concepts (inducers) evoke perception-like experiences (concurrents). The name comes from Ancient Greek ἰδέα (idéa) and αἴσθησις (aísthēsis), meaning “sensing concepts” or “sensing ideas”. The main reason for introducing the notion of ideasthesia was the set of problems raised by synesthesia. While “synesthesia” means “union of senses”, empirical evidence indicated that this was an incorrect explanation of a set of phenomena traditionally covered by this heading. Syn-aesthesis, denoting also “co-perceiving”, implies the association of two sensory elements with little connection to the cognitive level. However, according to others, most phenomena that have inadvertently been linked to synesthesia are in fact induced by semantic representations. That is, the meaning of the stimulus is what is important, rather than its sensory properties, as would be implied by the term synesthesia. In other words, while synesthesia presumes that both the trigger (inducer) and the resulting experience (concurrent) are of a sensory nature, ideasthesia presumes that only the resulting experience is of a sensory nature, while the trigger is semantic. Meanwhile, the concept of ideasthesia has developed into a theory of how we perceive, and the research has extended to topics other than synesthesia, as the concept of ideasthesia turned out to be applicable to our everyday perception. Ideasthesia has even been applied to the theory of art. Research on ideasthesia bears important implications for solving the mystery of human conscious experience, which, according to ideasthesia, is grounded in how we activate concepts.

What Is “Ideasthesia” And Why Is It Important To Know?
by Faena Aleph

Many of us speak metaphorically when we describe a color as “screaming” or a sound as “sharp”. These are synesthetic associations we all experience, whether we know it or not, but we pronounce them literally because they make enough sense to us.

But synesthesia, which is one of the most charming sensory phenomena, has been amply studied and illustrated by many artists. Today, however, a fascinating aspect of this bridge between senses is being discovered: ideasthesia.

Danko Nikolic, a brain researcher from the Max Planck Institute, has proposed a theory that questions two philosophical dualities: 1) mind and body, and 2) the perception of senses and ideas. His research suggests that, for starters, these dualities might not exist.

Broadly speaking, ideasthesia is a type of bridge that metaphorically links rational abstractions, i.e., ideas, with sensory stimuli in a dynamic catalyzed by language. Nevertheless, the best way of understanding “ideasthesia” is through a TED talk that Nikolic himself recently gave. And, be warned, his theory might just change your paradigms from their foundation and reinforce the beliefs that Walt Whitman anticipated over a hundred years ago.

Ideasthesia — Art, Genius, Insanity, and Semiotics
by Totem And Token

…the notion of ideasthesia — that one can feel or physically experience an idea. Instead of a letter or a sound or a single word as being physically felt, an entire idea or construct or abstract is experienced phenomenologically.

But this seems abstract in and of itself, right? Like, what would it mean to ‘feel’ an idea? The classic example, linked to here, would be to imagine two shapes. One is a curvy splatter, kind of like the old 90s Nickelodeon logo, and the other is an angular, jagged, pointy sort of shape. Which would you call Kiki and which would you call Bouba?

An overwhelming majority (95% according to one source) would say that the splatter is Bouba and the pointy thing is Kiki.

But why though?

Bouba and Kiki are random sounds, absolutely meaningless and the figures were similarly meaningless. Some contend that it is a linguistic effect, since ‘K’ is an angular letter and ‘B’ is more rounded. Yet, there seems to be a consensus on which is which, even cross-culturally to some extent. Because just the idea of the pointy shape feels like a Kiki and the blobbier shape feels like a Bouba.

Another way I think it is felt is when we talk about highly polarizing topics, often political or religious in nature. In the podcast You Are Not So Smart, David McRaney talks about being confronted with a differing viewpoint as having a gut-wrenching, physical effect on him. Researchers pointed out that the feeling is so strong that it actually elicits a fight-or-flight response.

But it’s just words, right? It’s not like someone saying “I don’t believe in universal healthcare” or “You should have the right to pull the plug in a coma” actually makes it so, or will cause it to happen to you. It is simply one person’s thought, so why does it trigger such a deep-seated emotion? The researchers in the episode hypothesize that the core ideas are related to your identity, which is being threatened, but I think the explanation is somewhat simpler and stranger.

It’s because the ideas actually feel dangerous to you.

This is why what feels perfectly rational to you feels irrational to others.

It also makes more sense when talking about geniuses or highly gifted individuals. Although they exist, the Dr. House-type hyper-rational savants aren’t usually what you hear about when you look at the biographies of highly intelligent or gifted people. Da Vinci, Goethe, Tesla, Einstein and others all seem to describe an intensely phenomenological approach to creating their works.

Even what are traditionally considered more rational pursuits, like math, have occasional introspective debates about whether string theory or higher-order mathematics is created or discovered. This seems like a question about whether one feels out a thought or collects and constructs evidence to make a case.

What’s more is that, while I think most people can feel an idea to some extent (kiki vs bouba), gifted people and geniuses are more sensitive to these ideas and can thus navigate them better. Sensitivity seems to really be the hallmark of gifted individuals, so much so that I remember reading about how some gifted students have to wear special socks because the inner stitching was too distracting.

I remember when I was younger (around elementary school) there was a girl in our school’s gifted program whom everyone could not stand. She seemed to have a hair-trigger temper and would snap at just about anything. I realize now that she was simply incredibly sensitive to other children and didn’t really know how to handle it maturely.

I can imagine that this sort of sensitivity, applied to ideas and thought processes, might actually be a big reason why geniuses can handle seemingly large and complex thoughts that are a struggle for the rest of us — they aren’t just thinking through it, they are also feeling their way through it.

It may offer insight into the oft-observed correlation between madness and intellect. Maybe that’s what’s really going on in schizophrenia. It’s not just a disconnect of thoughts, but an oversensitivity to the ideas that breed those thoughts that elicits instinctive, reactionary emotions much like our fight-or-flight responses to polarizing thoughts. The hallucinations are another manifestation of the weaker sensory experience of benign symbols and thoughts.

The Haunting of Voices

“If I met a skin-changer who demanded my shoes, I’d give him my shoes.” This is what a Navajo guy once told me. I didn’t inquire about why a skin-changer would want his shoes, but it was a nice detail of mundane realism. This conversation happened when I was living in Arizona and working at the Grand Canyon. Some might see this anecdote as the over-worked imagination of the superstitious. That probably is how I took it at the time. But I wouldn’t now be so dismissive.

While there, my job was to do housekeeping in the El Tovar. It’s an old hotel located directly on the South Rim of the canyon. It has the feeling of a building that has been around a while. Its age was hard for me to ignore, given its lack of an elevator, something I became familiar with in carrying stacks of sheets up the stairs of multiple floors. I worked there a few times late at night and there was an eerie atmosphere to the place. You could viscerally sense the history, all the people who had stayed there and passed through.

There were stories of suicides and homicides, of lonely lost souls still looking for their lovers or simply going through their habitual routines in the afterlife. The place was famous for having been one of the locations where the Harvey Girls worked, young women looking for wealthy husbands. There was a tunnel that was once used by the Harvey Girls to go between the hotel and the women’s dorm. This hidden and now enclosed tunnel added to the spookiness.

Many Navajo worked at the Grand Canyon, including at the El Tovar. And sometimes we would chat. I asked about the ghosts that supposedly haunted the place. But they were reluctant to talk about it. I later learned that they thought it disrespectful or unwise to speak of the dead. I also learned that some had done traditional ceremonies in the hotel in order to put the dead to rest and help them pass over to the other side. Speaking of the dead would be like calling them back to the world of the living.

I doubt this worldview is merely metaphorical in the superficial sense. Though it might be metaphorical in the Jaynesian sense. Julian Jaynes hypothesized that ancient people continued to hear the voices of the dead, that the memory would live on as auditory experience. He called this the bicameral mind. And in bicameral societies, voice-hearing supposedly was key to social order. This changed for various reasons, and voice-hearing then became a threat to the new social order that replaced the old one.

The Navajo’s fearful respect of ghosts could be thought of as a bicameral carryover. Maybe they better understand the power voice-hearing can have. Ask any schizophrenic about this and they’d agree. Most of us, however, have developed thick boundaries of the egoic mind. We so effectively repress the many voices under the ego’s sole rulership that we are no longer bothered by their sway, at least not consciously.

Still, we may be more influenced than we realize. We still go through the effort of costly rituals of burying the dead where they are kept separate from the living, not to mention appeasing them with flowers and flags. Research shows that the number of people who have heard disembodied voices in their lifetime is surprisingly high. The difference for us is that we don’t openly talk about it and try our best to quickly forget it again. Even as we don’t have ceremonies in the way seen in Navajo tradition, we have other methods for dispelling the spirits that otherwise would haunt us.

Psychedelics and Language

“We cannot evolve any faster than we evolve our language because you cannot go to places that you cannot describe.”
~Terence McKenna

This post is a placeholder, as I work through some thoughts. Maybe the most central link between much of it is Terence McKenna’s stoned ape theory, which concerns the evolution of consciousness as it relates to psychedelics and language. Related to McKenna’s view, there have been many observations of non-human animals imbibing a wide variety of mind-altering plants, often psychedelics. Giorgio Samorini, in Animals and Psychedelics, argues that this behavior is evolutionarily advantageous in that it induces lateral thinking.

Also, as McKenna points out, many psychedelics intensify the senses, a useful effect when hunting. Humans won’t only take drugs themselves for this purpose but also give them to their animals: “A classic case is indigenous people giving psychedelics to hunting dogs to enhance their abilities. A study published in the Journal of Ethnobiology reports that at least 43 species of psychedelic plants have been used across the globe for boosting dog hunting practices. The Shuar, an indigenous people from Ecuador, include 19 different psychedelic plants in their repertoire for this purpose—including ayahuasca and four different types of brugmansia” (Alex K. Gearin, High Kingdom). So, there are many practical reasons for using psychoactive drugs. Language might have been an unintended side effect.

There is another way to get to McKenna’s conclusion. David Lewis-Williams asserts that cave paintings are shamanic. He discusses the entoptic imagery that is common in trance, whether from psychedelics or by other means. This interpretation isn’t specifically about language, but that is where another theory can help us. Genevieve von Petzinger takes a different tack by speculating that the geometric signs on cave walls were a set of symbols, possibly a system of graphic communication and so maybe the origin of writing.

In exploring the sites for herself, she ascertained there were 32 signs found over a 30,000-year period in Europe. Some of the same signs were found outside of Europe as well. It’s the consistency and repetition that caught her attention. They weren’t random or idiosyncratic aesthetic flourishes. If we combine that with Williams’ theory, we might have the development of proto-concepts, still attached to the concrete world but in the process of developing into something else. It would indicate that something fundamental about the human mind itself was changing.

I have my own related theory about the competing influence of psychedelics and addictive substances, the influence being not only on the mind but on society and so related to the emergence of civilization. I’m playing around with the observation that it might tell us much about civilization that, over time, addiction became more prevalent than psychedelics. I see the shift in this preference having become apparent sometime following the neolithic era, although becoming most noticeable in the Axial Age. Of course, language already existed at that point. Though maybe, as Julian Jaynes and others have argued, the use of language changed. I’ll speculate about all of that at a later time.

In the articles and passages and links below, there are numerous overlapping ideas and topics. Here is some of what stood out to me, along with some of the thoughts on my mind while reading:

  • synaesthesia, gesture, ritual, dance, sound, melody, music, poiesis, repetition (mimesis, meter, rhythm, rhyme, alliteration, etc.) vs. repetition-compulsion;
  • formulaic vs. grammatical language, poetry vs. prose, concrete vs. abstract, metaphor, and metonymy;
  • aural and oral, listening and speaking, preliterate, epic storytelling, eloquence, verbosity, fluency, and graphomania;
  • enthralled, entangled, enactivated, embodied, extended, hypnosis, voices, voice-hearing, bundle theory of self, ego theory of self, authorization, and Logos;
  • et cetera.

* * *

Animals on Psychedelics: Survival of the Trippiest
by Steven Kotler

According to Italian ethnobotanist Giorgio Samorini, in his 2001 Animals and Psychedelics, the risk is worth it because intoxication promotes what psychologist Edward de Bono once called lateral thinking: problem-solving through indirect and creative approaches. Lateral thinking is thinking outside the box, without which a species would be unable to come up with new solutions to old problems, without which a species would be unable to survive. De Bono considers intoxication an important “liberating device,” freeing us from the “rigidity of established ideas, schemes, divisions, categories and classifications.” Both Siegel and Samorini think animals use intoxicants for this reason, and they do so knowingly.

Don’t Be A Sea Squirt.
by Tom Morgan

It’s a feature of complex adaptive systems that a stable system is a precursor to a dead system. Something that runs the same routine day after day is typically a dying system. There’s evidence that people with depression are stuck in neurological loops that they can’t get out of. We all know what it’s like to be trapped in the same negative thought patterns. Life needs perpetual novelty to succeed. This is one of the reasons researchers think that psychedelics have proven effective at alleviating depression; they break our brains out of the same familiar neural pathways.

This isn’t a uniquely human trait; animals also engage in deliberate intoxication. In his book Animals and Psychedelics, Italian ethnobotanist Giorgio Samorini wrote that ‘drug-seeking and drug-taking behavior, on the part of both humans and animals, enjoys an intimate connection with … depatterning.’ And thus dolphins get high on blowfish, elephants seek out alcohol, and goats eat the beans of the mescal plant. They’re not just having fun, they’re expanding the possible range of their behaviours and breaking stale patterns. You’re not just getting wasted, you’re furthering the prospects of the species!

Synesthesias, Synesthetic Imagination, and Metaphor in the Context of Individual Cognitive Development and Societal Collective Consciousness
by Harry Hunt

The continuum of synesthesias is considered in the context of evolution, childhood development, adult creativity, and related states of imaginative absorption, as well as the anthropology and sociology of “collective consciousness”. In Part I synesthesias are considered as part of the mid-childhood development of metacognition, based on a Vygotskian model of the internalization of an earlier animism and physiognomic perception, and as the precursor for an adult capacity for imaginative absorption central to creativity, metaphor, and the synesthetically based “higher states of consciousness” in spontaneous mystical experience, meditation, and psychedelic states. Supporting research is presented on childhood precocities of a fundamental synesthetic imagination that expands the current neuroscience of classical synesthetes into a broader, more spontaneous, and open-ended continuum of introspective cross modal processes that constitute the human self referential consciousness of “felt meaning”. In Part II Levi-Strauss’ analysis of the cross modal and synesthetic lattices underlying the mythologies of native peoples and their traditional animation thereby of surrounding nature as a self reflective metaphoric mirror, is illustrated by its partial survival and simplification in the Chinese I-Ching. Jung’s psychological analysis of the I-Ching, as a device for metaphorically based creative insight and as a prototype for the felt “synchronicities” underlying paranormal experience, is further extended into a model for a synesthetically and metaphorically based “collective consciousness”. This metaphorically rooted and coordinated social field is explicit in mythologically centered, shamanic peoples but rendered largely unconscious in modern societies that fail to further educate and train the first spontaneous synesthetic imaginings of mid-childhood.

Psychedelics and the Full-Fluency Phenomenon
by T.H.

Like me, the full-fluency phenomenon has been experienced by many other people who stutter while using psilocybin and MDMA, and unlike me, while using LSD as well. […]

There’s also potential for immediate recovery from stuttering following a single high dose experience. One well told account of this comes from Paul Stamets, the renowned mycologist, whose stuttering stopped altogether following his first psilocybin mushroom experience. To sustain such a high increase in fluency after the effects of the drug wear off is rare, but Paul’s story gives testimony to the possibility for it to occur.

Can Psychedelics Help You Learn New Languages?
by The Third Wave Podcast

Idahosa Ness runs “The Mimic Method,” a website that promises to help you learn foreign languages quickly by immersing you in their sounds and pronunciations. We talk to Idahosa about his experiences with cannabis and other psychedelics, and how they have improved his freestyle rapping, increased his motivation to learn new languages, and helped the growth of his business.

Marijuana and Divergent Thinking
by Jonah Lehrer

A new paper published in Psychiatry Research sheds some light on this phenomenon, or why smoking weed seems to unleash a stream of loose associations. The study looked at a phenomenon called semantic priming, in which the activation of one word allows us to react more quickly to related words. For instance, the word “dog” might lead to decreased reaction times for “wolf,” “pet” and “Lassie,” but won’t alter how quickly we react to “chair”.

Interestingly, marijuana seems to induce a state of hyper-priming, in which the reach of semantic priming extends outwards to distantly related concepts. As a result, we hear “dog” and think of nouns that, in more sober circumstances, would seem to have nothing in common. […]

Last speculative point: marijuana also enhances brain activity (at least as measured indirectly by cerebral blood flow) in the right hemisphere. The drug, in other words, doesn’t just suppress our focus or obliterate our ability to pay attention. Instead, it seems to change the very nature of what we pay attention to, flattening out our hierarchy of associations.
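The hyper-priming idea can be made concrete with a toy model (my own illustration, not taken from the study; the numbers and association strengths are made up): if reaction time drops with association strength, then "flattening the hierarchy of associations" means the distant associates gain the most speed-up.

```python
# Toy model of semantic priming, with hypothetical parameters:
# reaction time to a target word falls with its association
# strength to the prime, and "hyper-priming" pulls all strengths
# upward so that distant associates speed up the most.

BASE_RT_MS = 600        # assumed baseline reaction time
MAX_BENEFIT_MS = 120    # assumed maximum priming benefit

# Illustrative association strengths to the prime "dog" (0..1).
associations = {"wolf": 0.9, "pet": 0.8, "leash": 0.5, "chair": 0.05}

def reaction_time(strength, flattening=0.0):
    """RT falls with association strength; `flattening` pulls every
    strength toward 1.0, mimicking a hyper-primed state."""
    effective = strength + flattening * (1.0 - strength)
    return BASE_RT_MS - MAX_BENEFIT_MS * effective

sober = {w: reaction_time(s) for w, s in associations.items()}
hyper = {w: reaction_time(s, flattening=0.6) for w, s in associations.items()}

# Distantly related "chair" gains far more speed-up than close "wolf".
for word in associations:
    print(word, round(sober[word]), "->", round(hyper[word]))
```

In this sketch the already-close associates have little room to improve, so flattening mostly benefits the remote ones, which is one way to read "flattening out our hierarchy of associations."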

How the Brain Processes Language on Acid Is a Trip
by Madison Margolin

“Results showed that while LSD does not affect reaction times, people under LSD made more mistakes that were similar in meaning to the pictures they saw,” said lead author Dr. Neiloufar Family, a post-doc from the University of Kaiserslautern.

For example, participants who were dosed with acid would more often say “bus” or “train” when asked to identify a picture of a car, compared to those who ingested the placebo. These lexical mixups shed some light on how LSD affects semantic networks and the way the brain draws connections between different words or concepts.

“The effects of LSD on language can result in a cascade of associations that allow quicker access to far away concepts stored in the mind,” said Family, discussing the study’s implications for psychedelic-assisted psychotherapy. Moreover, she added, “inducing a hyper-associative state may have implications for the enhancement of creativity.”

New study shows LSD’s effects on language
by Technische Universität Kaiserslautern

This indicates that LSD seems to affect the mind’s semantic networks, or how words and concepts are stored in relation to each other. When LSD makes the network activation stronger, more words from the same family of meanings come to mind.

The results from this experiment can lead to a better understanding of the neurobiological basis of semantic network activation. Neiloufar Family explains further implication: “These findings are relevant for the renewed exploration of psychedelic psychotherapy, which are being developed for depression and other mental illnesses. The effects of LSD on language can result in a cascade of associations that allow quicker access to far away concepts stored in the mind.”

The many potential uses of this class of substances are under scientific debate. “Inducing a hyper-associative state may have implications for the enhancement of creativity,” Family adds. The increase in activation of semantic networks can lead distant or even subconscious thoughts and concepts to come to the surface.

A new harmonic language decodes the effects of LSD
by Oxford Neuroscience

Dr Selen Atasoy, the lead author of the study says: “The connectome harmonics we used to decode brain activity are universal harmonic waves, such as sound waves emerging within a musical instrument, but adapted to the anatomy of the brain. Translating fMRI data into this harmonic language is actually not different than decomposing a complex musical piece into its musical notes”. “What LSD does to your brain seems to be similar to jazz improvisation” says Atasoy, “your brain combines many more of these harmonic waves (connectome harmonics) spontaneously yet in a structured way, just like improvising jazz musicians play many more musical notes in a spontaneous, non-random fashion”.

“The presented method introduces a new paradigm to study brain function, one that links space and time in brain activity via the universal principle of harmonic waves. It also shows that this spatio-temporal relation in brain dynamics resides at the transition between order and chaos.” says Prof Gustavo Deco.

Dr. Robin Carhart-Harris adds: “Our findings reveal the first experimental evidence that LSD tunes brain dynamics closer to criticality, a state that is maximally diverse and flexible while retaining properties of order. This may explain the unusual richness of consciousness experienced under psychedelic drugs and the notion that they ‘expand consciousness’.”
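The "decomposing a musical piece into its notes" analogy can be sketched numerically. This is only an analogy to the study's method: the actual work uses harmonic eigenmodes of the human connectome, whereas this sketch uses ordinary 1-D Fourier modes, the same mathematical idea on a much simpler domain.

```python
# Decompose a synthetic "activity pattern" into harmonic modes,
# the way a chord is decomposed into notes. (Illustrative stand-in
# for connectome harmonics, which use brain-anatomy eigenmodes.)

import numpy as np

n = 256
t = np.linspace(0, 1, n, endpoint=False)

# A pattern built from two harmonics: a slow one and a fast one.
signal = 2.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 17 * t)

# Project onto the harmonic basis and read off each mode's amplitude.
spectrum = np.fft.rfft(signal)
amplitude = np.abs(spectrum) / (n / 2)

# The decomposition recovers exactly which "notes" were combined.
dominant = np.argsort(amplitude)[::-1][:2]
print(sorted(dominant.tolist()))  # → [3, 17]
```

The study's claim that LSD recruits "many more harmonic waves, spontaneously yet in a structured way" would correspond, in this toy picture, to the amplitude spectrum spreading across more modes without becoming pure noise.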

Did Psilocybin Mushrooms Lead to Human Language?
by Chris Rhine

Numerous archaeological finds include depictions of psilocybin mushrooms from various places and times around the world. One such find comprises hallucinogenic mushrooms in works produced 7,000 to 9,000 years ago in the Sahara Desert, as stated in Giorgio Samorini’s article, “The Oldest Representations of Hallucinogenic Mushrooms in the World.” Samorini concluded, “This Saharan testimony would demonstrate that the use of hallucinogens originates in the Paleolithic period and is invariably included within mystico-religious contexts and rituals.”

Some of early man’s first drawings include the ritualization of a plant as a sign—possibly a tribute to the substance that helped in the written sign’s development.

Are Psychedelic Hallucinations Actually Metaphorical Perceptions?
by Michael Fortier

The brain is constantly attempting to predict what is going on in the world. Because it happens in a dark environment with reduced sensory stimulation, the ayahuasca ritual dampens bottom-up signaling (sensory information becomes scarcer). If you are facing a tree in daylight and your brain wrongly guesses that there is an electric pole in front of you, bottom-up prediction errors will quickly correct the wrong prediction—i.e., the lookout will quickly and successfully warn the helmsman. But if the same happens in the dark, bottom-up prediction errors will be sparser and vaguer, and possibly not sufficient to correct errors—as it were, the lookout’s warning will be too faint to reach the helmsman. As ayahuasca introduces noise into brain processes,6 and because bottom-up corrections cannot be as effective as usual, hallucinations appear more easily. So the relative sensory deprivation of the environment in which the ayahuasca ritual takes place, and the absence of bodily motion, both favor the occurrence of hallucinations.

Furthermore, the ayahuasca ritual does include some sensory richness. The songs, the perfume, and the tobacco stimulate the brain in multiple ways. Psychedelic hallucinogens are known to induce synesthesia7 and to increase communication between areas and networks of the brain that do not usually communicate with each other.8 It is hence no surprise that the shamans’ songs are able to shape people’s visions. If one sensory modality is noisier or fainter than others, its role in perception will be downplayed.9 This is what happens with ayahuasca: Given that not much information can be gathered by the visual modality, most of the prediction errors that contribute to the shaping of conscious perception are those coming from the auditory and olfactory modalities. The combination of synesthetic processing with the increased weight attributed to non-visual senses enables shamans to “drive” people’s visions.

The same mechanisms explain the shamans’ recommendation that perfume should be sprayed or tobacco blown when one is faced with a bad spirit. Conscious perception—e.g., vision of a spirit—is the result of a complex tradeoff between top-down predictions and bottom-up prediction errors. If you spray a huge amount of perfume or blow wreaths of smoke around you, your brain will receive new and reliable information from the olfactory modality. Under psychedelics, sensory modalities easily influence one another; as a result, a sudden olfactory change amounts to sending prediction errors to upper regions of the brain. Conscious perception is updated accordingly: as predicted by the shamans’ recommendation, the olfactory change dissolves the vision of bad spirits.
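The tradeoff the essay describes can be sketched as precision-weighted updating: each modality’s prediction error is weighted by how reliable that modality currently is, and an unreliable (noisy, dark) visual channel contributes little to the final percept. The following is a minimal illustrative sketch, not a model from the essay; the function name and all numeric weights are assumptions chosen only to mirror the daylight-versus-ritual contrast.

```python
# Hedged sketch of precision-weighted perceptual updating, the tradeoff
# between top-down predictions and bottom-up prediction errors described
# in the passage. All numbers are illustrative assumptions, not data.

def update_percept(prediction, errors_by_modality, precision_by_modality):
    """Combine a top-down prediction with bottom-up prediction errors,
    each error weighted by the estimated reliability (precision) of its
    sensory modality. Returns the updated percept."""
    total_precision = sum(precision_by_modality.values())
    if total_precision == 0:
        return prediction  # no reliable evidence: the prediction stands
    correction = sum(
        precision_by_modality[m] * errors_by_modality[m]
        for m in errors_by_modality
    ) / total_precision
    return prediction + correction

# Daylight: vision is precise, so the visual error dominates and corrects
# the brain's wrong guess (the "lookout" reaches the "helmsman").
day = update_percept(
    prediction=0.0,
    errors_by_modality={"vision": 1.0, "audition": 0.1},
    precision_by_modality={"vision": 0.9, "audition": 0.1},
)

# Dark ritual: visual precision collapses, so audition (and olfaction)
# carry most of the weight -- which is how songs and perfume can steer
# the percept while visual corrections stay faint.
night = update_percept(
    prediction=0.0,
    errors_by_modality={"vision": 1.0, "audition": 0.1},
    precision_by_modality={"vision": 0.05, "audition": 0.6},
)
```

On these assumed weights, the same visual prediction error moves the percept far more in daylight than in the dark ritual, which is the essay’s point: when vision is down-weighted, the remaining senses shape what is consciously perceived.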

In its classical sense, hallucination refers to sensory content that is not caused by objects of the world. The above description of the ayahuasca ritual demonstrates that psychedelic visions are not, in the classical sense of the term, hallucinations. Indeed, the content of the visions is tightly tied to the environment: A change of melody in a song or an olfactory change can completely transform the content of the visions. Ayahuasca visions are not caused by hypothetical supernatural entities living in a parallel world, nor are they constructed independently of the mundane objects of the world. What are they, then? They are metaphorical perceptions.

In everyday life, melodic and olfactory changes cannot affect vision much. However, because ayahuasca experience is profoundly synesthetic and intermodal, ayahuasca visions are characteristically metaphorical: A change in one sensory modality easily affects another modality. Ayahuasca visions are not hallucinations, since they are caused by real objects and events; for example, a cloud of perfume. It is more accurate to define them as metaphorical perceptions: they are loose intermodal interpretations of things that are really there.

Michael Pollan on the science of how psychedelics can ‘shake your snow globe’
interview with Michael Pollan

We know that, for example, the so-called classic psychedelics like psilocybin, LSD, DMT, and mescaline activate a certain receptor, a serotonin receptor. And so we know that they are the key that fits that lock. But beyond that, there’s a cascade of effects that happens.

If you do brain imaging of people who are tripping, you find some very interesting patterns of activity in the brain – specifically in something called the default mode network, which is a very important hub in the brain, linking parts of the cerebral cortex to deeper, older areas having to do with memory and emotion. This network is kind of a regulator of all brain activities. One neuroscientist called it ‘the conductor of the neural symphony,’ and it’s deactivated by psychedelics, which is very interesting because the assumption going in was that they would see lots of strange activity everywhere in the brain because there are such fireworks in the experience, but in fact, this particular network almost goes offline.

Now what is this network responsible for? Well, in addition to being this transportation hub for signals in the brain, it is involved with self-reflection. It’s where we go to ruminate or mind-wander – thinking about the past or thinking about the future – therefore worrying takes place here. Our sense of self, if it can be said to have an address at all, resides in this particular brain network. So this is a very interesting clue to how psychedelics affect the brain and how they create the psychological experience, the experience in the mind, that is so transformative.

When it goes offline, parts of the brain that don’t ordinarily communicate with one another strike up conversations. And those connections may represent what people feel during the psychedelic experience as things like synaesthesia. Synaesthesia is when one sense gets cross-wired with another. And so you suddenly smell musical notes or taste things that you see.

It may produce insights. It may produce new metaphors – literally connecting the dots in new ways. Now there I’m being speculative – I’m going a little beyond what we’ve established – we know there are new connections, but we don’t know what’s happening with them, or which of them endure. But the fact is, the brain is temporarily rewired. And that rewiring – whether the new connections actually produce useful material or just shake up the system, ‘shaking the snow globe,’ as one of the neuroscientists put it – is what’s therapeutic. It is a reboot of the brain.

If you think about, you know, mental illnesses such as depression, addiction, and anxiety, many of them involve these loops of thought that we can’t control and we get stuck on these stories we tell ourselves – that we can’t get through the next hour without a drink, or we’re worthless and unworthy of love. We get stuck in these stories. This temporarily dissolves those stories and gives us a chance to write new stories.

Terence McKenna Collection

The mutation-inducing influence of diet on early humans and the effect of exotic metabolites on the evolution of their neurochemistry and culture is still unstudied territory. The early hominids’ adoption of an omnivorous diet and their discovery of the power of certain plants were decisive factors in moving early humans out of the stream of animal evolution and into the fast-rising tide of language and culture. Our remote ancestors discovered that certain plants, when self-administered, suppress appetite, diminish pain, supply bursts of sudden energy, confer immunity against pathogens, and synergize cognitive activities. These discoveries set us on the long journey to self-reflection. Once we became tool-using omnivores, evolution itself changed from a process of slow modification of our physical form to a rapid definition of cultural forms by the elaboration of rituals, languages, writing, mnemonic skills, and technology.

Food of the Gods
by Terence McKenna
pp. 24-29

Because scientists were unable to explain this tripling of the human brain size in so short a span of evolutionary time, some of the early primate paleontologists and evolutionary theorists predicted and searched for evidence of transitional skeletons. Today the idea of a “missing link” has largely been abandoned. Bipedalism, binocular vision, the opposable thumb, the throwing arm: all have been put forth as the key ingredient in the mix that caused self-reflecting humans to crystallize out of the caldron of competing hominid types and strategies. Yet all we really know is that the shift in brain size was accompanied by remarkable changes in the social organization of the hominids. They became users of tools, fire, and language. They began the process as higher animals and emerged from it 100,000 years ago as conscious, self-aware individuals.

THE REAL MISSING LINK

My contention is that mutation-causing, psychoactive chemical compounds in the early human diet directly influenced the rapid reorganization of the brain’s information-processing capacities. Alkaloids in plants, specifically the hallucinogenic compounds such as psilocybin, dimethyltryptamine (DMT), and harmaline, could be the chemical factors in the protohuman diet that catalyzed the emergence of human self-reflection. The action of hallucinogens present in many common plants enhanced our information processing activity, or environmental sensitivity, and thus contributed to the sudden expansion of the human brain size. At a later stage in this same process, hallucinogens acted as catalysts in the development of imagination, fueling the creation of internal stratagems and hopes that may well have synergized the emergence of language and religion.

In research done in the late 1960s, Roland Fischer gave small amounts of psilocybin to graduate students and then measured their ability to detect the moment when previously parallel lines became skewed. He found that performance ability on this particular task was actually improved after small doses of psilocybin.5

When I discussed these findings with Fischer, he smiled after explaining his conclusions, then summed up, “You see what is conclusively proven here is that under certain circumstances one is actually better informed concerning the real world if one has taken a drug than if one has not.” His facetious remark stuck with me, first as an academic anecdote, later as an effort on his part to communicate something profound. What would be the consequences for evolutionary theory of admitting that some chemical habits confer adaptive advantage and thereby become deeply scripted in the behavior and even genome of some individuals?

THREE BIG STEPS FOR THE HUMAN RACE

In trying to answer that question I have constructed a scenario, some may call it fantasy; it is the world as seen from the vantage point of a mind for which the millennia are but seasons, a vision that years of musing on these matters has moved me toward. Let us imagine for a moment that we stand outside the surging gene swarm that is biological history, and that we can see the interwoven consequences of changes in diet and climate, which must certainly have been too slow to be felt by our ancestors. The scenario that unfolds involves the interconnected and mutually reinforcing effects of psilocybin taken at three different levels. Unique in its properties, psilocybin is the only substance, I believe, that could yield this scenario.

At the first, low, level of usage is the effect that Fischer noted: small amounts of psilocybin, consumed with no awareness of its psychoactivity while in the general act of browsing for food, and perhaps later consumed consciously, impart a noticeable increase in visual acuity, especially edge detection. As visual acuity is at a premium among hunter-gatherers, the discovery of the equivalent of “chemical binoculars” could not fail to have an impact on the hunting and gathering success of those individuals who availed themselves of this advantage. Partnership groups containing individuals with improved eyesight will be more successful at feeding their offspring. Because of the increase in available food, the offspring within such groups will have a higher probability of themselves reaching reproductive age. In such a situation, the outbreeding (or decline) of non-psilocybin-using groups would be a natural consequence.
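The selection logic here is compounding differential reproduction: if one group’s offspring reach reproductive age slightly more often, that group’s share of the population grows generation after generation. A toy sketch, with purely illustrative growth rates (the function and numbers are assumptions, not anything McKenna quantifies):

```python
# Toy sketch of the selection argument in the passage: a small
# per-generation reproductive edge compounds until the advantaged
# group dominates. Growth rates are illustrative assumptions.

def share_after(generations, users=100, others=100,
                user_rate=1.05, other_rate=1.00):
    """Fraction of the population belonging to the advantaged group
    after `generations` rounds of differential reproduction."""
    for _ in range(generations):
        users *= user_rate
        others *= other_rate
    return users / (users + others)

# Starting from equal numbers, even a 5% per-generation edge moves
# the advantaged group's share from 50% steadily toward fixation.
```

The point is only qualitative: no single generation shows a dramatic difference, yet over enough generations the non-advantaged group’s share dwindles, which is the “outbreeding (or decline)” the passage invokes.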

Because psilocybin is a stimulant of the central nervous system, when taken in slightly larger doses, it tends to trigger restlessness and sexual arousal. Thus, at this second level of usage, by increasing instances of copulation, the mushrooms directly favored human reproduction. The tendency to regulate and schedule sexual activity within the group, by linking it to a lunar cycle of mushroom availability, may have been important as a first step toward ritual and religion. Certainly at the third and highest level of usage, religious concerns would be at the forefront of the tribe’s consciousness, simply because of the power and strangeness of the experience itself. This third level, then, is the level of the full-blown shamanic ecstasy. The psilocybin intoxication is a rapture whose breadth and depth is the despair of prose. It is wholly Other and no less mysterious to us than it was to our mushroom-munching ancestors. The boundary-dissolving qualities of shamanic ecstasy predispose hallucinogen-using tribal groups to community bonding and to group sexual activities, which promote gene mixing, higher birth rates, and a communal sense of responsibility for the group offspring.

At whatever dose the mushroom was used, it possessed the magical property of conferring adaptive advantages upon its archaic users and their group. Increased visual acuity, sexual arousal, and access to the transcendent Other led to success in obtaining food, sexual prowess and stamina, abundance of offspring, and access to realms of supernatural power. All of these advantages can be easily self-regulated through manipulation of dosage and frequency of ingestion. Chapter 4 will detail psilocybin’s remarkable property of stimulating the language-forming capacity of the brain. Its power is so extraordinary that psilocybin can be considered the catalyst to the human development of language.

STEERING CLEAR OF LAMARCK

An objection to these ideas inevitably arises and should be dealt with. This scenario of human emergence may seem to smack of Lamarckism, which theorizes that characteristics acquired by an organism during its lifetime can be passed on to its progeny. The classic example is the claim that giraffes have long necks because they stretch their necks to reach high branches.

This straightforward and rather common-sense idea is absolutely anathema among neo-Darwinians, who currently hold the high ground in evolutionary theory. Their position is that mutations are entirely random and that only after the mutations are expressed as the traits of organisms does natural selection mindlessly and dispassionately fulfill its function of preserving those individuals upon whom an adaptive advantage has been conferred.

Their objection can be put like this: While the mushrooms may have given us better eyesight, sex, and language when eaten, how did these enhancements get into the human genome and become innately human? Nongenetic enhancements of an organism’s functioning made by outside agents retard the corresponding genetic reservoirs of those facilities by rendering them superfluous. In other words, if a necessary metabolite is common in available food, there will not be pressure to develop a trait for endogenous expression of the metabolite. Mushroom use would thus create individuals with less visual acuity, language facility, and consciousness. Nature would not provide those enhancements through organic evolution because the metabolic investment required to sustain them wouldn’t pay off, relative to the tiny metabolic investment required to eat mushrooms. And yet today we all have these enhancements, without taking mushrooms. So how did the mushroom modifications get into the genome?

The short answer to this objection, one that requires no defense of Lamarck’s ideas, is that the presence of psilocybin in the hominid diet changed the parameters of the process of natural selection by changing the behavioral patterns upon which that selection was operating. Experimentation with many types of foods was causing a general increase in the numbers of random mutations being offered up to the process of natural selection, while the augmentation of visual acuity, language use, and ritual activity through the use of psilocybin represented new behaviors. One of these new behaviors, language use, previously only a marginally important trait, was suddenly very useful in the context of new hunting and gathering lifestyles. Hence psilocybin inclusion in the diet shifted the parameters of human behavior in favor of patterns of activity that promoted increased language; acquisition of language led to more vocabulary and an expanded memory capacity. The psilocybin-using individuals evolved epigenetic rules or cultural forms that enabled them to survive and reproduce better than other individuals. Eventually the more successful epigenetically based styles of behavior spread through the populations along with the genes that reinforce them. In this fashion the population would evolve genetically and culturally.

As for visual acuity, perhaps the widespread need for corrective lenses among modern humans is a legacy of the long period of “artificial” enhancement of vision through psilocybin use. After all, atrophy of the olfactory abilities of human beings is thought by one school to be a result of a need for hungry omnivores to tolerate strong smells and tastes, perhaps even carrion. Trade-offs of this sort are common in evolution. The suppression of keenness of taste and smell would allow inclusion of foods in the diet that might otherwise be passed over as “too strong.” Or it may indicate something more profound about our evolutionary relationship to diet. My brother Dennis has written:

The apparent atrophy of the human olfactory system may actually represent a functional shift in a set of primitive, externally directed chemo-receptors to an interiorized regulatory function. This function may be related to the control of the human pheromonal system, which is largely under the control of the pineal gland, and which mediates, on a subliminal level, a host of psycho-sexual and psycho-social interactions between individuals. The pineal tends to suppress gonadal development and the onset of puberty, among other functions, and this mechanism may play a role in the persistence of neonatal characteristics in the human species. Delayed maturation and prolonged childhood and adolescence play a critical role in the neurological and psychological development of the individual, since they provide the circumstances which permit the post-natal development of the brain in the early, formative years of childhood. The symbolic, cognitive and linguistic stimuli that the brain experiences during this period are essential to its development and are the factors that make us the unique, conscious, symbol-manipulating, language-using beings that we are.

Neuroactive amines and alkaloids in the diet of early primates may have played a role in the biochemical activation of the pineal gland and the resulting adaptations.

pp. 46-60

HUMAN COGNITION

All the unique characteristics and preoccupations of human beings can be summed up under the heading of cognitive activities: dance, philosophy, painting, poetry, sport, meditation, erotic fantasy, politics, and ecstatic self-intoxication. We are truly Homo sapiens, the thinking animal; our acts are all a product of the dimension that is uniquely ours, the dimension of cognitive activity. Of thought and emotion, memory and anticipation. Of Psyche.

From observing the ayahuasca-using people of the Upper Amazon, it became very clear to me that shamanism is often intuitively guided group decision making. The shamans decide when the group should move or hunt or make war. Human cognition is an adaptive response that is profoundly flexible in the way it allows us to manage what in other species are genetically programmed behaviors.

We alone live in an environment that is conditioned not only by the biological and physical constraints to which all species are subject but also by symbols and language. Our human environment is conditioned by meaning. And meaning lies in the collective mind of the group.

Symbols and language allow us to act in a dimension that is “supranatural,” outside the ordinary activities of other forms of organic life. We can actualize our cultural assumptions, alter and shape the natural world in the pursuit of ideological ends and according to the internal model of the world that our symbols have empowered us to create. We do this through the elaboration of ever more effective, and hence ever more destructive, artifacts and technologies, which we feel compelled to use.

Symbols allow us to store information outside of the physical brain. This creates for us a relationship to the past very different from that of our animal companions. Finally, we must add to any analysis of the human picture the notion of self-directed modification of activity. We are able to modify our behavior patterns based on a symbolic analysis of past events, in other words, through history. Through our ability to store and recover information as images and written records, we have created a human environment as much conditioned by symbols and languages as by biological and environmental factors.

TRANSFORMATIONS OF MONKEYS

The evolutionary breakouts that led to the appearance of language and, later, writing are examples of fundamental, almost ontological, transformations of the hominid line. Besides providing us with the ability to code data outside the confines of DNA, cognitive activities allow us to transmit information across space and time. At first this amounted merely to the ability to shout a warning or a command, really little more than a modification of the cry of alarm that is a familiar feature of the behavior of social animals. Over the course of human history this impulse to communicate has motivated the elaboration of ever more effective communication techniques. But by our century, this basic ability has turned into the all-pervasive communications media, which literally engulf the space surrounding our planet. The planet swims through a self-generated ocean of messages. Telephone calls, data exchanges, and electronically transmitted entertainment create an invisible world experienced as global informational simultaneity. We think nothing of this; as a culture we take it for granted.

Our unique and feverish love of word and symbol has given us a collective gnosis, a collective understanding of ourselves and our world that has survived throughout history until very recent times. This collective gnosis lies behind the faith of earlier centuries in “universal truths” and common human values. Ideologies can be thought of as meaning-defined environments. They are invisible, yet they surround us and determine for us, though we may never realize it, what we should think about ourselves and reality. Indeed they define for us what we can think.

The rise of globally simultaneous electronic culture has vastly accelerated the rate at which we each can obtain information necessary to our survival. This and the sheer size of the human population as a whole have brought to a halt our physical evolution as a species. The larger a population is, the less impact mutations will have on the evolution of that species. This fact, coupled with the development of shamanism and, later, scientific medicine, has removed us from the theater of natural selection. Meanwhile libraries and electronic databases have replaced the individual human mind as the basic hardware providing storage for the cultural database. Symbols and languages have gradually moved us away from the style of social organization that characterized the mute nomadism of our remote ancestors and have replaced that archaic model with the vastly more complicated social organization characteristic of an electronically unified planetary society. As a result of these changes, we ourselves have become largely epigenetic, meaning that much of what we are as human beings is no longer in our genes but in our culture.

THE PREHISTORIC EMERGENCE OF HUMAN IMAGINATION

Our capacity for cognitive and linguistic activity is related to the size and organization of the human brain. Neural structures concerned with conceptualization, visualization, signification, and association are highly developed in our species. Through the act of speaking vividly, we enter into a flirtation with the domain of the imagination. The ability to associate sounds, or the small mouth noises of language, with meaningful internal images is a synesthetic activity. The most recently evolved areas of the human brain, Broca’s area and the neocortex, are devoted to the control of symbol and language processing.

The conclusion universally drawn from these facts is that the highly organized neurolinguistic areas of our brain have made language and culture possible. Where the search for scenarios of human emergence and social organization is concerned, the problem is this: we know that our linguistic abilities must have evolved in response to enormous evolutionary pressures, but we do not know what these pressures were.

Where psychoactive plant use was present, hominid nervous systems over many millennia would have been flooded by hallucinogenic realms of strange and alien beauty. However, evolutionary necessity channels the organism’s awareness into a narrow cul-de-sac where ordinary reality is perceived through the reducing valve of the senses. Otherwise, we would be rather poorly adapted for the rough-and-tumble of immediate existence. As creatures with animal bodies, we are aware that we are subject to a range of immediate concerns that we can ignore only at great peril. As human beings we are also aware of an interior world, beyond the needs of the animal body, but evolutionary necessity has placed that world far from ordinary consciousness.

PATTERNS AND UNDERSTANDING

Consciousness has been called “awareness of awareness” and is characterized by novel associations and connections among the various data of experience. Consciousness is like a super nonspecific immune response. The key to the working of the immune system is the ability of one chemical to recognize, to have a key-in-lock relationship with, another. Thus both the immune system and consciousness represent systems that learn, recognize, and remember.

As I write this I think of what Alfred North Whitehead said about understanding, that it is apperception of pattern as such. This is also a perfectly acceptable definition of consciousness. Awareness of pattern conveys the feeling that attends understanding. There presumably can be no limit to how much consciousness a species can acquire, since understanding is not a finite project with an imaginable conclusion, but rather a stance toward immediate experience. This appears self-evident from within a world view that sees consciousness as analogous to a source of light. The more powerful the light, the greater the surface area of darkness revealed. Consciousness is the moment-to-moment integration of the individual’s perception of the world. How well, one could almost say how gracefully, an individual accomplishes this integration determines that individual’s unique adaptive response to existence.

We are masters not only of individual cognitive activity, but, when acting together, of group cognitive activity as well. Cognitive activity within a group usually means the elaboration and manipulation of symbols and language. Although this occurs in many species, within the human species it is especially well developed. Our immense power to manipulate symbols and language gives us our unique position in the natural world. The power of our magic and our science arises out of our commitment to group mental activity, symbol sharing, meme replication (the spreading of ideas), and the telling of tall tales.

The idea, expressed above, that ordinary consciousness is the end product of a process of extensive compression and filtration, and that the psychedelic experience is the antithesis of this construction, was put forward by Aldous Huxley. In analyzing his experiences with mescaline, Huxley wrote:

I find myself agreeing with the eminent Cambridge philosopher, Dr. C. D. Broad, “that we should do well to consider the suggestion that the function of the brain and nervous system and sense organs is in the main eliminative and not productive.” The function of the brain and nervous system is to protect us from being overwhelmed and confused by this mass of largely useless and irrelevant knowledge, by shutting out most of what we should otherwise perceive or remember at any moment, and leaving only that very small and special selection which is likely to be practically useful. According to such a theory, each one of us is potentially Mind at Large. But in so far as we are animals, our business is at all costs to survive. To make biological survival possible, Mind at Large has to be funnelled through the reducing valve of the brain and nervous system. What comes out at the other end is a measly trickle of the kind of consciousness which will help us to stay alive on the surface of this particular planet. To formulate and express the contents of this reduced awareness, man has invented and endlessly elaborated those symbol-systems and implicit philosophies which we call languages. Every individual is at once the beneficiary and the victim of the linguistic tradition into which he has been born. That which, in the language of religion, is called “this world” is the universe of reduced awareness, expressed, and, as it were, petrified by language. The various “other worlds” with which human beings erratically make contact are so many elements in the totality of the awareness belonging to Mind at Large. . . . Temporary by-passes may be acquired either spontaneously, or as the result of deliberate “spiritual exercises,” . . . or by means of drugs.

What Huxley did not mention was that drugs, specifically the plant hallucinogens, can reliably and repeatedly open the floodgates of the reducing valve of consciousness and expose the individual to the full force of the howling Tao. The way in which we internalize the impact of this experience of the Unspeakable, whether encountered through psychedelics or other means, is to generalize and extrapolate our world view through acts of imagination. These acts of imagination represent our adaptive response to information concerning the outside world that is conveyed to us by our senses. In our species, culture-specific, situation-specific syntactic software in the form of language can compete with and sometimes replace the instinctual world of hard-wired animal behavior. This means that we can learn and communicate experience and thus put maladaptive behaviors behind us. We can collectively recognize the virtues of peace over war, or of cooperation over struggle. We can change.

As we have seen, human language may have arisen when primate organizational potential was synergized by plant hallucinogens. The psychedelic experience inspired us to true self-reflective thought in the first place and then further inspired us to communicate our thoughts about it.

Others have sensed the importance of hallucinations as catalysts of human psychic organization. Julian Jaynes’s theory, presented in his controversial book The Origin of Consciousness in the Breakdown of the Bicameral Mind, makes the point that major shifts in human self-definition may have occurred even in historical times. He proposes that through Homeric times people did not have the kind of interior psychic organization that we take for granted. Thus, what we call ego was for Homeric people a “god.” When danger threatened suddenly, the god’s voice was heard in the individual’s mind; an intrusive and alien psychic function was expressed as a kind of metaprogram for survival called forth under moments of great stress. This psychic function was perceived by those experiencing it as the direct voice of a god, of the king, or of the king in the afterlife. Merchants and traders moving from one society to another brought the unwelcome news that the gods were saying different things in different places, and so cast early seeds of doubt. At some point people integrated this previously autonomous function, and each person became the god and reinterpreted the inner voice as the “self” or, as it was later called, the “ego.”

Jaynes’s theory has been largely dismissed. Regrettably his book on the impact of hallucinations on culture, though 467 pages in length, manages to avoid discussion of hallucinogenic plants or drugs nearly entirely. By this omission Jaynes deprived himself of a mechanism that could reliably drive the kind of transformative changes he saw taking place in the evolution of human consciousness.

CATALYZING CONSCIOUSNESS

The impact of hallucinogens in the diet has been more than psychological; hallucinogenic plants may have been the catalysts for everything about us that distinguishes us from other higher primates, for all the mental functions that we associate with humanness. Our society more than others will find this theory difficult to accept, because we have made pharmacologically obtained ecstasy a taboo. Like sexuality, altered states of consciousness are taboo because they are consciously or unconsciously sensed to be entwined with the mysteries of our origin, with where we came from and how we got to be the way we are. Such experiences dissolve boundaries and threaten the order of the reigning patriarchy and the domination of society by the unreflecting expression of ego. Yet consider how plant hallucinogens may have catalyzed the use of language, the most unique of human activities.

One has, in a hallucinogenic state, the incontrovertible impression that language possesses an objectified and visible dimension, which is ordinarily hidden from our awareness. Language, under such conditions, is seen, is beheld, just as we would ordinarily see our homes and normal surroundings. In fact our ordinary cultural environment is correctly recognized, during the experience of the altered state, as the bass drone in the ongoing linguistic business of objectifying the imagination. In other words, the collectively designed cultural environment in which we all live is the objectification of our collective linguistic intent.

Our language-forming ability may have become active through the mutagenic influence of hallucinogens working directly on organelles that are concerned with the processing and generation of signals. These neural substructures are found in various portions of the brain, such as Broca’s area, that govern speech formation. In other words, opening the valve that limits consciousness forces utterance, almost as if the word is a concretion of meaning previously felt but left unarticulated. This active impulse to speak, the “going forth of the word,” is sensed and described in the cosmogonies of many peoples.

Psilocybin specifically activates the areas of the brain concerned with processing signals. A common occurrence with psilocybin intoxication is spontaneous outbursts of poetry and other vocal activity such as speaking in tongues, though in a manner distinct from ordinary glossolalia. In cultures with a tradition of mushroom use, these phenomena have given rise to the notion of discourse with spirit doctors and supernatural allies. Researchers familiar with the territory agree that psilocybin has a profoundly catalytic effect on the linguistic impulse.

Once activities involving syntactic self-expression were established habits among early human beings, the continued evolution of language in environments where mushrooms were scarce or unavailable permitted a tendency toward the expression and emergence of the ego. If the ego is not regularly and repeatedly dissolved in the unbounded hyperspace of the Transcendent Other, there will always be slow drift away from the sense of self as part of nature’s larger whole. The ultimate consequence of this drift is the fatal ennui that now permeates Western civilization.

The connection between mushrooms and language was brilliantly anticipated by Henry Munn in his essay "The Mushrooms of Language": "Language is an ecstatic activity of signification. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. The spontaneity the mushrooms liberate is not only perceptual, but linguistic. For the shaman, it is as if existence were uttering itself through him."

THE FLESH MADE WORD

The evolutionary advantages of the use of speech are both obvious and subtle. Many unusual factors converged at the birth of human language. Obviously speech facilitates communication and cognitive activity, but it also may have had unanticipated effects on the whole human enterprise.

Some neurophysiologists have hypothesized that the vocal vibration associated with human use of language caused a kind of cleansing of the cerebrospinal fluid. It has been observed that vibrations can precipitate and concentrate small molecules in the spinal fluid, which bathes and continuously purifies the brain. Our ancestors may have, consciously or unconsciously, discovered that vocal sound cleared the chemical cobwebs out of their heads. This practice may have affected the evolution of our present-day thin skull structure and proclivity for language. A self-regulated process as simple as singing might well have positive adaptive advantages if it also made the removal of chemical waste from the brain more efficient. The following excerpt supports this provocative idea:

Vibrations of human skull, as produced by loud vocalization, exert a massaging effect on the brain and facilitate elution of metabolic products from the brain into the cerebrospinal fluid (CSF) . . . . The Neanderthals had a brain 15% larger than we have, yet they did not survive in competition with modern humans. Their brains were more polluted, because their massive skulls did not vibrate and therefore the brains were not sufficiently cleaned. In the evolution of the modern humans the thinning of cranial bones was important.

As already discussed, hominids and hallucinogenic plants must have been in close association for a long span of time, especially if we want to suggest that actual physical changes in the human genome resulted from the association. The structure of the soft palate in the human infant and timing of its descent is a recent adaptation that facilitates the acquisition of language. No other primate exhibits this characteristic. This change may have been a result of selective pressure on mutations originally caused by the new omnivorous diet.

WOMEN AND LANGUAGE

Women, the gatherers in the Archaic hunter-gatherer equation, were under much greater pressure to develop language than were their male counterparts. Hunting, the prerogative of the larger male, placed a premium on strength, stealth, and stoic waiting. The hunter was able to function quite well on a very limited number of linguistic signals, as is still the case among hunting peoples such as the !Kung or the Maku.

For gatherers, the situation was different. Those women with the largest repertoire of communicable images of foods and their sources and secrets of preparation were unquestionably placed in a position of advantage. Language may well have arisen as a mysterious power possessed largely by women: women who spent much more of their waking time together, and usually talking, than did men; women who in all societies are seen as group-minded, in contrast to the lone male image, which is the romanticized version of the alpha male of the primate troop.

The linguistic accomplishments of women were driven by a need to remember and describe to each other a variety of locations and landmarks as well as numerous taxonomic and structural details about plants to be sought or avoided. The complex morphology of the natural world propelled the evolution of language toward modeling of the world beheld. To this day a taxonomic description of a plant is a Joycean thrill to read: “Shrub 2 to 6 feet in height, glabrous throughout. Leaves mostly opposite, some in threes or uppermost alternate, sessile, linear-lanceolate or lanceolate, acute or acuminate. Flowers solitary in axils, yellow, with aroma, pedicellate. Calyx campanulate, petals soon caducous, obovate” and so on for many lines.

The linguistic depth women attained as gatherers eventually led to a momentous discovery: the discovery of agriculture. I call it momentous because of its consequences. Women realized that they could simply grow a restricted number of plants. As a result, they learned the needs of only those few plants, embraced a sedentary lifestyle, and began to forget the rest of nature they had once known so well.

At that point the retreat from the natural world began, and the dualism of humanity versus nature was born. As we will soon see, one of the places where the old goddess culture died, Çatal Hüyük, in present-day Anatolian Turkey, is the very place where agriculture may have first arisen. At places like Çatal Hüyük and Jericho, humans and their domesticated plants and animals became for the first time physically and psychologically separate from the life of untamed nature and the howling unknown. Use of hallucinogens can only be sanctioned in hunting and gathering societies. When agriculturists use these plants, they are unable to get up at dawn the morning after and go hoe the fields. At that point, corn and grain become gods, gods that symbolize domesticity and hard labor. These replace the old goddesses of plant-induced ecstasy.

Agriculture brings with it the potential for overproduction, which leads to excess wealth, hoarding, and trade. Trade leads to cities; cities isolate their inhabitants from the natural world. Paradoxically, more efficient utilization of plant resources through agriculture led to a breaking away from the symbiotic relationship that had bound human beings to nature. I do not mean this metaphorically. The ennui of modernity is the consequence of a disrupted quasi-symbiotic relationship between ourselves and Gaian nature. Only a restoration of this relationship in some form is capable of carrying us into a full appreciation of our birthright and sense of ourselves as complete human beings.

HABIT AS CULTURE AND RELIGION

At regular intervals that were probably lunar, the ordinary activities of the small nomadic group of herders were put aside. Rains usually followed the new moon in the tropics, making mushrooms plentiful. Gatherings took place at night; night is the time of magical projection and hallucinations, and visions are more easily obtained in darkness. The whole clan was present from oldest to youngest. Elders, especially shamans, usually women but often men, doled out each person’s dose. Each clan member stood before the group and reflectively chewed and swallowed the body of the Goddess before returning to his or her place in the circle. Bone flutes and drums wove within the chanting. Line dances with heavy foot stamping channeled the energy of the first wave of visions. Suddenly the elders signal silence.

In the motionless darkness each mind follows its own trail of sparks into the bush while some people keen softly. They feel fear, and they triumph over fear through the strength of the group. They feel relief mingled with wonder at the beauty of the visionary expanse; some spontaneously reach out to those nearby in simple affection and an impulse for closeness or in erotic desire. An individual feels no distance between himself or herself and the rest of the clan or between the clan and the world. Identity is dissolved in the higher wordless truth of ecstasy. In that world, all divisions are overcome. There is only the One Great Life; it sees itself at play, and it is glad.

The impact of plants on the evolution of culture and consciousness has not been widely explored, though a conservative form of this notion appears in R. Gordon Wasson’s The Road to Eleusis. Wasson does not comment on the emergence of self-reflection in hominids, but does suggest hallucinogenic mushrooms as the causal agent in the appearance of spiritually aware human beings and the genesis of religion. Wasson feels that omnivorous foraging humans would have sooner or later encountered hallucinogenic mushrooms or other psychoactive plants in their environment:

As man emerged from his brutish past, thousands of years ago, there was a stage in the evolution of his awareness when the discovery of the mushroom (or was it a higher plant?) with miraculous properties was a revelation to him, a veritable detonator to his soul, arousing in him sentiments of awe and reverence, and gentleness and love, to the highest pitch of which mankind is capable, all those sentiments and virtues that mankind has ever since regarded as the highest attribute of his kind. It made him see what this perishing mortal eye cannot see. How right the Greeks were to hedge about this Mystery, this imbibing of the potion with secrecy and surveillance! . . . Perhaps with all our modern knowledge we do not need the divine mushroom anymore. Or do we need them more than ever? Some are shocked that the key even to religion might be reduced to a mere drug. On the other hand, the drug is as mysterious as it ever was: "like the wind that comes we know not whence nor why." Out of a mere drug comes the ineffable, comes ecstasy. It is not the only instance in the history of humankind where the lowly has given birth to the divine.

Scattered across the African grasslands, the mushrooms would be especially noticeable to hungry eyes because of their inviting smell and unusual form and color. Once having experienced the state of consciousness induced by the mushrooms, foraging humans would return to them repeatedly, in order to reexperience their bewitching novelty. This process would create what C. H. Waddington called a "creode," a pathway of developmental activity, what we call a habit.

ECSTASY

We have already mentioned the importance of ecstasy for shamanism. Among early humans a preference for the intoxication experience was ensured simply because the experience was ecstatic. “Ecstatic” is a word central to my argument and preeminently worthy of further attention. It is a notion that is forced on us whenever we wish to indicate an experience or a state of mind that is cosmic in scale. An ecstatic experience transcends duality; it is simultaneously terrifying, hilarious, awe-inspiring, familiar, and bizarre. It is an experience that one wishes to have over and over again.

For a minded and language-using species like ourselves, the experience of ecstasy is not perceived as simple pleasure but, rather, is incredibly intense and complex. It is tied up with the very nature of ourselves and our reality, our languages, and our imaginings of ourselves. It is fitting, then, that it is enshrined at the center of shamanic approaches to existence. As Mircea Eliade pointed out, shamanism and ecstasy are at root one concern:

This shamanic complex is very old; it is found, in whole or in part, among the Australians, the archaic peoples of North and South America, in the polar regions, etc. The essential and defining element of shamanism is ecstasy: the shaman is a specialist in the sacred, able to abandon his body and undertake cosmic journeys "in the spirit" (in trance). "Possession" by spirits, although documented in a great many shamanisms, does not seem to have been a primary and essential element. Rather, it suggests a phenomenon of degeneration; for the supreme goal of the shaman is to abandon his body and rise to heaven or descend into hell, not to let himself be "possessed" by his assisting spirits, by demons or the souls of the dead; the shaman's ideal is to master these spirits, not to let himself be "occupied" by them.

Gordon Wasson added these observations on ecstasy:

In his trance the shaman goes on a far journey, to the place of the departed ancestors, or the nether world, or there where the gods dwell, and this wonderland is, I submit, precisely where the hallucinogens take us. They are a gateway to ecstasy. Ecstasy in itself is neither pleasant nor unpleasant. The bliss or panic into which it plunges you is incidental to ecstasy. When you are in a state of ecstasy, your very soul seems scooped out from your body and away it goes. Who controls its flight? Is it you, or your "subconscious," or a "higher power"? Perhaps it is pitch dark, yet you see and hear more clearly than you have ever seen or heard before. You are at last face to face with Ultimate Truth: this is the overwhelming impression (or illusion) that grips you. You may visit Hell, or the Elysian fields of Asphodel, or the Gobi desert, or Arctic wastes. You know awe, you know bliss, and fear, even terror. Everyone experiences ecstasy in his own way, and never twice in the same way. Ecstasy is the very essence of shamanism. The neophyte from the great world associates the mushrooms primarily with visions, but for those who know the Indian language of the shaman the mushrooms "speak" through the shaman. The mushroom is the Word: es habla, as Aurelio told me. The mushroom bestows on the curandero what the Greeks called Logos, the Aryan Vac, Vedic Kavya, "poetic potency," as Louis Renou put it. The divine afflatus of poetry is the gift of the entheogen. The textual exegete skilled only in dissecting the cruces of the verses lying before him is of course indispensable and his shrewd observations should have our full attention, but unless gifted with Kavya, he does well to be cautious in discussing the higher reaches of Poetry. He dissects the verses but knows not ecstasy, which is the soul of the verses.

The Magic Language of the Fourth Way
by Pierre Bonnasse
pp. 228-234

Speech, just like sacred medicine, forms the basis of the shamanic path in that it permits us not only to see but also to do. Ethnobotany, the science that studies man as a function of his relationship to the plants around him, offers us new paths of reflection, explaining our relationship to language from a new angle that reconsiders all human evolution in a single movement. It now appears clear that the greatest power of the shaman, that master of ecstasy, resides in his mastery of the magic word stimulated by the ingestion of modifiers of consciousness.

For the shaman, language produces reality, our world being made of language. Terence McKenna, in his revolutionary endeavor to rethink human evolution, shows how plants have been able to influence the development of humans and animals. 41 He explains why farming and the domestication of animals as livestock were a great step forward in our cultural evolution: It was at this moment, according to him, that we were able to come into contact with the Psilocybe mushroom, which grows on and around dung. He supports the idea that “mutation-causing, psychoactive chemical compounds in the early human diet directly influenced the rapid reorganization of the brain’s information-processing capacities.” 42 Further, because “thinking about human evolution ultimately means thinking about the evolution of human consciousness,” he supports the thesis that psychedelic plants “may well have synergized the emergence of language and religion.” 43

Studies undertaken by Fischer have shown that weak doses of psilocybin can improve certain types of mental performance while making the investigator more aware of the real world. McKenna distinguishes three degrees of effects of psilocybin: improvement of visual acuity, increase of sexual excitation, and, at higher doses, “certainly . . . religious concerns would be at the forefront of the tribe’s consciousness, simply because of the power and strangeness of the experience itself.” 44 Because “the psilocybin intoxication is a rapture whose breadth and depth is the despair of prose,” it is entirely clear to McKenna that shamanic ecstasy, characterized by its “boundary-dissolving qualities,” played a crucial role in the evolution of human consciousness, which, according to him, can be attributed to “psilocybin’s remarkable property of stimulating the language-forming capacity of the brain.” Indeed, “[i]ts power is so extraordinary that psilocybin can be considered the catalyst to the human development of language.” 45 In response to the neo-Darwinist objection, McKenna states that “the presence of psilocybin in the hominid diet changed the parameters of the process of natural selection by changing the behavioral patterns upon which that selection was operating,” and that “the augmentation of visual acuity, language use, and ritual activity through the use of psilocybin represented new behaviors.” 46

Be that as it may, it is undeniable that the unlimiters of consciousness, as Charles Duits calls them, have a real impact upon linguistic activity in that they strongly stimulate the emergence of speech. If, according to McKenna’s theories, “psilocybin inclusion in the diet shifted the parameters of human behavior in favor of patterns of activity that promoted increased language,” resulting in “more vocabulary and an expanded memory capacity,” 47 then it seems obvious that the birth of poetry, literature, and all the arts came about ultimately through the fantastic encounter between humans and the magic mushroom—a primordial plant, the “umbilical cord linking us to the feminine spirit of the planet,” and thence, inevitably, to poetry. Rich in behavioral and evolutionary consequences, the mushroom, in its dynamic relationship to the human being, propelled us toward higher cultural levels developing parallel to self-reflection. 48

This in no way means that this level of consciousness is inherent in all people, but it must be observed that the experience in itself leads to a gaining of consciousness which, in order to be preserved and maintained, requires rigorous and well-directed work on ourselves. This being said, the experience allows us to observe this action in ourselves in order to endeavor to understand its subtle mechanisms. Terence McKenna writes,

Of course, imagining these higher states of self-reflection is not easy. For when we seek to do this we are acting as if we expect language to somehow encompass that which is, at present, beyond language, or translinguistic. Psilocybin, the hallucinogen unique to mushrooms, is an effective tool in this situation. Psilocybin’s main synergistic effect seems ultimately to be in the domain of language. It excites vocalization; it empowers articulation; it transmutes language into something that is visibly beheld. It could have had an impact on the sudden emergence of consciousness and language use in early humans. We literally may have eaten our way to higher consciousness. 49

If we espouse this hypothesis, then speaking means evoking and repeating the primordial act of eating the sacred medicine. Ethnobotanists insist upon the role of the human brain in the accomplishment of this process, pinpointing precisely the relevant area of activity, which, in Gurdjieffian terms, is located in the center of gravity of the intellectual center: “Our capacity for cognitive and linguistic activity is related to the size and organization of the human brain. . . . The most recently evolved areas of the human brain, Broca’s area and the neocortex, are devoted to the control of symbol and language processing.” 50 It thus appears that these are the areas of the brain that have allowed for the emergence of language and culture. Yet McKenna adds, “our linguistic abilities must have evolved in response to enormous evolutionary pressures,” though we do not know the nature of these pressures. According to him, it is this “immense power to manipulate symbols and language” that “gives us our unique position in the natural world.” 51 This is obvious, in that speech and consciousness, inextricably linked, are solely the property of humans. Thus it seems logical that the plants known as psychoactive must have been the catalysts “for everything about us that distinguishes us from other higher primates, for all the mental functions that we associate with humanness,” 52 with the primary position being held by language, “the most unique of human activities,” and the catalyst for poetic and literary activity.

Under the influence of an unlimiter, we have the incontrovertible impression that language possesses an objectified and visible dimension that is ordinarily hidden from our awareness. Under such conditions, language is seen and beheld just as we would ordinarily see our homes and normal surroundings. In fact, during the experience of the altered state, our ordinary cultural environment is recognized correctly as the bass drone in the ongoing linguistic business of objectifying the imagination. In other words, the collectively designed cultural environment in which we all live is the objectification of our collective linguistic intent.

Our language-forming ability may have become active through the mutagenic influence of hallucinogens working directly on organelles that are concerned with the processing and generation of signals. These neural substructures are found in various portions of the brain, such as Broca’s area, that govern speech formation. In other words, opening the valve that limits consciousness forces utterance, almost as if the word is a concretion of meaning previously felt but left unarticulated. This active impulse to speak, the “going forth of the word,” is sensed and described in the cosmogonies of many peoples.

Psilocybin specifically activates the areas of the brain concerned with processing signals. A common occurrence with psilocybin intoxication is spontaneous outbursts of poetry and other vocal activity such as speaking in tongues, though in a manner distinct from ordinary glossolalia. In cultures with a tradition of mushroom use, these phenomena have given rise to the notion of discourse with spirit doctors and supernatural allies. Researchers familiar with the territory agree that psilocybin has a profoundly catalytic effect on the linguistic impulse. 53

Here we are touching upon the higher powers of speech—spontaneous creations, outbursts of poetry and suprahuman communications—which are part of the knowledge of the shamans and “sorcerers” who, through years of rigorous education, have become highly perceptive of these phenomena, which elude the subjective consciousness. In his essay “The Mushrooms of Language,” Henry Munn points to the direct links existing between the states of ecstasy and language: “Language is an ecstatic activity of signification. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. . . . The spontaneity they liberate is not only perceptual, but linguistic . . . For the shaman, it is as if existence were uttering itself through him.” 54

In the 1920s, the Polish writer S. I. Witkiewicz, who attributed crucial importance to verbal creation, showed how peyote (he was one of the first people in Europe to experiment with it, or, at least, one of the first to give an account of doing so) acts upon the actual creation of words and also intervenes in the structure of sentences themselves: “. . . [I]t must also be remarked that peyote, perhaps by reason of the desire one has to capture with words that which cannot be captured, creates conceptual neologisms that belong to it alone and twists sentences in order to adapt their constructions to the frightening dimensions of its bizarrification . . .” 55 Peyote also gives those who ingest it a desire to create “new combinations of meanings.” Witkiewicz distinguishes three categories of objects in his visions: dead objects, moving objects, and living creatures. Regarding this last category, he distinguishes the “real” living creatures from the “fantastical” living creatures, which “discourage any attempt at description.” This is the moment when peyote intervenes: when those who wish to describe find themselves facing the limits of language. Peyote does not break through these limits; it simply shows that they do not exist, that they are hallucinations of the ordinary consciousness, that they are illusory, a mirage of tradition and the history of language.

The lucidogen—as it is called by Charles Duits, who created other neologisms for describing his experience with the sacred cactus—shows that life is present in everything, including speech, and he proves it. Sometimes, peyote leads us to the signifiers that escape us, always in order better to embrace the signified. Witkiewicz, pushing the phenomenon to the extreme limits of the senses and the sensible, insists:

I must draw attention to the fact that under the influence of peyote, one wants to make up neologisms. One of my friends, the most normal man in the world where language is concerned, in a state of trance and powerless to come to grips with the strangeness of these visions which defied all combinations of normal words, described them thus: “Pajtrakaly symforove i kondjioul v trykrentnykh pordeliansach.” I devised many formulas of this type on the night when I went to bed besieged by visions. I remember only this one. There is therefore nothing surprising in the fact that I, who have such inclinations even under normal conditions, should sometimes be driven to create some fancy word in order to attempt to disentangle and sort out the infernal vortex of creatures that unfurled upon me all night long from the depths of the ancient world of peyote. 56

Here, we cannot help but remember René Daumal’s experience, reported in “Le souvenir déterminant”: Under the influence of carbon tetrachloride, he pronounced with difficulty: “approximately: temgouf temgouf drr . . .” Henry Munn makes a similar remark after having taken part in shamanic rituals: “The mushroom session of language creates the words for phenomena without name.” 57 Sacred plants (and some other substances) are neologens, meaning they produce or generate neologisms from the attempts made at description by the subjects who consume them. This new word, this neologism created by circumstance, appears to be suited for this linguistic reality. We now have a word to designate this particular phenomenon pushing us against the limits of language, which in fact are revealed to be illusory.

Beyond this specific case, what is it that prevents us from creating new words whenever it appears necessary? Witkiewicz, speaking of language and life, defends the writer’s right to take liberties with the rules and invent new words. “Although certain professors insist on clinging to their own tripe,” he writes, “language is a living thing, even if it has always been considered a mummy, even if it has been thought impermissible to change anything in it. We can only imagine what literature, poetry, and even this accursed and beloved life would look like otherwise.” 58 Peyote not only incites us to this, but also, more forcefully, exercising a mysterious magnetic attraction toward a sort of supreme meaning beyond language and shaking up conventional signifiers and beings alike, peyote acts directly upon the heart of speech within the body of language. In this sense, it takes part actively and favorably in the creation of the being, the new and infinitely renewed human who, after a death that is more than symbolic, is reborn to new life. It is also very clear, in light of this example, that psilocybin alone does not explain everything, and that all lucidogenic substances work toward this same opening, this same outpouring of speech. McKenna writes:

Languages appear invisible to the people who speak them, yet they create the fabric of reality for their users. The problem of mistaking language for reality in the everyday world is only too well known. Plant use is an example of a complex language of chemical and social interactions. Yet most of us are unaware of the effects of plants on ourselves and our reality, partly because we have forgotten that plants have always mediated the human cultural relationship to the world at large. 59

pp. 238-239

It is interesting to note this dimension of speech specific to shamans, this inspired, active, healing speech. “It is not I who speak,” Heraclitus said, “it is the word.” The receptiveness brought about by an increased level of consciousness allows us not only to understand other voices, but also, above all, to express them in their entire magical substance. “Language is an ecstatic activity of signification. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. . . . The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, of the logos in activity.” 72

The shamanic paroxysm is therefore the mastery of the word, the mastery of the sacred songs very often inspired by the powers that live in plants—which instruct us, making us receptive to phenomena that escape the ordinary consciousness. The shaman becomes a channel through which subtle energies can pass. Because of the mystic intoxication, he becomes the instrument for spirits that express themselves through him. Hence the word tzo—"says"—which punctuates the phrases of the Mazatec shaman in her communication with the "little growing things": "Says, says, says. It is said. I say. Who says! We say, man says, language says, being and existence say." 73 "The inspired man," writes the Mexican poet Octavio Paz in an essay on Breton, "the man who speaks the truth, says nothing that is his own: Through his mouth, it is the language that speaks." 74

The language thus regains its primordial power, its creative force and Orphic value, which determine all true poetry, for, as Duits writes, poetry—which is born in the visionary experience—is nothing other than "the language of the gods." There is nothing phantasmagoric, hallucinated, or illusory about this speech. "[W]ords are materializations of consciousness; language is a privileged vehicle of our relation to reality," writes Munn. Because poetry carries the world, it is the language of power, a tool in the service of knowledge and action. The incantatory repetition of names, for example, an idea we have already touched upon in our discussion of prayer, acts upon the heart of the being. "The shaman has a conception of poesis in its original sense as an action: words themselves are medicine." 75 The words—used in their sacred dimension—work toward the transmutation of being, the healing of the spirit, our development, but in order for it to be effective, the magic word must be born from a direct confrontation with the experience, because experience alone is a safe reserve for truth. Knowledge is not enough; only those who have eaten are in a position to understand, only those who have heard and seen are in a position to say. If speech goes farther than the eye, it is because it has the power of doing. "Though the psychedelic experience produced by the mushrooms is of heightened perceptivity," Munn writes, "the I say is of privileged importance to the I see." 76 Psychedelic speech is speech of power, revealing the spirit.

Darwin’s Pharmacy
by Richard M. Doyle
pp. 8-23

Rhetoric is the practice of learning and teaching eloquence, persuasion, and information architecture by revealing the choices of expression or interpretation open to any given rhetor, viewer, listener, or reader. Robert Anton Wilson offers a definition of rhetoric by example when he focuses on the word “reality” in his book Cosmic Trigger:

“Reality” is a word in the English language which happens to be (a) a noun and (b) singular. Thinking in the English language (and in cognate Indo-European languages) therefore subliminally programs us to conceptualize “reality” as one block-like entity, sort of like a huge New York skyscraper, in which every part is just another “room” within the same building. This linguistic program is so pervasive that most people cannot “think” outside it at all, and when one tries to offer a different perspective they imagine one is talking gibberish. (iii) […]

Mitchell’s vision offers perhaps an equally startling irony: it was only by taking on a literally extraterrestrial perspective that the moon walker overcame alienated perception.5 […]

"Thus, perception is not an object but rather the label for a nonlinear process involving an object, a percipient and information." (Mitchell n.d.; emphasis mine) […]

Like the mind apprehending it, information “wants to be free” if only because it is essentially “not an object,” but rather “the label for a nonlinear process involving an object, a percipient and information.”6 It is worth noting that Mitchell’s experience induces a desire to comprehend, an impulse that is not only the desire to tell the story of his ecodelic imbrication but a veritable symptom of it.7 […]

What are psychedelics such that they seem to persuade humans of their interconnection with an ecosystem?

Terence McKenna’s 1992 book recursively answered this query with a title: Food of the Gods. Psychedelics, McKenna argued, were important vectors in the evolution of consciousness and spiritual practice. In his “shaggy primate story,” McKenna argued that psilocybin mushrooms were a “genome-shaping power” integral to the evolution of human consciousness. On this account, human consciousness—the only instance we know of where one part of the ecosystem is capable of reflecting on itself as a self and acting on the result—was “bootstrapped” by its encounter with the astonishing visions of high-dose psilocybin, an encounter with the Transcendental Other McKenna dubbed “a glimpse of the peacock angel.” Hence for McKenna, psychedelics are both a food fit for the gods and a food that, in scrambling the very distinction between food and drug, man and god, engenders less transcendence than immanence—each is recursively implicated, nested, in the other. […]

Evolutionarily speaking the emergence of widespread animal life on earth is not separable from a “mutualistic” economy of plants, pollinators, and seed dispersers.

The basis for the spectacular radiations of animals on earth today is clearly the resources provided by plants. They are the major primary producers, autotrophically energizing planet Earth…the new ecological relationships of flowering plants resulted in colonizing species with population structures conducive to rapid evolutionary change. (Price, 4)

And if mammalian and primate evolution is enmeshed in a systemic way with angiosperms (flowering plants), so too have humans and other primates been constantly constituted by interaction with plants. […]

Navigating our implication with both plants and their precipitates might begin, then, with the startling recognition of plants as an imbricated power, a nontrivial vector in the evolution of Homo sapiens, a power against which we have waged war. “Life is a rhizome,” wrote Carl Jung, our encrypted ecological “shadow” upon which we manifest as Homo sapiens, whose individuation is an interior folding or “involution” that increases, rather than decreases, our entanglement with any given ecosystem. […]

In other words, psychedelics are (a suppressed) part of evolution. As Italian ethnobotanist Giorgio Samorini put it, "the drug phenomenon is a natural phenomenon, while the drug problem is a cultural problem" (87). […]

Indeed, even DMT, an endogenous and very real product of the human brain, has been “scheduled” by the federal government. DMT would be precisely, by most first person accounts, “the most potent hallucinogen on sale in Haight or Ashbury or Telegraph Avenue” and is a very real attribute of our brains as well as plant ecology. We are all “holding” a Schedule One psychedelic—our own brains, wired for ecodelia, are quite literally against the law. […]

The first principle of harm reduction with psychedelics is therefore this: one must pay attention to set and setting, the organisms for whom and context in which the psychedelic experience unfolds. For even as the rediscovery of psychedelics by twentieth-century technoscience suggested to many that consciousness was finally understandable via a molecular biology of the brain, this apex of reductionism also fostered the recognition that the effects of psychedelics depend on much more than neurochemistry.23 If ecodelics can undoubtedly provoke the onset of an extra-ordinary state of mind, they do so only on the condition of an excessive response-ability, a responsiveness to rhetorical conditions—the sensory and symbolic framework in which they are assayed. Psychologists Ralph Metzner and Timothy Leary made this point most explicitly in their discussion of session "programming," the sequencing of text, sound, and sensation that seemed to guide, but not determine, the content of psychedelic experiences:

It is by now a well-known fact that psychedelic drugs may produce religious, aesthetic, therapeutic or other kinds of experiences depending on the set and setting…. Using programming we try to control the content of a psychedelic experience in specific desired directions. (5; reversed order)

Leary, Metzner, and many others have provided much shared code for such programming, but all of these recipes are bundled with an unavoidable but difficult to remember premise: an extraordinary sensitivity to initial rhetorical conditions characterizes psychedelic “drug action.” […]

Note that the nature of the psychedelic experience is contingent upon its rhetorical framing—what Leary, Metzner, and Richard Alpert characterized in The Psychedelic Experience as “the all-determining character of thought” in psychedelic experience. The force of rhetorical conditions here is immense— for Huxley it is the force linking premise to conclusion:

"No, I couldn't control it. If one began with fear and hate as the major premise, one would have to go on to the conclusion." (Ibid.)

Rhetorical technologies structure and enable fundamentally different kinds of ecodelic experiences. If the psychonaut “began” with different premises, different experiences would ensue.

pp. 33-37

Has this coevolution of rhetorical practices and humans ceased? This book will argue that psychedelic compounds have already been vectors of technoscientific change, and that they have been effective precisely because they are deeply implicated in the history of human problem solving. Our brains, against the law with their endogenous production of DMT, regularly go ecodelic and perceive dense interconnectivity. The human experience of radical interconnection with an ecosystem becomes a most useful snapshot of the systemic breakdowns between "autonomous" organisms necessary to sexual reproduction, and, not incidentally, such experiences render heuristic information about the ecosystem as an ecosystem, amplifying human perception of the connections in their environment and allowing those connections to be mimed and investigated. This increased interconnection can be spurred simply by providing a different vision of the environment. Psychologist Roland Fischer noted that some aspects of visual acuity were heightened under the influence of psilocybin, and his more general theory of perception suggests that this acuity emerges out of a shift in sensory-motor ratios.

For Fischer the very distinction between “hallucination” and “perception” resides in the ratio between sensory data and motor control. Hallucination, for Fischer, is that which cannot be verified in three-dimensional Euclidean space. Hence Fischer differentiates hallucination from perception based not on truth or falsehood, but on a capacity to interact: if a subject can interact with a sensation, and at least work toward verifying it in their lived experience, navigating the shift in sensory-motor ratios, then the subject has experienced something on the order of perception. Such perception is easily fooled and is often false, but it appears to be sufficiently connective to our ecosystems to allow for human survival and sufficiently excitable for sexually selected fitness. If a human subject cannot interact with a sensation, Fischer applies the label “hallucination” for the purpose of creating a “cartography of ecstatic states.”

Given the testimony of psychonauts about their sense of interconnection, Fischer’s model suggests that ecodelic experience tunes perception through a shift of sensory-motor ratios toward an apprehension of, and facility for, interconnection: the econaut becomes a continuum between inside and outside. […] speech itself might plausibly emerge as nothing other than a symptom and practice of early hominid use of ecodelics.

pp. 51-52

It may seem that the visions—as opposed to the description of set and setting or even affect and body load—described in the psychonautic tradition elude this pragmatic dynamic of the trip report. Heinrich Klüver, writing in the 1940s, and Benny Shanon, writing in the early twenty-first century, both suggest that the forms of psychedelic vision (for mescaline and ayahuasca respectively) are orderly and consistent even while they are indescribable. Visions, then, would seem to be messages without a code (Barthes) whose very consistency suggested content.

Hence this general consensus on the “indescribableness” (Ellis) of psychedelic experience still yields its share of taxonomies as well as the often remarkable textual treatments of the “retinal circus” that has become emblematic of psychedelic experience. The geometric, fractal, and arabesque visuals of trip reports would seem to be little more than pale snapshots of the much sought after “eye candy” of visual psychedelics such as LSD, DMT, 2C-I, and mescaline. Yet as deeply participatory media technologies, psychedelics involve a learning curve capable of “going with” and accepting a diverse array of phantasms that challenge the beholder and her epistemology, ontology, and identity. Viewed with the requisite detachment, such visions can effect transformation in the observing self, as it finds itself nested within an imbricated hierarchy: egoic self observed by ecstatic Atman which apprehends itself as Brahman reverberating and recoiling back onto ego. Many contemporary investigators of DMT, for example, expect and often encounter what Terence McKenna described as the “machine elves,” elfin entities seemingly tinkering with the ontological mechanics of an interdimension, so much so that the absence of such entities is itself now a frequent aspect of trip reportage and skeptics assemble to debunk elfin actuality (Kent 2004).

p. 63

While synesthesia is classically treated as a transfer or confusion of distinct perceptions, as in the tactile and gustatory conjunction of “sharp cheese,” more recent work in neurobiology by V. S. Ramachandran and others suggests that this mixture is fundamental to language itself—the move from the perceptual to the signifying, in this view, is itself essentially synesthetic. Rather than an odd symptom of a sub-population, then, synesthesia becomes fundamental to any act of perception or communication, an attribute of realistic perception rather than a pathological deviation from it.

pp. 100-126

Rhetorical practices are practically unavoidable on the occasion of death, and scholars in the history of rhetoric and linguistics have both opined that it was as a practice of mourning that rhetoric emerged as a recognizable and repeatable practice in the "West." […] It is perhaps this capacity of some rhetorical practices to induce and manage the breakdown of borders—such as those between male and female, life and death, silence and talk—that deserves the name "eloquence." Indeed, the Oxford English Dictionary reminds us that it is the very difference between silence and speech that eloquence manages: a. Fr. éloquent, ad. L. ēloquent-em, pr. pple. of ēloquī to speak out.2 […]

And despite Huxley's concern that such an opening of the doors of (rhetorical) perception would be biologically "useless," properly Darwinian treatments of such ordeals of signification would place them squarely within the purview of sexual selection—the competition for mates. If psychedelics such as the west African plant Iboga are revered for "breaking open the head," it may be because we are rather more like stags butting heads than we are ordinarily comfortable putting into language (Pinchbeck 2004, cover). And our discomfort and fascination ensue, because sexual selection is precisely where sexual difference is at stake rather than determined. A gradient, sexuality is, of course, not a binary form but is instead an enmeshed involutionary zone of recombination: human reproduction takes place in a "bardo" or between-space that is neither male nor female nor even, especially, human. Indeed, sex probably emerged as a technique for exploring the space of all possible genotypes, breaking the symmetry of asexual reproduction and introducing the generative "noise" of sexuality with which Aldous Huxley's flowers resonated. In this context, psychedelics become a way of altering the context of discursive signaling within which human reproduction likely evolved, a sensory rather than "extra-sensory" sharing of information about fitness.

Doctors of the Word

In an ecstatic treatment of Mazatec mushroom intoxication, Henry Munn casts the curanderas as veritable Sophists whose inebriation is marked by an incessant speaking:

The shamans who eat them, their function is to speak, they are the speakers who chant and sing the truth, they are the oral poets of their people, the doctors of the word, they who tell what is wrong and how to remedy it, the seers and oracles, the ones possessed by the voice. (Munn, 88)

Given the contingency of psychedelic states on the rhetorical conditions under which they are used, it is perhaps not surprising that the Mazatec, who have used the "little children" of psilocybin for millennia, have figured out how to modulate and even program psilocybin experience with rhetorical practices. But the central role enjoyed by rhetoricians here—those doctors of the word—should not obscure the difficulty of the shaman/rhetorician's task: "possessed by the voice," such curanderas less control psychedelic experience than consistently give themselves over to it. They do not wield ecstasy, but are taught by it. Munn's mushroom Sophists are athletes of "negative capability," nineteenth-century poet John Keats's term for the capacity to endure uncertainty. Hence the programming of ecodelic experience enables not control but a practiced flexibility within ritual, a "jungle gym" for traversing the transhuman interpolation. […]

Fundamental to shamanic rhetoric is the uncertainty clustering around the possibility of being an “I,” an uncertainty that becomes the very medium in which shamanic medicine emerges. While nothing could appear more straightforward than the relationship between the one who speaks and the subject of the sentence “I speak,” Munn writes, sampling Heraclitus, “It is not I who speak…it is the logos.” This sense of being less in dialogue with a voice than a conduit for language itself leads Munn toward the concept of “ecstatic signification.”

Language is an ecstatic activity of signification…. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. At times it is as if one were being told what to say, for the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings. Message fields of communication with the world, others, and one’s self are disclosed by the mushrooms. (Ibid., 88-89)

If these practices are “ecstatic,” they are so in the strictest of fashions. While recent usage tends to conjoin the “ecstatic” with enjoyment, its etymology suggests an ontological bifurcation—a “being beside oneself” in which the very location, if not existence, of a self is put into disarray and language takes on an unpredictable and lively agency: “words leap to mind, one after another.”3 This displacement suggests that the shaman hardly governs the speech and song she seemingly produces, but is instead astonished by its fluent arrival. Yet this surprise does not give way to panic, and the intoxication increases rather than retards fluency—if anything, Munn’s description suggests that for the Mazatec (and, perhaps, for Munn) psilocybin is a rhetorical adjunct that gives the speaker, singer, listener, eater access to “message fields of communication.” How might we make sense of this remarkable claim? What mechanisms would allow a speaker to deploy intoxication for eloquence?

Classically speaking, rhetoric has treated human discourse as a tripartite affair, a threefold mixture of ethos, an appeal based on character; logos, an appeal based on the word; and pathos, an appeal to or from the body.4 Numerous philosophers and literary critics since Jacques Derrida have decried the Western fascination with the logos, and many scholars have looked to the rich traditions of rhetoric for modalities associated with other offices of persuasion, deliberation, and transformation. But Munn’s account asks us to recall yet another forgotten rhetorical practice—a pharmacopeia of rhetorical adjuncts drawn from plant, fungus, and geological sources. In the context of the Mazatec, the deliberate and highly practiced ingestion of mushrooms serves to give the rhetor access not to individually created statements or acts of persuasion, but to “fields” of communication where rhetorical practice calls less for a “subject position” than it does a capacity to abide multiplicity—the combination and interaction, at the very least, of human and plant.

Writer, philosopher, and pioneering psychonaut Walter Benjamin noted that his experiments with hashish seemed to induce a “speaking out,” a lengthening of his sentences: “One is very much struck by how long one’s sentences are” (20). Longer sentences, of course, are not necessarily more eloquent in any ordinary sense than short ones, since scholars, readers, and listeners find that eloquence inheres in a response to any given rhetorical context. Indeed, Benjamin’s own telegraphic style in his hashish protocols becomes extraordinary, rare, and paradoxical given his own claim for long sentences in a short note. Yet Benjamin’s account does remind us that ecodelics often work on and with the etymological sense of “eloquence,” a “speaking out,” an outburst of language, a provocation to language. Benjamin reported that it was through language that material forms could be momentarily transformed: “The word ‘ginger’ is uttered and suddenly in place of the desk there is a fruit stand” (ibid., 21).

And yet if language and, indeed, the writing table, is the space where hashish begins to resonate for Benjamin, it does so only by making itself available to continual lacunae, openings and closings where, among other things, laughter occurs. For precisely as they are telegraphic, the hashish protocols of Benjamin create a series of non sequiturs: […]

Hashish, then, is an assassin of referentiality, inducing a butterfly effect in thought. In Benjamin, cannabis induces a parataxis wherein sentences less connect to each other through an explicit semantics than resonate together and summon coherence in the bardos between one statement and another. It is the silent murmur between sentences that is consistent while the sentences continually differentiate until, through repetition, an order appears: “You follow the same paths of thought as before. Only, they appear strewn with roses.”

For a comparable practice in classical rhetoric linking "intoxication" with eloquence, we return to Delphi, where the oracles made predictions persuasive even to the always skeptical Socrates, predictions whose oracular ecodelic speech was rendered through the invisible but inebriating "atmosphere" of ethylene gases—a geological rhetoric. Chemist Albert Hofmann, classicist Carl Ruck, ethnobotanist Jonathan Ott, and others have made a compelling case that at Eleusis, where Socrates, well before Bartleby, "preferred not" to go, the Greek Mysteries were delivered in the context of an ecodelic beverage, perhaps one derived from fermented grain or the ergot-laden sacrament kykeon, chemically analogous to LSD.5 These Mystery rites occasioned a very specific rhetorical practice—silence—since participants were forbidden from describing the kykeon or its effects. But silence, too, is a rhetorical practice, and one can notice that such a prohibition functions rhetorically not only to repress but also to intensify a desire to "speak out" of the silence that must come before and after Eleusis.

And Mazatec curandera Maria Sabina is explicit that indeed it is not language or even its putative absence, silence, that is an adjunct or “set and setting” for the mushrooms. Rather, the mushrooms themselves are a languaging, eloquence itself, a book that presents itself and speaks out:

At other times, God is not like a man: He is the Book. A Book that is born from the earth, a sacred Book whose birth makes the world shake. It is the Book of God that speaks to me in order for me to speak. It counsels me, it teaches me, it tells me what I have to say to men, to the sick, to life. The Book appears and I learn new words.6

Crucial to this "speaking" is the way in which Maria Sabina puts it. Densely interactive and composed of repetition, the rhetorical encounter with the mushroom is more than informative; it is pedagogical and transformative: "The Book appears and I learn new words." The earth shakes with vitality, manifesting the mushroom orator.7 Like any good teacher, the mushrooms work with rhythms, repetitions that not only reinforce prior knowledge but induce one to take leave of it. "It counsels me, it teaches me." The repetition of which and through which Maria Sabina speaks communicates more than knowledge; it allows for knowledge's gradual arrival, a rhythm of coming into being consonant and perhaps even resonant with the vibrations of the Earth, that scene of continual evolutionary transformation.

More than a supplement or adjunct to the rhetor, the mushroom is a transformer. Mary Barnard maps out a puppetry of flesh that entails becoming a transducer of the mushroom itself: “The mushroom-deity takes possession of the shaman’s body and speaks with the shaman’s lips. The shaman does not say whether the sick child will live or die; the mushroom says” (248).

Nor are reports of psilocybin's effects as a rhetorical adjunct peculiar to Munn or even the Mazatec tradition. Over a span of ten years, psychologist Roland Fischer and his colleagues at Ohio State University tested the effects of psilocybin on linguistic function. Fischer articulated "the hallucination-perception continuum," wherein hallucinations would be understood less as failed images of the real than virtual aspects of reality not verifiable in the "Euclidean" space projected by the human sensorium. Fischer, working with the literary critic Colin Martindale, located in the human metabolism of psilocybin (and its consequent rendering into psilocin) linguistic symptoms isomorphic to the epics of world literature. Psilocybin, Fischer and Martindale argued, provoked an increase in the "primary process content" of writing composed under the influence of psilocybin. Repetitious and yet corresponding to the very rhetorical structure of epics, psilocybin can thus be seen to be a prima facie adjunct to an epic eloquence, a "speaking out" that leaves rhetorical patterns consistent with the epic journey (Martindale and Fischer).

And in this journey, it is often language itself that is exhausted—there is a rhythm in the epic structure between the prolix production of primary process content and its interruption. Sage Ramana Maharshi described mouna, a “state which transcends speech and thought,” as the state that emerges only when “silence prevails.” […]

A more recent study conducted of high-dose psilocybin experience among international psychonauts suggested that over 35 percent of subjects heard what they called “the logos” after consuming psilocybin mushrooms.

Based on the responses to the question of the number of times psilocybin was taken, the study examined approximately 3,427 reported psilocybin experiences (n = 118). Of the total questionnaire responses (n = 128), 35.9% (n = 46) of the participants reported having heard a voice(s) with psilocybin use, while 64.0% (n = 82) of the participants stated that they had not. (Beach) […]
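The survey figures quoted above can be sanity-checked with a few lines of arithmetic (the variable names are mine, not Beach's; the excerpt itself reports only the totals and percentages):

```python
# Sanity check of the figures quoted from the Beach survey excerpt.
respondents = 128           # total questionnaire responses (n = 128)
heard_voice = 46            # reported hearing a voice(s) with psilocybin use
no_voice = 82               # reported that they had not
experiences = 3427          # approximate total reported psilocybin experiences
frequency_reporters = 118   # respondents who answered the frequency question

# The two groups partition the respondent pool: 46 + 82 = 128.
assert heard_voice + no_voice == respondents

pct_heard = 100 * heard_voice / respondents           # 35.9375 -> reported "35.9%"
pct_no = 100 * no_voice / respondents                 # 64.0625 -> reported "64.0%" (truncated)
mean_experiences = experiences / frequency_reporters  # roughly 29 experiences per respondent

print(f"heard a voice: {pct_heard:.1f}%")
print(f"mean experiences per respondent: {mean_experiences:.1f}")
```

The only wrinkle is that the excerpt's "64.0%" truncates rather than rounds 64.0625; the other figures follow directly from the reported counts.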

Inevitably, this flow fluctuates between silence and discourse. Michaux’s experiments with psychedelics rendered the now recognizable symptoms of graphomania, silence, and rhetorical amplification. In Miserable Miracle, one of the three books Michaux wrote “with mescaline,” Michaux testifies to a strange transformation into a Sophist:

For the first time I understood from within that animal, till now so strange and false, that is called an orator. I seemed to feel how irresistible must be the propensity for eloquence in certain people. Mesc. acted in such a way that it gave me the desire to make proclamations. On what? On anything at all. (81)11

Hence, while their spectrum of effects is wide ranging and extraordinarily sensitive to initial rhetorical conditions, psychedelics are involved in an intense inclination to speak unto silence, to write and sing in a time not limited to the physical duration of the sacramental effect, and this involvement with rhetorical practice—the management of the plume, the voice, and the breath—appears to be essential to the nature of psychedelics; they are compounds whose most persistent symptoms are rhetorical. […]

Crucial to Krippner’s analysis, though, is the efficacy of psychedelics in peeling away these strata of rhetorical practice. By withering some layers of perception, others are amplified:

In one experiment (Jarvik et al. 1955), subjects ingested one hundred micrograms of LSD and demonstrated an increase in their ability to quickly cancel out words on a page of standardized material, but a decreased ability to cancel out individual letters. The drug seemed to facilitate the perceptions of meaningful language units while it interfered with the visual perception of non-meaningful ones. (Krippner, 220)

Krippner notes that the LSD functioned here as a perceptual adjunct, somehow tuning the visual perception toward increased semantic and hence rhetorical efficacy. This intensified visual perception of language no doubt yielded the familiar swelling of font most associated with psychedelic art and pioneered by the psychedelic underground press (such as the San Francisco Oracle). By amplifying the visual aspect of font—whose medium is the psychedelic message—this psychedelic innovation remixes the alphabet itself, as more information (the visual, often highly sensory swelling of font) is embedded in a given sequence of (otherwise syntactic and semantic) symbols. More information is compressed into font precisely by working with the larger-scale context of any given message rather than its content. This apprehension of larger-scale contexts for any given data may be the very signature of ecodelic experience. Krippner reports that this sensory amplification even reached dimensional thresholds, transforming texts:

Earlier, I had tasted an orange and found it the most intense, delightful taste sensation I had ever experienced. I tried reading a magazine as I was “coming down,” and felt the same sensual delight in moving my eye over the printed page as I had experienced when eating the orange. The words stood out in three dimensions. Reading had never been such a sheer delight and such a complete joy. My comprehension was excellent. I quickly grasped the intent of the author and felt that I knew exactly what meaning he had tried to convey. (221)

Rather than a cognitive modulation, then, psychedelics in Krippner’s analysis seem to affect language function through an intensification of sensory attention on and through language, “a complete joy.” One of Krippner’s reports concerned a student attempting to learn German. The student reported becoming fascinated with the language in a most sensory fashion, noting that it was the “delicacy” of the language that allowed him to, well, “make sense” of it and indulge his desire to “string” together language:

The thing that impressed me at first was the delicacy of the language.…Before long, I was catching on even to the umlauts. Things were speeding up like mad, and there were floods of associations.…Memory, of course, is a matter of association and boy was I ever linking up to things! I had no difficulty recalling words he had given me—in fact, I was eager to string them together. In a couple of hours after that, I was even reading some simple German, and it all made sense. (Ibid.)

Krippner reports that by the end of his LSD session, the student “had fallen in love with German” (222). Krippner rightly notes that this “falling” is anything but purely verbal, and hypothesizes that psychedelics are adjuncts to “non-verbal training”: “The psychedelic session as non-verbal training represents a method by which an individual can attain a higher level of linguistic maturity and sophistication” (225).

What could be the mechanism of such a “non-verbal” training? The motor-control theory of language suggests that language is bootstrapped and developed out of the nonlinguistic rhythms of the ventral premotor system, whose orderly patterns provided the substrate of differential repetition necessary to the arbitrary configuration and reconfiguration of linguistic units. Neuroscientist V. S. Ramachandran describes the discovery of “mirror neurons” by Giacomo Rizzolatti. Rizzolatti

recorded from the ventral premotor area of the frontal lobes of monkeys and found that certain cells will fire when a monkey performs a single, highly specific action with its hand: pulling, pushing, tugging, grasping, picking up and putting a peanut in the mouth etc. different neurons fire in response to different actions. One might be tempted to think that these are motor “command” neurons, making muscles do certain things; however, the astonishing truth is that any given mirror neuron will also fire when the monkey in question observes another monkey (or even the experimenter) performing the same action, e.g. tasting a peanut! (Ramachandran)

Here the distinction between observing and performing an action is confused, as watching a primate pick up a peanut becomes indistinguishable from picking up the peanut, at least from the perspective of an EEG. Such neurological patterns are not arbitrary, linked as they are to the isomorphic patterns that are the developmentally articulated motor control system of the body. This may explain how psychedelics can, according to Krippner, allow for the perceptual discernment of meaningful units. By releasing the attention from the cognitive self or ego, human subjects can focus their attention on the orderly structures “below” conscious awareness and distributed across their embodiment and environments. Robin Allott has been arguing for the motor theory of language evolution since the 1980s:

In the evolution of language, shapes or objects seen, sounds heard, and actions perceived or performed, generated neural motor programs which, on transfer to the vocal apparatus, produced words structurally correlated with the perceived shapes, objects, sounds and actions. (1989)

These perceived shapes, objects, sounds, and actions, of course, include the sounds, smells, visions, and actions continually transmitted by ecosystems and the human body itself, and by focusing the attention on them, we browse for patterns not yet articulated by our embodiment. Significantly, as neuroscientist Ramachandran points out, this “mirror neuron” effect seems to occur only when other living systems are involved:

When people move their hands a brain wave called the MU wave gets blocked and disappears completely. Eric Altschuller, Jamie Pineda, and I suggested at the Society for Neurosciences in 1998 that this suppression was caused by Rizzolati’s mirror neuron system. Consistent with this theory we found that such a suppression also occurs when a person watches someone else moving his hand but not if he watches a similar movement by an inanimate object.

Hence, in this view, language evolves and develops precisely by nonverbal means in interaction with other living systems, as the repetitions proper to language iterate on the basis of a prior repetition—the coordinated movements necessary to survival that are coupled to neurological patterns and linked to an animate environment. By blocking the “throttling embrace of the self,” ecodelics perhaps enable a resonance between the mind and nature not usually available to the attention. This resonance creates a continuum between words and things even as it appears to enable the differentiation between meaningful and nonmeaningful units: […]

This continuum between the abstract character of language and its motor control system is consistent with Krippner’s observation that “at the sensory level, words are encoded and decoded in highly unusual ways” (238). This differential interaction with the sensory attributes of language includes an interaction with rhythms and puns common to psychedelic experience, a capacity to become aware of a previously unobserved difference and connection. Puns are often denounced as, er, punishing a reader’s sense of taste, but in fact they set up a field of resonance and association between previously distinct terms, a nonverbal connection of words. In a highly compressed fashion, puns transmit novel information in the form of a meshed relation between terms that would otherwise remain, often for cultural or taboo reasons, radically distinct.12 This punning involves a tuning of a word toward another meaning, a “troping” or bending of language toward increased information through nonsemantic means such as rhyming. This induction of eloquence and its sensory perception becomes synesthetic as an oral utterance becomes visual: […]

Hence, if it is fair to characterize some psychedelic experiences as episodes of rhetorical augmentation, it is nonetheless necessary to understand rhetoric as an ecological practice, one which truly works with all available means of persuasion (Aristotle), human or otherwise, to increase the overall dissipation of energy in any given ecology. One “goes for broke,” attempting the hopeless task of articulating psychedelics in language until exhausting language of any possible referential meaning and becoming silent. By locating “new” information only implicit in a given segment of language and not semantically available to awareness, a pun increases the informational output of an ecosystem featuring humans. This seems to feedback, […]

Paired with an apprehension of the logos, this tuning in to ecodelia suggests that in “ego death,” many psychonauts experience a perceived awareness of what Vernadsky called the noösphere, the effects of their own consciousness on their ecosystem, about which they incessantly cry out: “Will we listen in time?”

In the introduction, I noted that the ecodelic adoption of this non-local and hence distributed perspective of the biosphere was associated with the apprehension of the cosmos as an interconnected whole, and with the language of “interpellation” I want to suggest that this sense of interconnection often appears in psychonautic testimony as a “calling out” by our evolutionary context. […]

The philosopher Louis Althusser used the language of “interpellation” to describe the function of ideology and its purchase on an individual subject to it, and he treats interpellation as precisely such a “calling out.” Rather than a vague overall system involving the repression of content or the production of illusion, ideology for Althusser functions through its ability to become an “interior” rhetorical force that is the very stuff of identity, at least any identity subject to being “hailed” by any authority it finds itself response-able to. I turn to that code commons Wikipedia for Althusser’s most memorable treatment of this concept:

Memorably, Althusser illustrates this with the concept of “hailing” or “interpellation.” He uses the example of an individual walking in a street: upon hearing a policeman shout “Hey you there!”, the individual responds by turning round, and in this simple movement of his body he is transformed into a subject. The person being hailed recognizes himself as the subject of the hail, and knows to respond.14

This sense of “hailing” and unconscious “turning” is appropriate to the experience of ecodelic interconnection I am calling “the transhuman interpellation.” Shifting back and forth between the nonhuman perspectives of the macro and the micro, one is hailed by the tiniest of details or largest of overarching structures as reminders of the way we are always already linked to the “evolutionary heritage that bonds all living things genetically and behaviorally to the biosphere” (Roszak et al., 14). And when we find, again and again, that such an interpellation by a “teacher” or other plant entity (à la the logos) is associated not only with eloquence but also with healing,15 we perhaps aren’t surprised by a close-up view of the etymology of “healing.” The Oxford English Dictionary traces it from the Teutonic “heilen,” which links it to “helig” or “holy.” And the alluvial flow of etymology connects “hailing” and “healing” in something more than a pun:

A Com. Teut. vb.: OE. hǣlan = OFris. hêla, OS. hêlian (MDu. hêlen, heilen, Du. heelen, LG. helen), OHG. heilan (Ger. heilen), ON. heila (Sw. hela, Da. hele), Goth. hailjan, deriv. of hail-s, OTeut. *hailo-z, OS. Hál <HALE><WHOLE>16

Hailed by the whole, one can become healed through ecodelic practice precisely because the subject turns back on who they thought they were, becoming aware of the existence of a whole, a system in which everything “really is” connected—the noösphere. Such a vision can be discouraging and even frightening to the phantasmically self-birthed ego, who feels not guilt but a horror of exocentricity. It appears impossible to many of us that anything hierarchically distinct, and larger and more complex than Homo sapiens—such as Gaia—could exist, and so we often cry out as one in the wilderness, in amazement and repetition.

Synesthesia, and Psychedelics, and Civilization! Oh My!
Were cave paintings an early language?

Choral Singing and Self-Identity
Music and Dance on the Mind
Development of Language and Music
Spoken Language: Formulaic, Musical, & Bicameral
“Beyond that, there is only awe.”
“First came the temple, then the city.”
The Spell of Inner Speech
Language and Knowledge, Parable and Gesture

The Helmsman and the Lookout

There is an apt metaphor for the relationship between what we think of as conscious willpower and the openness of perception.

The egoic consciousness is the helmsman of the boat as it heads along the river of experience, but he is positioned at the back of a boat crowded with passengers. While he controls the steering, he is driving blind and can’t see what is coming. He primarily operates on memory and mental maps, habit and heuristics. He knows the river or else similar rivers, at least most of the time, as long as he remains within the familiar. Still, his predictive abilities are limited and hence so are his steering abilities.

This is why a lookout is needed at the front of the boat. The lookout, although having no direct control, can give warnings. Stop! Don’t go that direction! The lookout has the information the helmsman needs, but the helmsman only listens to the lookout when something is wrong. The lookout is the veto power of volition, what is called free-won’t rather than freewill.

I came across this metaphor in a Chacruna article by Martin Fortier, “Are Psychedelic Hallucinations Actually Metaphorical Perceptions?”:

“Recent neuroscientific models of the brain stress the importance of prediction within perceptual experience.3 The tenets of the predictive model of the brain can be described with a useful analogy: that of helmsmen steering collective boats on the rivers of lowland South America.

“In the Amazon, to go from one riparian town to another, people usually take a collective boat. Most boats carry between 20 to 60 passengers. These boats are steered in an intriguing way. The helmsman is positioned at the rear part of the boat. Because of this, he cannot see much of the river; what he sees in front of him are mostly the backs of passengers. Yet, the helmsman critically needs to know in minute detail where he is going, as the river is replete with shallows and floating tree trunks that must be avoided by any means. The usual way to make sure that the helmsman is able to steer the boat safely is to position a lookout at the front part of the boat and to have him warn the helmsman in case anything dangerous shows up ahead.

“The human perceptual system roughly works like these collective boats! “Predictive models” of perception strongly contrast with “constructive models,” developed in the 1970s. According to constructive models of visual perception, the retina collects very gross and sparse information about the world, and each level of the visual system elaborates on this limited primary information and makes it gradually richer and more complex.4

“Let us say that the lookout stands for primary perceptual areas—low-level areas of the brain—and the helmsman stands for more frontal areas; the high-level areas of the brain. Furthermore, the trajectory of the boat stands for conscious perception. In the case of classical constructive models of the brain, perception is taken to be a gradual enrichment of information coming from lower areas of the brain. So, to use the boat analogy, constructive models of perception have it that the trajectory of the boat—i.e., conscious perception—is determined by the lookout sending warning signals to the helmsman—i.e., by bottom-up processes.

“Predictive models conceive of perception in a very different way. The first step of determining the trajectory of the boat is the helmsman guessing, on the basis of his past experience, where the boat can safely go. So, within the predictive model, the lookout plays no constitutive role. The lookout influences the trajectory of the boat only when the helmsman’s predictions are proved wrong, and when the lookout needs to warn him.

“Two niceties must be added. First, bottom-up error signals can be variously weighted. In noisy or uncertain situations, bottom-up prediction errors have a smaller influence than usual:5 in noisy or uncertain situations, the lookout’s warnings are not taken into account by the helmsman as much as usual. Second, in the boat analogy, there is only one lookout and one helmsman. In the brain, several duos of lookouts and helmsmen are working together, and each of these duos is specialized in a specific perceptual modality.”
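Fortier’s predictive scheme lends itself to a toy numerical sketch. The snippet below is purely illustrative (the function name and the numbers are mine, not from the article): the helmsman’s prediction is corrected by the lookout’s error signal, and that correction is scaled by a precision weight, so that in noisy conditions the same warning shifts the percept far less.

```python
def predictive_update(prediction, observation, precision):
    """One step of a toy predictive-coding loop.

    The 'helmsman' supplies the prediction; the 'lookout' supplies
    the observation. The prediction error is weighted by precision
    (between 0 and 1): in noisy or uncertain conditions precision
    is low, so the warning barely moves the percept.
    """
    error = observation - prediction       # the lookout's warning
    return prediction + precision * error  # precision-weighted correction

# Clear conditions: the warning is trusted, the percept moves
# most of the way toward the incoming data.
clear = predictive_update(prediction=10.0, observation=14.0, precision=0.9)

# Noisy conditions: the identical warning is heavily discounted,
# and the percept stays close to the prior prediction.
noisy = predictive_update(prediction=10.0, observation=14.0, precision=0.1)

print(round(clear, 2), round(noisy, 2))
```

In the constructive model, by contrast, the percept would be built up from the observation alone; here the observation only nudges a prior guess, which is the point of Fortier’s second nicety about weighting.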

This usually works well. Still, the egoic consciousness can be tiring, especially when it attempts to play both roles. If we never relax, we are in a constant state of stress and anxiety. That is how we get stuck in loops of thought, where what the helmsman imagines about the world becomes his reality and so he stops listening as much to the lookout.

This has become ever more problematic for humanity as the boundaries of egoic consciousness have rigidified. Despite egoic self-confidence, we have limited ability to influence our situation and, as research shows, overtaxing ourselves causes us to become ineffective. No matter how hard it tries, the ego-self can’t force the ideology of freewill onto the world. Sometimes, we need to relax and allow ourselves to float along, with trust that the lookout will warn us when necessary.

There are many practices that help us with this non-egoic state. Meditation is the simplest, in which we train the mind to take a passive role but with full alertness. It allows the lookout to relax and take in the world without all of the anxiety-inducing jerking around of a helmsman out of control while obsessed with control.

Another method is that of psychedelics, the experience of which is often referred to as a ‘trip’. Traditionally, a shaman or priest would have taken over the role of helmsman, allowing the participants to temporarily drop that role. Without someone else to play that role, a standard recommendation has been to let go and allow yourself to float along, just go with the current and trust where it takes you. In doing this, the environment is important in supporting this state of mind. This is a way of priming the mind with set and setting.

Richard M. Doyle explained this strategy in Darwin’s Pharmacy (p. 18):

“If psychedelics left any consistent trace on the literature of trip reports and the investigation of psychedelic states, it is that “resistance” is unlikely to be a useful tactic and that experiment is unavoidable. Leary, whose own “setting” was consistently clustered around practices of the sacred, offered this most compressed algorithm for the manipulation (“programming”) of psychedelic experience, a script asking us to experimentally give ourselves over to the turbulence: “Whenever in doubt, turn off your mind, relax, float downstream.” Such an experiment begins, but is not completed, by a serene letting go of the self under the pull of a transhuman and improbable itinerary. This letting go, of course, can be among the greatest of human achievements, the very goal of human life: Meister Eckhart, the fourteenth-century German heretic, reminds us that this gelassenheit is very old and not easily accomplished.”

For anyone who has experienced it, the transformative power of psychedelics is undeniable. Many modern people find themselves near-permanently stuck in egoic control mode, their hand ever on the steering mechanism. We don’t easily let our guard down, and we can hardly even imagine what that might feel like, until something shuts down that part of our mind-brain.

In a CBC interview with Bob McDonald, Michael Pollan explained why this happens and what exactly happens:

“The observed effect, if you do brain imaging of people who are tripping, you find some very interesting patterns of activity in the brain – specifically something called the default mode network, which is a very kind of important hub in the brain, linking parts of the cerebral cortex to deeper, older areas having to do with memory and emotion. This network is kind of a regulator of all brain activities. One neuroscientist called it, ‘The conductor of the neural symphony,’ and it’s deactivated by psychedelics, which is very interesting because the assumption going in was that they would see lots of strange activity everywhere in the brain because there’s such fireworks in the experience, but in fact, this particular network almost goes off line.

“Now what is this network responsible for? Well, in addition to being this transportation hub for signals in the brain, it is involved with self reflection. It’s where we go to ruminate or mind wander – thinking about the past or thinking about the future – therefore worrying takes place here. Our sense of self, if it can be said to have an address at all, resides in this particular brain network. So this is a very interesting clue to how psychedelics affect the brain and how they create the psychological experience, the experience in the mind, that is so transformative.

“When it goes off line, parts of the brain that don’t ordinarily communicate to one another, strike up conversation. And those connections may represent what people feel during the psychedelic experience as things like synaesthesia. Synaesthesia is when one sense gets cross wired with another. And so you suddenly smell musical notes or taste things that you see.

“It may produce insights. It may produce new metaphors – literally connecting the dots in new ways. Now that I’m being speculative – I’m going a little beyond what we’ve established – we know there are new connections, we don’t know what’s happening with them, or which of them endure. But the fact is, the brain is temporarily rewired. And that rewiring – whether the new connections actually produce the useful material or just shaking up the system – ‘shaking the snow globe,’ as one of the neuroscientists put it, is what’s therapeutic. It is a reboot of the brain.

“If you think about, you know, mental illnesses such as depression, addiction, and anxiety, many of them involve these loops of thought that we can’t control and we get stuck on these stories we tell ourselves – that we can’t get through the next hour without a drink, or we’re worthless and unworthy of love. We get stuck in these stories. This temporarily dissolves those stories and gives us a chance to write new stories.”

Psychedelics give the average person the rare opportunity of full-blown negative capability, as our egoic boundaries become thinner or disappear altogether. When the chatter of the ego-mind ceases, the passengers on the boat can hear themselves and begin talking among themselves. The bundle theory of the mind suddenly becomes apparent. We might even come to the realization that the ego was never all that much in control in the first place, that consciousness is a much more limited phenomenon.

Straw Men in the Linguistic Imaginary

“For many of us, the idea that the language we speak affects how we think might seem self-evident, hardly requiring a great deal of scientific proof. However, for decades, the orthodoxy of academia has held categorically that the language a person speaks has no effect on the way they think. To suggest otherwise could land a linguist in such trouble that she risked her career. How did mainstream academic thinking get itself in such a straitjacket?”
~ Jeremy Lent, The Patterning Instinct

Portraying the Sapir-Whorf hypothesis as linguistic determinism is a straw man fallacy. It’s false to speak of a Sapir-Whorf hypothesis at all, as no such hypothesis was ever proposed by Edward Sapir and Benjamin Lee Whorf. Interestingly, it turns out that researchers have since found examples of what could be called linguistic determinism or at least very strong linguistic relativity, although still apparently rare (similar to how examples of genetic determinism are rare). But that is neither here nor there, considering Sapir and Whorf didn’t argue for linguistic determinism, no matter how you quote-mine their texts. The position of relativity, similar to social constructivism, is the wholesale opposite of rigid determinism — besides, linguistic relativism wasn’t even a major focus of Sapir’s work, even as he influenced Whorf.

Turning their view into a caricature of determinism was an act of projection. It was the anti-relativists who were arguing for biology determining language, from Noam Chomsky’s language module in the brain to Brent Berlin and Paul Kay’s supposedly universal color categories. It was masterful rhetoric to turn the charge onto those holding the moderate position in order to dress them up as ideological extremists and charlatans. And with Sapir and Whorf gone from early deaths, they weren’t around to defend themselves and to deny what was claimed on their behalf.

Even Whorf’s sometimes strongly worded view of relativity, by today’s standards and knowledge in the field, doesn’t sound particularly extreme. If anything, to those informed of the most up-to-date research, denying such obvious claims would now sound absurd. How did so many become disconnected from simple truths of human experience that anyone who dared speak these truths could be ridiculed and dismissed out of hand? For generations, relativists stating common sense criticisms of race realism were dismissed in a similar way, and they were often the same people (cultural relativity and linguistic relativity in American scholarship were influenced by Franz Boas) — the argument tying them together is that relativity in expression and embodiment of our shared humanity (think of it more in terms of Daniel Everett’s dark matter of the mind) is based on a complex and flexible set of universal potentials, such that universalism neither requires nor indicates essentialism. Yet why do we go on clinging to so many forms of determinism, essentialism, and nativism, including those ideas advocated by many of Sapir and Whorf’s opponents?

We are in a near impossible situation. Essentialism has been a cornerstone of modern civilization, most of all in its WEIRD varieties. Relativity simply can’t be fully comprehended, much less tolerated, within the dominant paradigm, although as Leavitt argues it resonates with the emphasis on language found in Romanticism, which was a previous response to essentialism. As for linguistic determinism, even if it were true beyond a few exceptional cases, it is by and large an untestable hypothesis at present and so scientifically meaningless within WEIRD science. WEIRD researchers exist in a civilization that has become dominated by WEIRD societies, with nearly all alternatives destroyed or altered beyond their original form. There is nowhere to stand outside of the WEIRD paradigm, least of all for the WEIRDest of the WEIRD researchers doing most of the research.

If certain thoughts are unthinkable within WEIRD culture and language, we have no completely alien mode of thought by which to objectively assess the WEIRD, as imperialism and globalization have left no society untouched. There is no way for us to even think about what might be unthinkable, much less research it. This double bind goes right over the heads of most people, even over the heads of some relativists who fear being disparaged if they don’t outright deny any possibility of the so-called strong Sapir-Whorf hypothesis. That such a hypothesis potentially could describe reality to a greater extent than we’d prefer is, for most people infected with the WEIRD mind virus and living within the WEIRD monocultural reality tunnel, itself an unthinkable thought.

It is unthinkable and, in its fullest form, fundamentally untestable. And so it is terra incognita within the collective mind. The response is typically either uncomfortable irritation or nervous laughter. Still, the limited evidence in support of linguistic determinism points to the possibility of it being found in other as-yet unexplored areas — maybe a fair amount of evidence already exists that will later be reinterpreted when a new frame of understanding becomes established or when someone, maybe generations later, looks at it with fresh eyes. History is filled with moments when something shifted, allowing the incomprehensible and unspeakable to become a serious public debate, sometimes a new social reality. Determinism in all of its varieties seems a generally unfruitful path of research, although in its linguistic form it is compelling as a thought experiment in showing how little we know and can know, how severely constrained our imaginative capacities are.

We don’t look in the darkness where we lost what we are looking for because the light is better elsewhere. But what would we find if we did search the shadows? Whether or not we discovered proof for linguistic determinism, we might stumble across all kinds of other inconvenient evidence pointing toward ever more radical and heretical thoughts. Linguistic relativity and determinism might end up playing a central role less because of the bold answers offered than because of the questions that were dared to be asked. Maybe, in thinking about determinism, we could come to a more profound insight of relativity — after all, a complex enough interplay of seemingly deterministic factors would for all appearances be relativistic, which is to say what seems to be linear causation could, when lines of causation are interwoven, lead to emergent properties. The relativistic whole, in that case, presumably would be greater than the deterministic parts.

Besides, it always depends on perspective. Consider Whorf, who “has been rejected both by cognitivists as a relativist and by symbolic and postmodern anthropologists as a determinist and essentialist” (John Leavitt, Linguistic Relativities, p. 193; Leavitt’s book goes into immense detail about all of the misunderstanding and misinterpretation, much of it because of intellectual laziness or hubris but some of it motivated by ideological agendas; the continuing and consistent wrongheadedness makes it difficult to not take much of it as arguing in bad faith). It’s not always clear what the debate is supposed to be about. Ironically, such terms as ‘determinism’ and ‘relativity’ are relativistic in their use while, in how we use them, determining how we think about the issues and how we interpret the evidence. There is no way to take ourselves out of the debate itself, for our own humanity is what we are trying to place under the microscope, causing us tremendous psychological contortions in maintaining whatever worldview we latch onto.

There is less distance between linguistic relativity and linguistic determinism than is typically assumed. The former says we are only limited by habit of thought and all it entails within culture and relationships. Yet habits of thought can be so powerful as to essentially determine social orders for centuries and millennia. Calling this mere ‘habit’ hardly does it justice. In theory, a society isn’t absolutely determined to be the way it is nor for those within it to behave the way they do, but in practice extremely few individuals ever escape the gravity pull of habitual conformity and groupthink (i.e., Jaynesian self-authorization is more a story we tell ourselves than an actual description of behavior).

So, yes, in terms of genetic potential and neuroplasticity, there was nothing directly stopping Bronze Age Egyptians from starting an industrial revolution and there is nothing stopping a present-day Piraha from becoming a Harvard professor of mathematics — still, the probability of such things happening is next to zero. Consider the rare individuals in our own society who break free of the collective habits of our society, as they usually either end up homeless or institutionalized, typically with severely shortened lives. To not go along with the habits of your society is to be deemed insane, incompetent, and/or dangerous. Collective habits within a social order involve systematic enculturation, indoctrination, and enforcement. The power of language — even if only relativistic — over our minds is one small part of the cultural system, albeit an important part.

We don’t need to go that far with our argument, though. However you want to slice it, there is plenty of evidence that remains to be explained. And the evidence has become overwhelming and, to many, disconcerting. The debate over the validity of the theory of linguistic relativity is over. But the opponents of the theory have had two basic strategies to contain their loss and keep the debate on life support. They conflate linguistic relativity with linguistic determinism and dismiss it as laughably false. Or they concede that linguistic relativity is partly correct but argue that it’s insignificant in influence, as if they never denied it and simply were unimpressed.

“This is characteristic: one defines linguistic relativity in such an extreme way as to make it seem obviously untrue; one is then free to acknowledge the reality of the data at the heart of the idea of linguistic relativity – without, until quite recently, proposing to do any serious research on these data.” (John Leavitt, Linguistic Relativities, p. 166)

Either way, essentialists maintain their position as if no serious challenge was posed. The evidence gets lost in the rhetoric, as the evidence keeps growing.

Still, there is something more challenging that also gets lost in debate, even when evidence is acknowledged. What motivated someone like Whorf wasn’t intellectual victory and academic prestige. There was a sense of human potential locked behind habit. That is why it was so important to study foreign cultures with their diverse languages, not only for the sake of knowledge but to be confronted by entirely different worldviews. Essentialists are on the old imperial path of Whiggish Enlightenment, denying differences by proclaiming that all things Western are the norm of humanity and reality, sometimes taken as a universal ideal state or the primary example by which to measure all else… an ideology that easily morphs into yet darker specters:

“Any attempt to speak of language in general is illusory; the (no doubt French or English) philosopher who does so is merely elevating his own mother tongue to the status of a universal standard (p. 3). See how the discourse of diversity can be turned to defend racism and fascism! I suppose by now this shouldn’t surprise us – we’ve seen so many examples of it at the end of the twentieth and beginning of the twenty-first century.” (John Leavitt, Linguistic Relativities, p. 161)

In this light, it should be unsurprising that the essentialist program presented in Chomskyan linguistics was supported and funded by the Pentagon (their specific interest in this case being about human-computer interface in eliminating messy human error; in studying the brain as a computer, it was expected that the individual human mind could be made more amenable to a computerized system of military action and its accompanying chain-of-command). Essentialism makes promises that are useful for systems of effective control as part of a larger technocratic worldview of social control.

The essentialist path we’ve been on has left centuries of destruction in its wake. But from the humbling vista opening onto further possibilities, the relativists offer not a mere scientific theory but a new path for humanity; or rather, they throw light onto the multiple paths before us. In offering respect and openness toward the otherness of others, we open ourselves toward the otherness within our own humanity. The point is that, though we are trapped in linguistic cultures, the key to our release is to be found in the same place. But this requires courage and curiosity, a broadening of the moral imagination.

Let me end on a note of irony. In comparing linguistic cultures, Joseph Needham wrote that, “Where Western minds asked ‘what essentially is it?’, Chinese minds asked ‘how is it related in its beginnings, functions, and endings with everything else, and how ought we to react to it?’” This was quoted by Jeremy Lent in The Patterning Instinct (p. 206; quote originally from: Science and Civilization in China, vol. 2, History of Scientific Thought, pp. 199-200). Lent makes clear that this has everything to do with language. Chinese language embodies ambiguity and demands contextual understanding, whereas Western or more broadly Indo-European language elicits abstract essentialism.

So, it is a specific linguistic culture of essentialism that influences, if not entirely determines, the Western predisposition to see language as essentialist rather than as relative. And it is this very essentialism that blinds many Westerners, especially abstract-minded intellectuals, to the fact that essentialism is linguistically cultural, not essential to human nature and neurocognitive functioning. That is the irony: this essentialist belief system is itself further proof of linguistic relativity.


* * *

The Patterning Instinct
by Jeremy Lent
pp. 197-205

The ability of these speakers to locate themselves in a way that is impossible for the rest of us is only the most dramatic in an array of discoveries that are causing a revolution in the world of linguistics. Researchers point to the Guugu Yimithirr as prima facie evidence supporting the argument that the language you speak affects how your cognition develops. As soon as they learn their first words, Guugu Yimithirr infants begin to structure their orientation around the cardinal directions. In time, their neural connections get wired accordingly until this form of orientation becomes second nature, and they no longer even have to think about where north, south, east, and west are.3 […]

For many of us, the idea that the language we speak affects how we think might seem self-evident, hardly requiring a great deal of scientific proof. However, for decades, the orthodoxy of academia has held categorically that the language a person speaks has no effect on the way they think. To suggest otherwise could land a linguist in such trouble that she risked her career. How did mainstream academic thinking get itself in such a straitjacket?4

The answer can be found in the remarkable story of one charismatic individual, Benjamin Whorf. In the early twentieth century, Whorf was a student of anthropologist-linguist Edward Sapir, whose detailed study of Native American languages had caused him to propose that a language’s grammatical structure corresponds to patterns of thought in its culture. “We see and hear and otherwise experience very largely as we do,” Sapir suggested, “because the language habits of our community predispose certain choices of interpretation.”5

Whorf took this idea, which became known as the Sapir-Whorf hypothesis, to new heights of rhetoric. The grammar of our language, he claimed, affects how we pattern meaning into the natural world. “We cut up and organize the spread and flow of events as we do,” he wrote, “largely because, through our mother tongue, we are parties to an agreement to do so, not because nature itself is segmented in exactly that way for all to see.”6 […]

Whorf was brilliant but highly controversial. He had a tendency to use sweeping generalizations and dramatic statements to drive home his point. “As goes our segmentation of the face of nature,” he wrote, “so goes our physics of the Cosmos.” Sometimes he went beyond the idea that language affects how we think to a more strident assertion that language literally forces us to think in a certain way. “The forms of a person’s thoughts,” he proclaimed, “are controlled by inexorable laws of pattern of which he is unconscious.” This rhetoric led people to interpret the Sapir-Whorf hypothesis as a theory of linguistic determinism, claiming that people’s thoughts are inevitably determined by the structure of their language.8

A theory of rigid linguistic determinism is easy to discredit. All you need to do is show a Hopi Indian capable of thinking in terms of past, present, and future, and you’ve proven that her language didn’t ordain how she was able to think. The more popular the Sapir-Whorf theory became, the more status could be gained by any researcher who poked holes in it. In time, attacking Sapir-Whorf became a favorite path to academic tenure, until the entire theory became completely discredited.9

In place of the Sapir-Whorf hypothesis arose what is known as the nativist view, which argues that the grammar of language is innate to humankind. As discussed earlier, the theory of universal grammar, proposed by Noam Chomsky in the 1950s and popularized more recently by Steven Pinker, posits that humans have a “language instinct” with grammatical rules coded into our DNA. This theory has dominated the field of linguistics for decades. “There is no scientific evidence,” writes Pinker, “that languages dramatically shape their speakers’ ways of thinking.” Pinker and other adherents to this theory, however, are increasingly having to turn a blind eye—not just to the Guugu Yimithirr but to the accumulating evidence of a number of studies showing the actual effects of language on people’s patterns of thought.10 […]

Psychologist Peter Gordon saw an opportunity to test the most extreme version of the Sapir-Whorf hypothesis with the Pirahã. If language predetermined patterns of thought, then the Pirahã should be unable to count, in spite of the fact that they show rich intelligence in other forms of their daily life. He performed a number of tests with the Pirahã over a two-year period, and his results were convincing: as soon as the Pirahã had to deal with a set of objects beyond three, their counting performance disintegrated. His study, he concludes, “represents a rare and perhaps unique case for strong linguistic determinism.”12

The Guugu Yimithirr, at one end of the spectrum, show the extraordinary skills a language can give its speakers; the Pirahã, at the other end, show how necessary language is for basic skills we take for granted. In between these two extremes, an increasing number of researchers are demonstrating a wide variety of more subtle ways the language we speak can influence how we think.

One set of researchers illustrated how language affects perception. They used the fact that the Greek language has two color terms—ghalazio and ble—that distinguish light and dark blue. They tested the speed with which Greek speakers and English speakers could distinguish between these two different colors, even when they weren’t being asked to name them, and discovered the Greeks were significantly faster.13

Another study demonstrates how language helps structure memory. When bilingual Mandarin-English speakers were asked in English to name a statue of someone with a raised arm looking into the distance, they were more likely to name the Statue of Liberty. When they were asked the same question in Mandarin, they named an equally famous Chinese statue of Mao with his arm raised.14

One intriguing study shows English and Spanish speakers remembering accidental events differently. In English, an accident is usually described in the standard subject-verb-object format of “I broke the bottle.” In Spanish, a reflexive verb is often used without an agent, such as “La botella se rompió”—“the bottle broke.” The researchers took advantage of this difference, asking English and Spanish speakers to watch videos of different intentional and accidental events and later having them remember what happened. Both groups had similar recall for the agents involved in intentional events. However, when remembering the accidental events, English speakers recalled the agents better than the Spanish speakers did.15

Language can also have a significant effect in channeling emotions. One researcher read the same story to Greek-English bilinguals in one language and, then, months later, in the other. Each time, he interviewed them about their feelings in response to the story. The subjects responded differently to the story depending on its language, and many of these differences could be attributed to specific emotion words available in one language but not the other. The English story elicited a sense of frustration in readers, but there is no Greek word for frustration, and this emotion was absent in responses to the Greek story. The Greek version, however, inspired a sense of stenahoria in several readers, an emotion loosely translated as “sadness/discomfort/suffocation.” When one subject was asked why he hadn’t mentioned stenahoria after his English reading of the story, he answered that he cannot feel stenahoria in English, “not just because the word doesn’t exist but because that kind of situation would never arise.”16 […]

Marketing professor David Luna has performed tests on people who are not just bilingual but bicultural—those who have internalized two different cultures—which lend support to this model of cultural frames. Working with people immersed equally in both American and Hispanic cultures, he examined their responses to various advertisements and newspaper articles in both languages and compared them to those of bilinguals who were only immersed in one culture. He reports that biculturals, more than monoculturals, would feel “like a different person” when they spoke different languages, and they accessed different mental frames depending on the cultural context, resulting in shifts in their sense of self.25

In particular, the use of root metaphors, embedded so deeply in our consciousness that we don’t even notice them, influences how we define our sense of self and apply meaning to the world around us. “Metaphor plays a very significant role in determining what is real for us,” writes cognitive linguist George Lakoff. “Metaphorical concepts…structure our present reality. New metaphors have the power to create a new reality.”26

These metaphors enter our minds as infants, as soon as we begin to talk. They establish neural pathways that are continually reinforced until, just like the cardinal directions of the Guugu Yimithirr, we use our metaphorical constructs without even recognizing them as metaphors. When a parent, for example, tells a child to “put that out of your mind,” she is implicitly communicating a metaphor of the MIND AS A CONTAINER that should hold some things and not others.27

When these metaphors are used to make sense of humanity’s place in the cosmos, they become the root metaphors that structure a culture’s approach to meaning. Hunter-gatherers, as we’ve seen, viewed the natural world through the root metaphor of GIVING PARENT, which gave way to the agrarian metaphor of ANCESTOR TO BE PROPITIATED. Both the Vedic and Greek traditions used the root metaphor of HIGH IS GOOD to characterize the source of ultimate meaning as transcendent, while the Chinese used the metaphor of PATH in their conceptualization of the Tao. These metaphors become hidden in plain sight, since they are used so extensively that people begin to accept them as fundamental structures of reality. This, ultimately, is how culture and language reinforce each other, leading to a deep persistence of underlying structures of thought from one generation to the next.28

Linguistic Relativities
by John Leavitt
pp. 138-142

Probably the most famous statement of Sapir’s supposed linguistic determinism comes from “The Status of Linguistics as a Science,” a talk published in 1929:

Human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of a particular language which has become the medium of expression for their society. It is quite an illusion to imagine that one adjusts to reality essentially without the use of language, and that language is merely an incidental means of solving specific problems of communication or reflection. The fact of the matter is that the “real world” is to a large extent unconsciously built up on the language habits of the group. No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are different worlds, not merely the same world with different labels attached … We see and hear and otherwise experience very largely as we do because the language habits of our community predispose certain choices of interpretation. (Sapir 1949: 162)

This is the passage that is most commonly quoted to demonstrate the putative linguistic determinism of Sapir and of his student Whorf, who cites some of it (1956: 134) at the beginning of “The Relation of Habitual Thought and Behavior to Language,” a paper published in a Sapir Festschrift in 1941. But is this linguistic determinism? Or is it the statement of an observed reality that must be dealt with? Note that the passage does not say that it is impossible to translate between different languages, nor to convey the same referential content in both. Note also that there is a piece missing here, between “labels attached” and “We see and hear.” In fact, the way I have presented it, with the three dots, is how this passage is almost always presented (e.g., Lucy 1992a: 22); otherwise, the quote usually ends at “labels attached.” If we look at what has been elided, we find two examples, coming in a new paragraph immediately after “attached.” In a typically Sapirian way, one is poetic, the other perceptual. He begins:

The understanding of a simple poem, for instance, involves not merely an understanding of the single words in their average significance, but a full comprehension of the whole life of the community as it is mirrored in the words, or as it is suggested by the overtones.

So the apparent claim of linguistic determinism is to be illustrated by – a poem (Friedrich 1979: 479–80), and a simple one at that! In light of this missing piece of the passage, what Sapir seems to be saying is not that language determines thought, but that language is part of social reality, and so is thought, and to understand either a thought or “a green thought in a green shade” you need to consider the whole.

The second example is one of the relationship of terminology to classification:

Even comparatively simple acts of perception are very much more at the mercy of the social patterns called words than we might suppose. If one draws some dozen lines, for instance, of different shapes, one perceives them as divisible into such categories as “straight,” “crooked,” “curved,” “zigzag” because of the classificatory suggestiveness of the linguistic terms themselves. We see and hear …

Again, is Sapir here arguing for a determination of thought by language or simply observing that in cases of sorting out complex data, one will tend to use the categories that are available? In the latter case, he would be suggesting to his audience of professionals (the source is a talk given to a joint meeting of the Linguistic Society of America and the American Anthropological Association) that such phenomena may extend beyond simple classification tasks.

Here it is important to distinguish between claims of linguistic determinism and the observation of the utility of available categories, an observation that in itself in no way questions the likely importance of the non-linguistic salience of input or the physiological component of perception. Taken in the context of the overall Boasian approach to language and thought, this is clearly the thrust of Sapir’s comments here. Remember that this was the same man who did the famous “Study on Phonetic Symbolism,” which showed that there are what appear to be universal psychological reactions to certain speech sounds (his term is “symbolic feeling-significance”), regardless of the language or the meaning of the word in which these sounds are found (in Sapir 1949). This evidence against linguistic determinism, as it happens, was published the same year as “The Status of Linguistics as a Science,” but in the Journal of Experimental Psychology.3

The metaphor Sapir uses most regularly for the relation of language patterning to thought is not that of a constraint, but of a road or groove that is relatively easy or hard to follow. In Language, he proposed that languages are “invisible garments” for our spirits; but at the beginning of the book he had already questioned this analogy: “But what if language is not so much a garment as a prepared road or groove?” (p. 15); grammatical patterning provides “grooves of expression, (which) have come to be felt as inevitable” (p. 89; cf. Erickson et al. 1997: 298). One important thing about a road is that you can get off it; of a groove, that you can get out of it. We will see that this kind of wording permeates Whorf’s formulations as well. […]

Since the early 1950s, Sapir’s student Benjamin Lee Whorf (1897–1941) has most often been presented as the very epitome of extreme cognitive relativism and linguistic determinism. Indeed, as the name attached to the “linguistic determinism hypothesis,” a hypothesis almost never evoked but to be denied, Whorf has become both the best-known ethnolinguist outside the field itself and one of the great straw men of the century. This fate is undeserved; he was not a self-made straw man, as Marshall Sahlins once called another well-known anthropologist. While Whorf certainly maintained what he called a principle of linguistic relativity, it is clear from reading Language, Thought, and Reality, the only generally available source of his writings, published posthumously in 1956, and even clearer from still largely unpublished manuscripts, that he was also a strong universalist who accepted the general validity of modern science. With some re-evaluations since the early 1990s (Lucy 1992a; P. Lee 1996), we now have a clearer idea of what Whorf was about.

In spite of sometimes deterministic phraseology, Whorf presumed that much of human thinking and perception was non-linguistic and universal across languages. In particular, he admired Gestalt psychology (P. Lee 1996) as a science giving access to general characteristics of human perception across cultures and languages, including the lived experiences that lie behind the forms that we label time and space. He puts this most clearly in discussions of the presumably universal perception of visual space:

A discovery made by modern configurative or Gestalt psychology gives us a canon of reference, irrespective of their languages or scientific jargons, by which to break down and describe all visually observable situations, and many other situations, also. This is the discovery that visual perception is basically the same for all normal persons past infancy and conforms to definite laws. (Whorf 1956: 165)

Whorf clearly believed there was a real world out there, although, enchanted by quantum mechanics and relativity theory, he also believed that this was not the world as we conceive it, nor that every human being conceives it habitually in the same way.

Whorf also sought and proposed general descriptive principles for the analysis of languages of the most varied type. And along with Sapir, he worked on sound symbolism, proposing the universality of feeling-associations to certain speech sounds (1956: 267). Insofar as he was a good disciple of Sapir and Boas, Whorf believed, like them, in the universality of cognitive abilities and of some fundamental cognitive processes. And far from assuming that language determines thought and culture, Whorf wrote in the paper for the Sapir volume that

I should be the last to pretend that there is anything so definite as “a correlation” between culture and language, and especially between ethnological rubrics such as “agricultural, hunting,” etc., and linguistic ones like “inflected,” “synthetic,” or “isolating.” (pp. 138–9)

p. 146

For Whorf, certain scientific disciplines – elsewhere he names “relativity, quantum theory, electronics, catalysis, colloid chemistry, theory of the gene, Gestalt psychology, psychoanalysis, unbiased cultural anthropology, and so on” (1956: 220), as well as non-Euclidean geometry and, of course, descriptive linguistics – were exemplary in that they revealed aspects of the world profoundly at variance with the world as modern Westerners habitually assume it to be, indeed as the members of any human language and social group habitually assume it to be.

Since Whorf was concerned with linguistic and/or conceptual patterns that people almost always follow in everyday life, he has often been read as a determinist. But as John Lucy pointed out (1992a), Whorf’s critiques clearly bore on habitual thinking, what it is easy to think; his ethical goal was to force us, through learning about other languages, other ways of foregrounding and linking aspects of experience, to think in ways that are not so easy, to follow paths that are not so familiar. Whorf’s argument is not fundamentally about constraint, but about the seductive force of habit, of what is “easily expressible by the type of symbolic means that language employs” (“Model,” 1956: 55) and so easy to think. It is not about the limits of a given language or the limits of thought, since Whorf presumes, Boasian that he is, that any language can convey any referential content.

Whorf’s favorite analogy for the relation of language to thought is the same as Sapir’s: that of tracks, paths, roads, ruts, or grooves. Even Whorf’s most determinist-sounding passages, which are also the ones most cited, sound very different if we take the implications of this analogy seriously: “Thinking … follows a network of tracks laid down in the given language, an organization which may concentrate systematically upon certain phases of reality … and may systematically discard others featured by other languages. The individual is utterly unaware of this organization and is constrained completely within its unbreakable bonds” (1956: 256); “we dissect nature along lines laid down by our native languages” (p. 213). But this is from the same essay in which Whorf asserted the universality of “ways of linking experiences … basically alike for all persons”; and this completely constrained individual is evidently the unreflective (utterly unaware) Mr. Everyman (Schultz 1990), and the very choice of the analogy of traced lines or tracks, assuming that they are not railway tracks – that they are not is suggested by all the other road and path metaphors – leaves open the possibility of getting off the path, if only we had the imagination and the gumption to do it. We can cut cross-country. In the study of an exotic language, he wrote, “we are at long last pushed willy-nilly out of our ruts. Then we find that the exotic language is a mirror held up to our own” (1956: 138). How can Whorf be a determinist, how can he see us as forever trapped in these ruts, if the study of another language is sufficient to push us, kicking and screaming perhaps, out of them?

The total picture, then, is not one of constraint or determinism. It is, on the other hand, a model of powerful seduction: the seduction of what is familiar and easy to think, of what is intellectually restful, of what makes common sense.7 The seduction of the habitual pathway, based largely on laziness and fear of the unknown, can, with work, be resisted and broken. Somewhere in the back of Whorf’s mind may have been the allegory of the broad, fair road to Hell and the narrow, difficult path to Heaven beloved of his Puritan forebears. It makes us think of another New England Protestant: “Two roads diverged in a wood, and I, / I took the one less travelled by, / and that has made all the difference.”

The recognition of the seduction of the familiar implies a real ethical program:

It is the “plainest” English which contains the greatest number of unconscious assumptions about nature … Western culture has made, through language, a provisional analysis of reality and, without correctives, holds resolutely to that analysis as final. The only correctives lie in all those other tongues which by aeons of independent evolution have arrived at different, but equally logical, provisional analyses. (1956: 244)

Learning non-Western languages offers a lesson in humility and awe in an enormous multilingual world:

We shall no longer be able to see a few recent dialects of the Indo-European family, and the rationalizing techniques elaborated from their patterns, as the apex of the evolution of the human mind, nor their present wide spread as due to any survival from fitness or to anything but a few events of history – events that could be called fortunate only from the parochial point of view of the favored parties. They, and our own thought processes with them, can no longer be envisioned as spanning the gamut of reason and knowledge but only as one constellation in a galactic expanse. (p. 218)

The breathtaking sense of sudden vaster possibility, of the sky opening up to reveal a bigger sky beyond, may be what provokes such strong reactions to Whorf. For some, he is simply enraging or ridiculous. For others, reading Whorf is a transformative experience, and there are many stories of students coming to anthropology or linguistics largely because of their reading of Whorf (personal communications; Alford 2002).

pp. 167-168

[T]he rise of cognitive science was accompanied by a restating of what came to be called the “Sapir–Whorf hypothesis” in the most extreme terms. Three arguments came to the fore repeatedly:

Determinism. The Sapir–Whorf hypothesis says that the language you speak, and nothing else, determines how you think and perceive. We have already seen how false a characterization this is: the model the Boasians were working from was only deterministic in cases of no effort, of habitual thought or speaking. With enough effort, it is always possible to change your accent or your ideas.

Hermeticism. The Sapir–Whorf hypothesis maintains that each language is a sealed universe, expressing things that are inexpressible in another language. In such a view, translation would be impossible and Whorf’s attempt to render Hopi concepts in English an absurdity. In fact, the Boasians presumed, rather, that languages were not sealed worlds, but that they were to some degree comparable to worlds, and that passing between them required effort and alertness.

Both of these characterizations are used to set up a now classic article on linguistic relativity by the psychologist Eleanor Rosch (1974):

Are we “trapped” by our language into holding a particular “world view”? Can we never really understand or communicate with speakers of a language quite different from our own because each language has molded the thought of its people into mutually incomprehensible world views? Can we never get “beyond” language to experience the world “directly”? Such issues develop from an extreme form of a position sometimes known as “the Whorfian hypothesis” … and called, more generally, the hypothesis of “linguistic relativity.” (Rosch 1974: 95)

Rosch begins the article noting how intuitively right the importance of language differences first seemed to her, then spends much of the rest of it attacking this initial intuition.

Infinite variability. A third common characterization is that Boasian linguistics holds that, in Martin Joos’s words, “languages can differ from each other without limit and in unpredictable ways” (Joos 1966: 96). This would mean that the identification of any language universal would disprove the approach. In fact, the Boasians worked with the universals that were available to them – these were mainly derived from psychology – but opposed what they saw as the unfounded imposition of false universals that in fact reflected only modern Western prejudices. Joos’s hostile formulation has been cited repeatedly as if it were official Boasian doctrine (see Hymes and Fought 1981: 57).

For over fifty years, these three assertions have largely defined the received understanding of linguistic relativity. Anyone who has participated in discussions and/or arguments about the “Whorfian hypothesis” has heard them over and over again.

pp. 169-173

In the 1950s, anthropologists and psychologists were interested in experimentation and the testing of hypotheses on what was taken to be the model of the natural sciences. At a conference on language in culture, Harry Hoijer (1954) first named a Sapir–Whorf hypothesis that language influences thought.

To call something a hypothesis is to propose to test it, presumably using experimental methods. This task was taken on primarily by psychologists. A number of attempts were made to prove or disprove experimentally that language influences thought (see Lucy 1992a: 127–78; P. Brown 2006). Both “language” and “thought” were narrowed down to make them more amenable to experiment: the aspect of language chosen was usually the lexicon, presumably the easiest aspect to control in an experimental setting; thought was interpreted to mean perceptual discrimination and cognitive processing, aspects of thinking that psychologists were comfortable testing for. Eric Lenneberg defined the problem posed by the “Sapir–Whorf hypothesis” as that of “the relationship that a particular language may have to its speakers’ cognitive processes … Does the structure of a given language affect the thoughts (or thought potential), the memory, the perception, the learning ability of those who speak that language?” (1953: 463). Need I recall that Boas, Sapir, and Whorf went out of their way to deny that different languages were likely to be correlated with strengths and weaknesses in cognitive processes, i.e., in what someone is capable of thinking, as opposed to the contents of habitual cognition? […]

Berlin and Kay started by rephrasing Sapir and Whorf as saying that the search for semantic universals was “fruitless in principle” because “each language is semantically arbitrary relative to every other language” (1969: 2; cf. Lucy 1992a: 177–81). If this is what we are calling linguistic relativity, then if any domain of experience, such as color, is identified in recognizably the same way in different languages, linguistic relativity must be wrong. As we have seen, this fits the arguments of Weisgerber and Bloomfield, but not of Sapir or Whorf. […]

A characteristic study was reported recently in my own university’s in-house newspaper under the title “Language and Perception Are Not Connected” (Baril 2004). The article starts by saying that according to the “Whorf–Sapir hypothesis … language determines perception,” and therefore that “we should not be able to distinguish differences among similar tastes if we do not possess words for expressing their nuances, since it is language that constructs the mode of thought and its concepts … According to this hypothesis, every language projects onto its speakers a system of categories through which they see and interpret the world.” The hypothesis, we are told, has been “disconfirmed since the 1970s” by research on color. The article reports on the research of Dominic Charbonneau, a graduate student in psychology. Intrigued by recent French tests in which professional sommeliers, with their elaborate vocabulary, did no better than regular ignoramuses in distinguishing among wines, Charbonneau carried out his own experiment on coffee – this is, after all, a French-speaking university, and we take coffee seriously. Francophone students were asked to distinguish among different coffees; like most of us, they had a minimal vocabulary for distinguishing them (words like “strong,” “smooth,” “dishwater”). The participants made quite fine distinctions among the eighteen coffees served, well above the possible results of chance, showing that taste discrimination does not depend on vocabulary. Conclusion: “Concepts must be independent of language, which once again disconfirms the Sapir–Whorf hypothesis” (my italics). And this of course would be true if there were such a hypothesis, if it was primarily about vocabulary, and if it said that vocabulary determines perception.

We have seen that Bloomfield and his successors in linguistics maintained the unlimited arbitrariness of color classifications, and so could have served as easy straw men for the cognitivist return to universals. But what did Boas, Sapir, Whorf, or Lee actually have to say about color? Did they in fact claim that color perception or recognition or memory was determined by vocabulary? Sapir and Lee are easy: as far as I have been able to ascertain, neither one of them talked about color at all. Steven Pinker attributes a relativist and determinist view of color classifications to Whorf:

Among Whorf’s “kaleidoscopic flux of impressions,” color is surely the most eye-catching. He noted that we see objects in different hues, depending on the wavelengths of the light they reflect, but that the wavelength is a continuous dimension with nothing delineating red, yellow, green, blue, and so on. Languages differ in their inventory of color words … You can fill in the rest of the argument. It is language that puts the frets in the spectrum. (Pinker 1994: 61–2)

No he didn’t. Whorf never noted anything like this in any of his published work, and Pinker gives no indication of having gone through Whorf’s unpublished papers. As far as I can ascertain, Whorf talks about color in two places; in both he is saying the opposite of what Pinker says he is saying.

pp. 187-188

The 1950s through the 1980s saw the progressive triumph of universalist cognitive science. From the 1980s, one saw the concomitant rise of relativistic postmodernism. By the end of the 1980s there had been a massive return to the old split between universalizing natural sciences and their ancillary social sciences on the one hand, particularizing humanities and their ancillary cultural studies on the other. Some things, in the prevailing view, were universal, others so particular as to call for treatment as fiction or anecdote. Nothing in between was of very much interest, and North American anthropology, the discipline that had been founded upon and achieved a sort of identity in crossing the natural-science/humanities divide, faced an identity crisis. Symptomatically, one noticed many scholarly bookstores disappearing their linguistics sections into “cognitive science,” their anthropology sections into “cultural studies.”

In this climate, linguistic relativity was heresy, Whorf, in particular, a kind of incompetent Antichrist. The “Whorfian hypothesis” of linguistic relativism or determinism became a topos of any anthropology textbook, almost inevitably to be shown to be silly. Otherwise serious linguists and psychologists (e.g., Pinker 1994: 59–64) continued to dismiss the idea of linguistic relativity with an alacrity suggesting alarm and felt free to heap posthumous personal vilification on Whorf, the favorite target, for his lack of official credentials, in some really surprising displays of academic snobbery. Geoffrey Pullum, to take only one example, calls him a “Connecticut fire prevention inspector and weekend language-fancier” and “our man from the Hartford Fire Insurance Company” (Pullum 1989 [1991]: 163). This comes from a book with the subtitle Irreverent Essays on the Study of Language. But how irreverent is it to make fun of somebody almost everybody has been attacking for thirty years?

The Language Myth: Why Language Is Not an Instinct
by Vyvyan Evans
pp. 195-198

Who’s afraid of the Big Bad Whorf?

Psychologist Daniel Casasanto has noted, in an article whose title gives this section its heading, that some researchers find Whorf’s principle of linguistic relativity to be threatening. 6 But why is Whorf such a bogeyman for some? And what makes his notion of linguistic relativity such a dangerous idea?

The rationalists fear linguistic relativity – the very idea of it – and they hate it, with a passion: it directly contradicts everything they stand for – if relativism is anywhere near right, then the rationalist house burns down, or collapses, like a tower of cards without a foundation. And this fear and loathing in parts of the Academy can often, paradoxically, be highly irrational indeed. Relativity is often criticised without argumentative support, or ridiculed, just for the audacity of existing as an intellectual idea to begin with. Jerry Fodor, more candid than most about his irrational fear, just hates it. He says: “The thing is: I hate relativism. I hate relativism more than I hate anything else, excepting, maybe, fiberglass powerboats.” 7 Fodor continues, illustrating further his irrational contempt: “surely, surely, no one but a relativist would drive a fiberglass powerboat”. 8

Fodor’s objection is that relativism overlooks what he deems to be “the fixed structure of human nature”. 9 Mentalese provides the fixed structure – as we saw in the previous chapter. If language could interfere with this innate set of concepts, then the fixed structure would no longer be fixed – anathema to a rationalist.

Others are more coy, but no less damning. Pinker’s strategy is to set up straw men, which he then eloquently – but mercilessly – ridicules. 10 But don’t be fooled, there is no serious argument presented – not on this occasion. Pinker takes an untenable and extreme version of what he claims Whorf said, and then pokes fun at it – a common modus operandi employed by those who are afraid. Pinker argues that Whorf was wrong because he equated language with thought: that Whorf assumes that language causes or determines thought in the first place. This is the “conventional absurdity” that Pinker refers to in the first of his quotations above. For Pinker, Whorf was either romantically naïve about the effects of language, or, worse, like the poorly read and ill-educated, credulous.

But this argument is a classic straw man: it is set up to fail, being made of straw. Whorf never claimed that language determined thought. As we shall see, the thesis of linguistic determinism, which nobody believes, and which Whorf explicitly rejected, was attributed to him long after his death. But Pinker has bought into the very myths peddled by the rationalist tradition for which he is cheerleader-in-chief, and which lives in fear of linguistic relativity. In the final analysis, the language-as-instinct crowd should be afraid, very afraid: linguistic relativity, once and for all, explodes the myth of the language-as-instinct thesis.

The rise of the Sapir-Whorf hypothesis

Benjamin Lee Whorf became interested in linguistics in 1924, and studied it, as a hobby, alongside his full-time job as an engineer. In 1931, Whorf began to attend university classes on a part-time basis, studying with one of the leading linguists of the time, Edward Sapir. 11 Amongst other things covered in his teaching, Sapir touched on what he referred to as “relativity of concepts … [and] the relativity of the form of thought which results from linguistic study”. 12 The notion of the relativistic effect of different languages on thought captured Whorf’s imagination; and so he became captivated by the idea that he was to develop and become famous for. Because Whorf’s claims have often been disputed and misrepresented since his death, let’s see exactly what his formulation of his principle of linguistic relativity was:

Users of markedly different grammars are pointed by their grammars toward different types of observations and different evaluations of externally similar acts of observation, and hence are not equivalent as observers but must arrive at somewhat different views of the world. 13

Indeed, as pointed out by the Whorf scholar, Penny Lee, post-war research rarely ever took Whorf’s principle, or his statements, as their starting point. 14 Rather, his writings were, on the contrary, ignored, and his ideas largely distorted. 15

For one thing, the so-called ‘Sapir-Whorf hypothesis’ was not due to either Sapir or Whorf. Sapir – whose research was not primarily concerned with relativity – and Whorf were lumped together: the term ‘Sapir-Whorf hypothesis’ was coined in the 1950s, over ten years after both men had died – Sapir in 1939, and Whorf in 1941. 16 Moreover, Whorf’s principle emanated from an anthropological research tradition; it was not, strictly speaking, a hypothesis. But, in the 1950s, psychologists Eric Lenneberg and Roger Brown sought to test empirically the notion of linguistic relativity. And to do so, they reformulated it in such a way that it could be tested, producing two testable formulations. 17 One, the so-called ‘strong version’ of relativity, holds that language causes a cognitive restructuring: language causes or determines thought. This is otherwise known as linguistic determinism, Pinker’s “conventional absurdity”. The second hypothesis, which came to be known as the ‘weak version’, claims instead that language influences a cognitive restructuring, rather than causing it. But neither formulation of the so-called ‘Sapir-Whorf hypothesis’ was due to Whorf, or Sapir. Indeed, on the issue of linguistic determinism, Whorf was explicit in arguing against it, saying the following:

The tremendous importance of language cannot, in my opinion, be taken to mean necessarily that nothing is back of it of the nature of what has traditionally been called ‘mind’. My own studies suggest, to me, that language, for all its kingly role, is in some sense a superficial embroidery upon deeper processes of consciousness, which are necessary before any communication, signalling, or symbolism whatsoever can occur. 18

This demonstrates that, in point of fact, Whorf actually believed in something like the ‘fixed structure’ that Fodor claims is lacking in relativity. The delicious irony arising from it all is that Pinker derides Whorf on the basis of the ‘strong version’ of the Sapir-Whorf hypothesis: linguistic determinism – language causes thought. But this strong version was a hypothesis not created by Whorf, but imagined by rationalist psychologists who were dead set against Whorf and linguistic relativity anyway. Moreover, Whorf explicitly disagreed with the thesis that was posthumously attributed to him. The issue of linguistic determinism became, incorrectly and disingenuously, associated with Whorf, growing in the rationalist sub-conscious like a cancer – Whorf was clearly wrong, they reasoned.

In more general terms, defenders of the language-as-instinct thesis have taken a leaf out of the casebook of Noam Chomsky. If you thought that academics play nicely, and fight fair, think again. Successful ideas are the currency, and they guarantee tenure, promotion, influence and fame; and they allow the successful academic to attract Ph.D. students who go out and evangelise, and so help to build intellectual empires. The best defence against ideas that threaten is ridicule. And, since the 1950s, until the intervention of John Lucy in the 1990s – whom I discuss below – relativity was largely dismissed; the study of linguistic relativity was, in effect, off-limits to several generations of researchers.

The Bilingual Mind, And What it Tells Us about Language and Thought
by Aneta Pavlenko
pp. 27-32

1.1.2.4 The real authors of the Sapir-Whorf hypothesis and the invisibility of scientific revolutions

The invisibility of bilingualism in the United States also accounts for the disappearance of multilingual awareness from discussions of Sapir’s and Whorf’s work, which occurred when the two scholars passed away – both at a relatively young age – and their ideas landed in the hands of others. The posthumous collections brought Sapir’s (1949) and Whorf’s (1956) insights to the attention of the wider public (including, inter alia, young Thomas Kuhn) and inspired the emergence of the field of psycholinguistics. But the newly minted psycholinguists faced a major problem: it had never occurred to Sapir and Whorf to put forth testable hypotheses. Whorf showed how linguistic patterns could be systematically investigated through the use of overt categories marked systematically (e.g., number in English or gender in Russian) and covert categories marked only in certain contexts (e.g., gender in English), yet neither he nor Sapir ever elaborated the meaning of ‘different observations’ or ‘psychological correlates’.

Throughout the 1950s and 1960s, scholarly debates at conferences, summer seminars and in academic journals attempted to correct this ‘oversight’ and to ‘systematize’ their ideas (Black, 1959; Brown & Lenneberg, 1954; Fishman, 1960; Hoijer, 1954a; Lenneberg, 1953; Osgood & Sebeok, 1954; Trager, 1959). The term ‘the Sapir-Whorf hypothesis’ was first used by linguistic anthropologist Harry Hoijer (1954b) to refer to the idea “that language functions, not simply as a device for reporting experience, but also, and more significantly, as a way of defining experience for its speakers” (p. 93). The study of the SWH, in Hoijer’s view, was supposed to focus on structural and semantic patterns active in a given language. This version, probably closest to Whorf’s own interest in linguistic classification, was soon replaced by an alternative, developed by psychologists Roger Brown and Eric Lenneberg, who translated Sapir’s and Whorf’s ideas into two ‘testable’ hypotheses (Brown & Lenneberg, 1954; Lenneberg, 1953). The definitive form of the dichotomy was articulated in Brown’s (1958) book Words and Things:

linguistic relativity holds that where there are differences of language there will also be differences of thought, that language and thought covary. Determinism goes beyond this to require that the prior existence of some language pattern is either necessary or sufficient to produce some thought pattern. (p. 260)

In what follows, I will draw on Kuhn’s ([1962] 2012) insights to discuss four aspects of this radical transformation of Sapir’s and Whorf’s ideas into the SWH: (a) it was a major change of paradigm, that is, of shared assumptions, research foci, and methods; (b) it erased multilingual awareness; (c) it created a false dichotomy; and (d) it proceeded unacknowledged.

The change of paradigm was necessitated by the desire to make complex notions, articulated by linguistic anthropologists, fit experimental paradigms in psychology. Yet ideas don’t travel easily across disciplines: Kuhn ([1962] 2012) compares a dialog between scientific communities to intercultural communication, which requires skillful translation if it is to avoid communication breakdowns. Brown and Lenneberg’s translation was not skillful and, while their ideas moved the study of language and cognition forward, they departed from the original arguments in several ways (for discussion, see also Levinson, 2012; Lucy, 1992a; Lee, 1996).

First, they shifted the focus of the inquiry from the effects of obligatory grammatical categories, such as tense, to lexical domains, such as color, that had a rather tenuous relationship to linguistic thought (color differentiation was, in fact, discussed by Boas and Whorf as an ability not influenced by language). Secondly, they shifted from concepts as interpretive categories to cognitive processes, such as perception or memory, that were of little interest to Sapir and Whorf, and proposed to investigate them with artificial stimuli, such as Munsell chips, that hardly reflect habitual thought. Third, they privileged the idea of thought potential (and, by implication, what can be said) over Sapir’s and Whorf’s concerns with obligatory categories and habitual thought (and, by definition, with what is said). Fourth, they missed the insights about the illusory objectivity of one’s own language and replaced the interest in linguistic thought with independent ‘language’ and ‘cognition’. Last, they substituted Humboldt’s, Sapir’s and Whorf’s interest in multilingual awareness with a hypothesis articulated in monolingual terms.

A closer look at Brown’s (1958) book shows that he was fully aware of the existence of bilingualism and of the claims made by bilingual speakers of Native American languages that “thinking is different in the Indian language” (p. 232). His recommendation in this case was to distrust those who have the “unusual” characteristic of being bilingual:

There are few bilinguals, after all, and the testimony of those few cannot be uncritically accepted. There is a familiar inclination on the part of those who possess unusual and arduously obtained experience to exaggerate its remoteness from anything the rest of us know. This must be taken into account when evaluating the impressions of students of Indian languages. In fact, it might be best to translate freely with the Indian languages, assimilating their minds to our own. (Brown, 1958: 233)

The testimony of German–English bilinguals – akin to his own collaborator Eric Heinz Lenneberg – was apparently another matter: the existence of “numerous bilingual persons and countless translated documents” was, for Brown (1958: 232), compelling evidence that the German mind is “very like our own”. Alas, Brown’s (1958) contradictory treatment of bilingualism and the monolingual arrogance of the recommendations ‘to translate freely’ and ‘to assimilate Indian minds to our own’ went unnoticed by his colleagues. The result was the transformation of a fluid and dynamic account of language into a rigid, static false dichotomy.

When we look back, the attribution of the idea of linguistic determinism to multilinguals interested in language evolution and the evolution of the human mind makes little sense. Yet the replacement of the open-ended questions about implications of linguistic diversity with two ‘testable’ hypotheses had a major advantage – it was easier to argue about and to digest. And it was welcomed by scholars who, like Kay and Kempton (1984), applauded the translation of Sapir’s and Whorf’s convoluted passages into direct prose and felt that Brown and Lenneberg “really said all that was necessary” (p. 66) and that the question of what Sapir and Whorf actually thought was interesting but “after all less important than the issue of what is the case” (p. 77). In fact, by the 1980s, Kay and Kempton were among the few who could still trace the transformation to the two psychologists. Their colleagues were largely unaware of it because Brown and Lenneberg concealed the radical nature of their reformulation by giving Sapir and Whorf ‘credit’ for what should have been the Brown-Lenneberg hypothesis.

We might never know what prompted this unusual scholarly modesty – a sincere belief that they were simply ‘improving’ Sapir and Whorf, or the desire to distance themselves from a hypothesis articulated only to be ‘disproved’. For Kuhn ([1962] 2012), this is science as usual: “it is just this sort of change in the formulation of questions and answers that accounts, far more than novel empirical discoveries, for the transition from Aristotelian to Galilean and from Galilean to Newtonian dynamics” (p. 139). He also points to the hidden nature of many scientific revolutions, concealed by textbooks that provide a substitute for what they have eliminated and make scientific development look linear, truncating scientists’ knowledge of the history of their discipline. This is precisely what happened with the SWH: the newly minted hypothesis took on a life of its own, multiplying and reproducing itself in myriads of textbooks, articles, lectures, and popular media, and moving the discussion further and further away from Sapir’s primary interest in ‘social reality’ and Whorf’s central concern with ‘habitual thought’.

The transformation was facilitated by four common academic practices that allow us to manage the ever-increasing amount of literature in the ever-decreasing amount of time: (a) simplification of complex arguments (which often results in misinterpretation); (b) reduction of original texts to standard quotes; (c) reliance on other people’s exegeses; and (d) uncritical reproduction of received knowledge. The very frequency of this reproduction made the SWH a ‘fact on the ground’, accepted as a valid substitution for the original ideas. The new terms of engagement became part of habitual thought in the Ivory Tower and to this day are considered obligatory by many academics, who begin their disquisitions on linguistic relativity with a nod towards the sound-bite version of ‘strong’ determinism and ‘weak’ relativity. In Kuhn’s ([1962] 2012) view, this perpetuation of a new set of shared assumptions is a key marker of a successful paradigm change: “When the individual scientist can take a paradigm for granted, he need no longer, in his major works, attempt to build his field anew, starting from first principles and justifying the use of each concept introduced” (p. 20).

Yet the false dichotomy reified in the SWH – and the affective framing of one hypothesis as strong and the other as weak – moved the goalposts and reset the target and the standards needed to achieve it, giving scholars a clear indication of which hypothesis they should address. This preference, too, was perpetuated by countless researchers who, like Langacker (1976: 308), dismissed the ‘weak’ version as obviously true but uninteresting and extolled ‘the strongest’ as “the most interesting version of the LRH” but also as “obviously false”. And indeed, the research conducted on Brown’s and Lenneberg’s terms failed to ‘prove’ linguistic determinism and instead revealed ‘minor’ language effects on cognition (e.g., Brown & Lenneberg, 1954; Lenneberg, 1953) or no effects at all (Heider, 1972). The studies by Gipper (1976) 4 and Malotki (1983) showed that even Whorf’s core claims, about the concept of time in Hopi, may have been misguided. 5 This ‘failure’ too became part of the SWH lore, with textbooks firmly stating that “a strong version of the Whorfian hypothesis cannot be true” (Foss & Hakes, 1978: 393).

By the 1980s, there emerged an implicit consensus in US academia that Whorfianism was “a bête noire, identified with scholarly irresponsibility, fuzzy thinking, lack of rigor, and even immorality” (Lakoff, 1987: 304). This consensus was shaped by a political climate supportive of the notion of ‘free thought’ yet hostile to linguistic diversity, by educational policies that reinforced monolingualism, and by the rise of cognitive science and meaning-free linguistics, which replaced the study of meaning with a focus on structures and universals. Yet the implications of Sapir’s and Whorf’s ideas continued to be debated (e.g., Fishman, 1980, 1982; Kay & Kempton, 1984; Lakoff, 1987; Lucy & Shweder, 1979; McCormack & Wurm, 1977; Pinxten, 1976), and in the early 1990s the inimitable Pinker decided to put the specter of the SWH to bed once and for all. Performing a feat reminiscent of Humpty Dumpty, Pinker (1994) made the SWH ‘mean’ what he wanted it to mean, namely “the idea that thought is the same thing as language” (p. 57). Leaving behind Brown’s (1958) articulation, with its modest co-variation, he replaced it in the minds of countless undergraduates with

the famous Sapir-Whorf hypothesis of linguistic determinism, stating that people’s thoughts are determined by the categories made available by their language, and its weaker version, linguistic relativity, stating that differences among languages cause differences in the thoughts of their speakers. (Pinker, 1994: 57)

And lest they still thought that there was something to it, Pinker (1994) told them that it is “an example of what can be called a conventional absurdity” (p. 57) and that “it is wrong, all wrong” (p. 57). Ironically, this ‘obituary’ for the SWH coincided with the neo-Whorfian revival, through the efforts of several linguists, psychologists, and anthropologists – most notably Gumperz and Levinson (1996), Lakoff (1987), Lee (1996), Lucy (1992a, b), and Slobin (1991, 1996a) – who were willing to buck the tide, to engage with the original texts, and to devise new methods of inquiry. This work will form the core of the chapters to come, but for now I want to emphasize that the received belief in the validity of the terms of engagement articulated by Brown and Lenneberg, and their attribution to Sapir and Whorf, is still pervasive in many academic circles and evident in the numerous books and articles that regurgitate the SWH as the strong/weak dichotomy. The vulgarization of Whorf’s views bemoaned by Fishman (1982) also continues in popular accounts, and I fully agree with Pullum (1991) who, in his own critique of Whorf, noted:

Once the public has decided to accept something as an interesting fact, it becomes almost impossible to get the acceptance rescinded. The persistent interestingness and symbolic usefulness overrides any lack of factuality. (p. 159)

Popularizers of academic work continue to stigmatize Whorf through comments such as “anyone can estimate the time of day, even the Hopi Indians; these people were once attributed with a lack of any conception of time by a book-bound scholar, who had never met them” (Richards, 1998: 44). Even respectable linguists perpetuate the strawman version of “extreme relativism – the idea that there are no facts common to all cultures and languages” (Everett, 2012: 201) or take cheap shots at “the most notorious of the con men, Benjamin Lee Whorf, who seduced a whole generation into believing, without a shred of evidence, that American Indian languages lead their speakers to an entirely different conception of reality from ours” (Deutscher, 2010: 21). This assertion is then followed by a statement that while the link between language, culture, and cognition “seems perfectly kosher in theory, in practice the mere whiff of the subject today makes most linguists, psychologists, and anthropologists recoil” because the topic “carries with it a baggage of intellectual history which is so disgraceful that the mere suspicion of association with it can immediately brand anyone a fraud” (Deutscher, 2010: 21).

Such comments are not just an innocent rhetorical strategy aimed at selling more copies: the use of hyperbole (most linguists, psychologists, and anthropologists; mere suspicion of association), affect (disgraceful, fraud, recoil, embarrassment), misrepresentation (disgraceful baggage of intellectual history), straw man arguments and reductio ad absurdum as means of persuasion has played a major role in manufacturing the false consent in the history of ideas that Deutscher (2010) finds so ‘disgraceful’ (readers interested in the dirty tricks used by scholars should read the expert description by Pinker, 2007: 89–90). What is particularly interesting is that both Deutscher (2010) and Everett (2012) actually marshal evidence in support of Whorf’s original arguments. Their attempt to do so while distancing themselves from Whorf would have fascinated Whorf, for it reveals two patterns of habitual thought common in English-language academia: the uncritical adoption of the received version of the SWH and the reliance on the metaphor of ‘argument as war’ (Tannen, 1998), i.e., the assumption that each argument has ‘two sides’ (not one or three), that these sides should be polarized in either/or terms, and that, in order to present oneself as a ‘reasonable’ author, one should exaggerate the alternatives and then occupy the ‘rational’ position in between. Add to this the reductionism common to trade books and the knowledge that criticism sells better than praise, and you get Whorf as a ‘con man’.

Dark Matter of the Mind
by Daniel L. Everett
Kindle Locations 352-373

I am here particularly concerned with difference, however, rather than sameness among the members of our species—with variation rather than homeostasis. This is because the variability in dark matter from one society to another is fundamental to human survival, arising from and sustaining our species’ ecological diversity. The range of possibilities produces a variety of “human natures” (cf. Ehrlich 2001). Crucial to the perspective here is the concept-apperception continuum. Concepts can always be made explicit; apperceptions less so. The latter result from a culturally guided experiential memory (whether conscious, unconscious, or bodily). Such memories can be not only difficult to talk about but often ineffable (see Majid and Levinson 2011; Levinson and Majid 2014). Yet both apperception and conceptual knowledge are uniquely determined by culture, personal history, and physiology, contributing vitally to the formation of the individual psyche and body.

Dark matter emerges from individuals living in cultures and thereby underscores the flexibility of the human brain. Instincts are incompatible with flexibility. Thus special care must be given to evaluating arguments in support of them (see Blumberg 2006 for cogent criticisms of many purported examples of instincts, as well as the abuse of the term in the literature). If we have an instinct to do something one way, this would impede learning to do it another way. For this reason it would surprise me if creatures higher on the mental and cerebral evolutionary scale—you and I, for example—did not have fewer rather than more instincts. Humans, unlike cockroaches and rats—two other highly successful members of the animal kingdom—adapt holistically to the world in which they live, in the sense that they can learn to solve problems across environmental niches, then teach their solutions and reflect on these solutions. Cultures turn out to be vital to this human adaptational flexibility—so much so that the most important cognitive question becomes not “What is in the brain?” but “What is the brain in?” (That is, in what individual, residing in what culture, does this particular brain reside?)

The brain, by this view, was designed to be as close to a blank slate as was possible for survival. In other words, the views of Aristotle, Sapir, Locke, Hume, and others better fit what we know about the nature of the brain and human evolution than the views of Plato, Bastian, Freud, Chomsky, Tooby, Pinker, and others. Aristotle’s tabula rasa seems closer to being right than is currently fashionable to suppose, especially when we answer the pointed question: what is left in the mind/brain when culture is removed?

Most of the lessons of this book derive from the idea that our brains (including our emotions) and our cultures are related symbiotically through the individual, and that neither supervenes on the other. In this framework, nativist ideas often are superfluous.

Kindle Locations 3117-3212

Science, we might say, ought to be exempt from dark matter. Yet that is much harder to claim than to demonstrate. […] To take a concrete example of a science, we focus on linguistics, because this discipline straddles the borders between the sciences, humanities, and social sciences. The basic idea to be explored is this: because counterexamples and exceptions are culturally determined in linguistics, as in all sciences, scientific progress is the output of cultural values. These values differ even within the same discipline (e.g., linguistics), however, and can lead to different notions of progress in science. To mitigate this problem, therefore, to return to linguistics research as our primary example, our inquiry should be informed by multiple theories, with a focus on languageS rather than Language. To generalize, this would mean a focus on the particular rather than the general in many cases. Such a focus (in spite of the contrast between this and many scientists’ view that generalizations are the goal of science) develops a robust empirical basis while helping to distinguish local theoretical culture from broader, transculturally agreed-upon desiderata of science—an issue that theories of language, in a way arguably more extreme than in other disciplines, struggle to tease apart.

The reason that a discussion of science and dark matter is important here is to probe the significance and meaning of dark matter, culture, and psychology in the more comfortable, familiar territory of the reader, to understand that what we are contemplating here is not limited to cultures unlike our own, but affects every person, every endeavor of Homo sapiens, even the hallowed enterprise of science. This is not to say that science is merely a cultural illusion. This chapter has nothing to do with postmodernist epistemological relativity. But it does aim to show that science is not “pure rationality,” autonomous from its cultural matrix. […]

Whether we classify an anomaly as counterexample or exception depends on our dark matter— our personal history plus cultural values, roles, and knowledge structures. And the consequences of our classification are also determined by culture and dark matter. Thus, by social consensus, exceptions fall outside the scope of the statements of a theory or are explicitly acknowledged by the theory to be “problems” or “mysteries.” They are not immediate problems for the theory. Counterexamples, on the other hand, by social consensus render a statement false. They are immediately acknowledged as (at least potential) problems for any theory. Once again, counterexamples and exceptions are the same etically, though they are nearly polar opposites emically. Each is defined relative to a specific theoretical tradition, a specific set of values, knowledge structures, and roles— that is, a particular culture.

One bias that operates in theories, the confirmation bias, is the cultural value that a theory is true and therefore that experiments are going to strengthen it, confirm it, but not falsify it. Anomalies appearing in experiments conducted by adherents of a particular theory are much more likely to be interpreted as exceptions that might require some adjustments of the instruments, but nothing serious in terms of the foundational assumptions of the theory. On the other hand, when anomalies turn up in experiments by opponents of a theory, there will be a natural bias to interpret these as counterexamples that should lead to the abandonment of the theory. Other values that can come into play for the cultural/ theoretical classification of an anomaly as a counterexample or an exception include “tolerance for cognitive dissonance,” a value of the theory that says “maintain that the theory is right and, at least temporarily, set aside problematic facts,” assuming that they will find a solution after the passage of a bit of time. Some theoreticians call this tolerance “Galilean science”— the willingness to set aside all problematic data because a theory seems right. Fair enough. But when, why, and for how long a theory seems right in the face of counterexamples is a cultural decision, not one that is based on facts alone. We have seen that the facts of a counterexample and an exception can be exactly the same. Part of the issue of course is that data, like their interpretations, are subject to emicization. We decide to see data with a meaning, ignoring the particular variations that some other theory might seize on as crucial. In linguistics, for example, if a theory (e.g., Chomskyan theory) says that all relevant grammatical facts stop at the boundary of the sentence, then related facts at the level of paragraphs, stories, and so on, are overlooked.

The cultural and dark matter forces determining the interpretation of anomalies in the data that lead one to abandon a theory and another to maintain it themselves create new social situations that confound the intellect and the sense of morality that often is associated with the practice of a particular theory. William James (1907, 198) summed up some of the reactions to his own work, as evidence of these reactions to the larger field of intellectual endeavors: “I fully expect to see the pragmatist view of truth run through the classic stages of a theory’s career. First, you know, a new theory is attacked as absurd; then it is admitted to be true, but obvious and insignificant; finally it is seen to be so important that its adversaries claim that they themselves discovered it.”

In recent years, due to my research and claims regarding the grammar of the Amazonian Pirahã— that this language lacks recursion— I have been called a charlatan and a dull wit who has misunderstood. It has been (somewhat inconsistently) further claimed that my results are predicted (Chomsky 2010, 2014); it has been claimed that an alternative notion of recursion, Merge, was what the authors had in mind in saying that recursion is the foundation of human languages; and so on. And my results have been claimed to be irrelevant.

* * *

Beyond Our Present Knowledge
Useful Fictions Becoming Less Useful
Essentialism On the Decline
Is the Tide Starting to Turn on Genetics and Culture?
Blue on Blue
The Chomsky Problem
Dark Matter of the Mind
What is the Blank Slate of the Mind?
Cultural Body-Mind
How Universal Is The Mind?
The Psychology and Anthropology of Consciousness
On Truth and Bullshit

The Mind in the Body

“[In the Old Testament], human faculties and bodily organs enjoy a measure of independence that is simply difficult to grasp today without dismissing it as merely poetic speech or, even worse, ‘primitive thinking.’ […] In short, the biblical character presents itself to us more as parts than as a whole”
(Robert A. Di Vito, “Old Testament Anthropology and the Construction of Personal Identity”, pp. 227-228)

The Axial Age was a transitional stage following the collapse of the Bronze Age civilizations. In that transition, new mindsets mixed with old: what came before trying to contain the rupture, what was forming not yet fully born. Writing, texts, and laws were replacing voices gone quiet and silent. Ancient forms of authorization no longer were as viscerally real and psychologically compelling. But the transition period was long and slow, and in many ways it continues to this day (e.g., authoritarianism as vestigial bicameralism).

One aspect was the changing sense of identity, as experienced within the body and the world. But let me take a step back. In hunter-gatherer societies, animism is a common attribute: the world is alive with voices, and the sense of identity, involving a sensory immersion not limited to the body, extends into the surrounding environment. The bicameral mind seems to have been a reworking of this mentality for the emerging agricultural villages and city-states. Instead of the body as part of the natural environment, there was the body politic, with the community as a coherent whole, a living organism. Without a metaphorical framing of inside and outside as the crux of identity, as would later develop, self and other were defined by permeable collectivism rather than rigid individualism (the bundle theory of mind taken to the extreme of a bundle theory of society).

In the late Bronze Age, large and expansive theocratic hierarchies formed. Writing took on an ever greater role. All of this combined to make the bicameral order precarious. The act of writing and reading texts was still integrated with voice-hearing traditions, a text being the literal ‘word’ of a god, spirit, or ancestor. But writing the voices down began the process of creating psychological distance, the text beginning to take authority onto itself. This became a competing metaphorical framing: truth and reality as text.

This transformed the perception of the body. The voices became harder to decipher. Hearing a voice of authority speak to you required little interpretation, but a text emphasizes the need for interpretation. Reading became a way of thinking about the world and about one’s way of being in the world. Divination and similar practices were attempts to read the world. Clouds or lightning, the flight of birds or the organs of a sacrificial animal — these were texts to be read.

Likewise, the body became a repository of voices, although initially not quite a unitary whole. Different aspects of self and spirits, different energies and forces were located and contained in various organs and body parts — to the extent that they had minds of their own, a potentially distressing condition and sometimes interpreted as possession. As the bicameral community was a body politic, the post-bicameral body initiated the internalization of community. But this body as community didn’t at first have a clear egoic ruler — the need for this growing stronger as external authorization further weakened. Eventually, it became necessary to locate the ruling self in a particular place within, such as the heart or throat or head. This was a forceful suppression of the many voices and hence a disallowing of the perception of self as community. The narrative of individuality began to be told.

Even today, we go on looking for a voice in some particular location. Noam Chomsky’s theory of a language organ is an example of this. We struggle for authorization within consciousness, as the ancient grounding of authorization in the world and in community has been lost, cast into the shadows.

Still, dissociation having taken hold, the voices never disappear, and they continue to demand to be heard, if only as symptoms of physical and psychological disease. Or else we let the thousand voices of media tell us how to think and what to do. Ultimately, trying to contain authorization within us is impossible, and so authorization spills back out into the world, the return of the repressed. Our sense of individualism is much more a superficial rationalization than we’d like to admit. The social nature of our humanity can’t be denied.

As with post-bicameral humanity, we are still trying to navigate this complex and confounding social reality. Maybe that is why Axial Age religions, in first articulating the dilemma of conscious individuality, remain compelling in what they taught. The Axial Age prophets gave voice to our own ambivalence, and maybe that is what gives the ego such power over us. We moderns haven’t become disconnected and dissociated merely because of some recent affliction — such a state of mind is what we inherited, as the foundation of our civilization.

* * *

“Therefore when thou doest thine alms, do not sound a trumpet before thee, as the hypocrites do in the synagogues and in the streets, that they may have glory of men. Verily I say unto you, They have their reward. But when thou doest alms, let not thy left hand know what thy right hand doeth: That thine alms may be in secret: and thy Father which seeth in secret himself shall reward thee openly.” (Matthew 6:2-4)

“Wherefore if thy hand or thy foot offend thee, cut them off, and cast them from thee: it is better for thee to enter into life halt or maimed, rather than having two hands or two feet to be cast into everlasting fire. And if thine eye offend thee, pluck it out, and cast it from thee: it is better for thee to enter into life with one eye, rather than having two eyes to be cast into hell fire.” (Matthew 18:8-9)

The Prince of Medicine
by Susan P. Mattern
pp. 232-233

He mentions speaking with many women who described themselves as “hysterical,” that is, having an illness caused, as they believed, by a condition of the uterus (hystera in Greek) whose symptoms varied from muscle contractions to lethargy to nearly complete asphyxia (Loc. Affect. 6.5, 8.414K). Galen, very aware of Herophilus’s discovery of the broad ligaments anchoring the uterus to the pelvis, denied that the uterus wandered around the body like an animal wreaking havoc (the Hippocratics imagined a very actively mobile womb). But the uterus could, in his view, become withdrawn in some direction or inflamed; and in one passage he recommends the ancient practice of fumigating the vagina with sweet-smelling odors to attract the uterus, endowed in this view with senses and desires of its own, to its proper place; this technique is described in the Hippocratic Corpus but also evokes folk or shamanistic medicine.

“Between the Dream and Reality”:
Divination in the Novels of Cormac McCarthy

by Robert A. Kottage
pp. 50-52

A definition of haruspicy is in order. Known to the ancient Romans as the Etrusca disciplina or “Etruscan art” (P.B. Ellis 221), haruspicy originally included all three types of divination practiced by the Etruscan hierophant: interpretation of fulgura (lightnings), of monstra (birth defects and unusual meteorological occurrences), and of exta (internal organs) (Hammond). Of these, the practice still commonly associated with the term is the examination of organs, as evidenced by its OED definition: “The practice or function of a haruspex; divination by inspection of the entrails of victims” (“haruspicy”). A detailed science of liver divination developed in the ancient world, and instructional bronze liver models formed by the Etruscans—as well as those made by their predecessors the Hittites and Babylonians—have survived (Hammond). Any unusual features were noted and interpreted by those trained in the esoteric art: “Significant for the exta were the size, shape, colour, and markings of the vital organs, especially the livers and gall bladders of sheep, changes in which were believed by many races to arise supernaturally… and to be susceptible of interpretation by established rules” (Hammond). Julian Jaynes, in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind, comments on the unique quality of haruspicy as a form of divination, arriving as it did at the dawn of written language: “Extispicy [divining through exta] differs from other methods in that the metaphrand is explicitly not the speech or actions of the gods, but their writing. The baru [Babylonian priest] first addressed the gods… with requests that they ‘write’ their message upon the entrails of the animal” (Jaynes 243). Jaynes also remarks that organs found to contain messages of import would sometimes be sent to kings, like letters from the gods (Jaynes 244). Primitive man sought (and found) meaning everywhere.

The logic behind the belief was simple: the whole universe is a single, harmonious organism, with the thoughts and intentions of the intangible gods reflected in the tangible world. For those illiterate to such portents, a lightning bolt or the birth of a hermaphrodite would have been untranslatable; but for those with proper training, the cosmos was as alive with signs as any language:

The Babylonians believed that the decisions of their gods, like those of their kings, were arbitrary, but that mankind could at least guess their will. Any event on earth, even a trivial one, could reflect or foreshadow the intentions of the gods because the universe is a living organism, a whole, and what happens in one part of it might be caused by a happening in some distant part. Here we see a germ of the theory of cosmic sympathy formulated by Posidonius. (Luck 230)

This view of the capricious gods behaving like human kings is reminiscent of the evil archons of gnosticism; however, unlike gnosticism, the notion of cosmic sympathy implies an illuminated and vastly “readable” world, even in the darkness of matter. The Greeks viewed pneuma as “the substance that penetrates and unifies all things. In fact, this tension holds bodies together, and every coherent thing would collapse without it” (Lawrence)—a notion that diverges from the gnostic idea of pneuma as spiritual light temporarily trapped in the pall of physicality.

Proper vision, then, is central to all the offices of the haruspex. The world cooperates with the seer by being illuminated, readable.

p. 160

Jaynes establishes the important distinction between the modern notion of chance commonly associated with coin flipping and the attitude of the ancient Mesopotamians toward sortilege:

We are so used to the huge variety of games of chance, of throwing dice, roulette wheels, etc., all of them vestiges of this ancient practice of divination by lots, that we find it difficult to really appreciate the significance of this practice historically. It is a help here to realize that there was no concept of chance whatever until very recent times…. [B]ecause there was no chance, the result had to be caused by the gods whose intentions were being divined. (Jaynes 240)

In a world devoid of luck, proper divination is simply a matter of decoding the signs—bad readings are never the fault of the gods, but can only stem from the reader.

The Consciousness of John’s Gospel
A Prolegomenon to a Jaynesian-Jamesonian Approach

by Jonathan Bernier

When reading the prologue’s historical passages, one notes a central theme: the Baptist witnesses to the light coming into the world. Put otherwise, the historical witnesses to the cosmological. This, I suggest, can be understood as an example of what Jaynes (1976: 317–338) calls ‘the quest for authorization.’ As the bicameral mind broke down, as exteriorised thought ascribed to other-worldly agents gave way to interiorised thought ascribed to oneself, as the voices of the gods spoke less frequently, people sought out new means, extrinsic to themselves, by which to authorise belief and practice; they quite literally did not trust themselves. They turned to oracles and prophets, to auguries and haruspices, to ecstatics and ecstasy. Proclamatory prophecy of the sort practiced by John the Baptist should be understood in terms of the bicameral mind: the Lord God of Israel, external to the Baptist, issued imperatives to the Baptist, and then the Baptist, external to his audience, relayed those divine imperatives to his listeners. Those who chose to follow the Baptist’s imperatives operated according to the logic of the bicameral mind, as described by Jaynes (1976: 84–99): the divine voice speaks, therefore I act. That voice just happens now to be mediated through the prophet, and not apprehended directly in the way that the bicameral mind apprehended the voices and visions. The Baptist as witness to God’s words and Word is the Baptist as bicameral vestige.

By way of contrast, the Word-become-flesh can be articulated in terms of the bicameral mind giving way to consciousness. The Jesus of the prologue represents the apogee of interiorised consciousness: the Word is not just inside him, but he in fact is the Word. 1:17 draws attention to an implication consequent to this indwelling of the Word: with the divine Word – and thus also the divine words – dwelling fully within oneself, what need is there for that set of exteriorised thoughts known as the Mosaic Law? […]

[O]ne notes Jaynes’ (1976: 301, 318) suggestion that the Mosaic Law represents a sort of half-way house between bicameral exteriority and conscious interiority: no longer able to hear the voices, the ancient Israelites sought external authorisation in the written word; eventually, however, as the Jewish people became increasingly acclimated to conscious interiority, they became increasingly ambivalent towards the need for and role of such exteriorised authorisation. Jaynes (1976: 318) highlights Jesus’ place in this emerging ambivalence; however, in 1:17 it is not so much that exteriorised authorisation is displaced by interiorised consciousness but that Torah as exteriorised authority is replaced by Jesus as exteriorised authority. Jesus, the fully conscious Word-made-flesh, might displace the Law, but it is not altogether clear that he offers his followers a full turn towards interiorised consciousness; one might, rather, read 1:17 as a bicameral attempt to re-contain the cognition revolution of which Jaynes considers Jesus to be a flag-bearer.

The Discovery of the Mind
by Bruno Snell
pp. 6-8

We find it difficult to conceive of a mentality which made no provision for the body as such. Among the early expressions designating what was later rendered as soma or ‘body’, only the plurals γυῖα, μέλεα, etc. refer to the physical nature of the body; for chros is merely the limit of the body, and demas represents the frame, the structure, and occurs only in the accusative of specification. As it is, early Greek art actually corroborates our impression that the physical body of man was comprehended, not as a unit but as an aggregate. Not until the classical art of the fifth century do we find attempts to depict the body as an organic unit whose parts are mutually correlated. In the preceding period the body is a mere construct of independent parts variously put together.6 It must not be thought, however, that the pictures of human beings from the time of Homer are like the primitive drawings to which our children have accustomed us, though they too simply add limb to limb.

Our children usually represent the human shape as shown in fig. 1, whereas fig. 2 reproduces the Greek concept as found on the vases of the geometric period. Our children first draw a body as the central and most important part of their design; then they add the head, the arms and the legs. The geometric figures, on the other hand, lack this central part; they are nothing but μέλεα καὶ γυῖα, i.e. limbs with strong muscles, separated from each other by means of exaggerated joints. This difference is of course partially dependent upon the clothes they wore, but even after we have made due allowance for this the fact remains that the Greeks of this early period seem to have seen in a strangely ‘articulated’ way. In their eyes the individual limbs are clearly distinguished from each other, and the joints are, for the sake of emphasis, presented as extraordinarily thin, while the fleshy parts are made to bulge just as unrealistically. The early Greek drawing seeks to demonstrate the agility of the human figure, the drawing of the modern child its compactness and unity.

Thus the early Greeks did not, either in their language or in the visual arts, grasp the body as a unit. The phenomenon is the same as with the verbs denoting sight; in the latter, the activity is at first understood in terms of its conspicuous modes, of the various attitudes and sentiments connected with it, and it is a long time before speech begins to address itself to the essential function of this activity. It seems, then, as if language aims progressively to express the essence of an act, but is at first unable to comprehend it because it is a function, and as such neither tangibly apparent nor associated with certain unambiguous emotions. As soon, however, as it is recognized and has received a name, it has come into existence, and the knowledge of its existence quickly becomes common property. Concerning the body, the chain of events may have been somewhat like this: in the early period a speaker, when faced by another person, was apparently satisfied to call out his name: this is Achilles, or to say: this is a man. As a next step, the most conspicuous elements of his appearance are described, namely his limbs as existing side by side; their functional correlation is not apprehended in its full importance until somewhat later. True enough, the function is a concrete fact, but its objective existence does not manifest itself so clearly as the presence of the individual corporeal limbs, and its prior significance escapes even the owner of the limbs himself. With the discovery of this hidden unity, of course, it is at once appreciated as an immediate and self-explanatory truth.

This objective truth, it must be admitted, does not exist for man until it is seen and known and designated by a word; until, thereby, it has become an object of thought. Of course the Homeric man had a body exactly like the later Greeks, but he did not know it qua body, but merely as the sum total of his limbs. This is another way of saying that the Homeric Greeks did not yet have a body in the modern sense of the word; body, soma, is a later interpretation of what was originally comprehended as μέλη or γυῖα, i.e. as limbs. Again and again Homer speaks of fleet legs, of knees in speedy motion, of sinewy arms; it is in these limbs, immediately evident as they are to his eyes, that he locates the secret of life.7

Hebrew and Buddhist Selves:
A Constructive Postmodern Study

by Nicholas F. Gier

Finally, at least two biblical scholars–in response to the question “What good is this pre-modern self?”–have suggested that the Hebrew view (we add the Buddhist and the Chinese) can be used to counterbalance the dysfunctional elements of modern selfhood. Both Robert Di Vito and Jacqueline Lapsley have called this move “postmodern,” based, as they contend, on the concept of intersubjectivity.[3] In his interpretation of Charles S. Peirce as a constructive postmodern thinker, Peter Ochs observes that Peirce reaffirms the Hebraic view that relationality is knowledge at its most basic level.  As Ochs states: “Peirce did not read Hebrew, but the ancient Israelite term for ‘knowledge’–yidiah–may convey Peirce’s claim better than any term he used.  For the biblical authors, ‘to know’ is ‘to have intercourse with’–with the world, with one’s spouse, with God.”[4]

The view that the self is self-sufficient and self-contained is a seductive abstraction that contradicts the very facts of our interdependent existence.  Modern social atomism was most likely the result of modeling the self on an immutable transcendent deity (more Greek than biblical) and/or the inert isolated atom of modern science. […]

It is surprising to discover that the Buddhist skandhas are more mental in character, while the Hebrew self is more material in very concrete ways.  For example, the Psalmist says that “all my inner parts (=heart-mind) bless God’s holy name” (103.1); his kidneys (=conscience) chastise him (16.7); and broken bones rejoice (16:7).  Hebrew bones offer us the most dramatic example of a view of human essence most contrary to Christian theology.  One’s essential core is not immaterial and invisible; rather, it is one’s bones, the most enduring remnant of a person’s being.  When the nepeš “rejoices in the Lord” at Ps. 35.9, the poet, in typical parallel fashion, then has the bones speak for her in v. 10.  Jeremiah describes his passion for Yahweh as a “fire” in his heart (lēb) that is also in his bones (20.9), just as we say that a great orator has “fire in his belly.” The bones of the exiles will form the foundation of those who will be restored by Yahweh’s rûah in Ezekiel 37, and later Pharisaic Judaism speaks of the bones of the deceased “sprouting” with new life in their resurrected bodies.[7]  The bones of the prophet Elijah have special healing powers (2 Kgs. 13.21).  Therefore, the cult of relic bones does indeed have scriptural basis, and we also note the obvious parallel to the worship of the Buddha’s bones.

With all these body parts functioning in various ways, it is hard to find, as Robert A. Di Vito suggests, “a true ‘center’ for the [Hebrew] person . . . a ‘consciousness’ or a self-contained ‘self.’”[8] Di Vito also observes that the Hebrew word for face (pānîm) is plural, reflecting all the ways in which a person appears in multifarious social interactions.  The plurality of faces in Chinese culture is similar, including the “loss of face” when a younger brother fails to defer to his elder brother, who would have a different “face” with respect to his father.  One may be tempted to say that the jīva is the center of the Buddhist self, but that would not be accurate because this term simply designates the functioning of all the skandhas together.

Both David Kalupahana and Peter Harvey demonstrate how much influence material form (rūpa) has on Buddhist personality, even at the highest stage of spiritual development.[9]  It is Zen Buddhists, however, who match the earthy Hebrew rhetoric about the human person. When Bodhidharma (d. 534 CE) prepared to depart from his body, he asked four of his disciples what they had learned from him.  As each of them answered they were offered a part of his body: his skin, his flesh, his bones, and his marrow.  The Zen monk Nangaku also compared the achievements of his six disciples to six parts of his body. Deliberately inverting the usual priority of mind over body, the Zen monk Dogen (1200-1253) declared that “The Buddha Way is therefore to be attained above all through the body.”[10]  Interestingly enough, the Hebrews rank the flesh, skin, bones, and sinews as the most essential parts of the body-soul.[11]  The great Buddhist dialectician Nagarjuna (2nd Century CE) appears to be the source of Bodhidharma’s body correlates, but it is clear that Nagarjuna meant them as metaphors.[12]  In contrast it seems clear that, although dead bones rejoicing is most likely a figure of speech, the Hebrews were convinced that we think, feel, and perceive through and with all parts of our bodies.

In Search of a Christian Identity
by Robert Hamilton

The essential points here are the “social disengagement” of the modern self, away from identifying solely with roles defined by the family group, and the development of a “personal unity” within the individual. Morally speaking, we are no longer empty vessels to be filled up by some god, or servant of god; we are now responsible for our own actions and decisions, in light of our own moral compass. I would like to mention Julian Jaynes’s seminal work, The Origin of Consciousness in the Breakdown of the Bicameral Mind, as a pertinent hypothesis for an attempt to understand the enormous distance between the modern sense of self and that of the ancient mind, with its largely absent subjective state.[13]

“The preposterous hypothesis we have come to in the previous chapter is that at one time human nature was split in two, an executive part called a god, and a follower part called a man.”[14]

This hypothesis sits very well with Di Vito’s description of the permeable personal identity of Old Testament characters, who are “taken over,” or possessed, by Yahweh.[15] The evidence of the Old Testament stories points in this direction, where we have patriarchal family leaders, like Abraham and Noah, going around making morally contentious decisions (in today’s terms) based on their internal dialogue with a god – Jehovah.[16] As Jaynes postulates later in his book, today we would call this behaviour schizophrenia. Di Vito, later in the article, confirms that:

“Of course, this relative disregard for autonomy in no way limits one’s responsibility for conduct–not even when Yhwh has given “statutes that were not good” in order to destroy Israel” (Ezek 20:25-26).[17]

Cognitive Perspectives on Early Christology
by Daniel McClellan

The insights of CSR [cognitive science of religion] also better inform our reconstruction of early Jewish concepts of agency, identity, and divinity. Almost twenty years ago, Robert A. Di Vito argued from an anthropological perspective that the “person” in the Hebrew Bible “is more radically decentered, ‘dividual,’ and undefined with respect to personal boundaries … [and] in sharp contrast to modernity, it is identified more closely with, and by, its social roles.”40 Personhood was divisible and permeable in the Hebrew Bible, and while there was diachronic and synchronic variation in certain details, the same is evident in the literature of Second Temple Judaism and early Christianity. This is most clear in the widespread understanding of the spirit (רוח) and the soul (נפש) – often used interchangeably – as the primary loci of a person’s agency or capacity to act.41 Both entities were usually considered primarily constitutive of a person’s identity, but also distinct from their physical body and capable of existence apart from it.42 The physical body could also be penetrated or overcome by external “spirits,” and such possession imposed the agency and capacities of the possessor.43 The God of Israel was largely patterned after this concept of personhood,44 and was similarly partible, with God’s glory (Hebrew: כבוד; Greek: δόξα), wisdom (חכמה/σοφία), spirit (רוח/πνεῦµα), word (דבר/λόγος), presence (שכינה), and name (שם/ὄνοµα) operating as autonomous and sometimes personified loci of agency that could presence the deity and also possess persons (or cultic objects45) and/or endow them with special status or powers.46

Did Christianity lead to schizophrenia?
Psychosis, psychology and self reference

by Roland Littlewood

This new deity could be encountered anywhere—“Wherever two are gathered in my name” (Matthew 18.20)—for Christianity was universal and individual (“neither Jew nor Greek… bond nor free… male or female, for you are all one man in Christ Jesus” says St. Paul). And ultimate control rested with Him, Creator and Master of the whole universe, throughout the whole universe. No longer was there any point in threatening your recalcitrant (Egyptian) idol for not coming up with the goods (Cumont, 1911/1958, p. 93): as similarly in colonial Africa, at least according to the missionaries (Peel, 2000). If God was independent of social context and place, then so was the individual self at least in its conversations with God (as Dilthey argues). Religious status was no longer signalled by external signs (circumcision), or social position (the higher stages of the Roman priesthood had been occupied by aspiring politicians in the course of their career: “The internal status of the officiating person was a matter of… indifference to the celestial spirits” [Cumont, 1911/1958, p. 91]). “Now it is not our flesh that we must circumcise, we must crucify ourselves, exterminate and mortify our unreasonable desires” (John Chrysostom, 1979), “circumcise your heart” says “St. Barnabas” (2003, p. 45) for religion became internal and private. Like the African or Roman self (Mauss, 1938/1979), the Jewish self had been embedded in a functioning society, individually decentred and socially contextualised (Di Vito, 1999); it survived death only through its bodily descendants: “But Abram cried, what can you give me, seeing I shall die childless” (Genesis 15.2). To die without issue was extinction in both religious systems (Madigan & Levenson, 2008). But now an enduring part of the self, or an associate of it—the soul—had a connection to what might be called body and consciousness yet had some sort of ill-defined association with them. In its earthly body it was in potential communication with God.
Like God it was immaterial and immortal. (The associated resurrection of the physical body, though an essential part of Christian dogma, has played an increasingly less important part in the Church [cf. Stroumsa, 1990].) For the 19th-century pagan Yoruba, who already accepted some idea of a hereafter, each village had its separate afterlife, which had to be fused by the missionaries into a more universal schema (Peel, 2000, p. 175). If the conversation with God was one to one, then each self-aware individual then had to make up their own mind on adherence—and thus the detached observer became the surveyor of the whole world (Dumont, 1985). Sacral and secular became distinct (separate “functions” as Dumont calls them), further presaging a split between psychological faculties. The idea of the self/soul as an autonomous unit facing God became the basis, via the stages Mauss (1938/1979) briefly outlines, for a political philosophy of individualism (MacFarlane, 1978). The missionaries in Africa constantly attempted to reach the inside of their converts, but bemoaned that the Yoruba did not seem to have any inward core to the self (Peel, 2000, Chapter 9).

Embodying the Gospel:
Two Exemplary Practices

by Joel B. Green
pp. 12-16

Philosopher Charles Taylor’s magisterial account of the development of personal identity in the West provides a useful point of entry into this discussion. He shows how modern assumptions about personhood in the West developed from Augustine in the fourth and fifth centuries, through major European philosophers in the seventeenth and eighteenth centuries (e.g., Descartes, Locke, Kant), and into the present. The result is a modern human “self defined by the powers of disengaged reason—with its associated ideals of self-responsible freedom and dignity—of self-exploration, and of personal commitment.”2 These emphases provide a launching point for our modern conception of “inwardness,” that is, the widespread view that people have an inner self, which is the authentic self.

Given this baseline understanding of the human person, it would seem only natural to understand conversion in terms of interiority, and this is precisely what William James has done for the modern West. In his enormously influential 1901–02 Gifford Lectures at Edinburgh University, published in 1902 under the title The Varieties of Religious Experience, James identifies salvation as the resolution of a person’s inner, subjective crisis. Salvation for James is thus an individual, instantaneous, feeling-based, interior experience.3 Following James, A.D. Nock’s celebrated study of conversion in antiquity reached a similar conclusion: “By conversion we mean the reorientation of the soul of an individual, his [sic] deliberate turning from indifference or from an earlier form of piety to another, a turning which involves a consciousness that a great change is involved, that the old was wrong and the new is right.” Nock goes on to write of “a passion of willingness and acquiescence, which removes the feeling of anxiety, a sense of perceiving truths not known before, a sense of clean and beautiful newness within and without and an ecstasy of happiness . . .”4 In short, what is needed is a “change of heart.”

However pervasive they may be in the contemporary West, whether inside or outside the church, such assumptions actually sit uneasily with Old and New Testament portraits of humanity. Let me mention two studies that press our thinking in an alternative direction. Writing with reference to Old Testament anthropology, Robert Di Vito finds that the human “(1) is deeply embedded, or engaged, in its social identity, (2) is comparatively decentered and undefined with respect to personal boundaries, (3) is relatively transparent, socialized, and embodied (in other words, is altogether lacking in a sense of ‘inner depths’), and (4) is ‘authentic’ precisely in its heteronomy, in its obedience to another and dependence upon another.”5 Two aspects of Di Vito’s summary are of special interest: first, his emphasis on a more communitarian experience of personhood; and second, his emphasis on embodiment. Were we to take seriously what these assumptions might mean for embracing and living out the Gospel, we might reflect more on what it means to be saved within the community of God’s people and, indeed, what it means to be saved in relation to the whole of God’s creation. We might also reflect less on conversion as decision-making and more on conversion as pattern-of-life.

The second study, by Klaus Berger, concerns the New Testament. Here, Berger investigates the New Testament’s “historical psychology,” repeatedly highlighting both the ease with which we read New Testament texts against modern understandings of humanity and the problems resident in our doing so.6 His list of troublesome assumptions—troublesome because they are more at home in the contemporary West than in the ancient Mediterranean world—includes these dualities, sometimes even dichotomies: doing and being, identity and behavior, internal and external. A more integrated understanding of people, the sort we find in the New Testament world, he insists, would emphasize life patterns that hold together believing, thinking, feeling, and behaving, and allow for a clear understanding that human behavior in the world is both simply and profoundly embodied belief. Perspectives on human transformation that take their point of departure from this “psychology” would emphasize humans in relationship with other humans, the bodily nature of human allegiances and commitments, and the fully integrated character of human faith and life. […]

Given how John’s message is framed in an agricultural context, it is not a surprise that his point turns on an organic metaphor rather than a mechanical one. The resulting frame has no room for prioritizing inner (e.g., “mind” or “heart”) over outer (e.g., “body” or “behavior”), nor for fitting disparate pieces together to manufacture a “product,” nor for correlating status and activity as cause and effect. Organic metaphors neither depend on nor provoke images of hierarchical systems but invite images of integration, interrelation, and interdependence. Consistent with this organic metaphor, practices do not occupy a space outside the system of change, but are themselves part and parcel of the system. In short, John’s agricultural metaphor inseparably binds “is” and “does” together.

Resurrection and the Restoration of Israel:
The Ultimate Victory of the God of Life
by Jon Douglas Levenson
pp. 108-114

In our second chapter, we discussed one of the prime warrants often adduced either for the rejection of resurrection (by better-informed individuals) or for its alleged absence, and the alleged absence of any notion of the afterlife, in Judaism (by less informed individuals). That warrant is the finality of death in the Hebrew Bible, or at least in most of it, and certainly in what is from a Jewish point of view its most important subsection, the first five books. For no resurrections take place therein, and predictions of a general resurrection at the end of time can be found in the written Torah only through ingenious derash of the sort that the rabbinic tradition itself does not univocally endorse or replicate in its translations. In the same chapter, we also identified one difficulty with this notion that the Pentateuch exhibits no possibility of an afterlife but supports, instead, the absolute finality of death, and to this point we must now return. I am speaking of the difficulty of separating individuals from their families (including the extended family that is the nation). If, in fact, individuals are fundamentally and inextricably embedded within their families, then their own deaths, however terrifying in prospect, will lack the finality that death carries with it in a culture with a more individualistic, atomistic understanding of the self. What I am saying here is something more radical than the truism that in the Hebrew Bible, parents draw consolation from the thought that their descendants will survive them (e.g., Gen 48:11), just as, conversely, the parents are plunged into a paralyzing grief at the thought that their progeny have perished (e.g., Gen 37:33–35; Jer 31:15). This is, of course, the case, and probably more so in the ancient world, where children were the support of one’s old age, than in modern societies, where the state and the pension fund fill many roles previously concentrated in the family.
That to which I am pointing, rather, is that the self of an individual in ancient Israel was entwined with the self of his or her family in ways that are foreign to the modern West, and became foreign to some degree already long ago.

Let us take as an example the passage in which Jacob is granted ‘‘the blessing of Abraham,’’ his grandfather, according to the prayer of Isaac, his father, to ‘‘possess the land where you are sojourning, which God assigned to Abraham’’ (Gen 28:1–4). The blessing on Abraham, as we have seen, can be altogether and satisfactorily fulfilled in Abraham’s descendants. Thus, too, can Ezekiel envision the appointment of ‘‘a single shepherd over [Israel] to tend them—My servant David,’’ who had passed away many generations before (Ezek 34:23). Can we, without derash, see in this a prediction that David, king of Judah and Israel, will be raised from the dead? To do so is to move outside the language of the text and the culture of Israel at the time of Ezekiel, which does not speak of the resurrections of individuals at all. But to say, as the School of Rabbi Ishmael said about ‘‘to Aaron’’ in Num 18:28,1 that Ezekiel means only one who is ‘‘like David’’—a humble shepherd boy who comes to triumph in battle and rises to royal estate, vindicating his nation and making it secure and just—is not quite the whole truth, either. For biblical Hebrew is quite capable of saying that one person is ‘‘like’’ another or descends from another’s lineage (e.g., Deut 18:15; 2 Kgs 22:2; Isa 11:1) without implying identity of some sort. The more likely interpretation, rather, is that Ezekiel here predicts the miraculous appearance of a royal figure who is not only like David but also of David, a person of Davidic lineage, that is, who functions as David redivivus. This is not the resurrection of a dead man, to be sure, but neither is it the appearance of some unrelated person who only acts like David, or of a descendant who is ‘‘a chip off the old block.’’ David is, in one obvious sense, dead and buried (1 Kgs 2:10), and his death is final and irreversible. 
In another sense, harder for us to grasp, however, his identity survives him and can be manifested again in a descendant who acts as he did (or, to be more precise, as Ezekiel thought he acted) and in whom the promise to David is at long last fulfilled. For David’s identity was not restricted to the one man of that name but can reappear to a large measure in kin who share it.

This is obviously not reincarnation. For that term implies that the ancient Israelites believed in something like the later Jewish and Christian ‘‘soul’’ or like the notion (such as one finds in some religions) of a disembodied consciousness that can reappear in another person after its last incarnation has died. In the Hebrew Bible, however, there is nothing of the kind. The best approximation is the nepes, the part of the person that manifests his or her life force or vitality most directly. James Barr defines the nepes as ‘‘a superior controlling centre which accompanies, exposes and directs the existence of that totality [of the personality] and one which, especially, provides the life to the whole.’’2 Although the nepes does exhibit a special relationship to the life of the whole person, it is doubtful that it constitutes ‘‘a superior controlling center.’’ As Robert Di Vito points out, ‘‘in the OT, human faculties and bodily organs enjoy a measure of independence that is simply difficult to grasp today without dismissing it as merely poetic speech or, even worse, ‘primitive thinking.’’’ Thus, the eye talks or thinks (Job 24:15) and even mocks (Prov 30:17), the ear commends or pronounces blessed (Job 29:11), blood cries out (Gen 4:10), the nepes (perhaps in the sense of gullet or appetite) labors (Prov 16:26) or pines (Ps 84:3), kidneys rejoice and lips speak (Prov 23:16), hands shed blood (Deut 21:7), the heart and flesh sing (Ps 84:3), all the psalmist’s bones say, ‘‘Lord, who is like you?’’ (Ps 35:10), tongue and lips lie or speak the truth (Prov 12:19, 22), hearts are faithful (Neh 9:8) or wayward (Jer 5:23), and so forth.3 The point is not that the individual is simply an agglomeration of distinct parts. It is, rather, that the nepes is one part of the self among many and does not control the entirety, as the old translation ‘‘soul’’ might lead us to expect.4 A similar point might be made about the modern usage of the term person.

[4. It is less clear to me that this is also Di Vito’s point. He writes, for example: ‘‘The biblical character presents itself to us more as parts than as a whole . . . accordingly, in the OT one searches in vain for anything really corresponding to the Platonic localization of desire and emotion in a central ‘locale,’ like the ‘soul’ under the hegemony of reason, a unified and self-contained center from which the individual’s activities might flow, a ‘self’ that might finally assert its control’’ (‘‘Old Testament Anthropology,’’ 228).]

All of the organs listed above, Di Vito points out, are ‘‘susceptible to moral judgment and evaluation.’’5 Not only that, parts of the body besides the nepes can actually experience emotional states. As Aubrey R. Johnson notes, ‘‘Despondency, for example, is felt to have a shriveling effect upon the bones . . . just as they are said to decay or become soft with fear or distress, and so may be referred to as being themselves troubled or afraid’’ (e.g., Ezek 37:11; Hab 3:16; Jer 23:9; Ps 31:11). In other words, ‘‘the various members and secretions of the body . . . can all be thought of as revealing psychical properties,’’6 and this is another way of saying that the nepes does not really correspond to Barr’s ‘‘superior controlling centre’’ at all. For many of the functions here attributed to the nepes are actually distributed across a number of parts of the body. The heart, too, often functions as the ‘‘controlling centre,’’ determining, for example, whether Israel will follow God’s laws or not (e.g., Ezek 11:19). The nepes in the sense of the life force of the body is sometimes identified with the blood, rather than with an insensible spiritual essence of the sort that words like ‘‘soul’’ or ‘‘person’’ imply. It is in light of this that we can best understand the Pentateuchal laws that forbid the eating of blood on the grounds that it is the equivalent of eating life itself, eating, that is, an animal that is not altogether dead (Lev 17:11, 14; Deut 12:23; cf. Gen 9:4–5). If the nepes ‘‘provides the life to the whole,’’7 so does the blood, with which laws like these, in fact, equate it. The bones, which, as we have just noted, can experience emotional states, function likewise on occasion. When a dead man is hurriedly thrown into Elisha’s grave in 2 Kgs 13:21, it is contact with the wonder-working prophet’s bones that brings about his resurrection. 
And when the primal man at long last finds his soul mate, he exclaims not that she (unlike the animals who have just been presented to him) shares a nepes with him but rather that she ‘‘is bone of my bones / And flesh of my flesh’’ (Gen 2:23).

In sum, even if the nepes does occasionally function as a ‘‘controlling centre’’ or a provider of life, it does not do so uniquely. The ancient Israelite self is more dynamic and internally complex than such a formulation allows. It should also be noticed that unlike the ‘‘soul’’ in most Western philosophy, the biblical nepes can die. When the non-Israelite prophet Balaam expresses his wish to ‘‘die the death of the upright,’’ it is his nepes that he hopes will share their fate (Num 23:10), and the same applies to Samson when he voices his desire to die with the Philistines whose temple he then topples upon all (Judg 16:30). Indeed, ‘‘to kill the nepes’’ functions as a term for homicide in biblical Hebrew, in which context, as elsewhere, it indeed has a meaning like that of the English ‘‘person’’ (e.g., Num 31:19; Ezek 13:19).8 As Hans Walter Wolff puts it, nepes ‘‘is never given the meaning of an indestructible core of being, in contradistinction to the physical life . . . capable of living when cut off from that life.’’9 Like heart, blood, and bones, the nepes can cease to function. It is not quite correct to say, however, that this is because it is ‘‘physical’’ rather than ‘‘spiritual,’’ for the other parts of the self that we consider physical— heart, blood, bones, or whatever—are ‘‘spiritual’’ as well—registering emotions, reacting to situations, prompting behavior, expressing ideas, each in its own way. 
A more accurate summary statement would be Johnson’s: ‘‘The Israelite conception of man [is] as a psycho-physical organism.’’10 ‘‘For some time at least [after a person’s death] he may live on as an individual (apart from his possible survival within the social unit),’’ observes Johnson, ‘‘in such scattered elements of his personality as the bones, the blood and the name.’’11 It would seem to follow that if ever he is to return ‘‘as a psycho-physical organism,’’ it will have to be not through reincarnation of his soul in some new person but through the resurrection of the body, with all its parts reassembled and revitalized. For in the understanding of the Hebrew Bible, a human being is not a spirit, soul, or consciousness that happens to inhabit this body or that—or none at all. Rather, the unity of body and soul (to phrase the point in the unhappy dualistic vocabulary that is still quite removed from the way the Hebrew Bible thought about such things) is basic to the person. It thus follows that however distant the resurrection of the dead may be from the understanding of death and life in ancient Israel, the concept of immortality in the sense of a soul that survives death is even more distant. And whatever the biblical problems with the doctrine of resurrection—and they are formidable—the biblical problems with the immortality that modern Jewish prayer books prefer (as we saw in our first chapter) are even greater.

Di Vito points, however, to an aspect of the construction of the self in ancient Israel that does have some affinities with immortality. This is the thorough embeddedness of that individual within the family and the corollary difficulty in the context of this culture of isolating a self apart from the kin group. Drawing upon Charles Taylor’s highly suggestive study The Sources of the Self,12 Di Vito points out that ‘‘salient features of modern identity, such as its pronounced individualism, are grounded in modernity’s location of the self in the ‘inner depths’ of one’s interiority rather than in one’s social role or public relations.’’13 Cautioning against the naïve assumption that ancient Israel adhered to the same conception of the self, Di Vito develops four points of contrast between modern Western and ancient Israelite thinking on this point. In the Hebrew Bible,

the subject (1) is deeply embedded, or engaged, in its social identity, (2) is comparatively decentered and undefined with respect to personal boundaries, (3) is relatively transparent, socialized, and embodied (in other words, is altogether lacking in a sense of ‘‘inner depths’’), and (4) is ‘‘authentic’’ precisely in its heteronomy, in its obedience to another and dependence upon another.14

Although Di Vito’s formulation is overstated and too simple—is every biblical figure, even David, presented as ‘‘altogether lacking in a sense of ‘inner depths’’’?—his first and last points are highly instructive and suggest that the familial and social understanding of ‘‘life’’ in the Hebrew Bible is congruent with larger issues in ancient Israelite culture. ‘‘Life’’ and ‘‘death’’ mean different things in a culture like ours, in which the subject is not so ‘‘deeply embedded . . . in its social identity’’ and in which authenticity tends to be associated with cultivation of individual traits at the expense of conformity, and with the attainment of personal autonomy and independence.

The contrast between the biblical and the modern Western constructions of personal identity is glaring when one considers the structure of what Di Vito calls ‘‘the patriarchal family.’’ This ‘‘system,’’ he tells us, ‘‘with strict subordination of individual goals to those of the extended lineal group, is designed to ensure the continuity and survival of the family.’’15 In this, of course, such a system stands in marked contrast to the liberal political theory that has developed over the past three and a half centuries, which, in fact, virtually assures that people committed to that theory above all else will find the Israelite system oppressive. For liberal political theory has increasingly envisioned a system in which society is composed of only two entities, the state and individual citizens, all of whom have equal rights quite apart from their familial identities and roles. Whether or not one affirms such an identity or plays the role that comes with it (or any role different from that of other citizens) is thus relegated to the domain of private choice. Individuals are guaranteed the freedom to renounce the goals of ‘‘the extended lineal group’’ and ignore ‘‘the continuity and survival of the family,’’ or, increasingly, to redefine ‘‘family’’ according to their own private preferences. In this particular modern type of society, individuals may draw consolation from the thought that their group (however defined) will survive their own deaths. As we have had occasion to remark, there is no reason to doubt that ancient Israelites did so, too. But in a society like ancient Israel, in which ‘‘the subject . . . is deeply embedded, or engaged, in its social identity,’’ ‘‘with strict subordination of individual goals to those of the extended lineal group,’’ the loss of the subject’s own life and the survival of the familial group cannot but have a very different resonance from the one most familiar to us.
For even though the subject’s death is irreversible—his or her nepes having died just like the rest of his or her body/soul—his or her fulfillment may yet occur, for identity survives death. God can keep his promise to Abraham or his promise to Israel associated with the gift of David even after Abraham or David, as an individual subject, has died. Indeed, in light of Di Vito’s point that ‘‘the subject . . . is comparatively decentered and undefined with respect to personal boundaries,’’ the very distinction between Abraham and the nation whose covenant came through him (Genesis 15; 17), or between David and the Judean dynasty whom the Lord has pledged never to abandon (2 Sam 7:8–16; Ps 89:20–38), is too facile.

Our examination of personal identity in the earlier literature of the Hebrew Bible thus suggests that the conventional view is too simple: death was not final and irreversible after all, at least not in the way in which we are inclined to think of these matters. This is not, however, because individuals were believed to possess an indestructible essence that survived their bodies. On the one hand, the body itself was thought to be animated in ways foreign to modern materialistic and biologistic thinking, but, on the other, even its most spiritual part, its nepeš (life force) or its nĕšāmâ (breath), was mortal. Rather, the boundary between individual subjects and the familial/ethnic/national group in which they dwelt, to which they were subordinate, and on which they depended was so fluid as to rob death of some of the horror it has in more individualistic cultures, influenced by some version of social atomism. In more theological texts, one sees this in the notion that subjects can die a good death, ‘‘old and contented . . . and gathered to [their] kin,’’ like Abraham, who lived to see a partial—though only a partial—fulfillment of God’s promise of land, progeny, and blessing upon him, or like Job, also ‘‘old and contented’’ after his adversity came to an end and his fortunes—including progeny—were restored (Gen 25:8; Job 42:17). If either of these patriarchal figures still felt terror in the face of his death, even after his afflictions had been reversed, the Bible gives us no hint of it.16 Death in situations like these is not a punishment, a cause for complaint against God, or the provocation of an existential crisis. But neither is it death as later cultures, including our own, conceive it.

Given this embeddedness in family, there is in Israelite culture, however, a threat that is the functional equivalent to death as we think of it. This is the absence or loss of descendants.

The Master and His Emissary
by Iain McGilchrist
pp. 263-264

Whoever it was that composed or wrote them [the Homeric epics], they are notable for being the earliest works of Western civilisation that exemplify a number of characteristics that are of interest to us. For in their most notable qualities – their ability to sustain a unified theme and produce a single, whole coherent narrative over a considerable length, in their degree of empathy, and insight into character, and in their strong sense of noble values (Scheler’s Lebenswerte and above) – they suggest a more highly evolved right hemisphere.

That might make one think of the importance to the right hemisphere of the human face. Yet, despite this, there are in Homeric epic few descriptions of faces. There is no doubt about the reality of the emotions experienced by the figures caught up in the drama of the Iliad or the Odyssey: their feelings of pride, hate, envy, anger, shame, pity and love are the stuff of which the drama is made. But for the most part these emotions are conveyed as relating to the body and to bodily gesture, rather than the face – though there are moments, such as at the reunion of Penelope and Odysseus at the end of the Odyssey, when we seem to see the faces of the characters, Penelope’s eyes full of tears, those of Odysseus betraying the ‘ache of longing rising from his breast’. The lack of emphasis on the face might seem puzzling at a time of increasing empathic engagement, but I think there is a reason for this.

In Homer, as I mentioned in Part I, there was no word for the body as such, nor for the soul or the mind, for that matter, in the living person. The sōma was what was left on the battlefield, and the psuchē was what took flight from the lips of the dying warrior. In the living person, when Homer wants to speak of someone’s mind or thoughts, he refers to what is effectively a physical organ – Achilles, for example, ‘consulting his thumos’. Although the thumos is a source of vital energy within that leads us to certain actions, the thumos has fleshly characteristics such as requiring food and drink, and a bodily situation, though this varies. According to Michael Clarke’s Flesh and Spirit in the Songs of Homer, Homeric man does not have a body or a mind: ‘rather this thought and consciousness are as inseparable a part of his bodily life as are movement and metabolism’. 15 The body is indistinguishable from the whole person. 16 ‘Thinking, emotion, awareness, reflection, will’ are undertaken in the breast, not the head: ‘the ongoing process of thought is conceived of as if it were precisely identified with the palpable inhalation of the breath, and the half-imagined mingling of breath with blood and bodily fluids in the soft, warm, flowing substances that make up what is behind the chest wall.’ 17 He stresses the importance of flow, of melting and of coagulation. The common ground of meaning is not in a particular static thing but in the ongoing process of living, which ‘can be seen and encapsulated in different contexts by a length of time or an oozing liquid’. These are all images of transition between different states of flux, different degrees of permanence, and allowing the possibility of ambiguity: ‘The relationship between the bodily and mental identity of these entities is subtle and elusive.’ 18 Here there is no necessity for the question ‘is this mind or is it body?’ to have a definitive answer.
Such forbearance, however, had become impossible by the time of Plato, and remains, according to current trends in neurophilosophy, impossible today.

Words suggestive of the mind, the thumos ‘family’, for example, range fluidly and continuously between actor and activity, between the entity that thinks and the thoughts or emotions that are its products. 19 Here Clarke is speaking of terms such as is, aiōn, menos. ‘The life of Homeric man is defined in terms of processes more precisely than of things.’ 20 Menos, for example, refers to force or strength, and can also mean semen, despite being often located in the chest. But it also refers to ‘the force of violent self-propelled motion in something non-human’, perhaps like Scheler’s Drang: again more an activity than a thing. 21

This profound embodiment of thought and emotion, this emphasis on processes that are always in flux, rather than on single, static entities, this refusal of the ‘either/or’ distinction between mind and body, all perhaps again suggest a right-hemisphere-dependent version of the world. But what is equally obvious to the modern mind is the relative closeness of the point of view. And that, I believe, helps to explain why there is little description of the face: to attend to the face requires a degree of detached observation. That there is here a work of art at all, a capacity to frame human existence in this way, suggests, it is true, a degree of distance, as well as a degree of co-operation of the hemispheres in achieving it. But it is the gradual evolution of greater distance in post-Homeric Greek culture that causes the efflorescence, the ‘unpacking’, of both right and left hemisphere capacities in the service of both art and science.

With that distance comes the term closest to the modern, more disembodied, idea of mind, nous (or noos), which is rare in Homer. When nous does occur in Homer, it remains distinct, almost always intellectual, not part of the body in any straightforward sense: according to Clarke it ‘may be virtually identified with a plan or stratagem’. 22 In conformation to the processes of the left hemisphere, it is like the flight of an arrow, directional. 23

By the late fifth and fourth centuries, separate ‘concepts of body and soul were firmly fixed in Greek culture’. 24 In Plato, and thence for the next two thousand years, the soul is a prisoner in the body, as he describes it in the Phaedo, awaiting the liberation of death.

The Great Shift
by James L. Kugel
pp. 163-165

A related belief is attested in the story of Hannah (1 Sam 1). Hannah is, to her great distress, childless, and on one occasion she goes to the great temple at Shiloh to seek God’s help:

The priest Eli was sitting on a seat near the doorpost of the temple of the LORD. In the bitterness of her heart, she prayed to the LORD and wept. She made a vow and said: “O LORD of Hosts, if You take note of Your maidservant’s distress, and if You keep me in mind and do not neglect Your maidservant and grant Your maidservant a male offspring, I will give him to the LORD for all the days of his life; and no razor shall ever touch his head.” * Now as she was speaking her prayer before the LORD, Eli was watching her mouth. Hannah was praying in her heart [i.e., silently]; her lips were moving, but her voice could not be heard, so Eli thought she was drunk. Eli said to her: “How long are you going to keep up this drunkenness? Cut out the boozing!” But Hannah answered: “Oh no, sir, I am a woman of saddened spirit. I have drunk no wine or strong drink, but I have been pouring out my heart to the LORD. Don’t take your maidservant for an ill-behaved woman! I have been praying this long because of my great distress.” Eli answered her: “Then go in peace, and may the God of Israel grant you what you have asked of Him.” (1 Sam 1:9–17)

If Eli couldn’t hear her, how did Hannah ever expect God to hear her? But she did. Somehow, even though no sound was coming out of her mouth, she apparently believed that God would hear her vow and, she hoped, act accordingly. (Which He did; “at the turn of the year she bore a son,” 1 Sam 1:20.) This too seemed to defy the laws of physics, just as much as Jonah’s prayer from the belly of the fish, or any prayer uttered at some distance from God’s presumed locale, a temple or other sacred spot.

Many other things could be said about the Psalms, or about biblical prayers in general, but the foregoing three points have been chosen for what they imply for the overall theme of this book. We have already seen a great deal of evidence indicating that people in biblical times believed the mind to be semipermeable, capable of being infiltrated from the outside. This is attested not only in the biblical narratives examined earlier, but it is the very premise on which all of Israel’s prophetic corpus stands. The semipermeable mind is prominent in the Psalms as well; in a telling phrase, God is repeatedly said to penetrate people’s “kidneys and heart” (Pss 7:10, 26:2, 139:13; also Jer 11:20, 17:10, 20:12), entering these messy internal organs 28 where thoughts were believed to dwell and reading—as if from a book—all of people’s hidden ideas and intentions. God just enters and looks around:

You have examined my heart, visited [me] at night;
You have tested me and found no wickedness; my mouth has not transgressed. (Ps 17:3)
Examine me, O LORD, and test me; try my kidneys and my heart. (26:2)

[28. Robert North rightly explained references to a person’s “heart” alone ( leb in biblical Hebrew) not as a precise reference to that particular organ, but as “a vaguely known or confused jumble of organs, somewhere in the area of the heart or stomach”: see North (1993), 596.]

Indeed God is so close that inside and outside are sometimes fused:

Let me bless the LORD who has given me counsel; my kidneys have been instructing me at night.
I keep the LORD before me at all times, just at my right hand, so I will not stumble. (Ps 16:7–8)

(Who’s giving this person advice, an external God or an internal organ?)

Such is God’s passage into a person’s semipermeable mind. But the flip side of all this is prayer, when a person’s words, devised on the inside, in the human mind, leave his or her lips in order to reach—somehow—God on the outside. As we have seen, those words were indeed believed to make their way to God; in fact, it was the cry of the victim that in some sense made the world work, causing God to notice and take up the cause of justice and right. Now, the God who did so was also, we have seen, a mighty King, who presumably ranged over all of heaven and earth:

He mounted on a cherub and flew off, gliding on the wings of the wind. (Ps 18:11)

He makes the clouds His chariot, He goes about on the wings of the wind. (Ps 104:3)

Yet somehow, no matter where His travels might take Him, God is also right there, just on the other side of the curtain that separates ordinary from extraordinary reality, allowing Him to hear the sometimes geographically distant cry of the victim or even to hear an inaudible, silent prayer like Hannah’s. The doctrine of divine omnipresence was still centuries away and was in fact implicitly denied in many biblical texts, 29 yet something akin to omnipresence seems to be implied in God’s ability to hear and answer prayers uttered from anywhere, no matter where He is. In fact, this seems implied as well in the impatient, recurrent question seen above, “How long, O LORD?”; the psalmist seems to be saying, “I know You’ve heard me, so when will You answer?”

Perhaps the most striking thing suggested by all this is the extent to which the Psalms’ depiction of God seems to conform to the general contours of the great Outside as described in an earlier chapter. God is huge and powerful, but also all-enfolding and, hence, just a whisper away. Somehow, people in biblical times seem to have just assumed that God, on the other side of that curtain, could hear their prayers, no matter where they were. All this again suggests a sense of self quite different from our own—a self that could not only be permeated by a great, external God, but whose thoughts and prayers could float outward and reach a God who was somehow never far, His domain beginning precisely where the humans’ left off.

One might thus say that, in this and in other ways, the psalmists’ underlying assumptions constitute a kind of biblical translation of a basic way of perceiving that had started many, many millennia earlier, a rephrasing of that fundamental reality in the particular terms of the religion of Israel. That other, primeval sense of reality and this later, more specific version of it found in these psalms present the same basic outline, which is ultimately a way of fitting into the world: the little human (more specifically in the Psalms, the little supplicant) faced with a huge enfolding Outside (in the Psalms, the mighty King) who overshadows everything and has all the power: sometimes kind and sometimes cruel (in the Psalms, sometimes heeding one’s request, but at other times oddly inattentive or sluggish), the Outside is so close as to move in and out of the little human (in the Psalms as elsewhere, penetrating a person’s insides, but also, able to pick up the supplicant’s request no matter where or how uttered). 30

pp. 205-207

The biblical “soul” was not originally thought to be immortal; in fact, the whole idea that human beings have some sort of sacred or holy entity inside them did not exist in early biblical times. But the soul as we conceive of it did eventually come into existence, and how this transformation came about is an important part of the history that we are tracing.

The biblical book of Proverbs is one of the least favorites of ordinary readers. To put the matter bluntly, Proverbs can be pretty monotonous: verse after verse tells you how much better the “righteous” are than the “wicked”: that the righteous tread the strait and narrow, control their appetites, avoid the company of loose women, save their money for a rainy day, and so forth, while the “wicked” always do quite the opposite. In spite of the way the book hammers away at these basic themes, a careful look at specific verses sometimes reveals something quite striking. 1 Here, for example, is what one verse has to say about the overall subject of the present study:

A person’s soul is the lamp of the LORD, who searches out all the innermost chambers. (Prov 20:27)

At first glance, this looks like the old theme of the semipermeable mind, whose innermost chambers are accessible to an inquisitive God. But in this verse, God does not just enter as we have seen Him do so often in previous chapters, when He appeared (apparently in some kind of waking dream) to Abraham or Moses, or put His words in the mouth of Amos or Jeremiah, or in general was held to “inspect the kidneys and heart” (that is, the innermost thoughts) of people. Here, suddenly, God seems to have an ally on the inside: the person’s own soul.

This point was put forward in rather pungent form by an ancient Jewish commentator, Rabbi Aḥa (fourth century CE ). He cited this verse to suggest that the human soul is actually a kind of secret agent, a mole planted by God inside all human beings. The soul’s job is to report to God (who is apparently at some remove) on everything that a person does or thinks:

“A person’s soul is the lamp of the LORD, who searches out all the innermost chambers”: Just as kings have their secret agents * who report to the king on each and every thing, so does the Holy One have secret agents who report on everything that a person does in secret . . . The matter may be compared to a man who married the daughter of a king. The man gets up early each morning to greet the king, and the king says, “You did such-and-such a thing in your house [yesterday], then you got angry and you beat your slave . . .” and so on for each and every thing that occurred. The man leaves and says to the people of the palace, “Which of you told the king that I did such-and-so? How does he know?” They reply to him, “Don’t be foolish! You’re married to his daughter and you want to know how he finds out? His own daughter tells him!” So likewise, a person can do whatever he wants, but his soul reports everything back to God. 2

The soul, in other words, is like God’s own “daughter”: she dwells inside a human body, but she reports regularly to her divine “father.” Or, to put this in somewhat more schematic terms: God, who is on the outside, has something that is related or connected to Him on the inside, namely, “a person’s soul.” But wasn’t it always that way?

Before getting to an answer, it will be worthwhile to review in brief something basic that was seen in the preceding chapters. Over a period of centuries, the basic model of God’s interaction with human beings came to be reconceived. After a time, He no longer stepped across the curtain separating ordinary from extraordinary reality. Now He was not seen at all—at first because any sort of visual sighting was held to be lethal, and later because it was difficult to conceive of. God’s voice was still heard, but He Himself was an increasingly immense being, filling the heavens; and then finally (moving ahead to post-biblical times), He was just axiomatically everywhere all at once. This of course clashed with the old idea of the sanctuary (a notion amply demonstrated in ancient Mesopotamian religion as well), according to which wherever else He was, God was physically present in his earthly “house,” that is, His temple. But this ancient notion as well came to be reconfigured in Israel; perched like a divine hologram above the outstretched wings of the cherubim in the Holy of Holies, God was virtually bodiless, issuing orders (like “Let there be light”) that were mysteriously carried out. 3

If conceiving of such a God’s being was difficult, His continued ability to penetrate the minds of humans ought to have been, if anything, somewhat easier to account for. He was incorporeal and omnipresent; 4 what could stand in the way of His penetrating a person’s mind, or being there already? Yet precisely for this reason, Proverbs 20:27 is interesting. It suggests that God does not manage this search unaided: there is something inside the human being that plays an active role in this process, the person’s own self or soul.

p. 390

It is striking that the authors of this study went on specifically to single out the very different sense of self prevailing in the three locales as responsible for the different ways in which voice hearing was treated: “Outside Western culture people are more likely to imagine [a person’s] mind and self as interwoven with others. These are, of course, social expectations, or cultural ‘invitations’—ways in which other people expect people like themselves to behave. Actual people do not always follow social norms. Nonetheless, the more ‘independent’ emphasis of what we typically call the ‘West’ and the more interdependent emphasis of other societies has been demonstrated ethnographically and experimentally many times in many places—among them India and Africa . . .” The passage continues: “For instance, the anthropologist McKim Marriott wanted to be so clear about how much Hindus conceive themselves to be made through relationships, compared with Westerners, that he called the Hindu person a ‘dividual’. His observations have been supported by other anthropologists of South Asia and certainly in south India, and his term ‘dividual’ was picked up to describe other forms of non-Western personhood. The psychologist Glenn Adams has shown experimentally that Ghanaians understand themselves as intrinsically connected through relationships. The African philosopher John Mbiti remarks: ‘only in terms of other people does the [African] individual become conscious of his own being.’” Further, see Markus and Mullally (1997); Nisbett (2004); Marriott (1976); Miller (2007); Trawick (1992); Strathern (1988); Ma and Schoeneman (1997); Mbiti (1969).

The “Other” Psychology of Julian Jaynes
by Brian J. McVeigh
p. 74

The Heart is the Ruler of the Body

We can begin with the word xin1, or heart, though given its broader denotations related to both emotions and thought, a better translation is “heart-mind” (Yu 2003). Xin1 is a pictographic representation of a physical heart, and as we will see below, it forms the most primary and elemental building block for Chinese linguo-concepts having to do with the psychological. The xin1 oversaw the activities of an individual’s psychophysiological existence and was regarded as the ruler of the body — indeed, the person — in the same way a king ruled his people. If individuals cultivate and control their hearts, then the family, state, and world could be properly governed (Yu 2007, 2009b).

Psycho-Physio-Spiritual Aspects of the Person

Under the control of the heart were the wu3shen2 or “five spirits” (shen2, hun2, po4, yi4, zhi4), which dwelt respectively in the heart, liver, lungs, spleen, and kidneys. The five shen2 were implicated in the operations of thinking, perception, and bodily systems and substances. A phonosemantic compound, shen2 has been variously translated as mind, spirit, supernatural being, consciousness, vitality, expression, soul, energy, god, or numen/numinous. The left side element of this logograph means manifest, show, demonstrate; we can speculate that whatever was manifested came from a supernatural source; it may have meant “ancestral spirit” (Keightley 1978: 17). The right side provides sound but also the additional meaning of “to state” or “report to a superior”; again we can speculate that it meant communing with a supernatural superior.

Introspective Illusion

On split-brain research, Susan Blackmore observed that “In this way, the verbal left brain covered up its ignorance by confabulating.” This relates to the theory of introspective illusion (see also change blindness, choice blindness, and bias blind spot). In both cases, the conscious mind turns to confabulation to explain what it has no access to and so does not understand.

This is how we maintain a sense of being in control. Our egoic minds have an immense talent for rationalization, and it can happen instantly, with total confidence in the reason(s) given. That indicates that consciousness is a lot less conscious than it seems… or rather that consciousness isn’t what we think it is.

Our theory of mind, as such, is highly theoretical in the speculative sense. That is to say, it isn’t particularly reliable in most cases. First and foremost, what matters is that the story told is compelling, to both us and others (self-justification, in its role within consciousness, is close to Jaynesian self-authorization). We are ruled by our need for meaning, even as our body-minds don’t require meaning to enact behaviors and take actions. We get through our lives just fine, mostly on automatic.

According to Julian Jaynes’s theory of the bicameral mind, the purpose of consciousness is to create an internal stage upon which we play out narratives. As this interiorized and narratized space is itself confabulated, that is to say psychologically and socially constructed, it allows all further confabulations of consciousness. We imaginatively bootstrap our individuality into existence, and that requires a lot of explaining.

* * *

Introspection illusion
Wikipedia

A 1977 paper by psychologists Richard Nisbett and Timothy D. Wilson challenged the directness and reliability of introspection, thereby becoming one of the most cited papers in the science of consciousness.[8][9] Nisbett and Wilson reported on experiments in which subjects verbally explained why they had a particular preference, or how they arrived at a particular idea. On the basis of these studies and existing attribution research, they concluded that reports on mental processes are confabulated. They wrote that subjects had “little or no introspective access to higher order cognitive processes”.[10] They distinguished between mental contents (such as feelings) and mental processes, arguing that while introspection gives us access to contents, processes remain hidden.[8]

Although some other experimental work followed from the Nisbett and Wilson paper, difficulties with testing the hypothesis of introspective access meant that research on the topic generally stagnated.[9] A ten-year-anniversary review of the paper raised several objections, questioning the idea of “process” they had used and arguing that unambiguous tests of introspective access are hard to achieve.[3]

Updating the theory in 2002, Wilson admitted that the 1977 claims had been too far-reaching.[10] He instead relied on the theory that the adaptive unconscious does much of the moment-to-moment work of perception and behaviour. When people are asked to report on their mental processes, they cannot access this unconscious activity.[7] However, rather than acknowledge their lack of insight, they confabulate a plausible explanation, and “seem” to be “unaware of their unawareness”.[11]

The idea that people can be mistaken about their inner functioning is one applied by eliminative materialists. These philosophers suggest that some concepts, including “belief” or “pain,” will turn out to be quite different from what is commonly expected as science advances.

The faulty guesses that people make to explain their thought processes have been called “causal theories”.[1] The causal theories provided after an action will often serve only to justify the person’s behaviour in order to relieve cognitive dissonance. That is, a person may not have noticed the real reasons for their behaviour, even when trying to provide explanations. The result is an explanation that mostly just makes them feel better. An example might be a man who discriminates against homosexuals because he is embarrassed that he himself is attracted to other men. He may not admit this to himself, instead claiming his prejudice is because he believes that homosexuality is unnatural.

2017 Report on Consciousness and Moral Patienthood
Open Philanthropy Project

Physicalism and functionalism are fairly widely held among consciousness researchers, but are often debated and far from universal.58 Illusionism seems to be an uncommon position.59 I don’t know how widespread or controversial “fuzziness” is.

I’m not sure what to make of the fact that illusionism seems to be endorsed by a small number of theorists, given that illusionism seems to me to be “the obvious default theory of consciousness,” as Daniel Dennett argues.60 In any case, the debates about the fundamental nature of consciousness are well-covered elsewhere,61 and I won’t repeat them here.

A quick note about “eliminativism”: the physical processes which instantiate consciousness could turn out to be so different from our naive guesses about their nature that, for pragmatic reasons, we might choose to stop using the concept of “consciousness,” just as we stopped using the concept of “phlogiston.” Or, we might find a collection of processes that are similar enough to those presumed by our naive concept of consciousness that we choose to preserve the concept of “consciousness” and simply revise our definition of it, as happened when we eventually decided to identify “life” with a particular set of low-level biological features (homeostasis, cellular organization, metabolism, reproduction, etc.) even though life turned out not to be explained by any élan vital or supernatural soul, as many people throughout history62 had assumed.63 But I consider this only a possibility, not an inevitability.

59. I’m not aware of surveys indicating how common illusionist approaches are, though Frankish (2016a) remarks that:

The topic of this special issue is the view that phenomenal consciousness (in the philosophers’ sense) is an illusion — a view I call illusionism. This view is not a new one: the first wave of identity theorists favoured it, and it currently has powerful and eloquent defenders, including Daniel Dennett, Nicholas Humphrey, Derk Pereboom, and Georges Rey. However, it is widely regarded as a marginal position, and there is no sustained interdisciplinary research programme devoted to developing, testing, and applying illusionist ideas. I think the time is ripe for such a programme. For a quarter of a century at least, the dominant physicalist approach to consciousness has been a realist one. Phenomenal properties, it is said, are physical, or physically realized, but their physical nature is not revealed to us by the concepts we apply to them in introspection. This strategy is looking tired, however. Its weaknesses are becoming evident…, and some of its leading advocates have now abandoned it. It is doubtful that phenomenal realism can be bought so cheaply, and physicalists may have to accept that it is out of their price range. Perhaps phenomenal concepts don’t simply fail to represent their objects as physical but misrepresent them as phenomenal, and phenomenality is an introspective illusion…

[Keith Frankish, Editorial Introduction, Journal of Consciousness Studies, Volume 23, Numbers 11-12, 2016, pp. 9-10(2)]

The Round-Based Community

Yet there’s an even deeper point to be made here, which is that flatness may actually be closer to how we think about the people around us, or even about ourselves.

This is a useful observation from Alec Nevala-Lee (The flat earth society).

I’m willing to bet that perceiving others and oneself as round characters has to do with the capacity for cognitive complexity and a tolerance for cognitive dissonance. These are tendencies of the liberal-minded, although research shows that under cognitive overload, from stress to drunkenness, even the liberal-minded will become conservative-minded (e.g., liberals who watched repeated video of the 9/11 terrorist attacks were more likely to support Bush’s war on terror; by the way, identifying a conflict with a single emotion is a rather flat way of looking at the world).

Bacon concludes: “Increasingly, the political party you belong to represents a big part of your identity and is not just a reflection of your political views. It may even be your most important identity.” And this strikes me as only a specific case of the way in which we flatten ourselves out to make our inner lives more manageable. We pick and choose what else we emphasize to better fit with the overall story that we’re telling. It’s just more obvious these days.

So, it’s not only about characters but entire attitudes and worldviews. The ego theory of self itself encourages flatness, as opposed to the (Humean and Buddhist) bundle theory of self. It’s interesting to note how much more complex identity has become in the modern world and how much more accepting we are of allowing people to have multiple identities than in the past. This has happened at the very same time that fluid intelligence has drastically increased, and of course fluid intelligence correlates with liberal-mindedness (as it also does with FFM openness, MBTI perceiving, Hartmann’s thin boundary type, etc.).

Cultures have a way of taking psychological cues from their heads of state. As Forster says of one critical objection to flat characters: “Queen Victoria, they argue, cannot be summed up in a single sentence, so what excuse remains for Mrs. Micawber?” When the president himself is flat—which is another way of saying that he can no longer surprise us on the downside—it has implications both for our literature and for our private lives.

At the moment, the entire society is under extreme duress. This at least temporarily rigidifies the ego boundaries. Complexity of identity becomes less attractive to the average person at such times. Still, the most liberal-minded (typically radical leftists in the US) will be better at maintaining their psychological openness in the face of conflict, fear, and anxiety. As Trump is the ultimate flat character, look to the far left for those who will represent the ultimate round character. Mainstream liberals, as usual, will attempt to play to the middle and shift with the winds, taking up flat and round in turn. It’s a battle not only of ideological but of psychological worldviews. Whichever comes to define our collective identity will dominate our society for the coming generation.

The process is already happening. And it shouldn’t astonish us if we all wake up one day to discover that the world is flat.

It’s an interesting moment. Our entire society is becoming more complex — in terms of identity, demographics, technology, media, and on and on. This requires that we develop the capacity for roundedness or else fall back on the simplifying rhetoric and reaction of conservative-mindedness, with the rigid absolutes of authoritarianism as the furthest reach of flatness… and, yes, such flatness tends to be memorable (the reason it is so easy to make comparisons to someone like Hitler, who has become an extreme caricature of flatness). This is all the more reason for the liberal-minded to build awareness and intellectual defenses against the easy attraction of flat identities and worldviews, since in a battle of opposing flat characters the most conservative-minded will always win.