Psychedelics and Language

“We cannot evolve any faster than we evolve our language because you cannot go to places that you cannot describe.”
~Terence McKenna

This post is a placeholder as I work through some thoughts. Perhaps the most central thread linking them is Terence McKenna’s stoned ape theory, which concerns the evolution of consciousness as it relates to psychedelics and language. Related to McKenna’s view, there have been many observations of non-human animals imbibing a wide variety of mind-altering plants, often psychedelics. Giorgio Samorini, in Animals and Psychedelics, argues that this behavior is evolutionarily advantageous in that it induces lateral thinking.

Also, as McKenna points out, many psychedelics intensify the senses, a useful effect for hunting. Humans won’t only take drugs themselves for this purpose but also give them to their animals: “A classic case is indigenous people giving psychedelics to hunting dogs to enhance their abilities. A study published in the Journal of Ethnobiology, reports that at least 43 species of psychedelic plants have been used across the globe for boosting dog hunting practices. The Shuar, an indigenous people from Ecuador, include 19 different psychedelic plants in their repertoire for this purpose—including ayahuasca and four different types of brugmansia” (Alex K. Gearin, High Kingdom). So, there are many practical reasons for using psychoactive drugs. Language might have been an unintended side effect.

There is another way to arrive at McKenna’s conclusion. David Lewis-Williams asserts that cave paintings are shamanic, pointing to the entoptic imagery that is common in trance, whether induced by psychedelics or by other means. This interpretation isn’t specifically about language, but that is where another theory can help us. Genevieve von Petzinger takes a different tack, speculating that the geometric signs on cave walls were a set of symbols, possibly a system of graphic communication and so perhaps the origin of writing.

In exploring the sites for herself, she ascertained that there were 32 signs used over a 30,000-year period in Europe. Some of the same signs were found outside of Europe as well. It’s the consistency and repetition that caught her attention: these weren’t random or idiosyncratic aesthetic flourishes. If we combine that with Lewis-Williams’s theory, we might have the development of proto-concepts, still attached to the concrete world but in the process of developing into something else. It would indicate that something fundamental about the human mind itself was changing.

I have my own related theory about the competing influences of psychedelics and addictive substances, influences not only on the mind but on society, and so related to the emergence of civilization. I’m playing around with the observation that it might tell us much about civilization that, over time, addiction became more prevalent than psychedelic use. I see this shift in preference as having become apparent sometime after the Neolithic era, though most noticeable in the Axial Age. Of course, language already existed at that point. Though maybe, as Julian Jaynes and others have argued, the use of language changed. I’ll speculate about all of that at a later time.

In the articles, passages, and links below, there are numerous overlapping ideas and topics. Here is some of what stood out to me, along with some of the thoughts on my mind while reading:

  • Synaesthesia, gesture, ritual, dance, sound, melody, music, poiesis, repetition (mimesis, meter, rhythm, rhyme, alliteration, etc.) vs. repetition-compulsion;
  • Formulaic vs. grammatical language, poetry vs. prose, concrete vs. abstract, metaphor, and metonymy;
  • Aural and oral, listening and speaking, preliterate, epic storytelling, eloquence, verbosity, fluency, and graphomania;
  • Enthralled, entangled, enactive, embodied, extended, hypnosis, voices, voice-hearing, bundle theory of self, ego theory of self, authorization, and Logos;
  • Et cetera.

* * *

Animals on Psychedelics: Survival of the Trippiest
by Steven Kotler

According to Italian ethnobotanist Giorgio Samorini, in his 2001 Animals and Psychedelics, the risk is worth it because intoxication promotes what psychologist Edward de Bono once called lateral thinking: problem-solving through indirect and creative approaches. Lateral thinking is thinking outside the box, without which a species would be unable to come up with new solutions to old problems, without which a species would be unable to survive. De Bono thinks intoxication an important “liberating device,” freeing us from “rigidity of established ideas, schemes, divisions, categories and classifications.” Both Siegel and Samorini think animals use intoxicants for this reason, and they do so knowingly.

Don’t Be A Sea Squirt.
by Tom Morgan

It’s a feature of complex adaptive systems that a stable system is a precursor to a dead system. Something that runs the same routine day-after-day is typically a dying system. There’s evidence that people with depression are stuck in neurological loops that they can’t get out of. We all know what it’s like to be trapped in the same negative thought patterns. Life needs perpetual novelty to succeed. This is one of the reasons researchers think that psychedelics have proven effective at alleviating depression; they break our brains out of the same familiar neural pathways.

This isn’t a uniquely human trait; animals also engage in deliberate intoxication. In his book Animals and Psychedelics, Italian ethnobotanist Giorgio Samorini wrote ‘drug-seeking and drug-taking behavior, on the part of both humans and animals, enjoys an intimate connection with … depatterning.’ And thus dolphins get high on blowfish, elephants seek out alcohol, and goats eat the beans of the mescal plant. They’re not just having fun, they’re expanding the possible range of their behaviours and breaking stale patterns. You’re not just getting wasted, you’re furthering the prospects of the species!*

Synesthesias, Synesthetic Imagination, and Metaphor in the Context of Individual Cognitive Development and Societal Collective Consciousness
by Harry Hunt

The continuum of synesthesias is considered in the context of evolution, childhood development, adult creativity, and related states of imaginative absorption, as well as the anthropology and sociology of “collective consciousness”. In Part I synesthesias are considered as part of the mid-childhood development of metacognition, based on a Vygotskian model of the internalization of an earlier animism and physiognomic perception, and as the precursor for an adult capacity for imaginative absorption central to creativity, metaphor, and the synesthetically based “higher states of consciousness” in spontaneous mystical experience, meditation, and psychedelic states. Supporting research is presented on childhood precocities of a fundamental synesthetic imagination that expands the current neuroscience of classical synesthetes into a broader, more spontaneous, and open-ended continuum of introspective cross modal processes that constitute the human self referential consciousness of “felt meaning”. In Part II Levi-Strauss’ analysis of the cross modal and synesthetic lattices underlying the mythologies of native peoples and their traditional animation thereby of surrounding nature as a self reflective metaphoric mirror, is illustrated by its partial survival and simplification in the Chinese I-Ching. Jung’s psychological analysis of the I-Ching, as a device for metaphorically based creative insight and as a prototype for the felt “synchronicities” underlying paranormal experience, is further extended into a model for a synesthetically and metaphorically based “collective consciousness”. This metaphorically rooted and coordinated social field is explicit in mythologically centered, shamanic peoples but rendered largely unconscious in modern societies that fail to further educate and train the first spontaneous synesthetic imaginings of mid-childhood.

Psychedelics and the Full-Fluency Phenomenon
by T.H.

Like me, many other people who stutter have experienced the full-fluency phenomenon while using psilocybin and MDMA, and, unlike me, while using LSD as well. […]

There’s also potential for immediate recovery from stuttering following a single high-dose experience. One well-told account of this comes from Paul Stamets, the renowned mycologist, whose stuttering stopped altogether following his first psilocybin mushroom experience. Sustaining such an increase in fluency after the effects of the drug wear off is rare, but Paul’s story attests that it can occur.

Can Psychedelics Help You Learn New Languages?
by The Third Wave Podcast

Idahosa Ness runs “The Mimic Method,” a website that promises to help you learn foreign languages quickly by immersing you in their sounds and pronunciations. We talk to Idahosa about his experiences with cannabis and other psychedelics, and how they have improved his freestyle rapping, increased his motivation to learn new languages, and helped the growth of his business.

Marijuana and Divergent Thinking
by Jonah Lehrer

A new paper published in Psychiatry Research sheds some light on this phenomenon, or why smoking weed seems to unleash a stream of loose associations. The study looked at a phenomenon called semantic priming, in which the activation of one word allows us to react more quickly to related words. For instance, the word “dog” might lead to decreased reaction times for “wolf,” “pet” and “Lassie,” but won’t alter how quickly we react to “chair”.

Interestingly, marijuana seems to induce a state of hyper-priming, in which the reach of semantic priming extends outwards to distantly related concepts. As a result, we hear “dog” and think of nouns that, in more sober circumstances, would seem to have nothing in common. […]

Last speculative point: marijuana also enhances brain activity (at least as measured indirectly by cerebral blood flow) in the right hemisphere. The drug, in other words, doesn’t just suppress our focus or obliterate our ability to pay attention. Instead, it seems to change the very nature of what we pay attention to, flattening out our hierarchy of associations.
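The hyper-priming result lends itself to a toy model: treat the lexicon as a weighted graph and let activation spread outward from the prime word, with a threshold controlling how far the “reach” extends. The mini-lexicon, weights, decay rate, and thresholds below are all invented for illustration; nothing in the study specifies them:

```python
# Toy spreading-activation model of semantic priming. The graph, weights,
# and thresholds are hypothetical, chosen only to illustrate the idea.

SEMANTIC_LINKS = {
    "dog": {"wolf": 0.8, "pet": 0.7, "lassie": 0.9, "animal": 0.6},
    "animal": {"zoo": 0.5, "farm": 0.4},
    "pet": {"leash": 0.5},
}

def primed_concepts(prime, threshold, decay=0.6):
    """Return concepts whose activation from `prime` exceeds `threshold`.

    Activation spreads along weighted links and decays at each hop;
    lowering the threshold models the "hyper-priming" state, in which
    distantly related concepts also become active.
    """
    activations = {prime: 1.0}
    frontier = [(prime, 1.0)]
    while frontier:
        word, act = frontier.pop()
        for neighbor, weight in SEMANTIC_LINKS.get(word, {}).items():
            new_act = act * weight * decay
            if new_act > activations.get(neighbor, 0.0):
                activations[neighbor] = new_act
                frontier.append((neighbor, new_act))
    return {w for w, a in activations.items() if w != prime and a >= threshold}

sober = primed_concepts("dog", threshold=0.4)   # only close associates
hyper = primed_concepts("dog", threshold=0.05)  # reach extends further out
```

Lowering the threshold is the sketch’s stand-in for hyper-priming: the same graph yields a strictly larger set of active concepts, now reaching two hops out to words like “zoo” and “leash” that a sober threshold would never admit.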

How the Brain Processes Language on Acid Is a Trip
by Madison Margolin

“Results showed that while LSD does not affect reaction times, people under LSD made more mistakes that were similar in meaning to the pictures they saw,” said lead author Dr. Neiloufar Family, a post-doc from the University of Kaiserslautern.

For example, participants who were dosed with acid would more often say “bus” or “train” when asked to identify a picture of a car, compared to those who ingested the placebo. These lexical mixups shed some light on how LSD affects semantic networks and the way the brain draws connections between different words or concepts.

“The effects of LSD on language can result in a cascade of associations that allow quicker access to far away concepts stored in the mind,” said Family, discussing the study’s implications for psychedelic-assisted psychotherapy. Moreover, she added, “inducing a hyper-associative state may have implications for the enhancement of creativity.”

New study shows LSD’s effects on language
by Technische Universität Kaiserslautern

This indicates that LSD seems to affect the mind’s semantic networks, or how words and concepts are stored in relation to each other. When LSD makes the network activation stronger, more words from the same family of meanings come to mind.

The results from this experiment can lead to a better understanding of the neurobiological basis of semantic network activation. Neiloufar Family explains further implication: “These findings are relevant for the renewed exploration of psychedelic psychotherapy, which are being developed for depression and other mental illnesses. The effects of LSD on language can result in a cascade of associations that allow quicker access to far away concepts stored in the mind.”

The many potential uses of this class of substances are under scientific debate. “Inducing a hyper-associative state may have implications for the enhancement of creativity,” Family adds. The increase in activation of semantic networks can lead distant or even subconscious thoughts and concepts to come to the surface.

A new harmonic language decodes the effects of LSD
by Oxford Neuroscience

Dr Selen Atasoy, the lead author of the study says: “The connectome harmonics we used to decode brain activity are universal harmonic waves, such as sound waves emerging within a musical instrument, but adapted to the anatomy of the brain. Translating fMRI data into this harmonic language is actually not different than decomposing a complex musical piece into its musical notes”. “What LSD does to your brain seems to be similar to jazz improvisation” says Atasoy, “your brain combines many more of these harmonic waves (connectome harmonics) spontaneously yet in a structured way, just like improvising jazz musicians play many more musical notes in a spontaneous, non-random fashion”.

“The presented method introduces a new paradigm to study brain function, one that links space and time in brain activity via the universal principle of harmonic waves. It also shows that this spatio-temporal relation in brain dynamics resides at the transition between order and chaos.” says Prof Gustavo Deco.

Dr. Robin Carhart-Harris adds: “Our findings reveal the first experimental evidence that LSD tunes brain dynamics closer to criticality, a state that is maximally diverse and flexible while retaining properties of order. This may explain the unusual richness of consciousness experienced under psychedelic drugs and the notion that they ‘expand consciousness’.”
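Atasoy’s musical analogy can be made concrete on a toy graph: the “harmonic waves” are eigenvectors of a graph Laplacian, and any activity pattern splits losslessly into them, like a chord into notes. The six-node ring below stands in for the connectome and is purely illustrative; the real method derives its Laplacian from MRI-measured anatomy:

```python
# Minimal sketch of harmonic decomposition on a toy "connectome".
import numpy as np

# Adjacency matrix of a small ring of 6 regions.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Graph Laplacian; its eigenvectors are the "harmonic waves" of this
# graph, the analogue of a musical instrument's standing waves.
L = np.diag(A.sum(axis=1)) - A
eigenvalues, harmonics = np.linalg.eigh(L)

# A snapshot of "activity" on the 6 regions decomposes into these modes
# exactly as a sound decomposes into notes: project onto each eigenvector.
activity = np.array([1.0, 0.5, -0.2, 0.3, -0.7, 0.1])
coefficients = harmonics.T @ activity

# The decomposition is lossless: summing the weighted harmonics
# reconstructs the original activity pattern.
reconstruction = harmonics @ coefficients
assert np.allclose(reconstruction, activity)
```

Each coefficient plays the role of a note’s loudness; comparing how many modes carry substantial weight before and after a perturbation is, in miniature, the kind of repertoire comparison the study performs on real fMRI data.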

Did Psilocybin Mushrooms Lead to Human Language?
by Chris Rhine

Numerous archaeological finds include depictions of psilocybin mushrooms from various places and times around the world. One such find, described in Giorgio Samorini’s article “The Oldest Representations of Hallucinogenic Mushrooms in the World,” consists of works depicting hallucinogenic mushrooms produced 7,000 to 9,000 years ago in the Sahara Desert. Samorini concluded, “This Saharan testimony would demonstrate that the use of hallucinogens originates in the Paleolithic period and is invariably included within mystico-religious contexts and rituals.”

Some of early man’s first drawings include the ritualization of a plant as a sign—possibly a tribute to the substance that helped in the written sign’s development.

Are Psychedelic Hallucinations Actually Metaphorical Perceptions?
by Michael Fortier

The brain is constantly attempting to predict what is going on in the world. Because it happens in a dark environment with reduced sensory stimulation, the ayahuasca ritual dampens bottom-up signaling (sensory information becomes scarcer). If you are facing a tree in daylight and your brain wrongly guesses that there is an electric pole in front of you, bottom-up prediction errors will quickly correct the wrong prediction—i.e., the lookout will quickly and successfully warn the helmsman. But if the same happens in the dark, bottom-up prediction errors will be sparser and vaguer, and possibly not sufficient to correct errors—as it were, the lookout’s warning will be too faint to reach the helmsman. As ayahuasca introduces noise in the brain processes,6 and because bottom-up corrections cannot be as effective as usual, hallucinations appear more easily. So the relative sensory deprivation of the environment in which the ayahuasca ritual takes place, and the absence of bodily motion, both favor the occurrence of hallucinations.

Furthermore, the ayahuasca ritual does include some sensory richness. The songs, the perfume, and the tobacco stimulate the brain in multiple ways. Psychedelic hallucinogens are known to induce synesthesia7 and to increase communication between areas and networks of the brain that do not usually communicate with each other.8 It is hence no surprise that the shamans’ songs are able to shape people’s visions. If one sensory modality is noisier or fainter than others, its role in perception will be downplayed.9 This is what happens with ayahuasca: Given that not much information can be gathered by the visual modality, most of the prediction errors that contribute to the shaping of conscious perception are those coming from the auditory and olfactory modalities. The combination of synesthetic processing with the increased weight attributed to non-visual senses enables shamans to “drive” people’s visions.

The same mechanisms explain the shamans’ recommendation that perfume should be sprayed or tobacco blown when one is faced with a bad spirit. Conscious perception—e.g., vision of a spirit—is the result of a complex tradeoff between top-down predictions and bottom-up prediction errors. If you spray a huge amount of perfume or blow wreaths of smoke around you, your brain will receive new and reliable information from the olfactory modality. Under psychedelics, sensory modalities easily influence one another; as a result, a sudden olfactory change amounts to sending prediction errors to upper regions of the brain. Conscious perception is updated accordingly: as predicted by the shamans’ recommendation, the olfactory change dissolves the vision of bad spirits.

In its classical sense, hallucination refers to sensory content that is not caused by objects of the world. The above description of the ayahuasca ritual demonstrates that psychedelic visions are not, in the classical sense of the term, hallucinations. Indeed, the content of the visions is tightly tied to the environment: A change of melody in a song or an olfactory change can completely transform the content of the visions. Ayahuasca visions are not caused by hypothetical supernatural entities living in a parallel world, nor are they constructed independently of the mundane objects of the world. What are they, then? They are metaphorical perceptions.

In everyday life, melodic and olfactory changes cannot affect vision much. However, because ayahuasca experience is profoundly synesthetic and intermodal, ayahuasca visions are characteristically metaphorical: A change in one sensory modality easily affects another modality. Ayahuasca visions are not hallucinations, since they are caused by real objects and events; for example, a cloud of perfume. It is more accurate to define them as metaphorical perceptions: they are loose intermodal interpretations of things that are really there.
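The tradeoff Fortier describes, top-down predictions weighed against bottom-up prediction errors, is commonly modeled as precision weighting: prediction and evidence are averaged with weights given by their reliability. The one-dimensional sketch below uses invented numbers and a deliberately cartoonish reading (prediction = “spirit present”) to show why a strong, reliable signal in one modality can dissolve a vision:

```python
# Toy precision-weighted perception update. The numbers and the scalar
# reading of the percept are hypothetical, chosen only for illustration.

def update_percept(prediction, evidence, prior_precision, sensory_precision):
    """Combine a top-down prediction with bottom-up evidence.

    Each signal is weighted by its precision (inverse variance): when the
    senses are noisy, the prediction dominates; when the senses are sharp,
    the prediction error pulls the percept toward the evidence.
    """
    total = prior_precision + sensory_precision
    return (prior_precision * prediction + sensory_precision * evidence) / total

prediction = 1.0   # the brain's guess ("a spirit is present"), as a scalar
evidence = 0.0     # what the senses actually report

# Dark, quiet ritual: sensory precision is low, so the guess barely moves.
dim = update_percept(prediction, evidence,
                     prior_precision=4.0, sensory_precision=0.5)

# Perfume sprayed: a strong, reliable olfactory signal raises sensory
# precision, and the percept is pulled sharply toward the evidence.
bright = update_percept(prediction, evidence,
                        prior_precision=4.0, sensory_precision=16.0)
```

In the dim setting the percept stays near the prior (about 0.89 here); raising sensory precision drags it down to 0.2, the sketch’s analogue of the bad spirit dissolving when perfume floods the olfactory channel.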

Michael Pollan on the science of how psychedelics can ‘shake your snow globe’
interview with Michael Pollan

We know that, for example, the so-called classic psychedelics – psilocybin, LSD, DMT, mescaline – activate a certain receptor, a serotonin receptor. And so we know they are the key that fits that lock. But beyond that, there’s a cascade of effects that happens.

The observed effect, if you do brain imaging of people who are tripping, you find some very interesting patterns of activity in the brain – specifically something called the default mode network, which is a very kind of important hub in the brain, linking parts of the cerebral cortex to deeper, older areas having to do with memory and emotion. This network is kind of a regulator of all brain activities. One neuroscientist called it, ‘The conductor of the neural symphony,’ and it’s deactivated by psychedelics, which is very interesting because the assumption going in was that they would see lots of strange activity everywhere in the brain because there’s such fireworks in the experience, but in fact, this particular network almost goes off line.

Now what is this network responsible for? Well, in addition to being this transportation hub for signals in the brain, it is involved with self-reflection. It’s where we go to ruminate or mind-wander – thinking about the past or thinking about the future – and therefore where worrying takes place. Our sense of self, if it can be said to have an address at all, resides in this particular brain network. So this is a very interesting clue to how psychedelics affect the brain and how they create the psychological experience, the experience in the mind, that is so transformative.

When it goes off line, parts of the brain that don’t ordinarily communicate with one another strike up a conversation. And those connections may represent what people feel during the psychedelic experience as things like synaesthesia. Synaesthesia is when one sense gets cross-wired with another. And so you suddenly smell musical notes or taste things that you see.

It may produce insights. It may produce new metaphors – literally connecting the dots in new ways. Now I’m being speculative – going a little beyond what we’ve established – we know there are new connections; we don’t know what’s happening with them, or which of them endure. But the fact is, the brain is temporarily rewired. And that rewiring – whether the new connections actually produce useful material or just shake up the system, ‘shaking the snow globe,’ as one of the neuroscientists put it – is what’s therapeutic. It is a reboot of the brain.

If you think about, you know, mental illnesses such as depression, addiction, and anxiety, many of them involve these loops of thought that we can’t control and we get stuck on these stories we tell ourselves – that we can’t get through the next hour without a drink, or we’re worthless and unworthy of love. We get stuck in these stories. This temporarily dissolves those stories and gives us a chance to write new stories.

Terence McKenna Collection

The mutation-inducing influence of diet on early humans and the effect of exotic metabolites on the evolution of their neurochemistry and culture is still unstudied territory. The early hominids’ adoption of an omnivorous diet and their discovery of the power of certain plants were decisive factors in moving early humans out of the stream of animal evolution and into the fast-rising tide of language and culture. Our remote ancestors discovered that certain plants, when self-administered, suppress appetite, diminish pain, supply bursts of sudden energy, confer immunity against pathogens, and synergize cognitive activities. These discoveries set us on the long journey to self-reflection. Once we became tool-using omnivores, evolution itself changed from a process of slow modification of our physical form to a rapid definition of cultural forms by the elaboration of rituals, languages, writing, mnemonic skills, and technology.

Food of the Gods
by Terence McKenna
pp. 24-29

Because scientists were unable to explain this tripling of the human brain size in so short a span of evolutionary time, some of the early primate paleontologists and evolutionary theorists predicted and searched for evidence of transitional skeletons. Today the idea of a “missing link” has largely been abandoned. Bipedalism, binocular vision, the opposable thumb, the throwing arm – all have been put forth as the key ingredient in the mix that caused self-reflecting humans to crystallize out of the caldron of competing hominid types and strategies. Yet all we really know is that the shift in brain size was accompanied by remarkable changes in the social organization of the hominids. They became users of tools, fire, and language. They began the process as higher animals and emerged from it 100,000 years ago as conscious, self-aware individuals.

THE REAL MISSING LINK

My contention is that mutation-causing, psychoactive chemical compounds in the early human diet directly influenced the rapid reorganization of the brain’s information-processing capacities. Alkaloids in plants, specifically the hallucinogenic compounds such as psilocybin, dimethyltryptamine (DMT), and harmaline, could be the chemical factors in the protohuman diet that catalyzed the emergence of human self-reflection. The action of hallucinogens present in many common plants enhanced our information processing activity, or environmental sensitivity, and thus contributed to the sudden expansion of the human brain size. At a later stage in this same process, hallucinogens acted as catalysts in the development of imagination, fueling the creation of internal stratagems and hopes that may well have synergized the emergence of language and religion.

In research done in the late 1960s, Roland Fischer gave small amounts of psilocybin to graduate students and then measured their ability to detect the moment when previously parallel lines became skewed. He found that performance ability on this particular task was actually improved after small doses of psilocybin.5

When I discussed these findings with Fischer, he smiled after explaining his conclusions, then summed up, “You see what is conclusively proven here is that under certain circumstances one is actually better informed concerning the real world if one has taken a drug than if one has not.” His facetious remark stuck with me, first as an academic anecdote, later as an effort on his part to communicate something profound. What would be the consequences for evolutionary theory of admitting that some chemical habits confer adaptive advantage and thereby become deeply scripted in the behavior and even genome of some individuals?

THREE BIG STEPS FOR THE HUMAN RACE

In trying to answer that question I have constructed a scenario, some may call it fantasy; it is the world as seen from the vantage point of a mind for which the millennia are but seasons, a vision that years of musing on these matters has moved me toward. Let us imagine for a moment that we stand outside the surging gene swarm that is biological history, and that we can see the interwoven consequences of changes in diet and climate, which must certainly have been too slow to be felt by our ancestors. The scenario that unfolds involves the interconnected and mutually reinforcing effects of psilocybin taken at three different levels. Unique in its properties, psilocybin is the only substance, I believe, that could yield this scenario.

At the first, low, level of usage is the effect that Fischer noted: small amounts of psilocybin, consumed with no awareness of its psychoactivity while in the general act of browsing for food, and perhaps later consumed consciously, impart a noticeable increase in visual acuity, especially edge detection. As visual acuity is at a premium among hunter-gatherers, the discovery of the equivalent of “chemical binoculars” could not fail to have an impact on the hunting and gathering success of those individuals who availed themselves of this advantage. Partnership groups containing individuals with improved eyesight will be more successful at feeding their offspring. Because of the increase in available food, the offspring within such groups will have a higher probability of themselves reaching reproductive age. In such a situation, the outbreeding (or decline) of non-psilocybin-using groups would be a natural consequence.

Because psilocybin is a stimulant of the central nervous system, when taken in slightly larger doses, it tends to trigger restlessness and sexual arousal. Thus, at this second level of usage, by increasing instances of copulation, the mushrooms directly favored human reproduction. The tendency to regulate and schedule sexual activity within the group, by linking it to a lunar cycle of mushroom availability, may have been important as a first step toward ritual and religion. Certainly at the third and highest level of usage, religious concerns would be at the forefront of the tribe’s consciousness, simply because of the power and strangeness of the experience itself. This third level, then, is the level of the full-blown shamanic ecstasy. The psilocybin intoxication is a rapture whose breadth and depth is the despair of prose. It is wholly Other and no less mysterious to us than it was to our mushroom-munching ancestors. The boundary-dissolving qualities of shamanic ecstasy predispose hallucinogen-using tribal groups to community bonding and to group sexual activities, which promote gene mixing, higher birth rates, and a communal sense of responsibility for the group offspring.

At whatever dose the mushroom was used, it possessed the magical property of conferring adaptive advantages upon its archaic users and their group. Increased visual acuity, sexual arousal, and access to the transcendent Other led to success in obtaining food, sexual prowess and stamina, abundance of offspring, and access to realms of supernatural power. All of these advantages can be easily self-regulated through manipulation of dosage and frequency of ingestion. Chapter 4 will detail psilocybin’s remarkable property of stimulating the language-forming capacity of the brain. Its power is so extraordinary that psilocybin can be considered the catalyst to the human development of language.

STEERING CLEAR OF LAMARCK

An objection to these ideas inevitably arises and should be dealt with. This scenario of human emergence may seem to smack of Lamarckism, which theorizes that characteristics acquired by an organism during its lifetime can be passed on to its progeny. The classic example is the claim that giraffes have long necks because they stretch their necks to reach high branches.

This straightforward and rather common-sense idea is absolutely anathema among neo-Darwinians, who currently hold the high ground in evolutionary theory. Their position is that mutations are entirely random and that only after the mutations are expressed as the traits of organisms does natural selection mindlessly and dispassionately fulfill its function of preserving those individuals upon whom an adaptive advantage had been conferred.

Their objection can be put like this: While the mushrooms may have given us better eyesight, sex, and language when eaten, how did these enhancements get into the human genome and become innately human? Nongenetic enhancements of an organism’s functioning made by outside agents retard the corresponding genetic reservoirs of those facilities by rendering them superfluous. In other words, if a necessary metabolite is common in available food, there will not be pressure to develop a trait for endogenous expression of the metabolite. Mushroom use would thus create individuals with less visual acuity, language facility, and consciousness. Nature would not provide those enhancements through organic evolution because the metabolic investment required to sustain them wouldn’t pay off, relative to the tiny metabolic investment required to eat mushrooms. And yet today we all have these enhancements, without taking mushrooms. So how did the mushroom modifications get into the genome?

The short answer to this objection, one that requires no defense of Lamarck’s ideas, is that the presence of psilocybin in the hominid diet changed the parameters of the process of natural selection by changing the behavioral patterns upon which that selection was operating. Experimentation with many types of foods was causing a general increase in the numbers of random mutations being offered up to the process of natural selection, while the augmentation of visual acuity, language use, and ritual activity through the use of psilocybin represented new behaviors. One of these new behaviors, language use, previously only a marginally important trait, was suddenly very useful in the context of new hunting and gathering lifestyles. Hence psilocybin inclusion in the diet shifted the parameters of human behavior in favor of patterns of activity that promoted increased language; acquisition of language led to more vocabulary and an expanded memory capacity. The psilocybin-using individuals evolved epigenetic rules or cultural forms that enabled them to survive and reproduce better than other individuals. Eventually the more successful epigenetically based styles of behavior spread through the populations along with the genes that reinforce them. In this fashion the population would evolve genetically and culturally.

As for visual acuity, perhaps the widespread need for corrective lenses among modern humans is a legacy of the long period of “artificial” enhancement of vision through psilocybin use. After all, atrophy of the olfactory abilities of human beings is thought by one school to be a result of a need for hungry omnivores to tolerate strong smells and tastes, perhaps even carrion. Trade-offs of this sort are common in evolution. The suppression of keenness of taste and smell would allow inclusion of foods in the diet that might otherwise be passed over as “too strong.” Or it may indicate something more profound about our evolutionary relationship to diet. My brother Dennis has written:

The apparent atrophy of the human olfactory system may actually represent a functional shift in a set of primitive, externally directed chemo-receptors to an interiorized regulatory function. This function may be related to the control of the human pheromonal system, which is largely under the control of the pineal gland, and which mediates, on a subliminal level, a host of psycho-sexual and psycho-social interactions between individuals. The pineal tends to suppress gonadal development and the onset of puberty, among other functions, and this mechanism may play a role in the persistence of neonatal characteristics in the human species. Delayed maturation and prolonged childhood and adolescence play a critical role in the neurological and psychological development of the individual, since they provide the circumstances which permit the post-natal development of the brain in the early, formative years of childhood. The symbolic, cognitive and linguistic stimuli that the brain experiences during this period are essential to its development and are the factors that make us the unique, conscious, symbol-manipulating, language-using beings that we are.

Neuroactive amines and alkaloids in the diet of early primates may have played a role in the biochemical activation of the pineal gland and the resulting adaptations.

pp. 46-60

HUMAN COGNITION

All the unique characteristics and preoccupations of human beings can be summed up under the heading of cognitive activities: dance, philosophy, painting, poetry, sport, meditation, erotic fantasy, politics, and ecstatic self-intoxication. We are truly Homo sapiens, the thinking animal; our acts are all a product of the dimension that is uniquely ours, the dimension of cognitive activity. Of thought and emotion, memory and anticipation. Of Psyche.

From observing the ayahuasca-using people of the Upper Amazon, it became very clear to me that shamanism is often intuitively guided group decision making. The shamans decide when the group should move or hunt or make war. Human cognition is an adaptive response that is profoundly flexible in the way it allows us to manage what in other species are genetically programmed behaviors.

We alone live in an environment that is conditioned not only by the biological and physical constraints to which all species are subject but also by symbols and language. Our human environment is conditioned by meaning. And meaning lies in the collective mind of the group.

Symbols and language allow us to act in a dimension that is “supranatural”-outside the ordinary activities of other forms of organic life. We can actualize our cultural assumptions, alter and shape the natural world in the pursuit of ideological ends and according to the internal model of the world that our symbols have empowered us to create. We do this through the elaboration of ever more effective, and hence ever more destructive, artifacts and technologies, which we feel compelled to use.

Symbols allow us to store information outside of the physical brain. This creates for us a relationship to the past very different from that of our animal companions. Finally, we must add to any analysis of the human picture the notion of self-directed modification of activity. We are able to modify our behavior patterns based on a symbolic analysis of past events, in other words, through history. Through our ability to store and recover information as images and written records, we have created a human environment as much conditioned by symbols and languages as by biological and environmental factors.

TRANSFORMATIONS OF MONKEYS

The evolutionary breakouts that led to the appearance of language and, later, writing are examples of fundamental, almost ontological, transformations of the hominid line. Besides providing us with the ability to code data outside the confines of DNA, cognitive activities allow us to transmit information across space and time. At first this amounted merely to the ability to shout a warning or a command, really little more than a modification of the cry of alarm that is a familiar feature of the behavior of social animals. Over the course of human history this impulse to communicate has motivated the elaboration of ever more effective communication techniques. But by our century, this basic ability has turned into the all-pervasive communications media, which literally engulf the space surrounding our planet. The planet swims through a self-generated ocean of messages. Telephone calls, data exchanges, and electronically transmitted entertainment create an invisible world experienced as global informational simultaneity. We think nothing of this; as a culture we take it for granted.

Our unique and feverish love of word and symbol has given us a collective gnosis, a collective understanding of ourselves and our world that has survived throughout history until very recent times. This collective gnosis lies behind the faith of earlier centuries in “universal truths” and common human values. Ideologies can be thought of as meaning-defined environments. They are invisible, yet they surround us and determine for us, though we may never realize it, what we should think about ourselves and reality. Indeed they define for us what we can think.

The rise of globally simultaneous electronic culture has vastly accelerated the rate at which we each can obtain information necessary to our survival. This and the sheer size of the human population as a whole have brought to a halt our physical evolution as a species. The larger a population is, the less impact mutations will have on the evolution of that species. This fact, coupled with the development of shamanism and, later, scientific medicine, has removed us from the theater of natural selection. Meanwhile libraries and electronic databases have replaced the individual human mind as the basic hardware providing storage for the cultural database. Symbols and languages have gradually moved us away from the style of social organization that characterized the mute nomadism of our remote ancestors and have replaced that archaic model with the vastly more complicated social organization characteristic of an electronically unified planetary society. As a result of these changes, we ourselves have become largely epigenetic, meaning that much of what we are as human beings is no longer in our genes but in our culture.

THE PREHISTORIC EMERGENCE OF HUMAN IMAGINATION

Our capacity for cognitive and linguistic activity is related to the size and organization of the human brain. Neural structures concerned with conceptualization, visualization, signification, and association are highly developed in our species. Through the act of speaking vividly, we enter into a flirtation with the domain of the imagination. The ability to associate sounds, or the small mouth noises of language, with meaningful internal images is a synesthetic activity. The most recently evolved areas of the human brain, Broca’s area and the neocortex, are devoted to the control of symbol and language processing.

The conclusion universally drawn from these facts is that the highly organized neurolinguistic areas of our brain have made language and culture possible. Where the search for scenarios of human emergence and social organization is concerned, the problem is this: we know that our linguistic abilities must have evolved in response to enormous evolutionary pressures-but we do not know what these pressures were.
Where psychoactive plant use was present, hominid nervous systems over many millennia would have been flooded by hallucinogenic realms of strange and alien beauty. However, evolutionary necessity channels the organism’s awareness into a narrow cul-de-sac where ordinary reality is perceived through the reducing valve of the senses. Otherwise, we would be rather poorly adapted for the rough-and-tumble of immediate existence. As creatures with animal bodies, we are aware that we are subject to a range of immediate concerns that we can ignore only at great peril. As human beings we are also aware of an interior world, beyond the needs of the animal body, but evolutionary necessity has placed that world far from ordinary consciousness.

PATTERNS AND UNDERSTANDING

Consciousness has been called “awareness of awareness” and is characterized by novel associations and connections among the various data of experience. Consciousness is like a super nonspecific immune response. The key to the working of the immune system is the ability of one chemical to recognize, to have a key-in-lock relationship with, another. Thus both the immune system and consciousness represent systems that learn, recognize, and remember.

As I write this I think of what Alfred North Whitehead said about understanding, that it is apperception of pattern as such. This is also a perfectly acceptable definition of consciousness. Awareness of pattern conveys the feeling that attends understanding. There presumably can be no limit to how much consciousness a species can acquire, since understanding is not a finite project with an imaginable conclusion, but rather a stance toward immediate experience. This appears self-evident from within a world view that sees consciousness as analogous to a source of light. The more powerful the light, the greater the surface area of darkness revealed. Consciousness is the moment-to-moment integration of the individual’s perception of the world. How well, one could almost say how gracefully, an individual accomplishes this integration determines that individual’s unique adaptive response to existence.

We are masters not only of individual cognitive activity, but, when acting together, of group cognitive activity as well. Cognitive activity within a group usually means the elaboration and manipulation of symbols and language. Although this occurs in many species, within the human species it is especially well developed. Our immense power to manipulate symbols and language gives us our unique position in the natural world. The power of our magic and our science arises out of our commitment to group mental activity, symbol sharing, meme replication (the spreading of ideas), and the telling of tall tales.

The idea, expressed above, that ordinary consciousness is the end product of a process of extensive compression and filtration, and that the psychedelic experience is the antithesis of this construction, was put forward by Aldous Huxley. In analyzing his experiences with mescaline, Huxley wrote:

I find myself agreeing with the eminent Cambridge philosopher, Dr. C. D. Broad, “that we should do well to consider the suggestion that the function of the brain and nervous system and sense organs is in the main eliminative and not productive.” The function of the brain and nervous system is to protect us from being overwhelmed and confused by this mass of largely useless and irrelevant knowledge, by shutting out most of what we should otherwise perceive or remember at any moment, and leaving only that very small and special selection which is likely to be practically useful. According to such a theory, each one of us is potentially Mind at Large. But in so far as we are animals, our business is at all costs to survive. To make biological survival possible, Mind at Large has to be funnelled through the reducing valve of the brain and nervous system. What comes out at the other end is a measly trickle of the kind of consciousness which will help us to stay alive on the surface of this particular planet. To formulate and express the contents of this reduced awareness, man has invented and endlessly elaborated those symbol-systems and implicit philosophies which we call languages. Every individual is at once the beneficiary and the victim of the linguistic tradition into which he has been born. That which, in the language of religion, is called “this world” is the universe of reduced awareness, expressed, and, as it were, petrified by language. The various “other worlds” with which human beings erratically make contact are so many elements in the totality of the awareness belonging to Mind at Large …. Temporary by-passes may be acquired either spontaneously, or as the result of deliberate “spiritual exercises,” . . . or by means of drugs.

What Huxley did not mention was that drugs, specifically the plant hallucinogens, can reliably and repeatedly open the floodgates of the reducing valve of consciousness and expose the individual to the full force of the howling Tao. The way in which we internalize the impact of this experience of the Unspeakable, whether encountered through psychedelics or other means, is to generalize and extrapolate our world view through acts of imagination. These acts of imagination represent our adaptive response to information concerning the outside world that is conveyed to us by our senses. In our species, culture-specific, situation-specific syntactic software in the form of language can compete with and sometimes replace the instinctual world of hard-wired animal behavior. This means that we can learn and communicate experience and thus put maladaptive behaviors behind us. We can collectively recognize the virtues of peace over war, or of cooperation over struggle. We can change.

As we have seen, human language may have arisen when primate organizational potential was synergized by plant hallucinogens. The psychedelic experience inspired us to true self-reflective thought in the first place and then further inspired us to communicate our thoughts about it.

Others have sensed the importance of hallucinations as catalysts of human psychic organization. Julian Jaynes’s theory, presented in his controversial book The Origin of Consciousness in the Breakdown of the Bicameral Mind, makes the point that major shifts in human self-definition may have occurred even in historical times. He proposes that through Homeric times people did not have the kind of interior psychic organization that we take for granted. Thus, what we call ego was for Homeric people a “god.” When danger threatened suddenly, the god’s voice was heard in the individual’s mind; an intrusive and alien psychic function was expressed as a kind of metaprogram for survival called forth under moments of great stress. This psychic function was perceived by those experiencing it as the direct voice of a god, of the king, or of the king in the afterlife. Merchants and traders moving from one society to another brought the unwelcome news that the gods were saying different things in different places, and so cast early seeds of doubt. At some point people integrated this previously autonomous function, and each person became the god and reinterpreted the inner voice as the “self” or, as it was later called, the “ego.”

Jaynes’s theory has been largely dismissed. Regrettably his book on the impact of hallucinations on culture, though 467 pages in length, manages to avoid discussion of hallucinogenic plants or drugs nearly entirely. By this omission Jaynes deprived himself of a mechanism that could reliably drive the kind of transformative changes he saw taking place in the evolution of human consciousness.

CATALYZING CONSCIOUSNESS

The impact of hallucinogens in the diet has been more than psychological; hallucinogenic plants may have been the catalysts for everything about us that distinguishes us from other higher primates, for all the mental functions that we associate with humanness. Our society more than others will find this theory difficult to accept, because we have made pharmacologically obtained ecstasy a taboo. Like sexuality, altered states of consciousness are taboo because they are consciously or unconsciously sensed to be entwined with the mysteries of our origin-with where we came from and how we got to be the way we are. Such experiences dissolve boundaries and threaten the order of the reigning patriarchy and the domination of society by the unreflecting expression of ego. Yet consider how plant hallucinogens may have catalyzed the use of language, the most unique of human activities.

One has, in a hallucinogenic state, the incontrovertible impression that language possesses an objectified and visible dimension, which is ordinarily hidden from our awareness. Language, under such conditions, is seen, is beheld, just as we would ordinarily see our homes and normal surroundings. In fact our ordinary cultural environment is correctly recognized, during the experience of the altered state, as the bass drone in the ongoing linguistic business of objectifying the imagination. In other words, the collectively designed cultural environment in which we all live is the objectification of our collective linguistic intent.

Our language-forming ability may have become active through the mutagenic influence of hallucinogens working directly on organelles that are concerned with the processing and generation of signals. These neural substructures are found in various portions of the brain, such as Broca’s area, that govern speech formation. In other words, opening the valve that limits consciousness forces utterance, almost as if the word is a concretion of meaning previously felt but left unarticulated. This active impulse to speak, the “going forth of the word,” is sensed and described in the cosmogonies of many peoples.

Psilocybin specifically activates the areas of the brain concerned with processing signals. A common occurrence with psilocybin intoxication is spontaneous outbursts of poetry and other vocal activity such as speaking in tongues, though in a manner distinct from ordinary glossolalia. In cultures with a tradition of mushroom use, these phenomena have given rise to the notion of discourse with spirit doctors and supernatural allies. Researchers familiar with the territory agree that psilocybin has a profoundly catalytic effect on the linguistic impulse.

Once activities involving syntactic self-expression were established habits among early human beings, the continued evolution of language in environments where mushrooms were scarce or unavailable permitted a tendency toward the expression and emergence of the ego. If the ego is not regularly and repeatedly dissolved in the unbounded hyperspace of the Transcendent Other, there will always be slow drift away from the sense of self as part of nature’s larger whole. The ultimate consequence of this drift is the fatal ennui that now permeates Western civilization.

The connection between mushrooms and language was brilliantly anticipated by Henry Munn in his essay “The Mushrooms of Language”: “Language is an ecstatic activity of signification. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. The spontaneity the mushrooms liberate is not only perceptual, but linguistic. For the shaman, it is as if existence were uttering itself through him.”

THE FLESH MADE WORD

The evolutionary advantages of the use of speech are both obvious and subtle. Many unusual factors converged at the birth of human language. Obviously speech facilitates communication and cognitive activity, but it also may have had unanticipated effects on the whole human enterprise.

Some neurophysiologists have hypothesized that the vocal vibration associated with human use of language caused a kind of cleansing of the cerebrospinal fluid. It has been observed that vibrations can precipitate and concentrate small molecules in the spinal fluid, which bathes and continuously purifies the brain. Our ancestors may have, consciously or unconsciously, discovered that vocal sound cleared the chemical cobwebs out of their heads. This practice may have affected the evolution of our present-day thin skull structure and proclivity for language. A self-regulated process as simple as singing might well have positive adaptive advantages if it also made the removal of chemical waste from the brain more efficient. The following excerpt supports this provocative idea:

Vibrations of the human skull, as produced by loud vocalization, exert a massaging effect on the brain and facilitate elution of metabolic products from the brain into the cerebrospinal fluid (CSF) . . . . The Neanderthals had a brain 15% larger than we have, yet they did not survive in competition with modern humans. Their brains were more polluted, because their massive skulls did not vibrate and therefore the brains were not sufficiently cleaned. In the evolution of the modern humans the thinning of cranial bones was important.

As already discussed, hominids and hallucinogenic plants must have been in close association for a long span of time, especially if we want to suggest that actual physical changes in the human genome resulted from the association. The structure of the soft palate in the human infant and timing of its descent is a recent adaptation that facilitates the acquisition of language. No other primate exhibits this characteristic. This change may have been a result of selective pressure on mutations originally caused by the new omnivorous diet.

WOMEN AND LANGUAGE

Women, the gatherers in the Archaic hunter-gatherer equation, were under much greater pressure to develop language than were their male counterparts. Hunting, the prerogative of the larger male, placed a premium on strength, stealth, and stoic waiting. The hunter was able to function quite well on a very limited number of linguistic signals, as is still the case among hunting peoples such as the !Kung or the Maku.

For gatherers, the situation was different. Those women with the largest repertoire of communicable images of foods and their sources and secrets of preparation were unquestionably placed in a position of advantage. Language may well have arisen as a mysterious power possessed largely by women-women who spent much more of their waking time together-and, usually, talking-than did men, women who in all societies are seen as group-minded, in contrast to the lone male image, which is the romanticized version of the alpha male of the primate troop.

The linguistic accomplishments of women were driven by a need to remember and describe to each other a variety of locations and landmarks as well as numerous taxonomic and structural details about plants to be sought or avoided. The complex morphology of the natural world propelled the evolution of language toward modeling of the world beheld. To this day a taxonomic description of a plant is a Joycean thrill to read: “Shrub 2 to 6 feet in height, glabrous throughout. Leaves mostly opposite, some in threes or uppermost alternate, sessile, linear-lanceolate or lanceolate, acute or acuminate. Flowers solitary in axils, yellow, with aroma, pedicellate. Calyx campanulate, petals soon caducous, obovate” and so on for many lines.

The linguistic depth women attained as gatherers eventually led to a momentous discovery: the discovery of agriculture. I call it momentous because of its consequences. Women realized that they could simply grow a restricted number of plants. As a result, they learned the needs of only those few plants, embraced a sedentary lifestyle, and began to forget the rest of nature they had once known so well.

At that point the retreat from the natural world began, and the dualism of humanity versus nature was born. As we will soon see, one of the places where the old goddess culture died, Çatal Hüyük, in present-day Anatolian Turkey, is the very place where agriculture may have first arisen. At places like Çatal Hüyük and Jericho, humans and their domesticated plants and animals became for the first time physically and psychologically separate from the life of untamed nature and the howling unknown. Use of hallucinogens can only be sanctioned in hunting and gathering societies. When agriculturists use these plants, they are unable to get up at dawn the morning after and go hoe the fields. At that point, corn and grain become gods-gods that symbolize domesticity and hard labor. These replace the old goddesses of plant-induced ecstasy.

Agriculture brings with it the potential for overproduction, which leads to excess wealth, hoarding, and trade. Trade leads to cities; cities isolate their inhabitants from the natural world. Paradoxically, more efficient utilization of plant resources through agriculture led to a breaking away from the symbiotic relationship that had bound human beings to nature. I do not mean this metaphorically. The ennui of modernity is the consequence of a disrupted quasisymbiotic relationship between ourselves and Gaian nature. Only a restoration of this relationship in some form is capable of carrying us into a full appreciation of our birthright and sense of ourselves as complete human beings.

HABIT AS CULTURE AND RELIGION

At regular intervals that were probably lunar, the ordinary activities of the small nomadic group of herders were put aside. Rains usually followed the new moon in the tropics, making mushrooms plentiful. Gatherings took place at night; night is the time of magical projection and hallucinations, and visions are more easily obtained in darkness. The whole clan was present from oldest to youngest. Elders, especially shamans, usually women but often men, doled out each person’s dose. Each clan member stood before the group and reflectively chewed and swallowed the body of the Goddess before returning to his or her place in the circle. Bone flutes and drums wove within the chanting. Line dances with heavy foot stamping channeled the energy of the first wave of visions. Suddenly the elders signal silence.

In the motionless darkness each mind follows its own trail of sparks into the bush while some people keen softly. They feel fear, and they triumph over fear through the strength of the group. They feel relief mingled with wonder at the beauty of the visionary expanse; some spontaneously reach out to those nearby in simple affection and an impulse for closeness or in erotic desire. An individual feels no distance between himself or herself and the rest of the clan or between the clan and the world. Identity is dissolved in the higher wordless truth of ecstasy. In that world, all divisions are overcome. There is only the One Great Life; it sees itself at play, and it is glad.

The impact of plants on the evolution of culture and consciousness has not been widely explored, though a conservative form of this notion appears in R. Gordon Wasson’s The Road to Eleusis. Wasson does not comment on the emergence of self-reflection in hominids, but does suggest hallucinogenic mushrooms as the causal agent in the appearance of spiritually aware human beings and the genesis of religion. Wasson feels that omnivorous foraging humans would have sooner or later encountered hallucinogenic mushrooms or other psychoactive plants in their environment:

As man emerged from his brutish past, thousands of years ago, there was a stage in the evolution of his awareness when the discovery of the mushroom (or was it a higher plant?) with miraculous properties was a revelation to him, a veritable detonator to his soul, arousing in him sentiments of awe and reverence, and gentleness and love, to the highest pitch of which mankind is capable, all those sentiments and virtues that mankind has ever since regarded as the highest attribute of his kind. It made him see what this perishing mortal eye cannot see. How right the Greeks were to hedge about this Mystery, this imbibing of the potion with secrecy and surveillance! . . . Perhaps with all our modern knowledge we do not need the divine mushroom anymore. Or do we need them more than ever? Some are shocked that the key even to religion might be reduced to a mere drug. On the other hand, the drug is as mysterious as it ever was: “like the wind that comes we know not whence nor why.” Out of a mere drug comes the ineffable, comes ecstasy. It is not the only instance in the history of humankind where the lowly has given birth to the divine.

Scattered across the African grasslands, the mushrooms would be especially noticeable to hungry eyes because of their inviting smell and unusual form and color. Once having experienced the state of consciousness induced by the mushrooms, foraging humans would return to them repeatedly, in order to reexperience their bewitching novelty. This process would create what C. H. Waddington called a “creode,” a pathway of developmental activity, what we call a habit.

ECSTASY

We have already mentioned the importance of ecstasy for shamanism. Among early humans a preference for the intoxication experience was ensured simply because the experience was ecstatic. “Ecstatic” is a word central to my argument and preeminently worthy of further attention. It is a notion that is forced on us whenever we wish to indicate an experience or a state of mind that is cosmic in scale. An ecstatic experience transcends duality; it is simultaneously terrifying, hilarious, awe-inspiring, familiar, and bizarre. It is an experience that one wishes to have over and over again.

For a minded and language-using species like ourselves, the experience of ecstasy is not perceived as simple pleasure but, rather, is incredibly intense and complex. It is tied up with the very nature of ourselves and our reality, our languages, and our imaginings of ourselves. It is fitting, then, that it is enshrined at the center of shamanic approaches to existence. As Mircea Eliade pointed out, shamanism and ecstasy are at root one concern:

This shamanic complex is very old; it is found, in whole or in part, among the Australians, the archaic peoples of North and South America, in the polar regions, etc. The essential and defining element of shamanism is ecstasy: the shaman is a specialist in the sacred, able to abandon his body and undertake cosmic journeys “in the spirit” (in trance). “Possession” by spirits, although documented in a great many shamanisms, does not seem to have been a primary and essential element. Rather, it suggests a phenomenon of degeneration; for the supreme goal of the shaman is to abandon his body and rise to heaven or descend into hell-not to let himself be “possessed” by his assisting spirits, by demons or the souls of the dead; the shaman’s ideal is to master these spirits, not to let himself be “occupied” by them.

Gordon Wasson added these observations on ecstasy:

In his trance the shaman goes on a far journey-the place of the departed ancestors, or the nether world, or there where the gods dwell-and this wonderland is, I submit, precisely where the hallucinogens take us. They are a gateway to ecstasy. Ecstasy in itself is neither pleasant nor unpleasant. The bliss or panic into which it plunges you is incidental to ecstasy. When you are in a state of ecstasy, your very soul seems scooped out from your body and away it goes. Who controls its flight: Is it you, or your “subconscious,” or a “higher power”? Perhaps it is pitch dark, yet you see and hear more clearly than you have ever seen or heard before. You are at last face to face with Ultimate Truth: this is the overwhelming impression (or illusion) that grips you. You may visit Hell, or the Elysian fields of Asphodel, or the Gobi desert, or Arctic wastes. You know awe, you know bliss, and fear, even terror. Everyone experiences ecstasy in his own way, and never twice in the same way. Ecstasy is the very essence of shamanism. The neophyte from the great world associates the mushrooms primarily with visions, but for those who know the Indian language of the shaman the mushrooms “speak” through the shaman. The mushroom is the Word: es habla, as Aurelio told me. The mushroom bestows on the curandero what the Greeks called Logos, the Aryan Vac, Vedic Kavya, “poetic potency,” as Louis Renou put it. The divine afflatus of poetry is the gift of the entheogen. The textual exegete skilled only in dissecting the cruces of the verses lying before him is of course indispensable and his shrewd observations should have our full attention, but unless gifted with Kavya, he does well to be cautious in discussing the higher reaches of Poetry. He dissects the verses but knows not ecstasy, which is the soul of the verses.

The Magic Language of the Fourth Way
by Pierre Bonnasse
pp. 228-234

Speech, just like sacred medicine, forms the basis of the shamanic path in that it permits us not only to see but also to do. Ethnobotany, the science that studies man as a function of his relationship to the plants around him, offers us new paths of reflection, explaining our relationship to language from a new angle that reconsiders all human evolution in a single movement. It now appears clear that the greatest power of the shaman, that master of ecstasy, resides in his mastery of the magic word stimulated by the ingestion of modifiers of consciousness.

For the shaman, language produces reality, our world being made of language. Terence McKenna, in his revolutionary endeavor to rethink human evolution, shows how plants have been able to influence the development of humans and animals. 41 He explains why farming and the domestication of animals as livestock were a great step forward in our cultural evolution: It was at this moment, according to him, that we were able to come into contact with the Psilocybe mushroom, which grows on and around dung. He supports the idea that “mutation-causing, psychoactive chemical compounds in the early human diet directly influenced the rapid reorganization of the brain’s information-processing capacities.” 42 Further, because “thinking about human evolution ultimately means thinking about the evolution of human consciousness,” he supports the thesis that psychedelic plants “may well have synergized the emergence of language and religion.” 43

Studies undertaken by Fischer have shown that weak doses of psilocybin can improve certain types of mental performance while making the investigator more aware of the real world. McKenna distinguishes three degrees of effects of psilocybin: improvement of visual acuity, increase of sexual excitation, and, at higher doses, “certainly . . . religious concerns would be at the forefront of the tribe’s consciousness, simply because of the power and strangeness of the experience itself.” 44 Because “the psilocybin intoxication is a rapture whose breadth and depth is the despair of prose,” it is entirely clear to McKenna that shamanic ecstasy, characterized by its “boundary-dissolving qualities,” played a crucial role in the evolution of human consciousness, which, according to him, can be attributed to “psilocybin’s remarkable property of stimulating the language-forming capacity of the brain.” Indeed, “[i]ts power is so extraordinary that psilocybin can be considered the catalyst to the human development of language.” 45 In response to the neo-Darwinist objection, McKenna states that “the presence of psilocybin in the hominid diet changed the parameters of the process of natural selection by changing the behavioral patterns upon which that selection was operating,” and that “the augmentation of visual acuity, language use, and ritual activity through the use of psilocybin represented new behaviors.” 46

Be that as it may, it is undeniable that the unlimiters of consciousness, as Charles Duits calls them, have a real impact upon linguistic activity in that they strongly stimulate the emergence of speech. If, according to McKenna’s theories, “psilocybin inclusion in the diet shifted the parameters of human behavior in favor of patterns of activity that promoted increased language,” resulting in “more vocabulary and an expanded memory capacity,” 47 then it seems obvious that the birth of poetry, literature, and all the arts came about ultimately through the fantastic encounter between humans and the magic mushroom—a primordial plant, the “umbilical cord linking us to the feminine spirit of the planet,” and thence, inevitably, to poetry. Rich in behavioral and evolutionary consequences, the mushroom, in its dynamic relationship to the human being, propelled us toward higher cultural levels developing parallel to self-reflection. 48

This in no way means that this level of consciousness is inherent in all people, but it must be observed that the experience in itself leads to a gaining of consciousness which, in order to be preserved and maintained, requires rigorous and well-directed work on ourselves. This being said, the experience allows us to observe this action in ourselves in order to endeavor to understand its subtle mechanisms. Terence McKenna writes,

Of course, imagining these higher states of self-reflection is not easy. For when we seek to do this we are acting as if we expect language to somehow encompass that which is, at present, beyond language, or translinguistic. Psilocybin, the hallucinogen unique to mushrooms, is an effective tool in this situation. Psilocybin’s main synergistic effect seems ultimately to be in the domain of language. It excites vocalization; it empowers articulation; it transmutes language into something that is visibly beheld. It could have had an impact on the sudden emergence of consciousness and language use in early humans. We literally may have eaten our way to higher consciousness. 49

If we espouse this hypothesis, then speaking means evoking and repeating the primordial act of eating the sacred medicine. Ethnobotanists insist upon the role of the human brain in the accomplishment of this process, pinpointing precisely the relevant area of activity, which, in Gurdjieffian terms, is located in the center of gravity of the intellectual center: “Our capacity for cognitive and linguistic activity is related to the size and organization of the human brain. . . . The most recently evolved areas of the human brain, Broca’s area and the neocortex, are devoted to the control of symbol and language processing.” 50 It thus appears that these are the areas of the brain that have allowed for the emergence of language and culture. Yet McKenna adds, “our linguistic abilities must have evolved in response to enormous evolutionary pressures,” though we do not know the nature of these pressures. According to him, it is this “immense power to manipulate symbols and language” that “gives us our unique position in the natural world.” 51 This is obvious, in that speech and consciousness, inextricably linked, are solely the property of humans. Thus it seems logical that the plants known as psychoactive must have been the catalysts “for everything about us that distinguishes us from other higher primates, for all the mental functions that we associate with humanness,” 52 with the primary position being held by language, “the most unique of human activities,” and the catalyst for poetic and literary activity.

Under the influence of an unlimiter, we have the incontrovertible impression that language possesses an objectified and visible dimension that is ordinarily hidden from our awareness. Under such conditions, language is seen and beheld just as we would ordinarily see our homes and normal surroundings. In fact, during the experience of the altered state, our ordinary cultural environment is recognized correctly as the bass drone in the ongoing linguistic business of objectifying the imagination. In other words, the collectively designed cultural environment in which we all live is the objectification of our collective linguistic intent.

Our language-forming ability may have become active through the mutagenic influence of hallucinogens working directly on organelles that are concerned with the processing and generation of signals. These neural substructures are found in various portions of the brain, such as Broca’s area, that govern speech formation. In other words, opening the valve that limits consciousness forces utterance, almost as if the word is a concretion of meaning previously felt but left unarticulated. This active impulse to speak, the “going forth of the word,” is sensed and described in the cosmogonies of many peoples.

Psilocybin specifically activates the areas of the brain concerned with processing signals. A common occurrence with psilocybin intoxication is spontaneous outbursts of poetry and other vocal activity such as speaking in tongues, though in a manner distinct from ordinary glossolalia. In cultures with a tradition of mushroom use, these phenomena have given rise to the notion of discourse with spirit doctors and supernatural allies. Researchers familiar with the territory agree that psilocybin has a profoundly catalytic effect on the linguistic impulse. 53

Here we are touching upon the higher powers of speech—spontaneous creations, outbursts of poetry and suprahuman communications—which are part of the knowledge of the shamans and “sorcerers” who, through years of rigorous education, have become highly perceptive of these phenomena, which elude the subjective consciousness. In his essay “The Mushrooms of Language,” Henry Munn points to the direct links existing between the states of ecstasy and language: “Language is an ecstatic activity of signification. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. . . . The spontaneity they liberate is not only perceptual, but linguistic . . . For the shaman, it is as if existence were uttering itself through him.” 54

In the 1920s, the Polish writer S. I. Witkiewicz, who attributed crucial importance to verbal creation, showed how peyote (he was one of the first people in Europe to experiment with it, or, at least, one of the first to give an account of doing so) acts upon the actual creation of words and also intervenes in the structure of sentences themselves: “. . . [I]t must also be remarked that peyote, perhaps by reason of the desire one has to capture with words that which cannot be captured, creates conceptual neologisms that belong to it alone and twists sentences in order to adapt their constructions to the frightening dimensions of its bizarrification . . .” 55 Peyote also gives those who ingest it a desire to create “new combinations of meanings.” Witkiewicz distinguishes three categories of objects in his visions: dead objects, moving objects, and living creatures. Regarding this last category, he distinguishes the “real” living creatures from the “fantastical” living creatures, which “discourage any attempt at description.” This is the moment when peyote intervenes: when those who wish to describe find themselves facing the limits of language. Peyote does not break through these limits; it simply shows that they do not exist, that they are hallucinations of the ordinary consciousness, that they are illusory, a mirage of tradition and the history of language.

The lucidogen—as it is called by Charles Duits, who created other neologisms for describing his experience with the sacred cactus—shows that life is present in everything, including speech, and he proves it. Sometimes, peyote leads us to the signifiers that escape us, always in order better to embrace the signified. Witkiewicz, pushing the phenomenon to the extreme limits of the senses and the sensible, insists:

I must draw attention to the fact that under the influence of peyote, one wants to make up neologisms. One of my friends, the most normal man in the world where language is concerned, in a state of trance and powerless to come to grips with the strangeness of these visions which defied all combinations of normal words, described them thus: “Pajtrakaly symforove i kondjioul v trykrentnykh pordeliansach.” I devised many formulas of this type on the night when I went to bed besieged by visions. I remember only this one. There is therefore nothing surprising in the fact that I, who have such inclinations even under normal conditions, should sometimes be driven to create some fancy word in order to attempt to disentangle and sort out the infernal vortex of creatures that unfurled upon me all night long from the depths of the ancient world of peyote. 56

Here, we cannot help but remember René Daumal’s experience, reported in “Le souvenir déterminant”: Under the influence of carbon tetrachloride, he pronounced with difficulty: “approximately: temgouf temgouf drr . . .” Henry Munn makes a similar remark after having taken part in shamanic rituals: “The mushroom session of language creates the words for phenomena without name.” 57 Sacred plants (and some other substances) are neologens, meaning they produce or generate neologisms from the attempts made at description by the subjects who consume them. This new word, this neologism created by circumstance, appears to be suited for this linguistic reality. We now have a word to designate this particular phenomenon pushing us against the limits of language, which in fact are revealed to be illusory.

Beyond this specific case, what is it that prevents us from creating new words whenever it appears necessary? Witkiewicz, speaking of language and life, defends the writer’s right to take liberties with the rules and invent new words. “Although certain professors insist on clinging to their own tripe,” he writes, “language is a living thing, even if it has always been considered a mummy, even if it has been thought impermissible to change anything in it. We can only imagine what literature, poetry, and even this accursed and beloved life would look like otherwise.” 58 Peyote not only incites us to this, but also, more forcefully, exercising a mysterious magnetic attraction toward a sort of supreme meaning beyond language and shaking up conventional signifiers and beings alike, peyote acts directly upon the heart of speech within the body of language. In this sense, it takes part actively and favorably in the creation of the being, the new and infinitely renewed human who, after a death that is more than symbolic, is reborn to new life. It is also very clear, in light of this example, that psilocybin alone does not explain everything, and that all lucidogenic substances work toward this same opening, this same outpouring of speech. McKenna writes:

Languages appear invisible to the people who speak them, yet they create the fabric of reality for their users. The problem of mistaking language for reality in the everyday world is only too well known. Plant use is an example of a complex language of chemical and social interactions. Yet most of us are unaware of the effects of plants on ourselves and our reality, partly because we have forgotten that plants have always mediated the human cultural relationship to the world at large. 59

pp. 238-239

It is interesting to note this dimension of speech specific to shamans, this inspired, active, healing speech. “It is not I who speak,” Heraclitus said, “it is the word.” The receptiveness brought about by an increased level of consciousness allows us not only to understand other voices, but also, above all, to express them in their entire magical substance. “Language is an ecstatic activity of signification. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. . . . The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, of the logos in activity.” 72

The shamanic paroxysm is therefore the mastery of the word, the mastery of the sacred songs very often inspired by the powers that live in plants—which instruct us, making us receptive to phenomena that escape the ordinary consciousness. The shaman becomes a channel through which subtle energies can pass. Because of the mystic intoxication, he becomes the instrument for spirits that express themselves through him. Hence the word tzo—“says”—which punctuates the phrases of the Mazatec shaman in her communication with the “little growing things”: “Says, says, says. It is said. I say. Who says! We say, man says, language says, being and existence say.” 73 “The inspired man,” writes the Mexican poet Octavio Paz in an essay on Breton, “the man who speaks the truth, says nothing that is his own: Through his mouth, it is the language that speaks.” 74

The language thus regains its primordial power, its creative force and Orphic value, which determine all true poetry, for, as Duits writes, poetry—which is born in the visionary experience—is nothing other than “the language of the gods.” There is nothing phantasmagoric, hallucinated, or illusory about this speech. “[W]ords are materializations of consciousness; language is a privileged vehicle of our relation to reality,” writes Munn. Because poetry carries the world, it is the language of power, a tool in the service of knowledge and action. The incantatory repetition of names, for example, an idea we have already touched upon in our discussion of prayer, acts upon the heart of the being. “The shaman has a conception of poesis in its original sense as an action: words themselves are medicine.” 75 The words—used in their sacred dimension—work toward the transmutation of being, the healing of the spirit, our development, but in order for it to be effective, the magic word must be born from a direct confrontation with the experience, because experience alone is a safe reserve for truth. Knowledge is not enough; only those who have eaten are in a position to understand, only those who have heard and seen are in a position to say. If speech goes farther than the eye, it is because it has the power of doing. “Though the psychedelic experience produced by the mushrooms is of heightened perceptivity,” Munn writes, “the I say is of privileged importance to the I see.” 76 Psychedelic speech is speech of power, revealing the spirit.

Darwin’s Pharmacy
by Richard M. Doyle
pp. 8-23

Rhetoric is the practice of learning and teaching eloquence, persuasion, and information architecture by revealing the choices of expression or interpretation open to any given rhetor, viewer, listener, or reader. Robert Anton Wilson offers a definition of rhetoric by example when he focuses on the word “reality” in his book Cosmic Trigger:

“Reality” is a word in the English language which happens to be (a) a noun and (b) singular. Thinking in the English language (and in cognate Indo-European languages) therefore subliminally programs us to conceptualize “reality” as one block-like entity, sort of like a huge New York skyscraper, in which every part is just another “room” within the same building. This linguistic program is so pervasive that most people cannot “think” outside it at all, and when one tries to offer a different perspective they imagine one is talking gibberish. (iii) […]

Mitchell’s vision offers perhaps an equally startling irony: it was only by taking on a literally extraterrestrial perspective that the moon walker overcame alienated perception.5 […]

“Thus, perception is not an object but rather the label for a nonlinear process involving an object, a percipient and information.” (Mitchell n.d.; emphasis mine) […]

Like the mind apprehending it, information “wants to be free” if only because it is essentially “not an object,” but rather “the label for a nonlinear process involving an object, a percipient and information.”6 It is worth noting that Mitchell’s experience induces a desire to comprehend, an impulse that is not only the desire to tell the story of his ecodelic imbrication but a veritable symptom of it.7 […]

What are psychedelics such that they seem to persuade humans of their interconnection with an ecosystem?

Terence McKenna’s 1992 book recursively answered this query with a title: Food of the Gods. Psychedelics, McKenna argued, were important vectors in the evolution of consciousness and spiritual practice. In his “shaggy primate story,” McKenna argued that psilocybin mushrooms were a “genome-shaping power” integral to the evolution of human consciousness. On this account, human consciousness—the only instance we know of where one part of the ecosystem is capable of reflecting on itself as a self and acting on the result—was “bootstrapped” by its encounter with the astonishing visions of high-dose psilocybin, an encounter with the Transcendental Other McKenna dubbed “a glimpse of the peacock angel.” Hence for McKenna, psychedelics are both a food fit for the gods and a food that, in scrambling the very distinction between food and drug, man and god, engenders less transcendence than immanence—each is recursively implicated, nested, in the other. […]

Evolutionarily speaking the emergence of widespread animal life on earth is not separable from a “mutualistic” economy of plants, pollinators, and seed dispersers.

The basis for the spectacular radiations of animals on earth today is clearly the resources provided by plants. They are the major primary producers, autotrophically energizing planet Earth…the new ecological relationships of flowering plants resulted in colonizing species with population structures conducive to rapid evolutionary change. (Price, 4)

And if mammalian and primate evolution is enmeshed in a systemic way with angiosperms (flowering plants), so too have humans and other primates been constantly constituted by interaction with plants. […]

Navigating our implication with both plants and their precipitates might begin, then, with the startling recognition of plants as an imbricated power, a nontrivial vector in the evolution of Homo sapiens, a power against which we have waged war. “Life is a rhizome,” wrote Carl Jung, our encrypted ecological “shadow” upon which we manifest as Homo sapiens, whose individuation is an interior folding or “involution” that increases, rather than decreases, our entanglement with any given ecosystem. […]

In other words, psychedelics are (a suppressed) part of evolution. As Italian ethnobotanist Giorgio Samorini put it, “the drug phenomenon is a natural phenomenon, while the drug problem is a cultural problem” (87). […]

Indeed, even DMT, an endogenous and very real product of the human brain, has been “scheduled” by the federal government. DMT would be precisely, by most first person accounts, “the most potent hallucinogen on sale in Haight or Ashbury or Telegraph Avenue” and is a very real attribute of our brains as well as plant ecology. We are all “holding” a Schedule One psychedelic—our own brains, wired for ecodelia, are quite literally against the law. […]

The first principle of harm reduction with psychedelics is therefore this: one must pay attention to set and setting, the organisms for whom and context in which the psychedelic experience unfolds. For even as the rediscovery of psychedelics by twentieth-century technoscience suggested to many that consciousness was finally understandable via a molecular biology of the brain, this apex of reductionism also fostered the recognition that the effects of psychedelics depend on much more than neurochemistry.23 If ecodelics can undoubtedly provoke the onset of an extra-ordinary state of mind, they do so only on the condition of an excessive response-ability, a responsiveness to rhetorical conditions—the sensory and symbolic framework in which they are assayed. Psychologists Ralph Metzner and Timothy Leary made this point most explicitly in their discussion of session “programming,” the sequencing of text, sound, and sensation that seemed to guide, but not determine, the content of psychedelic experiences:

It is by now a well-known fact that psychedelic drugs may produce religious, aesthetic, therapeutic or other kinds of experiences depending on the set and setting…. Using programming we try to control the content of a psychedelic experience in specific desired directions. (5; reversed order)

Leary, Metzner, and many others have provided much shared code for such programming, but all of these recipes are bundled with an unavoidable but difficult to remember premise: an extraordinary sensitivity to initial rhetorical conditions characterizes psychedelic “drug action.” […]

Note that the nature of the psychedelic experience is contingent upon its rhetorical framing—what Leary, Metzner, and Richard Alpert characterized in The Psychedelic Experience as “the all-determining character of thought” in psychedelic experience. The force of rhetorical conditions here is immense— for Huxley it is the force linking premise to conclusion:

“No, I couldn’t control it. If one began with fear and hate as the major premise, one would have to go on to the conclusion.” (Ibid.)

Rhetorical technologies structure and enable fundamentally different kinds of ecodelic experiences. If the psychonaut “began” with different premises, different experiences would ensue.

pp. 33-37

Has this coevolution of rhetorical practices and humans ceased? This book will argue that psychedelic compounds have already been vectors of technoscientific change, and that they have been effective precisely because they are deeply implicated in the history of human problem solving. Our brains, against the law with their endogenous production of DMT, regularly go ecodelic and perceive dense interconnectivity. The human experience of radical interconnection with an ecosystem becomes a most useful snapshot of the systemic breakdowns between “autonomous” organisms necessary to sexual reproduction, and, not incidentally, they render heuristic information about the ecosystem as an ecosystem, amplifying human perception of the connections in their environment and allowing those connections to be mimed and investigated. This increased interconnection can be spurred simply by providing a different vision of the environment. Psychologist Roland Fischer noted that some aspects of visual acuity were heightened under the influence of psilocybin, and his more general theory of perception suggests that this acuity emerges out of a shift in sensory-motor ratios.

For Fischer the very distinction between “hallucination” and “perception” resides in the ratio between sensory data and motor control. Hallucination, for Fischer, is that which cannot be verified in three-dimensional Euclidean space. Hence Fischer differentiates hallucination from perception based not on truth or falsehood, but on a capacity to interact: if a subject can interact with a sensation, and at least work toward verifying it in their lived experience, navigating the shift in sensory-motor ratios, then the subject has experienced something on the order of perception. Such perception is easily fooled and is often false, but it appears to be sufficiently connective to our ecosystems to allow for human survival and sufficiently excitable for sexually selected fitness. If a human subject cannot interact with a sensation, Fischer applies the label “hallucination” for the purpose of creating a “cartography of ecstatic states.”

Given the testimony of psychonauts about their sense of interconnection, Fischer’s model suggests that ecodelic experience tunes perception through a shift of sensory-motor ratios toward an apprehension of, and facility for, interconnection: the econaut becomes a continuum between inside and outside. […] speech itself might plausibly emerge as nothing other than a symptom and practice of early hominid use of ecodelics.

pp. 51-52

It may seem that the visions—as opposed to the description of set and setting or even affect and body load—described in the psychonautic tradition elude this pragmatic dynamic of the trip report. Heinrich Klüver, writing in the 1940s, and Benny Shanon, writing in the early twenty-first century, both suggest that the forms of psychedelic vision (for mescaline and ayahuasca respectively) are orderly and consistent even while they are indescribable. Visions, then, would seem to be messages without a code (Barthes) whose very consistency suggested content.

Hence this general consensus on the “indescribableness” (Ellis) of psychedelic experience still yields its share of taxonomies as well as the often remarkable textual treatments of the “retinal circus” that has become emblematic of psychedelic experience. The geometric, fractal, and arabesque visuals of trip reports would seem to be little more than pale snapshots of the much sought after “eye candy” of visual psychedelics such as LSD, DMT, 2C-I, and mescaline. Yet as deeply participatory media technologies, psychedelics involve a learning curve capable of “going with” and accepting a diverse array of phantasms that challenge the beholder and her epistemology, ontology, and identity. Viewed with the requisite detachment, such visions can effect transformation in the observing self, as it finds itself nested within an imbricated hierarchy: egoic self observed by ecstatic Atman which apprehends itself as Brahman reverberating and recoiling back onto ego. Many contemporary investigators of DMT, for example, expect and often encounter what Terence McKenna described as the “machine elves,” elfin entities seemingly tinkering with the ontological mechanics of an interdimension, so much so that the absence of such entities is itself now a frequent aspect of trip reportage and skeptics assemble to debunk elfin actuality (Kent 2004).

p. 63

While synesthesia is classically treated as a transfer or confusion of distinct perceptions, as in the tactile and gustatory conjunction of “sharp cheese,” more recent work in neurobiology by V. S. Ramachandran and others suggests that this mixture is fundamental to language itself—the move from the perceptual to the signifying, in this view, is itself essentially synesthetic. Rather than an odd symptom of a sub-population, then, synesthesia becomes fundamental to any act of perception or communication, an attribute of realistic perception rather than a pathological deviation from it.

pp. 100-126

Rhetorical practices are practically unavoidable on the occasion of death, and scholars in the history of rhetoric and linguistics have both opined that it was as a practice of mourning that rhetoric emerged as a recognizable and repeatable practice in the “West.” […] It is perhaps this capacity of some rhetorical practices to induce and manage the breakdown of borders—such as those between male and female, life and death, silence and talk—that deserves the name “eloquence.” Indeed, the Oxford English Dictionary reminds us that it is the very difference between silence and speech that eloquence manages: a. Fr. éloquent, ad. L. ēloquent-em, pr. pple., f. ēloquī to speak out.2 […]

And despite Huxley’s concern that such an opening of the doors of (rhetorical) perception would be biologically “useless,” properly Darwinian treatments of such ordeals of signification would place them squarely within the purview of sexual selection—the competition for mates. If psychedelics such as the west African plant Iboga are revered for “breaking open the head,” it may be because we are rather more like stags butting heads than we are ordinarily comfortable putting into language (Pinchbeck 2004, cover). And our discomfort and fascination ensues, because sexual selection is precisely where sexual difference is at stake rather than determined. A gradient, sexuality is, of course, not a binary form but is instead an enmeshed involutionary zone of recombination: human reproduction takes place in a “bardo” or between space that is neither male nor female nor even, especially, human. Indeed, sex probably emerged as a technique for exploring the space of all possible genotypes, breaking the symmetry of an asexual reproduction and introducing the generative “noise” of sexuality with which Aldous Huxley’s flowers resonated. In this context, psychedelics become a way of altering the context of discursive signaling within which human reproduction likely evolved, a sensory rather than “extra-sensory” sharing of information about fitness.

Doctors of the Word

In an ecstatic treatment of Mazatec mushroom intoxication, Henry Munn casts the curanderas as veritable Sophists whose inebriation is marked by an incessant speaking:

The shamans who eat them, their function is to speak, they are the speakers who chant and sing the truth, they are the oral poets of their people, the doctors of the word, they who tell what is wrong and how to remedy it, the seers and oracles, the ones possessed by the voice. (Munn, 88)

Given the contingency of psychedelic states on the rhetorical conditions under which they are used, it is perhaps not surprising that the Mazatec, who have used the “little children” of psilocybin for millennia, have figured out how to modulate and even program psilocybin experience with rhetorical practices. But the central role enjoyed by rhetoricians here—those doctors of the word—should not obscure the difficulty of the shaman/rhetorician’s task: “possessed by the voice,” such curanderas less control psychedelic experience than consistently give themselves over to it. They do not wield ecstasy, but are taught by it. Munn’s mushroom Sophists are athletes of “negative capability,” nineteenth-century poet John Keats’s term for the capacity to endure uncertainty. Hence the programming of ecodelic experience enables not control but a practiced flexibility within ritual, a “jungle gym” for traversing the transhuman interpellation. […]

Fundamental to shamanic rhetoric is the uncertainty clustering around the possibility of being an “I,” an uncertainty that becomes the very medium in which shamanic medicine emerges. While nothing could appear more straightforward than the relationship between the one who speaks and the subject of the sentence “I speak,” Munn writes, sampling Heraclitus, “It is not I who speak…it is the logos.” This sense of being less in dialogue with a voice than a conduit for language itself leads Munn toward the concept of “ecstatic signification.”

Language is an ecstatic activity of signification…. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. At times it is as if one were being told what to say, for the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings. Message fields of communication with the world, others, and one’s self are disclosed by the mushrooms. (Ibid., 88-89)

If these practices are “ecstatic,” they are so in the strictest of fashions. While recent usage tends to conjoin the “ecstatic” with enjoyment, its etymology suggests an ontological bifurcation—a “being beside oneself” in which the very location, if not existence, of a self is put into disarray and language takes on an unpredictable and lively agency: “words leap to mind, one after another.”3 This displacement suggests that the shaman hardly governs the speech and song she seemingly produces, but is instead astonished by its fluent arrival. Yet this surprise does not give way to panic, and the intoxication increases rather than retards fluency—if anything, Munn’s description suggests that for the Mazatec (and, perhaps, for Munn) psilocybin is a rhetorical adjunct that gives the speaker, singer, listener, eater access to “message fields of communication.” How might we make sense of this remarkable claim? What mechanisms would allow a speaker to deploy intoxication for eloquence?

Classically speaking, rhetoric has treated human discourse as a tripartite affair, a threefold mixture of ethos, an appeal based on character; logos, an appeal based on the word; and pathos, an appeal to or from the body.4 Numerous philosophers and literary critics since Jacques Derrida have decried the Western fascination with the logos, and many scholars have looked to the rich traditions of rhetoric for modalities associated with other offices of persuasion, deliberation, and transformation. But Munn’s account asks us to recall yet another forgotten rhetorical practice—a pharmacopeia of rhetorical adjuncts drawn from plant, fungus, and geological sources. In the context of the Mazatec, the deliberate and highly practiced ingestion of mushrooms serves to give the rhetor access not to individually created statements or acts of persuasion, but to “fields” of communication where rhetorical practice calls less for a “subject position” than it does a capacity to abide multiplicity—the combination and interaction, at the very least, of human and plant.

Writer, philosopher, and pioneering psychonaut Walter Benjamin noted that his experiments with hashish seemed to induce a “speaking out,” a lengthening of his sentences: “One is very much struck by how long one’s sentences are” (20). Longer sentences, of course, are not necessarily more eloquent in any ordinary sense than short ones, since scholars, readers, and listeners find that eloquence inheres in a response to any given rhetorical context. Indeed, Benjamin’s own telegraphic style in his hashish protocols becomes extraordinary, rare, and paradoxical given his own claim for long sentences in a short note. Yet Benjamin’s account does remind us that ecodelics often work on and with the etymological sense of “eloquence,” a “speaking out,” an outburst of language, a provocation to language. Benjamin reported that it was through language that material forms could be momentarily transformed: “The word ‘ginger’ is uttered and suddenly in place of the desk there is a fruit stand” (ibid., 21).

And yet if language and, indeed, the writing table, is the space where hashish begins to resonate for Benjamin, it does so only by making itself available to continual lacunae, openings and closings where, among other things, laughter occurs. For precisely as they are telegraphic, the hashish protocols of Benjamin create a series of non sequiturs: […]

Hashish, then, is an assassin of referentiality, inducing a butterfly effect in thought. In Benjamin, cannabis induces a parataxis wherein sentences less connect to each other through an explicit semantics than resonate together and summon coherence in the bardos between one statement and another. It is the silent murmur between sentences that is consistent while the sentences continually differentiate until, through repetition, an order appears: “You follow the same paths of thought as before. Only, they appear strewn with roses.”

For a comparable practice in classical rhetoric linking “intoxication” with eloquence, we return to Delphi, where the oracles made predictions persuasive even to the always skeptical Socrates, predictions whose oracular ecodelic speech was rendered through the invisible but inebriating “atmosphere” of ethylene gases—a geological rhetoric. Chemist Albert Hofmann, classicist Carl Ruck, ethnobotanist Jonathan Ott, and others have made a compelling case that at Eleusis, where Socrates, well before Bartleby, “preferred not” to go, the Greek Mysteries were delivered in the context of an ecodelic beverage, perhaps one derived from fermented grain or the ergot-laden sacrament kykeon, chemically analogous to LSD.5 These Mystery rites occasioned a very specific rhetorical practice—silence—since participants were forbidden from describing the kykeon or its effects. But silence, too, is a rhetorical practice, and one can notice that such a prohibition functions rhetorically not only to repress but also to intensify a desire to “speak out” of the silence that must come before and after Eleusis.

And Mazatec curandera Maria Sabina is explicit that indeed it is not language or even its putative absence, silence, that is an adjunct or “set and setting” for the mushrooms. Rather, the mushrooms themselves are a languaging, eloquence itself, a book that presents itself and speaks out:

At other times, God is not like a man: He is the Book. A Book that is born from the earth, a sacred Book whose birth makes the world shake. It is the Book of God that speaks to me in order for me to speak. It counsels me, it teaches me, it tells me what I have to say to men, to the sick, to life. The Book appears and I learn new words.6

Crucial to this “speaking” is the way in which Maria Sabina puts it. Densely interactive and composed of repetition, the rhetorical encounter with the mushroom is more than informative; it is pedagogical and transformative: “The Book appears and I learn new words.” The earth shakes with vitality, manifesting the mushroom orator.7 Like any good teacher, the mushrooms work with rhythms, repetitions that not only reinforce prior knowledge but induce one to take leave of it. “It counsels me, it teaches me.” The repetition of which and through which Maria Sabina speaks communicates more than knowledge, but allows for its gradual arrival, a rhythm of coming into being consonant and perhaps even resonant with the vibrations of the Earth, that scene of continual evolutionary transformation.

More than a supplement or adjunct to the rhetor, the mushroom is a transformer. Mary Barnard maps out a puppetry of flesh that entails becoming a transducer of the mushroom itself: “The mushroom-deity takes possession of the shaman’s body and speaks with the shaman’s lips. The shaman does not say whether the sick child will live or die; the mushroom says” (248).

Nor are reports of psilocybin’s effects as a rhetorical adjunct peculiar to Munn or even the Mazatec tradition. Over a span of ten years, psychologist Roland Fischer and his colleagues at Ohio State University tested the effects of psilocybin on linguistic function. Fischer articulated “the hallucination-perception continuum,” wherein hallucinations would be understood less as failed images of the real than virtual aspects of reality not verifiable in the “Euclidean” space projected by the human sensorium. Fischer, working with the literary critic Colin Martindale, located in the human metabolism of psilocybin (and its consequent rendering into psilocin) linguistic symptoms isomorphic to the epics of world literature. Psilocybin, Fischer and Martindale argued, provoked an increase in the “primary process content” of writing composed under its influence. Repetitious and yet corresponding to the very rhetorical structure of epics, psilocybin can thus be seen as a prima facie adjunct to an epic eloquence, a “speaking out” that leaves rhetorical patterns consistent with the epic journey (Martindale and Fischer).

And in this journey, it is often language itself that is exhausted—there is a rhythm in the epic structure between the prolix production of primary process content and its interruption. Sage Ramana Maharshi described mouna, a “state which transcends speech and thought,” as the state that emerges only when “silence prevails.” […]

A more recent study of high-dose psilocybin experiences among international psychonauts suggested that over 35 percent of subjects heard what they called “the logos” after consuming psilocybin mushrooms.

Based on the responses to the question of the number of times psilocybin was taken, the study examined approximately 3,427 reported psilocybin experiences (n = 118). Of the total questionnaire responses (n = 128), 35.9% (n = 46) of the participants reported having heard a voice(s) with psilocybin use, while 64.0% (n = 82) of the participants stated that they had not. (Beach) […]

Inevitably, this flow fluctuates between silence and discourse. Michaux’s experiments with psychedelics rendered the now recognizable symptoms of graphomania, silence, and rhetorical amplification. In Miserable Miracle, one of the three books Michaux wrote “with mescaline,” Michaux testifies to a strange transformation into a Sophist:

For the first time I understood from within that animal, till now so strange and false, that is called an orator. I seemed to feel how irresistible must be the propensity for eloquence in certain people. Mesc. acted in such a way that it gave me the desire to make proclamations. On what? On anything at all. (81)11

Hence, while their spectrum of effects is wide ranging and extraordinarily sensitive to initial rhetorical conditions, psychedelics are involved in an intense inclination to speak unto silence, to write and sing in a time not limited to the physical duration of the sacramental effect, and this involvement with rhetorical practice—the management of the plume, the voice, and the breath—appears to be essential to the nature of psychedelics; they are compounds whose most persistent symptoms are rhetorical. […]

Crucial to Krippner’s analysis, though, is the efficacy of psychedelics in peeling away these strata of rhetorical practice. By withering some layers of perception, others are amplified:

In one experiment (Jarvik et al. 1955), subjects ingested one hundred micrograms of LSD and demonstrated an increase in their ability to quickly cancel out words on a page of standardized material, but a decreased ability to cancel out individual letters. The drug seemed to facilitate the perceptions of meaningful language units while it interfered with the visual perception of non-meaningful ones. (Krippner, 220)

Krippner notes that the LSD functioned here as a perceptual adjunct, somehow tuning the visual perception toward increased semantic and hence rhetorical efficacy. This intensified visual perception of language no doubt yielded the familiar swelling of font most associated with psychedelic art and pioneered by the psychedelic underground press (such as the San Francisco Oracle). By amplifying the visual aspect of font—whose medium is the psychedelic message—this psychedelic innovation remixes the alphabet itself, as more information (the visual, often highly sensory swelling of font) is embedded in a given sequence of (otherwise syntactic and semantic) symbols. More information is compressed into font precisely by working with the larger-scale context of any given message rather than its content. This apprehension of larger-scale contexts for any given data may be the very signature of ecodelic experience. Krippner reports that this sensory amplification even reached dimensional thresholds, transforming texts:

Earlier, I had tasted an orange and found it the most intense, delightful taste sensation I had ever experienced. I tried reading a magazine as I was “coming down,” and felt the same sensual delight in moving my eye over the printed page as I had experienced when eating the orange. The words stood out in three dimensions. Reading had never been such a sheer delight and such a complete joy. My comprehension was excellent. I quickly grasped the intent of the author and felt that I knew exactly what meaning he had tried to convey. (221)

Rather than a cognitive modulation, then, psychedelics in Krippner’s analysis seem to affect language function through an intensification of sensory attention on and through language, “a complete joy.” One of Krippner’s reports concerned a student attempting to learn German. The student reported becoming fascinated with the language in a most sensory fashion, noting that it was the “delicacy” of the language that allowed him to, well, “make sense” of it and indulge his desire to “string” together language:

The thing that impressed me at first was the delicacy of the language.…Before long, I was catching on even to the umlauts. Things were speeding up like mad, and there were floods of associations.…Memory, of course, is a matter of association and boy was I ever linking up to things! I had no difficulty recalling words he had given me—in fact, I was eager to string them together. In a couple of hours after that, I was even reading some simple German, and it all made sense. (Ibid.)

Krippner reports that by the end of his LSD session, the student “had fallen in love with German” (222). Krippner rightly notes that this “falling” is anything but purely verbal, and hypothesizes that psychedelics are adjuncts to “non-verbal training”: “The psychedelic session as non-verbal training represents a method by which an individual can attain a higher level of linguistic maturity and sophistication” (225).

What could be the mechanism of such a “non-verbal” training? The motor-control theory of language suggests that language is bootstrapped and developed out of the nonlinguistic rhythms of the ventral premotor system, whose orderly patterns provided the substrate of differential repetition necessary to the arbitrary configuration and reconfiguration of linguistic units. Neuroscientist V. S. Ramachandran describes the discovery of “mirror neurons” by Giacomo Rizzolatti. Rizzolatti

recorded from the ventral premotor area of the frontal lobes of monkeys and found that certain cells will fire when a monkey performs a single, highly specific action with its hand: pulling, pushing, tugging, grasping, picking up and putting a peanut in the mouth, etc. Different neurons fire in response to different actions. One might be tempted to think that these are motor “command” neurons, making muscles do certain things; however, the astonishing truth is that any given mirror neuron will also fire when the monkey in question observes another monkey (or even the experimenter) performing the same action, e.g. tasting a peanut! (Ramachandran)

Here the distinction between observing and performing an action is confused, as watching a primate pick up a peanut becomes indistinguishable from picking up the peanut, at least from the perspective of an EEG. Such neurological patterns are not arbitrary, linked as they are to the isomorphic patterns that are the developmentally articulated motor control system of the body. This may explain how psychedelics can, according to Krippner, allow for the perceptual discernment of meaningful units. By releasing the attention from the cognitive self or ego, human subjects can focus their attention on the orderly structures “below” conscious awareness and distributed across their embodiment and environments. Robin Allott has been arguing for the motor theory of language evolution since the 1980s:

In the evolution of language, shapes or objects seen, sounds heard, and actions perceived or performed, generated neural motor programs which, on transfer to the vocal apparatus, produced words structurally correlated with the perceived shapes, objects, sounds and actions. (1989)

These perceived shapes, objects, sounds, and actions, of course, include the sounds, smells, visions, and actions continually transmitted by ecosystems and the human body itself, and by focusing the attention on them, we browse for patterns not yet articulated by our embodiment. Significantly, as neuroscientist Ramachandran points out, this “mirror neuron” effect seems to occur only when other living systems are involved:

When people move their hands a brain wave called the MU wave gets blocked and disappears completely. Eric Altschuller, Jamie Pineda, and I suggested at the Society for Neurosciences in 1998 that this suppression was caused by Rizzolati’s mirror neuron system. Consistent with this theory we found that such a suppression also occurs when a person watches someone else moving his hand but not if he watches a similar movement by an inanimate object.

Hence, in this view, language evolves and develops precisely by nonverbal means in interaction with other living systems, as the repetitions proper to language iterate on the basis of a prior repetition—the coordinated movements necessary to survival that are coupled to neurological patterns and linked to an animate environment. By blocking the “throttling embrace of the self,” ecodelics perhaps enable a resonance between the mind and nature not usually available to the attention. This resonance creates a continuum between words and things even as it appears to enable the differentiation between meaningful and nonmeaningful units: […]

This continuum between the abstract character of language and its motor control system is consistent with Krippner’s observation that “at the sensory level, words are encoded and decoded in highly unusual ways” (238). This differential interaction with the sensory attributes of language includes an interaction with rhythms and puns common to psychedelic experience, a capacity to become aware of a previously unobserved difference and connection. Puns are often denounced as, er, punishing a reader’s sense of taste, but in fact they set up a field of resonance and association between previously distinct terms, a nonverbal connection of words. In a highly compressed fashion, puns transmit novel information in the form of a meshed relation between terms that would otherwise remain, often for cultural or taboo reasons, radically distinct.12 This punning involves a tuning of a word toward another meaning, a “troping” or bending of language toward increased information through nonsemantic means such as rhyming. This induction of eloquence and its sensory perception becomes synesthetic as an oral utterance becomes visual: […]

Hence, if it is fair to characterize some psychedelic experiences as episodes of rhetorical augmentation, it is nonetheless necessary to understand rhetoric as an ecological practice, one which truly works with all available means of persuasion (Aristotle), human or otherwise, to increase the overall dissipation of energy in any given ecology. One “goes for broke,” attempting the hopeless task of articulating psychedelics in language until exhausting language of any possible referential meaning and becoming silent. By locating “new” information only implicit in a given segment of language and not semantically available to awareness, a pun increases the informational output of an ecosystem featuring humans. This seems to feedback, […]

Paired with an apprehension of the logos, this tuning in to ecodelia suggests that in “ego death,” many psychonauts experience a perceived awareness of what Vernadsky called the noösphere, the effects of their own consciousness on their ecosystem, about which they incessantly cry out: “Will we listen in time?”

In the introduction, I noted that the ecodelic adoption of this non-local and hence distributed perspective of the biosphere was associated with the apprehension of the cosmos as an interconnected whole, and with the language of “interpellation” I want to suggest that this sense of interconnection often appears in psychonautic testimony as a “calling out” by our evolutionary context. […]

The philosopher Louis Althusser used the language of “interpellation” to describe the function of ideology and its purchase on an individual subject to it, and he treats interpellation as precisely such a “calling out.” Rather than a vague overall system involving the repression of content or the production of illusion, ideology for Althusser functions through its ability to become an “interior” rhetorical force that is the very stuff of identity, at least any identity subject to being “hailed” by any authority it finds itself response-able to. I turn to that code commons Wikipedia for Althusser’s most memorable treatment of this concept:

Memorably, Althusser illustrates this with the concept of “hailing” or “interpellation.” He uses the example of an individual walking in a street: upon hearing a policeman shout “Hey, you there!”, the individual responds by turning round, and in this simple movement of his body he is transformed into a subject. The person being hailed recognizes himself as the subject of the hail, and knows to respond.14

This sense of “hailing” and unconscious “turning” is appropriate to the experience of ecodelic interconnection I am calling “the transhuman interpellation.” Shifting back and forth between the nonhuman perspectives of the macro and the micro, one is hailed by the tiniest of details or largest of overarching structures as reminders of the way we are always already linked to the “evolutionary heritage that bonds all living things genetically and behaviorally to the biosphere” (Roszak et al., 14). And when we find, again and again, that such an interpellation by a “teacher” or other plant entity (à la the logos) is associated not only with eloquence but also with healing,15 we perhaps aren’t surprised by a close-up view of the etymology of “healing.” The Oxford English Dictionary traces it from the Teutonic “heilen,” which links it to “helig” or “holy.” And the alluvial flow of etymology connects “hailing” and “healing” in something more than a pun:

A Com. Teut. vb.: OE. hǣlan = OFris. hêla, OS. hêlian (MDu. hêlen, heilen, Du. heelen, LG. helen), OHG. heilan (Ger. heilen), ON. heila (Sw. hela, Da. hele), Goth. hailjan, deriv. of hail-s, OTeut. *hailo-z, OE. hál HALE, WHOLE16

Hailed by the whole, one can become healed through ecodelic practice precisely because the subject turns back on who they thought they were, becoming aware of the existence of a whole, a system in which everything “really is” connected—the noösphere. Such a vision can be discouraging and even frightening to the phantasmically self-birthed ego, who feels not guilt but a horror of exocentricity. It appears impossible to many of us that anything hierarchically distinct, and larger and more complex than Homo sapiens—such as Gaia—could exist, and so we often cry out as one in the wilderness, in amazement and repetition.

Synesthesia, and Psychedelics, and Civilization! Oh My!
Were cave paintings an early language?

Choral Singing and Self-Identity
Music and Dance on the Mind
Development of Language and Music
Spoken Language: Formulaic, Musical, & Bicameral
“Beyond that, there is only awe.”
“First came the temple, then the city.”
The Spell of Inner Speech
Language and Knowledge, Parable and Gesture

The Helmsman and the Lookout

There is an apt metaphor for the relationship between what we think of as conscious willpower and the openness of perception.

The egoic consciousness is the helmsman of the boat as it heads along the river of experience, but he is positioned at the back of a boat crowded with passengers. While he controls the steering, he is driving blind and can’t see what is coming. He primarily operates on memory and mental maps, habit and heuristics. He knows the river, or at least similar rivers, most of the time, as long as he remains within the familiar. Still, his predictive abilities are limited and hence so are his steering abilities.

This is why a lookout is needed at the front of the boat. The lookout, although having no direct control, can give warnings. Stop! Don’t go that direction! The lookout has the information the helmsman needs, but the helmsman only listens to the lookout when something is wrong. The lookout is the veto power of volition, what has been called “free won’t” rather than free will.

I came across this metaphor in a Chacruna article by Martin Fortier, “Are Psychedelic Hallucinations Actually Metaphorical Perceptions?”:

“Recent neuroscientific models of the brain stress the importance of prediction within perceptual experience.3 The tenets of the predictive model of the brain can be described with a useful analogy: that of helmsmen steering collective boats on the rivers of lowland South America.

“In the Amazon, to go from one riparian town to another, people usually take a collective boat. Most boats carry between 20 to 60 passengers. These boats are steered in an intriguing way. The helmsman is positioned at the rear part of the boat. Because of this, he cannot see much of the river; what he sees in front of him are mostly the backs of passengers. Yet, the helmsman critically needs to know in minute detail where he is going, as the river is replete with shallows and floating tree trunks that must be avoided by any means. The usual way to make sure that the helmsman is able to steer the boat safely is to position a lookout at the front part of the boat and to have him warn the helmsman in case anything dangerous shows up ahead.

“The human perceptual system roughly works like these collective boats! “Predictive models” of perception strongly contrast with “constructive models,” developed in the 1970s. According to constructive models of visual perception, the retina collects very gross and sparse information about the world, and each level of the visual system elaborates on this limited primary information and makes it gradually richer and more complex.4

“Let us say that the lookout stands for primary perceptual areas—low-level areas of the brain—and the helmsman stands for more frontal areas; the high-level areas of the brain. Furthermore, the trajectory of the boat stands for conscious perception. In the case of classical constructive models of the brain, perception is taken to be a gradual enrichment of information coming from lower areas of the brain. So, to use the boat analogy, constructive models of perception have it that the trajectory of the boat—i.e., conscious perception—is determined by the lookout sending warning signals to the helmsman—i.e., by bottom-up processes.

“Predictive models conceive of perception in a very different way. The first step of determining the trajectory of the boat is the helmsman guessing, on the basis of his past experience, where the boat can safely go. So, within the predictive model, the lookout plays no constitutive role. The lookout influences the trajectory of the boat only when the helmsman’s predictions are proved wrong, and when the lookout needs to warn him.

“Two niceties must be added. First, bottom-up error signals can be variously weighted. In noisy or uncertain situations, bottom-up prediction errors have a smaller influence than usual:5 in noisy or uncertain situations, the lookout’s warnings are not taken into account by the helmsman as much as usual. Second, in the boat analogy, there is only one lookout and one helmsman. In the brain, several duos of lookouts and helmsmen are working together, and each of these duos is specialized in a specific perceptual modality.”
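Fortier’s boat analogy can be condensed into a few lines of code. The sketch below is my own illustration, not from the article: the helmsman’s running estimate is corrected by the lookout’s report through a precision-weighted prediction error, and under noisy conditions that weight is lowered, so the prediction persists and perception lags behind surprises (such as a floating log).

```python
# Minimal sketch of precision-weighted prediction-error updating,
# using the helmsman (top-down prediction) / lookout (bottom-up
# signal) analogy. All numbers are illustrative.

def perceive(prediction, observation, error_weight):
    """Combine the helmsman's prediction with the lookout's report.

    error_weight is the precision granted to the bottom-up signal:
    near 1.0 the lookout dominates; near 0.0 the helmsman's
    prediction is barely corrected.
    """
    prediction_error = observation - prediction
    return prediction + error_weight * prediction_error

# The river's true state at successive moments: a log appears at step 3.
river = [0.0, 0.0, 0.0, 2.0, 2.0, 2.0]

def run(error_weight):
    estimate = 0.0  # the helmsman's initial guess
    trajectory = []
    for actual in river:
        estimate = perceive(estimate, actual, error_weight)
        trajectory.append(round(estimate, 2))
    return trajectory

# Clear conditions: prediction errors are weighted heavily,
# so perception tracks the surprise quickly.
clear = run(error_weight=0.8)

# Noisy/uncertain conditions: the lookout's warnings are discounted,
# so the prior prediction persists and perception lags.
noisy = run(error_weight=0.2)
```

Under the clear setting the estimate converges on the log within a couple of steps; under the noisy setting it creeps toward it slowly, which is the sense in which "the lookout's warnings are not taken into account by the helmsman as much as usual."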

This usually works well. Still, the egoic consciousness can tire, especially when it attempts to play both roles. If we never relax, we are in a constant state of stress and anxiety. That is how we get stuck in loops of thought, where what the helmsman imagines about the world becomes his reality and so he stops listening as much to the lookout.

This has become ever more problematic for humanity as the boundaries of egoic consciousness have rigidified. Despite egoic self-confidence, we have limited ability to influence our situation and, as research shows, overtaxing ourselves makes us ineffective. No matter how hard it tries, the ego-self can’t force the ideology of free will onto the world. Sometimes we need to relax and allow ourselves to float along, trusting that the lookout will warn us when necessary.

There are many practices that help us enter this non-egoic state. Meditation is the simplest: we train the mind to take a passive role while remaining fully alert. It allows the lookout to relax and take in the world without all the anxiety-inducing jerking around of a helmsman who is out of control precisely because he is obsessed with control.

Another method is psychedelics, the experience of which is often referred to as a ‘trip’. Traditionally, a shaman or priest would have taken over the role of helmsman, allowing the participants to temporarily drop that role. Without someone else to play that part, a standard recommendation has been to let go and allow yourself to float along, to go with the current and trust where it takes you. The environment is important in supporting this state of mind, a way of priming the mind with set and setting.

Richard M. Doyle explained this strategy, in Darwin’s Pharmacy (p. 18):

“If psychedelics left any consistent trace on the literature of trip reports and the investigation of psychedelic states, it is that “resistance” is unlikely to be a useful tactic and that experiment is unavoidable. Leary, whose own “setting” was consistently clustered around practices of the sacred, offered this most compressed algorithm for the manipulation (“programming”) of psychedelic experience, a script asking us to experimentally give ourselves over to the turbulence: “Whenever in doubt, turn off your mind, relax, float downstream.” Such an experiment begins, but is not completed, by a serene letting go of the self under the pull of a transhuman and improbable itinerary. This letting go, of course, can be among the greatest of human achievements, the very goal of human life: Meister Eckhart, the fourteenth-century German heretic, reminds us that this gelassenheit is very old and not easily accomplished.”

For anyone who has experienced it, the transformative power of psychedelics is undeniable. Many modern people find themselves near-permanently stuck in egoic control mode, their hand ever on the steering mechanism. We don’t easily let our guard down, and we can hardly even imagine what that might feel like, until something shuts down that part of our mind-brain.

In a CBC interview with Bob McDonald, Michael Pollan explained what exactly happens and why:

“The observed effect, if you do brain imaging of people who are tripping, you find some very interesting patterns of activity in the brain – specifically something called the default mode network, which is a very kind of important hub in the brain, linking parts of the cerebral cortex to deeper, older areas having to do with memory and emotion. This network is kind of a regulator of all brain activities. One neuroscientist called it, ‘The conductor of the neural symphony,’ and it’s deactivated by psychedelics, which is very interesting because the assumption going in was that they would see lots of strange activity everywhere in the brain because there’s such fireworks in the experience, but in fact, this particular network almost goes off line.

“Now what is this network responsible for? Well, in addition to being this transportation hub for signals in the brain, it is involved with self reflection. It’s where we go to ruminate or mind wander – thinking about the past or thinking about the future – therefore worrying takes place here. Our sense of self, if it can be said to have an address, resides in this particular brain network. So this is a very interesting clue to how psychedelics affect the brain and how they create the psychological experience, the experience in the mind, that is so transformative.

“When it goes off line, parts of the brain that don’t ordinarily communicate to one another, strike up conversation. And those connections may represent what people feel during the psychedelic experience as things like synaesthesia. Synaesthesia is when one sense gets cross wired with another. And so you suddenly smell musical notes or taste things that you see.

“It may produce insights. It may produce new metaphors – literally connecting the dots in new ways. Now that I’m being speculative – I’m going a little beyond what we’ve established – we know there are new connections, we don’t know what’s happening with them, or which of them endure. But the fact is, the brain is temporarily rewired. And that rewiring – whether the new connections actually produce the useful material or just shaking up the system – ‘shaking the snow globe,’ as one of the neuroscientists put it, is what’s therapeutic. It is a reboot of the brain.

“If you think about, you know, mental illnesses such as depression, addiction, and anxiety, many of them involve these loops of thought that we can’t control and we get stuck on these stories we tell ourselves – that we can’t get through the next hour without a drink, or we’re worthless and unworthy of love. We get stuck in these stories. This temporarily dissolves those stories and gives us a chance to write new stories.”

Psychedelics give the average person the rare opportunity of full-blown negative capability, as our egoic boundaries become thinner or disappear altogether. When the chatter of the ego-mind ceases, the passengers on the boat can hear themselves and begin talking among themselves. The bundle theory of the mind suddenly becomes apparent. We might even come to the realization that the ego was never all that much in control in the first place, that consciousness is a much more limited phenomenon.

Modernity as Death Cult

“Humanity has wiped out 60% of mammals, birds, fish and reptiles since 1970, leading the world’s foremost experts to warn that the annihilation of wildlife is now an emergency that threatens civilisation. […]

“Many scientists believe the world has begun a sixth mass extinction, the first to be caused by a species – Homo sapiens. Other recent analyses have revealed that humankind has destroyed 83% of all mammals and half of plants since the dawn of civilisation and that, even if the destruction were to end now, it would take 5-7 million years for the natural world to recover. […]

“Between 1970 and 2014, the latest data available, populations fell by an average of 60%. Four years ago, the decline was 52%. The “shocking truth”, said Barrett, is that the wildlife crash is continuing unabated.”

Humanity has wiped out 60% of animal populations since 1970, report finds
by Damian Carrington

Plutocratic Mirage of Self-Made Billionaires

I am very, very lucky. I’m lucky in so many ways. I won a lot of lotteries in life. I’m not just talking about Amazon, a certain financial lottery, for sure. I have won so many lotteries.

The man born Jeffrey Preston Jorgensen is commonly known as Jeff Bezos.

He got his middle name from his maternal grandfather, Lawrence Preston Gise. The family called him ‘Pop’, while others called him ‘Preston’. Bezos spent influential years with his grandfather. It was this patriarch who got his grandson interested in all sorts of technology, including mechanical equipment but most of all computers. And this is who Bezos credits for his success, claiming to have learned his critical business skills while visiting his retired grandfather’s ranch every summer from age four to sixteen.

Keep the following in mind when hearing claims of Jeff Bezos being self-made. Who was Lawrence Preston Gise? Jeff’s grandfather was a major government official, apparently with significant wealth, influence, and connections. He was well respected.

“Some of the top brass in the Pentagon were charged with single-handedly picking top talent for ARPA, renamed DARPA in 1972 — the ‘D’ for ‘Defense’” (Christian Davenport, The Space Barons). DARPA was “the research and development arm of the Department of Defense that is credited with designing a communications network that could still function even if a nuclear attack demolished conventional lines of communication, ARPAnet, was the foundation of what would eventually become the Internet” (Expose the Deep State, Jeff Bezos). “Wilfred McNeil, the Pentagon’s comptroller, helped recruit top talent to help run the agency. One of his top choices was Lawrence Preston Gise, a stolid and principled former navy lieutenant commander” (Davenport).

Besides Gise being a founding member of DARPA, later on “in 1964, Congress appointed him manager of the Atomic Energy Commission’s Albuquerque operations office, where he supervised 26,000 employees in the AEC’s western region, including the Sandia, Los Alamos, and Lawrence Livermore laboratories” (Chip Bayers, The Inner Bezos). But he had previously worked for the AEC and even earlier in the military: “Born in Texas, Gise had served during World War II, and service records show he was assigned to the USS Neunzer, a destroyer, and then to various administrative jobs. He also served as an assistant director at the Atomic Energy Commission, starting in 1949, and was promoted to assistant director in 1955” (Davenport).

Gise was a creature of government, specifically of the military-industrial complex. He also oversaw government work done with private contractors. In various capacities, he was involved in numerous projects, some of them covert. For example, he was a key participant in secret meetings about the development of the hydrogen bomb. This guy had immense knowledge and experience of both technology and the workings of government. He was far beyond the standard bureaucrat, as his technical skill was not only theoretical but applied, his career having been focused on space technology and missile defense systems. When he helped raise his grandson, Jeff Bezos received his full attention, being tutored and moulded for a life of privilege and ambition. Considering that, it’s not that Bezos as a corporate tycoon sold his soul to gain position and power in government, for he didn’t need to. He inherited the social connections, the access to private and public funding, and the open doors into government.

Like his grandfather, Bezos always was a creature of government. His corporatist worldview, presumably, always leaned toward corporatocracy. And Bezos is likely being honest in his moral claim of corporate patriotism, self-serving as it is, since he undoubtedly doesn’t see a difference between his own interests and those of government. As a member of the ruling elite, he takes it for granted that government is there to serve and represent those born into privilege. The Gise-Bezos family is a variant of the Bush family: in both cases, wealth and power passed from one generation to the next, and in both cases there was a grandfather as the original patriarch who ensured the family’s legacy.

Gise’s influence wouldn’t have been minor. Young Bezos would have heard his grandfather talk about government programs and government research and development. This would have given him an inside view along with some insider knowledge: “Pop doted, telling stories about missile defense systems and teaching him to lay pipe and castrate bulls” (Mark Liebovich, Child Prodigy, Online Pioneer). It’s unsurprising that Bezos developed a company like Amazon that has a shipping and information system of the kind one would expect from DARPA, as Bezos might have been modeling it on what he learned from listening to his grandfather. It’s even possible that with Gise’s connections, Bezos was able to hire former government workers and advisers who had experience in developing such systems. Either way, Bezos didn’t invent Amazon out of thin air. Amazon is a late product of the Cold War mindset, a distributed system built on the internet which itself was built on DARPA’s ARPANET.

“The Pentagon has been part of the Silicon Valley story all along. Defence contracts during and after World War II turned Silicon Valley from a somnolent landscape of fruit orchards into a hub of electronics production and innovations ranging from mainframes to microprocessors to the internet. The American tech economy rests on the foundations of the military-industrial complex. […]

“The military origins of modern tech gradually faded from view, but the business of war didn’t go away. The Pentagon remained the only place with the resources and the patience to fund blue-sky research that the market wasn’t quite ready for yet. Mr. Bezos knows this history well. His beloved grandfather Lawrence Preston Gise was one of the first employees of the Pentagon’s advanced research agency, Darpa. In the 1980s and 1990s, money from Darpa helped spur breakthroughs in high-speed networking, voice recognition and internet search. Today, it is funding research in artificial intelligence and machine learning, subterranean exploration and deep-space satellites, high-performance molecules and better GPS. Whether their employees realize it or not, today’s tech giants all contain some defense-industry DNA. The result is the conflicted identity we now see in Silicon Valley.” (Margaret O’Mara, Silicon Valley Can’t Escape the Business of War)

Bezos’s elite education began early. “His parents enrolled him in a pilot program for gifted students at Houston’s River Oaks Elementary School, 20 miles from their home. […] In 1978, Exxon transferred Miguel to Miami, where the family lived in a four-bedroom house with a pool in the affluent Palmetto district of Dade County. Jeff enrolled at Palmetto High, an incubator of high achievers. He gravitated to a group of about 10 kids from his honors classes.” (Mark Liebovich, Child Prodigy, Online Pioneer). In high school, he had the opportunity to work with the first generation of personal computers. Also at the time, he participated in the Student Science Training Program at the University of Florida and a space initiative at NASA’s Huntsville, Alabama, center. Not many high schoolers back in the 1970s had such good fortune. Bezos admitted this in an interview with Henry Blodget: “I am very, very lucky. I’m lucky in so many ways. I won a lot of lotteries in life. I’m not just talking about Amazon, a certain financial lottery, for sure. I have won so many lotteries. In life, we get a lot of rolls of the dice. One of the big rolls of the dice is who are your early role models.” One suspects Gise was more than a mere role model. He likely played an active and maybe interventionist role in ensuring his grandson got the best opportunities and resources, quite likely pulling strings behind the scenes at times by making introductions to important people, and so on.

Those connections and that influence would have followed Bezos into adulthood, such as entering Princeton, “the only school he wanted to attend” (Mark Liebovich, Child Prodigy, Online Pioneer). While there, he “had first used the Internet in 1985, in a Princeton astrophysics class.” He “was a member of Phi Beta Kappa,” “was also elected to Tau Beta Pi and was the president of the Princeton chapter of the Students for the Exploration and Development of Space” (Wikipedia), picking up further social connections along the way. Graduating from Princeton then gave him numerous job opportunities with high level tech and financial businesses, his grandfather’s reputation surely having opened up some doors as well: “he was offered jobs at Intel, Bell Labs, and Andersen Consulting, among others.”

Come on, self-made man, really? “Jeff Bezos, now the richest man in modern history, began with a $US100,000 ‘investment’ in 1995 from his parents after leaving ‘a cushy gig on Wall Street’, where he served as vice-president of D. E. Shaw & Co, to pursue Amazon,” as Roqayah Chamseddine points out. The Washington Post claims the amount his parents gave him was much higher: “The company was launched in 1994 with a $300,000 investment from his parents and loans from his own bank account,” along with raising “$1 million from 20 local investors” (Mark Liebovich, Child Prodigy, Online Pioneer). The background to Bezos’s success is explained well by that Liebovich article from the Washington Post, written long before he bought the paper:

“Among the leaders of the New Economy, Bezos offers perhaps the starkest contrast with the up-from-nothing titans of the Industrial Age. Where many of his corporate forebears had firsthand experience with poverty, Bezos is the child of affluent, suburban comfort and a close-knit family. From within this world view, his competitive drive stems more from his joy in the test than from an appreciation of want or failure.

“Since super-achievers often cluster, Amazon, like many high-tech firms, tends to recruit managers from a New Economy version of the old boys’ network. Many of them graduated from the same schools and worked together at the same companies. Recent hire Jonathan Leblang graduated from Palmetto High with Bezos. Risher attended Princeton with Bezos and was a member of the same eating club, where he recalls Bezos was a ferocious player of beer pong. Risher, like several Amazon managers, came from Microsoft Corp., a company Bezos studies closely and admires for its rigorous hiring practices.”

It sure is a lot easier to bootstrap yourself into billionaire status when you’re born on third base with a silver spoon securely shoved up your ass. That isn’t to say he didn’t work hard. It’s clear that he learned his work ethic from his grandfather. But he also inherited immense privilege, with opportunities and resources freely and abundantly given at every point in his life. When he almost bankrupted Amazon in its early years, while making no profit, he still was able to secure billions of dollars in bank loans to pull his business back from the brink. It partly reminds one of Donald Trump’s business strategy of losing lots of money while always having access to more money. And like Trump, Bezos demonstrates how money leads to more money (as a side note, the explanation for their mutual disgust is that they are two of the most powerful plutocrats vying for American power). It would be nice to be born into such financial security and comforting luxury. No doubt that breeds a confidence of expectation and entitlement.

Bezos became the richest man in the world by running an oligopolistic transnational corporation that dominates and drives others out of business by needing to make no profit, by controlling the largest online market platform that most small-to-medium-sized competitors are forced to use, by years of evading payment of taxes, by being directly tied into the biggest spending parts of big government, and by having many of his employees on welfare: “If a money-losing government-backed organization is providing a service at a price below private competitors, this is the very definition of a subsidy” (James Freeman, Trump, Bezos and the Amazon Subsidy). Amazon isn’t being operated like a normal business and, one might speculate, it isn’t being operated as a business at all. Even after recently losing upwards of $14 billion, a CNBC article stated that, “Still, the company expects to see growth for its high-margin cloud and advertising businesses.” That is to say, it will continue making plenty of money selling its customers’ data and doing business with the CIA, NSA, and DOD. It’s one of the wealthiest and most powerful organizations in the world serving powerful interests, not all of them necessarily out in the open.

As some have indicated, Amazon is more akin to the business model of social media giants such as Facebook. Amazon has never made much money selling products to customers; it makes money instead by selling the information gathered on those customers, initially sold to advertisers and other interested parties, though one suspects that other buyers are now seeking access, assuming those others didn’t always have access. It would be easy for a single person, maybe Bezos himself, to put a back door into Amazon’s computer system. The government, as we know from leaks, already has back doors into diverse technology and has made use of that ability. As Amazon and government become further entangled, the results are so predictable as to be inevitable.

This is what was so worrisome about Bezos buying the Washington Post. And those worries were confirmed when that newspaper began using unnamed government sources, often to defend government views and promote government agendas, in particular that of the CIA. There is a long history behind such dealings, though:

“Amazon‘s decision is troubling. But would it suggest a real shift? Former Post publisher Katharine Graham gave a speech in 1988 at the CIA headquarters, where she reportedly said this: “We live in a dirty and dangerous world. There are some things that the general public does not need to know and shouldn’t. I believe democracy flourishes when the government can take legitimate steps to keep its secrets and when the press can decide whether to print what it knows.” ” (Peter Hart, Amazon, WikiLeaks, the Washington Post and the CIA)

The days are long gone when the Washington Post challenged corrupt power in publishing the Pentagon Papers. For many decades, it has been a propaganda rag. But that isn’t to downplay how Bezos has brought this corruption to a whole new level. He kicked Wikileaks off of Amazon servers apparently at the behest of the CIA or else to suck up to his prospective business partners in government. If the United States wasn’t yet quite fully a corporatocracy, it certainly is now. Amazon has essentially become an arm of the government with the WaPo as a committed propaganda operation.

By the way, the Washington Post doesn’t inform its readers of its connection to the CIA. No one bothered (or maybe was allowed) to mention this CIA background in the Wikipedia articles for Washington Post and Amazon. It’s not only the CIA. Bezos is doing business with numerous sectors of the government, from the NSA to the Department of Defense. These often no-bid contracts could easily add up to hundreds of billions of dollars over the coming decade or so. The crony deal requiring the government to use Amazon as its primary source of online purchases alone will be, according to Vanity Fair, “some $53 billion every year.” That isn’t all profit, of course, as there are operating costs as well. Still, those are large sums of taxpayer money changing hands.

Bezos declared that he wouldn’t be ‘intimidated’ by critics. I take that as his saying he won’t be intimidated by free market values, democratic demands, public outrage, moral norms, and basic human decency. He has shown his opposition to even the most basic of democratic institutions, such as public schools. Bezos is a neoliberal cast from the mould of the monopolistic Robber Barons and, as in the Gilded Age, his big biz corporatism is tightly interwoven with big government corporatocracy, specifically the neocon military-industrial complex. It’s ironic that President Dwight Eisenhower, who warned of the military-industrial complex, was the same man who helped build it. DARPA came out of Eisenhower’s administration and, as military men, Eisenhower and Gise probably knew each other.

For all of Eisenhower’s warnings, the military-industrial complex is now far worse. Earlier in the last century, there was no equivalent to the transnational tech giants, although tech giants working with authoritarian governments (Amazon with Saudi Arabia, Google with China, etc.) is perfectly in line with old school fascism (from the banana republics to the Bush family making their wealth from Nazis). It’s new and improved, yet more of the same. Bezos likes to feel superior in despising mediocrity, considering himself a genius and flattering himself by surrounding himself with a supposed meritocratic intelligentsia. But what occurs to me is how mediocre his vision of society and humanity is. Heartless hyper-competitive tycoons like him have been a dime a dozen for centuries, and they always lead to the same sad results.

Ask yourself this. If Jeff Bezos were a government agent or spymaster used to recruit agents (as has been the case with professors at the kind of Ivy League schools that Bezos attended)… If the Amazon corporation were a front group for a United States intelligence agency… And, to take this further, if the CIA or NSA had become an independent and autonomous rogue transnational governing body with the United States merely acting as a headquarters or primary client state… If any of this or some similar nefarious conspiracy were true, how would you know? Simply put, you wouldn’t know. Everything would appear exactly the same. That is how all successful conspiracies operate, and most conspiracies are never discovered, especially not in the short term. A conspiracy wouldn’t require many people to even know of it; a single key figure, such as Bezos himself, might be enough. Certainly, considering his company’s deal with the CIA, there is much about that deal that even his top management doesn’t know.

This is the very reason that the American founders, out of terror of imperial and corporate power, ensured not only division of power within government but also division of power between government and business. They took seriously how easily conspiracies happen when cronyism and corruption are allowed free rein under oligarchic rule. This is also why many of the founders feared standing armies, as they no doubt would have feared even more the modern intelligence agency that, in acting in secret, is entirely lacking in transparency and accountability. They never intended that either the government or corporations would gain so much power. And about corporations in particular, they went to immense efforts to curtail their role in society, never having conflated a corporation with a private business, much less with legal personhood — a government corporate charter required an organization to serve the public good toward a narrow and short-term purpose (building a bridge, establishing a hospital, etc.) that lasted no longer than a single generation and that disallowed any involvement in politics.

A mega-corporation such as Amazon betrays everything the American Revolution was fought for, everything this country was founded upon. Well, not quite everything. Slavery or indentured servitude would fit well into neo-feudal neoliberalism. In fact, these transnationals are often dependent on quasi-slavery work conditions in places like China, where employees are locked in guarded factories so that they can’t escape or kill themselves. This brave new world is what Jeff Bezos, more than anyone else, is bringing into reality. It’s the new American Dream. If you want to get a sense of what an authoritarian America would look like, all you have to do is watch the Amazon-produced show, The Man in the High Castle, which portrays an alternative history where the Nazis won. Considering the Nazi-funded Bush family helped so many Nazi war criminals into the country, where for decades they worked for the government, who is to say that the Nazis, in a broader sense, didn’t win? The tech giant tycoons are just a new generation, a friendlier face of an old brutal force.

The rise of fascism once tore apart the world and it will do the same again.

* * *

Jeff Bezos
by Expose the Deep State

Bezos’s maternal grandfather, Lawrence Preston Gise was one of the founding members the Defense Advanced Research Projects Agency (DARPA), the research and development arm of the Department of Defense that is credited with designing a communications network that could still function even if a nuclear attack demolished conventional lines of communication, ARPAnet, was the foundation of what would eventually become the Internet. Gise later became regional director of the U.S. Atomic Energy Commission (AEC) in Albuquerque. As a child Jeff Bezos spent summers working at his grandfather’s ranch in Texas, which is where possible CIA Kid accusations come from.

The Inner Bezos
by Chip Bayers

Lawrence Preston “Pop” Gise had held jobs that a young boy couldn’t help but find cool. Gise worked on space technology and missile defense systems at Darpa in the late 1950s; in 1964, Congress appointed him manager of the Atomic Energy Commission’s Albuquerque operations office, where he supervised 26,000 employees in the AEC’s western region, including the Sandia, Los Alamos, and Lawrence Livermore laboratories. He retired to his southwest Texas spread in 1968, and he doted on Jeff from the time his grandson was an infant. “Mr. Gise was a towering figure in Jeff’s life,” says Weinstein.

Jeff Bezos
by Everipedia

Bezos’s maternal grandfather was Lawrence Preston Gise, a regional director of the U.S. Atomic Energy Commission (AEC) in Albuquerque. Before joining the AEC, Gise had worked for the Defense Advanced Research Projects Agency (DARPA), the research and development arm of the Department of Defense that was created in 1958 as the first response by the US government to the Russian launching of Sputnik I, the first artificial Earth satellite in 1957. Intended to be the counterbalance to military thinking in research and development, DARPA was formed, according to its official mission statement, to assure that the US maintains a lead in applying technology for military capabilities and to prevent other technological surprises from her adversaries. In 1970, DARPA’s engineers created a model for a communications network for the military that could still function even if a nuclear attack demolished conventional lines of communication: ARPAnet, was the foundation of what would eventually become the Internet. Gise retired early to the ranch, where Bezos spent many summers as a youth, working with him.

The Space Barons
by Christian Davenport

Eisenhower’s answer to the reporter’s pointed question was, in essence, that the country was working on it. The real response to the Soviets would come a few months later, when during his 1958 State of the Union address, he talked about the creation of a new agency within the Defense Department that would have “single control in some of our most advanced development projects.” This agency would be in charge of “anti-missile and satellite technology” at a time when “some of the important new weapons which technology has produced do not fit into any existing service pattern.”

The Soviets’ launch of Sputnik opened a new frontier — space — one that “creates new difficulties, reminiscent of those attending the advent of the airplane a half century ago,” he said.

The new organization would be called the Advanced Research Projects Agency (ARPA). Born from what the secretive agency now calls the “traumatic experience of technological surprise,” ARPA would be a sort of elite special force within the Pentagon made of its best and brightest scientists and engineers. But because it would transcend the traditional services — the army, navy, air force — many in the defense establishment looked askance at it.

Eisenhower didn’t care. To keep up with the Soviets, the nation needed to move past “harmful service rivalries,” he said.

Some of the top brass in the Pentagon were charged with single-handedly picking top talent for ARPA, renamed DARPA in 1972 — the “D” for “Defense.” Successful candidates would have to not only be smart and efficient, but they’d also have to be morally strong and confident, able to stand up to generals and admirals that might resent their very presence and consider them outsiders.

They were encouraged to push boundaries, and create new, futuristic technologies that aimed at keeping the nation several steps ahead.

“In the 1960s you could do really any damn thing you wanted, as long as it wasn’t against the law or immoral,” Charles Herzfeld, who directed ARPA from 1965 to 1967, told the Los Angeles Times.

Wilfred McNeil, the Pentagon’s comptroller, helped recruit top talent to help run the agency. One of his top choices was Lawrence Preston Gise, a stolid and principled former navy lieutenant commander. Born in Texas, Gise had served during World War II, and service records show he was assigned to the USS Neunzer, a destroyer, and then to various administrative jobs. He also served as an assistant director at the Atomic Energy Commission, starting in 1949, and was promoted to assistant director in 1955.

By the height of the Cold War, Gise found himself in the middle of an agency that was developing the hydrogen bomb. As a young employee, he had participated in a secret meeting in 1950 to discuss the development of the bomb with some of the agency’s top officials, including then-chairman Gordon Dean.

Gise was intrigued by the possibilities of ARPA, and what it represented at the dawn of the Space Age. But he was also aware that political pressure was mounting against its formation. With a family to support, he hedged his bets, making sure he would have a landing spot, just in case this experimental agency didn’t work out.

“So the agency was controversial even before it was formed,” Gise said in a 1975 history of ARPA. “My deal with McNeil was I would come over and handle the administrative side of the business with the assurance that if the agency went up in blue smoke that he would absorb me in his immediate office, and he had a job set up for that purpose. But it was that tenuous back in those days.”

Gise was well respected by the agency’s director, Roy Johnson, who had left a high-paying job as an executive at General Electric for the post at ARPA. His goal was to ensure the country caught up and passed the Soviets, focusing much of his energy on space.

“Johnson believed that he had personally been given unlimited authority by the Secretary to produce results,” according to the ARPA history. “He really thought that he was supposed to be the czar of the space program…. Johnson perceived that ARPA’s job was to put up satellites. The space program became his principal interest.”

After three years at ARPA, Gise was lured back to the Atomic Energy Commission, which offered him a job in top management. But he continued to work alongside the agency, collaborating on an endeavor known as the Vela Project, which was designed to detect nuclear explosions from space through a high-altitude satellite system. In a message to his colleagues, Gise reported that “ARPA is implementing on a very urgent basis a program to establish its capability for detection of Argus effects” — an apparent reference to Operation Argus, three high-altitude nuclear test explosions over the South Atlantic Ocean in 1958.

Gise would continue to serve at the Atomic Energy Commission until 1968, when he wanted to close a factory that politicians wanted to keep open. The politicians prevailed, and Gise retired to his ranch in South Texas.

He was young, just fifty-three years old. But he was looking forward to life on the ranch. Plus, he had a young grandson to tend to, a remarkable little boy with big ears and a wide smile, who shared his middle name:

Jeffrey Preston Bezos

Big Tech firms march to the beat of Pentagon, CIA despite dissension
by Tim Johnson

Silicon Valley’s corrupt nexus: War, censorship and inequality
by Andre Damon

Part One: Amazon cashes in on war crimes and mass surveillance
by Evan Blake

Part Two: Amazon, war propaganda, and the suppression of free speech
by Evan Blake

Amazon providing facial recognition technology to police agencies for mass surveillance
by Will Morrow

Amazon Pushes ICE to Buy Its Face Recognition Surveillance Tech
by Jake Laperruque & Andrea Peterson

Jeff Bezos is Using The Washington Post to Protect the CIA
by Josh Gay

The Dark Side of Amazon
by Eric Peters

Jeff Bezos, Amazon, Washington Post and the CIA
by Joseph Farah

Amazon’s frightening CIA partnership: Capitalism, corporations and our massive new surveillance state
by Charles Davis

How The Washington Post’s New Owner Aided the CIA, Blocked WikiLeaks & Decimated the Book Industry
by Democracy Now

How Jeff Bezos’s Washington Post Became the US Military-Industrial Complex’s Chief Propagandist
by Eric Zuesse

The Washington Post, Amazon, and the Intelligence Community
by Cliff Kincaid

Jeff Bezos Is Doing Huge Business with the CIA, While Keeping His Washington Post Readers in the Dark
by Norman Solomon

Amazon’s Marriage to the CIA
by Norman Solomon

Why Amazon’s Collaboration With the CIA Is So Ominous — and Vulnerable
by Norman Solomon

Is Orwell’s Big Brother Here? Bezos & Amazon Team up With Defense, CIA & ICE
by Yves Smith

“Everybody immediately knew that it was for Amazon”: Has Bezos Become More Powerful In D.C. Than Trump?
by May Jeong

Government Ethics Watchdogs Fear Amazon’s Web of Influence May have Tainted Pentagon’s $10 Billion JEDI Cloud Deal
by Andrew Kerr

America Should Send Amazon Packing
by Mo Lotman

Death By Incuriosity

Whether or not curiosity killed the cat, it is the lack of curiosity that killed the human. And sadly, lack of curiosity is common among humans, if not cats.

There are two people I’ve known my entire life. They are highly intelligent, well-educated professionals, both having spent their careers as authority figures and both enjoying positions of respect where others look up to them. One worked in healthcare and the other in higher education. They are people one would expect to be curious, and both have above-average intellectual capacity. They are accomplished men who know how to get things done.

I pick these examples because each has had health issues. It’s actually the one in healthcare who has shown the least curiosity about his own health. I suspect this is for the very reason he has been an authority figure in healthcare and so has acted in the role of defending establishment views. And nothing kills curiosity quicker than conventional thought.

This guy didn’t only lack curiosity in his own field of expertise, though. In general, he wasn’t one who sought out learning for its own sake. He had no habit of intellectual inquiry, and so no habit of intellectual curiosity to fall back on when he had a health scare. The bad news he received was a diagnosis of a major autoimmune disorder. I assume he took this as a death sentence, and most doctors treat it that way, as no medication has been shown to produce significant improvement. But recent research has identified dietary, nutritional, and lifestyle changes that have reversed the symptoms even in people with somewhat advanced stages of this disease.

Once diagnosed, he was already beginning to show symptoms. He had a window to respond during which he maintained his faculties enough that he might have been able to take action to seek a remedy or to slow the decline. But the window turned out to be brief, and the choice he made was to do nothing, with some combination of denial and fatalism. Inevitably, this attitude became a self-fulfilling prophecy. It was not the diagnosis but his lack of curiosity that was the death sentence. His mind is quickly disintegrating and he won’t likely live long.

The second guy has a less serious diagnosis. He has a fairly common disease and has known about it for a couple of decades. It is one of those conditions more easily managed if one takes a proactive attitude. But that would require curiosity to learn about the condition and about what others have successfully done in seeking healing. The body will eliminate damage and regrow cells when the underlying problems are resolved or lessened and optimal nutrition is ensured, not that one is likely to learn any of this from a standard doctor.

Like the healthcare figure, this educational figure’s first response was not curiosity. In fact, he spent the past couple of decades not even bothering to ask his doctor what exactly his condition was. He didn’t know how bad it was, didn’t know whether it was worsening or remaining stable. He apparently didn’t want to know. He has a bit more curiosity than most people, although it tends to be about narrow issues, none of them health-related. The condition that risks the length and quality of his life, however, elicited no curiosity.

I had more opportunity to speak to him than to the other guy. In the past few months, we’ve had an ongoing discussion about health. I recently was able to get him to read about diet and health. But the real motivation was that his doctor told him to lose weight. Also, he was beginning to see serious symptoms of aging, from constant fatigue to memory loss. It was only after decades of major damage to his body that he finally mustered up some basic curiosity and still he is resistant. It’s easier to thoughtlessly continue what one has always done.

I sympathize and I don’t. Not much in our society encourages curiosity. I get that. It not only takes effort to learn but it also takes risk. Learning can require challenging what you and many others have assumed to be true. In this case, it might even mean challenging your doctor and taking responsibility for your own healthcare decisions. Maybe because these two are authority figures, it is their learned response to defer to authority and any dominant views that stand in for authority. That is the same for others as well. We are all trained from a young age to defer to authority (even if you were raised by wolves, you received such training, as it is a common feature of all social animals).

So, yes, I understand it is difficult and uncomfortable. Some people would rather physically die than allow their sense of identity to die. And for many, their identities are tied to a rigid way of being and belonging. Curiosity might lead one to question not only the ideological beliefs and biases of others but, more importantly, one’s own. It could mean changing one’s identity, and that is the greatest threat of all, something that affects me as much as anyone (but in my case, I’m psychologically attached to curiosity and so my identity might be a bit more fluid than most; the looseness of ego boundaries does come at a cost, as is attested by the psychiatric literature).

Yet, in the end, it is hard for me to grasp this passive attitude. I’ve always been questioning and so I can’t easily imagine being without this tendency (I have many weaknesses, limitations, and failures; but a lack of curiosity is not one of them). I do know what it is like to be ignorant and to feel lost in having nowhere to turn for guidance. In the past, knowledge was much harder to come by. When I was diagnosed with depression decades ago, after my own life-threatening situation (i.e., a suicide attempt), I was offered no resources to understand my condition. The reason is that, at the time, doctors were as ignorant as anyone else when it came to depression and so much else. High-quality information used to be a scarce and unreliable resource.

It has turned out that much of past medical knowledge was wrong, only partly correct, or misinterpreted. The power of the internet and social media has forced open professional and public debate. We suddenly find ourselves with an overabundance of knowledge. The lack of curiosity is the main thing now holding us back, as individuals and as a society. Still, that downplays the powerful psychological and social forces that keep people ignorant and incurious. Older generations in particular didn’t grow up with easy access to knowledge, and so, now reaching old age, they don’t have a lifetime of such mental habits in place.

That is part of the difference. I’m young enough that the emerging forms of knowledge and media had a major impact on my developing brain and my developing identity. On the other hand, there is obviously more going on than mere generational differences. I look to my own generation and don’t see much more curiosity. I know people in my generation who have major health issues and their children have major health issues. Do most of these people respond with curiosity? No. Instead, I observe mostly apathy and indifference. There is something about our society that breeds helplessness, and no doubt there are plenty of reasons to be found for giving up in frustration.

That is something I do empathize with. There is nothing like decades of depression to form an intimacy with feelings of being powerless and hopeless. Nonetheless, I spent the decades of my depression constantly looking for answers, driven to question and doubt everything. I should emphasize the point that answers didn’t come easily, as it took me decades of research and self-experimentation to find what worked for me in dealing with my depression; curiosity of this variety is far from idle for it can be an immense commitment and investment.

My longing to understand never abandoned me, as somehow it was a habit I learned at a young age. That leaves me uncertain about why I learned that habit of open-minded seeking while most others don’t. It’s not as if I can take credit for my state of curiosity, as it is simply the way I’ve always been (maybe in the way an athlete, for random reasons of genetics and epigenetics, might be born with greater lung capacity and endurance). Even in my earliest memories, I was curious about the world. It is a defining feature of my identity, not an achievement I came to later in life.

Because it is so integral to my identity, I’m challenged to imagine those who go through life without feeling much inclination to question and doubt (as happier people may be challenged to imagine my sometimes paralyzing funks of depression). It is even further beyond my comprehension that, for many, not even the threat of death can inspire the most basic curiosity to counter that threat. How can death be more desirable than knowledge? That question implies that it is knowledge that is the greater threat. Put this on the level of national and global society and it becomes an existential threat. In facing mass extinction, ecosystem collapse, superstorms, and refugee crises, most humans are not motivated to understand what we face, much less to do anything about it.

We don’t have habits of curiosity. It isn’t our first response, not for most of us. And so we have no culture of curiosity, no resources of curiosity to turn to when times are dire. More than a lack of curiosity alone, it is a lack of imagination which is a constraint of identity. We can’t learn anything new without becoming something different. Curiosity is one of the most radical of acts. It is also the simplest of acts, requiring only a moment of wonder or probing uncertainty. But radical or simple, repeated often enough, it becomes a habit that might one day save your life.

Curiosity as an impulse is only one small part. The first step is admitting your ignorance. And following that, what is required is the willingness to remain in ignorance for a while, not grasping too quickly at the next thing that comes along, no matter who offers it with certainty or authority. You might remain in ignorance for longer than you’d prefer. And curiosity alone won’t necessarily save you. But incuriosity for certain will doom you.

* * *

For anyone who thinks I’m being mean-spirited and overly critical, I’d note that I’m an equal opportunity critic. I’ve written posts — some of my most popular posts, in fact — that have dissected the problems of the curious mind, specifically liberal-mindedness such as is seen with the trait of openness. The downsides to this mindset are many, as is true of any mindset taken in its fullest and most extreme form. For example, those who measure high on the openness trait have a greater risk of addiction, a far from minor detriment. Curiosity and related attributes don’t always lead to beneficial results and happy endings. But from my perspective, it is better than the alternative, especially in these challenging times.

My argument, of course, is context-dependent. If you are living in an authoritarian state or locked away in prison, curiosity might not do you much good and instead might shorten your lifespan. So, assess your personal situation and act accordingly. If it doesn’t apply, please feel free to ignore my advocating for curiosity. My assumption that my audience shares with me a basic level of life conditions isn’t always justified. I apologize to anyone who finds themselves stuck in a situation where curiosity is dangerous or simply not beneficial. You have my sympathy, and I hope things get better for you so that one day you might have the luxury of contemplating the pros and cons of curiosity.

I realize that life is not fair and that we don’t get to choose the world we are born into. If life were fair, a piece like this would be unnecessary and meaningless. In a society where we didn’t constantly have to worry about harmful advice, including from doctors, a society where health was the norm, curiosity might not matter much in terms of life expectancy. The average hunter-gatherer no doubt lacks curiosity about their health, but they also lack the consequences of modern society’s unhealthy environment, lifestyle, and diet. In some societies, how to live a healthy life is common knowledge that individuals pick up in childhood.

It would be wonderful to live in such a society. But speaking for myself, that isn’t the case, and hence I argue for the necessity of curiosity as a survival tool. Curiosity is only a major benefit where dangerous ignorance rules the social order and, until things change in this society, that major benefit will continue. This isn’t only about allegations of psychological weakness and moral failure. This is about the fate of our civilization, as we face existential crises. The body count of incuriosity might eventually run into the billions. We are long past the point of making excuses, specifically those of us living in relative privilege here in the West.

* * *

To make this concrete, let me give an example beyond anecdotal evidence. It is an example related to healthcare and deference to medical authority.

The United States is experiencing an opioid crisis. There are many reasons for this. Worsening inequality, economic hardship, and social stress are known contributors. We live in a shitty society that is highly abnormal, which is to say we didn’t evolve to act in healthy ways under unhealthy conditions. But there is also the fact that opioids have been overprescribed because of the huge profits to be had and because painkillers fit conventional medicine’s prioritizing of symptom treatment.

Ignoring why doctors prescribe them, why do people take them? Everyone knows they are highly addictive and, in a significant number of cases, can destroy lives. Why take that risk unless absolutely necessary? It goes beyond addiction, as there are numerous other potential side effects. Yet, in discussing alternatives, Dr. Joseph Mercola points to an NPR piece (Jessica Boddy, POLL: More People Are Taking Opioids, Even As Their Concerns Rise):

“Indeed, the Centers for Disease Control and Prevention note that as many as 1 in 4 people who use opioid painkillers get addicted to them. But despite the drugs’ reputation for addiction, less than a third of people (29 percent) said they questioned or refused their doctor’s prescription for opioids. That hasn’t changed much since 2014 (28 percent) or 2011 (31 percent).

“Dr. Leana Wen, an emergency physician and commissioner of health for the City of Baltimore, says that’s the problem. She says patients should more readily voice their concerns about getting a prescription for narcotics to make sure if it really is the best option. […]

“‘Ask why,’ Wen says. ‘Often, other alternatives like not taking anything at all, taking an ibuprofen or Tylenol, physical therapy, or something else can be effective. Asking “why” is something every patient and provider should do.’”

* * *

“Knowing is half the battle. G.I. Joe!” That was great wisdom I learned as a child.

The End of History is a Beginning

Francis Fukuyama’s ideological change, from neocon to neoliberal, signaled among the intellectual class a similar but dissimilar change that was happening in the broader population. The two are parallel tracks down which history like a train came barreling and rumbling, the end not in sight.

The difference between them is that the larger shift was ignored, until Donald Trump revealed the charade to be a charade, as it always was. It shouldn’t have come as a surprise, this populist moment. A new mood has been in the air that resonates with an old mood that some thought was lost in the past, the supposed end of history. It has been developing for a long while now. And when reform is denied, much worse comes along.

On that unhappy note, there is a reason why Trump used old school rhetoric of progressivism and fascism (with the corporatism underlying both ideologies). Just as there is a reason Steve Bannon, while calling himself a Leninist, gave voice to his hope that the present would be as exciting as the 1930s. Back in the early aughts, Fukuyama gave a warning about the dark turn of events, imperialistic ambition turned to hubris. No doubt he hoped to prevent the worst. But not many in the ruling class cared to listen. So here we are.

Whatever you think of him and his views, you have to give Fukuyama credit for the simple capacity of changing his mind and, to some extent, admitting he was wrong. He is a technocratic elitist with anti-populist animosity and paternalistic aspirations. But at the very least his motivations are sincere. One journalist, Andrew O’Hehir, described him this way:

“He even renounced the neoconservative movement after the Iraq war turned into an unmitigated disaster — although he had initially been among its biggest intellectual cheerleaders — and morphed into something like a middle-road Obama-Clinton Democrat. Today we might call him a neoliberal, meaning that not as leftist hate speech but an accurate descriptor.”

Not exactly a compliment. Many neocons and former neocons, when faced with the changes in the Republican Party, found the Clinton Democrats more attractive. For most of them, this conversion only happened with Trump’s campaign. Fukuyama stands out for being one of the early trendsetters on the right in turning against Cold War neoconservatism before it was popular to do so (although did Fukuyama really change, or did he simply turn to a softer form of neoconservatism?).

For good or ill, the Clinton Democrats, in the mainstream mind, now stand for the sane center, the moderate middle. To those like Fukuyama fearing a populist uprising, Trump marks the far right and Sanders the far left. That leaves the battleground between them to a milquetoast DNC establishment, holding onto power by its loosening fingertips. Fukuyama doesn’t necessarily offer us much in the way of grand insight or practical use (here is a harsher critique). It’s still interesting to hear someone like him make such an about-face, though — if only in political rhetoric and not in fundamental principles. And for whatever it’s worth, he so far has been right about Trump’s weakness as a strongman.

It’s also appreciated that those like Francis Fukuyama and Charles Murray bring attention to the dangers of inequality and the failures of capitalism, no matter that I oppose the ideological bent of their respective conclusions. So, even as they disagree with populism as a response, they, like Teddy Roosevelt, do take seriously the gut-level assessment of what is being responded to. It’s all the more interesting that these views come from respectable figures who once represented the political right — much more stimulating rhetoric than anything coming out of the professional liberal class.

* * *

Donald Trump and the return of class: an interview with Francis Fukuyama

“What is happening in the politics of the US particularly, but also in other countries, is that identity in a form of nationality or ethnicity or race has become a proxy for class.”

Francis Fukuyama interview: “Socialism ought to come back”

Fukuyama, who studied political philosophy under Allan Bloom, the author of The Closing of the American Mind, at Cornell University, initially identified with the neoconservative movement: he was mentored by Paul Wolfowitz while a government official during the Reagan-Bush years. But by late 2003, Fukuyama had recanted his support for the Iraq war, which he now regards as a defining error alongside financial deregulation and the euro’s inept creation. “These are all elite-driven policies that turned out to be pretty disastrous, there’s some reason for ordinary people to be upset.”

The End of History was a rebuke to Marxists who regarded communism as humanity’s final ideological stage. How, I asked Fukuyama, did he view the resurgence of the socialist left in the UK and the US? “It all depends on what you mean by socialism. Ownership of the means of production – except in areas where it’s clearly called for, like public utilities – I don’t think that’s going to work.

“If you mean redistributive programmes that try to redress this big imbalance in both incomes and wealth that has emerged then, yes, I think not only can it come back, it ought to come back. This extended period, which started with Reagan and Thatcher, in which a certain set of ideas about the benefits of unregulated markets took hold, in many ways it’s had a disastrous effect.

“In social equality, it’s led to a weakening of labour unions, of the bargaining power of ordinary workers, the rise of an oligarchic class almost everywhere that then exerts undue political power. In terms of the role of finance, if there’s anything we learned from the financial crisis it’s that you’ve got to regulate the sector like hell because they’ll make everyone else pay. That whole ideology became very deeply embedded within the Eurozone, the austerity that Germany imposed on southern Europe has been disastrous.”

Fukuyama added, to my surprise: “At this juncture, it seems to me that certain things Karl Marx said are turning out to be true. He talked about the crisis of overproduction… that workers would be impoverished and there would be insufficient demand.”

Was Francis Fukuyama the first man to see Trump coming? – Paul Sagar | Aeon Essays

Ancient Atherosclerosis?

In reading about health, mostly about diet and nutrition, I regularly come across studies that are either poorly designed or poorly interpreted. The conclusions don’t always follow from the data or there are so many confounders that other conclusions can’t be discounted. Then the data gets used by dietary ideologues.

There is a major reason I appreciate the dietary debate among proponents of traditional, ancestral, paleo, low-carb, ketogenic, and other related views (anti-inflammatory diets, autoimmune diets, etc., such as the Wahls Protocol for multiple sclerosis and the Bredesen Protocol for Alzheimer’s). This area of alternative debate leans heavily on questioning conventional certainties by digging deep into the available evidence. These diets seem to attract people capable of changing their minds — or maybe it is simply that many people who eventually come to these unconventional views do so after having already tried numerous other diets.

For example, Dr. Terry Wahls is a clinical professor of Internal Medicine, Epidemiology, and Neurology at the University of Iowa while also being Associate Chief of Staff at a Veterans Affairs hospital. She was as conventional as doctors come until she developed multiple sclerosis, began researching and experimenting, and eventually became a practitioner of functional medicine. Also, she went from being a hardcore vegetarian following mainstream dietary advice (avoided saturated fats, ate whole grains and legumes, etc.) to embracing an essentially nutrient-dense paleo diet; her neurologist at the Cleveland Clinic referred her to Dr. Loren Cordain’s paleo research at Colorado State University. Since that time, she has done medical research and, recently having procured funding, she is in the process of doing a study in order to further test her diet.

Her experimental attitude, both personal and scientific, is common among those interested in these kinds of diets and in functional medicine. This experimental attitude is necessary when one steps outside of conventional wisdom, something Dr. Wahls felt she had to do to save her own life — the kind of motivating health crisis that leads many people to try a paleo, keto, or similar diet after trying all else (these involve protocols to deal with serious illnesses, such as ketosis being medically used for treatment of epileptic seizures). Contradicting the professional opinion of respected authorities (e.g., the American Heart Association), a diet like this tends to be an option of last resort for most people, something they come to after much failure and worsening health. That breeds a certain mentality.

On the other hand, it should be unsurprising that people raised on mainstream views and who hold onto those views into adulthood tend not to be people willing to entertain alternative views, no matter what the evidence indicates. This includes those working in the medical field. Some ask, why are doctors so stupid? As Dr. Michael Eades explains, it’s not that they’re stupid but that many of them are ignorant; to put it more nicely, they’re ill-informed. They simply don’t know because, like so many others, they are repeating what they’ve been told by other authority figures. The reason people stick to the known, even when it is wrong, is because it is familiar and so it feels safe (and because of liability, healthcare workers and health insurance companies prefer what is perceived as safe). Doctors, as with everyone else, are dependent on heuristics to deal with a complex world. And doctors, more than most people, are too busy to explore the large amounts of data out there, much less analyze it carefully for themselves.

This may relate to why most doctors tend not to make the best researchers, not to dismiss those attempting to do quality research. For that reason, you might think scientific researchers who aren’t doctors would be different than doctors. But that obviously isn’t always the case because, if it were, Ancel Keys’ low-quality research wouldn’t have dominated professional dietary advice for more than half a century. Keys wasn’t a medical professional or even trained in nutrition; rather, he was educated in a wide variety of other fields (economics, political science, zoology, oceanography, biology, and physiology), with his earliest research done on the physiology of fish.

I came across yet another example of this, although less extreme than that of Keys, and different in that at least some of the authors of the paper are medical doctors. The study in question involved the participation of 19 researchers. The paper is “Atherosclerosis across 4000 years of human history: the Horus study of four ancient populations,” peer-reviewed and published (2013) in the highly respectable journal The Lancet (Keys’ work, one might note, was also highly respectable). This study on atherosclerosis was well reported in mainstream news outlets and received much attention from those critical of paleo diets, offered as a final nail in the coffin, claimed as absolute proof that ancient people were as unhealthy as we are.

The 19 authors conclude that, “atherosclerosis was common in four preindustrial populations, including a preagricultural hunter-gatherer population, and across a wide span of human history. It remains prevalent in contemporary human beings. The presence of atherosclerosis in premodern human beings suggests that the disease is an inherent component of human ageing and not characteristic of any specific diet or lifestyle.” There you have it. Heart disease is simply in our genetics — so take your statin meds like your doctor tells you to do, just shut up and quit asking questions, quit looking at all the contrary evidence.

But even ignoring all else, does the evidence from this paper support their conclusion? No. It doesn’t require much research or thought to ascertain the weak case presented. In the paper itself, on multiple occasions including in the second table, the authors admit that three out of four of the populations were farmers who ate a largely agricultural diet and, of course, lived an agricultural lifestyle. At most, these examples can speak to the conditions of the Neolithic but not the Paleolithic. Of these three, only one was transitioning from an earlier foraging lifestyle, but, as with the other two, it was eating a higher-carb diet from foods they farmed. Also, the most well-known example of the bunch, the Egyptians, particularly points to the problems of an agricultural diet — as described by Michael Eades in Obesity in ancient Egypt:

“[S]everal thousand years ago when the future mummies roamed the earth their diet was a nutritionist’s nirvana. At least a nirvana for all the so-called nutritional experts of today who are recommending a diet filled with whole grains, fresh fruits and vegetables, and little meat, especially red meat. Follow such a diet, we’re told, and we will enjoy abundant health.

“Unfortunately, it didn’t work that way for the Egyptians. They followed such a diet simply because that’s all there was. There was no sugar – it wouldn’t be produced for another thousand or more years. The only sweet was honey, which was consumed in limited amounts. The primary staple was a coarse bread made of stone-ground, whole wheat. Animals were used as beasts of burden and were valued much more for the work they could do than for the meat they could provide. The banks of the Nile provided fertile soil for growing all kinds of fruits and vegetables, all of which were a part of the low-fat, high-carbohydrate Egyptian diet. And there were no artificial sweeteners, artificial coloring, artificial flavors, preservatives, or any of the other substances that are part of all the manufactured foods we eat today.

“Were the nutritionists of today right about their ideas of the ideal diet, the ancient Egyptians should have had abundant health. But they didn’t. In fact, they suffered pretty miserable health. Many had heart disease, high blood pressure, diabetes and obesity – all the same disorders that we experience today in the ‘civilized’ Western world. Diseases that Paleolithic man, our really ancient ancestors, appeared to escape.”

With unintentional humor, the authors of the paper note that, “None of the cultures were known to be vegetarian.” No shit. Maybe that is because until late in the history of agriculture there were no vegetarians and for good reason. As Weston Price noted, there is a wide variety of possible healthy diets as seen in traditional communities. Yet for all his searching for a healthy traditional community that was strictly vegan or even vegetarian, he could never find any; the closest examples were those that relied largely on such things as insects and grubs because of a lack of access to larger sources of protein and fat. On the other hand, the most famous vegetarian population, Hindu Indians, have one of the shortest lifespans (to be fair, though, that could be for other reasons such as poverty-related health issues).

Interestingly, there apparently has never been a study comparing an herbivore diet with a carnivore diet, although one study touched on it while not quite eliminating all plants from the latter. As for fat, there is no evidence that it is problematic (vegetable oils are another issue); if anything, the opposite: “In a study published in the Lancet, they found that people eating high quantities of carbohydrates, which are found in breads and rice, had a nearly 30% higher risk of dying during the study than people eating a low-carb diet. And people eating high-fat diets had a 23% lower chance of dying during the study’s seven years of follow-up compared to people who ate less fat” (Alice Park, The Low-Fat vs. Low-Carb Diet Debate Has a New Answer); and “The Mayo Clinic published a study in the Journal of Alzheimer’s Disease in 2012 demonstrating that in individuals favoring a high-carb diet, risk for mild cognitive impairment was increased by 89%, contrasted to those who ate a high-fat diet, whose risk was decreased by 44%” (WebMD interview of Dr. David Perlmutter). Yet the respectable authorities tell us that fat is bad for our health, making it paradoxical that many fat-gluttonous societies have better health. There are so many paradoxes, according to conventional thought, that one begins to wonder if conventional thought is the real paradox.

Now let me discuss the one group, the Unangan, that at first glance stands out from the rest. The authors describe them as “five Unangan people living in the Aleutian Islands of modern day Alaska (ca 1756–1930 CE, one excavation site).” Those mummies are far different from those of the other populations, which came much earlier in history. Four of the Unangan died around 1900 and one around 1850. Why does that matter? Because their entire world was being turned on its head at that time. The authors claim that, “The Unangan’s diet was predominately marine, including seals, sea lions, sea otters, whale, fish, sea urchins, and other shellfish and birds and their eggs. They were hunter-gatherers living in barabaras, subterranean houses to protect against the cold and fierce winds.” They base this claim on the assumption that these particular mummified Unangan had been eating the same diet as their ancestors for thousands of years, but the evidence points in the opposite direction.

Questioning this assumption, Jeffery Gerber explains that, “During life (before 1756–1930 CE) not more than a few short hundred years ago, the 5 Unangan/Aleut mummies were hardly part of an isolated group. The Fur Seal industry exploded in the 18th century bringing outside influence, often violent, from countries including Russia and Europe. These mummies during life, were probably exposed to foods (including sugar) different from their traditional diet and thus might not be representative of their hunter-gatherer origins” (Mummies, Clogged Arteries and Ancient Junk Food). One might add that, whatever Western foods may have been introduced, we do know of another factor — the Government of Nunavut’s official website states that, “European whalers regularly travelled to the Arctic in the late 17th and 18th century. When they visited, they introduced tobacco to Inuit.” Why is that significant? Tobacco is a known risk factor for atherosclerosis. Gideon Mailer and Nicola Hale, in their book Decolonizing the Diet, elaborate on the colonial history of the region (pp. 162-171):

“On the eve of Western contact, the indigenous population of present-day Alaska numbered around 80,000. They included the Alutiiq and Unangan communities, more commonly defined as Aleuts, Inupiat and Yupiit, Athabaskans, and the Tinglit and Haida groups. Most groups suffered a stark demographic decline from the mid-eighteenth century to the mid-nineteenth century, during the period of extended European — particularly Russian — contact. Oral traditions among indigenous groups in Alaska described whites as having taken hunting grounds from other related communities, warning of a similar fate to their own. The Unangan community, numbering more than 12,000 at contact, declined by around 80 percent by 1860. By as early as the 1820s, as Jacobs has described, “The rhythm of life had changed completely in the Unangan villages now based on the exigencies of the fur trade rather than the subsistence cycle, meaning that often villages were unable to produce enough food to keep them through the winter.” Here, as elsewhere, societal disruption was most profound in the nutritional sphere, helping account for the failure to recover population numbers following disease epidemics.

“In many parts of Alaska, Native American nutritional strategies and ecological niches were suddenly disrupted by the arrival of Spanish and Russian settlers. “Because,” as Saunt has pointed out “it was extraordinarily difficult to extract food from the challenging environment,” in Alaska and other Pacific coastal communities, “any disturbance was likely to place enormous stress on local residents.” One of indigenous Alaska’s most important ecological niches centered on salmon access points. They became steadily more important between the Paleo-Eskimo era around 4,200 years ago and the precontact period, but were increasingly threatened by Russian and American disruptions from the 1780s through the nineteenth century. Dependent on nutrients and omega fatty acids such as DHA from marine resources such as salmon, Aleut and Alutiiq communities also required other animal products, such as intestines, to prepare tools and waterproof clothing to take advantage of fishing seasons. Through the later part of the eighteenth century, however, Russian fur traders and settlers began to force them away from the coast with ruthless efficiency, even destroying their hunting tools and waterproof apparatus. The Russians were clear in their objectives here, with one of their men observing that the Native American fishing boats were “as indispensable as the plow and the horse for the farmer.”

“Here we are provided with another tragic case study, which allows us to consider the likely association between disrupted access to omega-3 fatty acids such as DHA and compromised immunity. We have already noted the link between DHA, reduced inflammation and enhanced immunity in the millennia following the evolution of the small human gut and the comparatively larger human brain. Wild animals, but particularly wild fish, have been shown to contain far higher proportions of omega-3 fatty acids than the food sources that apparently became more abundant in Native American diets after European contact, including in Alaska. Fat-soluble vitamins and DHA are abundantly found in fish eggs and fish fats, which were prized by Native Americans in the Northwest and Great Lakes regions, in the marine life used by California communities, and perhaps more than anywhere else, in the salmon products consumed by indigenous Alaskan communities. […]

“In Alaska, where DHA and vitamin D-rich salmon consumption was central to precontact subsistence strategies, alongside the consumption of nutrient-dense animal products and the regulation of metabolic hormones through periods of fasting or even through the efficient use of fatty acids or ketones for energy, disruptions to those strategies compromised immunity among those who suffered greater incursions from Russian and other European settlers through the first half of the nineteenth century.

“A collapse in sustainable subsistence practices among the Aleuts of Alaska exacerbated population decline during the period of Russian contact. The Russian colonial regime from the 1740s to 1840s destroyed Aleut communities through open warfare and by attacking and curtailing their nutritional resources, such as sea otters, which Russians plundered to supply the Chinese market for animal skins. Aleuts were often forced into labor, and threatened by the regular occurrence of Aleut women being taken as hostages. Curtailed by armed force, Aleuts were often relocated to the Pribilof Islands or to California to collect seals and sea otters. The same process occurred as Aleuts were co-opted into Russian expansion through the Aleutian Islands, Kodiak Island and into the southern coast of Alaska. Suffering murder and other atrocities, Aleuts provided only one use to Russian settlers: their perceived expertise in hunting local marine animals. They were removed from their communities, disrupting demography further and preventing those who remained from accessing vital nutritional resources due to the discontinuation of hunting frameworks. Colonial disruption, warfare, captivity and disease were accompanied by the degradation of nutritional resources. Aleut population numbers declined from 18,000 to 2,000 during the period of Russian occupation in the first half of the nineteenth century. A lag between the first period of contact and the intensification of colonial disruption demonstrates the role of contingent interventions in framing the deleterious effects of epidemics, including the 1837-38 smallpox epidemic in the region. Compounding these problems, communities used to a relatively high-fat and low-fructose diet were introduced to alcohol by the Russians, to the immediate detriment of their health and well-being.”

The traditional hunter-gatherer diet, as Mailer and Hale describe it, was high in the nutrients that protect against inflammation. The loss of these nutrients and the simultaneous decimation of the population was a one-two punch. Without the nutrients, their immune systems were compromised. And with their immune systems compromised, they were prone to all kinds of health conditions, probably including heart disease, which of course is related to inflammation. Weston A. Price, in Nutrition and Physical Degeneration, observed that morbidity and mortality from health conditions such as heart disease rise and fall with the seasons, following precisely the growth and dying away of vegetation throughout the year (which varies by region, as do the morbidity and mortality rates; the regions of comparison were in the United States and Canada). He was able to track this down to the seasonal change of fat-soluble vitamins, specifically vitamin D, in dairy. When fresh vegetation was available, cows ate it and so produced more of these nutrients and presumably more omega-3s at the same time.

Prior to colonization, the Unangan would have had access to even higher levels of these protective nutrients year round. The most nutritious dairy taken from the springtime wouldn’t come close in comparison to the nutrient profile of wild game. I don’t know why anyone would be shocked that, like agricultural populations, hunter-gatherers also experience worsening health after the loss of wild resources. Yet the authors of the mummy studies act as if they had made a radical discovery that throws to the wind every doubt anyone ever had about simplistic mainstream thought. It turns out, they seem to be declaring, that we are all victims of genetic determinism after all, so toss out your romantic fairy tales about healthy primitives from the ancient world. The problem is all the evidence that undermines their conclusion, including the evidence that they present in their own paper, at least when it is interpreted in full context.

As if responding to researchers, Mailer and Hale write (p. 186): “Conditions such as diabetes are thus often associated with heart disease and other syndromes, given their inflammatory component. They now make up a huge proportion of treatment and spending in health services on both sides of the Atlantic. Yet policy makers and researchers in those same health services often respond to these conditions reactively rather than proactively — as if they were solely genetically determined, rather than arising due to external nutritional factors. A similarly problematic pattern of analysis, as we have noted, has led scholars to ignore the central role of nutritional change in Native American population loss after European contact, focusing instead on purportedly immutable genetic differences.”

There is another angle related to the above but somewhat at a tangent. I’ll bring it up because the research paper mentions it in passing as a factor to be considered: “All four populations lived at a time when infections would have been a common aspect of daily life and the major cause of death. Antibiotics had yet to be developed and the environment was non-hygienic. In 20th century hunter-foragers-horticulturalists, about 75% of mortality was attributed to infections, and only 10% from senescence. The high level of chronic infection and inflammation in premodern conditions might have promoted the inflammatory aspects of atherosclerosis.”

This is familiar territory for me, as I’ve been reading much about inflammation and infections. The authors are presenting the old view of the immune system, as opposed to that of functional medicine, which looks at the entire human. An example of the latter is the hygiene hypothesis, which argues that it is exposure to microbes that strengthens the immune system; there has been much evidence in support of it (such as children raised with animals or on farms being healthier as adults). The researchers above are making an opposing argument, one contradicted by populations that remain healthy without modern medicine as long as they maintain a traditional diet and lifestyle in a healthy ecosystem, including living soil that hasn’t been depleted by intensive farming.

This isn’t only about agriculturalists versus hunter-gatherers. The distinction between populations goes deeper into culture and environment. Weston A. Price discovered this simple truth in finding healthy populations among both agriculturalists and hunter-gatherers, but they were specific populations under specific conditions. Also, at the time when he traveled in the early 20th century, there were still traditional communities living in isolation in Europe. One example is the Loetschental Valley in Switzerland, which he visited on two separate trips in the consecutive years of 1931 and 1932. As he writes of it:

“We were told that the physical conditions that would not permit people to obtain modern foods would prevent us from reaching them without hardship. However, owing to the completion of the Loetschberg Tunnel, eleven miles long, and the building of a railroad that crosses the Loetschental Valley, at a little less than a mile above sea level, a group of about 2,000 people had been made easily accessible for study, shortly prior to 1931. Practically all the human requirements of the people in that valley, except a few items like sea salt, have been produced in the valley for centuries.”

He points out that, “Notwithstanding the fact that tuberculosis is the most serious disease of Switzerland, according to a statement given me by a government official, a recent report of inspection of this valley did not reveal a single case.” In Switzerland and other countries, he found an “association of dental caries and tuberculosis.” The commonality was early life development, as underdeveloped and maldeveloped bone structure led to diverse issues: crowded teeth, smaller skull size, misaligned features, and what was called tubercular chest. And that was an outward sign of deeper and more systemic developmental problems involving malnutrition, inflammation, and the immune system:

“Associated with a fine physical condition the isolated primitive groups have a high level of immunity to many of our modern degenerative processes, including tuberculosis, arthritis, heart disease, and affections of the internal organs. When, however, these individuals have lost this high level of physical excellence a definite lowering in their resistance to the modern degenerative processes has taken place. To illustrate, the narrowing of the facial and dental arch forms of the children of the modernized parents, after they had adopted the white man’s food, was accompanied by an increase in susceptibility to pulmonary tuberculosis.”

Any population that lost its traditional way of life became prone to disease. But this could often just as easily be reversed by having the diseased individual return to healthy conditions. In discussing Dr. Josef Romig, Price said that, “Growing out of his experience, in which he had seen large numbers of the modernized Eskimos and Indians attacked with tuberculosis, which tended to be progressive and ultimately fatal as long as the patients stayed under modernized living conditions, he now sends them back when possible to primitive conditions and to a primitive diet, under which the death rate is very much lower than under modernized conditions. Indeed, he reported that a great majority of the afflicted recover under the primitive type of living and nutrition.”

The point made by Mailer and Hale was earlier made by Price. As seen with pre-contact Native Alaskans, the isolated traditional residents of the Loetschental Valley had nutritious diets. Price explained that he “arranged to have samples of food, particularly dairy products, sent to me about twice a month, summer and winter. These products have been tested for their mineral and vitamin contents, particularly the fat-soluble activators. The samples were found to be high in vitamins and much higher than the average samples of commercial dairy products in America and Europe, and in the lower areas of Switzerland.” Whether fat and organ meats from marine animals or dairy from pastured alpine cows, the key is high levels of fat-soluble vitamins and, of course, omega-3 fatty acids procured from a pristine environment (healthy soil and clean water with no toxins, farm chemicals, hormones, etc.). It also helped that both populations ate much that was raw, which maintains the high nutrient content that is partly destroyed through heat.

Some might find it hard to believe that what you eat can determine whether or not you get a serious disease like tuberculosis. Conventional medicine tells us that the only thing that protects us is either avoiding contact or vaccination. But this view is being seriously challenged, as Mailer and Hale make clear (p. 164): “Several studies have focused on the link between Vitamin D and the health outcomes of individuals infected with tuberculosis, taking care to discount other causal factors and to avoid determining causation merely through association. Given the historical occurrence of the disease among indigenous people after contact, including in Alaska, those studies that have isolated the contingency of immunity on active Vitamin D are particularly pertinent to note. In biochemical experiments, the presence of the active form of vitamin D has been shown to have a crucial role in the destruction of Mycobacterium tuberculosis by macrophages. A recent review has found that tuberculosis patients tend to retain a lower-than-average vitamin D status, and that supplementation of the nutrient improved outcomes in most cases.” As an additional thought, the popular tuberculosis sanatoriums, some in the Swiss Alps, were attractive because “it was believed that the climate and above-average hours of sunshine had something to do with it” (Jo Fahy, A breath of fresh air for an alpine village). What does sunlight help the body to produce? Vitamin D.

As an additional perspective, James C. Scott, in Against the Grain, writes that, “Virtually every infectious disease caused by micro-organisms and specifically adapted to Homo sapiens has arisen in the last ten thousand years, many of them in the last five thousand years as an effect of ‘civilisation’: cholera, smallpox, measles, influenza, chickenpox, and perhaps malaria.” It is not only that agriculture introduces new diseases but also that it makes people susceptible to them. That might be true, as Scott suggests, even of a disease like malaria. The Piraha are more likely to die of malaria than anything else, but that might not have been true in the past. Let me offer a speculation by connecting to the mummy study.

The Ancestral Puebloans, one of the groups in the mummy study, were at the time farming maize (corn) and squash while foraging pine nuts, seeds, amaranth (grain), and grasses. How does this compare to the more recent Piraha? A 1948 Smithsonian publication, Handbook of South American Indians, edited by Julian H. Steward, reported that, “The Piraha grew maize, sweet manioc (macaxera), a kind of yellow squash (jurumum), watermelon, and cotton” (p. 267). So it turns out that, like the Ancestral Puebloans, the Piraha have been on their way toward a more agricultural lifestyle for a while. I also noted that the same publication added the detail that the Piraha “did not drink rum,” but by the time Daniel Everett met the Piraha in 1978 traders had already introduced them to alcohol and it had become an occasional problem. Not only were they becoming agricultural but also Westernized, two factors that likely contributed to decreased immunity.

Like other modern hunter-gatherers, the Piraha have been affected by the Neolithic Revolution and are in many ways far different from Paleolithic hunter-gatherers. Ancient dietary habits are shown in the analysis of ancient bones — M.P. Richards writes that, “Direct evidence from bone chemistry, such as the measurement of the stable isotopes of carbon and nitrogen, do provide direct evidence of past diet, and limited studies on five Neanderthals from three sites, as well as a number of modern Palaeolithic and Mesolithic humans indicates the importance of animal protein in diets. There is a significant change in the archaeological record associated with the introduction of agriculture worldwide, and an associated general decline in health in some areas. However, there is a rapid increase in population associated with domestication of plants, so although in some regions individual health suffers after the Neolithic revolution, as a species humans have greatly expanded their population worldwide” (A brief review of the archaeological evidence for Palaeolithic and Neolithic subsistence). This is further supported by the analysis of coprolites. “Studies of ancient human coprolites, or fossilized human feces, dating anywhere from three hundred thousand to as recent as fifty thousand years ago, have revealed essentially a complete lack of any plant material in the diets of the subjects studied (Bryant and Williams-Dean 1975),” Nora Gedgaudas tells us in Primal Body, Primal Mind (p. 39).

This diet changed as humans entered our present interglacial period with its warmer temperatures and greater abundance of vegetation, which was lacking during the Paleolithic Period: “There was far more plant material in the diets of our more recent ancestors than our more ancient hominid ancestors, due to different factors” (Gedgaudas, p. 37). Following the earlier megafauna mass extinction, it wasn’t only agriculturalists but also hunter-gatherers who began to eat more plants and in many cases make use of cultivated plants (either plants they cultivated themselves or plants they adopted from nearby agriculturalists). To emphasize how drastic this change was, this loss of abundant meat and fat, consider the fact that humans have yet to regain the average height and skull size of Paleolithic humans.

The authors of the mummy study didn’t even attempt to look at the data on Paleolithic humans. The populations compared are entirely from the past few millennia. And the only hunter-gatherer group included was post-contact. So, why are the authors so confident in their conclusion? I presume they were simply trying to get published and get media attention in a highly competitive market of academic scholarship. These people obviously aren’t stupid, but they had little incentive to fully inform themselves either. All the info I shared in this post I was able to gather in about half an hour of web searches, not exactly difficult academic research. It’s amazing the info that is easily available these days, for those who want to find it.

Let me make one last point. The mummy study isn’t without its merits. The paper mentions other evidence that remains to be explained: “We also considered the reliability and previous work of the authors. Autopsy studies done as long ago as the mid-19th century showed atherosclerosis in ancient Egyptians. Also, in more recent times, Zimmerman undertook autopsies and described atherosclerosis in the mummies of two Unangan men from the same cave as our Unangan mummies and of an Inuit woman who lived around 400 CE. A previous study using CT scanning showed atherosclerotic calcifications in the aorta of the Iceman, who is believed to have lived about 3200 BCE and was discovered in 1991 in a high snowfield on the Italian-Austrian border.”

Let’s break that down. Further examples of Egyptian mummies are irrelevant, as their diet was so strikingly similar to the idealized Western diet recommended by mainstream doctors, dieticians, and nutritionists. That leaves the rest to account for. The older Unangan mummies are far more interesting, and any meaningful paper would have led with that piece of data, but even then it wouldn’t mean what the authors think it means. Atherosclerosis is one small factor and not necessarily as significant as assumed. From a functional medicine perspective, it’s the whole picture that matters in how the body actually functions and in the health that results. If so, atherosclerosis might not indicate the same thing for all populations. In Nourishing Diets, Sally Fallon Morell writes (pp. 124-5):

“Critics have pointed out that Keys omitted from his study many areas of the world where consumption of animal foods is high and deaths from heart attack are low, including France — the so-called French paradox. But there is also a Japanese paradox. In 1989, Japanese scientists returned to the same two districts that Keys had studied. In an article titled “Lessons for Science from the Seven Countries Study,” they noted that per capita consumption of rice had declined, while consumption of fats, oils, meats, poultry, dairy products and fruit had all increased. […]

“During the postwar period of increased animal consumption, the Japanese average height increased three inches and the age-adjusted death rate from all causes declined from 17.6 to 7.4 per 1,000 per year. Although the rates of hypertension increased, stroke mortality declined markedly. Deaths from cancer also went down in spite of the consumption of animal foods.

“The researchers also noted — and here is the paradox — that the rate of myocardial infarction (heart attack) and sudden death did not change during this period, in spite of the fact that the Japanese weighed more, had higher blood pressure and higher cholesterol levels, and ate more fat, beef and dairy foods.”

Right here in the United States, we have our own ‘paradox’ as well. Good Calories, Bad Calories by Gary Taubes makes a compelling argument that, based on the scientific research, there is no strong causal link between atherosclerosis and coronary heart disease. Nina Teicholz has also written extensively about this, such as in her book The Big Fat Surprise; and in an Atlantic piece (How Americans Got Red Meat Wrong) she lays out some of the evidence showing that Americans in the 19th century, as compared to the following century, ate more meat and fat while they ate fewer vegetables and fruits. Nonetheless: “During all this time, however, heart disease was almost certainly rare. Reliable data from death certificates is not available, but other sources of information make a persuasive case against the widespread appearance of the disease before the early 1920s.” Whether or not earlier Americans had high rates of atherosclerosis, there is strong evidence indicating they did not have high rates of heart disease, of strokes and heart attacks. The health crisis for these conditions, as Teicholz notes, didn’t take hold until the very moment meat and animal fat consumption took a nosedive. So what gives?

The takeaway is this. We have no reason to assume that atherosclerosis in the present or in the past can tell us much of anything about general health. Even ignoring the fact that none of the mummies studied was from a high-protein and high-fat Paleo population, we can make no meaningful interpretations of the presence of atherosclerosis among some of the individuals. Going by modern data, there is no reason to jump to the conclusion that they had high mortality rates because of it. Quite likely, they died from completely unrelated health issues. A case in point is that of the Masai, about whom there is much debate in interpreting the data. George V. Mann and others wrote a paper, Atherosclerosis in the Masai, that demonstrated the complexity:

“The hearts and aortae of 50 Masai men were collected at autopsy. These pastoral people are exceptionally active and fit and they consume diets of milk and meat. The intake of animal fat exceeds that of American men. Measurements of the aorta showed extensive atherosclerosis with lipid infiltration and fibrous changes but very few complicated lesions. The coronary arteries showed intimal thickening by atherosclerosis which equaled that of old U.S. men. The Masai vessels enlarge with age to more than compensate for this disease. It is speculated that the Masai are protected from their atherosclerosis by physical fitness which causes their coronary vessels to be capacious.”

Put this in the context provided in What Causes Heart Disease? by Sally Fallon Morell and Mary Enig: “The factors that initiate a heart attack (or a stroke) are twofold. One is the pathological buildup of abnormal plaque, or atheromas, in the arteries, plaque that gradually hardens through calcification. Blockage most often occurs in the large arteries feeding the heart or the brain. This abnormal plaque or atherosclerosis should not be confused with the fatty streaks and thickening that is found in the arteries of both primitive and industrialized peoples throughout the world. This thickening is a protective mechanism that occurs in areas where the arteries branch or make a turn and therefore incur the greatest levels of pressure from the blood. Without this natural thickening, our arteries would weaken in these areas as we age, leading to aneurysms and ruptures. With normal thickening, the blood vessel usually widens to accommodate the change. But with atherosclerosis the vessel ultimately becomes more narrow so that even small blood clots may cause an obstruction.”

A distinction is being made here that maybe wasn’t being made in the mummy study. What gets measured as atherosclerosis could correlate to diverse health conditions and consequences in various populations across dietary lifestyles, regional environments, and historical and prehistorical periods. Finding atherosclerosis in an individual, especially a mummy, might not tell us any useful info about overall health.

Just for good measure, let’s tackle the last piece of remaining evidence the authors mention: “A previous study using CT scanning showed atherosclerotic calcifications in the aorta of the Iceman, who is believed to have lived about 3200 BCE and was discovered in 1991 in a high snowfield on the Italian-Austrian border.” Calling him the Iceman sounds, to most ears, similar to calling an ancient person a caveman, implying that he was a hunter, since it is hard to grow plants on ice. In response, Paul Mabry, in Did Meat Eating Make Ancient Hunter Gatherers Get Heart Disease, shows what was left out of the research paper:

“Sometimes the folks trying to discredit hunter-gather diets bring in Ötzi, “The Iceman” a frozen human found in the Tyrolean Mountains on the border between Austria and Italy that also had plaques in his heart arteries. He was judged to be 5300 years old making his era about 3400 BCE. Most experts feel agriculture had reached Europe almost 700 years before that according to this article. And Ötzi himself suggests they are right. Here’s a quote from the Wikipedia article on Ötzi’s last meal (a sandwich): “Analysis of Ötzi’s intestinal contents showed two meals (the last one consumed about eight hours before his death), one of chamois meat, the other of red deer and herb bread. Both were eaten with grain as well as roots and fruits. The grain from both meals was a highly processed einkorn wheat bran,[14] quite possibly eaten in the form of bread. In the proximity of the body, and thus possibly originating from the Iceman’s provisions, chaff and grains of einkorn and barley, and seeds of flax and poppy were discovered, as well as kernels of sloes (small plumlike fruits of the blackthorn tree) and various seeds of berries growing in the wild.[15] Hair analysis was used to examine his diet from several months before. Pollen in the first meal showed that it had been consumed in a mid-altitude conifer forest, and other pollens indicated the presence of wheat and legumes, which may have been domesticated crops. Pollen grains of hop-hornbeam were also discovered. The pollen was very well preserved, with the cells inside remaining intact, indicating that it had been fresh (a few hours old) at the time of Ötzi’s death, which places the event in the spring. Einkorn wheat is harvested in the late summer, and sloes in the autumn; these must have been stored from the previous year.””

Once again, we are looking at the health issues of someone eating an agricultural diet. It’s amazing that the authors, 19 of them, apparently all agreed that diet has nothing to do with a major component of health. That is patently absurd. To the credit of The Lancet, they published a criticism of this conclusion, Atherosclerosis in ancient populations by Gino Fornaciari and Raffaele Gaeta (though these critics repeat their own preferred conventional wisdom in their view of saturated fat):

“The development of vascular calcification is related not only to atherosclerosis but also to conditions such as disorders of calcium-phosphorus metabolism, diabetes, chronic microinflammation, and chronic renal insufficiency.

“Furthermore, stating that atherosclerosis is not characteristic of any specific diet or lifestyle, but an inherent component of human ageing is not in agreement with recent studies demonstrating the importance of diet and physical activity.5 If atherosclerosis only depended on ageing, it would not have been possible to diagnose it in a young individual, as done in the Horus study.1

“Finally, classification of probable atherosclerosis on the basis of the presence of a calcification in the expected course of an artery seems incorrect, because the anatomy can be strongly altered by post-mortem events. The walls of the vessels might collapse, dehydrate, and have the appearance of a calcific thickening. For this reason, the x-ray CT pattern alone is insufficient and diagnosis should be supported by histological study.”

As far as I know, this didn’t lead to a retraction of the paper. Nor did the criticism receive the attention that the paper itself was given. None of the people who praised the paper bothered to point out the criticism, at least not among what I came across. Anyway, how did this weakly argued paper, based on faulty evidence, get published in the first place? And how did it then get spread by so many as if it were proven fact?

This is the uphill battle faced by anyone seeking to offer an alternative perspective, especially on diet. This makes meaningful debate next to impossible. That won’t stop those like me from slowly chipping away at the vast edifice of the dominant paradigm. On a positive note, it helps when the evidence used against an alternative view, after reinterpretation, ends up being strong evidence in favor of it.

An Empire of Shame

America as an empire. This has long been a contentious issue, going back to the colonial era, first as a debate over whether Americans wanted to remain a part of the British Empire and later as a debate over whether Americans wanted to create a new empire. We initially chose against empire with the Declaration of Independence and Articles of Confederation. But then we chose for empire with the Constitution that allowed (pseudo-)Federalists to gain power and reshape the new government.

Some key Federalists openly talked about an American Empire. For the most part, though, American leaders have kept their imperial aspirations hidden behind rhetoric, even as our actions were obviously imperialistic. Heck, an Anti-Federalist like Thomas Jefferson took the imperialistic action of the Louisiana Purchase, a deal between one empire and another. Imperial expansionism continued through the 19th century and into the 20th century with numerous military interventions, from the Indian Wars to the Banana Wars. Not a year has gone by in American history when we weren’t involved in a war of aggression.

Yet it still is hard for Americans to admit that we are an empire. I’ve had numerous discussions with my conservative father on this topic. At times, he has surprisingly admitted we are an empire, but usually he is resistant. In our most recent debate, it occurred to me that the resistance is motivated by shame. We don’t want to admit we are an empire because we are ashamed of our government’s brutal use of power on our behalf. And shame is a powerful force. People will do and allow the most horrific actions out of shame.

Empires of the past tended to be projects built on pride and honor, of brazen rule through force. We Americans, instead, feel the need to hide our imperialism behind an image of benign and reluctant power. The difference is that Americans, ever since we were colonists, have had an inferiority complex. It makes us both yearn for greatness and fear mockery. Our country is the young teenager who must prove himself, while not yet having the confidence to really believe in himself. So, we act slyly as an empire with implicit threats and backroom manipulations, proxy wars and covert operations, puppet governments and corporatist front groups. Then, when these morally depraved actions come to light, we rationalize why they were exceptions or why circumstances forced our hand. It’s not our fault. We don’t want to hurt others, but we had no other choice. Besides, we were defending freedom and free markets; that is why we constantly intervene in other countries and endlessly kill innocents. It is for the greater good. We are willing to make this self-sacrifice on behalf of others. We are the real victims.

I regularly come across quotes from American leaders who publicly or privately complained about our government, who spoke of failure and betrayal. This has included presidents and other political officials going back to the beginning of the country. Think of Jefferson and Adams in their later years worrying about the fate of the American experiment, that maybe we Americans didn’t have what it takes, that maybe we aren’t destined to be great. The sense of inferiority has haunted our collective imagination for so long that it is practically a defining feature. Despite being the largest empire in world history, we don’t have the self-certain righteousness to declare ourselves an empire. That is why we get the false and weak bravado of someone like Donald Trump; sadly, he represents our country all too well. Then again, Hillary Clinton also represented our country when she suppressed wages in Haiti so that U.S. companies would have cheap foreign labor (i.e., corporate wage slavery), the kind of action the U.S. takes all the time in secret in order to maintain control. We have a talent for committing evil with a smiling face… or nervous laughter.

Rather than clear power asserted with pride and honor, the United States government acts like a bully on the world stage. We are constantly trying to prove ourselves. And our denial of imperialism is gaslighting, meant to make anyone feel crazy who dares voice the truth of what we are. We tell others that we are the good guys. What we really are trying to do is convince ourselves of our own bullshit. This causes a nervousness in the public mind, a fear that we might be found out. We are paralyzed by our shame, and it gets tiring trying to keep up the pretense. The facade is crumbling. Our inner shame has become public, such that now we are the shame of the world. That probably means our leaders will soon start a war to divert attention, and both main parties will be glad for the diversion.

There is a compelling argument made by James Gilligan in Preventing Violence. Among other things, he sees shame as a key cause of interpersonal violence. And he argues that there is something particularly shame-inducing about our society, especially for those at the bottom of it. He is attempting to explain violent crime. But what occurs to me is that our leaders are just as violent, if not more so. It’s simply that those who make the laws determine which violence is legal and which illegal, their own violence falling in the former category as it is implemented through the state or with the support of the state. Maybe it is shame that causes our government to be so violent toward foreign populations and toward the American population. And maybe shame is what causes American citizens to remain silent in their complicity, as the violence is done in their name.

The United States was always this way

“The real difficulty is with the vast wealth and power in the hands of the few and the unscrupulous who represent or control capital. Hundreds of laws of Congress and the state legislatures are in the interest of these men and against the interests of workingmen. These need to be exposed and repealed. All laws on corporations, on taxation, on trusts, wills, descent, and the like, need examination and extensive change. This is a government of the people, by the people, and for the people no longer. It is a government of corporations, by corporations, and for corporations.”

―Diary and Letters of Rutherford Birchard Hayes: Nineteenth President of the United States (from REAL Democracy History Calendar: October 1 – 7)

The Power of Language Learning

“I feel that American as against British English, and English of any major dialect as against Russian, and both languages as against the Tarascan language of Mexico constitute different worlds. I note that it is persons with experience of foreign languages and poetry who feel most acutely that a natural language is a different way not only of talking but of thinking and imaging and of emotional life.”
~Paul Friedrich, The Language Parallax, Kindle Locations 356-359

“Marketing professor David Luna has performed tests on people who are not just bilingual but bicultural—those who have internalized two different cultures—which lend support to this model of cultural frames. Working with people immersed equally in both American and Hispanic cultures, he examined their responses to various advertisements and newspaper articles in both languages and compared them to those of bilinguals who were only immersed in one culture. He reports that biculturals, more than monoculturals, would feel “like a different person” when they spoke different languages, and they accessed different mental frames depending on the cultural context, resulting in shifts in their sense of self.”
~Jeremy Lent, The Patterning Instinct, p. 204

Like Daniel Everett, the much earlier Roger Williams went to convert the natives, and in the process he was deconverted, at least to the extent of losing his righteous Puritanism. And as with Everett, he studied the native languages and wrote about them. That could be an example of the power of linguistic relativity, in that studying another language could cause you to enter another cultural worldview.

On a related note, Baruch Spinoza did textual analysis, Thomas Paine did Biblical criticism, Friedrich Nietzsche did philology, etc. It makes one wonder how studying language might help shape the thought and redirect the life trajectory of certain thinkers. Many radicals have a history of studying languages and texts. The same thing is seen with a high number of academics, ministers, and apologists turning into agnostics and atheists through an originally faithful study of the Bible (e.g., Robert M. Price).

There is a trickster quality to language, something observed by many others. To closely study language and the products of language is to risk having one’s mind unsettled and then to risk being scorned by those locked into a single linguistic worldview. What Everett found was that, in trying to translate the Bible for the Piraha, he was destabilizing his place within the religious order and also, in discovering the lack of linguistic recursion, destabilizing his place within the academic order. Both organized religion and organized academia are institutions of power that maintain the proper order. For the same reason of power, governments have often enforced a single language for the entire population, as thought control and social control, as enforced assimilation.

Monolingualism goes hand in hand with monoculturalism. And so simply learning a foreign language can be one of the most radical acts that one can commit. The more foreign the language, the more radical the effect. But sometimes simply scrutinizing one’s own native language can shift one’s mind, a possible connection between writing and a greater potential for independent thought. Then again, knowledge of language can also make one a better rhetorician and propagandist. Language as trickster phenomenon does have two faces.

* * *

The Bilingual Mind
by Aneta Pavlenko
pp. 25-27

Like Humboldt and Sapir before him, Whorf, too, believed in the plasticity of the human mind and its ability to go beyond the categories of the mother tongue. This belief permeates the poignant plea for ‘multilingual awareness’ made by the terminally ill Whorf to the world on the brink of World War II:

I believe that those who envision a world speaking only one tongue, whether English, German, Russian, or any other, hold a misguided ideal and would do the evolution of the human mind the greatest disservice. Western culture has made, through language, a provisional analysis of reality and, without correctives, holds resolutely to that analysis as final. The only correctives lie in all those other tongues which by aeons of independent evolution have arrived at different, but equally logical, provisional analyses. ([1941b] 2012: 313)

Whorf’s arguments fell on deaf ears, because they were made in a climate significantly less tolerant of linguistic diversity than that of the late imperial Russia and the USSR. In the nineteenth century, large immigrant communities in the US (in particular German speakers) enjoyed access to native-language education, press and theater. The situation began to change during the period often termed the Great Migration (1880–1924), when approximately 24 million new immigrants entered the country (US Bureau of the Census, 1975). The overwhelming influx raised concerns about national unity and the capacity of American society to assimilate such a large body of newcomers. In 1917, when the US entered the European conflict declaring war on Germany, the anti-immigrant sentiments found an outlet in a strong movement against ‘the language of the enemy’: German books were removed from libraries and destroyed, German-language theaters and publications closed, and German speakers became subject to intimidation and threats (Luebke, 1980; Pavlenko, 2002a; Wiley, 1998).

The advisability of German – and other foreign-language-medium – instruction also came into question, in a truly Humboldtian fashion that linked the learning of foreign languages with adoption of ‘foreign’ worldviews (e.g., Gordy, 1918). The National Education Association went as far as to declare “the practice of giving instruction … in a foreign tongue to be un-American and unpatriotic” (Fitz-Gerald, 1918: 62). And while many prominent intellectuals stood up in defense of foreign languages (e.g., Barnes, 1918), bilingual education gave way and so did foreign-language instruction at the elementary level, where children were judged most vulnerable and where 80% of them ended their education. Between 1917 and 1922, Alabama, Colorado, Delaware, Iowa, Nebraska, Oklahoma, and South Dakota issued laws that prohibited foreign-language instruction in grades I through VIII, while Wisconsin and Minnesota restricted it to one hour a day. Louisiana, Indiana, and Ohio made the teaching of German illegal at the elementary level, and so did several cities with large German-speaking populations, including Baltimore, New York City, and Philadelphia (Luebke, 1980; Pavlenko, 2002a). The double standard that made bilingualism an upper-class privilege reserved for ‘real’ Americans is seen in the address given by Vassar College professor Marian Whitney at the Modern Language Teachers conference in 1918:

In so far as teaching foreign languages in our elementary schools has been a means of keeping a child of foreign birth in the language and ideals of his family and tradition, I think it a bad thing; but to teach young Americans French, German, or Spanish at an age when their oral and verbal memory is keen and when languages come easily, is a good thing. (Whitney, 1918: 11–12)

The intolerance reached its apogee in Roosevelt’s 1919 address to the American Defense Society that equated English monolingualism with loyalty to the US:

We have room for but one language here, and that is the English language, for we intend to see that the crucible turns our people out as Americans, of American nationality, and not as dwellers in a polyglot boardinghouse; and we have room for but one sole loyalty, and that is the loyalty to the American people. (cited in Brumberg, 1986: 7)

Reprinted in countless Board of Education brochures, this speech fortified the pressure not only to learn English but to abandon native languages. This pressure precipitated a rapid shift to English in many immigrant communities, further facilitated by the drastic reduction in immigrant influx, due to the quotas established by the 1924 National Origins Act (Pavlenko, 2002a). Assimilation efforts also extended to Native Americans, who were no longer treated as sovereign nations – many Native American children were sent to English-language boarding schools, where they lost their native languages (Morgan, 2009; Spack, 2002).

The endangerment of Native American languages was of great concern to Boas, Sapir, and Whorf, yet their support for linguistic diversity and multilingualism never translated into reforms and policies: in the world outside of academia, Americanization laws and efforts were making US citizenry unapologetically monolingual and the disappearance of ‘multilingual awareness’ was applauded by academics who viewed bilingualism as detrimental to children’s cognitive, linguistic and emotional development (Anastasi & Cordova, 1953; Bossard, 1945; Smith, 1931, 1939; Spoerl, 1943; Yoshioka, 1929; for discussion, see Weinreich, 1953: 115–118). It was only in the 1950s that Arsenian (1945), Haugen (1953, 1956), and Weinreich (1953) succeeded in promoting a more positive view of bilingualism, yet part of their success resided in the fact that by then bilingualism no longer mattered – it was regarded, as we will see, as an “unusual” characteristic, pervasive at the margins but hardly relevant for the society at large.

In the USSR, on the other hand, linguists’ romantic belief in linguistic rights and politicians’ desire to institutionalize nations as fundamental constituents of the state gave rise to the policy of korenizatsia [nativization] and a unique educational system that promoted the development of multilingual competence (Hirsch, 2005; Pavlenko, 2013; Smith, 1998). It is a little-known and under-appreciated irony that throughout the twentieth century, language policies in the ‘totalitarian’ Soviet Union were significantly more liberal – even during the period of the so-called ‘russification’ – than those in the ‘liberal’ United States.