The Disease of Nostalgia

“The nostalgic is looking for a spiritual addressee. Encountering silence, he looks for memorable signs, desperately misreading them.”
― Svetlana Boym, The Future of Nostalgia

Nostalgia is one of those strange medical conditions from the past, first observed in 17th-century soldiers sent off to foreign lands during that era of power struggles between colonial empires. It has since lost that medical framing, as it is now seen as a mere emotion, mood, or quality. And it has become associated with the reactionary mind and invented traditions. We no longer take it seriously, sometimes even dismissing it as a sign of immaturity.

But it used to be considered a physiological disease with measurable symptoms, such as brain inflammation, and with serious repercussions, as the afflicted could literally waste away and die. It was a profound homesickness experienced as an existential crisis of identity, a longing for a particular place and the sense of being uprooted from it. Then the condition shifted from a focus on place to a focus on time. It became more abstract and, because of that, it lost its medical status. This happened just as a new disease, neurasthenia, took its place in the popular imagination.

In America, nostalgia never took hold to the same degree as it did in Europe. It finally made its appearance during the American Civil War, only to be dismissed as a mark of unmanliness and weak character, a defect and deficiency. It was a disease of civilization, yet it strongly affected the least civilized, such as rural farmers. America was sold as a nation of progress, and so attachment to old ways was deemed un-American. Neurasthenia better fit the mood that the ruling elite sought to promote and, unlike nostalgia, it was presented as a disease of the most civilized, although over time it too became a common malady, specifically as it was Europeanized.

Over the centuries, there was a shift in the sense of time. Up through the early colonial era, a cyclical worldview remained dominant (John Demos, Circles and Lines). As time became linear, there was no possibility of a return. The revolutionary era permanently broke the psychological link between past and future. There was even a revolution in the understanding of ‘revolution’ itself, a term that originated in astronomy and literally meant a cyclical return. In a return, there is replenishment. But without that possibility, one is thrown back on individual reserves that are limited and must be managed. The capitalist self of hyper-individualism is finally fully formed. That is what neurasthenia was concerned with, and so nostalgia lost its explanatory power. In The Future of Nostalgia, Svetlana Boym writes:

“From the seventeenth to the nineteenth century, the representation of time itself changed; it moved away from allegorical human figures— an old man, a blind youth holding an hourglass, a woman with bared breasts representing Fate— to the impersonal language of numbers: railroad schedules, the bottom line of industrial progress. Time was no longer shifting sand; time was money. Yet the modern era also allowed for multiple conceptions of time and made the experience of time more individual and creative.”

As society turned toward an ethos of the dynamic, it became ungrounded and unstable. Some of the last healthy ties to the bicameral mind were severed. (Interestingly, regarding early diagnoses of nostalgia as a disease, Boym states that, “One of the early symptoms of nostalgia was an ability to hear voices or see ghosts.” That sounds like the bicameral mind re-emerging under conditions of stress, not unlike John Geiger’s third man factor. In nostalgia as in the archaic mind, there is a secret connection between language and music, as united through voice — see Development of Language and Music and Spoken Language: Formulaic, Musical, & Bicameral.)

Archaic authorization mutated into totalitarianism, a new refuge for the anxiety-riddled mind. And the emerging forms of authoritarianism draw heavily upon the nostalgic turn (Ben G. Price, Authoritarian Grammar and Fundamentalist Arithmetic Part II), just as did the first theocracies (religion, writes Julian Jaynes, is “the nostalgic anguish for the lost bicamerality of a subjectively conscious people”), even as, or especially because, the respectable classes dismissed it. This is courting disaster, for the archaic mind still lives within us, still speaks in the world, even if the voices are no longer recognized.

The first laments of loss echoed out from the rubble of the Bronze Age and, precisely as the longing has grown stronger, the dysfunctions associated with it have become normalized. But how disconnected and lost in abstractions can we get before either we become something else entirely or we face another collapse?

“Living amid an ongoing epidemic that nobody notices is surreal. It is like viewing a mighty river that has risen slowly over two centuries, imperceptibly claiming the surrounding land, millimeter by millimeter. . . . Humans adapt remarkably well to a disaster as long as the disaster occurs over a long period of time”
~E. Fuller Torrey & Judy Miller, Invisible Plague

* * *

As a side note, I’d point to utopia as being the other side of the coin to nostalgia. And so the radical is the twin of the reactionary. In a different context, I said something about shame that could apply equally well to nostalgia (“Why are you thinking about this?”): “The issue of shame is a sore spot where conservatism and liberalism have, from their close proximity, rubbed each other raw. It is also a site of much symbolic conflation, the linchpin like a stake in the ground to which a couple of old warriors are tied in their ritual dance of combat and wounding, where both are so focused on one another that neither pays much attention to the stake that binds them together. In circling around, they wind themselves ever tighter and their tethers grow shorter.”

In conversing with someone on the political left, an old pattern became apparent. This guy, though he has a slight radical bent, is a fairly mainstream liberal coming out of the Whiggish tradition of ‘moderate’ progressivism, an ideological mindset that is often conservative-minded and sometimes reactionary (e.g., lesser evil voting no matter how evil it gets). This kind of person is forever pulling their punches. To continue from the same piece, I wrote that, “The conservative’s task is much easier for the reason that most liberals don’t want to untangle the knot, to remove the linchpin. Still, that is what conservatives fear, for they know liberals have that capacity, no matter how unlikely they are to act on it. This fear is real. The entire social order is dependent on overlapping symbolic conflations, each a link in a chain, and so each a point of vulnerability.”

To pull that linchpin would require confronting the concrete issue at hand, getting one’s hands dirty. But that is what the moderate progressive fears, for the liberal mind feels safe and protected within abstractions. Real-world context will always be sacrificed. Such a person mistrusts the nostalgia of the reactionary while maybe fearing even more the utopianism of the radical, flitting back and forth from one to the other and never getting anywhere. So, they entirely retreat from the battle and lose themselves in comforting fantasies of abstract ideals (making them prone to false equivalencies in their dreams of equality). In doing so, despite being well informed, they miss the trees for the forest, miss the reality on the ground for all the good intentions.

Neither nostalgia nor utopianism can offer a solution, even as both indicate the problem. That isn’t to say there is an escape either, for that also reinforces the pattern of anxiety, of fear and hope. The narrative predetermines our roles and the possibilities of action. We need a new narrative. The disease model of the human psyche, framed as nostalgia or neurasthenia or depression or anything else, is maybe not so helpful. Yet we have to take seriously that the stress of modernity is not merely something in people’s minds. Scapegoating the individual simply distracts from the failure of individualism. These conditions of identity are both real and imagined — that is what makes them powerful, whatever name they go by and whatever ideology they serve.

* * *

Let me throw out some loose thoughts. There is something that feels off about our society, and it is hard to put one’s finger on. That is why, in our free-floating anxiety, we look for anything to grab hold of. Most of the debates that divide the public are distractions from the real issue that we don’t know how to face, much less comprehend. These red herrings of social control are what I call symbolic conflation. To put it simply, there is plenty of projecting going on — and it is mutual from all sides involved, and it’s extremely distorted.

I’ll leave it at that. What is important for my purposes here is the anxiety itself, the intolerable sense of dissatisfaction or dukkha. Interestingly, this sense gets shifted onto the individual and so further justifies the very individualism that is at the heart of the problem. It is our individuality that makes us feel so ill at ease with the world because it disconnects and isolates us. The individual inevitably fails because individualism is ultimately impossible. We are social creatures through and through. It requires immense effort to create and maintain individuality, and sweet Jesus! is it tiresome. That is the sense of being drained that is common across these many historical conditions, from the earlier melancholia to the present depression and everything in between.

Since the beginning of modernity, there has been a fear that too many individuals are simply not up to the task. When reading about these earlier ‘diseases’, there is a common thread running across the long history. The message is how the individual will be made to get in line with the modern world, not how to get the modern world in line with human nature. The show must go on. Progress must continue. There is no going back, so we’re told. Onward and upward. This strain of endless change and uncertainty has required special effort in enculturating and indoctrinating each new generation. In the Middle Ages and in tribal cultures, children weren’t treated as special but were basically considered miniature adults. There was no protected childhood with an extended period to raise, train, and educate the child. But in our society, the individual has to be made, as does the citizen and the consumer. None of this comes naturally and so must be artificially imposed. The child will resist, and more than a few will come out the other side with severe damage, but the sacrifice must be made for the greater good of society.

This was seen, in the United States, most clearly after the American Revolution. Citizen-making became a collective project. Children needed to be shaped into a civic-minded public. And as seen in Europe, adults needed to be forced into a national identity, even if it required bullying or even occasionally burying a few people alive to get the point across. No stragglers will be allowed! (Nonetheless, a large part of the European population maintained local identities until the world war era.) Turning boys into men became a particular obsession in the early 20th century, with all of the building of parks, advocacy for hunting and fishing, creation of the Boy Scouts, and on and on. Boys used to turn into men spontaneously without any needed intervention, but with nostalgia and neurasthenia there was this growing fear of effeminacy and degeneracy. The civilizing project was important and had to be done, no matter how many people were harmed in the process, even genocides. Creating the modern nation-state was a brutal and often bloody endeavor. No one willingly becomes a modern individual. It only happens under threat of violence and punishment.

By the way, this post is essentially an elaboration on my thoughts from another post, The Crisis of Identity. In that other post, I briefly mention nostalgia, but the focus was more on neurasthenia and related topics. It’s an extensive historical survey. This is part of a longer-term intellectual project of mine, an attempt to make sense of this society and how it came to be this way. Below are some key posts to consider, although I leave out those related to Jaynesian and related scholarship because that is a large area of thought all on its own (if interested, look at the tags for Consciousness, Bicameral Mind, Julian Jaynes, and Lewis Hyde):

The Transparent Self to Come?
Technological Fears and Media Panics
Western Individuality Before the Enlightenment Age
Juvenile Delinquents and Emasculated Males
The Breast To Rule Them All
The Agricultural Mind
“Yes, tea banished the fairies.”
Autism and the Upper Crust
Diets and Systems
Sleepwalking Through Our Dreams
Delirium of Hyper-Individualism
The Group Conformity of Hyper-Individualism
Individualism and Isolation
Hunger for Connection
To Put the Rat Back in the Rat Park
Rationalizing the Rat Race, Imagining the Rat Park

* * *

The Future of Nostalgia
by Svetlana Boym
pp. 25-30

Nostalgia was said to produce “erroneous representations” that caused the afflicted to lose touch with the present. Longing for their native land became their single-minded obsession. The patients acquired “a lifeless and haggard countenance,” and “indifference towards everything,” confusing past and present, real and imaginary events. One of the early symptoms of nostalgia was an ability to hear voices or see ghosts. Dr. Albert von Haller wrote: “One of the earliest symptoms is the sensation of hearing the voice of a person that one loves in the voice of another with whom one is conversing, or to see one’s family again in dreams.” 2 It comes as no surprise that Hofer’s felicitous baptism of the new disease both helped to identify the existing condition and enhanced the epidemic, making it a widespread European phenomenon. The epidemic of nostalgia was accompanied by an even more dangerous epidemic of “feigned nostalgia,” particularly among soldiers tired of serving abroad, revealing the contagious nature of the erroneous representations.

Nostalgia, the disease of an afflicted imagination, incapacitated the body. Hofer thought that the course of the disease was mysterious: the ailment spread “along uncommon routes through the untouched course of the channels of the brain to the body,” arousing “an uncommon and everpresent idea of the recalled native land in the mind.” 3 Longing for home exhausted the “vital spirits,” causing nausea, loss of appetite, pathological changes in the lungs, brain inflammation, cardiac arrests, high fever, as well as marasmus and a propensity for suicide. 4

Nostalgia operated by an “associationist magic,” by means of which all aspects of everyday life related to one single obsession. In this respect nostalgia was akin to paranoia, only instead of a persecution mania, the nostalgic was possessed by a mania of longing. On the other hand, the nostalgic had an amazing capacity for remembering sensations, tastes, sounds, smells, the minutiae and trivia of the lost paradise that those who remained home never noticed. Gastronomic and auditory nostalgia were of particular importance. Swiss scientists found that rustic mothers’ soups, thick village milk and the folk melodies of Alpine valleys were particularly conducive to triggering a nostalgic reaction in Swiss soldiers. Supposedly the sounds of “a certain rustic cantilena” that accompanied shepherds in their driving of the herds to pasture immediately provoked an epidemic of nostalgia among Swiss soldiers serving in France. Similarly, Scots, particularly Highlanders, were known to succumb to incapacitating nostalgia when hearing the sound of the bagpipes—so much so, in fact, that their military superiors had to prohibit them from playing, singing or even whistling native tunes in a suggestive manner. Jean-Jacques Rousseau talks about the effects of cowbells, the rustic sounds that excite in the Swiss the joys of life and youth and a bitter sorrow for having lost them. The music in this case “does not act precisely as music, but as a memorative sign.” 5 The music of home, whether a rustic cantilena or a pop song, is the permanent accompaniment of nostalgia—its ineffable charm that makes the nostalgic teary-eyed and tongue-tied and often clouds critical reflection on the subject.

In the good old days nostalgia was a curable disease, dangerous but not always lethal. Leeches, warm hypnotic emulsions, opium and a return to the Alps usually soothed the symptoms. Purging of the stomach was also recommended, but nothing compared to the return to the motherland believed to be the best remedy for nostalgia. While proposing the treatment for the disease, Hofer seemed proud of some of his patients; for him nostalgia was a demonstration of the patriotism of his compatriots who loved the charm of their native land to the point of sickness.

Nostalgia shared some symptoms with melancholia and hypochondria. Melancholia, according to the Galenic conception, was a disease of the black bile that affected the blood and produced such physical and emotional symptoms as “vertigo, much wit, headache, . . . much waking, rumbling in the guts . . . troublesome dreams, heaviness of the heart . . . continuous fear, sorrow, discontent, superfluous cares and anxiety.” For Robert Burton, melancholia, far from being a mere physical or psychological condition, had a philosophical dimension. The melancholic saw the world as a theater ruled by capricious fate and demonic play. 6 Often mistaken for a mere misanthrope, the melancholic was in fact a utopian dreamer who had higher hopes for humanity. In this respect, melancholia was an affect and an ailment of intellectuals, a Hamletian doubt, a side effect of critical reason; in melancholia, thinking and feeling, spirit and matter, soul and body were perpetually in conflict. Unlike melancholia, which was regarded as an ailment of monks and philosophers, nostalgia was a more “democratic” disease that threatened to affect soldiers and sailors displaced far from home as well as many country people who began to move to the cities. Nostalgia was not merely an individual anxiety but a public threat that revealed the contradictions of modernity and acquired a greater political importance.

The outburst of nostalgia both enforced and challenged the emerging conception of patriotism and national spirit. It was unclear at first what was to be done with the afflicted soldiers who loved their motherland so much that they never wanted to leave it, or for that matter to die for it. When the epidemic of nostalgia spread beyond the Swiss garrison, a more radical treatment was undertaken. The French doctor Jourdan Le Cointe suggested in his book written during the French Revolution of 1789 that nostalgia had to be cured by inciting pain and terror. As scientific evidence he offered an account of drastic treatment of nostalgia successfully undertaken by the Russians. In 1733 the Russian army was stricken by nostalgia just as it ventured into Germany, the situation becoming dire enough that the general was compelled to come up with a radical treatment of the nostalgic virus. He threatened that “the first to fall sick will be buried alive.” This was a kind of literalization of a metaphor, as life in a foreign country seemed like death. This punishment was reported to be carried out on two or three occasions, which happily cured the Russian army of complaints of nostalgia. 7 (No wonder longing became such an important part of the Russian national identity.) Russian soil proved to be a fertile ground for both native and foreign nostalgia. The autopsies performed on the French soldiers who perished in the proverbial Russian snow during the miserable retreat of the Napoleonic Army from Moscow revealed that many of them had brain inflammation characteristic of nostalgia.

While Europeans (with the exception of the British) reported frequent epidemics of nostalgia starting from the seventeenth century, American doctors proudly declared that the young nation remained healthy and didn’t succumb to the nostalgic vice until the American Civil War. 8 If the Swiss doctor Hofer believed that homesickness expressed love for freedom and one’s native land, two centuries later the American military doctor Theodore Calhoun conceived of nostalgia as a shameful disease that revealed a lack of manliness and unprogressive attitudes. He suggested that this was a disease of the mind and of a weak will (the concept of an “afflicted imagination” would be profoundly alien to him). In nineteenth-century America it was believed that the main reasons for homesickness were idleness and a slow and inefficient use of time conducive to daydreaming, erotomania and onanism. “Any influence that will tend to render the patient more manly will exercise a curative power. In boarding schools, as perhaps many of us remember, ridicule is wholly relied upon. . . . [The nostalgic] patient can often be laughed out of it by his comrades, or reasoned out of it by appeals to his manhood; but of all potent agents, an active campaign, with attendant marches and more particularly its battles is the best curative.” 9 Dr. Calhoun proposed as treatment public ridicule and bullying by fellow soldiers, an increased number of manly marches and battles and improvement in personal hygiene that would make soldiers’ living conditions more modern. (He also was in favor of an occasional furlough that would allow soldiers to go home for a brief period of time.)

For Calhoun, nostalgia was not conditioned entirely by individuals’ health, but also by their strength of character and social background. Among the Americans the most susceptible to nostalgia were soldiers from the rural districts, particularly farmers, while merchants, mechanics, boatmen and train conductors from the same area or from the city were more likely to resist the sickness. “The soldier from the city cares not where he is or where he eats, while his country cousin pines for the old homestead and his father’s groaning board,” wrote Calhoun. 10 In such cases, the only hope was that the advent of progress would somehow alleviate nostalgia and the efficient use of time would eliminate idleness, melancholy, procrastination and lovesickness.

As a public epidemic, nostalgia was based on a sense of loss not limited to personal history. Such a sense of loss does not necessarily suggest that what is lost is properly remembered and that one still knows where to look for it. Nostalgia became less and less curable. By the end of the eighteenth century, doctors discovered that a return home did not always treat the symptoms. The object of longing occasionally migrated to faraway lands beyond the confines of the motherland. Just as genetic researchers today hope to identify a gene not only for medical conditions but social behavior and even sexual orientation, so the doctors in the eighteenth and nineteenth centuries looked for a single cause of the erroneous representations, one so-called pathological bone. Yet the physicians failed to find the locus of nostalgia in their patient’s mind or body. One doctor claimed that nostalgia was a “hypochondria of the heart” that thrives on its symptoms. To my knowledge, the medical diagnosis of nostalgia survived in the twentieth century in one country only—Israel. (It is unclear whether this reflects a persistent yearning for the promised land or for the diasporic homelands left behind.) Everywhere else in the world nostalgia turned from a treatable sickness into an incurable disease. How did it happen that a provincial ailment, maladie du pays , became a disease of the modern age, mal du siècle?

In my view, the spread of nostalgia had to do not only with dislocation in space but also with the changing conception of time. Nostalgia was a historical emotion, and we would do well to pursue its historical rather than psychological genesis. There had been plenty of longing before the seventeenth century, not only in the European tradition but also in Chinese and Arabic poetry, where longing is a poetic commonplace. Yet the early modern conception embodied in the specific word came to the fore at a particular historical moment. “Emotion is not a word, but it can only be spread abroad through words,” writes Jean Starobinski, using the metaphor of border crossing and immigration to describe the discourse on nostalgia. 11 Nostalgia was diagnosed at a time when art and science had not yet entirely severed their umbilical ties and when the mind and body—internal and external well-being—were treated together. This was a diagnosis of a poetic science—and we should not smile condescendingly on the diligent Swiss doctors. Our progeny well might poeticize depression and see it as a metaphor for a global atmospheric condition, immune to treatment with Prozac.

What distinguishes modern nostalgia from the ancient myth of the return home is not merely its peculiar medicalization. The Greek nostos , the return home and the song of the return home, was part of a mythical ritual. […] Modern nostalgia is a mourning for the impossibility of mythical return, for the loss of an enchanted world with clear borders and values; it could be a secular expression of a spiritual longing, a nostalgia for an absolute, a home that is both physical and spiritual, the edenic unity of time and space before entry into history. The nostalgic is looking for a spiritual addressee. Encountering silence, he looks for memorable signs, desperately misreading them.

The diagnosis of the disease of nostalgia in the late seventeenth century took place roughly at the historical moment when the conception of time and history were undergoing radical change. The religious wars in Europe came to an end but the much prophesied end of the world and doomsday did not occur. “It was only when Christian eschatology shed its constant expectations of the immanent arrival of doomsday that a temporality could have been revealed that would be open to the new and without limit.” 13 It is customary to perceive “linear” Judeo-Christian time in opposition to the “cyclical” pagan time of eternal return and discuss both with the help of spatial metaphors. 14 What this opposition obscures is the temporal and historical development of the perception of time that since Renaissance on has become more and more secularized, severed from cosmological vision.

Before the invention of mechanical clocks in the thirteenth century the question, What time is it? was not very urgent. Certainly there were plenty of calamities, but the shortage of time wasn’t one of them; therefore people could exist “in an attitude of temporal ease. Neither time nor change appeared to be critical and hence there was no great worry about controlling the future.” 15 In late Renaissance culture, Time was embodied in the images of Divine Providence and capricious Fate, independent of human insight or blindness. The division of time into Past, Present and Future was not so relevant. History was perceived as a “teacher of life” (as in Cicero’s famous dictum, historia magistra vitae ) and the repertoire of examples and role models for the future. Alternatively, in Leibniz’s formulation, “The whole of the coming world is present and prefigured in that of the present.” 16

The French Revolution marked another major shift in European mentality. Regicide had happened before, but not the transformation of the entire social order. The biography of Napoleon became exemplary for an entire generation of new individualists, little Napoleons who dreamed of reinventing and revolutionizing their own lives. The “Revolution,” at first derived from natural movement of the stars and thus introduced into the natural rhythm of history as a cyclical metaphor, henceforth attained an irreversible direction: it appeared to unchain a yearned-for future. 17 The idea of progress through revolution or industrial development became central to the nineteenth-century culture. From the seventeenth to the nineteenth century, the representation of time itself changed; it moved away from allegorical human figures—an old man, a blind youth holding an hourglass, a woman with bared breasts representing Fate—to the impersonal language of numbers: railroad schedules, the bottom line of industrial progress. Time was no longer shifting sand; time was money. Yet the modern era also allowed for multiple conceptions of time and made the experience of time more individual and creative.

“The Origin of Consciousness, Gains and Losses: Walker Percy vs. Julian Jaynes”
by Laura Mooneyham White
from Gods, Voices, and the Bicameral Mind
ed. by Marcel Kuijsten

Jaynes is plainly one who understands the human yearning for Eden, the Eden of bicameral innocence. He writes of our longings for a return to that lost organization of human mentality, a return to “lost certainty and splendour.” 44 Jones believes, in fact, that Jaynes speaks for himself when he describes the “yearning for divine volition and service [which] is with us still,” 45 of our “nostalgic anguish” which we feel for lost bicamerality. 46 Even schizophrenia, seen from Jaynes’s perspective as a vestige of bicamerality, is the anguishing state it is only because the relapse to bicamerality

is only partial. The learnings that make up a subjective consciousness are powerful and never totally suppressed. And thus the terror and the fury, the agony and the despair. … The lack of cultural support and definition for the voices [heard by schizophrenics] … provide a social withdrawal from the behavior of the absolutely social individual of bicameral societies. … [W]ithout this source of security, … living with hallucinations that are unacceptable and denied as unreal by those around him, the florid schizophrenic is in an opposite world to that of the god-owned laborers of Marduk. … [He] is a mind bared to his environment, waiting on gods in a godless world. 47

Jones, in fact, asserts that Jaynes’s discussion of schizophrenia is held in terms “reminiscent of R. D. Laing’s thesis that schizophrenics are the only sane people in our insane world.” 48 Jones goes on to say that “Jaynes, it would seem, holds that we would all be better off if ‘everyone’ were once again schizophrenic, if we could somehow return to a bicameral society which had not yet been infected by the disease of thinking.” 49

Jaynes does not, in my opinion, intimate a position nearly as reactionary as this; he has in fact made elsewhere an explicit statement to the effect that he himself feels no such longing to return to bicamerality, that he would in fact “shudder” at such a return. 50 Nonetheless, Jaynes does seem at some points in his book to describe introspection as a sort of pathological development in human history. For instance, instead of describing humanity’s move towards consciousness as liberating, Jaynes calls it “the slow inexorable profaning of our species.” 51 And no less an eminence than Northrop Frye recognized this tendency in Jaynes to disvalue consciousness. After surveying Jaynes’s argument and admitting the fascination of that argument’s revolutionary appeal, Frye points out that Jaynes’s ideas provoke a disturbing reflection: “seeing what a ghastly mess our egocentric consciousness has got us into, perhaps the sooner we get back to … hallucinations the better.” Frye expands his discussion of Jaynes to consider the cultural ramifications of this way of thinking, what he terms “one of the major cultural trends of our time”:

It is widely felt that our present form of consciousness, with its ego center, has become increasingly psychotic, incapable of dealing with the world, and that we must develop a more intensified form of consciousness, recapturing many of … Jaynes’ ‘bicameral’ features, if we are to survive the present century. 52

Frye evidently has little sympathy with such a position which would hold that consciousness is a “late … and on the whole regrettable arrival on the human scene” 53 rather than the wellspring of all our essentially human endeavors and achievements: art, philosophy, religion and science. The ground of this deprecatory perspective on consciousness, that is, a dislike or distrust of consciousness, has been held by many modern and postmodern thinkers and artists besides Jaynes, among them Sartre, Nietzsche, Faulkner, Pynchon, Freud, and Lacan, so much so that we might identify such an ill opinion of consciousness as a peculiarly modern ideology.

“Remembrance of Things (Far) Past”
by Julian Jaynes
from The Julian Jaynes Collection
ed. by Marcel Kuijsten

And nostalgia too. For with time metaphored as space, so like the space of our actual lives, a part of us solemnly keeps loitering behind, trying to visit past times as if they were actual spaces. Oh, what a temptation is there! The warm, sullen longing to return to scenes long vanished, to relive some past security or love, to redress some ancient wrong or redecide a past regret, or alter some ill-considered actions toward someone lost to our present lives, or to fill out past omissions — these are artifacts of our new remembering consciousness. Side effects. And they are waste and filler unless we use them to learn about ourselves.

Memory is a privilege for us who are born into the last three millennia. It is both an advantage and a predicament, liberation and an imprisonment. Memory is not a part of our biological evolution, as is our capacity to learn habits or simple knowings. It is an off-shoot of consciousness acquired by mankind only a hundred generations ago. It is thus the new environment of modern man. It is one in which we sometimes are like legal aliens waiting for naturalization. The feeling of full franchise and citizenship in that new environment is a quest that is the unique hidden adventure of us all.

The Suffering System
by David Loy

In order to understand why that anxiety exists, we must relate dukkha to another crucial Buddhist term, anatta, or “non-self.” Our basic frustration is due most of all to the fact that our sense of being a separate self, set apart from the world we are in, is an illusion. Another way to express this is that the ego-self is ungrounded, and we experience this ungroundedness as an uncomfortable emptiness or hole at the very core of our being. We feel this problem as a sense of lack, of inadequacy, of unreality, and in compensation we usually spend our lives trying to accomplish things that we think will make us more real.

But what does this have to do with social challenges? Doesn’t it imply that social problems are just projections of our own dissatisfaction? Unfortunately, it’s not that simple. Being social beings, we tend to group our sense of lack, even as we strive to compensate by creating collective senses of self.

In fact, many of our social problems can be traced back to this deluded sense of collective self, this “wego,” or group ego. It can be defined as one’s own race, class, gender, nation (the primary secular god of the modern world), religion, or some combination thereof. In each case, a collective identity is created by discriminating one’s own group from another. As in the personal ego, the “inside” is opposed to the other “outside,” and this makes conflict inevitable, not just because of competition with other groups, but because the socially constructed nature of group identity means that one’s own group can never feel secure enough. For example, our GNP is not big enough, our nation is not powerful (“secure”) enough, we are not technologically developed enough. And if these are instances of group-lack or group-dukkha, our GNP can never be big enough, our military can never be powerful enough, and we can never have enough technology. This means that trying to solve our economic, political, and ecological problems with more of the same is a deluded response.

Just Smile.

“Pain in the conscious human is thus very different from that in any other species. Sensory pain never exists alone except in infancy or perhaps under the influence of morphine when a patient says he has pain but does not mind it. Later, in those periods after healing in which the phenomena usually called chronic pain occur, we have perhaps a predominance of conscious pain.”
~Julian Jaynes, Sensory Pain and Conscious Pain

I’ve lost count of the number of times I’ve seen a child react to a cut or stumble only after their parent(s) freaked out. Children are highly responsive to adults. If others think something bad has happened, they internalize this and act accordingly. Kids will do anything to conform to expectations. But most kids seem impervious to pain, assuming they don’t get the message that they are expected to put on an emotional display.

This difference can be seen when comparing how a child acts by themselves and how they act around a parent or other authority figure. You’ll sometimes see a kid looking around to see if there is an audience paying attention before crying or having a tantrum. We humans are social creatures and our behavior is always social. This is naturally understood even by infants who have an instinct for social cues and social response.

Pain is a physical sensation, an experience that passes, whereas suffering is in the mind, a story we tell ourselves. This is why trauma can last for decades after a bad experience. The sensory pain is gone but the conscious pain continues. We keep repeating a story.

It’s interesting that some cultures like the Piraha don’t appear to experience trauma from the exact same events that would traumatize a modern Westerner. Nor are depression and anxiety common among them, nor an obsessive fear of death. Not only are the Piraha physically tougher but psychologically tougher as well. Apparently, they tell different stories that embody other expectations.

So, what kind of society is it that we’ve created with our Jaynesian consciousness of traumatized hyper-sensitivity and psychological melodrama? Why are we so attached to our suffering and victimization? What does this story offer us in return? What power does it hold over us? What would happen if we changed the master narrative of our society by replacing the competing claims of victimhood with an entirely different way of relating? What if outward performances of suffering were no longer expected or rewarded?

For one, we wouldn’t have a man-baby like Donald Trump as our national leader. He is the perfect personification of this conscious pain crying out for attention. And we wouldn’t have had the white victimhood that put him into power. But neither would we have any of the other victimhoods that these particular whites were reacting to. The whole culture of victimization would lose its power.

The social dynamic would be something else entirely. It’s hard to imagine what that might be. We’re addicted to the melodrama and we carefully enculturate and indoctrinate each generation to follow our example. To shake us loose from our socially constructed reality would require a challenge to our social order. The extremes of conscious pain aren’t only about our way of behaving. They are inseparable from how we maintain the world we are so desperately attached to.

We need the equivalent of how the father in the cartoon below relates to his son. But we need it on the collective level. Or at least we need this in the United States. What if the rest of the world simply stopped reacting to American leaders and American society? Just smile.


Credit: The basic observation and the cartoon was originally shared by Mateus Barboza on the Facebook group “Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind”.

Who are we hearing and talking to?

“We are all fragmented. There is no unitary self. We are all in pieces, struggling to create the illusion of a coherent ‘me’ from moment to moment.”
~ Charles Fernyhough

“Bicamerality hidden in plain sight.”
~ Andrew Bonci


“What I tell you in the dark, speak in the daylight; what is whispered in your ear, proclaim from the roofs.”
~ Matthew 10:27

“illusion of a completed, unitary self”
Bundle Theory: Embodied Mind, Social Nature
The Mind in the Body
Making Gods, Making Individuals
The Spell of Inner Speech
Reading Voices Into Our Minds
Verbal Behavior
Keep Your Experience to Yourself

The Breast To Rule Them All

The breast is best. That signifies the central importance of breastfeeding. But one could also take it as pointing to our cultural obsession with human mammary glands, something not shared by all cultures. I’m going to make the argument that the breast, at least in American society, is the main site of social control. Before making my case, let me explore what social control has meant, as society has developed over the millennia.

There is a connection between social control and self-control. The most extreme forms of this dualistic dynamic are authoritarianism and hyper-individualism (Westworld, Scripts, and Freedom), the reason liberty has a close relationship to slavery (Liberty, Freedom, and Fairness). Julian Jaynes, in his classic work, makes this clear, although he confuses the matter a bit. He sometimes refers to the early Bronze Age societies as ‘authoritarian’, but he definitely does not mean totalitarianism, something that only describes the civilizations that followed later on. In the broader usage, the word ‘authoritarianism’ is sometimes tinged with his notions of archaic authorization and collective cognitive imperative (“Beyond that, there is only awe.”). The authority in question, as Jaynes argued, consists of the external or dispersed voices that early humans heard and followed (as today we hear and follow the voices in our own metaphorical “inner space”, what we call thoughts or what Jaynes referred to as self-authorization; The Spell of Inner Speech). Without an archaic authorization heard in the world allowing social order to emerge organically, an authoritarian system has to enforce the social order from above: “the ultimate power of authoritarianism, as Jaynes makes clear, isn’t overt force and brute violence. Outward forms of power are only necessary to the degree that external authorization is relatively weak, as is typically the case in modern societies” (“Beyond that, there is only awe.”).

And the ego is this new form of authoritarian power internalized, a monotheistic demiurge to rule over the inner world. Totalitarianism turns in on itself and becomes Jaynesian consciousness, a totalizing field of identity, but the bicameral mind continues to lurk in the shadows, something any aspiring authoritarian can take advantage of (Ben G. Price, Authoritarian Grammar and Fundamentalist Arithmetic). “We are all potential goosestepping authoritarian followers, waiting for the right conditions to bring our primal natures out into the open. With the fiery voice of authority, we can be quickly lulled into compliance by an inspiring or invigorating vision […] The danger is that the more we idolize individuality the more prone we become to what is so far beyond the individual. It is the glare of hyper-individualism that casts the shadow of authoritarianism” (Music and Dance on the Mind).

The practice of literally carving laws into stone came rather late in the Bronze Age, during the period that preceded the near total collapse of all the major societies. That totalitarianism then, as today, coincided with brutality and oppression never before seen in the historical record. Authoritarianism as totalitarianism apparently was something new in human experience. That might be because totalitarianism requires higher levels of abstraction, such as dogmatic laws that are envisioned and enforced as universal truths, principles, and commandments. Such abstract thinking was encouraged by the spread of more complex writing (e.g., literature), beyond what earlier had been primarily limited to minimalistic record-keeping. Individualism, as I said, also arose out of this violent birth of what would eventually mature into the Axial Age. It was the radically emergent individual, after all, that needed to be controlled. We now take all of this for granted, as simply the way the world is.

There was authority as archaic authorization prior to any hint of totalitarianism, but I question if it is useful to speak of it as authoritarianism. The earliest civilizations were mostly city-states, closer to hunter-gatherer tribes than to anything we’d recognize in the later vast empires or in our modern nation-states. Even in gaining the capacity for great achievements, the earliest civilizations remained rather basic in form. Consider the impressive Egyptian kingdoms that, having constructed vast stone monuments, didn’t even bother to build roads and bridges. They were such a small population so tightly clustered together in that narrow patch of fertility surrounded and protected by desert that nothing more complex was required. There weren’t the vast distances of a centralized government, the disconnections between complex hierarchies, nor numerous specialized social roles beyond the immediate work at hand. These societies were small and simple, the conditions necessary for their maintaining order through social identity, through the conformity of groupthink and cultural worldview, rather than violent force. Besides lacking written laws, they also lacked police forces and standing armies. They were loosely organized communities, having originated as informal settlements that had become permanent over time.

Now back to the breast, the first source of sustenance and nurturance. Unfortunately, we don’t have any idea about what the ancients might have thought of the breast as a focus of concern, although Jaynes did have some fascinating thoughts about the naked body and sexuality. As totalitarianism appeared late, so did pornography in the broad sense as found in portrayals of sex engraved in stone, around the same time that laws also were being engraved. With fantasies of sexuality, there was sin that needed to be controlled, guilt that needed to be punished, and the laws to achieve this end. It was all of a single package, an emergent worldview and way of being, an anxiety-driven self-consciousness.

Lacking a time travel machine, the next best option is to look at other societies that challenge biases of Western modernity, specifically here in the United States. Let me begin with American society. First off, I’d note that with the Puritan comes the prurient. Americans are obsessed with all things sexual. And so the sexual has a way of pervading our society. Even something so innocent as the female breast, designed by evolution to feed infants, somehow becomes a sexual object. That projection of lust and shame isn’t seen in all societies. In hunter-gatherer tribes, it is common for the breast to have no grand significance at all. The weirdness doesn’t end there. We don’t have to look to tribal people to find cultures that aren’t sexually prudish. Among some traditional cultures in Asia and elsewhere, even the touching of someone else’s genitals doesn’t necessarily express sexual intentions, as instead it can be a way of greeting someone or showing fondness for a family member. But admittedly, the cultures that seem the most foreign to us are those that have remained the most isolated from Western influences.

The Piraha, according to Daniel Everett, are rather relaxed about sex and sexuality (Dark Matter of the Mind). It’s not that they typically have sex out in the open, except during communal dances when orgies sometimes occur, but their lifestyle doesn’t accord much privacy. Talking about sex is no big deal and children are exposed to it from a young age. Sexuality is considered a normal part of life, certainly not something to be shamed or repressed. As with some other societies, sexual play is common and does not always lead to sex. That is true among both adults and children, including what Westerners would call pedophilia. A child groping an adult’s genitals is not considered a big deal to them. And certainly there is no issue with two children dry-humping each other or whatever, as children are wont to do in their curiosity and budding sexuality. Sex is so common among the Piraha that the range of potential sexual partners is wide, including a cousin, step-sibling, or step-parent. The main restrictions are between full siblings and between a child and a biological parent or grandparent. This is a close-knit community.

“The Pirahãs all seem to be intimate friends,” writes Everett, “no matter what village they come from. Pirahãs talk as though they know every other Pirahã extremely well. I suspect that this may be related to their physical connections. Given the lack of stigma attached to and the relative frequency of divorce, promiscuousness associated with dancing and singing, and post- and prepubescent sexual experimentation, it isn’t far off the mark to conjecture that many Pirahãs have had sex with a high percentage of the other Pirahãs. This alone means that their relationships will be based on an intimacy unfamiliar to larger societies (the community that sleeps together stays together?). Imagine if you’d had sex with a sizable percentage of the residents of your neighborhood and that this fact was judged by the entire society as neither good nor bad, just a fact about life— like saying you’ve tasted many kinds of food” (Don’t Sleep, There Are Snakes, p. 88).

[As a quick note, the Piraha have some interesting practices with breastfeeding. When hunting, orphaned animals sometimes are brought back to the village and breastfed alongside human offspring, one at each breast. These human-raised animals will often be eaten later on. But that must create another kind of intimacy for babies and toddlers, a kind of intimacy that includes other species. The toddler who is weaned might have as one of his first meals the meat of the animal that was his early playmate or at least breast-mate. Their diet, as with their entire lifestyle, is intimate in numerous ways.]

That offers quite the contrast to our own society. Appropriate ways of relating and touching are much more constrained (certainly, breastfeeding other species is not typical for American mothers). Not only would an adult Westerner be imprisoned for touching a child’s genitalia and a child severely chastised for touching an adult’s genitalia, two children would be shamed for touching one another or even for touching themselves. Think about that. Think about all of the children over the generations who have been ridiculed, screamed at, spanked, beaten, or otherwise traumatized for simply touching themselves or innocently playing with another child. Every form of touch is potentially fraught and becoming ever more fraught over time. This surely causes immense fear and anxiety in children raised in such a society. A psychological scarification forms into thick egoic boundaries, the individual isolated and separate from all others. It is the foot-binding of the human mind.

There is one and only one form of touch young children in the West are almost always guaranteed. They can breastfeed. They are allowed human contact with their mother’s breast. And it has become increasingly common for breastfeeding to extend for the first several years. All of the psychic energy that has few other human outlets of skin-to-skin contact gets narrowed down to the mother’s breast. The potency of this gets underestimated, as it makes many of us uncomfortable to think about it. Consider that a significant number of mothers have experienced an orgasm while breastfeeding. This happens often enough to be well within the range of a normal biological response, assuming it’s not cultural. Yet such widespread experience is likely to be judged as perverse, either by the mother in judging herself or by others if she were ever to admit to it. The breast becomes a site of shame, even as it is a site of desire.

Then, as part of weaning, the child is given a pacifier. All the psychic energy that was limited to the breast then gets transferred to an inanimate object (Pacifiers, Individualism & Enculturation). The argument for pacifiers is that they’re self-soothing, but when you think about it, that is rather demented. Young children need parents and other adults to soothe them. For them to not be able to rely upon others in this basic human need creates a psychological crisis. The pacifier lacks any human quality, any nurturance or nourishment. It is empty and that emptiness is internalized. The child becomes identified with the pacifier as object. The egoic-self becomes an object with a part of the psyche that stands outside of itself (what Jaynes refers to as the analogous ‘I’ and metaphorical ‘me’) — the bundled mind becomes a splintered self (Bundle Theory: Embodied Mind, Social Nature). This is extremely bizarre, an expression of WEIRD culture (western, educated, industrialized, rich, and democratic; although the last part is questionable in the case of the United States). Until quite recently in the scheme of history and evolution, regular intimacy among humans was the norm. The first pacifier wasn’t used until 1935.

So, even in the West, some of these changes don’t go back very far. A certain kind of prudishness was introduced to the Western mind with Christianity, one of the transformative effects of the Axial Age. But even then, sexuality was much more relaxed in the Western world for a long time after that. “As late as Feudalism, heavily Christianized Europe offered little opportunity for privacy and maintained a relatively open attitude about sexuality during many public celebrations, specifically Carnival, and they spent an amazing amount of their time in public celebrations. Barbara Ehrenreich describes this ecstatic communality in Dancing in the Streets. Like the Piraha, these earlier Europeans had a more social and fluid sense of identity” (Hunger for Connection). It is no surprise that, as more open sexuality and ecstatic communality declined, modern hyper-individualism followed. Some like to praise the Western mind as more fluid (Ricardo Duchesne, The Higher Cognitive Fluidity of the European Mind), but for the same reason it is also more unstable and sometimes self-destructive. This is a far different kind of fluidity, if we are to call it that at all. Individuality, in its insatiable hunger, cannibalizes its own social foundation.

* * *

It occurs to me that this breast obsession is another example of symbolic conflation. As I’ve often explained, a symbolic conflation is the central way of maintaining social order. And the body is the primary field of its operation, typically involving highly potent focal points of sexuality (e.g., abortion). The symbolic conflation obscures and distracts from the real issues and points of conflict. Obviously, the female breast becomes a symbol of something far beyond its evolutionary and biological reality as a mammalian mammary gland. This also relates to the discussion of metonymy and shame by Lewis Hyde in his book The Trickster Makes This World — see two of my posts where I connect Hyde’s work to that of Jaynes’: Lock Without a Key and “Why are you thinking about this?”.

* * *

Do Other Cultures Allow Sex Acts to Calm Babies?
It depends on how you define “sex act.”
by Cecil Adams

Not to go all Bill Clinton on you, but we need to define what we mean by “performing a sexual act.” For now let’s just say that, based strictly on appearances, some cultures tolerate stuff that in the United States would get you branded as a pervert. Examples:

In 2006 a Cambodian immigrant living in the Las Vegas area was charged with sexual assault for allegedly performing fellatio on her 6-year-old son. The woman’s attorney said what she’d actually done was kiss the kid’s penis, once, when he was 4 or 5. A spokesperson for the Cambodian Association of America said that while this kind of thing wasn’t widespread in Cambodia, some rural folk went in for it as an expression of love or respect, although in his experience never with children older than 1 or maybe 2.

En route to being elected U.S. senator from Virginia in 2006, Jim Webb, onetime Secretary of the Navy under Reagan, was lambasted by his opponent for a passage in his 2001 novel Lost Soldiers in which a Thai man picks up his naked young son and puts his penis in his mouth. Webb responded that he had personally witnessed such a greeting in a Bangkok slum.

Numerous ethnographers report that mothers and caregivers in rural New Guinea routinely fondle the genitals of infants and toddlers of both sexes. In the case of boys this supposedly aids the growth of the penis. It’s often done in public and is a source of great amusement.

The Telugu-speaking people of central India dote on the penises of boys up through age six, which they hold, rub, and kiss. (Girls escape with minor same-sex touching.) A typical greeting involves an adult grabbing a boy’s arm with one hand and his penis with the other.

A 1946 report claimed that among lower-class Japanese families, parents would play with the genitals of children to help them fall asleep, and a researcher visiting Japan in the 1930s noted that mothers played with the genitals of their sons.

I didn’t make an exhaustive search and so don’t know to what extent such things occur in Latin America, Europe, Australia, or elsewhere. However, it appears that:

Fooling with kids’ privates is a fairly widespread practice in Asia, particularly among people toward the lower end of the socioeconomic scale. The reports are too numerous and credible for them all to be dismissed as the ravings of hysterical Westerners. My surmise is that, as societies become more westernized, urban, and affluent, the practice dies out.

The acts are sexual in the sense that those doing the fondling are well aware of the sexual implications and find it droll to give a little boy an erection.

Lurid tales occasionally do surface. Reports of mother-son incest were briefly faddish in Japanese magazines in the 1980s. These stories played off the unflattering Japanese stereotype of the mother obsessed with getting her son into a top school, suggesting some “education mamas” would violate the ultimate taboo to help their horny pubescent boys stay relaxed and focused on studying. A few Westerners have taken these urban legends at face value. Lloyd deMause, founder of and prolific contributor to a publication called the Journal of Psychohistory, cites the Japanese mother-son stories as prime evidence in his account of what he calls “the universality of incest.” It’s pretty clear, however, that incest inspires as much revulsion in Japan as anywhere else.

A less excitable take on things is that Asian societies just aren’t as hung up about matters of the flesh as we Western prudes are. In Japan, mixed-sex naked public bathing was fairly common until the postwar occupation, and some families bathe together now if they have a big enough tub. Nonetheless, so far as I can determine, Asian societies have always drawn a bright line between fooling around with babies and toddlers and having sex with your kids. If Westerners can’t fathom that elementary distinction, well, whose problem is that?

Dark Matter of the Mind
by Daniel L. Everett
Kindle Location 2688-2698

These points of group attachment are strengthened during the children’s maturation through other natural experiences of community life as the children learn their language, the configuration of their village and to sleep on the ground or on rough, uneven wooden platforms made from branches or saplings. As with other children of traditional societies, Pirahã young people experience the biological aspects of life with far less buffering than Western children. They remember these experiences, consciously or unconsciously, even though these apperceptions are not linguistic.

Pirahã children observe their parents’ physical activities in ways that children from more buffered societies do not (though often similar to the surrounding cultures just mentioned). They regularly see and hear their parents and other members of the village engage in sex (though Pirahã adults are modest by most standards, there is still only so much privacy available in a world without walls and locked doors), eliminate bodily waste, bathe, die, suffer severe pain without medication, and so on. 8 They know that their parents are like them. A small toddler will walk up to its mother while she is talking, making a basket, or spinning cotton and pull her breast out of the top of her dress (Pirahã women use only one dress design for all), and nurse— its mother’s body is its own in this respect. This access to the mother’s body is a form of entitlement and strong attachment.

Kindle Location 2736-2745

Sexual behavior is another behavior distinguishing Pirahãs from most middle-class Westerners early on. A young Pirahã girl of about five years came up to me once many years ago as I was working and made crude sexual gestures, holding her genitalia and thrusting them at me repeatedly, laughing hysterically the whole time. The people who saw this behavior gave no sign that they were bothered. Just child behavior, like picking your nose or farting. Not worth commenting about.

But the lesson is not that a child acted in a way that a Western adult might find vulgar. Rather, the lesson, as I looked into this, is that Pirahã children learn a lot more about sex early on, by observation, than most American children. Moreover, their acquisition of carnal knowledge early on is not limited to observation. A man once introduced me to a nine- or ten-year-old girl and presented her as his wife. “But just to play,” he quickly added. Pirahã young people begin to engage sexually, though apparently not in full intercourse, from early on. Touching and being touched seem to be common for Pirahã boys and girls from about seven years of age on. They are all sexually active by puberty, with older men and women frequently initiating younger girls and boys, respectively. There is no evidence that the children then or as adults find this pedophilia the least bit traumatic.

Don’t Sleep, There Are Snakes
by Daniel L. Everett
pp. 82-84

Sex and marriage also involve no ritual that I can see. Although Pirahãs are reluctant to discuss their own intimate sexual details, they have done so in general terms on occasion. They refer to cunnilingus and fellatio as “licking like dogs,” though this comparison to animal behavior is not intended to denigrate the act at all. They consider animals good examples of how to live. Sexual intercourse is described as eating the other. “I ate him” or “I ate her” means “I had sexual intercourse with him or her.” The Pirahãs quite enjoy sex and allude to it or talk about others’ sexual activity freely.

Sex is not limited to spouses, though that is the norm for married men and women. Unmarried Pirahãs have sex as they wish. To have sex with someone else’s spouse is frowned upon and can be risky, but it happens. If the couple is married to each other, they will just walk off in the forest a ways to have sex. The same is true if neither member of the couple is married. If one or both members of the couple are married to someone else, however, they will usually leave the village for a few days. If they return and remain together, the old partners are thereby divorced and the new couple is married. First marriages are recognized simply by cohabitation. If they do not choose to remain together, then the cuckolded spouses may or may not choose to allow them back. Whatever happens, there is no further mention of it or complaint about it, at least not openly, once the couple has returned. However, while the lovers are absent from the village, their spouses search for them, wail, and complain loudly to everyone. Sometimes the spouses left behind asked me to take them in my motorboat to search for the missing partners, but I never did. […]

During the dance, a Pirahã woman asked me, “Do you only lie on top of one woman? Or do you want to lie on others?”
“I just lie on one. I don’t want others.”
“He doesn’t want other women,” she announced.
“Does Keren like other men?”
“No, she just wants me,” I responded as a good Christian husband.

Sexual relations are relatively free between unmarried individuals and even between individuals married to other partners during village dancing and singing, usually during full moons. Aggression is observed from time to time, from mild to severe (Keren witnessed a gang rape of a young unmarried girl by most of the village men). But aggression is never condoned and it is very rare.

p. 88

The Pirahãs all seem to be intimate friends, no matter what village they come from. Pirahãs talk as though they know every other Pirahã extremely well. I suspect that this may be related to their physical connections. Given the lack of stigma attached to and the relative frequency of divorce, promiscuousness associated with dancing and singing, and post- and prepubescent sexual experimentation, it isn’t far off the mark to conjecture that many Pirahãs have had sex with a high percentage of the other Pirahãs. This alone means that their relationships will be based on an intimacy unfamiliar to larger societies (the community that sleeps together stays together?). Imagine if you’d had sex with a sizable percentage of the residents of your neighborhood and that this fact was judged by the entire society as neither good nor bad, just a fact about life— like saying you’ve tasted many kinds of food.

pp. 102-105

Again, couples initiate cohabitation and procreation without ceremony. If they are unattached at the time, they simply begin to live together in the same house. If they are married, they first disappear from the village for two to four days, while their former spouses call for and search for them. Upon their return, they begin a new household or, if it was just a “fling,” return to their previous spouses. There is almost never any retaliation from the cuckolded spouses against those with whom their spouses have affairs. Relations between men and women and boys and girls, whether married or not, are always cordial and often marked by light to heavy flirting.

Sexually it is the same. So long as children are not forced or hurt, there is no prohibition against their participating in sex with adults. I remember once talking to Xisaoxoi, a Pirahã man in his late thirties, when a nine- or ten-year-old girl was standing beside him. As we talked, she rubbed her hands sensually over his chest and back and rubbed his crotch area through his thin, worn nylon shorts. Both were enjoying themselves.

“What’s she doing?” I asked superfluously.
“Oh, she’s just playing. We play together. When she’s big she will be my wife” was his nonchalant reply— and, indeed, after the girl went through puberty, they were married.

Marriage itself among the Pirahãs, like marriage in all cultures, comes with sets of mores that are enforced in different ways. People often ask me, for example, how the Pirahãs deal with infidelity in marriage. So how would this couple, the relatively old man and the young girl, deal with infidelity? They would deal with it like other Pirahãs, in what I take to be a very civilized fashion.

The solution or response to infidelity can even be humorous. One morning I walked over to my friend Kóhoibiíihíai’s home to ask him to teach me more of his language. As I approached his hut, everything looked pretty normal. His wife, Xíbaihóíxoi, was sitting up and he was lying down with his head in her lap.

“Hey, can you help me learn Pirahã words today?” I inquired.

He started to raise his head to answer. Then I noticed that Xíbaihóíxoi was holding him by the hair of his head. As he tried to raise his head, she jerked his head back by the hair, picked up a stick at her side and started whacking him irregularly on the top of his head, occasionally hitting him in the face. He laughed hard, but not too hard, because she jerked his hair every time he moved.

“My wife won’t let me go anywhere,” he said, giggling.

His wife was smirking but the grin disappeared right away and she struck him harder. Some of those whacks looked pretty painful to me. Kóhoi wasn’t in the best position to talk, so I left and found Xahoábisi, another good language teacher. He could work with me, he said.

As we walked back to my house together, I asked, “So what is going on with Kóhoibiíihíai? Xíbaihóíxoi is holding down his head and hitting him with a stick.”
“Oh, he was playing with another woman last night,” Xahoábisi chortled. “So this morning his woman is mad at him. He can’t go anywhere today.”

The fact that Kóhoi, a strong man and a fearless hunter, would lie like that all day and allow his wife to whack him at will (three hours later I revisited them and they were in the same position) was clearly partly voluntary penance. But it was partly a culturally prescribed remedy. I have since seen other men endure the same treatment.

By the next day, all seemed well. I didn’t hear of Kóhoi playing around with women again for quite a while after that. A nifty way to solve marital problems, I thought. It doesn’t always work, of course. There are divorces (without ceremony) among the Pirahãs. But this form of punishment for straying is effective. The woman can express her anger tangibly and the husband can show her he is sorry by letting her bang away on his head at will for a day. It is important to note that this involves no shouting or overt anger. The giggling, smirking, and laughter are all necessary components of the process, since anger is the cardinal sin among the Pirahãs. Female infidelity is also fairly common. When this happens the man looks for his wife. He may say something mean or threatening to the male who cuckolded him. But violence against anyone, children or adults, is unacceptable to the Pirahãs.

Other observations of Pirahã sexuality were a bit more shocking to my Christian sensibilities, especially when they involved clashes between our culture and Pirahã values. One afternoon during our second family stay among the Pirahãs, I walked out of the back room of our split-wood and thatched-roof home on the Maici into the central area of the house, which had no walls and in practice belonged more to the Pirahãs than to us. Shannon was staring at two Pirahã men lying on the floor in front of her. They were laughing, with their shorts pulled down around their ankles, each grabbing the other’s genitals and slapping each other on the back, rolling about the floor. Shannon grinned at me when I walked in. As a product of sexophobic American culture, I was shocked. “Hey, don’t do that in front of my daughter!” I yelled indignantly.

They stopped giggling and looked up at me. “Don’t do what?”
“That, what you’re doing, grabbing each other by the penis.”
“Oh,” they said, looking rather puzzled. “He doesn’t like to see us have fun with each other.” They pulled their pants up and, ever adaptable to new circumstances, changed the subject and asked me if I had any candy.

I never really needed to tell Shannon or her siblings much about human reproduction, death, or other biological processes. They got a pretty good idea of all that from watching the Pirahãs.

The Origin of Consciousness in the Breakdown of the Bicameral Mind
by Julian Jaynes
pp. 465-470

From Mating to “Sex”

The third example I would consider here is the affect of mating. It is similar in some respects to other affects but in other ways quite distinct. Animal studies show that mating, contrary to what the popular mind thinks, is not a necessary drive that builds up like hunger or thirst (although it seems so because of consciousness), but an elaborate behavior pattern waiting to be triggered off by very specific stimuli. Mating in most animals is thus confined to certain appropriate times of the year or day as well as to certain appropriate sets of stimuli as in another’s behavior, or pheromones, light conditions, privacy, security, and many other variables. These include the enormous variety of extremely complicated courtship procedures that for rather subtle evolutionary advantages seem in many animals almost designed to prevent mating rather than to encourage it, as one might expect from an oversimplified idea of the workings of natural selection. Among the anthropoid apes, in contrast to other primates, mating is so rare in the natural habitat as to have baffled early ethologists as to how these most human-like species reproduced at all. So too perhaps with bicameral man.

But when human beings can be conscious about their mating behavior, can reminisce about it in the past and imagine it in the future, we are in a very different world, indeed, one that seems more familiar to us. Try to imagine what your “sexual life” would be if you could not fantasize about sex.

What is the evidence for this change? Scholars of the ancient world, I think, would agree that the murals and sculptures of what I’m calling the bicameral world, that is, before 1000 B.C., are chaste; depictions with sexual references are scarcely existent, although there are exceptions. The modest, innocent murals from bicameral Thera now on the second floor of the National Museum in Athens are good examples.

But with the coming of consciousness, particularly in Greece, where the evidence is most clear, the remains of these early Greek societies are anything but chaste. 25 Beginning with seventh century B.C. vase paintings, with the depictions of ithyphallic satyrs, new, semidivine beings, sex seems indeed a prominent concern. And I mean to use the word concern, for it does not at first seem to be simply pornographic excitement. For example, on one island in the Aegean, Delos, is a temple of huge phallic erections.

Boundary stones all over Attica were in the form of what are called herms: square stone posts about four feet high, topped with a sculptured head usually of Hermes and, at the appropriate height, the only other sculptured feature of the post, a penile erection. Not only were these herms not laughter-producing, as they certainly would be to children of today, they were regarded as serious and important, since in Plato’s Symposium “the mutilation of the herms” by the drunken general Alcibiades, in which he evidently knocked off these protuberances with his sword around the city of Athens, is regarded as a sacrilege.

Erect phalli of stone or other material have been found in large numbers in the course of excavations. There were amulets of phalli. Vase paintings show naked female dancers swinging a phallus in a Dionysian cult. One inscription describes the measures to be taken even in times of war to make sure that the phallus procession should be led safely into the city. Colonies were obliged to send phalli to Athens for the great Dionysian festivals. Even Aristotle refers to phallic farces or satyr plays which generally followed the ritual performances of the great tragedies.

If this were all, we might be able to agree with older Victorian interpretations that this phallicism was merely an objective fertility rite. But the evidence from actual sexual behavior following the advent of conscious fantasy speaks otherwise. Brothels, supposedly instituted by Solon, were everywhere and of every kind by the fourth century B.C. Vase paintings depict every possible sexual behavior from masturbation to bestiality to human threesomes, as well as homosexuality in every possible form.

The latter indeed began only at this time, due, I suggest, in part to the new human ability to fantasize. Homosexuality is utterly absent from the Homeric poems. This is contrary to what some recent Freudian interpretations and even classical references of this period (particularly after its proscription by Plato in The Laws as being contrary to physis, or nature), seeking authorization for homosexuality in Homer, have projected into the strong bonding between Achilles and Patroclus.

And again I would have you consider the problem twenty-five hundred years ago, when human beings were first conscious and could first fantasize about sex, of how they learned to control sexual behavior to achieve a stable society. Particularly because erectile tissue in the male is more prominent than in the female, and that feedback from even partial erections would promote the continuance of sexual fantasy (a process called recruitment), we might expect that this was much more of a male problem than a female one. Perhaps the social customs that came into being for such control resulted in the greater social separation of the sexes (which was certainly obvious by the time of Plato) as well as an enhanced male dominance. We can think of modern orthodox Muslim societies in this respect, in which an exposed female ankle or lock of hair is punishable by law.

I certainly will admit that there are large vacant places in the evidence for what I am saying. And of course there are other affects, like anger becoming our hatred, or more positive ones like excitement with the magical touch of consciousness becoming joy, or affiliation consciousized into love. I have chosen anxiety, guilt, and sex as the most socially important. Readers of a Freudian persuasion will note that their theorizing could begin here. I hope that these hypotheses can provide historians more competent than myself with a new way of looking at this extremely important period of human history, when so much of what we regard as modern psychology and personality was being formed for the first time.

Reflections on the Dawn of Consciousness
ed. by Marcel Kuijsten
Chapter 1 – Julian Jaynes: Introducing His Life and Thought
by William R. Woodward & June F. Tower
Kindle Location 1064-1079

Jaynes gave an overview of the “consequences of consciousness.” Here he seems to have been developing the feeling side of consciousness in its evolution during the first millennium b.c. He reminded his audience of the historical origins of shame in human and animal experience:

Think of primary school, toilet accidents. Think how painful it was. … If you say to a dog, “bad dog,” he wonders what he did wrong. He puts his tail between his legs and crawls off. It is such a biological part of us that we are ashamed to admit it. … Guilt is the consciousness of shame over time. 58

For Jaynes, the Bible remains our best source on ideas of sin. He lectured that “sin is an awful word for it,” but “the whole Hebrew Bible is talking about the importance of guilt.” He asked rhetorically “how do you get rid of guilt?” and then answered that “it is very interesting to remember what Paul makes of the crucifixion of Jesus: Jesus was taking away the sins of the world.”

After shame and guilt, he went on to the consequences of consciousness in “mating and sex, which is one of the interesting things to us.” Theoretically, that is. Julian hastened to point out that “if you go back to the bicameral world, all the art is extremely chaste. … Then if you go to the Greek world that begins around 700 b.c., it is anything but. You have never seen anything so dirty. … There were brothels at this time. It happens in the Etruscans. You find these very gross sexual scenes. So I am saying that sex is a very different thing than it was before.” What is the significance of all this lewdness appearing in human history? “You can imagine what your own sex life would be if you could not fantasize about it. This is consciousness coming in and influencing our behavior, and our physiology. Here we have consciousness, and guilt, and sex, and anxiety.” 59

The Julian Jaynes Collection
ed. by Marcel Kuijsten
Chapter 14 – Imagination and the Dance of the Self
pp. 209-212

It is similar with love, although there are differences. It is a little more difficult to talk about. We have affiliation responses in animals (or imprinting, which I have studied) where animals have a very powerful impulse to stay together. But this becomes our complicated kind of love when we can imagine the loved person and go back and forth in our imagination about them.

Similarly — and interestingly — with sex. If you look at the comparative psychology of sexual behavior in animals, it is very clear that this is not an open kind of behavior that happens any time or anything like that. It is cued ethologically into certain kinds of stimuli. So you have to have just the right kind of situation in order for animals to mate.

This is a problem that happens in every zoo: as soon as they get new animals, they want to mate them and have progeny. It is a tremendous problem, because you don’t know ethologically what those tiny cues are — they might be temperature or darkness or whatnot. For human beings it might be moonlight and roses [laughs], but it is this kind of thing that you find evolved into animal behavior.

I tend to think that in bicameral times mating was very similar to what it is in animals in that sense. It was cued into moonlight and roses shall I say, and not otherwise. Therefore it was not a problem in a way. Now, when human beings become conscious, have imagination, and can fantasize about sex, it becomes what we mean in quotes “sex.” Which I think is a problem in the sense that it does not ever quite fit into our conscious society. We go back and forward in history from having a free sex age and then a clamping down of Ms. Grundy 2 and Queen Victoria and so on. It goes back and forth because sex to us is tremendously more important than it was to bicameral man because we can fantasize about it.

Now similarly as I mentioned with the Oedipus story and the idea of guilt, we should be able to go back into history and find evidence for this. The evidence that I found for this — and I should be studying it in different cultures — is again in Greece. If you talk to Greek art historians and you ask them to compare, for example, Greek vase painting of the conscious era with the vase painting or other kinds of painting that went on in what I call the bicameral period — either in Minoan art in Crete or the famous murals that were found in Thera — they will all tell you that there is a big distinction. The older art is chaste, there is nothing about sex in it. But then you come to the vase paintings of Greece. We often think of Greece in terms of Plato and Aristotle and so on, and we do not realize that sex was something very different. For example, they have all of these satyrs with penile erections on their vases and odd things like that. Another example are things called herms. Most people have not heard of them. All the boundary stones of the city were stones about four feet in height called herms. They are called herms, by us anyway, because they were just posts that very often they had a sculpture of Hermes at the top — but sometimes of other people. Then at the appropriate place — the body was just a column — there was a penile erection. I do not think we would find Athens back in these early conscious times very congenial.

These were all over the city of Athens. They were at the boundary stones everywhere. If you think of them being around nowadays you can imagine children giggling and so on. It is enough to make you realize that these people, even at this time, the time of Plato and Aristotle, were very different than we are. And if you read Plato you can find that one of the great crimes of Alcibiades — the Greek general that comes into several of the dialogues — is this terrible, frightful night when he got drunk and went and mutilated the herms. You can imagine what he was knocking off. This is hard for us to realize, because it again makes this point that these people are still not like us even though they are conscious. Because they are new to these emotions. I do not mean to intimate that Greek life was sexually free all over the place because I don’t think that was the case. If you read Kenneth Dover’s 3 classic work about Greek homosexuality, for example, you see it is very different from the gay liberation movement that we can find going on in our country right now. It is a very tame kind of thing.

I don’t think we really understand what is going on. There is the evidence, it is there in vase paintings, it is there in Greek times, but there is something we still do not fully understand about it. But it is different from the bicameral period. We have a different kind of human nature here, and it is against this that we look at where the self can come from.

Chapter 27 – Baltimore Radio Interview: Interview by Robert Lopez
pp. 447-448

Jaynes: Yes indeed. And it happens with other emotions. Fear becomes anxiety. At the same time we have a huge change in sexual behavior. Try to sit down and imagine what your sexual life would be like if you couldn’t fantasize about it. It’s a hard thing to do, and you probably would think it would be much less, and I suspect it would be. If we go back to bicameral times, and look at all the artwork, wherever we look, there is nothing sexual about it. There is no pornography or anything even reminiscent of that at all. It’s what classicists call chaste. But when we come into the first conscious period, for example in Greece from 700 b.c. up to 200 or 100 b.c. — the sexual life in Greece is difficult to describe because we are taught of great, noble Periclean Athens and we don’t think of the sexual symbols … phalli of all kinds were just simply everywhere. This has been well documented now but it’s not something that’s presented to schoolchildren.

Lopez: You mean then that the erotic pottery that we see in ancient Greece was a result of new found consciousness and the resulting new found fascination with sex?

Jaynes: The ability to fantasize about sex immediately brought it in as a major concern. There is something I don’t understand about it… these phalli or erections were on statues everywhere. They were on the boundary stones called herms around the city of Athens. And yet they weren’t unusual to these people as it certainly would be in Baltimore today if you had these things all around the streets. It seems that sex had a religious quality, which is curious. There were a lot of very odd and different kinds of things that were happening.

Chapter 32 – Consciousness and the Voices of the Mind: University of New Hampshire Discussion
pp. 508-510

By affect I mean biologically, genetically organized emotions, such that we share with all mammals, and which have a genetically based way of arousing them and then getting rid of their byproducts. But then these become something — and we really don’t have the terminology for it, so I’m going to call them feelings right now, and by that I mean conscious feelings. We have shame, for example. It is very important and powerful — if you remember your childhood, and the importance of fitting yourself into the group without being humiliated. This becomes guilt when you have consciousness operating on it over time. Guilt is the fear of shame. We also see the emergence of anxiety, which is built on the affect of fear.

Then you have the same thing happening with sex. I think mating was pretty perfunctory back in the bicameral period, just as it is with most of the primates. It isn’t an obvious thing in any of the anthropoid apes — like the orangutans, the gorillas, the gibbons, and the chimpanzees. It is not all that obvious. And I think it was the same thing in the bicameral time — there is nothing really “sexy,” if I may use that adjective — in the bicameral paintings and sculptures. But just after this period, beginning in 700 b.c ., the Greek world is a pornographic world if ever there was one. It’s astonishing what happens. [At museums] most of these vases are not upstairs where children can see them, they are usually kept downstairs. At the same time this isn’t just a matter of artifacts; it is a part of their behavior. There is evidence of brothels beginning here, homosexuality perhaps begins at this same time, and we have various kinds of laws to regulate these things. It is something we don’t understand though, because it isn’t quite like our sexuality — it has a religious basis. It is very strange and odd, this almost religious basis. You have the tragedies, like the Oedipus plays, put on as a trilogy, and it was always followed by a phallic farce, for example. This seems extraordinary to us, because it destroys the whole beauty of these plays.

All that was going on in Greece, and was going on with the Etruscans — who didn’t leave much writing, but they left us enough so that we have a pattern and know that there was group sex going on and things like that. We don’t find it so much among the Hebrews I think because the Hebrews — who in some places were monotheistic and in other places were not — had a very powerful God saying “thou shalt not” and so on — follow the law. At least we don’t have evidence for those behaviors.

So we have for the first time increases in sexual behavior and the emergence of guilt and anxiety. Think of that: anxiety, sex, and guilt — if anybody wants to be a Freudian, this is where it begins [laughs]. Because then you had to have psychological mechanisms of controlling this. I mentioned something about repression — that’s one of the things that comes into play here — but all these methods of forgiveness and the whole concept of sin begins at this time.

Gods, Voices, and the Bicameral Mind
ed. by Marcel Kuijsten
Introduction
p. 9

The birth of consciousness ushered in profound changes for human civilization. In what Jaynes terms the “cognitive explosion,” we see the sudden beginnings of philosophy, science, history, and theater. We also observe the gradual transition from polytheism to monotheism. Consciousness operating on human emotions caused shame to become guilt, fear to become anxiety, anger to become hatred, and mating behavior to give rise to sexual fantasy. Through the spatialization of time, people could, for the first time, think about their lives on a continuum and contemplate their own death.

Chapter 12 – The Origin of Consciousness, Gains and Losses: Walker Percy vs. Julian Jaynes
by Laura Mooneyham White

pp. 174-175

This sort of “regression from a stressful human existence to a peaceable animal existence” 58 also includes a reversion to a bestial sexuality, as women present rearward for intercourse with the disinterestedness of simple physical need. Heavy sodium, among other things, drastically reduces the frequency of a woman’s estrus, so that hormonal urges and, in consequence, mating, become far less common. Sexual activity becomes emotionless and casual, as casual as in the sexual practices of the higher primates. As Jaynes has noted in a 1982 essay on the effect of consciousness on emotions, such mating, “in contrast to ourselves, is casual and almost minimal, with observations of mating in gibbons, chimpanzees, orangutans, and gorillas in the wild being extremely rare.” 59 Jaynes forecasts the emotionless participation in sex we see in Percy’s drugged and regressive characters, for Jaynes connects the erotic with the conscious capacity to narrate, to tell ourselves a story about our presence in time. Narration makes fantasy possible. Preconscious humans were not obsessed by sexuality, Jaynes argues: “All classicists will agree with this, that all Mycenean and Minoan art, in particular before 1000 B.C., is what seems to us as severely chaste”; “… tomb and wall paintings, sculpture and the writings of bicameral civilizations rarely if ever have any sexual references.” 60 But after the advent of human consciousness, the erotic begins to make its claim upon human attention: “About 700 B.C., Greek and Etruscan art is rampant with sexual references, very definitely demonstrating that sexual feelings were a new and profound concern in human development in these regions. We can perhaps appreciate this change in ourselves if we try to imagine what our sexual lives would be like if we could not fantasize about sexual behavior.” 61

The sexually abused and sodium-dosed children at Belle Ame Academy in Percy’s novel have lost that capacity to narrate about themselves and have therefore lost all sense of shame, all sense of what should be either morally perverse or erotically exciting. As Tom More surveys the six photographs which document the sexual abuse at Belle Ame, he is struck by the demeanor of the children’s faces. One child being subjected to fellatio by an adult male seems in countenance merely “agreeable and incurious.” 62 In another picture, a young girl is being penetrated by the chief villain, Van Dorn; she “is gazing at the camera, almost dutifully, like a cheerleader in a yearbook photo, as if to signify that all is well.” 63 Another photograph is a group shot of junior-high age boys witnessing an act of cunnilingus: “Two or three, instead of paying attention to the tableau, are mugging a bit for the camera, as if they were bored, yet withal polite.” 64 Another child in yet another appalling picture seems to have a “demure, even prissy expression.” 65 What is remarkable about these photographs is how eloquently they testify to the needfulness of consciousness for the emotions of guilt, shame, or desire. Percy and Jaynes concur that without consciousness, sex is a mildly entertaining physical activity, either at best or worst.

Chapter 16 – Vico and Jaynes: Neurocultural and Cognitive Operations in the Origin of Consciousness
by Robert E. Haskell
pp. 270-271

As noted earlier, there are many differences between Vico and Jaynes that cannot be developed here. The following, however, seems noteworthy. In Vico’s “anthropological” description of the first men, he is systematic throughout his New Science in imagining the early sexual appetites, not only of the first males but also of the first females. In fact, it is basically only in this context that he describes the first females. The first men, he says, “must be supposed to have gone off into bestial wandering … [in] the great forests of the earth.” […] When human beings, for Jaynes, become “conscious about their mating behavior, can reminisce about it in the past and imagine it in the future, we are in a very different world, indeed, one that seems more familiar to us” (OC: 466). Vico can be read as saying the same thing; in describing the sexuality of the first men Vico uses the phrase: “the impulse of the bodily motion of lust” (NS: 1098, my italics), implying a kind of Jaynesian bicameral sexuality not enhanced by consciousness.

The second line of research supporting Jaynes’s claim is as follows. Scholars of ancient history would agree, says Jaynes, that the murals and sculptures during what he calls the bicameral age, that is, before 1000 B.C., are chaste. Though there are exceptions, depictions with sexual references prior to this time are nearly non-existent. After 1000 B.C., there seems to be a veritable explosion of visual depictions of sexuality: ithyphallic satyrs, large stone phalli, naked female dancers, and later, brothels, apparently instituted by Solon of Athens in the fifth century B.C. Such rampant sexuality had to be controlled. According to Vico it was “frightful superstition” (ibid.) and fear of the gods that led to control. Jaynes speculates that one way was to separate the sexes socially, which has been observed in many preliterate societies. Since males have more visible erectile tissue than females, something had to be done to inhibit the stimulation of sexual imagination (fantasy). Jaynes cites the example of the orthodox Muslim societies in which to expose female ankles or hair is a punishable offence.29

[Note 29: It is interesting to note that both Vico and Jaynes seem to assume a hyper-sexuality on the part of males, not females. Is this an example of Vico’s “conceit of scholars,” or more specifically, the conceit of male scholars? To the contrary, Mary Jane Sherfey (1996), a physician, has suggested that in early history the female sexual appetite was stronger than the male and therefore had to be controlled by the male in order to create and maintain social order.]

* * *

Bonus material:

At the very bottom is an interview with Marcel Kuijsten, who is responsible for reviving Jaynesian scholarship. The other links are about Julian Jaynes’s view of (egoic-)consciousness and the self, explaining what he means by analog ‘I’, metaphor ‘me’, metaphier, metaphrand, paraphier, paraphrand, spatialization, excerption, narratization, conciliation (or compatibilization, consilience), etc. Even after all these years studying Jaynesian thought, I still struggle to keep it all straight, but it’s worth trying to understand.

Also interesting is the relationship of Jaynes’s view to those of Tor Norretranders, Benjamin Libet, Friedrich Nietzsche, and David Hume. Further connections can be made to Eastern philosophy and religion, specifically Buddhism. Some claim that Hume probably developed his bundle theory from what he learned of Buddhism from returning missionaries.

Julian Jaynes on consciousness and language: Part 1
Julian Jaynes on how metaphors generate consciousness (Part II)
by Elena Maslova-Levin

Language and Consciousness according to Julian Jaynes
Consciousness according to Julian Jaynes
by Yosuke Yanase

Jaynes’s Notion of Consciousness as Self-Referential
by Michael R Finch

Metaphors and Mental Models: The Key to Understanding
by Patrick O’Shaughnessy

Am I in Charge of me or is my Brain: Julian Jaynes Edition PART 2
by Yours Truly

A contribution in three parts to the 100th anniversary of Gotthard Günther
Topic of Part 2: “Negativsprache” (negative language)
by Eberhard von Goldammer

Building Consciousness Back Up To Size – Norretranders, Libet and Free Will
by ignosympathnoramus

What are the dissimilarities between Julian Jaynes’ “analog I” and Nietzsche’s “synthetic I”?
by Sadri Mokni

“Lack of the historical sense is the traditional defect in all philosophers.”

Bicameralism and Bilingualism

A paper on multilingualism was posted by Eva Dunkel in the Facebook group for The Origin of Consciousness in the Breakdown of the Bicameral Mind: Consequences of multilingualism for neural architecture by Sayuri Hayakawa and Viorica Marian. It is a great find. The authors look at how multiple languages are processed within the brain and how they can alter brain structure.

This probably also relates to the learning of music, art, and math (one might add that learning music later improves the ability to learn math). These are basically other kinds of languages. Music especially, along with whistle and hum languages, might indicate that language originated in music, not to mention music’s close relationship to dance, movement, and behavior, as well as to group identity. The archaic authorization of command voices in the bicameral mind quite likely came in the form of music, and one can imagine the kinds of synchronized collective activities that could have dominated life and work in bicameral societies. There is something powerful about language that we tend to overlook and take for granted. Also, since language is so embedded in culture, monolinguals never see outside the cultural reality tunnel they exist within. This could bring us to wonder about the role played in post-bicameral society by syncretic languages like English. We can’t forget the influence psychedelics might have had on language development and learning at different periods of human existence. And with psychedelics, there is the connection to shamanism, with caves as aural spaces and locations of art, possibly the earliest origin of proto-writing.

There is no reason to give mathematics a mere secondary place in our considerations. Numeracy might be important as well in thinking about the bicameral mind specifically and certainly about the human mind in general (Caleb Everett, Numbers and the Making of Us), as numeracy was an advancement or complexification beyond the innumerate tribal societies (e.g., Piraha). Some of the earliest uses of writing were for calculations: accounting, taxation, astrology, etc. Bicameral societies, specifically the early city-states, can seem simplistic in many ways with their lack of complex hierarchies, large centralized governments, standing armies, police forces, or even basic infrastructure such as maintained roads and bridges. Yet they were capable of immense projects that required impressively high levels of planning, organizing, and coordination — as seen with the massive archaic pyramids and other structures built around the world. It’s strange how later empires in the Axial Age and beyond, though so much larger and more extensive with greater wealth and resources, rarely even attempted the seemingly impossible architectural feats of bicameral humans. Complex mathematical systems probably played a major role in the bicameral mind, as seen in how astrological calculations sometimes extended over millennia.

Hayakawa and Marian’s paper could add to the explanation of the breakdown of the bicameral mind. A central focus of their analysis is the increased executive function and neural integration in managing two linguistic inputs — I could see how that would relate to the development of egoic consciousness. It has been proposed that the first to develop Jaynesian consciousness may have been traders who were required to cross cultural boundaries and, of course, who would have been forced to learn multiple languages. As bicameral societies came into regular contact with more diverse linguistic cultures, their bicameral cognitive and social structures would have been increasingly stressed.

Multilingualism goes hand in hand with literacy. Rates of both have increased over the millennia. That would have been a major force in the post-bicameral Axial Age. The immense multiculturalism of societies like the Roman Empire is almost impossible for us to imagine. Hundreds of ethnicities, each with their own language, would co-exist in the same city and sometimes the same neighborhood. On a single street, there could be hundreds of shrines to diverse gods, with people praying and chanting invocations and incantations in their separate languages. These individuals were suddenly forced to deal with complete strangers and learn some basic understanding of foreign languages and hence foreign understandings.

This was simultaneous with the rise of literacy and its importance to society, only becoming more important over time as the rate of book reading continues to climb (more books are printed in a year these days than were produced in the first several millennia of writing). Still, it was only quite recently that the majority of the population became literate; following from that is the ability of silent reading and its correlate, inner speech. Multilingualism is close behind and catching up. The consciousness revolution is still under way. I’m willing to bet American society will be transformed as we return to multilingualism as the norm, considering that in the first centuries of American history there was immense multilingualism (e.g., German was once one of the most widely spoken languages in North America).

All of this reminds me of linguistic relativity. I’ve pointed out that, though not explicitly stated, Jaynes obviously was referring to linguistic relativity in his own theorizing about language. He talked quite directly about the power that language (and the metaphors within language) had over thought, perception, behavior, and identity (Anke Snoek has some good insights about this in exploring the thought of Giorgio Agamben). This was an idea maybe first expressed by Wilhelm von Humboldt (On Language) in 1836: “Via the latter, qua character of a speech-sound, a pervasive analogy necessarily prevails in the same language; and since a like subjectivity also affects language in the same notion, there resides in every language a characteristic world-view.” And Humboldt even considered the power of learning another language in stating that, “To learn a foreign language should therefore be to acquire a new standpoint in the world-view hitherto possessed, and in fact to a certain extent is so, since every language contains the whole conceptual fabric and mode of presentation of a portion of mankind.”

Multilingualism is multiperspectivism, a core element of the modern mind and modern way of being in the world. Language has the power to transform us. To study language, to learn a new language is to become something different. Each language is not only a separate worldview but locks into place a different sense of self, a persona. This would be true not only for learning different cultural languages but also different professional languages with their respective sets of terminology, as the modern world has diverse areas with their own ways of talking and we modern humans have to deal with this complexity on a regular basis, whether we are talking about tax codes or dietary lingo.

It’s hard to know what that means for humanity’s trajectory across the millennia. But the more we are caught within linguistic worlds and are forced to navigate our way within them, the greater the need for a strong egoic individuality to self-initiate action, that is to say the self-authorization of Jaynesian consciousness. We step further back into our own internal space of meta-cognitive metaphor. To know more than one language strengthens an identity separate from any given language. The egoic self retreats behind its walls and looks out from its parapets. Language, rather than being the world we are immersed in, becomes the world we are trapped in (a world that is no longer home and from which we seek to escape, Philip K. Dick’s Black Iron Prison and William S. Burroughs’s Control). It closes in on us and forces us to become more adaptive to evade the constraints.

The Transparent Self to Come?

Scott Preston’s newest piece, The Seer, is worth reading. He makes an argument for what is needed next for humanity, what one might think of as getting off the Wheel of Karma. But I can’t help considering the messy human details in this moment of societal change and crisis. The great thinkers like Jean Gebser talk of integral consciousness in one way while most people experience the situation in entirely different terms. That is why I’m glad Preston brought in what is far less respectable (and far more popular) like Carlos Castaneda and the Seth Material.

As anyone should know, we aren’t discussing mere philosophy here, for it touches upon human experience and social reality. I sense much of what is potentially involved, even as it is hard to put one’s finger on it. The challenge we are confronted with is far more disconcerting than we typically are able and willing to acknowledge, assuming we can even begin to comprehend what we are facing and what is emerging. How we get to the integral is the difficult part. Preston explains well the issue of making the ego/emissary transparent — as the Seth Material put it, “true transparency is not the ability to see through, but to move through”. That is a good way of putting it.

I appreciate his explanation of Satan (the egoic-demiurge) as the ape of God, what Iain McGilchrist calls usurpation. This reminds me of the mimicry of the Trickster archetype and its relation to the co-optation of the reactionary mind (see Corey Robin). A different kind of example of this is that of the folkloric Men in Black, as described by John Keel. It makes me wonder about what such things represent in human reality. This was on my mind because of another discussion I was having in a different post, Normal, from rauldukeblog’s The Violent Ink. The topic had to do with present mass hysteria and, as I’m wont to do, I threw out my own idiosyncratic context. Climate change came up and so I was trying to explain what makes this moment of crisis different than the past.

There is the scientific quality to it. Modern science created climate change through technological innovation and industrialization. And now science warns us about it. But it usually isn’t like a war, famine, or plague that hits a population in an undeniable way — not for most of us, not yet. That is the complexifying change in the scientific worldview we now inhabit, and it is why the anxiety is so amorphous, in a way profoundly different than before. To come to terms with climate change, something within human nature itself would have to shift. If we are to survive it while maintaining civilization, we will likely have to be as dramatically transformed as were bicameral humans during the collapse of the Bronze Age Civilizations. We won’t come through this unscathed and unchanged.

In speaking of the scientific or pseudo-scientific, there is the phenomenon of UFOs and contact experience. I pointed out that there has been a shift in official military policy toward reporting of UFO sightings, which gets one wondering about motives and why now. UFOs and aliens express that free-floating sense of vague anxiety about the unknown, specifically in a modern framework. It’s almost irrelevant what UFOs really are or aren’t. And no doubt, as in the past, various governments will attempt to use UFO reports to manipulate populations, to obfuscate what they wish to keep hidden, or whatever else. The relevant point here is what UFOs symbolize in the human psyche and why they gain so much attention during periods of wide scale uncertainty and stress. The UFO cults that have appeared over the past few generations are maybe akin to the cults like Jesus worship that arose in the Axial Age. Besides Jung, it might be helpful to bring in Jacques Vallee’s even more fascinating view. A new mythos is forming.

I’m not sure what it all adds up to. And my crystal ball is no less cloudy than anyone else’s. It just feels different in that we aren’t only facing crisis and catastrophe. It feels like a far more pivotal point, a fork in the path. During what is called the General Crisis, there was much change going on and it did help bring to an end what remained of feudalism. But the General Crisis didn’t fundamentally change society and culture, much less cut deeper into the human psyche. I’d argue that it simply brought us further down the same path we’d been on for two millennia since the Axial Age. I keep wondering if now the Axial Age is coming to its final conclusion, that there isn’t much further we can go down this path.

By the way, I think my introduction to Jacques Vallee came through my further reading after having discovered John Keel’s The Mothman Prophecies, the book that came out long before the movie. That is where the basic notion comes from that I was working with here. During times of crisis and foreboding, often preceding actual mass death, there is a build up of strangeness that spills out from our normal sense of reality. We can, of course, talk about this in more rational or rather respectable terms without any of the muck of UFO research.

Keith Payne, in The Broken Ladder, notes that people come to hold bizarre beliefs and generally act irrationally when under conditions of high inequality, that is to say when inflicted with unrelenting stress. But it goes beyond that. There is more going on than mere beliefs. People’s sense of reality becomes distorted and they begin experiencing what they otherwise would not. This was the basis of Julian Jaynes’ hypothesis of the bicameral mind where voice-hearing was supposedly elicited through stress. And this is supported by modern evidence, such as the cases recorded by John Geiger in the Third Man Factor.

An additional layer could be brought to this with Jacques Vallee’s work in showing how anecdotes of alien contact follow the same pattern as the stories of fairy abductions and the anthropological accounts of shamanic initiation. These are religious experiences. At other times, they were more likely interpreted as visitations by spiritual beings or as transportation into higher realms. Similarly, spinning and flying disks in the sky were interpreted as supernatural manifestations in the pre-scientific age. But maybe it’s all the same phenomenon, whether the source is elsewhere or from within the human psyche.

The interesting part is that these experiences, sometimes sightings involving crowds of people (including many incidents with military personnel and pilots), often correspond with intensified societal conflict. UFO sightings and contact experiences appear to increase at specific periods of stress. Unsurprisingly, people turn to the strange in strange times. And there is something about this strangeness, the pervasiveness of it and the power it holds. To say we are living in a reactionary time, when nearly everything and everyone has become reactionary, is to understate it to an extreme degree. The Trickster quality of the reactionary mind, one might argue, is its most defining feature.

One might call it the return of the repressed. Or it could be thought of as the eruption (irruption?) of the bicameral mind. Whatever it is, it challenges and threatens the world we think we know. Talk of Russian meddling and US political failure is tiddlywinks in comparison. But the fact that we take such tiddlywinks so seriously does add to the sense of crisis. Everything is real to the degree we believe it to be real, in that the effects of it become manifest in our experience and behavior, in the collective choices that we make and accumulate over time.

We manifest our beliefs. And even the strangest of beliefs can become normalized and, as such, become self-fulfilling prophecies. Social realities aren’t only constructed. They are imagined into being. Such imagination is human reality for we are incapable of experiencing it as anything other than reality. We laugh at the strange beliefs of others at our own peril. But what is being responded to can remain hidden or outside of the mainstream frame of consciousness. Think of the way that non-human animals act in unusual ways before an earthquake hits. If all we see is what the animals are doing and lack any greater knowledge, we won’t appreciate that it means we should prepare for the earthquake to come.

Humans too act strangely before coming catastrophes. It doesn’t require anyone to consciously know of and rationally understand what is coming. Most of how humans respond is instinctual or intuitive. I’d only suggest to pay less attention to the somewhat arbitrary focus of anxiety and, instead, to take the anxiety itself as a phenomenon to be taken seriously. Something real is going on. And it portends something on its way.

Here is my point. We see things through a glass darkly. Things are a bit on the opaque side. Transparency of self is more of an aspiration at this point, at least for those of us not yet enlightened beings. All the voices remain loud within us and in the world around us. In many thinkers seeking a new humanity, there is the prioritizing of the visual over the auditory. There is a historical background to this. The bicameral mind was ruled by voices. To seek freedom from this, to get off the grinding and rumbling Wheel of Karma, requires a different relationship to our senses. There is a reason the Enlightenment was so powerfully obsessed with tools that altered and extended our perception with a major focus on the visual, from lenses to the printed word. Oral society was finally losing its power over us, or that is what some wanted to believe.

The strangeness of it all is that pre-consciousness maintains its pull over modern consciousness even as we idealize the next stage of humanity, integral trans-consciousness. Instead of escaping the authoritative power of the bicameral voice, we find ourselves in a world of mass media and social media where voices have proliferated. We are now drowning in voices and so we fantasize about the cool silence of the visionary, that other side of our human nature — as Preston described it:

One of the things we find in don Juan’s teachings is “the nagual” and “the tonal” relation and this is significant because it is clearly the same as McGilchrist’s “Master” and “Emissary” relationship of the two modes of attention of the divided brain. In don Juan’s teachings, these correspond to the what is called the “first” and “the second attentions”. If you have read neuroscientist Jill Bolte-Taylor’s My Stroke of Insight or followed her TED talk about that experience, you will see that she, too, is describing the different modes of attention of the “nagual” and the “tonal” (or the “Master” and the “Emissary”) in her own experience, and that when she, too, shifted into the “nagual” mode, also saw what Castaneda saw — energy as it flows in the universe, and she also called that “the Life Force Power of the Universe”

About getting off the Wheel, rauldukeblog wrote that, “Karma is a Sanskrit word meaning action so the concept is that any act(tion) creates connective tissue which locks one into reaction and counter and so on in an endless loop.” That brings us back to the notion of not only seeing through the egoic self but more importantly to move through the egoic self. If archaic authorization came from voices according to Jaynes, and if self-authorization of the internalized voice of egoic consciousness hasn’t fundamentally changed this equation, then what would offer us an entirely different way of being and acting in the world?

The last time we had a major transformation of the human mind, back during the ending of the Bronze Age, it required the near total collapse of every civilization. Structures of the mind aren’t easily disentangled from entrenched patterns of social identity as long as the structures of civilization remain in place. All these millennia later, we are still struggling to deal with the aftermath of the Axial Age. What are the chances that the next stage of humanity is going to be easier or happen more quickly?

Moralizing Gods as Effect, Not Cause

There is a new study on moralizing gods and social complexity, specifically as populations grow large. The authors are critical of the Axial Age theory: “Although our results do not support the view that moralizing gods were necessary for the rise of complex societies, they also do not support a leading alternative hypothesis that moralizing gods only emerged as a byproduct of a sudden increase in affluence during a first millennium ‘Axial Age’. Instead, in three of our regions (Egypt, Mesopotamia and Anatolia), moralizing gods appeared before 1500.”

I don’t take this criticism as too significant, since it is mostly an issue of dating. Objectively, there are no such things as distinct historical periods. Sure, you’ll find precursors of the Axial Age in the late Bronze Age. Then again, you’ll find precursors of the Renaissance and Protestant Reformation in the Axial Age. And you’ll find the precursors of the Enlightenment in the Renaissance and Protestant Reformation. It turns out all of history is continuous. No big shocker there. Changes build up slowly, until they hit a breaking point. It’s that breaking point, often when it becomes widespread, that gets designated as the new historical period. But the dividing line from one era to the next is always somewhat arbitrary.

This is important to keep in mind. And it does have more than slight relevance. This reframing of what has been called the Axial Age accords perfectly with Julian Jaynes’ theories on the ending of the bicameral mind and the rise of egoic consciousness, along with the rise of the egoic gods with their jealousies, vengeance, and so forth. A half century ago, Jaynes was noting that aspects of moralizing social orders were appearing in the late Bronze Age and he speculated that it had to do with increasing complexity that set those societies up for collapse.

Religion itself, as a formal distinct institution with standardized practices, didn’t exist until well into the Axial Age. Before that, rituals and spiritual/supernatural experience were apparently inseparable from everyday life, as the archaic self was inseparable from the communal sense of the world. Religion as we now know it is what replaced that prior way of being in relationship to ‘gods’, but it wasn’t only a different sense of the divine, for the texts refer to early people hearing the voices of spirits, godmen, dead kings, and ancestors. Religion was only necessary, according to Jaynes, when the voices went silent (i.e., when they were no longer heard externally because a singular voice had become internalized). The pre-religious mentality is what Jaynes called the bicameral mind and it represents the earliest and largest portion of civilization, maybe lasting for millennia upon millennia going back to the first city-states.

The pressures on the bicameral mind began to stress the social order beyond what could be managed. Those late Bronze Age civilizations had barely begun to adapt to that complexity and weren’t successful. Only Egypt was left standing and, in its sudden isolation amidst a world of wreckage and refugees, it too was transformed. We speak of the Axial Age in the context of a later date because it took many centuries for empires to be rebuilt around moralizing religions (and other totalizing systems and often totalitarian institutions; e.g., large centralized governments with rigid hierarchies). The archaic civilizations had to be mostly razed to the ground before something else could more fully take their place.

There is something else to understand. To have moralizing big gods to maintain social order, what is required is introspectable subjectivity (i.e., an individual to be controlled by morality). That is to say you need a narratizing inner space where a conscience can operate in the voicing of morality tales and the imagining of narratized scenarios such as considering alternate possible future actions, paths, and consequences. This is what Jaynes was arguing and it wasn’t vague speculation, as he was working with the best evidence he could accrue. Building on Jaynes’s work with language, Brian J. McVeigh has analyzed early texts to determine how often mind-words were found. Going by language use during the late Bronze Age, there was an increased focus on psychological ways of speaking. Prior to that, morality as such wasn’t necessary, no more than were written laws, court systems, police forces, and standing armies — all of which appeared rather late in civilization.

What creates the introspectable subjectivity of the egoic self, i.e., Jaynesian ‘consciousness’? Jaynes suggests that writing was a prerequisite and it needed to be advanced beyond the stage of simple record-keeping. A literary canon likely developed first to prime the mind for a particular form of narratizing. The authors of the paper do note that written language generally came first:

“This megasociety threshold does not seem to correspond to the point at which societies develop writing, which might have suggested that moralizing gods were present earlier but were not preserved archaeologically. Although we cannot rule out this possibility, the fact that written records preceded the development of moralizing gods in 9 out of the 12 regions analysed (by an average period of 400 years; Supplementary Table 2)—combined with the fact that evidence for moralizing gods is lacking in the majority of non-literate societies — suggests that such beliefs were not widespread before the invention of writing. The few small-scale societies that did display precolonial evidence of moralizing gods came from regions that had previously been used to support the claim that moralizing gods contributed to the rise of social complexity (Austronesia and Iceland), which suggests that such regions are the exception rather than the rule.”

As for the exceptions, it’s possible they were influenced by the moralizing religions of societies they came in contact with. Scandinavians, long before they developed complex societies with large concentrated populations, were traveling and trading all over Eurasia, the Levant, and into North Africa. This was happening in the Bronze Age, during the period of rising big gods and moralizing religion: “The analysis showed that the blue beads buried with the [Nordic] women turned out to have originated from the same glass workshop in Amarna that adorned King Tutankhamun at his funeral in 1323 BCE. King Tut´s golden deathmask contains stripes of blue glass in the headdress, as well as in the inlay of his false beard.” (Philippe Bohstrom, Beads Found in 3,400-year-old Nordic Graves Were Made by King Tut’s Glassmaker). It would be best to not fall prey to notions of untouched primitives.

We can’t assume that these exceptions were actually exceptional, in supposedly being isolated examples contrary to the larger pattern. Even hunter-gatherers have been heavily shaped by the millennia of civilizations that surrounded them. Occasionally finding moralizing religions among simpler and smaller societies is no more remarkable than finding metal axes and t-shirts among tribal people today. All societies respond to changing conditions and adapt as necessary to survive. The appearance of moralizing religions and the empires that went with them transformed the world far beyond the borders of any given society, not that borders were all that defined back then anyway. The large-scale consequences spread across the earth these past three millennia, a tidal wave hitting some places sooner than others but in the end none remain untouched. We are all now under the watchful eye of big gods or else their secularized equivalent, big brother of the surveillance state.

* * *

Moralizing gods appear after, not before, the rise of social complexity, new research suggests
by Redazione Redazione

Professor Whitehouse said: ‘The original function of moralizing gods in world history may have been to hold together large but rather fragile, ethnically diverse societies. It raises the question as to how some of those functions could still be performed in today’s increasingly secular societies – and what the costs might be if they can’t. Even if world history cannot tell us how to live our lives, it could provide a more reliable way of estimating the probabilities of different futures.’

When Ancient Societies Hit a Million People, Vengeful Gods Appeared
by Charles Q. Choi

“For we know Him who said, ‘And I will execute great vengeance upon them with furious rebukes; and they shall know that I am the Lord, when I shall lay my vengeance upon them.'” Ezekiel 25:17.

The God depicted in the Old Testament may sometimes seem wrathful. And in that, he’s not alone; supernatural forces that punish evil play a central role in many modern religions.

But which came first: complex societies or the belief in a punishing god? […]

The researchers found that belief in moralizing gods usually followed increases in social complexity, generally appearing after the emergence of civilizations with populations of more than about 1 million people.

“It was particularly striking how consistent it was [that] this phenomenon emerged at the million-person level,” Savage said. “First, you get big societies, and these beliefs then come.”

All in all, “our research suggests that religion is playing a functional role throughout world history, helping stabilize societies and people cooperate overall,” Savage said. “In really small societies, like very small groups of hunter-gatherers, everyone knows everyone else, and everyone’s keeping an eye on everyone else to make sure they’re behaving well. Bigger societies are more anonymous, so you might not know who to trust.”

At those sizes, you see the rise of beliefs in an all-powerful, supernatural person watching and keeping things under control, Savage added.

Complex societies gave birth to big gods, not the other way around: study
from Complexity Science Hub Vienna

“It has been a debate for centuries why humans, unlike other animals, cooperate in large groups of genetically unrelated individuals,” says Seshat director and co-author Peter Turchin from the University of Connecticut and the Complexity Science Hub Vienna. Factors such as agriculture, warfare, or religion have been proposed as main driving forces.

One prominent theory, the big or moralizing gods hypothesis, assumes that religious beliefs were key. According to this theory, people are more likely to cooperate fairly if they believe in gods who will punish them if they don’t. “To our surprise, our data strongly contradict this hypothesis,” says lead author Harvey Whitehouse. “In almost every world region for which we have data, moralizing gods tended to follow, not precede, increases in social complexity.” Even more so, standardized rituals tended on average to appear hundreds of years before gods who cared about human morality.

Such rituals create a collective identity and feelings of belonging that act as social glue, making people behave more cooperatively. “Our results suggest that collective identities are more important to facilitate cooperation in societies than religious beliefs,” says Harvey Whitehouse.

Society Creates God, God Does Not Create Society
by Razib Khan

What’s striking is how soon moralizing gods show up after the spike in social complexity.

In the ancient world, early Christian writers explicitly asserted that it was not a coincidence that their savior arrived with the rise of the Roman Empire. They contended that a universal religion, Christianity, required a universal empire, Rome. There are two ways you can look at this. First, that the causal arrow is such that social complexity leads to moralizing gods, and that’s that. The former is a necessary condition for the latter. Second, one could suggest that moralizing gods are a cultural adaptation to large complex societies, one of many, that dampen instability and allow for the persistence of those societies. That is, social complexity leads to moralistic gods, who maintain and sustain social complexity. To be frank, I suspect the answer will be closer to the second. But we’ll see.

Another result that I suspect was not anticipated is that ritual religion emerged before moralizing gods. In other words, instead of “Big Gods,” it might be “Big Rules.” With hindsight, I don’t think this is coincidental since cohesive generalizable rules are probably essential for social complexity and winning in inter-group competition. It’s not a surprise that legal codes emerge first in Mesopotamia, where you had the world’s first anonymous urban societies. And rituals lend themselves to mass social movements in public to bind groups. I think it will turn out that moralizing gods were grafted on top of these general rulesets, which allow for coordination, cooperation, and cohesion, so as to increase their import and solidify their necessity due to the connection with supernatural agents, which personalize the sets of rules from on high.

Complex societies precede moralizing gods throughout world history
by Harvey Whitehouse, Pieter François, Patrick E. Savage, Thomas E. Currie, Kevin C. Feeney, Enrico Cioni, Rosalind Purcell, Robert M. Ross, Jennifer Larson, John Baines, Barend ter Haar, Alan Covey, and Peter Turchin

The origins of religion and of complex societies represent evolutionary puzzles1–8. The ‘moralizing gods’ hypothesis offers a solution to both puzzles by proposing that belief in morally concerned supernatural agents culturally evolved to facilitate cooperation among strangers in large-scale societies9–13. Although previous research has suggested an association between the presence of moralizing gods and social complexity3,6,7,9–18, the relationship between the two is disputed9–13,19–24, and attempts to establish causality have been hampered by limitations in the availability of detailed global longitudinal data. To overcome these limitations, here we systematically coded records from 414 societies that span the past 10,000 years from 30 regions around the world, using 51 measures of social complexity and 4 measures of supernatural enforcement of morality. Our analyses not only confirm the association between moralizing gods and social complexity, but also reveal that moralizing gods follow—rather than precede—large increases in social complexity. Contrary to previous predictions9,12,16,18, powerful moralizing ‘big gods’ and prosocial supernatural punishment tend to appear only after the emergence of ‘megasocieties’ with populations of more than around one million people. Moralizing gods are not a prerequisite for the evolution of social complexity, but they may help to sustain and expand complex multi-ethnic empires after they have become established. By contrast, rituals that facilitate the standardization of religious traditions across large populations25,26 generally precede the appearance of moralizing gods. This suggests that ritual practices were more important than the particular content of religious belief to the initial rise of social complexity.
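The paper’s headline comparison, whether moralizing gods precede or follow the jump past roughly a million people, reduces at its simplest to checking two dates per region. Here is a toy sketch of that logic only; the Seshat team’s actual analysis is far more sophisticated, and all region names and dates below are invented for illustration.

```python
# Toy sketch: for each region, did moralizing gods first appear before or
# after the society crossed the "megasociety" threshold? Negative years
# are BCE. All values here are invented, not Seshat data.

records = [
    # (region, year population first exceeded ~1M, year moralizing gods first attested)
    ("Region A", -2500, -2100),
    ("Region B", -1800, -1200),
    ("Region C", -500, -300),
]

def gods_followed_complexity(threshold_year, gods_year):
    """True if moralizing gods are first attested after the complexity jump."""
    return gods_year > threshold_year

followed = sum(gods_followed_complexity(t, g) for _, t, g in records)
print(f"{followed}/{len(records)} regions: moralizing gods appear after the threshold")
```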

Conceptual Spaces

In a Nautilus piece, New Evidence for the Strange Geometry of Thought, Adithya Rajagopalan reports on the fascinating topic of conceptual or cognitive spaces. He begins with the work of the philosopher and cognitive scientist Peter Gärdenfors, who wrote about this in a 2000 book, Conceptual Spaces. Then last year, several neuroscientists, Jacob Bellmund, Christian Doeller, and Edvard Moser, published a paper in Science. It has to do with the brain’s “inner GPS.”

Anyone who has followed my blog for a while should see the interest this has for me. There is Julian Jaynes’ thought on consciousness, of course. And there are all kinds of other thinkers as well. I could throw out Iain McGilchrist and James L. Kugel who, though critical of Jaynes, make similar points about identity and the divided mind.

The work of Gärdenfors and the above neuroscientists helps explain numerous phenomena, specifically how splintering and dissociation operate. How a Nazi doctor could torture Jewish children at work and then go home to play with his own children. How the typical person can be pious at church on Sunday and yet act in complete contradiction to this for the rest of the week. How we can know that the world is being destroyed through climate change and still go on about our lives as if everything remains the same. How we can simultaneously know and not know so many things. Et cetera.

It might begin to give us some more details in explaining the differences between the bicameral mind and Jaynesian consciousness, between Ernest Hartmann’s thin and thick boundaries of the mind, and much else. Also, in light of Lynne Kelly’s work on traditional mnemonic systems, we might be in a better position to understand the phenomenal memory feats humans are capable of, why they are so often spatial in organization (e.g., the Songlines of Australian Aborigines), and why they so often involve shifts in mental states. It might also clarify how people can temporarily or permanently change personalities and identities, how people can compartmentalize parts of themselves such as their childhood selves, and maybe why others fail at compartmentalizing.

The potential significance is immense. Our minds are mansions with many rooms. Below is the meat of Rajagopalan’s article.

* * *

“Cognitive spaces are a way of thinking about how our brain might organize our knowledge of the world,” Bellmund said. It’s an approach that concerns not only geographical data, but also relationships between objects and experience. “We were intrigued by evidence from many different groups that suggested that the principles of spatial coding in the hippocampus seem to be relevant beyond the realms of just spatial navigation,” Bellmund said. The hippocampus’ place and grid cells, in other words, map not only physical space but conceptual space. It appears that our representation of objects and concepts is very tightly linked with our representation of space.

Work spanning decades has found that regions in the brain—the hippocampus and entorhinal cortex—act like a GPS. Their cells form a grid-like representation of the brain’s surroundings and keep track of its location on it. Specifically, neurons in the entorhinal cortex activate at evenly distributed locations in space: If you drew lines between each location in the environment where these cells activate, you would end up sketching a triangular grid, or a hexagonal lattice. The activity of these aptly named “grid” cells contains information that another kind of cell uses to locate your body in a particular place. The explanation of how these “place” cells work was stunning enough to earn scientists John O’Keefe, May-Britt Moser, and Edvard Moser the 2014 Nobel Prize in Physiology or Medicine. These cells activate only when you are in one particular location in space, or the grid, represented by your grid cells. Meanwhile, head-direction cells define which direction your head is pointing. Yet other cells indicate when you’re at the border of your environment—a wall or cliff. Rodent models have elucidated the nature of the brain’s spatial grids, but, with functional magnetic resonance imaging, they have also been validated in humans.

Recent fMRI studies show that cognitive spaces reside in the hippocampal network—supporting the idea that these spaces lie at the heart of much subconscious processing. For example, subjects of a 2016 study—headed by neuroscientists at Oxford—were shown a video of a bird’s neck and legs morphing in size. Previously they had learned to associate a particular bird shape with a Christmas symbol, such as Santa or a Gingerbread man. The researchers discovered the subjects made the connections with a “mental picture” that could not be described spatially, on a two-dimensional map. Yet grid-cell responses in the fMRI data resembled what one would see if subjects were imagining themselves walking in a physical environment. This kind of mental processing might also apply to how we think about our family and friends. We might picture them “on the basis of their height, humor, or income, coding them as tall or short, humorous or humorless, or more or less wealthy,” Doeller said. And, depending on whichever of these dimensions matters in the moment, the brain would store one friend mentally closer to, or farther from, another friend.
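The idea of coding friends along dimensions like height and humor, with similar people stored “closer” together, is essentially a distance metric over a feature space. A minimal sketch of that notion, with invented dimensions and values:

```python
import math

# Hypothetical cognitive space: each person is a point along two invented
# dimensions, (height in cm, humor on a 0-10 scale).
people = {
    "Ana":   (180, 8),
    "Ben":   (178, 7),
    "Clara": (155, 2),
}

def distance(a, b):
    """Euclidean distance between two points in the feature space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# On these dimensions, Ana is "mentally closer" to Ben than to Clara.
print(distance(people["Ana"], people["Ben"]))
print(distance(people["Ana"], people["Clara"]))
```

Which dimension matters in the moment could be modeled by weighting the terms of the sum; the geometry stays the same.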

But the usefulness of a cognitive space isn’t just restricted to already familiar object comparisons. “One of the ways these cognitive spaces can benefit our behavior is when we encounter something we have never seen before,” Bellmund said. “Based on the features of the new object we can position it in our cognitive space. We can then use our old knowledge to infer how to behave in this novel situation.” Representing knowledge in this structured way allows us to make sense of how we should behave in new circumstances.
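Bellmund’s point about novel objects (position them by their features, then borrow behavior from nearby knowledge) is, in computational terms, a nearest-neighbor lookup. A toy sketch with invented features and responses:

```python
# Known items in a hypothetical feature space of (size, speed),
# each mapped to a learned response. All values are invented.
known = {
    (1.0, 9.0): "flee",    # hawk-like
    (0.2, 1.0): "ignore",  # sparrow-like
}

def infer_behavior(novel, known):
    """Place the novel item in the space and borrow the nearest item's response."""
    nearest = min(
        known,
        key=lambda f: sum((a - b) ** 2 for a, b in zip(f, novel)),
    )
    return known[nearest]

# A never-before-seen item near the hawk-like point inherits "flee".
print(infer_behavior((0.9, 8.0), known))
```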

Data also suggests that this region may represent information with different levels of abstraction. If you imagine moving through the hippocampus, from the top of the head toward the chin, you will find many different groups of place cells that completely map the entire environment but with different degrees of magnification. Put another way, moving through the hippocampus is like zooming in and out on your phone’s map app. The area in space represented by a single place cell gets larger. Such size differences could be the basis for how humans are able to move between lower and higher levels of abstraction—from “dog” to “pet” to “sentient being,” for example. In this cognitive space, more zoomed-out place cells would represent a relatively broad category consisting of many types, while zoomed-in place cells would be more narrow.

Yet the mind is not just capable of conceptual abstraction but also flexibility—it can represent a wide range of concepts. To be able to do this, the regions of the brain involved need to be able to switch between concepts without any informational cross-contamination: It wouldn’t be ideal if our concept for bird, for example, were affected by our concept for car. Rodent studies have shown that when animals move from one environment to another—from a blue-walled cage to a black-walled experiment room, for example—place-cell firing is unrelated between the environments. Researchers looked at where cells were active in one environment and compared it to where they were active in the other. If a cell fired in the corner of the blue cage as well as the black room, there might be some cross-contamination between environments. The researchers didn’t see any such correlation in the place-cell activity. It appears that the hippocampus is able to represent two environments without confounding the two. This property of place cells could be useful for constructing cognitive spaces, where avoiding cross-contamination would be essential. “By connecting all these previous discoveries,” Bellmund said, “we came to the assumption that the brain stores a mental map, regardless of whether we are thinking about a real space or the space between dimensions of our thoughts.”
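The rodent comparison described above amounts to correlating two firing maps: where each cell fires in environment A versus environment B. Near-zero correlation is what “unrelated” means concretely. A minimal sketch with invented firing rates:

```python
# Correlate place-cell firing maps across two environments. A correlation
# near zero indicates the maps are independent, i.e. no cross-contamination.
# Firing rates below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Firing rate of five cells at matched locations in each environment.
env_a = [9.0, 1.0, 0.0, 8.0, 2.0]
env_b = [6.0, 7.0, 3.0, 2.0, 2.0]

r = pearson(env_a, env_b)
print(f"firing-map correlation: {r:.2f}")  # near zero: independent maps
```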

Other People’s Craziness

In a Facebook group dedicated to Julian Jaynes, I was talking to a lady who is an academic and a poet. She happened to mention that she is also a ‘Manbo’, something like a vodou practitioner. She made the admission that she sees and hears spirits, but she qualified it by saying that her rational mind knew it wasn’t real. I found that qualification odd, as if she were worried about maintaining her respectability. She made clear that these experiences weren’t make-believe, as they felt real to her, as real as anything else, and yet one side of her personality couldn’t quite take them as real. So, two different realities existed inside her and she seemed split between them.

None of this is particularly strange in a group like that. Many voice-hearers, for obvious reasons, are attracted to Jaynes’ view on voice-hearing. Jaynes took such experiences seriously and, to a large degree, took the experiences on their own terms. Jaynes offered a rational or rationalizing narrative for why it is ‘normal’ to hear voices. The desire to be normal is a powerful social force. Having a theory helps someone like this lady to compartmentalize the two aspects of her being and not feel overwhelmed. If she didn’t qualify her experience, she would be considered crazy by many others and maybe in her own mind. Her academic career might even be threatened. So, the demand of conformity is serious, with real consequences.

That isn’t what interested me, though. Our conversation happened in a post about the experience of falling under a trance while driving, such that one ends up where one was going without remembering how one got there. It’s a common experience and a key example Jaynes uses about how the human mind functions. I mentioned that many people have experiences of alien contact and UFO abduction while driving, often alone at night on some dark stretch of road. And I added that, according to Jacques Vallee and John Keel, many of these experiences match the descriptions of fairy abductions in folklore and the accounts of shamanic initiations. Her response surprised me in how critical it was.

Vallee also had two sides: on the one hand, an analytical type who worked as an astronomer and a computer scientist; on the other, a disreputable UFO researcher. He came at the UFO field from a scientific approach but, like Jaynes, he felt compelled to take people at their word in accepting that their experience was real to them. He even came to believe there was something to these experiences. It started when he was working in an observatory and, after he recorded anomalous data of something in the sky that wasn’t supposed to be there, the director of the observatory erased the tapes out of fear that, if it got out to the press, it would draw negative attention to the institution. That is what originally piqued his curiosity and started him down the road of UFO research. But he also came across many cases where entire groups of people, including military personnel, saw the same UFOs in the sky and their movements accorded with no known technology or physics.

That forced him to consider the possibility that people were seeing something that was on some level real, whatever it was. He went so far as to speculate about consciousness being much stranger than science could presently explain, that there really is more to the universe or at an angle to our universe. In this line of thought, he spoke of the phenomena as, “partly associated with a form of non-human consciousness that manipulates space and time.” Sure, to most people, that is crazy talk, though no more crazy than interacting with the spirit world. But the lady I was speaking with immediately dismissed this as going too far. Her anomalous experiences were fine, as long as she pretended that they were pretend or something, thus proving she wasn’t bat-shit loony. Someone else’s anomalous experience, however, was not to be taken seriously. It’s the common perception that only other people’s religion is mythology.

That amused me to no end. And I said that it amused me. She then blocked me. That amused me as well. I’m feeling amused. I was more willing to take her experiences as being valid in a way she was unwilling to do for others. It’s not that I had any skin in the game, as I’ve never talked to spirits nor been abducted by aliens. But I give people the benefit of the doubt that their experiences are real to them. I’m a radical skeptic and extreme agnostic. I take the world as it comes and sometimes the world is strange. No need to rationalize it. And if that strangeness is proof of insanity and disrepute, there are worse fates.

* * *

As for my own variety of crazy, I’ve always felt a kinship with Philip K. Dick. Below is what he wrote in justifying himself. Some people feel compelled to speak truth, no matter what. If that truth sounds crazy, maybe that is because we live in a society gone mad. Under such unhappy circumstances, there can be great comfort in feeling validated by someone speaking truth. So, maybe be kind toward the craziness and truths of other people. Here is what PKD has to say:

“What I have done may be good, it may be bad. But the reality that I discern is the true reality; thus I am basically analytical, not creative; my writing is simply a creative way of handling analysis. I am a fictionalizing philosopher, not a novelist; my novel and story-writing ability is employed as a means to formulate my perception. The core of my writing is not art, but truth. Thus what I tell is the truth, yet I can do nothing to alleviate it, either by deed or exploration. Yet this seems somehow to help a certain kind of sensitive and troubled person, for whom I speak. I think I understand the common ingredient in those whom my writing helps; they cannot or will not blunt their own intimations about the irrational, mysterious nature of reality, & for them my corpus of writing is one long ratiocination regarding this inexplicable reality, an investigation & presentation, analysis & response & personal history. My audience will always be limited to these people.”
(In Pursuit of Valis, p.161)

The Agricultural Mind

Let me make an argument about individualism, rigid egoic boundaries, and hence Jaynesian consciousness. But I’ll come at it from a less typical angle. I’ve been reading much about diet, nutrition, and health. With agriculture, the entire environment in which humans lived was fundamentally transformed: the rise of inequality and hierarchy, concentrated wealth and centralized power, not to mention the increase of parasites and diseases from urbanization and close cohabitation with farm animals (The World Around Us). We might be able to thank early agricultural societies, as an example, for introducing malaria to the world.

Maybe more importantly, there are significant links between what we eat and so much else: gut health, hormonal regulation, immune system, and neurocognitive functioning. There are multiple pathways, one of which is direct, connecting the gut and the brain (nervous system, immune system, hormonal system, etc). The vagus nerve is a recently discovered, direct and near-instantaneous gut-brain link, a possible explanation for the ‘gut sense’, with the key neurotransmitter glutamate modulating the rate of transmission in synaptic communication between enteroendocrine cells and vagal nerve neurons (Rich Haridy, Fast and hardwired: Gut-brain connection could lead to a “new sense”), and this is implicated in “episodic and spatial working memory” that might assist in the relocation of food sources (Rich Haridy, Researchers reveal how disrupting gut-brain communication may affect learning and memory). The gut is sometimes called the second brain because it also has neuronal cells, but in evolutionary terms it is the first brain. To demonstrate one example of a connection, many are beginning to refer to Alzheimer’s as type 3 diabetes, and dietary interventions have reversed symptoms in clinical studies. Also, gut microbes and ingested parasites have been shown to influence our neurocognition and psychology, even altering personality traits and behavior, such as with toxoplasma gondii. (For more discussion, see Fasting, Calorie Restriction, and Ketosis.)

One possibility to consider is the role of exorphins that are addictive and can be blocked in the same way as opioids. Exorphin, in fact, means external morphine-like substance, in the way that endorphin means indwelling morphine-like substance. Exorphins are found in milk and wheat. Milk, in particular, stands out. Even though exorphins are found in other foods, it’s been argued that they are insignificant because they theoretically can’t pass through the gut barrier, much less the blood-brain barrier. Yet exorphins have been measured elsewhere in the human body. One explanation is gut permeability (related to permeability throughout the body) that can be caused by many factors such as stress but also by milk. The purpose of milk is to get nutrients into the calf and this is done by widening the space in gut surface to allow more nutrients through the protective barrier. Exorphins get in as well and create a pleasurable experience to motivate the calf to drink more. Along with exorphins, grains and dairy also contain dopaminergic peptides, and dopamine is the other major addictive substance. It feels good to consume dairy as with wheat, whether you’re a calf or a human, and so one wants more. Think about that the next time you pour milk over cereal.

Addiction, of food or drugs or anything else, is a powerful force. And it is complex in what it affects, not only physiologically and psychologically but also on a social level. Johann Hari offers a great analysis in Chasing the Scream. He makes the case that addiction is largely about isolation and that the addict is the ultimate individual (see To Put the Rat Back in the Rat Park, Rationalizing the Rat Race, Imagining the Rat Park, & Individualism and Isolation), and by the way this connects to Jaynesian consciousness with its rigid egoic boundaries as opposed to the bundled and porous mind, the extended and enmeshed self of bicameralism and animism. It stands out to me that addiction and addictive substances have increased over the course of civilization, and I’ve argued that this is about a totalizing cultural system and a fully encompassing ideological worldview, what some call a reality tunnel (see discussion of addiction and social control in Diets and Systems & Western Individuality Before the Enlightenment Age). Growing of poppies, sugar, etc. came later on in civilization, as did the production of beer and wine — by the way, alcohol releases endorphins, sugar causes a serotonin high, and both activate the hedonic pathway. Also, grain and dairy were slow to catch on as a large part of the diet. Until recent centuries, most populations remained dependent on animal foods, including wild game (I discuss this era of dietary transition and societal transformation in numerous posts, with industrialization and technology pushing the already stressed agricultural mind to an extreme: Ancient Atherosclerosis?, To Be Fat And Have Bread, Autism and the Upper Crust, “Yes, tea banished the fairies.”, Voice and Perspective, Hubris of Nutritionism, Health From Generation To Generation, Dietary Health Across Generations, Moral Panic and Physical Degeneration, The Crisis of Identity, The Disease of Nostalgia, & Technological Fears and Media Panics).
Americans, for example, ate large amounts of meat, butter, and lard from the colonial era through the 19th century (see Nina Teicholz, The Big Fat Surprise; passage quoted in full at Malnourished Americans). In 1900, Americans on average were getting only 10% of their calorie intake from carbohydrates, and sugar intake was minimal, a potentially ketogenic diet considering how much lower in calories the average diet was back then.
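For a rough sense of what 10% of calories from carbohydrate means in ketogenic terms: carbohydrate provides about 4 kcal per gram, and roughly 50 g/day is a commonly cited threshold for nutritional ketosis. The daily calorie figure below is an assumption for illustration, not a historical datum:

```python
# Back-of-the-envelope: grams of carbohydrate per day implied by a given
# calorie intake and carb fraction. Carbohydrate is ~4 kcal per gram.

KCAL_PER_GRAM_CARB = 4

def carb_grams(daily_kcal, carb_fraction):
    return daily_kcal * carb_fraction / KCAL_PER_GRAM_CARB

# Assume a relatively low ~2,200 kcal daily intake (hypothetical figure).
grams = carb_grams(2200, 0.10)
print(f"{grams:.0f} g carbs/day")  # 55 g/day, near common ketogenic thresholds
```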

Something else to consider is that low-carb diets can alter how the body and brain function (the word ‘alter’ is inaccurate, though, since in evolutionary terms ketosis would’ve been the normal state; rather, the modern high-carb diet is altered from the biological norm). That is even more true if combined with intermittent fasting and restricted eating times that would have been more common in the past. Interestingly, this only applies to adults, since we know that babies remain in ketosis during breastfeeding, there is evidence that they are already in ketosis in utero, and humans apparently remain in ketosis well into the teen years: “It is fascinating to see that every single child, so far through age 16, is in ketosis even after a breakfast containing fruits and milk” (Angela A. Stanton, Children in Ketosis: The Feared Fuel). Taken together, earlier humans would have spent more time in ketosis (fat-burning mode, as opposed to glucose-burning), which dramatically affects human biology. The further one goes back in history, the greater amount of time people probably spent in ketosis. One difference with ketosis is that cravings and food addictions disappear. It’s a non-addictive or maybe even anti-addictive state of mind. (For more discussion of this topic, see previous posts: Fasting, Calorie Restriction, and Ketosis, Ketogenic Diet and Neurocognitive Health, Is Ketosis Normal?, & “Is keto safe for kids?”.) Many hunter-gatherer tribes can go days without eating and it doesn’t appear to bother them, as in Daniel Everett’s account of the Piraha, and that is typical of ketosis. This was also observed of Mongol warriors who could ride and fight for days on end without tiring or needing to stop for food. What is also different about hunter-gatherers and similar traditional societies is how communal they are or were and how much more expansive their identities were in belonging to a group.
Anthropological research shows how hunter-gatherers often have a sense of personal space that extends into the environment around them. What if that isn’t merely cultural but something to do with how their bodies and brains operate? Maybe diet even plays a role. Hold that thought for a moment.

Now go back to the two staples of the modern diet, grains and dairy. Besides exorphins and dopaminergic substances, they also have high levels of glutamate, as part of gluten and casein respectively. Dr. Katherine Reid is a biochemist whose daughter was diagnosed with severe autism. She went into research mode and experimented with supplementation and then diet. Many things seemed to help, but the greatest result came from restriction of dietary glutamate, a difficult challenge as it is a common food additive (see her TED talk here and another talk here or, for a short and informal video, look here). This requires going on a largely whole foods diet, that is to say eliminating processed foods. But when dealing with a serious issue, it is worth the effort. Dr. Reid’s daughter showed such immense improvement that she was kicked out of the special needs school. After being on this diet for a while, she socialized and communicated normally like any other child, something she was previously incapable of. Keep in mind that glutamate, as mentioned above, is necessary as a foundational neurotransmitter in modulating communication between the gut and brain. But typically we only get small amounts of it, as opposed to the large doses found in the modern diet. In response to the TED Talk given by Reid, Georgia Ede commented: “Unclear if glutamate is main culprit, b/c a) little glutamate crosses blood-brain barrier; b) anything that triggers inflammation/oxidation (i.e. refined carbs) spikes brain glutamate production.” Either way, glutamate plays a powerful role in brain functioning. And no matter the exact line of causation, industrially processed foods in the modern diet would be involved.
By the way, an exacerbating factor might be mercury in its relation to anxiety and adrenal fatigue, as it ramps up the fight or flight system via over-sensitizing the glutamate pathway — could this be involved in conditions like autism where emotional sensitivity is a symptom? Mercury and glutamate simultaneously increasing in the modern world demonstrates how industrialization can push the effects of the agricultural diet to ever further extremes.

Glutamate is also implicated in schizophrenia: “The most intriguing evidence came when the researchers gave germ-free mice fecal transplants from the schizophrenic patients. They found that “the mice behaved in a way that is reminiscent of the behavior of people with schizophrenia,” said Julio Licinio, who co-led the new work with Wong, his research partner and spouse. Mice given fecal transplants from healthy controls behaved normally. “The brains of the animals given microbes from patients with schizophrenia also showed changes in glutamate, a neurotransmitter that is thought to be dysregulated in schizophrenia,” he added. The discovery shows how altering the gut can influence an animal’s behavior” (Roni Dengler, Researchers Find Further Evidence That Schizophrenia is Connected to Our Guts; reporting on Peng Zheng et al, The gut microbiome from patients with schizophrenia modulates the glutamate-glutamine-GABA cycle and schizophrenia-relevant behaviors in mice, Science Advances journal). And glutamate is involved in other conditions as well, such as in relation to GABA: “But how do microbes in the gut affect [epileptic] seizures that occur in the brain? Researchers found that the microbe-mediated effects of the Ketogenic Diet decreased levels of enzymes required to produce the excitatory neurotransmitter glutamate. In turn, this increased the relative abundance of the inhibitory neurotransmitter GABA. Taken together, these results show that the microbe-mediated effects of the Ketogenic Diet have a direct effect on neural activity, further strengthening support for the emerging concept of the ‘gut-brain’ axis.” (Jason Bush, Important Ketogenic Diet Benefit is Dependent on the Gut Microbiome). Glutamate is one neurotransmitter among many that can be affected in a similar manner; e.g., serotonin is also produced in the gut.

That reminds me of propionate, a short chain fatty acid. It is another substance normally taken in at a low level. Certain foods, including grains and dairy, contain it. The problem is that, as a useful preservative, it has been generously added to the food supply. Research on rodents shows injecting them with propionate causes autistic-like behaviors. And other rodent studies show how this stunts learning ability and causes repetitive behavior (both related to the autistic demand for the familiar), as too much propionate entrenches mental patterns through the mechanism that gut microbes use to communicate to the brain how to return to a needed food source, similar to the related function of glutamate. A recent study shows that propionate not only alters brain functioning but brain development (L.S. Abdelli et al, Propionic Acid Induces Gliosis and Neuro-inflammation through Modulation of PTEN/AKT Pathway in Autism Spectrum Disorder). As reported by Suhtling Wong-Vienneau at the University of Central Florida, “when fetal-derived neural stem cells are exposed to high levels of Propionic Acid (PPA), an additive commonly found in processed foods, it decreases neuron development” (Processed Foods May Hold Key to Rise in Autism). This study “is the first to discover the molecular link between elevated levels of PPA, proliferation of glial cells, disturbed neural circuitry and autism.” The impact is profound and permanent — Pedersen offers the details:

“In the lab, the scientists discovered that exposing neural stem cells to excessive PPA damages brain cells in several ways: First, the acid disrupts the natural balance between brain cells by reducing the number of neurons and over-producing glial cells. And although glial cells help develop and protect neuron function, too many glia cells disturb connectivity between neurons. They also cause inflammation, which has been noted in the brains of autistic children. In addition, excessive amounts of the acid shorten and damage pathways that neurons use to communicate with the rest of the body. This combination of reduced neurons and damaged pathways hinder the brain’s ability to communicate, resulting in behaviors that are often found in children with autism, including repetitive behavior, mobility issues and inability to interact with others.”

So, the autistic brain develops according to higher levels of propionate and maybe becomes accustomed to them. A state of dysfunction becomes what feels normal. Propionate causes inflammation and, as Dr. Ede points out, “anything that triggers inflammation/oxidation (i.e. refined carbs) spikes brain glutamate production”. High levels of propionate and glutamate become part of the state of mind the autistic becomes identified with. It all links together. Autistics, along with craving foods containing propionate (and glutamate), tend to have larger populations of a particular gut microbe that produces propionate. This might be why antibiotics, which kill gut microbes, can help with autism. In the case of depression, by contrast, gut issues are associated with a lack of certain microbes that produce butyrate, another important substance that is also found in certain foods (Mireia Valles-Colomer et al., The neuroactive potential of the human gut microbiota in quality of life and depression). Depending on the specific gut dysbiosis, diverse neurocognitive conditions can result. And in affecting the microbiome, changes in autism can be achieved through a ketogenic diet, which temporarily reduces the microbiome (much as an antibiotic does) — this presumably takes care of the problematic microbes and readjusts the gut from dysbiosis to a healthier balance. Ketosis would also reduce the inflammation that is associated with glutamate production.

As with propionate, exorphins injected into rats will likewise elicit autistic-like behaviors. By two different pathways, the body derives exorphins and propionate from the consumption of grains and dairy: the former from the breakdown of proteins, the latter produced by gut bacteria in the breakdown of some grains and refined carbohydrates (on top of the propionate used as a food additive; and, at least in rodents, artificial sweeteners also increase propionate levels). This is part of the explanation for why many autistics have responded well to low-carb ketosis, specifically paleo diets that restrict both wheat and dairy. But ketones themselves play a role as well: they use the same transporters as propionate and so block its buildup in cells. And, of course, ketones offer cells a different energy source to replace glucose, which alters how cells function, specifically neurocognitive functioning and its attendant psychological effects.

There are some other factors to consider as well. With agriculture came a diet high in starchy carbohydrates and sugar. This inevitably leads to increased metabolic syndrome, including diabetes. And diabetes in pregnant women is associated with autism and attention deficit disorder in children. “Maternal diabetes, if not well treated, which means hyperglycemia in utero, that increases uterine inflammation, oxidative stress and hypoxia and may alter gene expression,” explained Anny H. Xiang. “This can disrupt fetal brain development, increasing the risk for neural behavior disorders, such as autism” (Maternal HbA1c influences autism risk in offspring). By the way, other factors such as getting more seed oils and fewer B vitamins also contribute to metabolic syndrome and altered gene expression, including epigenetic inheritance, not to mention mutagenic changes to the genes themselves. The increase of diabetes itself, not a mere increase in diagnoses, could partly explain the greater prevalence of autism over time. Grain surpluses only became available in the 1800s, around the time when refined flour and sugar began to become common. It wasn’t until the following century that carbohydrates finally overtook animal foods as the mainstay of the diet, specifically in terms of what is most regularly eaten throughout the day in both meals and snacks — a constant influx of glucose into the system.

A further contributing factor in modern agriculture is that of pesticides, also associated with autism. Consider DDE, a breakdown product of DDT, which has been banned for decades but still lingers in the environment. “The odds of autism among children were increased, by 32 percent, in mothers whose DDE levels were high (high was, comparatively, 75th percentile or greater),” one study found (Aditi Vyas & Richa Kalra, Long lingering pesticides may increase risk for autism: Study). “Researchers also found,” the article reports, “that the odds of having children on the autism spectrum who also had an intellectual disability were increased more than two-fold when the mother’s DDE levels were high.” A different study showed a broader effect in terms of 11 pesticides still in use:

“They found a 10 percent or more increase in rates of autism spectrum disorder, or ASD, in children whose mothers lived during pregnancy within about a mile and a quarter of a highly sprayed area. The rates varied depending on the specific pesticide sprayed, and glyphosate was associated with a 16 percent increase. Rates of autism spectrum disorders combined with intellectual disability increased by even more, about 30 percent. Exposure after birth, in the first year of life, showed the most dramatic impact, with rates of ASD with intellectual disability increasing by 50 percent on average for children who lived within the mile-and-a-quarter range. Those who lived near glyphosate spraying showed the most increased risk, at 60 percent” (Nicole Ferox, It’s Personal: Pesticide Exposures Come at a Cost).

So far, my focus has been on what we ingest or are otherwise exposed to because of agriculture and the food system, in general and more specifically in industrialized society with its refined, processed, and adulterated foods, largely from plants. But the other side of the picture is what we are lacking, what we are deficient in. An agricultural diet hasn’t only increased certain foods and substances but simultaneously decreased others. What promoted optimal health throughout human evolution has, in many cases, been displaced or blocked. Agriculture is highly destructive and has depleted nutrient levels in the soil (see Carnivore Is Vegan) and, along with this, even animal foods produced within the agricultural system are similarly depleted of nutrients compared to animal foods from pastured or free-range sources. For example, fat-soluble vitamins (true vitamin A as retinol, vitamin D3, vitamin K2 not to be confused with K1, and the vitamin E complex) are not found in plant foods and are found in far lower concentrations in foods from factory-farmed animals or animals grazing on poor agricultural soil, a problem worsened by erosion and desertification.

One of the biggest changes with agriculture was the decrease of fatty animal foods that were nutrient-dense and nutrient-bioavailable. The fat is where the fat-soluble vitamins are found, and fat is necessary for their absorption (hence, fat-soluble); and these key nutrients relate to almost everything else, such as the minerals calcium and magnesium that are also found in animal foods (Calcium: Nutrient Combination and Ratios); the relationship of seafood to the balance of sodium, magnesium, and potassium is central (On Salt: Sodium, Trace Minerals, and Electrolytes) and indeed populations that eat more seafood live longer. These animal foods used to hold the prized position in the human diet, and in the earlier hominid diet as well, as part of our evolutionary inheritance from millions of years of adaptation to a world where fatty animals once were abundant (J. Tyler Faith, John Rowan & Andrew Du, Early hominins evolved within non-analog ecosystems). That was definitely true in the paleolithic before the megafauna die-off, but even to this day hunter-gatherers, when they have access to traditional territory and prey, will seek out the fattest animals available, entirely ignoring lean animals because rabbit sickness is worse than hunger (humans can always fast for many days or weeks, if necessary).

It wasn’t only fat-soluble vitamins that were lost, though. Humans traditionally ate nose-to-tail and this brought with it a plethora of nutrients, even some thought of as being sourced only from plant foods. In its raw or lightly cooked form, meat has more than enough vitamin C for a low-carb diet; whereas a high-carb diet, since glucose competes with vitamin C, requires a higher intake of this antioxidant (see Sailors’ Rations, a High-Carb Diet). Also, consider that prebiotics can be found in animal foods as well, and animal-based prebiotics likely feed a very different kind of microbiome that could shift so much else in the body, such as neurotransmitter production: “I found this list of prebiotic foods that were non-carbohydrate that included cellulose, cartilage, collagen, fructooligosaccharides, glucosamine, rabbit bone, hair, skin, glucose. There’s a bunch of things that are all — there’s also casein. But these tend to be some of the foods that actually have some of the highest prebiotic content” (from Vanessa Spina, as quoted in Fiber or Not: Short-Chain Fatty Acids and the Microbiome). Let me briefly mention fat-soluble vitamins again in making a point about other animal-based nutrients. Fat-soluble vitamins, similar to ketosis and autophagy, have a profound effect on human biological functioning, including that of the mind (see the work of Weston A. Price as discussed in Health From Generation To Generation; also see the work of those described in Physical Health, Mental Health). In many ways, they are closer to hormones than mere nutrients, as they orchestrate entire systems in the body and how other nutrients get used, as seen particularly with vitamin K2, which Weston A. Price discovered and called “Activator X” (found only in animal and fermented foods, not in whole or industrially-processed plant foods). I bring this up because some other animal-based nutrients play a similarly important role. Consider glycine, the main amino acid in collagen. It is available in connective tissues and can be obtained through soups and broths made from bones, skin, ligaments, cartilage, and tendons. Glycine is right up there with the fat-soluble vitamins in being central to numerous systems, processes, and organs.

As I’ve already discussed glutamate at great length, let me further that discussion by pointing out a key link. “Glycine is found in the spinal cord and brainstem where it acts as an inhibitory neurotransmitter via its own system of receptors,” writes Afifah Hamilton. “Glycine receptors are ubiquitous throughout the nervous system and play important roles during brain development. [Ito, 2016] Glycine also interacts with the glutaminergic neurotransmission system via NMDA receptors, where both glycine and glutamate are required, again, chiefly exerting inhibitory effects” (10 Reasons To Supplement With Glycine). Hamilton elucidates the dozens of roles played by this master nutrient and the diverse conditions that follow from its deprivation or insufficiency — it’s implicated in obsessive compulsive disorder, schizophrenia, and alcohol use disorder, along with much else such as metabolic syndrome. But its relationship to glutamate really stands out for this discussion. “Glutathione is synthesised,” Hamilton further explains, “from the amino acids glutamate, cysteine, and glycine, but studies have shown that the rate of synthesis is primarily determined by levels of glycine in the tissue. If there is insufficient glycine available the glutathione precursor molecules are excreted in the urine. Vegetarians excrete 80% more of these precursors than their omnivore counterparts indicating a more limited ability to complete the synthesis process.” Did you catch what she is saying there? Autistics already have too much glutamate and, if they are deficient in glycine, they won’t be able to convert that glutamate into the important glutathione. When the body is overwhelmed with unused glutamate, it does what it can to eliminate it, but when constantly flooded with a high-glutamate intake it can’t keep up. The excess glutamate then wreaks havoc on neurocognitive functioning.

The whole mess of the agricultural diet, specifically in its modern industrialized form, has been a constant onslaught taxing our bodies and minds. And the consequences are worsening with each generation. What stands out to me about autism, in particular, is how isolating it is. The repetitive behavior and focus on objects to the exclusion of human relationships resonates with how addiction isolates the individual. As with other conditions influenced by diet (schizophrenia, ADHD, etc.), both autism and addiction block normal human relating in creating an obsessive mindset that, in the most extreme forms, blocks out all else. I wonder if all of us moderns are simply expressing milder varieties of this biological and neurological phenomenon. And this might be the underpinning of our hyper-individualistic society, with the earliest precursors showing up in the Axial Age following what Julian Jaynes hypothesized as the breakdown of the much more other-oriented bicameral mind. What if our egoic consciousness with its rigid psychological boundaries is the result of our food system, as part of the civilizational project of mass agriculture?

* * *

Mongolian Diet and Fasting:

For anyone who is curious to learn more, the original point of interest for me was a quote by Jack Weatherford in his book Genghis Khan and the Making of the Modern World: “The Chinese noted with surprise and disgust the ability of the Mongol warriors to survive on little food and water for long periods; according to one, the entire army could camp without a single puff of smoke since they needed no fires to cook. Compared to the Jurched soldiers, the Mongols were much healthier and stronger. The Mongols consumed a steady diet of meat, milk, yogurt, and other dairy products, and they fought men who lived on gruel made from various grains. The grain diet of the peasant warriors stunted their bones, rotted their teeth, and left them weak and prone to disease. In contrast, the poorest Mongol soldier ate mostly protein, thereby giving him strong teeth and bones. Unlike the Jurched soldiers, who were dependent on a heavy carbohydrate diet, the Mongols could more easily go a day or two without food.” By the way, that biography was written by an anthropologist who lived among and studied the Mongols for years. It is about the historical Mongols, but filtered through the direct experience of still existing Mongol people who have maintained a traditional diet and lifestyle longer than most other populations. Their diet was ketogenic not only because it was low-carb but also because it involved fasting.

From Mongolia, the Tangut Country, and the Solitudes of Northern Tibet, Volume 1 (1876), Nikolaĭ Mikhaĭlovich Przhevalʹskiĭ writes in the second note on p. 65 under the section Calendar and Year-Cycle: “On the New Year’s Day, or White Feast of the Mongols, see ‘Marco Polo’, 2nd ed. i. p. 376-378, and ii. p. 543. The monthly festival days, properly for the Lamas days of fasting and worship, seem to differ locally. See note in same work, i. p. 224, and on the Year-cycle, i. p. 435.” This is alluded to in another text, which describes how such practices as fasting were the norm of that time: “It is well known that both medieval European and traditional Mongolian cultures emphasized the importance of eating and drinking. In premodern societies these activities played a much more significant role in social intercourse as well as in religious rituals (e.g., in sacrificing and fasting) than nowadays” (Antti Ruotsala, Europeans and Mongols in the middle of the thirteenth century, 2001). A science journalist trained in biology, Dyna Rochmyaningsih, also mentions this: “As a spiritual practice, fasting has been employed by many religious groups since ancient times. Historically, ancient Egyptians, Greeks, Babylonians, and Mongolians believed that fasting was a healthy ritual that could detoxify the body and purify the mind” (Fasting and the Human Mind).

Mongol shamans and priests fasted, no different than in so many other religions, but so did other Mongols — more from Przhevalʹskiĭ’s 1876 account showing the standard feast-and-fast cycle of many traditional ketogenic diets: “The gluttony of this people exceeds all description. A Mongol will eat more than ten pounds of meat at one sitting, but some have been known to devour an average-sized sheep in twenty-four hours! On a journey, when provisions are economized, a leg of mutton is the ordinary daily ration for one man, and although he can live for days without food, yet, when once he gets it, he will eat enough for seven” (see more quoted material in Diet of Mongolia). Fasting was also noted of earlier Mongols, such as Genghis Khan: “In the spring of 1211, Jenghis Khan summoned his fighting forces […] For three days he fasted, neither eating nor drinking, but holding converse with the gods. On the fourth day the Khakan emerged from his tent and announced to the exultant multitude that Heaven had bestowed on him the boon of victory” (Michael Prawdin, The Mongol Empire, 1967). Even before he became Khan, this was his practice, as was common among the Mongols, such that it became a communal ritual for the warriors:

“When he was still known as Temujin, without tribe and seeking to retake his kidnapped wife, Genghis Khan went to Burkhan Khaldun to pray. He stripped off his weapons, belt, and hat – the symbols of a man’s power and stature – and bowed to the sun, sky, and mountain, first offering thanks for their constancy and for the people and circumstances that sustained his life. Then, he prayed and fasted, contemplating his situation and formulating a strategy. It was only after days in prayer that he descended from the mountain with a clear purpose and plan that would result in his first victory in battle. When he was elected Khan of Khans, he again retreated into the mountains to seek blessing and guidance. Before every campaign against neighboring tribes and kingdoms, he would spend days in Burkhan Khaldun, fasting and praying. By then, the people of his tribe had joined in on his ritual at the foot of the mountain, awaiting his return” (Dr. Hyun Jin Preston Moon, Genghis Khan and His Personal Standard of Leadership).

As an interesting side note, the Mongol population has been studied to some extent in one area of relevance. In Down’s Anomaly (1976), Smith et al. write that, “The initial decrease in the fasting blood sugar was greater than that usually considered normal and the return to fasting blood sugar level was slow. The results suggested increased sensitivity to insulin. Benda reported the initial drop in fasting blood sugar to be normal but the absolute blood sugar level after 2 hours was lower for mongols than for controls.” That is probably the result of a traditional low-carb diet that had been maintained continuously since before history. For some further context, I noticed some discussion about the Mongolian keto diet (Reddit, r/keto, TIL that Ghenghis Khan and his Mongol Army ate a mostly keto based diet, consisting of lots of milk and cheese. The Mongols were specially adapted genetically to digest the lactase in milk and this made them easier to feed.) that was inspired by the scientific documentary “The Evolution of Us” (presently available on Netflix and elsewhere).

* * *

3/30/19 – An additional comment: I briefly mentioned that sugar causes a serotonin high and activates the hedonic pathway. I also noted that it was late in civilization when sources of sugar were cultivated and, I could add, even later when sugar became cheap enough to be common. Even into the 1800s, sugar was minimal and still often considered more as medicine than food.

To extend this thought, it isn’t only sugar in general but specific forms of it. Fructose, in particular, has become widespread because the United States government subsidizes corn agriculture, creating a greater corn yield than humans can consume. So, what doesn’t get fed to animals or turned into ethanol is mostly made into high-fructose corn syrup and then added to almost every processed food and beverage imaginable.

Fructose is not like other sugars. This was important for early hominid survival and so shaped human evolution. It might have played a role in fasting and feasting. In 100 Million Years of Food, Stephen Le writes that, “Many hypotheses regarding the function of uric acid have been proposed. One suggestion is that uric acid helped our primate ancestors store fat, particularly after eating fruit. It’s true that consumption of fructose induces production of uric acid, and uric acid accentuates the fat-accumulating effects of fructose. Our ancestors, when they stumbled on fruiting trees, could gorge until their fat stores were pleasantly plump and then survive for a few weeks until the next bounty of fruit was available” (p. 42).

That makes sense to me, but he goes on to argue against this possible explanation. “The problem with this theory is that it does not explain why only primates have this peculiar trait of triggering fat storage via uric acid. After all, bears, squirrels, and other mammals store fat without using uric acid as a trigger.” This is where Le’s knowledge is lacking, for he never discusses ketosis, which has been centrally important for humans in a way it is not for other animals. If uric acid increases fat production, that would be helpful for fattening up before the next starvation period, when the body returned to ketosis. So, it would be a regular switching back and forth between the formation of uric acid that stores fat and the formation of ketones that burns fat.

That is fine and dandy under natural conditions. Excess fructose on a continuous basis, however, is a whole other matter. It has been strongly associated with metabolic syndrome. One pathway of causation is the increased production of uric acid. This can lead to gout but other things as well. It’s a mixed bag. “While it’s true that higher levels of uric acid have been found to protect against brain damage from Alzheimer’s, Parkinson’s, and multiple sclerosis, high uric acid unfortunately increases the risk of brain stroke and poor brain function” (Le, p. 43).

The potential side effects of uric acid overdose are related to other problems I’ve discussed in relation to the agricultural mind. “A recent study also observed that high uric acid levels are associated with greater excitement-seeking and impulsivity, which the researchers noted may be linked to attention deficit hyperactivity disorder (ADHD)” (Le, p. 43). The problems of sugar go far beyond mere physical disease. It’s one more factor in the drastic transformation of the human mind.

* * *

4/2/19 – More info: There are certain animal fats, the omega-3 fatty acids EPA and DHA, that are essential to human health (Georgia Ede, The Brain Needs Animal Fat). These were abundant in the hunter-gatherer diet. But over the history of agriculture, they have become less common.

This is associated with psychiatric disorders and general neurocognitive problems, including those already mentioned above in the post. Agriculture and industrialization have replaced these healthy lipids with industrially-processed seed oils that are high in linoleic acid (LA), an omega-6 fatty acid. LA interferes with the body’s use of omega-3 fatty acids. Worse still, these seed oils appear not only to alter gene expression (epigenetics) but also to be mutagenic, a possible causal factor behind conditions like autism (Dr. Catherine Shanahan On Dietary Epigenetics and Mutations).

The loss of healthy animal fats in the diet might be directly related to numerous conditions. “Children who lack DHA are more likely to have increased rates of neurological disorders, in particular attention deficit hyperactivity disorder (ADHD), and autism” (Maria Cross, Why babies need animal fat).

“Biggest dietary change in the last 60 years has been avoidance of animal fat. Coincides with a huge uptick in autism incidence. The human brain is 60 percent fat by weight. Much more investigation needed on correspondence between autism and prenatal/child ingestion of dietary fat.”
~ Brad Lemley

The agricultural diet, along with a drop in animal foods, saw a loss of access to the high levels and full profile of B vitamins. As with the later industrial seed oils, this had a major impact on genetics:

“The phenomenon wherein specific traits are toggled up and down by variations in gene expression has recently been recognized as a result of the built-in architecture of DNA and dubbed “active adaptive evolution.” 44

“As further evidence of an underlying logic driving the development of these new autism-related mutations, it appears that epigenetic factors activate the hotspot, particularly a kind of epigenetic tagging called methylation. 45 In the absence of adequate B vitamins, specific areas of the gene lose these methylation tags, exposing sections of DNA to the factors that generate new mutations. In other words, factors missing from a parent’s diet trigger the genome to respond in ways that will hopefully enable the offspring to cope with the new nutritional environment. It doesn’t always work out, of course, but that seems to be the intent.”
~Catherine Shanahan, Deep Nutrition, p. 56

And one last piece of evidence on the essential nature of animal fats:

“Maternal intake of fish, a key source of fatty acids, has been investigated in association with child neurodevelopmental outcomes in several studies. […]

“Though speculative at this time, the inverse association seen for those in the highest quartiles of intake of ω-6 fatty acids could be due to biological effects of these fatty acids on brain development. PUFAs have been shown to be important in retinal and brain development in utero (37) and to play roles in signal transduction and gene expression and as components of cell membranes (38, 39). Maternal stores of fatty acids in adipose tissue are utilized by the fetus toward the end of pregnancy and are necessary for the first 2 months of life in a crucial period of development (37). The complex effects of fatty acids on inflammatory markers and immune responses could also mediate an association between PUFA and ASD. Activation of the maternal immune system and maternal immune aberrations have been previously associated with autism (5, 40, 41), and findings suggest that increased interleukin-6 could influence fetal brain development and increase risk of autism and other neuropsychiatric conditions (42–44). Although results for effects of ω-6 intake on interleukin-6 levels are inconsistent (45, 46), maternal immune factors potentially could be affected by PUFA intake (47). […]

“Our results provide preliminary evidence that increased maternal intake of ω-6 fatty acids could reduce risk of offspring ASD and that very low intakes of ω-3 fatty acids and linoleic acid could increase risk.”
~Kristen Lyall et al, Maternal Dietary Fat Intake in Association With Autism Spectrum Disorders

* * *

6/13/19 – About the bicameral mind, I saw some other evidence for it in relationship to fasting. In the following quote, it is described that after ten days of fasting ancient humans would experience spirits. One thing for certain is that one can be fully in ketosis in three days. This would be true even if it wasn’t total fasting, as the caloric restriction would achieve the same end.

The author, Michael Carr, doesn’t think fasting was the cause of the spirit visions, but he doesn’t explain the reason(s) for his doubt. There is a long history of fasting used to achieve this intended outcome. If fasting was ineffective for this purpose, why has nearly every known traditional society for millennia used such methods? These people knew what they were doing.

By the way, imbibing alcohol after the fast would really knock someone into an altered state. The body becomes even more sensitive to alcohol when in ketogenic state during fasting. Combine this altered state with ritual, setting, cultural expectation, and archaic authorization. I don’t have any doubt that spirit visions could easily be induced.

Reflections on the Dawn of Consciousness
ed. by Marcel Kuijsten
Kindle Location 5699-5718

Chapter 13
The Shi ‘Corpse/ Personator’ Ceremony in Early China
by Michael Carr

““Ritual Fasts and Spirit Visions in the Liji” 37 examined how the “Record of Rites” describes zhai 齋 ‘ritual fasting’ that supposedly resulted in seeing and hearing the dead. This text describes preparations for an ancestral sacrifice that included divination for a suitable day, ablution, contemplation, and a fasting ritual with seven days of sanzhai 散 齋 ‘relaxed fasting; vegetarian diet; abstinence (esp. from sex, meat, or wine)’ followed by three days of zhizhai 致 齋 ‘strict fasting; diet of grains (esp. gruel) and water’.

“Devoted fasting is inside; relaxed fasting is outside. During fast-days, one thinks about their [the ancestor’s] lifestyle, their jokes, their aspirations, their pleasures, and their affections. [After] fasting three days, then one sees those [spirits] for whom one fasted. On the day of the sacrifice, when one enters the temple, apparently one must see them at the spirit-tablet. When one returns to go out the door [after making sacrifices], solemnly one must hear sounds of their appearance. When one goes out the door and listens, emotionally one must hear sounds of their sighing breath. 38

“This context unequivocally uses biyou 必 有 ‘must be/ have; necessarily/ certainly have’ to describe events within the ancestral temple; the faster 必 有 見 “must have sight of, must see” and 必 有 聞 “must have hearing of, must hear” the deceased parent. Did 10 days of ritual fasting and mournful meditation necessarily cause visions or hallucinations? Perhaps the explanation is extreme or total fasting, except that several Liji passages specifically warn against any excessive fasts that could harm the faster’s health or sense perceptions. 39 Perhaps the explanation is inebriation from drinking sacrificial jiu 酒 ‘(millet) wine; alcohol’ after a 10-day fast. Based on measurements of bronze vessels and another Liji passage describing a shi personator drinking nine cups of wine, 40 York University professor of religious studies Jordan Paper calculates an alcohol equivalence of “between 5 and 8 bar shots of eighty-proof liquor.” 41 On the other hand, perhaps the best explanation is the bicameral hypothesis, which provides a far wider-reaching rationale for Chinese ritual hallucinations and personation of the dead.”

* * *

7/16/19 – One common explanation for autism is the extreme male brain theory. A recent study may have come up with supporting evidence (Christian Jarrett, Autistic boys and girls found to have “hypermasculinised” faces – supporting the Extreme Male Brain theory). Autistics, including females, tend to have hypermasculinised faces. This might be caused by greater exposure to testosterone in the womb.

This immediately made me wonder how this relates. Changes in diet alter hormonal functioning. Endocrinology, the study of hormones, has been a major part of the diet debate going back to European researchers from early last century (as discussed by Gary Taubes). Diet affects hormones and hormones in turn affect diet. But I had something more specific in mind.

What about propionate and glutamate? What might their relationship be to testosterone? In a brief search, I couldn’t find anything about propionate. But I did find some studies related to glutamate. There is an impact on the endocrine system, although these studies weren’t looking at the results in terms of autism specifically or neurocognitive development in general. It points to some possibilities, though.

One could extrapolate from one of these studies that increased glutamate in the pregnant mother’s diet could alter what testosterone does to the developing fetus, in that testosterone increases the toxicity of glutamate which might not be a problem under normal conditions of lower glutamate levels. This would be further exacerbated during breastfeeding and later on when the child began eating the same glutamate-rich diet as the mother.

Testosterone increases neurotoxicity of glutamate in vitro and ischemia-reperfusion injury in an animal model
by Shao-Hua Yang et al

Effect of Monosodium Glutamate on Some Endocrine Functions
by Yonetani Shinobu and Matsuzawa Yoshimasa

* * *