Moral Panic and Physical Degeneration

From the beginning of the country, there has been an American fear of moral and mental decline, a fear always rooted in the physical, involving the vitality of the land and the health of the body, and built on an ancient divide between the urban and rural. Over time, it rose to a fever pitch of moral panic about the degeneration and degradation of WASP culture, the white race, and maybe civilization itself. Some believed the end was near, that society might hold out for another few generations before finally succumbing to disease and weakness. The need for revitalization and rebirth became a collective project (Jackson Lears, Rebirth of a Nation), which sadly fed into ethno-nationalist bigotry and imperialistic war-mongering — Make America Great Again!

A major point of crisis, of course, was the Civil War. Racial ideology became predominant, not only because of slavery but maybe more so because of mass immigration, the latter being the main reason the North won. Racial tensions merged with the developing scientific mindset of Darwinism, and out of this mix came eugenics. For all that we can now dismiss this kind of simplistic ignorance and, with hindsight, see the danger it led to, the underlying anxieties were real. Urbanization and industrialization were having an obvious impact on public health, an impact observed by many, and it wasn’t limited to mere physical ailments. “Cancer, like insanity, seems to increase with the progress of civilization,” noted Stanislas Tanchou, a mid-19th century French physician.

The diseases of civilization, including mental sickness, have been spreading for centuries (millennia, actually, considering that ‘modern’ chronic health conditions were first detected in the mummies of the agricultural Egyptians). Consider how talk of depression suddenly showed up in written accounts with the ending of feudalism (Barbara Ehrenreich, Dancing in the Streets). That era included the enclosure movement, which forced millions of newly landless peasants into the desperate conditions of crowded cities and colonies, where they faced stress, hunger, malnutrition, and disease. The loss of rural life hit Europe much earlier than America, but it eventually came here as well. The majority of white Americans were urban by the beginning of the 20th century, and the majority of black Americans were urban by the 1970s. There has been a consistent pattern of mass problems following urbanization, everywhere it happens. It is still happening. The younger generation, more urbanized than any before it, is seeing rising rates of psychosis that are specifically concentrated in the most urbanized areas.

In the United States, the last decades of the 19th century were the turning point, the period of the first truly big cities. Into this milieu, Weston A. Price was born (1870) in a small rural village in Canada. As an adult, he became a dentist and sought work in Cleveland, Ohio (1893). Initially, most of his patients probably had, like him, grown up in rural areas. But over the decades, he was increasingly exposed to younger generations who had spent their entire lives in the city. Lierre Keith puts Price’s early observations in context, after pointing out that he started his career in 1893: “This date is important, as he entered the field just prior to the glut of industrial food. Over the course of the next thirty years, he watched children’s dentition — and indeed their overall health — deteriorate. There were suddenly children whose teeth didn’t fit in their mouths, children with foreshortened jaws, children with lots of cavities. Not only were their dental arches too small, but he noticed their nasal passages were also too narrow, and they had poor health overall: asthma, allergies, behavioral problems” (The Vegetarian Myth, p. 187). This was at a time when the industrialization of farming and food had reached a new level, far beyond the limited availability of canned foods in the mid-to-late 1800s, when most Americans still relied on a heavy amount of wild-sourced meat, fish, nuts, etc. Even city-dwellers in early America had ready access to wild game because of the abundance of surrounding wilderness areas. In fact, in the 19th century, the average American ate more meat (mostly hunted) than bread.

We are once again caught up in the ever-recurrent moral panic about the civilizational project. The same fears given voice in the late 19th and early 20th centuries are being repeated. For example, Dr. Leonard Sax alerts us to how girls are sexually maturing earlier (1% of female infants showing signs of puberty), whereas boys are maturing later. As a comparison, hunter-gatherers don’t have such a large gender disparity in puberty, nor do girls experience puberty so early; instead, both genders typically come to puberty around 18 years old, with sex, pregnancy, and marriage happening more or less simultaneously. Dr. Sax, along with others, speculates about a number of reasons. Common causes held responsible include health factors, from diet to chemicals. Beyond altered puberty, many other examples could be added: heart disease, autoimmune disorders, mood disorders, autism, ADHD, etc., all of them increasing and worsening with each generation (e.g., type 2 diabetes used to be known as adult-onset diabetes but is now regularly diagnosed in young children; the youngest victim recorded recently was three years old when diagnosed).

In the past, Americans responded to moral panic with genocide of Native Americans, Prohibition targeting ethnic (hyphenated) Americans and the poor, and immigration restrictions to keep the bad sort out; the spread of racism and vigilantism such as the KKK, Jim Crow, sundown towns, and redlining, forced assimilation such as English-only laws and public schools, and internment camps for not only Japanese-Americans but also German-Americans and Italian-Americans; implementation of citizen-making projects like the national park system, the Boy Scouts, the WPA, and the CCC; promotion of eugenics, the war on poverty (i.e., war on the poor), imperial expansionism, neo-colonial exploitation, and world wars; et cetera. The cure sought was often something to be forced onto the population by a paternalistic elite, which is to say rich white males, most specifically WASPs of the capitalist class.

Eugenics was, of course, one of the main focuses as it carried the stamp of science (or rather scientism). Yet at the same time, there were those challenging biological determinism and race realism, as views shifted toward environmental explanations. The anthropologists were at the front lines of this battle, but there were also Social Christians who changed their minds after having seen poverty firsthand. Weston A. Price, however, didn’t come to this from a consciously ideological position or religious motivation. He was simply a dentist who couldn’t ignore the severe health issues of his patients. So, he decided to travel the world in order to find healthy populations to study, in the hope of explaining why the change had occurred (Nutrition and Physical Degeneration).

Although Price was familiar with the eugenics literature, what he observed in ‘primitive’ communities (including isolated villages in Europe) did not conform to eugenicist thought. It didn’t matter which population he looked at. Those who ate traditional diets were healthy, and those who ate an industrialized Western diet were not. It was a broad pattern he saw everywhere he went, encompassing not only physical health but also neurocognitive health, as indicated by happiness, low anxiety, and moral character. Instead of blaming individuals or races, he saw nutrition as the common explanation, and he made a strong case by scientifically analyzing the nutrition of the available foods.

In reading about traditional foods, the paleo diet/lifestyle, and functional medicine, Price’s work comes up quite often. He took many photographs comparing people from healthy and unhealthy populations. The contrast is stark. But what really stands out is how few people in the modern world look anywhere near as healthy as those from the healthiest societies of the past. I live in a fairly wealthy college and medical town with a far-above-average concern for health and access to healthcare. Even so, I now can’t help noticing how many people around me show signs of stunted or perturbed development of the exact kind Price observed in great detail: thin bone structure, sunken chests, sloping shoulders, narrow facial features, asymmetry, etc. That is even with modern healthcare correcting some of the worst conditions: cavities, underbites, pigeon toes, etc. My fellow residents in this town are among the most privileged people in the world and, nonetheless, their state of health says something sad about humanity at present.

It makes me wonder, as it made Price wonder, what consequences this has for the neurocognitive health of individuals and the moral health of society. Taken alone, it isn’t enough to get excited about. But put in the larger context of looming catastrophes, it does become concerning. It’s not clear that our health will be up to the task of solving the problems we face. We are a sickly population, far more sickly than when moral panic took hold in past generations.

Just as important, there is the personal component. I’m at a point where I’m not going to worry too much about the decline and maybe collapse of civilization. I’m kind of hoping the American Empire will meet its demise. Still, that leaves us with many who suffer, no matter what happens to society as a whole. I take that personally, as one who has struggled with physical and mental health issues. And I’ve come around to Price’s view of nutrition as key. I see these problems in other members of my family, and it saddens me to watch as health conditions seem to get worse from one generation to the next.

The central point I’m trying to make is that this is far from being a new problem. Talking to my mother, I find she has a clear sense of the differences between the two sides of her family. Her mother’s family came from rural areas and, even after moving to a larger city for work, they continued to hunt on a daily basis, as there were nearby fields and woods that made that possible. They were a healthy, happy, and hard-working lot. They got along well as a family. Her father’s side of the family was far different. They had been living in towns and cities for several generations by the time she was born. They didn’t hunt at all. They were known for being surly, holding grudges, and being mean drunks. They also had underbites (i.e., underdeveloped jaw structure) and seemed to have had learning disabilities, though no one was diagnosing such conditions back then. Related to this difference, my mother’s father raised rabbits, whereas my mother’s mother’s family hunted rabbits (and other wild game). This makes a big difference in terms of nutrition, as wild game has higher levels of omega-3 fatty acids and fat-soluble vitamins, all of which are key to optimal health and development.

What my mother observed in her family is basically the same as what Price observed in hundreds of communities in multiple countries on every continent. And I now observe the same pattern repeating. I grew up with an underbite. My brothers and I all required orthodontic work, as do so many now. I was diagnosed with a learning disability when young. Maybe not a learning disability, but behavioral issues were apparent in my oldest brother when he was young, likely related to his mildew allergies and probably an underlying autoimmune condition. I know I had food allergies as a child, and I think my other brother did as well. All of us have had a fair diversity of neurocognitive and psychological issues besides learning disabilities: stuttering, depression, anxiety, and maybe some Asperger’s.

Now another generation is coming along with increasing rates of major physical and mental health issues. My nieces and nephews are sick all the time. They don’t eat well and are probably malnourished. During a medical checkup for my nephew, my mother asked the doctor about his extremely unhealthy diet, consisting mostly of white bread and sugar. The doctor bizarrely dismissed it as ‘normal’ since, as she claimed, no kid eats healthy. If that is the new normal, maybe we should be in a moral panic.

* * *

Violent Behavior: A Solution in Plain Sight
by Sylvia Onusic

Nutrition and Mental Development
by Sally Fallon Morell

You Are What You Eat: The Research and Legacy of Dr. Weston Andrew Price
by John Larabell

While practicing in his Cleveland office, Dr. Price noticed an increase in dental problems among the younger generations. These issues included the obvious dental caries (cavities) as well as improper jaw development leading to crowded, crooked teeth. In fact, the relatively new orthodontics industry was at that time beginning to gain popularity. Perplexed by these modern problems that seemed to be affecting a greater and greater portion of the population, Dr. Price set about to research the issue by examining people who did not display such problems. He suspected (correctly, as he would later find) that many of the dental problems, as well as other degenerative health problems, that were plaguing modern society were the result of inadequate nutrition owing to the increasing use of refined, processed foods.

Nasty, Brutish and Short?
by Sally Fallon Morell

It seems as if the twentieth century will exit with a crescendo of disease. Things were not so bad back in the 1930’s, but the situation was already serious enough to cause one Cleveland, Ohio dentist to be concerned. Dr. Weston Price was reluctant to accept the conditions exhibited by his patients as normal. Rarely did an examination of an adult patient reveal anything but rampant decay, often accompanied by serious problems elsewhere in the body, such as arthritis, osteoporosis, diabetes, intestinal complaints and chronic fatigue. (They called it neurasthenia in Price’s day.) But it was the dentition of younger patients that alarmed him most. Price observed that crowded, crooked teeth were becoming more and more common, along with what he called “facial deformities”: overbites, narrowed faces, underdevelopment of the nose, lack of well-defined cheekbones and pinched nostrils. Such children invariably suffered from one or more complaints that sound all too familiar to mothers of the 1990’s: frequent infections, allergies, anemia, asthma, poor vision, lack of coordination, fatigue and behavioral problems. Price did not believe that such “physical degeneration” was God’s plan for mankind. He was rather inclined to believe that the Creator intended physical perfection for all human beings, and that children should grow up free of ailments.

Is it Mental or is it Dental?
by Raymond Silkman

The widely held model of orthodontics, which considers developmental problems in the jaws and head to be genetic in origin, never made sense to me. Since they are wedded to the genetic model, orthodontists dealing with crowded teeth end up treating the condition with tooth extraction in a majority of the cases. Even though I did not resort to pulling teeth in my practice, and I was using appliances to widen the jaws and getting the craniums to look as they should, I still could not come up with the answer as to why my patients looked the way they did. I couldn’t believe that the Creator had given them a terrible blueprint – it just did not make sense. In four years of college education, four years of dental school education and almost three years of post-graduate orthodontic training, students never hear a mention of Dr. Price, so they never learn the true reasons for these malformations. I have had the opportunity to work with a lot of very knowledgeable doctors in various fields of allopathic and alternative healthcare who still do not know about Dr. Price and his critical findings.

These knowledgeable doctors have not stared in awe at the beautiful facial development that Price captured in the photographs he took of primitive peoples throughout the globe and in so doing was able to answer this most important question: What do humans look like in health? And how have humans been able to carry on throughout history and populate such varied geographical and physical environments on the earth without our modern machines and tools?

The answer that Dr. Price was able to illuminate came through his photographs of beautiful, healthy human beings with magnificent physical form and mental development, living in harmony with their environments. […]

People who are not well oxygenated and who have poor posture often suffer from fatigue and fibromyalgia symptoms, they snore and have sleep apnea, they have sinusitis and frequent ear infections. Life becomes psychologically and physically challenging for them and they end up with long-term dependence on medications—and all of that just from the seemingly simple condition of crowded teeth.

In other words, people with poor facial development are not going to live very happily. […]

While very few people have heard of the work of Weston Price these days, we haven’t lost our ability to recognize proper facial form. To make it in today’s society, you must have good facial development. You’re not going to see a general or a president with a weak chin, you’re not going to see coaches with weak chins, you’re not going to see a lot of well-to-do personalities in the media with underdeveloped faces and chins. You don’t see athletes and newscasters with narrow palates and crooked teeth.

Weston A. Price: An Unorthodox Dentist
by Nourishing Israel

Price discovered that the native foods eaten by the isolated populations were far more nutrient dense than the modern foods. In the first generation that changed their diet there was noticeable tooth decay; in subsequent generations the dental and facial bone structure changed, along with other changes that had been seen in American and European families and were previously considered to be the result of interracial marriage.

By studying the different routes that the same populations had taken – traditional versus modern diet – he saw that the health of the children is directly related to the health of the parents and the germ plasms that they provide, which are as important to the child’s makeup as the health of the mother before and during pregnancy.

Price also found that primitive populations were very conscious of the importance of the mothers’ health and many populations made sure that girls were given a special diet for several months before they were allowed to marry.

Another interesting finding was that although genetic makeup was important, it did not have as great a degree of influence on a person’s development and health as was thought, but that a lot of individual characteristics, including brain development and brain function, were due to environmental influence, what he called “intercepted heredity”.

The origin of personality and character appear in the light of the newer data to be biologic products and to a much less degree than usually considered pure hereditary traits. Since these various factors are biologic, being directly related to both the nutrition of the parents and to the nutritional environment of the individuals in the formative and growth period any common contributing factor such as food deficiencies due to soil depletion will be seen to produce degeneration of the masses of people due to a common cause. Mass behavior therefore, in this new light becomes the result of natural forces, the expression of which may not be modified by propaganda but will require correction at the source. [1] …

It will be easy for the reader to be prejudiced since many of the applications suggested are not orthodox. I suggest that conclusions be deferred until the new approach has been used to survey the physical and mental status of the reader’s own family, of his brothers and sisters, of associated families, and finally, of the mass of people met in business and on the street. Almost everyone who studies the matter will be surprised that such clear-cut evidence of a decline in modern reproductive efficiency could be all about us and not have been previously noted and reviewed.[2]

From Nutrition and Physical Degeneration by Weston Price

Food Freedom – Nourishing Raw Milk
by Lisa Virtue

In 1931 Price visited the people of the Loetschental Valley in the Swiss Alps. Their diet consisted of rye bread, milk, cheese and butter, including meat once a week (Price, 25). The milk was collected from pastured cows, and was consumed raw: unpasteurized, unhomogenized (Schmid, 9).

Price described these people as having “stalwart physical development and high moral character…superior types of manhood, womanhood and childhood that Nature has been able to produce from a suitable diet and…environment” (Price, 29). At this time, Tuberculosis had taken more lives in Switzerland than any other disease. The Swiss government ordered an inspection of the valley, revealing not a single case. No deaths had been recorded from Tuberculosis in the history of the Loetschental people (Schmid, 8). Upon return home, Price had dairy samples from the valley sent to him throughout the year. These samples were higher in minerals and vitamins than samples from commercial (thus pasteurized) dairy products in America and the rest of Europe. The Loetschental milk was particularly high in fat soluble vitamin D (Schmid, 9).

The daily intake of calcium and phosphorus, as well as fat soluble vitamins, would have been higher than that of average North American children. These children were strong and sturdy, playing barefoot in the glacial waters into the late chilly evenings. Of all the children in the valley eating primitive foods, cavities were detected at an average of 0.3 per child (Price, 25). This without visiting a dentist or physician, for the valley had none, seeing as there was no need (Price, 23). To offer some perspective, the rate of cavities per child between the ages of 6-19 in the United States has been recorded to be 3.25, over 10 times the rate seen in Loetschental (Nagel).

Price offers some perspective on a society subsisting mainly on raw dairy products: “One immediately wonders if there is not something in the life-giving vitamins and minerals of the food that builds not only great physical structures within which their souls reside, but builds minds and hearts capable of a higher type of manhood…” (Price, 26).

100 Years Before Weston Price
by Nancy Henderson

Like Price, Catlin was struck by the beauty, strength and demeanor of the Native Americans. “The several tribes of Indians inhabiting the regions of the Upper Missouri. . . are undoubtedly the finest looking, best equipped, and most beautifully costumed of any on the Continent.” Writing of the Blackfoot and Crow, tribes who hunted buffalo on the rich glaciated soils of the American plains, he observed: “They are the happiest races of Indian I have met—picturesque and handsome, almost beyond description.”

“The very use of the word savage,” wrote Catlin, “as it is applied in its general sense, I am inclined to believe is an abuse of the word, and the people to whom it is applied.” […]

As did Weston A. Price one hundred years later, Catlin noted the fact that moral and physical degeneration came together with the advent of civilized society. In his late 1830s portrait of “Pigeon’s Egg Head (The Light) Going to and Returning from Washington,” Catlin painted him corrupted with “gifts of the great white father” upon his return to his native homeland. Those gifts included two bottles of whiskey in his pockets. […]

Like Price, Catlin discusses the issue of heredity versus environment. “No diseases are natural,” he writes, “and deformities, mental and physical, are neither hereditary nor natural, but purely the result of accidents or habits.”

So wrote Dr. Price: “Neither heredity nor environment alone cause our juvenile delinquents and mental defectives. They are cripples, physically, mentally and morally, which could have and should have been prevented by adequate education and by adequate parental nutrition. Their protoplasm was not normally organized.”

The Right Price
by Weston A. Price Foundation

Many commentators have criticized Price for attributing “decline in moral character” to malnutrition. But it is important to realize that the subject of “moral character” was very much on the minds of commentators of his day. As with changes in facial structure, observers in the first half of the 20th century attributed “badness” in people to race mixing, or to genetic defects. Price quotes A.C. Jacobson, author of a 1926 publication entitled Genius (Some Revaluations),35 who stated that “The Jekyll-Hydes of our common life are ethnic hybrids.” Said Jacobson, “Aside from the effects of environment, it may safely be assumed that when two strains of blood will not mix well a kind of ‘molecular insult’ occurs which the biologists may some day be able to detect beforehand, just as blood is now tested and matched for transfusion.” The implied conclusion to this assertion is that “degenerates” can be identified through genetic testing and “weeded out” by sterilizing the unfit – something that was imposed on many women during the period and endorsed by powerful individuals, including Oliver Wendell Holmes.

It is greatly to Price’s credit that he objected to this arrogant point of view: “Most current interpretations are fatalistic and leave practically no escape from our succession of modern physical, mental and moral cripples. . . If our modern degeneration were largely the result of incompatible racial stocks as indicated by these premises, the outlook would be gloomy in the extreme.”36 Price argued that nutritional deficiencies affecting the physical structure of the body can also affect the brain and nervous system; and that while “bad” character may be the result of many influences–poverty, upbringing, displacement, etc.–good nutrition also plays a role in creating a society of cheerful, compassionate individuals.36

Rebirth of a Nation:
The Making of Modern America, 1877-1920
By Jackson Lears
pp. 7-9

By the late nineteenth century, dreams of rebirth were acquiring new meanings. Republican moralists going back to Jefferson’s time had long fretted about “overcivilization,” but the word took on sharper meaning among the middle and upper classes in the later decades of the nineteenth century. During the postwar decades, “overcivilization” became not merely a social but an individual condition, with a psychiatric diagnosis. In American Nervousness (1880), the neurologist George Miller Beard identified “neurasthenia,” or “lack of nerve force,” as the disease of the age. Neurasthenia encompassed a bewildering variety of symptoms (dyspepsia, insomnia, nocturnal emissions, tooth decay, “fear of responsibility, of open places or closed places, fear of society, fear of being alone, fear of fears, fear of contamination, fear of everything, deficient mental control, lack of decision in trifling matters, hopelessness”), but they all pointed to a single overriding effect: a paralysis of the will.

The malady identified by Beard was an extreme version of a broader cultural malaise—a growing sense that the Protestant ethic of disciplined achievement had reached the end of its tether, had become entangled in the structures of an increasingly organized capitalist society. Ralph Waldo Emerson unwittingly predicted the fin de siècle situation. “Every spirit makes its house,” he wrote in “Fate” (1851), “but afterwards the house confines the spirit.” The statement presciently summarized the history of nineteenth-century industrial capitalism, on both sides of the Atlantic.

By 1904, the German sociologist Max Weber could put Emerson’s proposition more precisely. The Protestant ethic of disciplined work for godly ends had created an “iron cage” of organizations dedicated to the mass production and distribution of worldly goods, Weber argued. The individual striver was caught in a trap of his own making. The movement from farm to factory and office, and from physical labor outdoors to sedentary work indoors, meant that more Europeans and North Americans were insulated from primary processes of making and growing. They were also caught up in subtle cultural changes—the softening of Protestantism into platitudes; the growing suspicion that familiar moral prescriptions had become mere desiccated, arbitrary social conventions. With the decline of Christianity, the German philosopher Friedrich Nietzsche wrote, “it will seem for a time as though all things had become weightless.”

Alarmists saw these tendencies as symptoms of moral degeneration. But a more common reaction was a diffuse but powerful feeling among the middle and upper classes—a sense that they had somehow lost contact with the palpitating actuality of “real life.” The phrase acquired unprecedented emotional freight during the years around the turn of the century, when reality became something to be pursued rather than simply experienced. This was another key moment in the history of longing, a swerve toward the secular. Longings for this-worldly regeneration intensified when people with Protestant habits of mind (if not Protestant beliefs) confronted a novel cultural situation: a sense that their way of life was being stifled by its own success.

On both sides of the Atlantic, the drive to recapture “real life” took myriad cultural forms. It animated popular psychotherapy and municipal reform as well as avant-garde art and literature, but its chief institutional expression was regeneration through military force. As J. A. Hobson observed in Imperialism (1902), the vicarious identification with war energized jingoism and militarism. By the early twentieth century, in many minds, war (or the fantasy of it) had become the way to keep men morally and physically fit. The rise of total war between the Civil War and World War I was rooted in longings for release from bourgeois normality into a realm of heroic struggle. This was the desperate anxiety, the yearning for rebirth, that lay behind official ideologies of romantic nationalism, imperial progress, and civilizing mission—and that led to the trenches of the Western Front.

Americans were immersed in this turmoil in peculiarly American ways. As the historian Richard Slotkin has brilliantly shown, since the early colonial era a faith in regeneration through violence underlay the mythos of the American frontier. With the closing of the frontier (announced by the U.S. census in 1890), violence turned outward, toward empire. But there was more going on than the refashioning of frontier mythology. American longings for renewal continued to be shaped by persistent evangelical traditions, and overshadowed by the shattering experience of the Civil War. American seekers merged Protestant dreams of spiritual rebirth with secular projects of purification—cleansing the body politic of secessionist treason during the war and political corruption afterward, reasserting elite power against restive farmers and workers, taming capital in the name of the public good, reviving individual and national vitality by banning the use of alcohol, granting women the right to vote, disenfranchising African-Americans, restricting the flow of immigrants, and acquiring an overseas empire.

Of course not all these goals were compatible. Advocates of various versions of rebirth—bodybuilders and Prohibitionists, Populists and Progressives, Social Christians and Imperialists—all laid claims to legitimacy. Their crusades met various ends, but overall they relieved the disease of the fin de siècle by injecting some visceral vitality into a modern culture that had seemed brittle and about to collapse. Yearning for intense experience, many seekers celebrated Force and Energy as ends in themselves. Such celebrations could reinforce militarist fantasies but could also lead in more interesting directions—toward new pathways in literature and the arts and sciences. Knowledge could be revitalized, too. William James, as well as Houdini and Roosevelt, was a symbol of the age.

The most popular forms of regeneration had a moral dimension.

pp. 27-29

But for many other observers, too many American youths—especially among the upper classes—had succumbed to the vices of commerce: the worship of Mammon, the love of ease. Since the Founding Fathers’ generation, republican ideologues had fretted about the corrupting effects of commercial life. Norton and other moralists, North and South, had imagined war would provide an antidote. During the Gilded Age those fears acquired a peculiarly palpable intensity. The specter of “overcivilization”—invoked by republican orators since Jefferson’s time—developed a sharper focus: the figure of the overcivilized businessman became a stock figure in social criticism. Flabby, ineffectual, anxious, possibly even neurasthenic, he embodied bourgeois vulnerability to the new challenges posed by restive, angry workers and waves of strange new immigrants. “Is American Stamina Declining?” asked William Blaikie, a former Harvard athlete and author of How to Get Strong and Stay So, in Harper’s in 1889. Among white-collar “brain-workers,” legions of worried observers were asking similar questions. Throughout the country, metropolitan life for the comfortable classes was becoming a staid indoor affair. Blaikie caught the larger contours of the change:

“A hundred years ago, there was more done to make our men and women hale and vigorous than there is to-day. Over eighty per cent of all our men then were farming, hunting, or fishing, rising early, out all day in the pure, bracing air, giving many muscles very active work, eating wholesome food, retiring early, and so laying in a good stock of vitality and health. But now hardly forty per cent are farmers, and nearly all the rest are at callings—mercantile, mechanical, or professional—which do almost nothing to make one sturdy and enduring.”

This was the sort of anxiety that set men (and more than a few women) to pedaling about on bicycles, lifting weights, and in general pursuing fitness with unprecedented zeal. But for most Americans, fitness was not merely a matter of physical strength. What was equally essential was character, which they defined as adherence to Protestant morality. Body and soul would be saved together.

This was not a gender-neutral project. Since the antebellum era, purveyors of conventional wisdom had assigned respectable women a certain fragility. So the emerging sense of physical vulnerability was especially novel and threatening to men. Manliness, always an issue in Victorian culture, had by the 1880s become an obsession. Older elements of moral character continued to define the manly man, but a new emphasis on physical vitality began to assert itself as well. Concern about the over-soft socialization of the young promoted the popularity of college athletics. During the 1880s, waves of muscular Christianity began to wash over campuses.

pp. 63-71

NOT MANY AMERICAN men, even among the comparatively prosperous classes, were as able as Carnegie and Rockefeller to master the tensions at the core of their culture. Success manuals acknowledged the persistent problem of indiscipline, the need to channel passion to productive ends. Often the language of advice literature was sexually charged. In The Imperial Highway (1881), Jerome Bates advised:

[K]eep cool, have your resources well in hand, and reserve your strength until the proper time arrives to exert it. There is hardly any trait of character or faculty of intellect more valuable than the power of self-possession, or presence of mind. The man who is always “going off” unexpectedly, like an old rusty firearm, who is easily fluttered and discomposed at the appearance of some unforeseen emergency; who has no control over himself or his powers, is just the one who is always in trouble and is never successful or happy.

The assumptions behind this language are fascinating and important to an understanding of middle-and upper-class Americans in the Gilded Age. Like many other purveyors of conventional wisdom—ministers, physicians, journalists, health reformers—authors of self-help books assumed a psychic economy of scarcity. For men, this broad consensus of popular psychology had sexual implications: the scarce resource in question was seminal fluid, and one had best not be diddling it away in masturbation or even nocturnal emissions. This was easier said than done, of course, as Bates indicated, since men were constantly addled by insatiable urges, always on the verge of losing self-control—the struggle to keep it was an endless battle with one’s own darker self. Spiritual, psychic, and physical health converged. What Freud called “‘civilized’ sexual morality” fed directly into the “precious bodily fluids” school of health management. The man who was always “‘going off’ unexpectedly, like an old rusty firearm,” would probably be sickly as well as unsuccessful—sallow, sunken-chested, afflicted by languorous indecision (which was how Victorian health literature depicted the typical victim of what was called “self-abuse”).

But as this profile of the chronic masturbator suggests, scarcity psychology had implications beyond familiar admonitions to sexual restraint. Sexual scarcity was part of a broader psychology of scarcity; the need to conserve semen was only the most insistently physical part of a much more capacious need to conserve psychic energy. As Bates advised, the cultivation of “self-possession” allowed you to “keep your resources well in hand, and reserve your strength until the proper time arrives to exert it.” The implication was that there was only so much strength available to meet demanding circumstances and achieve success in life. The rhetoric of “self-possession” had financial as well as sexual connotations. To preserve a cool, unruffled presence of mind (to emulate Rockefeller, in effect) was one way to stay afloat on the storm surges of the business cycle.

The object of this exercise, at least for men, was personal autonomy—the ownership of one’s self. […]

It was one thing to lament excessive wants among the working class, who were supposed to be cultivating contentment with their lot, and quite another to find the same fault among the middle class, who were supposed to be improving themselves. The critique of middle-class desire posed potentially subversive questions about the dynamic of dissatisfaction at the core of market culture, about the very possibility of sustaining a stable sense of self in a society given over to perpetual jostling for personal advantage. The ruinous results of status-striving led advocates of economic thrift to advocate psychic thrift as well.

By the 1880s, the need to conserve scarce psychic resources was a commonly voiced priority among the educated and affluent. Beard’s American Nervousness had identified “the chief and primary cause” of neurasthenia as “modern civilization,” which placed unprecedented demands on limited emotional energy. “Neurasthenia” and “nervous prostration” became catchall terms for a constellation of symptoms that today would be characterized as signs of chronic depression—anxiety, irritability, nameless fears, listlessness, loss of will. In a Protestant culture, where effective exercise of will was the key to individual selfhood, the neurasthenic was a kind of anti-self—at best a walking shadow, at worst a bedridden invalid unable to make the most trivial choices or decisions. Beard and his colleagues—neurologists, psychiatrists, and self-help writers in the popular press—all agreed that nervous prostration was the price of progress, a signal that the psychic circuitry of “brain workers” was overloaded by the demands of “modern civilization.”

While some diagnoses of this disease deployed electrical metaphors, the more common idiom was economic. Popular psychology, like popular economics, was based on assumptions of scarcity: there was only so much emotional energy (and only so much money) to go around. The most prudent strategy was the husbanding of one’s resources as a hedge against bankruptcy and breakdown. […]

Being reborn through a self-allowed regime of lassitude was idiosyncratic, though important as a limiting case. Few Americans had the leisure or the inclination to engage in this kind of Wordsworthian retreat. Most considered neurasthenia at best a temporary respite, at worst an ordeal. They strained, if ambivalently, to be back in harness.

The manic-depressive psychology of the business class mimicked the lurching ups and downs of the business cycle. In both cases, assumptions of scarcity underwrote a pervasive defensiveness, a circle-the-wagons mentality. This was the attitude that lay behind the “rest cure” devised by the psychiatrist Silas Weir Mitchell, who proposed to “fatten” and “redden” the (usually female) patient by isolating her from all mental and social stimulation. (This nearly drove the writer Charlotte Perkins Gilman crazy, and inspired her story “The Yellow Wallpaper.”) It was also the attitude that lay behind the fiscal conservatism of the “sound-money men” on Wall Street and in Washington—the bankers and bondholders who wanted to restrict the money supply by tying it to the gold standard. Among the middle and upper classes, psyche and economy alike were haunted by the common specter of scarcity. But there were many Americans for whom scarcity was a more palpable threat.

AT THE BOTTOM of the heap were the urban poor. To middle-class observers they seemed little more than a squalid mass jammed into tenements that were festering hives of “relapsing fever,” a strange malady that left its survivors depleted of strength and unable to work. The disease was “the most efficient recruiting officer pauperism ever had,” said a journalist investigating tenement life in the 1870s. Studies of “the nether side of New York” had been appearing for decades, but—in the young United States at least—never before the Gilded Age had the story of Dives and Lazarus been so dramatically played out, never before had wealth been so flagrant, or poverty been so widespread and so unavoidably appalling. The army of thin young “sewing-girls” trooping off in the icy dawn to sweatshops all over Manhattan, the legions of skilled mechanics forced by high New York rents to huddle with their families amid a crowd of lowlifes, left without even a pretense of privacy in noisome tenements that made a mockery of the Victorian cult of home—these populations began to weigh on the bourgeois imagination, creating concrete images of the worthy, working poor.

pp. 99-110

Racial animosities flared in an atmosphere of multicultural fluidity, economic scarcity, and sexual rivalry. Attitudes arising from visceral hostility acquired a veneer of scientific objectivity. Race theory was nothing new, but in the late nineteenth century it mutated into multiple forms, many of them characterized by manic urgency, sexual hysteria, and biological determinism. Taxonomists had been trying to arrange various peoples in accordance with skull shape and brain size for decades; popularized notions of natural selection accelerated the taxonomic project, investing it more deeply in anatomical details. The superiority of the Anglo-Saxon—according to John Fiske, the leading pop-evolutionary thinker—arose not only from the huge size of his brain, but also from the depth of its furrows and the plenitude of its creases. The most exalted mental events had humble somatic origins. Mind was embedded in body, and both could be passed on to the next generation.

The year 1877 marked a crucial development in this hereditarian synthesis: in that year, Richard Dugdale published the results of his investigation into the Juke family, a dull-witted crew that had produced more than its share of criminals and mental defectives. While he allowed for the influence of environment, Dugdale emphasized the importance of inherited traits in the Juke family. If mental and emotional traits could be inherited along with physical ones, then why couldn’t superior people be bred like superior dogs or horses? The dream of creating a science of eugenics, dedicated to improving and eventually even perfecting human beings, fired the reform imagination for decades. Eugenics was a kind of secular millennialism, a vision of a society where biological engineering complemented social engineering to create a managerial utopia. The intellectual respectability of eugenics, which lasted until the 1930s, when it became associated with Nazism, underscores the centrality of racialist thinking among Americans who considered themselves enlightened and progressive. Here as elsewhere, racism and modernity were twinned.

Consciousness of race increasingly pervaded American culture in the Gilded Age. Even a worldview as supple as Henry James’s revealed its moorings in conventional racial categories when, in The American (1877), James presented his protagonist, Christopher Newman, as a quintessential Anglo-Saxon but with echoes of the noble Red Man, with the same classical posture and physiognomy. There was an emerging kinship between these two groups of claimants to the title “first Americans.” The iconic American, from this view, was a blend of Anglo-Saxon refinement and native vigor. While James only hints at this, in less than a generation such younger novelists as Frank Norris and Jack London would openly celebrate the rude vitality of the contemporary Anglo-Saxon, proud descendant of the “white savages” who subdued a continent. It should come as no surprise that their heroes were always emphatically male. The rhetoric of race merged with a broader agenda of masculine revitalization.[…]

By the 1880s, muscular Christians were sweeping across the land, seeking to meld spiritual and physical renewal, establishing institutions like the Young Men’s Christian Association. The YMCA provided prayer meetings and Bible study to earnest young men with spiritual seekers’ yearnings, gyms and swimming pools to pasty young men with office workers’ midriffs. Sometimes they were the same young men. More than any other organization, the YMCA aimed to promote the symmetry of character embodied in the phrase “body, mind, spirit”—which a Y executive named Luther Gulick plucked from Deuteronomy and made the motto of the organization. The key to the Y’s appeal, a Harper’s contributor wrote in 1882, was the “overmastering conviction” of its members: “The world always respects manliness, even when it is not convinced [by theological argument]; and if the organizations did not sponsor that quality in young men, they would be entitled to no respect.” In the YMCA, manliness was officially joined to a larger agenda.

For many American Protestants, the pursuit of physical fitness merged with an encompassing vision of moral and cultural revitalization—one based on the reassertion of Protestant self-control against the threats posed to it by immigrant masses and mass-marketed temptation. […]

Science and religion seemed to point in the same direction: Progress and Providence were one.

Yet the synthesis remained precarious. Physical prowess, the basis of national supremacy, could not be taken for granted. Strong acknowledged in passing that Anglo-Saxons could be “devitalized by alcohol and tobacco.” Racial superiority could be undone by degenerate habits. Even the most triumphalist tracts contained an undercurrent of anxiety, rooted in the fear of flab. The new stress on the physical basis of identity began subtly to undermine the Protestant synthesis, to reinforce the suspicion that religion was a refuge for effeminate weaklings. The question inevitably arose, in some men’s minds: What if the YMCA and muscular Christianity were not enough to revitalize tired businessmen and college boys?

Under pressure from proliferating ideas of racial “fitness,” models of manhood became more secular. Despite the efforts of muscular Christians to reunite body and soul, the ideal man emerging among all classes by the 1890s was tougher and less introspective than his mid-Victorian predecessors. He was also less religious. Among advocates of revitalization, words like “Energy” and “Force” began to dominate discussion—often capitalized, often uncoupled from any larger frameworks of moral or spiritual meaning, and often combined with racist assumptions. […]

The emerging worship of force raised disturbing issues. Conventional morality took a backseat to the celebration of savage strength. After 1900, in the work of a pop-Nietzschean like Jack London, even criminality became a sign of racial vitality: as one of his characters says, “We whites have been land-robbers and sea-robbers from remotest time. It is in our blood, I guess, and we can’t get away from it.” This reversal of norms did not directly challenge racial hierarchies, but the assumptions behind it led toward disturbing questions. If physical prowess was the mark of racial superiority, what was one to make of the magnificent specimens of manhood produced by allegedly inferior races? Could it be that desk-bound Anglo-Saxons required an infusion of barbarian blood (or at least the “barbarian virtues” recommended by Theodore Roosevelt)? Behind these questions lay a primitivist model of regeneration, to be accomplished by incorporating the vitality of the vanquished, dark-skinned other. The question was how to do that and maintain racial purity.

pp. 135-138

Yet to emphasize the gap between country and the city was not simply an evasive exercise: dreams of bucolic stillness or urban energy stemmed from motives more complex than mere escapist sentiment. City and country were mother lodes of metaphor, sources for making sense of the urban-industrial revolution that was transforming the American countryside and creating a deep sense of discontinuity in many Americans’ lives during the decades after the Civil War. If the city epitomized the attraction of the future, the country embodied the pull of the past. For all those who had moved to town in search of excitement or opportunity, rural life was ineluctably associated with childhood and memory. The contrast between country and city was about personal experience as well as political economy. […]

REVERENCE FOR THE man of the soil was rooted in the republican tradition. In his Notes on the State of Virginia (1785), Jefferson articulated the antithesis that became central to agrarian politics (and to the producerist worldview in general)—the contrast between rural producers and urban parasites. “Those who labour in the earth are the chosen people of God, if ever he had a chosen people, whose breasts he has made his peculiar deposit for substantial and genuine virtue,” he announced. “Corruption of morals in the mass of cultivators is a phenomenon of which no age nor nation has furnished an example. It is the mark set on those, who not looking up to heaven, to their own soil and industry, as does the husbandman, for their subsistence, depend for it on the casualties and caprice of customers. Dependence begets subservience and venality, suffocates the germ of virtue, and prepares fit tools for the design of ambition.” Small wonder, from this view, that urban centers of commerce seemed to menace the public good. “The mobs of great cities,” Jefferson concluded, “add just so much to the support of pure government as sores do to the strength of the human body.” Jefferson’s invidious distinctions echoed through the nineteenth century, fueling the moral passion of agrarian rebels. Watson, among many, considered himself a Jeffersonian.

There were fundamental contradictions embedded in Jefferson’s conceptions of an independent yeomanry. Outside certain remote areas in New England, most American farmers were not self-sufficient in the nineteenth century—nor did they want to be. Many were eager participants in the agricultural market economy, animated by a restless, entrepreneurial spirit. Indeed, Jefferson’s own expansionist policies, especially the Louisiana Purchase, encouraged centrifugal movement as much as permanent settlement. “What developed in America,” the historian Richard Hofstadter wrote, “was an agricultural society whose real attachment was not to the land but to land values.” The figure of the independent yeoman, furnishing enough food for himself and his family, participating in the public life of a secure community—this icon embodied longings for stability amid a maelstrom of migration.

Often the longings were tinged with a melancholy sense of loss. […] For those with Jeffersonian sympathies, abandoned farms were disturbing evidence of cultural decline. As a North American Review contributor wrote in 1888: “Once let the human race be cut off from personal contact with the soil, once let the conventionalities and artificial restrictions of so-called civilization interfere with the healthful simplicity of nature, and decay is certain.” Romantic nature-worship had flourished fitfully among intellectuals since Emerson had become a transparent eye-ball on the Concord common and Whitman had loafed among leaves of grass. By the post–Civil War decades, romantic sentiment combined with republican tradition to foster forebodings. Migration from country to city, from this view, was a symptom of disease in the body politic. Yet the migration continued. Indeed, nostalgia for rural roots was itself a product of rootlessness. A restless spirit, born of necessity and desire, spun Americans off in many directions—but mainly westward. The vision of a stable yeomanry was undercut by the prevalence of the westering pioneer.

pp. 246-247

Whether energy came from within or without, it was as limitless as electricity apparently was. The obstacles to access were not material—class barriers or economic deprivation were never mentioned by devotees of abundance psychology—they were mental and emotional. The most debilitating emotion was fear, which cropped up constantly as the core problem in diagnoses of neurasthenia. The preoccupation with freeing oneself from internal constraints undermined the older, static ideal of economic self-control at its psychological base. As one observer noted in 1902: “The root cause of thrift, which we all admire and preach because it is so convenient to the community, is fear, fear of future want; and that fear, we are convinced, when indulged overmuch by pessimist minds is the most frequent cause of miserliness….” Freedom from fear meant freedom to consume.

And consumption began at the dinner table. Woods Hutchinson claimed in 1913 that the new enthusiasm for calories was entirely appropriate to a mobile, democratic society. The old “stagnation” theory of diet merely sought to maintain the level of health and vigor; it was a diet for slaves or serfs, for people who were not supposed to rise above their station. “The new diet theory is based on the idea of progress, of continuous improvement, of never resting satisfied with things as they are,” Hutchinson wrote. “No diet is too liberal or expensive that will…yield good returns on the investment.” Economic metaphors for health began to focus on growth and process rather than stability, on consumption and investment rather than savings.

As abundance psychology spread, a new atmosphere of dynamism enveloped old prescriptions for success. After the turn of the century, money was less often seen as an inert commodity, to be gradually accumulated and tended to steady growth; and more often seen as a fluid and dynamic force. To Americans enraptured by the strenuous life, energy became an end itself—and money was a kind of energy. Success mythology reflected this subtle change. In the magazine hagiographies of business titans—as well as in the fiction of writers like Dreiser and Norris—the key to success frequently became a mastery of Force (as those novelists always capitalized it), of raw power. Norris’s The Pit (1903) was a paean to the furious economic energies concentrated in Chicago. “It was Empire, the restless subjugation of all this central world of the lakes and prairies. Here, mid-most in the land, beat the Heart of the nation, whence inevitably must come its immeasurable power, its infinite, inexhaustible vitality. Here of all her cities, throbbed the true life—the true power and spirit of America: gigantic, crude, with the crudity of youth, disdaining rivalry; sane and healthy and vigorous; brutal in its ambition, arrogant in the new-found knowledge of its giant strength, prodigal of its wealth, infinite in its desires.” This was the vitalist vision at its most breathless and jejune, the literary equivalent of Theodore Roosevelt’s adolescent antics.

The new emphasis on capital as Force translated the psychology of abundance into economic terms. The economist who did the most to popularize this translation was Simon Nelson Patten, whose The New Basis of Civilization (1907) argued that the United States had passed from an “era of scarcity” to an “era of abundance” characterized by the unprecedented availability of mass-produced goods. His argument was based on the confident assumption that human beings had learned to control the weather. “The Secretary of Agriculture recently declared that serious crop failures will occur no more,” Patten wrote. “Stable, progressive farming controls the terror, disorder, and devastation of earlier times. A new agriculture means a new civilization.” Visions of perpetual growth were in the air, promising both stability and dynamism.

The economist Edward Atkinson pointed the way to a new synthesis with a hymn to “mental energy” in the Popular Science Monthly. Like other forms of energy, it was limitless. “If…there is no conceivable limit to the power of mind over matter or to the number of conversions of force that can be developed,” he wrote, “it follows that pauperism is due to want of mental energy, not of material resources.” Redistribution of wealth was not on the agenda; positive thinking was.

pp. 282-283

TR’s policies were primarily designed to protect American corporations’ access to raw materials, investment opportunities, and sometimes markets. The timing was appropriate. In the wake of the merger wave of 1897–1903, Wall Street generated new pools of capital, while Washington provided new places to invest it. Speculative excitement seized many among the middle and upper classes who began buying stocks for the first time. Prosperity spread even among the working classes, leading Simon Nelson Patten to detect a seismic shift from an era of scarcity to an era of abundance. For him, a well-paid working population committed to ever-expanding consumption would create what he called The New Basis of Civilization (1907).

Patten understood that the mountains of newly available goods were in part the spoils of empire, but he dissolved imperial power relations in a rhetoric of technological determinism. The new abundance, he argued, depended not only on the conquest of weather but also on the annihilation of time and space—a fast, efficient distribution system that provided Americans with the most varied diet in the world, transforming what had once been luxuries into staples of even the working man’s diet. “Rapid distribution of food carries civilization with it, and the prosperity that gives us a Panama canal with which to reach untouched tropic riches is a distinctive laborer’s resource, ranking with refrigerated express and quick freight carriage.” The specific moves that led to the seizure of the Canal Zone evaporated in the abstract “prosperity that gives us a Panama Canal,” which in turn became as much a boon to the workingman as innovative transportation. Empire was everywhere, in Patten’s formulation, and yet nowhere in sight.

What Patten implied (rather than stated overtly) was that imperialism underwrote expanding mass consumption, raising standards of living for ordinary folk. “Tropic riches” became cheap foods for the masses. The once-exotic banana was now sold from pushcarts for 6 cents a dozen, “a permanent addition to the laborer’s fund of goods.” The same was true of “sugar, which years ago was too expensive to be lavishly consumed by the well-to-do,” but “now freely gives its heat to the workingman,” as Patten wrote. “The demand that will follow the developing taste for it can be met by the vast quantities latent in Porto Rico and Cuba, and beyond them by the teeming lands of South America, and beyond them by the virgin tropics of another hemisphere.” From this view, the relation between empire and consumption was reciprocal: if imperial policies helped stimulate consumer demand, consumer demand in turn promoted imperial expansion. A society committed to ever-higher levels of mass-produced abundance required empire to be a way of life.

Lead Toxicity is a Hyperobject

What is everywhere cannot be seen. What harms everyone cannot be acknowledged. So, we obsess over what is trivial and distract ourselves with false narratives. The point isn't to understand, much less solve, problems. We'd rather let large numbers of people suffer and die, as long as we don't have to face the overwhelming sense of anxiety about the world we've created.

We pretend to care about public health. We obsess over pharmaceuticals and extreme medical interventions while paying lip service to exercise and diet, not to mention going on about saving the planet while taking only symbolic actions. But some of the worst dangers to public health get little mention or media reporting. Lead toxicity is an example of this. It causes numerous diseases and health conditions: lowered IQ, ADHD, aggressive behavior, asthma, and on and on. Now we know it also causes heart disease. Apparently, it even contributes immensely to diabetes. A common explanation might be that heavy metals interfere with important systems in the body, such as the immune and hormonal systems. In the comments section of Dr. Malcolm Kendrick's post shared below, I noticed this interesting piece of info:

“I recently listened to a presentation, as a part of a class I’m taking, put on by the lead researcher for the TACT trial. He is a cardiologist himself. I would say that a 48% ABSOLUTE risk reduction in further events in diabetic patients, and a 30-something % risk reduction in patients without diabetes, is extremely significant. I went and read the study afterward to verify the numbers he presented. I would say, based on the fact that he admitted freely he thought he was going to prove exactly the opposite, and that his numbers and his statements show it does work, are pretty convincing. Naturally, no one that works for JAMA will ever tell you that. They would prefer to do acrobatics with statistics to prove otherwise.”

Lead toxicity is one of the leading causes of disease and death in the world. It damages the entire body, especially the brain. Survivors of lead toxicity are often crippled for life. It was also behind the violent crime wave of past decades. The prison population has higher than average rates of lead toxicity, which means we are using prisons to store and hide the victims and scapegoat them all in one fell swoop. And since it is the poor who are primarily targeted by our systematic indifference (maybe not indifference, since there are profits and privileges incentivizing it), it is they who are disproportionately poisoned by lead and then, as victims, imprisoned or otherwise caught up in the legal system, institutionalized, or left among the vast multitudes of the forgotten, of the homeless, of those who die without anyone bothering to find out what killed them.

But if only the poor worked harder, got an education, followed the USDA-recommended diet, and got a good job to pay for all the pills pushed on them by the pharmaceutical-funded doctors, then… well, then what the fuck good would it do them? Tell me that. The irony is that, as we like to pity the poor for their supposed failures and bad luck, we are all being screwed over. It's just that we feel slightly better, slightly less anxious, as long as others are doing worse than us. Who cares that we live in a society slowly killing us. The real victory is knowing that it is killing you slightly slower than your neighbor or those other people elsewhere. For some odd reason, most people find that comforting.

It’s sad. Despite making some minor progress in cleaning up the worst of it, the lead accumulated over decades still lingers in the soil, oceans, infrastructure, and old buildings. Entire communities continue to raise new generations with lead exposure. On top of that, we’ve been adding even more pollutants and toxins to the environment, to our food supply, and to every variety of product we buy. I will say this. Even if diet doesn’t have as big of a direct effect on some of these conditions as removing dangerous toxins does, diet has the advantage of being a factor one can personally control. If you eat an optimally healthy diet, especially if you can avoid foods that are poisoned (either unintentionally with environmental toxins or intentionally with farm chemicals), you’ll be doing yourself a world of good. Greater health won’t eliminate all of the dangers we are surrounded by, but it will help you to detoxify and heal from the damage. It may not be much in the big picture, but it’s better than nothing.

On the other hand, even if our diet obsession is overblown, maybe it’s more significant than we realize. Sammy Pepys, in Fat is our Friend, writes about Roseto, Pennsylvania. Scientists studying this uniquely healthy American community called the phenomenon the Roseto Effect. These people ate tons of processed meat and lard, smoked cigars and drank wine, and they worked back-breaking labor in quarries where they would have been exposed to toxins (“Rosetan men worked in such toxic environments as the nearby slate quarries … inhaling gases, dusts and other niceties.” p. 117). Yet their health was great. At the time, their diet was dismissed because it didn’t conform to USDA standards. While most Americans had already switched to industrial seed oils, the Rosetans were still going strong on animal fats. Maybe their diet was dismissed too easily. As with earlier lard-and-butter-gorging Americans, maybe all the high quality animal fats (probably from pasture-raised animals) were essential to avoiding disease. Maybe it also had something to do with their ability to handle the toxins. Considering Weston A. Price’s research, it’s obvious that all of those additional fat-soluble vitamins would have helped.

Still, let’s clean up the toxins. And also, let’s quit polluting like there is no tomorrow.

* * *

What causes heart disease part 65 – Lead again
by Dr. Malcolm Kendrick

There are several things about the paper that I found fascinating. However, the first thing that I noticed was that…. it hadn’t been noticed. It slipped by in a virtual media blackout. It was published in 2018, and I heard nothing.

This is in direct contrast to almost anything published about diet. We are literally bombarded with stories about red meat causing cancer and sausages causing cancer and heart disease, and veganism being protective against heart disease and cancer, and on and on. Dietary articles often end up on the front page on national newspapers. […]

Where was I? Oh yes, lead. The heavy metal. The thing that, unlike diet, makes no headlines whatsoever, the thing that everyone ignores. Here is one top-line fact from that study on lead, that I missed:

‘Our findings suggest that, of 2·3 million deaths every year in the USA, about 400 000 are attributable to lead exposure, an estimate that is about ten times larger than the current one.’ 1

Yes, according to this study, one in six deaths is due to lead exposure. I shall repeat that. One in six. Eighteen per cent to be exact, which is nearer a fifth really. […]

So, on one side, we have papers (that make headlines around the world) shouting about the risk of red meat and cancer. Yet the association is observational, tiny, and would almost certainly disappear in a randomised controlled trial, and thus mean nothing.

On the other we have a substance that could be responsible for one sixth of all deaths, the vast majority of those CVD deaths. The odds ratio, highest vs lowest lead exposure, by the way, depending on age and other factors, was a maximum of 5.30 [unadjusted].

Another study in the US found the following

‘Cumulative lead exposure, as reflected by bone lead, and cardiovascular events have been studied in the Veterans’ Normative Aging Study, a longitudinal study among community-based male veterans in the greater Boston area enrolled in 1963. Patients had a single measurement of tibial and patellar bone lead between 1991 and 1999. The HR for ischemic heart disease mortality comparing patellar lead >35 to <22 μg/g was 8.37 (95% CI: 1.29 to 54.4).’ 3

HR = Hazard Ratio, which is similar, if not the same to OR = Odds Ratio. A Hazard Ratio of 8.37, means (essentially) a 737% increase in risk (Relative Risk).

Anyway, I shall repeat that finding a bit more loudly. A higher level of lead in the body leads to a seven hundred and thirty-seven per cent increase in death from heart disease. This is, in my opinion, correlation proving causation.
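[My aside, not Kendrick’s, for anyone unfamiliar with hazard ratios: the percentage increase in relative risk is simply the ratio minus one, expressed as a percentage.

(HR - 1) × 100% = (8.37 - 1) × 100% = 737%

So the 737 per cent figure follows directly from the hazard ratio of 8.37 quoted above.]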

Looking at this from another angle, it is true that smoking causes a much greater risk of lung cancer (and a lesser but significant increase in CVD), but not everyone smokes. Therefore, the overall damage to health from smoking is far less than the damage caused by lead toxicity.

Yet no-one seems remotely interested. Which is, in itself, very interesting.

It is true that most Governments have made efforts to reduce lead exposure. Levels of lead in the children dropped five-fold between the mid-sixties and the late nineties. 4 Indeed, once the oil industry stopped blowing six hundred thousand tons of lead into the atmosphere from vehicle exhausts things further improved. Lead has also been removed from water pipes, paint, and suchlike.

However, it takes a long old time for lead to be removed from the human body. It usually lingers for a lifetime. Equally, trying to get rid of lead is not easy, that’s for sure. Having said this, chelation therapy has been tried, and does seem to work.

‘On November 4, 2012, the TACT (Trial to Assess Chelation Therapy) investigators reported publicly the first large, randomized, placebo-controlled trial evidence that edetate disodium (disodium ethylenediaminetetraacetic acid) chelation therapy significantly reduced cardiac events in stable post–myocardial infarction (MI) patients. These results were so unexpected that many in the cardiology community greeted the report initially with either skepticism (it is probably wrong) or outright disbelief (it is definitely wrong).’ 3

Cardiologists, it seems from the above quotes, know almost nothing about the subject in which they claim to be experts. Just try mentioning glycocalyx to them… ‘the what?’

Apart from a few brave souls battling to remove lead from the body, widely derided and dismissed by the mainstream world of cardiology, nothing else is done. Nothing at all. We spend trillions on cholesterol lowering, and trillions on blood pressure lowering, and more trillions on diet. On the other hand, we do nothing active to try and change a risk factor that kicks all the others – in terms of numbers killed – into touch.

Hubris of Nutritionism

There is a fundamental disagreement over diets. It is about one’s philosophical position on humanity and the world, about the kind of society one aspires to. Before getting to nutritionism, let me explain the understanding I’ve developed from what I’ve learned. It’s all quite fascinating. There is a deeper reason why, for example, I see vegetarianism as potentially healthy but not veganism (see the debate in the comments section of my recent post A Fun Experiment), and that distinction will be central to the argument that follows. There have been some, though not many, traditional societies that were vegetarian or rather semi-vegetarian for millennia (e.g., India; see the specific comment in the above-linked post), but veganism didn’t exist until the Seventh Day Adventists invented it in the late 19th century. Few people know this history. It’s not exactly something most vegan advocates, other than Adventists themselves, would want to mention.

Veganism was a modernization of the ancient Greek Galenic theory of humors, which had originally been incorporated into mainstream Christian thought during feudalism, especially within the monastic tradition of abstinence and self-denial but also applied to the population at large through food laws. A particular Galenic argument is that, by limiting red meat and increasing plant foods, there would be a suppression or weakening of libido/virility as hot-bloodedness that otherwise threatens to ‘burn’ up the individual. (The outline of this ideology remains within present dietary thought in the warning that too much animal protein will up-regulate mTOR and over-activate IGF-1 which, as it is asserted, will shorten lifespan. Many experts, such as Dr. Steven Gundry in The Longevity Paradox, biological anthropologist Stephen Le in 100 Million Years of Food, etc., have been parroting Galenic thought without any awareness of the origin of the ideas they espouse. See my posts High vs Low Protein and Low-Carb Diets On The Rise.) Also, it was believed this Galenic strategy would help control problematic behaviors like rowdiness, the reason red meat sometimes was banned prior to Carnival in the Middle Ages (about dietary systems as behavioral manipulation and social control, see Food and Faith in Christian Culture ed. by Ken Albala and Trudy Eden and some commentary about that book at my posts Western Individuality Before the Enlightenment Age and The Crisis of Identity; for similar discussion, also check out The Agricultural Mind, “Yes, tea banished the fairies.”, Autism and the Upper Crust, and Diets and Systems). For the purposes of Christian societies, this has been theologically reinterpreted and reframed. Consider the attempt to protect against the moral sin of masturbation as part of the Adventist moral reform, such that modern cereal was originally formulated specifically for an anti-masturbation campaign — the Breakfast of Champions!

High protein vs low protein is an old conflict, specifically in terms of animal meat and even more specifically red meat. It’s more of a philosophical or theological disagreement than a scientific debate. The anti-meat argument would never hold such a central position in modern dietary thought if not for the influence of heavily Christianized American culture. It’s part of Christian theology in general. Gary Taubes discusses how dieting gets portrayed in terms of the sins of gluttony and sloth: “Of all the dangerous ideas that health officials could have embraced while trying to understand why we get fat, they would have been hard-pressed to find one ultimately more damaging than calories-in/calories-out. That it reinforces what appears to be so obvious – obesity as the penalty for gluttony and sloth – is what makes it so alluring. But it’s misleading and misconceived on so many levels that it’s hard to imagine how it survived unscathed and virtually unchallenged for the last fifty years” (Why We Get Fat). Read mainstream dietary advice and you’ll quickly hear this morality-drenched worldview of fallen humanity and Adam’s sinful body. This goes along with the idea of “no pain, no gain” (an ideology I came to question in seeing how simple and easy low-carb diets are, specifically with how ketosis eliminates endless hunger and cravings while making fat melt away with little effort, not to mention how my decades of drug-resistant and suicide-prone depression also disappeared, something many others have experienced; so it turns out that for many people great gain can be had with no pain at all). The belief has been that we must suffer and struggle to attain goodness (with physical goodness being an outward sign of moral goodness), such that the weak flesh of the mortal frame must be punished with bodily mortification (i.e., dieting and exercise) to rid it of its inborn sinful nature. Eating meat is a pleasurable temptation in nurturing the ‘fallen’ body and so it must be morally wrong. This Christian theology has become so buried in our collective psyche, even in science itself, that we are no longer able to recognize it for what it is. And because of historical amnesia, we are unaware of where these mind viruses come from.

It’s not only that veganism is a modern ideology in a temporal sense, as a product of post-Enlightenment fundamentalist theology and its secularization. More importantly, it is a broader expression of modern ways of thinking and perceiving, of being in and relating to the world, including but far from limited to how it modernizes and repurposes ancient philosophy (Galen wasn’t advocating veganism, religious or secularized, that is for sure). Besides the crappy Standard American Diet (SAD), veganism is the only other diet entirely dependent on industrialization by way of chemical-laden monoculture, high-tech food processing, and global trade networks — and hence enmeshed in the web of big ag, big food, big oil, and big gov (all of this, veganism and the industrialization that made it possible, surely was far beyond Galen’s imagination). To embrace veganism, no matter how well-intentioned, is to be fully complicit in modernity and all that goes with it — not that this makes individual vegans bad people, as to varying degrees all of us are complicit in the world we are born into. Still, veganism stands out because, within that ideological framework, there is no choice outside of modern industrialization.

At the heart of veganism is a techno-utopian vision and technocratic impulse. It’s part of the push for a plant-based diet that began with the Seventh Day Adventists, most infamously Dr. John Harvey Kellogg, who formed the foundation of modern American nutritional research and dietary recommendations (see the research of Bellinda Fettke who made this connection: Ellen G White and Medical Evangelism; Thou Shalt not discuss Nutrition ‘Science’ without understanding its driving force; and Lifestyle Medicine … where did the meat go?). I don’t say this to be mean or dismissive of vegans. If one insists on being a vegan, there are better ways to do it. But it will never be an optimal diet, neither for the individual nor for the environment (and, yes, industrial agriculture does kill large numbers of animals, whether or not the vegan has to see it in the grocery store or on their plate; see my post Carnivore Is Vegan: if veganism is defined by harming and killing the fewest lives, if veganism is dependent on industrialization that harms and kills large numbers of lives, and if carnivore is potentially the least dependent on said industrialization, then we are forced to come to the conclusion that, by definition, “carnivore is vegan”). Still, if vegans insist, they should be informed and honest in embracing industrialization as a strength rather than hiding it as a weakness, overtly arguing for techno-utopian and technocratic solutions in the Enlightenment fashion of Whiggish progressivism. Otherwise, this unacknowledged shadow side of veganism remains an Achilles’ heel that eventually will take down veganism as a movement when the truth is finally revealed and becomes public knowledge. I don’t care if veganism continues in its influence, but if vegans care about advocating their moral vision, they had better do some soul-searching about what exactly they are advocating, for what reason, and to what end.

Veganism’s uniqueness is not limited to its being the only specific diet that is fully industrialized (SAD isn’t comparable because it isn’t a specific diet, and one could argue that veganism as an industrialized diet is one variety of SAD). More importantly, what makes veganism unique is its ethical impetus. That is how it originated within the righteously moralizing theology of Adventism (to understand the moral panic of that era, read my post The Crisis of Identity). The Adventist Ellen G. White’s divine visions from God preceded the health arguments. And even those later health arguments within Adventism were predicated upon a moralistic hypothesis of human nature and reality, that is to say theology. Veganism has maintained the essence of that theology of moral health, even though the dietary ideology was quickly sanitized and secularized. Adventists like Dr. Kellogg realized that this new kind of plant-based diet would not spread unless it was made to seem natural and scientific, a common strategy of fundamentalist apologetics such as pseudo-scientific Creationism (I consider this theologically-oriented rhetoric to be a false framing; for damn sure, veganism is not more natural, since it is one of the least natural diets humanity has ever attempted). So, although the theology lost its emphasis, one can still sense this religious-like motivation and righteous zeal that remains at the heart of veganism, more than a mere diet but an entire social movement and political force.

Let’s return to the health angle and finally bring in nutritionism. The only way a vegan diet is possible at all is through the industrial agriculture that eliminated the traditional farming practices that were heavily dependent on animal husbandry and pasturage, along with the entire lifestyle of the farming communities built around them (similar to how fundamentalist religion such as Adventism is also a product of modernity, an argument made by Karen Armstrong; modern fundamentalism is opposed to traditional religion in the way that, as Corey Robin explains, reactionary conservatism is opposed to the ancien regime it attacked and replaced). This is the industrial agriculture that mass produces plant foods through monoculture and chemicals (and that, by the way, destroys ecosystems and kills the soil). And on top of that, vegans would quickly die of malnutrition if not for the industrial production of supplements and fortified foods to compensate for the immense deficiencies of their diet. This is based on an ideology of nutritionism: that as clever apes we can outsmart nature, that humanity is separate from and above nature — and this is the main point I’m making here, that veganism is unnatural to the human condition formed under millions of years of hominid evolution. This isn’t necessarily a criticism from a Christian perspective, since it is believed that the human soul ultimately isn’t at home in this world, but it is problematic when this theology is secularized and turned into pseudo-scientific dogma. It further disconnects us from the natural world and from our own human nature. Hence, veganism is very much a product of modernity and all of its schisms and dissociations, as seen in American society of the past century or so. Of course, the Adventists want the human soul to be disconnected from the natural world and saved from the fallen nature of Adam’s sin. As for the rest of us who aren’t Adventists, we might have a different view on the matter. This is definitely something atheist or pagan vegans should seriously consider and deeply contemplate. We should all think about how the plant-based and anti-meat argument has come to dominate mainstream thought. Will veganism and industrialization save us? Is that what we want to put our faith in? Is that faith scientifically justified?

It’s not that I’m against plant-based diets in general. I’ve been vegetarian. And when I was doing a paleo diet, I ate more vegetables than I had ever eaten in my life, far more than most vegetarians. I’m not against plants themselves based on some strange principle. It’s specifically veganism that I’m concerned about. Unlike vegetarianism, there is no way to do veganism with traditional, sustainable, and restorative farming practices. Vegetarianism, omnivory, and carnivory are all fully compatible with eliminating industrial agriculture, including factory farming. That is not the case with veganism, a diet that is unique in its place in the modern world. Not all plant-based diets are the same. Veganism is entirely different from plant-heavy diets such as vegetarianism and paleo that also allow animal foods (also, consider the fact that any diet other than carnivore is “plant-based”, a somewhat meaningless label). That is no small point, since plant foods are limited in seasonality in all parts of the world, whereas most animal foods are not. If a vegetarian wanted, they could live fairly far north and avoid out-of-season plant foods shipped in from other countries simply by eating lots of eggs and dairy (maybe combined with very small amounts of what few locally-grown plant foods were traditionally and pre-industrially stored over winter: nuts, apples, fermented vegetables, etc.; or maybe not even that since, technically, a ‘vegetarian’ diet could be ‘carnivore’ in only eating eggs and dairy). A vegetarian could be fully locavore. A vegan could not, at least not in any Western country, although a vegan near the equator might be able to pull off a locavore diet as long as they could rely upon local industrial agriculture, which at least would eliminate the harm from mass transportation, but it still would be an industrial-based diet with all the problems that entails, including mass suffering and death.

Veganism, in entirely excluding animal foods (including insect foods such as honey), does not allow this option of a fully natural way of eating, both local and seasonal, without any industrialization. Even in warmer climes amidst lush foliage, a vegan diet was never possible and never practiced prior to industrialization. Traditional communities, surrounded by plant foods or not, have always found it necessary to include animal and insect foods to survive and thrive. Hunter-gatherers living in the middle of dense jungles (e.g., Piraha) typically get most of their calories from animal foods, as long as they maintain access to their traditional hunting grounds and fishing waters, and as long as poaching, environmental destruction, or hunting laws haven’t disrupted their traditional foodways. The closest to a more fully plant-based diet among traditional people was found among Hindus in India, but even there they unintentionally (prior to chemical insecticides) included insects and insect eggs in their plant foods while intentionally allowing individuals during fertile phases of life to eat meat. So, even traditional (i.e., pre-industrial) Hindus weren’t entirely and strictly vegetarian, much less vegan (see my comment at my post A Fun Experiment). Still, high quality eggs and dairy can go a long way toward nourishment, as many healthy traditional societies included such foods, especially dairy from pasture-raised animals (consider Weston A. Price’s early 20th century research on healthy traditional communities; see my post Health From Generation To Generation).

Anyway, one basic point is that a plant-based diet is not necessarily identical to veganism, in that other plant-based diets exist that include various forms of animal foods. This is a distinction many vegan advocates want to confound, muddying the waters of public debate. In discussing the just-released documentary The Game Changers, Paul Kita writes that it “repeatedly pits a vegan diet against a diet that includes meat. The film does this to such an extent that you slowly realize that “plant-based” is just a masquerade for “vegan.” Either you eat animal products and suffer the consequences or avoid animal products and thrive, the movie argues.” (This New Documentary Says Meat Will Kill You. Here’s Why It’s Wrong.). That is a false dichotomy, a forced choice driven by an ideological agenda. Kita makes a simple point that challenges this entire frame: “Except that there’s another choice: Eat more vegetables.” Or simply eat fewer industrial foods that have been industrially grown, industrially processed, and/or industrially transported — basically, don’t eat heavily processed crap, from either meat or plants (specifically refined starches, added sugar, and vegetable oils), but also don’t eat the unhealthy (toxic and nutrient-depleted) produce of industrial agriculture, that is to say make sure to eat locally and in season. But that advice also translates as: Don’t be vegan. That isn’t the message vegan advocates want you to hear.

Dietary ideologies embody social, political, and economic ideologies, sometimes as all-encompassing cultural worldviews. They can shape our sense of identity and reality, what we perceive as true, what we believe is desirable, and what we imagine is possible. It goes further than that, in fact. Diets can alter our neurocognitive development and so potentially alter the way we think and feel. This is one way mind viruses could quite literally parasitize our brains and come to dominate a society, which I’d argue is what has brought our own society to this point of mass self-harm through the dietary dogma of pseudo-scientific “plant-based” health claims (with possibly hundreds of millions of people who have been harmed and had their lives cut short). A diet is never merely a diet. And we are all prone to getting trapped in ideological systems. In criticizing veganism as a diet, I’m not saying that vegans as individuals are bad people. And I don’t wish them any ill will, much less failure in their dietary health. But I entirely oppose the ideological worldview and social order that, with conscious intention or not, they are promoting. I have a strong suspicion that the world that vegans are helping to create is not a world I want to live in. It is not their beautiful liberal dream that I criticize and worry about. I’m just not so sure that the reality will turn out to be all that wonderful. So far, the plant-based agenda doesn’t seem to be working out all that well. Americans eat more whole grains and legumes, vegetables and fruits than at any time since data has been kept, and yet the health epidemic continues to worsen (see my post Malnourished Americans). It was never rational to blame public health concerns on meat and animal fat.

Maybe I’m wrong about veganism and the ultimate outcome of vegans’ role in helping to shape the modern world. Maybe technological innovation and progress will transform and revolutionize industrial agriculture and food processing, the neoliberal trade system, and the capitalist market in a way beneficial for all involved, for the health and healing of individuals and the whole world. Maybe… but I’m not feeling confident enough to bet the fate of future generations on what, to me, seems like a flimsy promise of vegan idealism borne out of divine visions and theological faith. More simply, veganism doesn’t seem all that healthy on the most basic of levels. No diet that doesn’t support health for the individual will support health for society, as society is built on the functioning of humans. That is the crux of the matter. To return to nutritionism, that is the foundation of veganism — the argument that, in spite of all of the deficiencies of veganism and other varieties of the modern industrial diet, we can simply supplement and fortify the needed nutrients and all will be well. To my mind, that seems like an immense leap of faith. Adding some nutrients back into a nutrient-depleted diet is better than nothing, but comes nowhere close to the nutrition of traditional whole foods. If we have to supplement the deficiencies of a diet, that diet remains deficient and we are merely covering up the worst aspects of it, what we are able to most obviously observe and measure. Still, even with those added vitamins, minerals, cofactors, etc., it doesn’t follow that the body is getting all that it needs for optimal health. In traditional whole foods, there are potentially hundreds or thousands of compounds, most of which have barely been researched or not researched at all. There are certain health conditions that require specific supplements. Sure, use them when necessary, as we are not living under optimal conditions of health in general. But when anyone and everyone on a particular diet is forced to supplement to avoid serious health decline, as is the case with veganism, there is a serious problem with that diet.

It’s not exactly that I disagree with the possible solution vegans are offering to this problem, as I remain open to future innovative progress. I’m not a nostalgic reactionary and romantic revisionist seeking to turn back the clock to re-create a past that never existed. I’m not, as William F. Buckley Jr. put it, “someone who stands athwart history, yelling Stop”. Change is great — I have nothing against it. And I’m all for experimenting. That’s not where I diverge from the “plant-based” vision of humanity’s salvation. Generally speaking, vegans simply ignore the problem I’ve detailed or pretend it doesn’t exist. They believe that such limitations don’t apply to them. That is a very modern attitude coming from a radically modern diet, and the end result would be revolutionary in remaking humanity, a complete overturning of what came before. The point isn’t to be obsessed with the past or to believe we are limited by evolutionary conditions and historical precedent. But ignoring the past is folly. Our collective amnesia about the traditional world keeps getting us into trouble. We’ve nearly lost all traces of what health once meant, the basic level of health that used to be the birthright of all humans.

My purpose here is to create a new narrative. It isn’t vegans and vegetarians against meat-eaters. The fact of the matter is most Americans eat more plant foods than animal foods, in following this part of dietary advice from the AHA, ADA, and USDA (specifically eating more vegetables, fruits, whole grains, and legumes than ever before measured since data has been kept). When snacking, it is plant foods (crackers, potato chips, cookies, donuts, etc.) that we gorge on, not animal foods. Following the publication of Upton Sinclair’s The Jungle, the average intake of red meat went into decline. And since the 1930s, Americans have consumed more industrial seed oils than animal fat. “American eats only about 2oz of red meat per day,” tweets Dr. Shawn Baker, “and consumes more calories from soybean oil than beef!” Even total fat intake hasn’t increased but has remained steady, with the only change being in the ratio of kinds of fat, that is to say more industrial seed oils. It’s true that most Americans aren’t vegan, but what they share with vegans is an industrialized diet that is “plant-based”. To push the American diet further in this direction would hardly be a good thing. And it would require ever greater dependence on the approach of nutritionism, of further supplementation and fortification as Americans increasingly become malnourished. That is no real solution to the problem we face.

Instead of scapegoating meat and animal fat, we should return to the traditional American diet or else some other variant of the traditional human diet. The fact of the matter is that, historically, Americans ate massive amounts of meat and, at the time, were known as the healthiest population around. Meat-eating Americans in past centuries towered over meat-deprived Europeans. And those Americans, even the poor, were far healthier than their demographic counterparts elsewhere in the civilized and increasingly industrialized world. The United States, one of the last Western countries to be fully industrialized and urbanized, was one of the last countries to see the beginning of a health epidemic. The British noticed the first signs of physical decline in the late 1800s, whereas Americans didn’t clearly see this pattern until World War II. With this in mind, it would be more meaningful to speak of animal-based diets, including vegetarianism that allows dairy and eggs. That would be far more useful than grouping together supposed “plant-based” diets. Veganism is worlds apart from vegetarianism. Nutritionally speaking, vegetarianism has more in common with the paleo diet or even the carnivore diet than with veganism, the latter being depleted of essential nutrients from animal foods (fat-soluble vitamins, EPA, DHA, DPA, choline, cholesterol, etc.; yes, we sicken and die without abundant cholesterol in our diet, the reason dementia and other forms of neurocognitive decline are a common side effect of statins lowering cholesterol levels). To entirely exclude all animal foods is a category unto itself, a category that didn’t exist and was unimaginable until recent history.

* * *

Nutritionism
by Gyorgy Scrinis

In Defense of Food
by Michael Pollan

Vegan Betrayal
by Mara Kahn

The Vegetarian Myth
by Lierre Keith

Mike Mutzel:

On the opposite side of the spectrum, the vegans argue that now we have the technologies, like synthetic B12, and we can get DHA from algae. So it’s a beautiful time to be vegan because we don’t need to rely upon animals for these compounds. What would you say to that argument?

Paul Saladino:

I would say that that’s a vast oversimplification of the sum total of human nutrition to think that, if we can get synthetic B12 and synthetic DHA, we’re getting everything in an animal. It’s almost like this reductionist perspective, in my opinion.

I’ve heard some people say that it doesn’t matter what you eat. It’s all about calories in and calories out, and then you can just take a multivitamin for your minerals and vitamins. And I always bristle at that. I think that is so reductionist. You really think you’ve got it all figured out, that you can just take one multivitamin and your calories and that is the same as real food?

That to me is just a travesty of an intellectual hypothesis or intellectual position to take because that’s clearly not the case. We know that animal foods are much more than the reductionist vitamins and minerals that are in them. And they are the structure or they are the matrix they are the amino acids… they are the amino acid availability… they are the cofactors. And to imagine that you can substitute animal foods with B12 and DHA is just a very scary position for me.

I think this is an intellectual error that we make over and over as humans in our society and this is a broader context… I think that we are smart and because we have had some small victories in medicine and nutrition and health. We’ve made scanning electron microscopes and we’ve understood quarks. I think that we’ve gotten a little too prideful and we imagine that as humans we can outsmart the natural world, that we can outsmart nature. And that may sound woo-woo, but I think it’s pretty damn difficult to outsmart 3 million years of natural history and evolution. And any time we try to do that I get worried.

Whether it’s peptides, whether it’s the latest greatest drug, whether it’s the latest greatest hormone or hormone combination, I think you are messing with three million years of the natural world’s wisdom. You really think you’re smarter than that? Just wait just wait, just wait, you’ll see. And to reduce animal foods to B12 and DHA, that’s a really really bad idea.

And as we’ve been talking about, all those plant foods that you’re eating on a vegan diet are gonna come with tons of plant toxins. So yes, I think that we are at a time in human history when you can actually eat all plants and not get nutritional deficiencies in the first year or two because you can supplement the heck out of it, right? You can get… but, but… I mean, the list goes on.

Where’s your zinc? Where’s your carnitine? Where’s your carnosine? Where’s your choline? It’s a huge list of things. How much protein are you getting? Are you actually a net positive nitrogen balance? Let’s check your labs. Are you getting enough iodine? Where are you getting iodine from on a vegan diet?

It doesn’t make sense. You have to supplement with probably 27 different things. You have to think about the availability of your protein, the net nitrogen uses of your protein.

And you know, people may not know this about me. I was a vegan, I was a raw vegan for about 7 months about 14 years ago. And my problem — and one thing I’ve heard from a lot of other people, in fact my clients, is the same thing today — is that, even if you’re able to eat the foods and perfectly construct micronutrients, you’re going to have so much gas that nobody’s going to want to be around you in the first place.

And I don’t believe that, in any way, shape or form, a synthetic diet is the same as a real foods diet. You can eat plants and take 25 supplements. But then you think: what’s in your supplements? And are they bioavailable in the same way? And do they have the cofactors like they do in the food? And to imagine — we’ve done so much in human nutrition — but to imagine that we really fully understand the way that humans eat and digest their food, I think that’s just pride and that’s just folly.

Mike Mutzel:

Well, I agree. I mean, I think there’s a lot more to food than we recognize: micro RNA, transfer RNA, other molecules that are not quote-unquote macronutrients. Yeah, I think that’s what you’re getting from plants and animals, in a good or bad way, that a lot of people don’t think about. For example, there are animal studies that show stress on animals, like pre-slaughter stress, affects the transcription of various genes in the animal product.

So, I love what you’re bringing to this whole carnivore movement — like the grass-fed movement, eating more organic and free-range, things like that — because one of the qualms that I had seeing this thing take off is a lot of people going to fast food, taking the bun off the burger, and saying that there’s really no difference between grass-fed and grain-fed. Like meat’s meat, just get what you can afford. I understand that some people… I’ve been in that place financially before in my life where grass-fed was a luxury.

But there are the other constituents that could potentially be in lower quality foods, both plant and animal. And the other thing, just to hit on one more thing… The supplements — I’ve been in the supplement space since ’06 — they’re not free of iatrogenesis, right. So there are heavy metals, arsenic, lead, mercury, cadmium in supplements; even in vegan proteins, for example.

Paul Saladino:

Yeah, highly contaminated. Yeah, people don’t think about the metals in their supplements. And I see a lot of clients with high heavy metals and we think, where are you getting this from? I saw a guy the other day with really high tin and I think it’s in his supplements. And so anyway, that’s a whole other story.

The Disease of Nostalgia

“The nostalgic is looking for a spiritual addressee. Encountering silence, he looks for memorable signs, desperately misreading them.”
― Svetlana Boym, The Future of Nostalgia

Nostalgia is one of those strange medical conditions from the past, first observed in 17th century soldiers being sent off to foreign lands during that era of power struggles between colonial empires. It’s lost that medical framing since then, as it is now seen as a mere emotion or mood or quality. And it has become associated with the reactionary mind and invented traditions. We no longer take it seriously, sometimes even dismissing it as a sign of immaturity.

But it used to be considered a physiological disease with measurable symptoms, such as brain inflammation, along with serious repercussions, as the afflicted could literally waste away and die. It was a profound homesickness experienced as an existential crisis of identity, a longing for a particular place and the sense of being uprooted from it. Then it shifted from a focus on place to a focus on time. It became more abstract and, because of that, it lost its medical status. This happened simultaneously as a new disease, neurasthenia, took its place in the popular imagination.

In America, nostalgia never took hold to the same degree as it did in Europe. It finally made its appearance during the American Civil War, only to be dismissed as unmanliness and weak character, a defect and deficiency. It was a disease of civilization, but it strongly affected the least civilized, such as rural farmers. America was sold as a nation of progress, and so attachment to old ways was deemed un-American. Neurasthenia better fit the mood that the ruling elite sought to promote and, unlike nostalgia, it was presented as a disease of the most civilized, although over time it too became a common malady, specifically as it was Europeanized.

Over the centuries, there was a shift in the sense of time. Up through the early colonial era, a cyclical worldview remained dominant (John Demos, Circles and Lines). As time became linear, there was no possibility of a return. The revolutionary era permanently broke the psychological link between past and future. There was even a revolution in the understanding of ‘revolution’ itself, a term that originated from astrology and literally meant a cyclical return. In a return, there is replenishment. But without that possibility, one is thrown back on individual reserves that are limited and must be managed. The capitalist self of hyper-individualism was finally fully formed. That is what neurasthenia was concerned with, and so nostalgia lost its explanatory power. In The Future of Nostalgia, Svetlana Boym writes:

“From the seventeenth to the nineteenth century, the representation of time itself changed; it moved away from allegorical human figures— an old man, a blind youth holding an hourglass, a woman with bared breasts representing Fate— to the impersonal language of numbers: railroad schedules, the bottom line of industrial progress. Time was no longer shifting sand; time was money. Yet the modern era also allowed for multiple conceptions of time and made the experience of time more individual and creative.”

As society turned toward an ethos of the dynamic, it became ungrounded and unstable. Some of the last healthy ties to the bicameral mind were severed. (Interestingly, regarding the early diagnoses of nostalgia as a disease, Boym notes that “One of the early symptoms of nostalgia was an ability to hear voices or see ghosts.” That sounds like the bicameral mind re-emerging under conditions of stress, not unlike John Geiger’s third man factor. In nostalgia as in the archaic mind, there is a secret connection between language and music, as united through voice — see Development of Language and Music and Spoken Language: Formulaic, Musical, & Bicameral.)

Archaic authorization mutated into totalitarianism, a new refuge for the anxiety-riddled mind. And the emerging forms of authoritarianism heavily draw upon the nostalgic turn (Ben G. Price, Authoritarian Grammar and Fundamentalist Arithmetic Part II), just as did the first theocracies (religion, writes Julian Jaynes, is “the nostalgic anguish for the lost bicamerality of a subjectively conscious people”), even as or especially because the respectable classes dismissed it. This is courting disaster, for the archaic mind still lives within us, still speaks in the world, even if the voices are no longer recognized.

The first laments of loss echoed out from the rubble of the Bronze Age and, precisely as the longing has grown stronger, the dysfunctions associated with it have become normalized. But how disconnected and lost in abstractions can we get before either we become something entirely else or face another collapse?

“Living amid an ongoing epidemic that nobody notices is surreal. It is like viewing a mighty river that has risen slowly over two centuries, imperceptibly claiming the surrounding land, millimeter by millimeter. . . . Humans adapt remarkably well to a disaster as long as the disaster occurs over a long period of time”
~E. Fuller Torrey & Judy Miller, Invisible Plague

* * *

As a side note, I’d point to utopia as being the other side of the coin to nostalgia. And so the radical is the twin of the reactionary. In a different context, I said something about shame that could apply equally well to nostalgia (“Why are you thinking about this?”): “The issue of shame is a sore spot where conservatism and liberalism have, from their close proximity, rubbed each other raw. It is also a site of much symbolic conflation, the linchpin like a stake in the ground to which a couple of old warriors are tied in their ritual dance of combat and wounding, where both are so focused on one another that neither pays much attention to the stake that binds them together. In circling around, they wind themselves ever tighter and their tethers grow shorter.”

While conversing with someone on the political left, I noticed an old pattern becoming apparent. This guy, although with a slight radical bent, is a fairly mainstream liberal coming out of the Whiggish tradition of ‘moderate’ progressivism, an ideological mindset that is often conservative-minded and sometimes reactionary (e.g., lesser-evil voting no matter how evil it gets). This kind of person is forever pulling their punches. To continue from the same piece, I wrote: “The conservative’s task is much easier for the reason that most liberals don’t want to untangle the knot, to remove the linchpin. Still, that is what conservatives fear, for they know liberals have that capacity, no matter how unlikely they are to act on it. This fear is real. The entire social order is dependent on overlapping symbolic conflations, each a link in a chain, and so each a point of vulnerability.”

To pull that linchpin would require confronting the concrete issue at hand, getting one’s hands dirty. But that is what the moderate progressive fears, for the liberal mind feels safe and protected within abstractions. Real-world context will always be sacrificed. Such a person mistrusts the nostalgia of the reactionary while maybe fearing even more the utopianism of the radical, flitting back and forth between one and the other and never getting anywhere. So, they entirely retreat from the battle and lose themselves in comforting fantasies of abstract ideals (making them prone to false equivalencies in their dreams of equality). In doing so, despite being well informed, they miss the trees for the forest, miss the reality on the ground for all the good intentions.

Neither nostalgia nor utopianism can offer a solution, even as both indicate the problem. That isn’t to say there is an escape either, for that also reinforces the pattern of anxiety, of fear and hope. The narrative predetermines our roles and the possibilities of action. We need a new narrative. The disease model of the human psyche, framed as nostalgia or neurasthenia or depression or anything else, is maybe not so helpful. Yet we have to take seriously that the stress of modernity is not merely something in people’s minds. Scapegoating the individual simply distracts from the failure of individualism. These conditions of identity are both real and imagined — that is what makes them powerful, whatever name they go by and whatever ideology they serve.

* * *

Let me throw out some loose thoughts. There is something that feels off about our society, and it is hard to put one’s finger on. That is why, in our free-floating anxiety, we look for anything to grab hold of. Most of the debates that divide the public are distractions from the real issue that we don’t know how to face, much less comprehend. These red herrings of social control are what I call symbolic conflation. To put it simply, there is plenty of projecting going on — and it is mutual from all sides involved, and it’s extremely distorted.

I’ll leave it at that. What is important for my purposes here is the anxiety itself, the intolerable sense of dissatisfaction or dukkha. Interestingly, this sense gets shifted onto the individual and so further justifies the very individualism that is at the heart of the problem. It is our individuality that makes us feel so ill at ease with the world because it disconnects and isolates us. The individual inevitably fails because individualism is ultimately impossible. We are social creatures through and through. It requires immense effort to create and maintain individuality, and sweet Jesus! is it tiresome. That is the sense of being drained that is common across these many historical conditions, from the earlier melancholia to the present depression and everything in between.

Since the beginning of modernity, there has been a fear that too many individuals are simply not up to the task. When reading about these earlier ‘diseases’, one finds a common thread running across the long history. The message is how the individual will be made to get in line with the modern world, not how to get the modern world in line with human nature. The show must go on. Progress must continue. There is no going back, so we’re told. Onward and upward. This strain of endless change and uncertainty has required special effort in enculturating and indoctrinating each new generation. In the Middle Ages and in tribal cultures, children weren’t special but basically considered miniature adults. There was no protected childhood with an extended period to raise, train, and educate the child. But in our society, the individual has to be made, as does the citizen and the consumer. None of this comes naturally and so must be artificially imposed. The child will resist and more than a few will come out the other side with severe damage, but the sacrifice must be made for the greater good of society.

This was seen, in the United States, most clearly after the American Revolution. Citizen-making became a collective project. Children needed to be shaped into a civic-minded public. And as seen in Europe, adults needed to be forced into a national identity, even if it required bullying or even occasionally burying a few people alive to get the point across. No stragglers allowed! (Nonetheless, a large part of the European population maintained local identities until the world war era.) Turning boys into men became a particular obsession in the early 20th century, with all of the building of parks, advocacy for hunting and fishing, creation of the Boy Scouts, and on and on. Boys used to turn into men spontaneously without any needed intervention, but with nostalgia and neurasthenia there was this growing fear of effeminacy and degeneracy. The civilizing project was deemed important and had to be done, no matter how many people were harmed in the process, even to the point of genocide. Creating the modern nation-state was a brutal and often bloody endeavor. No one willingly becomes a modern individual. It only happens under threat of violence and punishment.

By the way, this post is essentially an elaboration on my thoughts from another post, The Crisis of Identity. In that other post, I briefly mention nostalgia, but the focus was more on neurasthenia and related topics. It’s an extensive historical survey. This is part of a longer-term intellectual project of mine, trying to make sense of this society and how it came to be this way. Below are some key posts to consider, although I leave out those related to Jaynesian and related scholarship because that is a large area of thought all on its own (if interested, look at the tags for Consciousness, Bicameral Mind, Julian Jaynes, and Lewis Hyde):

The Transparent Self to Come?
Technological Fears and Media Panics
Western Individuality Before the Enlightenment Age
Juvenile Delinquents and Emasculated Males
The Breast To Rule Them All
The Agricultural Mind
“Yes, tea banished the fairies.”
Autism and the Upper Crust
Diets and Systems
Sleepwalking Through Our Dreams
Delirium of Hyper-Individualism
The Group Conformity of Hyper-Individualism
Individualism and Isolation
Hunger for Connection
To Put the Rat Back in the Rat Park
Rationalizing the Rat Race, Imagining the Rat Park

* * *

The Future of Nostalgia
by Svetlana Boym
pp. 25-30

Nostalgia was said to produce “erroneous representations” that caused the afflicted to lose touch with the present. Longing for their native land became their single-minded obsession. The patients acquired “a lifeless and haggard countenance,” and “indifference towards everything,” confusing past and present, real and imaginary events. One of the early symptoms of nostalgia was an ability to hear voices or see ghosts. Dr. Albert von Haller wrote: “One of the earliest symptoms is the sensation of hearing the voice of a person that one loves in the voice of another with whom one is conversing, or to see one’s family again in dreams.” 2 It comes as no surprise that Hofer’s felicitous baptism of the new disease both helped to identify the existing condition and enhanced the epidemic, making it a widespread European phenomenon. The epidemic of nostalgia was accompanied by an even more dangerous epidemic of “feigned nostalgia,” particularly among soldiers tired of serving abroad, revealing the contagious nature of the erroneous representations.

Nostalgia, the disease of an afflicted imagination, incapacitated the body. Hofer thought that the course of the disease was mysterious: the ailment spread “along uncommon routes through the untouched course of the channels of the brain to the body,” arousing “an uncommon and everpresent idea of the recalled native land in the mind.” 3 Longing for home exhausted the “vital spirits,” causing nausea, loss of appetite, pathological changes in the lungs, brain inflammation, cardiac arrests, high fever, as well as marasmus and a propensity for suicide. 4

Nostalgia operated by an “associationist magic,” by means of which all aspects of everyday life related to one single obsession. In this respect nostalgia was akin to paranoia, only instead of a persecution mania, the nostalgic was possessed by a mania of longing. On the other hand, the nostalgic had an amazing capacity for remembering sensations, tastes, sounds, smells, the minutiae and trivia of the lost paradise that those who remained home never noticed. Gastronomic and auditory nostalgia were of particular importance. Swiss scientists found that rustic mothers’ soups, thick village milk and the folk melodies of Alpine valleys were particularly conducive to triggering a nostalgic reaction in Swiss soldiers. Supposedly the sounds of “a certain rustic cantilena” that accompanied shepherds in their driving of the herds to pasture immediately provoked an epidemic of nostalgia among Swiss soldiers serving in France. Similarly, Scots, particularly Highlanders, were known to succumb to incapacitating nostalgia when hearing the sound of the bagpipes—so much so, in fact, that their military superiors had to prohibit them from playing, singing or even whistling native tunes in a suggestive manner. Jean-Jacques Rousseau talks about the effects of cowbells, the rustic sounds that excite in the Swiss the joys of life and youth and a bitter sorrow for having lost them. The music in this case “does not act precisely as music, but as a memorative sign.” 5 The music of home, whether a rustic cantilena or a pop song, is the permanent accompaniment of nostalgia—its ineffable charm that makes the nostalgic teary-eyed and tongue-tied and often clouds critical reflection on the subject.

In the good old days nostalgia was a curable disease, dangerous but not always lethal. Leeches, warm hypnotic emulsions, opium and a return to the Alps usually soothed the symptoms. Purging of the stomach was also recommended, but nothing compared to the return to the motherland believed to be the best remedy for nostalgia. While proposing the treatment for the disease, Hofer seemed proud of some of his patients; for him nostalgia was a demonstration of the patriotism of his compatriots who loved the charm of their native land to the point of sickness.

Nostalgia shared some symptoms with melancholia and hypochondria. Melancholia, according to the Galenic conception, was a disease of the black bile that affected the blood and produced such physical and emotional symptoms as “vertigo, much wit, headache, . . . much waking, rumbling in the guts . . . troublesome dreams, heaviness of the heart . . . continuous fear, sorrow, discontent, superfluous cares and anxiety.” For Robert Burton, melancholia, far from being a mere physical or psychological condition, had a philosophical dimension. The melancholic saw the world as a theater ruled by capricious fate and demonic play. 6 Often mistaken for a mere misanthrope, the melancholic was in fact a utopian dreamer who had higher hopes for humanity. In this respect, melancholia was an affect and an ailment of intellectuals, a Hamletian doubt, a side effect of critical reason; in melancholia, thinking and feeling, spirit and matter, soul and body were perpetually in conflict. Unlike melancholia, which was regarded as an ailment of monks and philosophers, nostalgia was a more “democratic” disease that threatened to affect soldiers and sailors displaced far from home as well as many country people who began to move to the cities. Nostalgia was not merely an individual anxiety but a public threat that revealed the contradictions of modernity and acquired a greater political importance.

The outburst of nostalgia both enforced and challenged the emerging conception of patriotism and national spirit. It was unclear at first what was to be done with the afflicted soldiers who loved their motherland so much that they never wanted to leave it, or for that matter to die for it. When the epidemic of nostalgia spread beyond the Swiss garrison, a more radical treatment was undertaken. The French doctor Jourdan Le Cointe suggested in his book written during the French Revolution of 1789 that nostalgia had to be cured by inciting pain and terror. As scientific evidence he offered an account of drastic treatment of nostalgia successfully undertaken by the Russians. In 1733 the Russian army was stricken by nostalgia just as it ventured into Germany, the situation becoming dire enough that the general was compelled to come up with a radical treatment of the nostalgic virus. He threatened that “the first to fall sick will be buried alive.” This was a kind of literalization of a metaphor, as life in a foreign country seemed like death. This punishment was reported to be carried out on two or three occasions, which happily cured the Russian army of complaints of nostalgia. 7 (No wonder longing became such an important part of the Russian national identity.) Russian soil proved to be a fertile ground for both native and foreign nostalgia. The autopsies performed on the French soldiers who perished in the proverbial Russian snow during the miserable retreat of the Napoleonic Army from Moscow revealed that many of them had brain inflammation characteristic of nostalgia.

While Europeans (with the exception of the British) reported frequent epidemics of nostalgia starting from the seventeenth century, American doctors proudly declared that the young nation remained healthy and didn’t succumb to the nostalgic vice until the American Civil War. 8 If the Swiss doctor Hofer believed that homesickness expressed love for freedom and one’s native land, two centuries later the American military doctor Theodore Calhoun conceived of nostalgia as a shameful disease that revealed a lack of manliness and unprogressive attitudes. He suggested that this was a disease of the mind and of a weak will (the concept of an “afflicted imagination” would be profoundly alien to him). In nineteenth-century America it was believed that the main reasons for homesickness were idleness and a slow and inefficient use of time conducive to daydreaming, erotomania and onanism. “Any influence that will tend to render the patient more manly will exercise a curative power. In boarding schools, as perhaps many of us remember, ridicule is wholly relied upon. . . . [The nostalgic] patient can often be laughed out of it by his comrades, or reasoned out of it by appeals to his manhood; but of all potent agents, an active campaign, with attendant marches and more particularly its battles is the best curative.” 9 Dr. Calhoun proposed as treatment public ridicule and bullying by fellow soldiers, an increased number of manly marches and battles and improvement in personal hygiene that would make soldiers’ living conditions more modern. (He also was in favor of an occasional furlough that would allow soldiers to go home for a brief period of time.)

For Calhoun, nostalgia was not conditioned entirely by individuals’ health, but also by their strength of character and social background. Among the Americans the most susceptible to nostalgia were soldiers from the rural districts, particularly farmers, while merchants, mechanics, boatmen and train conductors from the same area or from the city were more likely to resist the sickness. “The soldier from the city cares not where he is or where he eats, while his country cousin pines for the old homestead and his father’s groaning board,” wrote Calhoun. 10 In such cases, the only hope was that the advent of progress would somehow alleviate nostalgia and the efficient use of time would eliminate idleness, melancholy, procrastination and lovesickness.

As a public epidemic, nostalgia was based on a sense of loss not limited to personal history. Such a sense of loss does not necessarily suggest that what is lost is properly remembered and that one still knows where to look for it. Nostalgia became less and less curable. By the end of the eighteenth century, doctors discovered that a return home did not always treat the symptoms. The object of longing occasionally migrated to faraway lands beyond the confines of the motherland. Just as genetic researchers today hope to identify a gene not only for medical conditions but social behavior and even sexual orientation, so the doctors in the eighteenth and nineteenth centuries looked for a single cause of the erroneous representations, one so-called pathological bone. Yet the physicians failed to find the locus of nostalgia in their patient’s mind or body. One doctor claimed that nostalgia was a “hypochondria of the heart” that thrives on its symptoms. To my knowledge, the medical diagnosis of nostalgia survived in the twentieth century in one country only—Israel. (It is unclear whether this reflects a persistent yearning for the promised land or for the diasporic homelands left behind.) Everywhere else in the world nostalgia turned from a treatable sickness into an incurable disease. How did it happen that a provincial ailment, maladie du pays , became a disease of the modern age, mal du siècle?

In my view, the spread of nostalgia had to do not only with dislocation in space but also with the changing conception of time. Nostalgia was a historical emotion, and we would do well to pursue its historical rather than psychological genesis. There had been plenty of longing before the seventeenth century, not only in the European tradition but also in Chinese and Arabic poetry, where longing is a poetic commonplace. Yet the early modern conception embodied in the specific word came to the fore at a particular historical moment. “Emotion is not a word, but it can only be spread abroad through words,” writes Jean Starobinski, using the metaphor of border crossing and immigration to describe the discourse on nostalgia. 11 Nostalgia was diagnosed at a time when art and science had not yet entirely severed their umbilical ties and when the mind and body—internal and external well-being—were treated together. This was a diagnosis of a poetic science—and we should not smile condescendingly on the diligent Swiss doctors. Our progeny well might poeticize depression and see it as a metaphor for a global atmospheric condition, immune to treatment with Prozac.

What distinguishes modern nostalgia from the ancient myth of the return home is not merely its peculiar medicalization. The Greek nostos , the return home and the song of the return home, was part of a mythical ritual. […] Modern nostalgia is a mourning for the impossibility of mythical return, for the loss of an enchanted world with clear borders and values; it could be a secular expression of a spiritual longing, a nostalgia for an absolute, a home that is both physical and spiritual, the edenic unity of time and space before entry into history. The nostalgic is looking for a spiritual addressee. Encountering silence, he looks for memorable signs, desperately misreading them.

The diagnosis of the disease of nostalgia in the late seventeenth century took place roughly at the historical moment when the conception of time and history were undergoing radical change. The religious wars in Europe came to an end but the much prophesied end of the world and doomsday did not occur. “It was only when Christian eschatology shed its constant expectations of the immanent arrival of doomsday that a temporality could have been revealed that would be open to the new and without limit.” 13 It is customary to perceive “linear” Judeo-Christian time in opposition to the “cyclical” pagan time of eternal return and discuss both with the help of spatial metaphors. 14 What this opposition obscures is the temporal and historical development of the perception of time that since Renaissance on has become more and more secularized, severed from cosmological vision.

Before the invention of mechanical clocks in the thirteenth century the question, What time is it? was not very urgent. Certainly there were plenty of calamities, but the shortage of time wasn’t one of them; therefore people could exist “in an attitude of temporal ease. Neither time nor change appeared to be critical and hence there was no great worry about controlling the future.” 15 In late Renaissance culture, Time was embodied in the images of Divine Providence and capricious Fate, independent of human insight or blindness. The division of time into Past, Present and Future was not so relevant. History was perceived as a “teacher of life” (as in Cicero’s famous dictum, historia magistra vitae) and the repertoire of examples and role models for the future. Alternatively, in Leibniz’s formulation, “The whole of the coming world is present and prefigured in that of the present.” 16

The French Revolution marked another major shift in European mentality. Regicide had happened before, but not the transformation of the entire social order. The biography of Napoleon became exemplary for an entire generation of new individualists, little Napoleons who dreamed of reinventing and revolutionizing their own lives. The “Revolution,” at first derived from natural movement of the stars and thus introduced into the natural rhythm of history as a cyclical metaphor, henceforth attained an irreversible direction: it appeared to unchain a yearned-for future. 17 The idea of progress through revolution or industrial development became central to the nineteenth-century culture. From the seventeenth to the nineteenth century, the representation of time itself changed; it moved away from allegorical human figures—an old man, a blind youth holding an hourglass, a woman with bared breasts representing Fate—to the impersonal language of numbers: railroad schedules, the bottom line of industrial progress. Time was no longer shifting sand; time was money. Yet the modern era also allowed for multiple conceptions of time and made the experience of time more individual and creative.

“The Origin of Consciousness, Gains and Losses: Walker Percy vs. Julian Jaynes”
by Laura Mooneyham White
from Gods, Voices, and the Bicameral Mind
ed. by Marcel Kuijsten

Jaynes is plainly one who understands the human yearning for Eden, the Eden of bicameral innocence. He writes of our longings for a return to that lost organization of human mentality, a return to “lost certainty and splendour.” 44 Jones believes, in fact, that Jaynes speaks for himself when he describes the “yearning for divine volition and service [which] is with us still,” 45 of our “nostalgic anguish” which we feel for lost bicamerality. 46 Even schizophrenia, seen from Jaynes’s perspective as a vestige of bicamerality, is the anguishing state it is only because the relapse to bicamerality

is only partial. The learnings that make up a subjective consciousness are powerful and never totally suppressed. And thus the terror and the fury, the agony and the despair. … The lack of cultural support and definition for the voices [heard by schizophrenics] … provide a social withdrawal from the behavior of the absolutely social individual of bicameral societies. … [W]ithout this source of security, … living with hallucinations that are unacceptable and denied as unreal by those around him, the florid schizophrenic is in an opposite world to that of the god-owned laborers of Marduk. … [He] is a mind bared to his environment, waiting on gods in a godless world. 47

Jones, in fact, asserts that Jaynes’s discussion of schizophrenia is held in terms “reminiscent of R. D. Laing’s thesis that schizophrenics are the only sane people in our insane world.” 48 Jones goes on to say that “Jaynes, it would seem, holds that we would all be better off if ‘everyone’ were once again schizophrenic, if we could somehow return to a bicameral society which had not yet been infected by the disease of thinking.” 49

Jaynes does not, in my opinion, intimate a position nearly as reactionary as this; he has in fact made elsewhere an explicit statement to the effect that he himself feels no such longing to return to bicamerality, that he would in fact “shudder” at such a return. 50 Nonetheless, Jaynes does seem at some points in his book to describe introspection as a sort of pathological development in human history. For instance, instead of describing humanity’s move towards consciousness as liberating, Jaynes calls it “the slow inexorable profaning of our species.” 51 And no less an eminence than Northrop Frye recognized this tendency in Jaynes to disvalue consciousness. After surveying Jaynes’s argument and admitting the fascination of that argument’s revolutionary appeal, Frye points out that Jaynes’s ideas provoke a disturbing reflection: “seeing what a ghastly mess our egocentric consciousness has got us into, perhaps the sooner we get back to … hallucinations the better.” Frye expands his discussion of Jaynes to consider the cultural ramifications of this way of thinking, what he terms “one of the major cultural trends of our time”:

It is widely felt that our present form of consciousness, with its ego center, has become increasingly psychotic, incapable of dealing with the world, and that we must develop a more intensified form of consciousness, recapturing many of … Jaynes’ ‘bicameral’ features, if we are to survive the present century. 52

Frye evidently has little sympathy with such a position which would hold that consciousness is a “late … and on the whole regrettable arrival on the human scene” 53 rather than the wellspring of all our essentially human endeavors and achievements: art, philosophy, religion and science. The ground of this deprecatory perspective on consciousness, that is, a dislike or distrust of consciousness, has been held by many modern and postmodern thinkers and artists besides Jaynes, among them Sartre, Nietzsche, Faulkner, Pynchon, Freud, and Lacan, so much so that we might identify such an ill opinion of consciousness as a peculiarly modern ideology.

“Remembrance of Things (Far) Past”
by Julian Jaynes
from The Julian Jaynes Collection
ed. by Marcel Kuijsten

And nostalgia too. For with time metaphored as space, so like the space of our actual lives, a part of us solemnly keeps loitering behind, trying to visit past times as if they were actual spaces. Oh, what a temptation is there! The warm, sullen longing to return to scenes long vanished, to relive some past security or love, to redress some ancient wrong or redecide a past regret, or alter some ill-considered actions toward someone lost to our present lives, or to fill out past omissions — these are artifacts of our new remembering consciousness. Side effects. And they are waste and filler unless we use them to learn about ourselves.

Memory is a privilege for us who are born into the last three millennia. It is both an advantage and a predicament, liberation and an imprisonment. Memory is not a part of our biological evolution, as is our capacity to learn habits or simple knowings. It is an off-shoot of consciousness acquired by mankind only a hundred generations ago. It is thus the new environment of modern man. It is one which we sometimes are like legal aliens waiting for naturalization. The feeling of full franchise and citizenship in that new environment is a quest that is the unique hidden adventure of us all.

The Suffering System
by David Loy

In order to understand why that anxiety exists, we must relate dukkha to another crucial Buddhist term, anatta, or “non-self.” Our basic frustration is due most of all to the fact that our sense of being a separate self, set apart from the world we are in, is an illusion. Another way to express this is that the ego-self is ungrounded, and we experience this ungroundedness as an uncomfortable emptiness or hole at the very core of our being. We feel this problem as a sense of lack, of inadequacy, of unreality, and in compensation we usually spend our lives trying to accomplish things that we think will make us more real.

But what does this have to do with social challenges? Doesn’t it imply that social problems are just projections of our own dissatisfaction? Unfortunately, it’s not that simple. Being social beings, we tend to group our sense of lack, even as we strive to compensate by creating collective senses of self.

In fact, many of our social problems can be traced back to this deluded sense of collective self, this “wego,” or group ego. It can be defined as one’s own race, class, gender, nation (the primary secular god of the modern world), religion, or some combination thereof. In each case, a collective identity is created by discriminating one’s own group from another. As in the personal ego, the “inside” is opposed to the other “outside,” and this makes conflict inevitable, not just because of competition with other groups, but because the socially constructed nature of group identity means that one’s own group can never feel secure enough. For example, our GNP is not big enough, our nation is not powerful (“secure”) enough, we are not technologically developed enough. And if these are instances of group-lack or group-dukkha, our GNP can never be big enough, our military can never be powerful enough, and we can never have enough technology. This means that trying to solve our economic, political, and ecological problems with more of the same is a deluded response.

“Consciousness is a very recent acquisition of nature…”

“There are historical reasons for this resistance to the idea of an unknown part of the human psyche. Consciousness is a very recent acquisition of nature, and it is still in an “experimental” state. It is frail, menaced by specific dangers, and easily injured. As anthropologists have noted, one of the most common mental derangements that occur among primitive people is what they call “the loss of a soul”—which means, as the name indicates, a noticeable disruption (or, more technically, a dissociation) of consciousness.

“Among such people, whose consciousness is at a different level of development from ours, the “soul” (or psyche) is not felt to be a unit. Many primitives assume that a man has a “bush soul” as well as his own, and that this bush soul is incarnate in a wild animal or a tree, with which the human individual has some kind of psychic identity. This is what the distinguished French ethnologist Lucien Lévy-Brühl called a “mystical participation.” He later retracted this term under pressure of adverse criticism, but I believe that his critics were wrong. It is a well-known psychological fact that an individual may have such an unconscious identity with some other person or object.

“This identity takes a variety of forms among primitives. If the bush soul is that of an animal, the animal itself is considered as some sort of brother to the man. A man whose brother is a crocodile, for instance, is supposed to be safe when swimming a crocodile-infested river. If the bush soul is a tree, the tree is presumed to have something like parental authority over the individual concerned. In both cases an injury to the bush soul is interpreted as an injury to the man.

“In some tribes, it is assumed that a man has a number of souls; this belief expresses the feeling of some primitive individuals that they each consist of several linked but distinct units. This means that the individual’s psyche is far from being safely synthesized; on the contrary, it threatens to fragment only too easily under the onslaught of unchecked emotions.”

Carl Jung, Man and His Symbols
Part 1: Approaching the Unconscious
The importance of dreams

“For the average American or European, Coca-Cola poses a far deadlier threat than al-Qaeda.”

Homo Deus: A Brief History of Tomorrow
by Yuval Noah Harari

  • “Poverty certainly causes many other health problems, and malnutrition shortens life expectancy even in the richest countries on earth. In France, for example, 6 million people (about 10 percent of the population) suffer from nutritional insecurity. They wake up in the morning not knowing whether they will have anything to eat for lunch: they often go to sleep hungry; and the nutrition they do obtain is unbalanced and unhealthy — lots of starches, sugar and salt, and not enough protein and vitamins. Yet nutritional insecurity isn’t famine, and France of the early twenty-first century isn’t France of 1694. Even in the worst slums around Beauvais or Paris, people don’t die because they have not eaten for weeks on end.”
  • “Indeed, in most countries today overeating has become a far worse problem than famine. In the eighteenth century Marie Antoinette allegedly advised the starving masses that if they ran out of bread, they should just eat cake instead. Today, the poor are following this advice to the letter. Whereas the rich residents of Beverly Hills eat lettuce salad and steamed tofu with quinoa, in the slums and ghettos the poor gorge on Twinkie cakes, Cheetos, hamburgers and pizza. In 2014 more than 2.1 billion people were overweight compared to 850 million who suffered from malnutrition. Half of humankind is expected to be overweight by 2030. In 2010 famine and malnutrition combined killed about 1 million people, whereas obesity killed 3 million.”
  • “During the second half of the twentieth century this Law of the Jungle has finally been broken, if not rescinded. In most areas wars became rarer than ever. Whereas in ancient agricultural societies human violence caused about 15 per cent of all deaths, during the twentieth century violence caused only 5 per cent of deaths, and in the early twenty-first century it is responsible for about 1 per cent of global mortality. In 2012, 620,000 people died in the world due to human violence (war killed 120,000 people, and crime killed another 500,000). In contrast, 800,000 committed suicide, and 1.5 million died of diabetes. Sugar is now more dangerous than gunpowder.”
  • “What about terrorism, then? Even if central governments and powerful states have learned restraint, terrorists might have no such qualms about using new and destructive weapons. That is certainly a worrying possibility. However, terrorism is a strategy of weakness adopted by those who lack access to real power. At least in the past, terrorism worked by spreading fear rather than by causing significant material damage. Terrorists usually don’t have the strength to defeat an army, occupy a country or destroy entire cities. In 2010 obesity and related illnesses killed about 3 million people, terrorists killed a total of 7697 people across the globe, most of them in developing countries. For the average American or European, Coca-Cola poses a far deadlier threat than al-Qaeda.”

Harari’s basic argument is compelling. The kinds of violence and death we experience now are far different. The whole reason I wrote this post is because of a few key points that stood out to me: “Sugar is now more dangerous than gunpowder.” And: “For the average American or European, Coca-Cola poses a far deadlier threat than al-Qaeda.” As those quotes make clear, our first world problems are of a different magnitude. But I would push back against his argument as it applies to much of the rest of the world, for he makes the same mistake as Steven Pinker in ignoring slow violence (so pervasive and systemic as to go unnoticed and uncounted, unacknowledged and unreported, often intentionally hidden). Parts of the United States also are in third world conditions. So, it isn’t simply a problem of nutritional excess from a wealthy economy. That wealth isn’t spread evenly, much less the nutrient-dense healthy foods or the healthcare. Likewise, the violence and oppression fall harder upon some than others. Those like Harari and Pinker can go through their entire lives seeing very little of it.

Since World War Two, there have been thousands of acts of mass violence: wars and proxy wars, invasions and occupations, bombings and drone strikes; covert operations that toppled governments and backed paramilitaries and terrorists; civil wars, revolutions, famines, droughts, refugee crises, and genocides; et cetera. Most of these events of mass violence were directly or indirectly caused by the global superpowers, not only through military aggression but also by destabilizing regions, exploiting third world countries, stealing wealth and resources, enforcing sanctions on food and medicine, manipulating economies, entrapping nations in debt, artificially creating poverty, and being the main contributors to environmental destruction and climate change. One way or another, these institutionalized and globalized forms of injustice and oppression might be the combined largest cause of death, possibly a larger number than in any society seen before. Yet they are rationalized away as ‘natural’ deaths, just people dying.

Over the past three-quarters of a century, probably billions of people in the world have been killed, maimed, imprisoned, tortured, starved, orphaned, or had their lives cut short. Some of this was blatant violent action and the rest was slow violence. But it was all intentional, as part of the wealthy and powerful seeking to maintain their wealth and power and gain even more. There is little justification for all this violence. Even the War on Terror involved cynical plans for attacking countries like Iraq that preceded the terrorist attacks themselves. The Bush cronies, long before the 2000 presidential election, had it written down on paper that they were looking for an excuse to take Saddam Hussein out of power. The wars in Afghanistan and Iraq killed millions of people, around 5% or so of the population (the equivalent would be if a foreign power killed a bit less than 20 million Americans). The depleted uranium munitions spread across the landscape will add millions more deaths over the coming decades — slow, torturous, and horrific deaths, many of them children. Multiply that by the hundreds of other similar US actions, and then multiply that by the number of other countries that have committed similar crimes against humanity.

Have we really become less violent? Or has violence simply taken new forms? Maybe we should wait until after the coming World War Three before declaring a new era of peace, love, and understanding. Numerous other historical periods had a few generations without war and such. That is not all that impressive. The last two world wars are still in living memory and hence living trauma. Let’s give it some time before we start singing the praises and glory of our wonderful advancement as a civilization guided by our techno-utopian fantasies of Whiggish liberalism. But let’s also not so easily dismiss the tremendous suffering and costs from the diseases of civilization that worsen with each generation: not only obesity, diabetes, and heart disease but also autoimmune conditions, Alzheimer’s, schizophrenia, mood disorders, ADHD, autism, and on and on — besides diet and nutrition, much of it caused by chemical exposure from factory pollution, oil spills, ocean dumping, industrial farming, food additives, packaging, and environmental toxins. And we must not forget the role that governments have played in pushing harmful low-fat, high-carb dietary recommendations that, in being spread worldwide by the wealth, power, and influence of the United States, have surely harmed at least hundreds of millions of people over the past several generations.

The fact that sugar is more dangerous than gunpowder, Coca-Cola more dangerous than al-Qaeda… this is not a reason to stop worrying about mass violence and direct violence. Even if declining as a percentage, the total number of violent deaths is still going up, just as there are more slaves now than at the height of slavery prior to the American Civil War. Talking about percentages of certain deaths while excluding other deaths is sleight-of-hand rhetoric. And that misses an even bigger point. The corporate plutocracy that now rules our neo-fascist society of inverted totalitarianism poses the greatest threat of our age. That is not an exaggeration. It is simply what the data shows us to be true, as Harari unintentionally reveals. Privatized profit comes at a public price, a price we can’t afford. Even ignoring the greater externalized costs of environmental harm from corporations (and the general degradation of society from worsening inequality), the increasing costs of healthcare because of diseases caused by highly-profitable and highly-processed foods that are scientifically designed to be palatable and addictive (along with the systematic dismantling of traditional food systems) could bankrupt many countries in the near future and cripple their populations in the process. World War Three might turn out to be the least of our worries. Just because most of the costs have been externalized onto the poor and delayed to future generations doesn’t mean they aren’t real. It will take a while to get the full death count.

 

Just Smile.

“Pain in the conscious human is thus very different from that in any other species. Sensory pain never exists alone except in infancy or perhaps under the influence of morphine when a patient says he has pain but does not mind it. Later, in those periods after healing in which the phenomena usually called chronic pain occur, we have perhaps a predominance of conscious pain.”
~Julian Jaynes, Sensory Pain and Conscious Pain

I’ve lost count of the number of times I’ve seen a child react to a cut or stumble only after their parent(s) freaked out. Children are highly responsive to adults. If others think something bad has happened, they internalize this and act accordingly. Kids will do anything to conform to expectations. But most kids seem impervious to pain, assuming they don’t get the message that they are expected to put on an emotional display.

This difference can be seen when comparing how a child acts by themselves and how they act around a parent or other authority figure. You’ll sometimes see a kid looking around to see if there is an audience paying attention before crying or having a tantrum. We humans are social creatures and our behavior is always social. This is naturally understood even by infants, who have an instinct for social cues and social response.

Pain is a physical sensation, an experience that passes, whereas suffering is in the mind, a story we tell ourselves. This is why trauma can last for decades after a bad experience. The sensory pain is gone but the conscious pain continues. We keep repeating a story.

It’s interesting that some cultures, like the Piraha, don’t appear to experience trauma from the exact same events that would traumatize a modern Westerner. Neither are depression and anxiety common among them. Nor is there an obsessive fear of death. Not only are the Piraha physically tougher but psychologically tougher as well. Apparently, they tell different stories that embody other expectations.

So, what kind of society is it that we’ve created with our Jaynesian consciousness of traumatized hyper-sensitivity and psychological melodrama? Why are we so attached to our suffering and victimization? What does this story offer us in return? What power does it hold over us? What would happen if we changed the master narrative of our society in replacing the competing claims of victimhood with an entirely different way of relating? What if outward performances of suffering were no longer expected or rewarded?

For one, we wouldn’t have a man-baby like Donald Trump as our national leader. He is the perfect personification of this conscious pain crying out for attention. And we wouldn’t have had the white victimhood that put him into power. But neither would we have any of the other victimhoods that these particular whites were reacting to. The whole culture of victimization would lose its power.

The social dynamic would be something else entirely. It’s hard to imagine what that might be. We’re addicted to the melodrama and we carefully enculturate and indoctrinate each generation to follow our example. To shake us loose from our socially constructed reality would require a challenge to our social order. The extremes of conscious pain aren’t only about our way of behaving. They are inseparable from how we maintain the world we are so desperately attached to.

We need the equivalent of how the father in the cartoon below relates to his son, but on the collective level. Or at least we need this in the United States. What if the rest of the world simply stopped reacting to American leaders and American society? Just smile.

[Cartoon: father and son]

Credit: The basic observation and the cartoon was originally shared by Mateus Barboza on the Facebook group “Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind”.

Oil Industry Knew About Coming Climate Crisis Since 1950s

“Even now, man may be unwittingly changing the world’s climate through the waste products of his civilization. Due to our release through factories and automobiles every year of 6 billion tons of carbon dioxide (CO2), which helps air absorb heat from the sun, our atmosphere seems to be getting warmer.”
~Unchained Goddess, film from Bell Telephone Science Hour (1958)

“[C]urrent scientific opinion overwhelmingly favors attributing atmospheric carbon dioxide increase to fossil fuel combustion.”
~James F. Black, senior scientist in the Products Research Division of Exxon Research and Engineering, from his presentation to Exxon corporate management entitled “The Greenhouse Effect” (July, 1977)

“Data confirm that greenhouse gases are increasing in the atmosphere. Fossil fuels contribute most of the CO2.”
~Duane G. Levine, Exxon scientist, presentation to the Board of Directors of Exxon entitled “Potential Enhanced Greenhouse Effects: Status and Outlook” (February 22, 1989)

“Scientists also agree that atmospheric levels of greenhouse gases (such as C02) are increasing as a result of human activity.”
~Oil industry Global Climate Coalition, internal report entitled “Science and Global Climate Change: What Do We Know? What are the Uncertainties?” (early 1990s)

“The scientific basis for the Greenhouse Effect and the potential impact of human emissions of greenhouse gases such as CO2 on climate is well established and cannot be denied.”
~Oil industry group Global Climate Coalition’s advisory committee of scientific and technical experts reported in the internal document “Predicting Future Climate Change: A Primer”, written in 1995 but redacted and censored version distributed in 1996 (see UCSUSA’s “Former Exxon Employee Says Company Considered Climate Risks as Early as 1981”)

“Perhaps the most interesting effect concerning carbon in trees which we have thus far observed is a marked and fairly steady increase in the 12C/13C ratio with time. Since 1840 the ratio has clearly increased markedly. This effect can be explained on the basis of a changing carbon dioxide concentration in the atmosphere resulting from industrialization and the consequent burning of large quantities of coal and petroleum.”
~Harrison Brown, a biochemist at the California Institute of Technology who, along with colleagues, submitted a research proposal to the American Petroleum Institute entitled “The determination of the variations and causes of variations of the isotopic composition of carbon in nature” (1954)

“This report unquestionably will fan emotions, raise fears, and bring demand for action. The substance of the report is that there is still time to save the world’s peoples from the catastrophic consequence of pollution, but time is running out.
“One of the most important predictions of the report is that carbon dioxide is being added to the Earth’s atmosphere by the burning of coal, oil, and natural gas at such a rate that by the year 2000, the heat balance will be so modified as possibly to cause marked changes in climate beyond local or even national efforts. The report further states, and I quote “. . . the pollution from internal combustion engines is so serious, and is growing so fast, that an alternative nonpolluting means of powering automobiles, buses, and trucks is likely to become a national necessity.””

~Frank Ikard, then-president of the American Petroleum Institute, addressing industry leaders at the annual meeting, “Meeting the challenges of 1966” (November 8, 1965), three days after the U.S. Science Advisory Committee’s official report, “Restoring the Quality of Our Environment”

“At a 3% per annum growth rate of CO2, a 2.5°C rise brings world economic growth to a halt in about 2025.”
~J. J. Nelson, American Petroleum Institute, notes from CO2 and Climate Task Force (AQ-9) meeting, attended by representatives from Exxon, SOHIO, and Texaco (March 18, 1980)

“Exxon position: Emphasize the uncertainty in scientific conclusions regarding the potential enhanced Greenhouse effect.”
~Joseph M. Carlson, Exxon spokesperson writing in “1988 Exxon Memo on the Greenhouse Effect” (August 3, 1988)

“Victory Will Be Achieved When
• “Average citizens understand (recognise) uncertainties in climate science; recognition of uncertainties becomes part of the ‘conventional wisdom’
• “Media ‘understands’ (recognises) uncertainties in climate science
• “Those promoting the Kyoto treaty on the basis of extant science appear to be out of touch with reality.”
~American Petroleum Institute’s 1998 memo on denialist propaganda, see Climate Science vs. Fossil Fuel Fiction; “The API’s task force was made up of the senior scientists and engineers from Amoco, Mobil, Phillips, Texaco, Shell, Sunoco, Gulf Oil and Standard Oil of California, probably the highest paid and sought-after senior scientists and engineers on the planet. They came from companies that, just like Exxon, ran their own research units and did climate modeling to understand the impact of climate change and how it would impact their company’s bottom line.” (Not Just Exxon: The Entire Oil and Gas Industry Knew The Truth About Climate Change 35 Years Ago.)

[C]urrent scientific opinion overwhelmingly favors attributing atmospheric carbon dioxide increase to fossil fuel combustion. […] In the first place, there is general scientific agreement that the most likely manner in which mankind is influencing the global climate is through carbon dioxide release from the burning of fossil fuels. A doubling of carbon dioxide is estimated to be capable of increasing the average global temperature by from 1 [degree] to 3 [degrees Celsius], with a 10 [degrees Celsius] rise predicted at the poles. More research is needed, however, to establish the validity and significance of predictions with respect to the Greenhouse Effect. Present thinking holds that man has a time window of five to 10 years before the need for hard decisions regarding changes in energy strategies might become critical.
~James F. Black, senior scientist in the Products Research Division of Exxon Research and Engineering, from his presentation to Exxon corporate management entitled “The Greenhouse Effect” (July, 1977)

Present climatic models predict that the present trend of fossil fuel use will lead to dramatic climatic changes within the next 75 years. However, it is not obvious whether these changes would be all bad or all good. The major conclusion from this report is that, should it be deemed necessary to maintain atmospheric CO2 levels to prevent significant climatic changes, dramatic changes in patterns of energy use would be required.
~W. L. Ferrall, Exxon scientist writing in an internal Exxon memo, “Controlling Atmospheric CO2” (October 16, 1979)

In addition to the effects of climate on the globe, there are some particularly dramatic questions that might cause serious global problems. For example, if the Antarctic ice sheet which is anchored on land, should melt, then this could cause a rise in the sea level on the order of 5 meters. Such a rise would cause flooding in much of the US East Coast including the state of Florida and Washington D.C.
~Henry Shaw and P. P. McCall, Exxon scientists writing in an internal Exxon report, “Exxon Research and Engineering Company’s Technological Forecast: CO2 Greenhouse Effect” (December 18, 1980)

“but changes of a magnitude well short of catastrophic…” I think that this statement may be too reassuring. Whereas I can agree with the statement that our best guess is that observable effects in the year 2030 are likely to be “well short of catastrophic”, it is distinctly possible that the CPD scenario will later produce effects which will indeed be catastrophic (at least for a substantial fraction of the earth’s population). This is because the global ecosystem in 2030 might still be in a transient, headed for much more significant effects after time lags perhaps of the order of decades. If this indeed turns out to be the case, it is very likely that we will unambiguously recognize the threat by the year 2000 because of advances in climate modeling and the beginning of real experimental confirmation of the CO2 problem.
~Roger Cohen, director of the Theoretical and Mathematical Sciences Laboratory at Exxon Research writing in inter-office correspondence “Catastrophic effects letter” (August 18, 1981)

In addition to the effects of climate on global agriculture, there are some potentially catastrophic events that must be considered. For example, if the Antarctic ice sheet which is anchored on land should melt, then this could cause a rise in sea level on the order of 5 meters. Such a rise would cause flooding on much of the U.S. East Coast, including the state of Florida and Washington, D.C. […]
The greenhouse effect is not likely to cause substantial climatic changes until the average global temperature rises at least 1 degree Centigrade above today’s levels. This could occur in the second to third quarter of the next century. However, there is concern among some scientific groups that once the effects are measurable, they might not be reversible and little could be done to correct the situation in the short term. Therefore, a number of environmental groups are calling for action now to prevent an undesirable future situation from developing.
Mitigation of the “greenhouse effect” would require major reductions in fossil fuel combustion.
~Marvin B. Glaser, Environmental Affairs Manager, Coordination and Planning Division of Exxon Research and Engineering Company, writing in “Greenhouse Effect: A Technical Review” (April 1, 1982)

In summary, the results of our research are in accord with the scientific consensus on the effect of increased atmospheric CO2 on climate. […]
Furthermore our ethical responsibility is to permit the publication of our research in the scientific literature. Indeed, to do otherwise would be a breach of Exxon’s public position and ethical credo on honesty and integrity.
~Roger W. Cohen, Director of Exxon’s Theoretical and Mathematical Sciences Laboratory, memo “Consensus on CO2 Impacts” to A. M. Natkin of Exxon’s Office of Science and Technology (September 2, 1982)

[F]aith in technologies, markets, and correcting feedback mechanisms is less than satisfying for a situation such as the one you are studying at this year’s Ewing Symposium. […]
Clearly, there is vast opportunity for conflict. For example, it is more than a little disconcerting that the few maps showing the likely effects of global warming seem to reveal the two superpowers losing much of the rainfall, with the rest of the world seemingly benefitting.
~Dr. Edward E. David, Jr., president of the Exxon Research and Engineering Company, keynote address to the Maurice Ewing symposium at the Lamont–Doherty Earth Observatory on the Palisades, New York campus of Columbia University, published as “Inventing the Future: Energy and the CO2 ‘Greenhouse Effect’” (October 26, 1982)

Data confirm that greenhouse gases are increasing in the atmosphere. Fossil fuels contribute most of the CO2. […]
Projections suggest significant climate change with a variety of regional impacts. Sea level rise with generally negative consequences. […]
Arguments that we can’t tolerate delay and must act now can lead to irreversible and costly Draconian steps. […]
To be a responsible participant and part of the solution to [potential enhanced greenhouse], Exxon’s position should recognize and support 2 basic societal needs. First […] to improve understanding of the problem […] not just the science […] but the costs and economics tempered by the sociopolitical realities. That’s going to take years (probably decades).
~Duane G. Levine, Exxon scientist, presentation to the Board of Directors of Exxon entitled “Potential Enhanced Greenhouse Effects: Status and Outlook” (February 22, 1989)

* * *

To see more damning quotes from Exxon insiders, see Wikiquote page on ExxonMobil climate change controversy. Here are other resources:

We Made Climate Change Documentaries for Science Classes Way back in 1958 So Why Do Folks Still Pretend Not to Know?
from O Society

Report: Oil Industry Knew About Dangers of Climate Change in 1954
from Democracy Now! (see O Society version)

CO2’s Role in Global Warming Has Been on the Oil Industry’s Radar Since the 1960s
by Neela Banerjee

Exxon Knew about Climate Change 40 years ago
by Shannon Hall (see O Society version)

Industry Ignored Its Scientists on Climate
by Andrew C. Revkin

Exxon: The Road Not Taken
by Neela Banerjee, Lisa Song, & David Hasemyer

The Climate Deception Dossiers
(and full report)
from Union of Concerned Scientists

Exxon Has Spent $30+ Million on Think Tanks?
from Think Tank Watch

How Fossil Fuel Money Made Climate Change Denial the Word of God
by Brendan O’Connor (see O Society version)

A Timeline of Climate Science and Policy
by Brad Johnson

Voice and Perspective

“No man should [refer to himself in the third person] unless he is the King of England — or has a tapeworm.”
~ Mark Twain

“Love him or hate him, Trump is a man who is certain about what he wants and sets out to get it, no holds barred. Women find his power almost as much of a turn-on as his money.”
~ Donald Trump

The self is a confusing matter. As always, the question is who is speaking and who is listening. Clues can come from the language that is used. And the language we use shapes human experience, as studied in linguistic relativity.

Speaking in the first person may be a more recent innovation of human society and psyche. The autobiographical self requires the self-authorization of Jaynesian narrative consciousness. The emergence of the egoic self is the fall into historical time, an issue too complex for discussion here (see Julian Jaynes’ classic work or the diverse Jaynesian scholarship it inspired, or look at some of my previous posts on the topic).

Consider the mirror effect. When hunter-gatherers encounter a mirror for the first time there is what is called  “the tribal terror of self-recognition” (Edmund Carpenter as quoted by Philippe Rochat, from Others in Mind, p. 31). “After a frightening reaction,” Carpenter wrote about the Biamis of Papua New Guinea, “they become paralyzed, covering their mouths and hiding their heads — they stood transfixed looking at their own images, only their stomach muscles betraying great tension.”

Research has shown that heavy use of first person is associated with depression, anxiety, and other distressing emotions. Oddly, this full immersion into subjectivity can lead into depressive depersonalization and depressive realism — the individual sometimes passes through the self and into some other state. And in that other state, I’ve noticed that silence befalls the mind, that is to say the loss of the ‘I’ where the inner dialogue goes silent. One sees the world as if coldly detached, as if outside of it all.

Third person is stranger and with a much more ancient pedigree. In the modern mind, third person is often taken as an effect of narcissistic inflation of the ego, such as seen with celebrities speaking of themselves in terms of their media identities. But in other countries and at other times, it has been an indication of religious humility or a spiritual shifting of perspective (possibly expressing the belief that only God can speak of Himself as ‘I’).

There is also the Batman effect. Children act more capably and with greater perseverance when speaking of themselves in the third person, specifically as a superhero character. As with religious practice, this serves the purpose of distancing from emotion. Yet a sense of self can simultaneously be strengthened when the individual becomes identified with a character. This is similar to celebrities who turn their social identities into something akin to mythological figures. Just as the child can be encouraged to invoke a favorite superhero to stand in for their underdeveloped ego-self, a religious true believer can speak of God or the Holy Spirit working through them. There is immense power in this.

This might point to the Jaynesian bicameral mind. When an Australian Aborigine ritually sings a Songline, he is invoking a god-spirit-personality. That third person of the mythological story shifts the Aboriginal experience of self and reality. The Aborigine has as many selves as he has Songlines, each a self-contained worldview and way of being. This could be a more natural expression of human nature… or at least an easier and less taxing mode of being (Hunger for Connection). Jaynes noted that schizophrenics with their weakened and loosened egoic boundaries have seemingly inexhaustible energy.

He suspected this might explain why archaic humans could do seemingly impossible tasks such as building pyramids, something moderns could only accomplish through the use of our largest and most powerful cranes. Yet the early Egyptians managed it with a small, impoverished, and malnourished population that lacked even the basic infrastructure of roads and bridges. Similarly, this might explain how many tribal people can dance for days on end with little rest and no food. And maybe it also explains how armies can collectively march for days on end in a way no individual could (Music and Dance on the Mind).

Upholding rigid egoic boundaries is tiresome work. This might be why, when individuals reach exhaustion under stress (mourning a death, getting lost in the wilderness, etc), they can experience what John Geiger called the third man factor, the appearance of another self often with its own separate voice. Apparently, when all else fails, this is the state of mind we fall back on and it’s a common experience at that. Furthermore, a negatory experience, as Jaynes describes it, can lead to negatory possession in the re-emergence of a bicameral-like mind with a third person identity becoming a fully expressed personality of its own, a phenomenon that can happen through trauma-induced dissociation and splitting:

“Like schizophrenia, negatory possession usually begins with some kind of an hallucination. 11 It is often a castigating ‘voice’ of a ‘demon’ or other being which is ‘heard’ after a considerable stressful period. But then, unlike schizophrenia, probably because of the strong collective cognitive imperative of a particular group or religion, the voice develops into a secondary system of personality, the subject then losing control and periodically entering into trance states in which consciousness is lost, and the ‘demon’ side of the personality takes over.”

Jaynes noted that those who are abused in childhood are more easily hypnotized. Their egoic boundaries never fully develop, or else large gaps are left in this self-construction, gaps through which other voices can slip in. This relates to what has variously been referred to as the porous self, thin boundary type, fantasy proneness, etc. Compared to those who have never experienced trauma, I bet such people would find it easier to speak in the third person and, when doing so, would show a greater shift in personality and behavior.

As for first-person subjectivity, it has its own peculiarities. I think of the association of addiction and individuality, as explored by Johann Hari and as elaborated in my own writings (Individualism and Isolation; To Put the Rat Back in the Rat Park; & The Agricultural Mind). As the ego is a tiresome project that depletes one’s reserves, maybe it’s the energy drain that causes the depression, irritability, and such. A person with such a guarded sense of self would be resistant to speaking in the third person, finding it hard to escape the trap of ego they’ve so carefully constructed. So many of us have fallen under its sway and can’t imagine anything else (The Spell of Inner Speech). That is probably why it so often requires trauma to break open our psychological defenses.

Besides trauma, many moderns have sought to escape the egoic prison through religious practices. Ancient methods include fasting, meditation, and prayer — these are common across the world. Fasting, by the way, fundamentally alters the functioning of the body and mind through ketosis (also the result of a very low-carb diet), something I’ve speculated may have been a supporting factor for the bicameral mind and related to the much earlier cultural preference for psychedelics over addictive stimulants, an entirely different discussion (“Yes, tea banished the fairies.”; & Autism and the Upper Crust). The simplest method of all is using third-person language until it becomes a new habit of mind, something that might require a long period of practice before it feels natural.

The modern mind has always been under stress. That is because it is the source of that stress. It’s not a stable and sustainable way of being in the world (The Crisis of Identity). Rather, it’s a transitional state and all of modernity has been a centuries-long stage of transformation into something else. There is an impulse hidden within, if we could only trigger the release of the locking mechanism (Lock Without a Key). The language of perspectives, as Scott Preston explores (The Three Gems and The Cross of Reality), tells us something important about our predicament. Words such as ‘I’, ‘you’, etc aren’t merely words. In language, we discover our humanity as we come to know the other.

* * *

Are Very Young Children Stuck in the Perpetual Present?
by Jesse Bering

Interestingly, however, the authors found that the three-year-olds were significantly more likely to refer to themselves in the third person (using their first names rather than pronouns and saying that the sticker is on “his” or “her” head) than were the four-year-olds, who used first-person pronouns (“me” and “my head”) almost exclusively. […]

Povinelli has pointed out the relevancy of these findings to the phenomenon of “infantile amnesia,” which tidily sums up the curious case of most people being unable to recall events from their first three years of life. (I spent my first three years in New Jersey, but for all I know I could have spontaneously appeared as a four-year-old in my parents’ bedroom in Virginia, which is where I have my first memory.) Although the precise neurocognitive mechanisms underlying infantile amnesia are still not very well-understood, escaping such a state of the perpetual present would indeed seemingly require a sense of the temporally enduring, autobiographical self.

5 Reasons Shaq and Other Athletes Refer to Themselves in the Third Person
by Amelia Ahlgren

“Illeism,” or the act of referring to oneself in the third person, is an epidemic in the sports world.

Unfortunately for humanity, the cure is still unknown.

But if we’re forced to listen to these guys drone on about an embodiment of themselves, we might as well guess why they do it.

Here are five reasons some athletes are allergic to using the word “I.”

  1. Lag in Linguistic Development (Immaturity)
  2. Reflection of Egomania
  3. Amp-Up Technique
  4. Pure Intimidation
  5. Goofiness

Rene Thinks, Therefore He Is. You?
by Richard Sandomir

Some strange, grammatical, mind-body affliction is making some well-known folks in sports and politics refer to themselves in the third person. It is as if they have stepped outside their bodies. Is this detachment? Modesty? Schizophrenia? If this loopy verbal quirk were simple egomania, then Louis XIV might have said, “L’etat, c’est Lou.” He did not. And if it were merely a sign of one’s overweening power, then Queen Victoria would not have invented the royal we (“we are not amused”) but rather the royal she. She did not.

Lately, though, some third persons have been talking in a kind of royal he:

* Accepting the New York Jets’ $25 million salary and bonus offer, the quarterback Neil O’Donnell said of his former team, “The Pittsburgh Steelers had plenty of opportunities to sign Neil O’Donnell.”

* As he pushed to be traded from the Los Angeles Kings, Wayne Gretzky said he did not want to wait for the Kings to rebuild “because that doesn’t do a whole lot of good for Wayne Gretzky.”

* After his humiliating loss in the New Hampshire primary, Senator Bob Dole proclaimed: “You’re going to see the real Bob Dole out there from now on.”

These people give you the creepy sense that they’re not talking to you but to themselves. To a first, second or third person’s ear, there’s just something missing. What if, instead of “I am what I am,” we had “Popeye is what Popeye is”?

Vocative self-address, from ancient Greece to Donald Trump
by Ben Zimmer

Earlier this week on Twitter, Donald Trump took credit for a surge in the Consumer Confidence Index, and with characteristic humility, concluded the tweet with “Thanks Donald!”

The “Thanks Donald!” capper led many to muse about whether Trump was referring to himself in the second person, the third person, or perhaps both.

Since English only marks grammatical person on pronouns, it’s not surprising that there is confusion over what is happening with the proper name “Donald” in “Thanks, Donald!” We associate proper names with third-person reference (“Donald Trump is the president-elect”), but a name can also be used as a vocative expression associated with second-person address (“Pleased to meet you, Donald Trump”). For more on how proper names and noun phrases in general get used as vocatives in English, see two conference papers from Arnold Zwicky: “Hey, Whatsyourname!” (CLS 10, 1974) and “Isolated NPs” (Semantics Fest 5, 2004).

The use of one’s own name in third-person reference is called illeism. Arnold Zwicky’s 2007 Language Log post, “Illeism and its relatives” rounds up many examples, including from politicians like Bob Dole, a notorious illeist. But what Trump is doing in tweeting “Thanks, Donald!” isn’t exactly illeism, since the vocative construction implies second-person address rather than third-person reference. We can call this a form of vocative self-address, wherein Trump treats himself as an addressee and uses his own name as a vocative to create something of an imagined interior dialogue.

Give me that Prime Time religion
by Mark Schone

Around the time football players realized end zones were for dancing, they also decided that the pronouns “I” and “me,” which they used an awful lot, had worn out. As if to endorse the view that they were commodities, cartoons or royalty — or just immune to introspection — athletes began to refer to themselves in the third person.

It makes sense, therefore, that when the most marketed personality in the NFL gets religion, he announces it in the weirdly detached grammar of football-speak. “Deion Sanders is covered by the blood of Jesus now,” writes Deion Sanders. “He loves the Lord with all his heart.” And in Deion’s new autobiography, the Lord loves Deion right back, though the salvation he offers third-person types seems different from what mere mortals can expect.

Referring to yourself in the third person
by Tetsuo

It does seem to be a stylistic thing in formal Chinese. I’ve come across a couple of articles about artists by the artist in question where they’ve referred to themselves in the third person throughout. And quite a number of politicians do the same, I’ve been told.

Illeism
from Wikipedia

Illeism in everyday speech can have a variety of intentions depending on context. One common usage is to impart humility, a common practice in feudal societies and other societies where honorifics are important to observe (“Your servant awaits your orders”), as well as in master–slave relationships (“This slave needs to be punished”). Recruits in the military, mostly United States Marine Corps recruits, are also often made to refer to themselves in the third person, such as “the recruit,” in order to reduce the sense of individuality and enforce the idea of the group being more important than the self.[citation needed] The use of illeism in this context imparts a sense of lack of self, implying a diminished importance of the speaker in relation to the addressee or to a larger whole.

Conversely, in different contexts, illeism can be used to reinforce self-promotion, as used to sometimes comic effect by Bob Dole throughout his political career.[2] This was particularly made notable during the United States presidential election, 1996 and lampooned broadly in popular media for years afterwards.

Deepanjana Pal of Firstpost noted that speaking in the third person “is a classic technique used by generations of Bollywood scriptwriters to establish a character’s aristocracy, power and gravitas.”[3] Conversely, third person self referral can be associated with self-irony and not taking oneself too seriously (since the excessive use of pronoun “I” is often seen as a sign of narcissism and egocentrism[4]), as well as with eccentricity in general.

In certain Eastern religions, like Hinduism or Buddhism, this is sometimes seen as a sign of enlightenment, since by doing so, an individual detaches his eternal self (atman) from the body related one (maya). Known illeists of that sort include Swami Ramdas,[5] Ma Yoga Laxmi,[6] Anandamayi Ma,[7] and Mata Amritanandamayi.[8] Jnana yoga actually encourages its practitioners to refer to themselves in the third person.[9]

Young children in Japan commonly refer to themselves by their own name (a habit probably picked up from their elders, who would normally refer to them by name). This is due to the normal Japanese way of speaking, where referring to another in the third person is considered more polite than using the Japanese words for “you”, like Omae. More explanation is given in Japanese pronouns, though as children grow older they normally switch over to using first-person references. Japanese idols also may refer to themselves in the third person so as to give off the feeling of childlike cuteness.

Four Paths to the Goal
from Sheber Hinduism

Jnana yoga is a concise practice made for intellectual people. It is the quickest path to the top but it is the steepest. The key to jnana yoga is to contemplate the inner self and find who our self is. Our self is Atman and by finding this we have found Brahman. Thinking in third person helps move us along the path because it helps us consider who we are from an objective point of view. As stated in the Upanishads, “In truth, who knows Brahman becomes Brahman.” (Novak 17).

Non-Reactivity: The Supreme Practice of Everyday Life
by Martin Schmidt

Respond with non-reactive awareness: consider yourself a third-person observer who watches your own emotional responses arise and then dissipate. Don’t judge, don’t try to change yourself; just observe! In time this practice will begin to cultivate a third-person perspective inside yourself that sometimes is called the Inner Witness.[4]

Frequent ‘I-Talk’ may signal proneness to emotional distress
from Science Daily

Researchers at the University of Arizona found in a 2015 study that frequent use of first-person singular pronouns — I, me and my — is not, in fact, an indicator of narcissism.

Instead, this so-called “I-talk” may signal that someone is prone to emotional distress, according to a new, follow-up UA study forthcoming in the Journal of Personality and Social Psychology.

Research at other institutions has suggested that I-talk, though not an indicator of narcissism, may be a marker for depression. While the new study confirms that link, UA researchers found an even greater connection between high levels of I-talk and a psychological disposition of negative emotionality in general.

Negative emotionality refers to a tendency to easily become upset or emotionally distressed, whether that means experiencing depression, anxiety, worry, tension, anger or other negative emotions, said Allison Tackman, a research scientist in the UA Department of Psychology and lead author of the new study.

Tackman and her co-authors found that when people talk a lot about themselves, it could point to depression, but it could just as easily indicate that they are prone to anxiety or any number of other negative emotions. Therefore, I-talk shouldn’t be considered a marker for depression alone.

Talking to yourself in the third person can help you control emotions
from Science Daily

The simple act of silently talking to yourself in the third person during stressful times may help you control emotions without any additional mental effort than what you would use for first-person self-talk — the way people normally talk to themselves.

A first-of-its-kind study led by psychology researchers at Michigan State University and the University of Michigan indicates that such third-person self-talk may constitute a relatively effortless form of self-control. The findings are published online in Scientific Reports, a Nature journal.

Say a man named John is upset about recently being dumped. By simply reflecting on his feelings in the third person (“Why is John upset?”), John is less emotionally reactive than when he addresses himself in the first person (“Why am I upset?”).

“Essentially, we think referring to yourself in the third person leads people to think about themselves more similar to how they think about others, and you can see evidence for this in the brain,” said Jason Moser, MSU associate professor of psychology. “That helps people gain a tiny bit of psychological distance from their experiences, which can often be useful for regulating emotions.”

Pretending to be Batman helps kids stay on task
by Christian Jarrett

Some of the children were assigned to a “self-immersed condition”, akin to a control group, and before and during the task were told to reflect on how they were doing, asking themselves “Am I working hard?”. Other children were asked to reflect from a third-person perspective, asking themselves “Is James [insert child’s actual name] working hard?” Finally, the rest of the kids were in the Batman condition, in which they were asked to imagine they were either Batman, Bob The Builder, Rapunzel or Dora the Explorer and to ask themselves “Is Batman [or whichever character they were] working hard?”. Children in this last condition were given a relevant prop to help, such as Batman’s cape. Once every minute through the task, a recorded voice asked the question appropriate for the condition each child was in [Are you working hard? or Is James working hard? or Is Batman working hard?].

The six-year-olds spent more time on task than the four-year-olds (half the time versus about a quarter of the time). No surprise there. But across age groups, and apparently unrelated to their personal scores on mental control, memory, or empathy, those in the Batman condition spent the most time on task (about 55 per cent for the six-year-olds; about 32 per cent for the four-year-olds). The children in the self-immersed condition spent the least time on task (about 35 per cent of the time for the six-year-olds; just over 20 per cent for the four-year-olds) and those in the third-person condition performed in between.

Dressing up as a superhero might actually give your kid grit
by Jenny Anderson

In other words, the more the child could distance him or herself from the temptation, the better the focus. “Children who were asked to reflect on the task as if they were another person were less likely to indulge in immediate gratification and more likely to work toward a relatively long-term goal,” the authors wrote in the study called “The “Batman Effect”: Improving Perseverance in Young Children,” published in Child Development.

Curmudgucation: Don’t Be Batman
by Peter Greene

This underlines the problem we see with more and more of what passes for early childhood education these days– we’re not worried about whether the school is ready to appropriately handle the students, but instead are busy trying to beat three-, four- and five-year-olds into developmentally inappropriate states to get them “ready” for their early years of education. It is precisely and absolutely backwards. I can’t say this hard enough– if early childhood programs are requiring “increased demands” on the self-regulatory skills of kids, it is the programs that are wrong, not the kids. Full stop.

What this study offers is a solution that is more damning than the “problem” that it addresses. If a four-year-old child has to disassociate, to pretend that she is someone else, in order to cope with the demands of your program, your program needs to stop, today.

Because you know where else you hear this kind of behavior described? In accounts of victims of intense, repeated trauma. In victims of torture who talk about dealing by just pretending they aren’t even there, that someone else is occupying their body while they float away from the horror.

That should not be a description of How To Cope With Preschool.

Nor should the primary lesson of early childhood education be, “You can’t really cut it as yourself. You’ll need to be somebody else to get ahead in life.” I cannot even begin to wrap my head around what a destructive message that is for a small child.

Can You Live With the Voices in Your Head?
by Daniel B. Smith

And though psychiatrists acknowledge that almost anyone is capable of hallucinating a voice under certain circumstances, they maintain that the hallucinations that occur with psychoses are qualitatively different. “One shouldn’t place too much emphasis on the content of hallucinations,” says Jeffrey Lieberman, chairman of the psychiatry department at Columbia University. “When establishing a correct diagnosis, it’s important to focus on the signs or symptoms” of a particular disorder. That is, it’s crucial to determine how the voices manifest themselves. Voices that speak in the third person, echo a patient’s thoughts or provide a running commentary on his actions are considered classically indicative of schizophrenia.

Auditory hallucinations: Psychotic symptom or dissociative experience?
by Andrew Moskowitz & Dirk Corstens

While auditory hallucinations are considered a core psychotic symptom, central to the diagnosis of schizophrenia, it has long been recognized that persons who are not psychotic may also hear voices. There is an entrenched clinical belief that distinctions can be made between these groups, typically on the basis of the perceived location or the ‘third-person’ perspective of the voices. While it is generally believed that such characteristics of voices have significant clinical implications, and are important in the differential diagnosis between dissociative and psychotic disorders, there is no research evidence in support of this. Voices heard by persons diagnosed schizophrenic appear to be indistinguishable, on the basis of their experienced characteristics, from voices heard by persons with dissociative disorders or with no mental disorder at all. On this and other bases outlined below, we argue that hearing voices should be considered a dissociative experience, which under some conditions may have pathological consequences. In other words, we believe that, while voices may occur in the context of a psychotic disorder, they should not be considered a psychotic symptom.

Hallucinations and Sensory Overrides
by T. M. Luhrmann

The psychiatric and psychological literature has reached no settled consensus about why hallucinations occur and whether all perceptual “mistakes” arise from the same processes (for a general review, see Aleman & Laroi 2008). For example, many researchers have found that when people hear hallucinated voices, some of these people have actually been subvocalizing: They have been using muscles used in speech, but below the level of their awareness (Gould 1949, 1950). Other researchers have not found this inner speech effect; moreover, this hypothesis does not explain many of the odd features of the hallucinations associated with psychosis, such as hearing voices that speak in the second or third person (Hoffman 1986). But many scientists now seem to agree that hallucinations are the result of judgments associated with what psychologists call “reality monitoring” (Bentall 2003). This is not the process Freud described with the term reality testing, which for the most part he treated as a cognitive higher-level decision: the ability to distinguish between fantasy and the world as it is (e.g., he loves me versus he’s just not that into me). Reality monitoring refers to the much more basic decision about whether the source of an experience is internal to the mind or external in the world.

Originally, psychologists used the term to refer to judgments about memories: Did I really have that conversation with my boyfriend back in college, or did I just think I did? The work that gave the process its name asked what it was about memories that led someone to infer that these memories were records of something that had taken place in the world or in the mind (Johnson & Raye 1981). Johnson & Raye’s elegant experiments suggested that these memories differ in predictable ways and that people use those differences to judge what has actually taken place. Memories of an external event typically have more sensory details and more details in general. By contrast, memories of thoughts are more likely to include the memory of cognitive effort, such as composing sentences in one’s mind.

Self-Monitoring and Auditory Verbal Hallucinations in Schizophrenia
by Wayne Wu

It’s worth pointing out that a significant portion of the non-clinical population experiences auditory hallucinations. Such hallucinations need not be negative in content, though as I understand it, the preponderance of AVH in schizophrenia is or becomes negative. […]

I’ve certainly experienced the “third man”, in a moment of vivid stress when I was younger. At the time, I thought it was God speaking to me in an encouraging and authoritative way! (I was raised in a very strict religious household.) But I wouldn’t be surprised if many of us have had similar experiences. These days, I have more often the cell-phone buzzing in my pocket illusion.

There are, I suspect, many reasons why the auditory system might be activated to give rise to auditory experiences that philosophers would define as hallucinations: recalling things in an auditory way, thinking in inner speech where this might be auditory in structure, etc. These can have positive influences on our ability to adapt to situations.

What continues to puzzle me about AVH in schizophrenia are some of its fairly consistent phenomenal properties: second or third-person voice, typical internal localization (though plenty of external localization) and negative content.

The Digital God, How Technology Will Reshape Spirituality
by William Indick
pp. 74-75

Doubled Consciousness

Who is this third who always walks beside you?
When I count, there are only you and I together.
But when I look ahead up the white road
There is always another one walking beside you
Gliding wrapt in a brown mantle, hooded.
—T.S. Eliot, The Waste Land

The feeling of “doubled consciousness” 81 has been reported by numerous epileptics. It is the feeling of being outside of one’s self. The feeling that you are observing yourself as if you were outside of your own body, like an outsider looking in on yourself. Consciousness is “doubled” because you are aware of the existence of both selves simultaneously—the observer and the observed. It is as if the two halves of the brain temporarily cease to function as a single mechanism; but rather, each half identifies itself separately as its own self. 82 The doubling effect that occurs as a result of some temporal lobe epileptic seizures may lead to drastic personality changes. In particular, epileptics following seizures often become much more spiritual, artistic, poetic, and musical. 83 Art and music, of course, are processed primarily in the right hemisphere, as is poetry and the more lyrical, metaphorical aspects of language. In any artistic endeavor, one must engage in “doubled consciousness,” creating the art with one “I,” while simultaneously observing the art and the artist with a critically objective “other-I.” In The Great Gatsby, Fitzgerald expressed the feeling of “doubled consciousness” in a scene in which Nick Caraway, in the throes of profound drunkenness, looks out of a city window and ponders:

Yet high over the city our line of yellow windows must have contributed their share of human secrecy to the casual watcher in the darkening streets, and I was him too, looking up and wondering. I was within and without, simultaneously enchanted and repelled by the inexhaustible variety of life.

Doubled-consciousness, the sense of being both “within and without” of one’s self, is a moment of disconnection and disassociation between the two hemispheres of the brain, a moment when left looks independently at right and right looks independently at left, each recognizing each other as an uncanny mirror reflection of himself, but at the same time not recognizing the other as “I.”

The sense of doubled consciousness also arises quite frequently in situations of extreme physical and psychological duress. 84 In his book, The Third Man Factor John Geiger delineates the conditions associated with the perception of the “sensed presence”: darkness, monotony, barrenness, isolation, cold, hunger, thirst, injury, fatigue, and fear. 85 Shermer added sleep deprivation to this list, noting that Charles Lindbergh, on his famous cross–Atlantic flight, recorded the perception of “ghostly presences” in the cockpit, that “spoke with authority and clearness … giving me messages of importance unattainable in ordinary life.” 86 Sacks noted that doubled consciousness is not necessarily an alien or abnormal sensation, we all feel it, especially when we are alone, in the dark, in a scary place. 87 We all can recall a memory from childhood when we could palpably feel the presence of the monster hiding in the closet, or that indefinable thing in the dark space beneath our bed. The experience of the “sensed other” is common in schizophrenia, can be induced by certain drugs, is a central aspect of the “near death experience,” and is also associated with certain neurological disorders. 88

To speak of oneself in the third person; to express the wish to “find myself,” is to presuppose a plurality within one’s own mind. 89 There is consciousness, and then there is something else … an Other … who is nonetheless a part of our own mind, though separate from our moment-to-moment consciousness. When I make a statement such as: “I’m disappointed with myself because I let myself gain weight,” it is quite clear that there are at least two wills at work within one mind—one will that dictates weight loss and is disappointed—and another will that defies the former and allows the body to binge or laze. One cannot point at one will and say: “This is the real me and the other is not me.” They’re both me. Within each “I” there exists a distinct Other that is also “I.” In the mind of the believer—this double-I, this other-I, this sentient other, this sensed presence who is me but also, somehow, not me—how could this be anyone other than an angel, a spirit, my own soul, or God? Sacks recalls an incident in which he broke his leg while mountain climbing alone and had to descend the mountain despite his injury and the immense pain it was causing him. Sacks heard “an inner voice” that was “wholly unlike” his normal “inner speech”—a “strong, clear, commanding voice” that told him exactly what he had to do to survive the predicament, and how to do it. “This good voice, this Life voice, braced and resolved me.” Sacks relates the story of Joe Simpson, author of Touching the Void , who had a similar experience during a climbing mishap in the Andes. For days, Simpson trudged along with a distinctly dual sense of self. There was a distracted self that jumped from one random thought to the next, and then a clearly separate focused self that spoke to him in a commanding voice, giving specific instructions and making logical deductions. 90 Sacks also reports the experience of a distraught friend who, at the moment she was about to commit suicide, heard a “voice” tell her: “No, you don’t want to do that…” The male voice, which seemed to come from outside of her, convinced her not to throw her life away. She speaks of it as her “guardian angel.” Sacks suggested that this other voice may always be there, but it is usually inhibited. When it is heard, it’s usually as an inner voice, rather than an external one. 91 Sacks also reports that the “persistent feeling” of a “presence” or a “companion” that is not actually there is a common hallucination, especially among people suffering from Parkinson’s disease. Sacks is unsure if this is a side-effect of L-DOPA, the drug used to treat the disease, or if the hallucinations are symptoms of the neurological disease itself. He also noted that some patients were able to control the hallucinations to varying degrees. One elderly patient hallucinated a handsome and debonair gentleman caller who provided “love, attention, and invisible presents … faithfully each evening.” 92

Part III: Off to the Asylum – Rational Anti-psychiatry
by Veronika Nasamoto

The ancients were also clued up in that they saw the origins of mental instability as spiritual, but they perceived it differently. In The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes presents a startling thesis, based on an analysis of the language of the Iliad, that the ancient Greeks were not conscious in the same way that modern humans are. Because the ancient Greeks had no sense of “I” (also, Victorian England would sometimes speak in the third person rather than say I, because the eternal God – YHWH – was known as the great “I AM”) with which to locate their mental processes, their inner thoughts were perceived as coming from the gods, which is why the characters in the Iliad find themselves in frequent communication with supernatural entities.

The Shadows of Consciousness in the Breakdown of the Bicameral Mirror
by Chris Savia

Jaynes’s description of consciousness, in relation to memory, proposes that what people believe to be rote recollection are concepts, the platonic ideals of their office, the view out of the window, et al. These contribute to one’s mental sense of place and position in the world. These memories enable one to see themselves in the third person.

Language, consciousness and the bicameral mind
by Andreas van Cranenburgh

Consciousness not a copy of experience

Since Locke’s tabula rasa it has been thought that consciousness records our experiences, to save them for possible later reflection. However, this is clearly false: most details of our experience are immediately lost when not given special notice. Recalling an arbitrary past event requires a reconstruction of memories. Interestingly, memories are often from a third-person perspective, which proves that they could not be a mere copy of experience.

The Origin of Consciousness in the Breakdown of the Bicameral Mind
by Julian Jaynes
pp. 347-350

Negatory Possession

There is another side to this vigorously strange vestige of the bicameral mind. And it is different from other topics in this chapter. For it is not a response to a ritual induction for the purpose of retrieving the bicameral mind. It is an illness in response to stress. In effect, emotional stress takes the place of the induction in the general bicameral paradigm just as in antiquity. And when it does, the authorization is of a different kind.

The difference presents a fascinating problem. In the New Testament, where we first hear of such spontaneous possession, it is called in Greek daemonizomai, or demonization. 10 And from that time to the present, instances of the phenomenon most often have that negatory quality connoted by the term. The why of the negatory quality is at present unclear. In an earlier chapter (II. 4) I have tried to suggest the origin of ‘evil’ in the volitional emptiness of the silent bicameral voices. And that this took place in Mesopotamia and particularly in Babylon, to which the Jews were exiled in the sixth century B.C., might account for the prevalence of this quality in the world of Jesus at the start of this syndrome.

But whatever the reasons, they must in the individual be similar to the reasons behind the predominantly negatory quality of schizophrenic hallucinations. And indeed the relationship of this type of possession to schizophrenia seems obvious.

Like schizophrenia, negatory possession usually begins with some kind of an hallucination. 11 It is often a castigating ‘voice’ of a ‘demon’ or other being which is ‘heard’ after a considerable stressful period. But then, unlike schizophrenia, probably because of the strong collective cognitive imperative of a particular group or religion, the voice develops into a secondary system of personality, the subject then losing control and periodically entering into trance states in which consciousness is lost, and the ‘demon’ side of the personality takes over.

Always the patients are uneducated, usually illiterate, and all believe heartily in spirits or demons or similar beings and live in a society which does. The attacks usually last from several minutes to an hour or two, the patient being relatively normal between attacks and recalling little of them. Contrary to horror fiction stories, negatory possession is chiefly a linguistic phenomenon, not one of actual conduct. In all the cases I have studied, it is rare to find one of criminal behavior against other persons. The stricken individual does not run off and behave like a demon; he just talks like one.

Such episodes are usually accompanied by twistings and writhings as in induced possession. The voice is distorted, often guttural, full of cries, groans, and vulgarity, and usually railing against the institutionalized gods of the period. Almost always, there is a loss of consciousness as the person seems the opposite of his or her usual self. ‘He’ may name himself a god, demon, spirit, ghost, or animal (in the Orient it is often ‘the fox’), may demand a shrine or to be worshiped, throwing the patient into convulsions if these are withheld. ‘He’ commonly describes his natural self in the third person as a despised stranger, even as Yahweh sometimes despised his prophets or the Muses sneered at their poets. 12 And ‘he’ often seems far more intelligent and alert than the patient in his normal state, even as Yahweh and the Muses were more intelligent and alert than prophet or poet.

As in schizophrenia, the patient may act out the suggestions of others, and, even more curiously, may be interested in contracts or treaties with observers, such as a promise that ‘he’ will leave the patient if such and such is done, bargains which are carried out as faithfully by the ‘demon’ as the sometimes similar covenants of Yahweh in the Old Testament. Somehow related to this suggestibility and contract interest is the fact that the cure for spontaneous stress-produced possession, exorcism, has never varied from New Testament days to the present. It is simply by the command of an authoritative person often following an induction ritual, speaking in the name of a more powerful god. The exorcist can be said to fit into the authorization element of the general bicameral paradigm, replacing the ‘demon.’ The cognitive imperatives of the belief system that determined the form of the illness in the first place determine the form of its cure.

The phenomenon does not depend on age, but sex differences, depending on the historical epoch, are pronounced, demonstrating its cultural expectancy basis. Of those possessed by ‘demons’ whom Jesus or his disciples cured in the New Testament, the overwhelming majority were men. In the Middle Ages and thereafter, however, the overwhelming majority were women. Also evidence for its basis in a collective cognitive imperative are its occasional epidemics, as in convents of nuns during the Middle Ages, in Salem, Massachusetts, in the eighteenth century, or those reported in the nineteenth century at Savoy in the Alps. And occasionally today.

The Emergence of Reflexivity in Greek Language and Thought
by Edward T. Jeremiah
p. 3

Modernity’s tendency to understand the human being in terms of abstract grammatical relations, namely the subject and self, and also the ‘I’—and, conversely, the relative indifference of Greece to such categories—creates some of the most important semantic contrasts between our and Greek notions of the self.

p. 52

Reflexivisations such as the last, as well as those like ‘Know yourself’ which reconstitute the nature of the person, are entirely absent in Homer. So too are uses of the reflexive which reference some psychological aspect of the subject. Indeed the reference of reflexives directly governed by verbs in Homer is overwhelmingly bodily: ‘adorning oneself’, ‘covering oneself’, ‘defending oneself’, ‘debasing oneself physically’, ‘arranging themselves in a certain formation’, ‘stirring oneself’, and all the prepositional phrases. The usual reference for indirect arguments is the self interested in its own advantage. We do not find in Homer any of the psychological models of self-relation discussed by Lakoff.

Use of the Third Person for Self-Reference by Jesus and Yahweh
by Rod Elledge
pp. 11-13

Viswanathan addresses illeism in Shakespeare’s works, designating it as “illeism with a difference.” He writes: “It [‘illeism with a difference’] is one by which the dramatist makes a character, speaking in the first person, refer to himself in the third person, not simply as a ‘he’, which would be illeism proper, a traditional grammatical mode, but by name.” He adds that the device is extensively used in Julius Caesar and Troilus and Cressida, and occasionally in Hamlet and Othello. Viswanathan notes the device, prior to Shakespeare, was used in the medieval theater simply to allow a character to announce himself and clarify his identity. Yet, he argues that, in the hands of Shakespeare, the device becomes “a masterstroke of dramatic artistry.” He notes four uses of this “illeism with a difference.” First, it highlights the character using it and his inner self. He notes that it provides a way of “making the character momentarily detach himself from himself, achieve a measure of dramatic (and philosophical) depersonalization, and create a kind of aesthetic distance from which he can contemplate himself.” Second, it reflects the tension between the character’s public and private selves. Third, the device “raises the question of the way in which the character is seen to behave and to order his very modes of feeling and thought in accordance with a rightly or wrongly conceived image or idea of himself.” Lastly, he notes the device tends to point toward the larger philosophical problem of man’s search for identity. Speaking of the use of illeism within Julius Caesar, Spevak writes that “in addition to the psychological and other implications, the overall effect is a certain stateliness, a classical look, a consciousness on the part of the actors that they are acting in a not so everyday context.”

Modern linguistic scholarship

Otto Jespersen notes various examples of the third-person self-reference including those seeking to reflect deference or politeness, adults talking to children as “papa” or “Aunt Mary” to be more easily understood, as well as the case of some writers who write “the author” or “this present writer” in order to avoid the mention of “I.” He notes Caesar as a famous example of “self-effacement [used to] produce the impression of absolute objectivity.” Yet, Head writes, in response to Jespersen, that since the use of the third person for self-reference

is typical of important personages, whether in autobiography (e.g. Caesar in De Bello Gallico and Captain John Smith in his memoirs) or in literature (Marlowe’s Faustus, Shakespeare’s Julius Caesar, Cordelia and Richard II, Lessing’s Saladin, etc.), it is actually an indication of special status and hence implies greater social distance than does the more commonly used first person singular.

Land and Kitzinger argue that “very often—but not always . . . the use of a third-person reference form in self-reference is designed to display that the speaker is talking about themselves as if from the perspective of another—either the addressee(s) . . . or a non-present other.” The linguist Laurence Horn, noting the use of illeism by various athlete and political celebrities, notes that “the celeb is viewing himself . . . from the outside.” Addressing what he refers to as “the dissociative third person,” he notes that an athlete or politician “may establish distance between himself (virtually never herself) and his public persona, but only by the use of his name, never a 3rd person pronoun.”

pp. 15-17

Illeism in Classical Antiquity

As referenced in the history of research, Kostenberger writes: “It may strike the modern reader as curious that Jesus should call himself ‘Jesus Christ’; however, self-reference in the third person was common in antiquity.” While Kostenberger’s statement is a brief comment in the context of a commentary and not a monographic study on the issue, his comment raises a critical question. Does a survey of the evidence reveal that Jesus’s use of illeism in this verse (and by implication elsewhere in the Gospels) reflects simply another example of a common mannerism in antiquity? […]

Early Evidence

From the fifth century BC to the time of Jesus the following historians refer to themselves in the third person in their historical accounts: Hecataeus (though the evidence is fragmentary), Herodotus, Thucydides, Xenophon, Polybius, Caesar, and Josephus. For the scope of this study this point in history (from fifth century BC to first century AD) is the primary focus. Yet, this feature was adopted from the earlier tendency in literature in which an author states his name as a seal or sphragis for their work. Herkommer notes that the “self-introduction” (Selbstvorstellung) appears in the Homeric Hymn to Apollo, in choral poetry (Chorlyrik) such as that by the Greek poet Alkman (seventh century BC), and in the poetic maxims (Spruchdichtung) such as those of the Greek poet Phokylides (seventh century BC). Yet, from the fifth century onward, this feature appears primarily in the works of Greek historians. In addition to early evidence (prior to the fifth century) of an author’s self-reference in his historiographic work, the survey of evidence also noted an early example of illeism within Homer’s Iliad. Because this ancient Greek epic poem reflects an early use of the third-person self-reference in a narrative context and offers a point of comparison to its use in later Greek historiography, this early example of the use of illeism is briefly addressed.

Marincola notes that the style of historical narrative that first appears in Herodotus is a legacy from Homer (ca. 850 BC). He notes that “as the writer of the most ‘authoritative’ third-person narrative, [Homer] provided a model not only for later poets, epic and otherwise, but also to the prose historians who, by way of Herodotus, saw him as their model and rival.” While Homer provided the authoritative example of third-person narrative, he also, centuries before the development of Greek historiography, used illeism in his epic poem the Iliad. Illeism occurs in the direct speech of Zeus (the king of the gods), Achilles (the “god-like” son of a king and goddess), and Hector (the mighty Trojan prince).

Zeus, addressing the assembled gods on Mt. Olympus, refers to himself as “Zeus, the supreme Master” […] and states how superior he is above all gods and men. Hector’s use of illeism occurs as he addresses the Greeks and challenges the best of them to fight against “good Hector” […]. Muellner notes in these instances of third person for self-reference (Zeus twice and Hector once) that “the personage at the top and center of the social hierarchy is asserting his superiority over the group . . . . In other words, these are self-aggrandizing third-person references, like those in the war memoirs of Xenophon, Julius Caesar, and Napoleon.” He adds that “the primary goal of this kind of third-person self-reference is to assert the status accruing to exceptional excellence.” Achilles refers to himself in the context of an oath (examples of which are reflected in the OT), yet his self-reference serves to emphasize his status in relation to the Greeks, and especially to King Agamemnon. Addressing Agamemnon, the general of the Greek armies, Achilles swears by his scepter and states that the day will come when the Greeks will long for Achilles […].

Homer’s choice to use illeism within the direct speech of these three characters contributes to an understanding of its potential rhetorical implications. In each case the character’s use of illeism serves to set him apart by highlighting his innate authority and superior status. Also, all three characters reflect divine and/or royal aspects (Zeus, king of gods; Achilles, son of a king and a goddess, and referred to as “god-like”; and Hector, son of a king). The examples of illeism in the Iliad, among the earliest evidence of illeism, reflect a usage that shares similarities with the illeism as used by Jesus and Yahweh. The biblical and Homeric examples each reflect illeism in direct speech within narrative discourse, and the self-reference serves to emphasize authority or status as well as a possible associated royal and/or divine aspect(s). Yet, the examples stand in contrast to the use of illeism by later historians. As will be addressed next, these ancient historians used the third-person self-reference as a literary device to give their historical accounts a sense of objectivity.

Women and Gender in Medieval Europe: An Encyclopedia
edited by Margaret C. Schaus
“Mystics’ Writings”

by Patricia Dailey
p. 600

The question of scribal mediation is further complicated in that the mystic’s text is, in essence, a message transmitted through her, which must be transmitted to her surrounding community. Thus, the denuding of voice of the text, of a first-person narrative, goes hand in hand with the status of the mystic as “transcriber” of a divine message that does not bear the mystic’s signature, but rather God’s. In addition, the tendency to write in the third person in visionary narratives may draw from a longstanding tradition that stems from Paul in 2 Cor. of communicating visions in the third person, but at the same time, it presents a means for women to negotiate with conflicts with regard to authority or immediacy of the divine through a veiled distance or humility that conformed to a narrative tradition.

Romantic Confession: Jean-Jacques Rousseau and Thomas de Quincey
by Martina Domines Veliki

It is no accident that the term ‘autobiography’, entailing a special amalgam of ‘autos’, ‘bios’ and ‘graphe’ (oneself, life and writing), was first used in 1797 in the Monthly Review by a well-known essayist and polyglot, translator of German romantic literature, William Taylor of Norwich. However, the term ‘autobiographer’ was first extensively used by an English Romantic poet, one of the Lake Poets, Robert Southey1. This does not mean that no autobiographies were written before the beginning of the nineteenth century. The classical writers wrote about famous figures of public life, the Middle Ages produced educated writers who wrote about saints’ lives and from the Renaissance onward people wrote about their own lives. However, autobiography, as an auto-reflexive telling of one’s own life’s story, presupposes a special understanding of one’s ‘self’ and therefore, biographies and legends of Antiquity and the Middle Ages are fundamentally different from ‘modern’ autobiography, which postulates a truly autonomous subject, fully conscious of his/her own uniqueness2. Life-writing, whether in the form of biography or autobiography, occupied the central place in Romanticism. Autobiography would also often appear in disguise. One would immediately think of S. T. Coleridge’s Biographia Literaria (1817), which combines literary criticism and sketches from the author’s life and opinions, and Mary Wollstonecraft’s Short Residence in Sweden, Norway and Denmark (1796), which combines travel narrative and the author’s own difficulties of travelling as a woman.

When one thinks about the first ‘modern’ secular autobiography, it is impossible to avoid the name of Jean-Jacques Rousseau. He calls his first autobiography The Confessions, thus aligning himself in the long Western tradition of confessional writings inaugurated by St. Augustine (354 – 430 AD). Though St. Augustine confesses to the almighty God and does not really perceive his own life as significant, there is another dimension of Augustine’s legacy which is important for his Romantic inheritors: the dichotomies inherent in the Christian way of perceiving the world, namely the opposition of spirit/matter, higher/lower, eternal/temporal, immutable/changing become ultimately emanations of a single binary opposition, that of inner and outer (Taylor 1989: 128). The substance of St. Augustine’s piety is summed up by a single sentence from his Confessions:

“And how shall I call upon my God – my God and my Lord? For when I call on Him, I ask Him to come into me. And what place is there in me into which my God can come? (…) I could not therefore exist, could not exist at all, O my God, unless Thou wert in me.” (Confessions, book I, chapter 2, p.2, emphasis mine)

The step towards inwardness was for Augustine the step towards Truth, i.e. God, and as Charles Taylor explains, this turn inward was a decisive one in the Western tradition of thought. The ‘I’ or the first person standpoint becomes unavoidable thereafter. It took a long way from Augustine’s seeing these sources to reside in God to Rousseau’s pivotal turn to inwardness without recourse to God. Of course, one must not lose sight of the developments in continental philosophy pre-dating Rousseau’s work. René Descartes was the first to embrace Augustinian thinking at the beginning of the modern era, and he was responsible for the articulation of the disengaged subject: the subject asserting that the real locus of all experience is in his own mind3. With the empiricist philosophy of John Locke and David Hume, who claimed that we reach the knowledge of the surrounding world through disengagement and procedural reason, there is further development towards an idea of the autonomous subject. Although their teachings seemed to leave no place for subjectivity as we know it today, still they were a vital step in redirecting the human gaze from the heavens to man’s own existence.

2 Furthermore, the Middle Ages would not speak about such concepts as ‘the author’ and one’s ‘individuality’ and it is futile to seek in such texts the appertaining subject. When a Croatian fourteenth-century author, Hanibal Lucić, writes about his life in a short text called De regno Croatiae et Dalmatiae? Paulus de Paulo, the last words indicate that the author perceives his life as being insignificant and invaluable. The nuns of the fourteenth century writing their own confessions had to use the third person pronoun to refer to themselves and the ‘I’ was reserved for God only. (See Zlatar 2000)

Return to Childhood by Leila Abouzeid
by Geoff Wisner

In addition, autobiography has the pejorative connotation in Arabic of madihu nafsihi wa muzakkiha (he or she who praises and recommends him- or herself). This phrase denotes all sorts of defects in a person or a writer: selfishness versus altruism, individualism versus the spirit of the group, arrogance versus modesty. That is why Arabs usually refer to themselves in formal speech in the third person plural, to avoid the use of the embarrassing ‘I.’ In autobiography, of course, one uses ‘I’ frequently.

Becoming Abraham Lincoln
by Richard Kigel
Preface, XI

A note about the quotations and sources: most of the statements were collected by William Herndon, Lincoln’s law partner and friend, in the years following Lincoln’s death. The responses came in original handwritten letters and transcribed interviews. Because of the low literacy levels of many of his subjects, sometimes these statements are difficult to understand. Often they used no punctuation and wrote in fragments of thoughts. Misspellings were common and names and places were often confused. “Lincoln” was sometimes spelled “Linkhorn” or “Linkern.” Lincoln’s grandmother “Lucy” was sometimes “Lucey.” Some respondents referred to themselves in third person. Lincoln himself did in his biographical writings.

p. 35

“From this place,” wrote Abe, referring to himself in the third person, “he removed to what is now Spencer County, Indiana, in the autumn of 1816, Abraham then being in his eighth [actually seventh] year. This removal was partly on account of slavery, but chiefly on account of the difficulty in land titles in Kentucky.”

Ritual and the Consciousness Monoculture
by Sarah Perry

Mirrors only became common in the nineteenth century; before, they were luxury items owned only by the rich. Access to mirrors is a novelty, and likely a harmful one.

In Others In Mind: Social Origins of Self-Consciousness, Philippe Rochat describes an essential and tragic feature of our experience as humans: an irreconcilable gap between the beloved, special self as experienced in the first person, and the neutrally-evaluated self as experienced in the third person, imagined through the eyes of others. One’s first-person self image tends to be inflated and idealized, whereas the third-person self image tends to be deflated; reminders of this distance are demoralizing.

When people without access to mirrors (or clear water in which to view their reflections) are first exposed to them, their reaction tends to be very negative. Rochat quotes the anthropologist Edmund Carpenter’s description of showing mirrors to the Biamis of Papua New Guinea for the first time, a phenomenon Carpenter calls “the tribal terror of self-recognition”:

After a first frightening reaction, they became paralyzed, covering their mouths and hiding their heads – they stood transfixed looking at their own images, only their stomach muscles betraying great tension.

Why is their reaction negative, and not positive? It is that the first-person perspective of the self tends to be idealized compared to accurate, objective information; the more of this kind of information that becomes available (or unavoidable), the more each person will feel the shame and embarrassment from awareness of the irreconcilable gap between his first-person specialness and his third-person averageness.

There are many “mirrors”—novel sources of accurate information about the self—in our twenty-first century world. School is one such mirror; grades and test scores measure one’s intelligence and capacity for self-inhibition, but just as importantly, peers determine one’s “erotic ranking” in the social hierarchy, as the sociologist Randall Collins terms it. […]

There are many more “mirrors” available to us today; photography in all its forms is a mirror, and internet social networks are mirrors. Our modern selves are very exposed to third-person, deflating information about the idealized self. At the same time, says Rochat, “Rich contemporary cultures promote individual development, the individual expression and management of self-presentation. They foster self-idealization.”

My Beef With Ken Wilber
by Scott Preston (also posted on Integral World)

We see immediately from this schema why the persons of grammar are minimally four and not three. It’s because we are fourfold beings and our reality is a fourfold structure, too, being constituted of two times and two spaces — past and future, inner and outer. The fourfold human and the fourfold cosmos grew up together. Wilber’s model can’t account for that at all.

So, what’s the problem here? Wilber seems to have omitted time and our experience of time as an irrelevancy. Time isn’t even represented in Wilber’s AQAL model. Only subject and object spaces. Therefore, the human form cannot be properly interpreted, for we have four faces, like some representations of the god Janus, that face backwards, forwards, inwards, and outwards, and we have attendant faculties and consciousness functions organised accordingly for mastery of these dimensions — Jung’s feeling, thinking, sensing, willing functions are attuned to a reality that is fourfold in terms of two times and two spaces. And the four basic persons of grammar — You, I, We, He or She — are the representation in grammar of that reality and that consciousness, that we are fourfold beings just as our reality is a fourfold cosmos.

Comparing Wilber’s model to Rosenstock-Huessy’s, I would have to conclude that Wilber’s model is “deficient integral” owing to its apparent omission of time and, consequently, of the “I-thou” relationship in which the time factor is really pronounced. For the “I-It” (or “We-Its”) relation is a relation of spaces — inner and outer, while the “I-Thou” (or “We-thou”) relation is a relation of times.

It is perhaps not so apparent to English speakers especially that the “thou” or “you” form is connected with time future. Other languages, like German, still preserve the formal aspects of this. In old English you had to say “go thou!” or “be thou loving!”, and so on. In other words, the “thou” or “you” is most closely associated with the imperative form and that is the future addressing the past. It is a call to change one’s personal or collective state — what we call the “vocation” or “calling” is time future in dialogue with time past. Time past is represented in the “we” form. We is not plural “I’s”. It is constituted by some historical act, like a marriage or union or congregation of peoples or the sexes in which “the two shall become one flesh”. We is the collective person, historically established by some act. The people in “We the People” is a singularity and a unity, an historically constituted entity called “nation”. A bunch of autonomous “I’s” or egos never yet formed a tribe or a nation — or a commune for that matter. Nor a successful marriage.

Though “I-It” (or “We-Its”) might be permissible in referring to the relation of subject and object spaces, “we-thou” is the relation in which the time element is outstanding.

Who are we hearing and talking to?

“We are all fragmented. There is no unitary self. We are all in pieces, struggling to create the illusion of a coherent ‘me’ from moment to moment.”
~ Charles Fernyhough

“Bicamerality hidden in plain sight.”
~ Andrew Bonci

“What I tell you in the dark, speak in the daylight; what is whispered in your ear, proclaim from the roofs.”
~ Matthew 10:27

“illusion of a completed, unitary self”
Bundle Theory: Embodied Mind, Social Nature
The Mind in the Body
Making Gods, Making Individuals
The Spell of Inner Speech
Reading Voices Into Our Minds
Verbal Behavior
Keep Your Experience to Yourself