“Pain in the conscious human is thus very different from that in any other species. Sensory pain never exists alone except in infancy or perhaps under the influence of morphine when a patient says he has pain but does not mind it. Later, in those periods after healing in which the phenomena usually called chronic pain occur, we have perhaps a predominance of conscious pain.”
~Julian Jaynes, Sensory Pain and Conscious Pain
I’ve lost count of the number of times I’ve seen a child react to a cut or stumble only after their parent(s) freaked out. Children are highly responsive to adults. If others think something bad has happened, they internalize this and act accordingly. Kids will do anything to conform to expectations. But most kids seem impervious to pain, assuming they don’t get the message that they are expected to put on an emotional display.
This difference can be seen when comparing how a child acts by themselves and how they act around a parent or other authority figure. You’ll sometimes see a kid looking around to see if there is an audience paying attention before crying or having a tantrum. We humans are social creatures and our behavior is always social. This is naturally understood even by infants, who have an instinct for social cues and social response.
Pain is a physical sensation, an experience that passes, whereas suffering is in the mind, a story we tell ourselves. This is why trauma can last for decades after a bad experience. The sensory pain is gone but the conscious pain continues. We keep repeating a story.
It’s interesting that some cultures like the Piraha don’t appear to experience trauma from the exact same events that would traumatize a modern Westerner. Neither are depression and anxiety common among them. Nor an obsessive fear of death. Not only are the Piraha physically tougher but psychologically tougher as well. Apparently, they tell different stories that embody other expectations.
So, what kind of society is it that we’ve created with our Jaynesian consciousness of traumatized hyper-sensitivity and psychological melodrama? Why are we so attached to our suffering and victimization? What does this story offer us in return? What power does it hold over us? What would happen if we changed the master narrative of our society by replacing the competing claims of victimhood with an entirely different way of relating? What if outward performances of suffering were no longer expected or rewarded?
For one, we wouldn’t have a man-baby like Donald Trump as our national leader. He is the perfect personification of this conscious pain crying out for attention. And we wouldn’t have had the white victimhood that put him into power. But neither would we have any of the other victimhoods that these particular whites were reacting to. The whole culture of victimization would lose its power.
The social dynamic would be something else entirely. It’s hard to imagine what that might be. We’re addicted to the melodrama and we carefully enculturate and indoctrinate each generation to follow our example. To shake us loose from our socially constructed reality would require a challenge to our social order. The extremes of conscious pain aren’t only about our way of behaving. They are inseparable from how we maintain the world we are so desperately attached to.
We need the equivalent, in the cartoon below, of how this father relates to his son. But we need it on the collective level. Or at least we need this in the United States. What if the rest of the world simply stopped reacting to American leaders and American society? Just smile.
Credit: The basic observation and the cartoon was originally shared by Mateus Barboza on the Facebook group “Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind”.
Here in Iowa City, we’ve been in a permanent state of construction for years now. I can’t remember the last time some part of the downtown wasn’t in disarray while in the process of being worked on. Large parts of the pedestrian mall have been a maze of fencing and torn up brick for years upon years (Michael Shea, Ped Mall updates soon to come). An entire generation of Iowa Citians has grown up with construction as their childhood memory of the town.
For a small town with a population of only 75,798, the city government impressively throws millions of dollars at projects like it’s pocket change. The pedestrian mall renovation alone is projected to cost $7.4 million, and that’s limited to about a block of the downtown. The city has had many similar projects in recent years, including the building of multiple massive parks and a city-wide infrastructure overhaul, to name a few. Over the past decade or so, the city expenditures for these kinds of improvements might add up to hundreds of millions of dollars. That is a lot of money for such a limited area, considering one can take a relaxed stroll from one side of town to the other in a couple of hours or less.
All of this public investment is called progress, so I hear. As part of this project to improve and beautify the downtown, they apparently built a wall as a memorial to very important people (a wall to Make Iowa City Great Again?). It’s entitled “A Mark was Made”. From the official City of Iowa City website, it is reported that, “The wall was created to become an evolving acknowledgement celebrating the leadership, activism, and creativity of those who have influenced the Iowa City community and beyond” (Installation of ‘A Mark was Made’ story wall completed as part of Ped Mall project).
One of the local figures included is John Alberhasky, now deceased. He was a respectable member of the local capitalist elite and still well-remembered by many. For the older generations who are fond of what capitalism once meant, this is the kind of guy they’re thinking of. Apparently, I’m now officially part of the “older generations”, as I can recall what Iowa City used to be like… ah, the good ol’ days.
Mr. Alberhasky was not only a small business owner but also a widely known community leader. The small mom-and-pop grocery store that he started, affectionately known as “Dirty John’s”, has long been a regular stop even for people not living in the neighborhood, and the store’s deli used to make sandwiches that were sold at a local high school. Once among dozens of such corner grocery stores, it is the only one remaining in this town. The store itself is a memorial to a bygone era.
This local businessman seems like a worthy addition to this memorial. He was beloved by the community. And he seems to have established an honorable family business that is being carried on with care by his descendants. There are few families that have been part of the Iowa City community for so long, going back to the 1800s, the kinds of ethnic immigrants that built this country. They are good people, the best landlords I’ve ever had I might add (as a tenant for a couple of decades, does that make me their landpeasant?). I approve of their family’s patriarch being included on this fine wall of public distinction.
Still, I can’t help noting an irony about this memorial to community involvement and public service. It is located in the People’s Park that was turned into the gentrified front yard of a TIF-funded high-rise built for rich people (TIFs, Gentrification, and Plutocracy). It effectively evicted the common folk from this public park for years, and a once thriving community space has never been the same since (Freedom and Public Space). Only recently did they finally put seating back to allow the dirty masses to once again rest their weary bodies, but it has yet to regain the welcoming feel it once held as a vibrant expression of community.
To this day, there is no memorial or even a small plaque indicating that this is a unique park, separate from and predating the pedestrian mall: originally a green space established through community organizing and public demand, the first public space established downtown. It’s as if the People’s Park does not exist and, as far as public memory goes, never did exist. The people who remember it are growing fewer in number.
Not even the local government will officially acknowledge it. In the article about the new wall from the city website, they don’t mention that this is the People’s Park and, instead, refer to it as merely Black Hawk Mini Park. I did a quick site search and the People’s Park is not mentioned by name anywhere on the city website. But at least Chief Black Hawk gets mentioned for his role in surrendering to the US military that allowed white people to take his people’s land… that’s something.
“Even now, man may be unwittingly changing the world’s climate through the waste products of his civilization. Due to our release through factories and automobiles every year of 6 billion tons of carbon dioxide (CO2), which helps air absorb heat from the sun, our atmosphere seems to be getting warmer.”
~Unchained Goddess, film from Bell Telephone Science Hour (1958)
“[C]urrent scientific opinion overwhelmingly favors attributing atmospheric carbon dioxide increase to fossil fuel combustion.”
~James F. Black, senior scientist in the Products Research Division of Exxon Research and Engineering, from his presentation to Exxon corporate management entitled “The Greenhouse Effect” (July, 1977)
“Data confirm that greenhouse gases are increasing in the atmosphere. Fossil fuels contribute most of the CO2.”
~Duane G. Levine, Exxon scientist, presentation to the Board of Directors of Exxon entitled “Potential Enhanced Greenhouse Effects: Status and Outlook” (February 22, 1989)
“The scientific basis for the Greenhouse Effect and the potential impact of human emissions of greenhouse gases such as CO2 on climate is well established and cannot be denied.”
~Oil industry group Global Climate Coalition’s advisory committee of scientific and technical experts reported in the internal document “Predicting Future Climate Change: A Primer”, written in 1995; a redacted and censored version was distributed in 1996 (see UCSUSA’s “Former Exxon Employee Says Company Considered Climate Risks as Early as 1981”)
“Perhaps the most interesting effect concerning carbon in trees which we have thus far observed is a marked and fairly steady increase in the 12C/13C ratio with time. Since 1840 the ratio has clearly increased markedly. This effect can be explained on the basis of a changing carbon dioxide concentration in the atmosphere resulting from industrialization and the consequent burning of large quantities of coal and petroleum.”
~Harrison Brown, a biochemist who, along with colleagues at the California Institute of Technology, submitted a research proposal to the American Petroleum Institute entitled “The determination of the variations and causes of variations of the isotopic composition of carbon in nature” (1954)
“This report unquestionably will fan emotions, raise fears, and bring demand for action. The substance of the report is that there is still time to save the world’s peoples from the catastrophic consequence of pollution, but time is running out.
“One of the most important predictions of the report is carbon dioxide is being added to the Earth’s atmosphere by the burning of coal, oil, and natural gas at such a rate that by the year 2000, the heat balance will be so modified as possibly to cause marked changes in climate beyond local or even national efforts. The report further states, and I quote “. . . the pollution from internal combustion engines is so serious, and is growing so fast, that an alternative nonpolluting means of powering automobiles, buses, and trucks is likely to become a national necessity.””
~Frank Ikard, then-president of the American Petroleum Institute, addressing industry leaders at the annual meeting, “Meeting the challenges of 1966” (November 8, 1965), given 3 days after the U.S. Science Advisory Committee’s official report, “Restoring the Quality of Our Environment”
“At a 3% per annum growth rate of CO2, a 2.5°C rise brings world economic growth to a halt in about 2025.”
~J. J. Nelson, American Petroleum Institute, notes from the CO2 and Climate Task Force (AQ-9) meeting, attended by representatives from Exxon, SOHIO, and Texaco (March 18, 1980)
“Exxon position: Emphasize the uncertainty in scientific conclusions regarding the potential enhanced Greenhouse effect.”
~Joseph M. Carlson, Exxon spokesperson writing in “1988 Exxon Memo on the Greenhouse Effect” (August 3, 1988)
“Victory Will Be Achieved When
• “Average citizens understand (recognise) uncertainties in climate science; recognition of uncertainties becomes part of the ‘conventional wisdom’
• “Media ‘understands’ (recognises) uncertainties in climate science
• “Those promoting the Kyoto treaty on the basis of extant science appear to be out of touch with reality.”
~American Petroleum Institute’s 1998 memo on denialist propaganda, see Climate Science vs. Fossil Fuel Fiction; “The API’s task force was made up of the senior scientists and engineers from Amoco, Mobil, Phillips, Texaco, Shell, Sunoco, Gulf Oil and Standard Oil of California, probably the highest paid and sought-after senior scientists and engineers on the planet. They came from companies that, just like Exxon, ran their own research units and did climate modeling to understand the impact of climate change and how it would impact their company’s bottom line.” (Not Just Exxon: The Entire Oil and Gas Industry Knew The Truth About Climate Change 35 Years Ago.)
[C]urrent scientific opinion overwhelmingly favors attributing atmospheric carbon dioxide increase to fossil fuel combustion. […] In the first place, there is general scientific agreement that the most likely manner in which mankind is influencing the global climate is through carbon dioxide release from the burning of fossil fuels. A doubling of carbon dioxide is estimated to be capable of increasing the average global temperature by from 1 [degree] to 3 [degrees Celsius], with a 10 [degrees Celsius] rise predicted at the poles. More research is needed, however, to establish the validity and significance of predictions with respect to the Greenhouse Effect. Present thinking holds that man has a time window of five to 10 years before the need for hard decisions regarding changes in energy strategies might become critical.
~James F. Black, senior scientist in the Products Research Division of Exxon Research and Engineering, from his presentation to Exxon corporate management entitled “The Greenhouse Effect” (July, 1977)
Present climatic models predict that the present trend of fossil fuel use will lead to dramatic climatic changes within the next 75 years. However, it is not obvious whether these changes would be all bad or all good. The major conclusion from this report is that, should it be deemed necessary to maintain atmospheric CO2 levels to prevent significant climatic changes, dramatic changes in patterns of energy use would be required.
~W. L. Ferrall, Exxon scientist writing in an internal Exxon memo, “Controlling Atmospheric CO2” (October 16, 1979)
In addition to the effects of climate on the globe, there are some particularly dramatic questions that might cause serious global problems. For example, if the Antarctic ice sheet, which is anchored on land, should melt, then this could cause a rise in the sea level on the order of 5 meters. Such a rise would cause flooding in much of the US East Coast, including the state of Florida and Washington D.C.
~Henry Shaw and P. P. McCall, Exxon scientists writing in an internal Exxon report, “Exxon Research and Engineering Company’s Technological Forecast: CO2 Greenhouse Effect” (December 18, 1980)
“but changes of a magnitude well short of catastrophic…” I think that this statement may be too reassuring. Whereas I can agree with the statement that our best guess is that observable effects in the year 2030 are likely to be “well short of catastrophic”, it is distinctly possible that the CPD scenario will later produce effects which will indeed be catastrophic (at least for a substantial fraction of the earth’s population). This is because the global ecosystem in 2030 might still be in a transient, headed for much more significant effects after time lags perhaps of the order of decades. If this indeed turns out to be the case, it is very likely that we will unambiguously recognize the threat by the year 2000 because of advances in climate modeling and the beginning of real experimental confirmation of the CO2 problem.
~Roger Cohen, director of the Theoretical and Mathematical Sciences Laboratory at Exxon Research writing in inter-office correspondence “Catastrophic effects letter” (August 18, 1981)
In addition to the effects of climate on global agriculture, there are some potentially catastrophic events that must be considered. For example, if the Antarctic ice sheet, which is anchored on land, should melt, then this could cause a rise in sea level on the order of 5 meters. Such a rise would cause flooding on much of the U.S. East Coast, including the state of Florida and Washington, D.C. […] The greenhouse effect is not likely to cause substantial climatic changes until the average global temperature rises at least 1 degree Centigrade above today’s levels. This could occur in the second to third quarter of the next century. However, there is concern among some scientific groups that once the effects are measurable, they might not be reversible and little could be done to correct the situation in the short term. Therefore, a number of environmental groups are calling for action now to prevent an undesirable future situation from developing. Mitigation of the “greenhouse effect” would require major reductions in fossil fuel combustion.
~Marvin B. Glaser, Environmental Affairs Manager, Coordination and Planning Division of Exxon Research and Engineering Company, writing in “Greenhouse Effect: A Technical Review” (April 1, 1982)
In summary, the results of our research are in accord with the scientific consensus on the effect of increased atmospheric CO2 on climate. […] Furthermore our ethical responsibility is to permit the publication of our research in the scientific literature. Indeed, to do otherwise would be a breach of Exxon’s public position and ethical credo on honesty and integrity.
~Roger W. Cohen, Director of Exxon’s Theoretical and Mathematical Sciences Laboratory, memo “Consensus on CO2 Impacts” to A. M. Natkin of Exxon’s Office of Science and Technology (September 2, 1982)
[F]aith in technologies, markets, and correcting feedback mechanisms is less than satisfying for a situation such as the one you are studying at this year’s Ewing Symposium. […] Clearly, there is vast opportunity for conflict. For example, it is more than a little disconcerting the few maps showing the likely effects of global warming seem to reveal the two superpowers losing much of the rainfall, with the rest of the world seemingly benefitting.
~Dr. Edward E. David, Jr., president of the Exxon Research and Engineering Company, keynote address to the Maurice Ewing symposium at the Lamont–Doherty Earth Observatory on the Palisades, New York campus of Columbia University, published in “Inventing the Future: Energy and the CO2 ‘Greenhouse Effect’” (October 26, 1982)
Data confirm that greenhouse gases are increasing in the atmosphere. Fossil fuels contribute most of the CO2. […] Projections suggest significant climate change with a variety of regional impacts. Sea level rise with generally negative consequences. […] Arguments that we can’t tolerate delay and must act now can lead to irreversible and costly Draconian steps. […] To be a responsible participant and part of the solution to [potential enhanced greenhouse], Exxon’s position should recognize and support 2 basic societal needs. First […] to improve understanding of the problem […] not just the science […] but the costs and economics tempered by the sociopolitical realities. That’s going to take years (probably decades).
~Duane G. Levine, Exxon scientist, presentation to the Board of Directors of Exxon entitled “Potential Enhanced Greenhouse Effects: Status and Outlook” (February 22, 1989)
“No man should [refer to himself in the third person] unless he is the King of England — or has a tapeworm.” ~ Mark Twain
“Love him or hate him, Trump is a man who is certain about what he wants and sets out to get it, no holds barred. Women find his power almost as much of a turn-on as his money.”
~ Donald Trump
The self is a confusing matter. As always, the question is who is speaking and who is listening. Clues can come from the language that is used. And the language we use shapes human experience, as studied in linguistic relativity. Speaking in first person may be a more recent innovation of human society and psyche:
“An unmistakable individual voice, using the first person singular “I,” first appeared in the works of lyric poets. Archilochus, who lived in the first half of the seventh century B.C., sang his own unhappy love rather than assume the role of a spectator describing the frustrations of love in others. . . [H]e had in mind an immaterial sort of soul, with which Homer was not acquainted” (Yi-Fu Tuan, Segmented Worlds and Self, p. 152).
The autobiographical self requires the self-authorization of Jaynesian narrative consciousness. The emergence of the egoic self is the fall into historical time, an issue too complex for discussion here (see Julian Jaynes’ classic work or the diverse Jaynesian scholarship it inspired, or look at some of my previous posts on the topic).
Consider the mirror effect. When hunter-gatherers encounter a mirror for the first time there is what is called “the tribal terror of self-recognition” (Edmund Carpenter as quoted by Philippe Rochat, from Others in Mind, p. 31). “After a frightening reaction,” Carpenter wrote about the Biamis of Papua New Guinea, “they become paralyzed, covering their mouths and hiding their heads — they stood transfixed looking at their own images, only their stomach muscles betraying great tension.”
Research has shown that heavy use of first person is associated with depression, anxiety, and other distressing emotions. Oddly, this full immersion into subjectivity can lead into depressive depersonalization and depressive realism — the individual sometimes passes through the self and into some other state. And in that other state, I’ve noticed that silence befalls the mind, that is to say the loss of the ‘I’ where the inner dialogue goes silent. One sees the world as if coldly detached, as if outside of it all.
Third person is stranger and with a much more ancient pedigree. In the modern mind, third person is often taken as an effect of narcissistic inflation of the ego, such as seen with celebrities speaking of themselves in terms of their media identities. But in other countries and at other times, it has been an indication of religious humility or a spiritual shifting of perspective (possibly expressing the belief that only God can speak of Himself as ‘I’).
There is also the Batman effect. Children act more capable and with greater perseverance when speaking of themselves in third person, specifically as a superhero character. As with religious practice, this serves the purpose of distancing from emotion. Yet a sense of self can simultaneously be strengthened when the individual becomes identified with a character. This is similar to celebrities who turn their social identities into something akin to mythological figures. Or as the child can be encouraged to invoke their favorite superhero to stand in for their underdeveloped ego-selves, a religious true believer can speak of God or the Holy Spirit working through them. There is immense power in this.
This might point to the Jaynesian bicameral mind. When an Australian Aborigine ritually sings a Songline, he is invoking a god-spirit-personality. That third person of the mythological story shifts the Aboriginal experience of self and reality. The Aborigine has as many selves as he has Songlines, each a self-contained worldview and way of being. This could be a more natural expression of human nature… or at least an easier and less taxing mode of being (Hunger for Connection). Jaynes noted that schizophrenics with their weakened and loosened egoic boundaries have seemingly inexhaustible energy.
He suspected this might explain why archaic humans could do seemingly impossible tasks such as building pyramids, something moderns could only accomplish through use of our largest and most powerful cranes. Yet the early Egyptians managed it with a small, impoverished, and malnourished population that lacked even basic infrastructure of roads and bridges. Similarly, this might explain how many tribal people can dance for days on end with little rest and no food. And maybe also like how armies can collectively march for days on end in a way no individual could (Music and Dance on the Mind).
Upholding rigid egoic boundaries is tiresome work. This might be why, when individuals reach exhaustion under stress (mourning a death, getting lost in the wilderness, etc), they can experience what John Geiger called the third man factor, the appearance of another self often with its own separate voice. Apparently, when all else fails, this is the state of mind we fall back on and it’s a common experience at that. Furthermore, a negatory experience, as Jaynes describes it, can lead to negatory possession in the re-emergence of a bicameral-like mind with a third person identity becoming a fully expressed personality of its own, a phenomenon that can happen through trauma-induced dissociation and splitting:
“Like schizophrenia, negatory possession usually begins with some kind of an hallucination. 11 It is often a castigating ‘voice’ of a ‘demon’ or other being which is ‘heard’ after a considerable stressful period. But then, unlike schizophrenia, probably because of the strong collective cognitive imperative of a particular group or religion, the voice develops into a secondary system of personality, the subject then losing control and periodically entering into trance states in which consciousness is lost, and the ‘demon’ side of the personality takes over.”
Jaynes noted that those who are abused in childhood are more easily hypnotized. Their egoic boundaries never fully develop, or else large gaps are left in this self-construction, gaps through which other voices can slip in. This relates to what has variously been referred to as the porous self, thin boundary type, fantasy proneness, etc. Compared to those who have never experienced trauma, I bet such people would find it easier to speak in the third person and, when doing so, would show a greater shift in personality and behavior.
As for first person subjectivity, it has its own peculiarities. I think of the association of addiction and individuality, as explored by Johann Hari and as elaborated in my own writings (Individualism and Isolation; To Put the Rat Back in the Rat Park; & The Agricultural Mind). As the ego is a tiresome project that depletes one’s reserves, maybe it’s the energy drain that causes the depression, irritability, and such. A person with such a guarded sense of self would be resistant to speaking in third person, finding it hard to escape the trap of ego they’ve so carefully constructed. So many of us have fallen under its sway and can’t imagine anything else (The Spell of Inner Speech). That is probably why it so often requires trauma to break open our psychological defenses.
Besides trauma, many moderns have sought to escape the egoic prison through religious practices. Ancient methods include fasting, meditation, and prayer — these are common across the world. Fasting, by the way, fundamentally alters the functioning of the body and mind through ketosis (also the result of a very low-carb diet), something I’ve speculated may have been a supporting factor for the bicameral mind and related to the much earlier cultural preference for psychedelics over addictive stimulants, an entirely different discussion (“Yes, tea banished the fairies.”; & Autism and the Upper Crust). The simplest method of all is using third person language until it becomes a new habit of mind, something that might require a long period of practice to feel natural.
The modern mind has always been under stress. That is because it is the source of that stress. It’s not a stable and sustainable way of being in the world (The Crisis of Identity). Rather, it’s a transitional state and all of modernity has been a centuries-long stage of transformation into something else. There is an impulse hidden within, if we could only trigger the release of the locking mechanism (Lock Without a Key). The language of perspectives, as Scott Preston explores (The Three Gems and The Cross of Reality), tells us something important about our predicament. Words such as ‘I’, ‘you’, etc aren’t merely words. In language, we discover our humanity as we come to know the other.
Interestingly, however, the authors found that the three-year-olds were significantly more likely to refer to themselves in the third person (using their first names and saying that the sticker is on “his” or “her” head) than were the four-year-olds, who used first-person pronouns (“me” and “my head”) almost exclusively. […]
Povinelli has pointed out the relevancy of these findings to the phenomenon of “infantile amnesia,” which tidily sums up the curious case of most people being unable to recall events from their first three years of life. (I spent my first three years in New Jersey, but for all I know I could have spontaneously appeared as a four-year-old in my parents’ bedroom in Virginia, which is where I have my first memory.) Although the precise neurocognitive mechanisms underlying infantile amnesia are still not very well understood, escaping such a state of the perpetual present would indeed seemingly require a sense of the temporally enduring, autobiographical self.
Some strange, grammatical, mind-body affliction is making some well-known folks in sports and politics refer to themselves in the third person. It is as if they have stepped outside their bodies. Is this detachment? Modesty? Schizophrenia? If this loopy verbal quirk were simple egomania, then Louis XIV might have said, “L’etat, c’est Lou.” He did not. And if it were merely a sign of one’s overweening power, then Queen Victoria would not have invented the royal we (“we are not amused”) but rather the royal she. She did not.
Lately, though, some third persons have been talking in a kind of royal he:
* Accepting the New York Jets’ $25 million salary and bonus offer, the quarterback Neil O’Donnell said of his former team, “The Pittsburgh Steelers had plenty of opportunities to sign Neil O’Donnell.”
* As he pushed to be traded from the Los Angeles Kings, Wayne Gretzky said he did not want to wait for the Kings to rebuild “because that doesn’t do a whole lot of good for Wayne Gretzky.”
* After his humiliating loss in the New Hampshire primary, Senator Bob Dole proclaimed: “You’re going to see the real Bob Dole out there from now on.”
These people give you the creepy sense that they’re not talking to you but to themselves. To a first, second or third person’s ear, there’s just something missing. What if, instead of “I am what I am,” we had “Popeye is what Popeye is”?
Earlier this week on Twitter, Donald Trump took credit for a surge in the Consumer Confidence Index, and with characteristic humility, concluded the tweet with “Thanks Donald!”
The “Thanks Donald!” capper led many to muse about whether Trump was referring to himself in the second person, the third person, or perhaps both.
Since English only marks grammatical person on pronouns, it’s not surprising that there is confusion over what is happening with the proper name “Donald” in “Thanks, Donald!” We associate proper names with third-person reference (“Donald Trump is the president-elect”), but a name can also be used as a vocative expression associated with second-person address (“Pleased to meet you, Donald Trump”). For more on how proper names and noun phrases in general get used as vocatives in English, see two conference papers from Arnold Zwicky: “Hey, Whatsyourname!” (CLS 10, 1974) and “Isolated NPs” (Semantics Fest 5, 2004).
The use of one’s own name in third-person reference is called illeism. Arnold Zwicky’s 2007 Language Log post, “Illeism and its relatives” rounds up many examples, including from politicians like Bob Dole, a notorious illeist. But what Trump is doing in tweeting “Thanks, Donald!” isn’t exactly illeism, since the vocative construction implies second-person address rather than third-person reference. We can call this a form of vocative self-address, wherein Trump treats himself as an addressee and uses his own name as a vocative to create something of an imagined interior dialogue.
Around the time football players realized end zones were for dancing, they also decided that the pronouns “I” and “me,” which they used an awful lot, had worn out. As if to endorse the view that they were commodities, cartoons or royalty — or just immune to introspection — athletes began to refer to themselves in the third person.
It makes sense, therefore, that when the most marketed personality in the NFL gets religion, he announces it in the weirdly detached grammar of football-speak. “Deion Sanders is covered by the blood of Jesus now,” writes Deion Sanders. “He loves the Lord with all his heart.” And in Deion’s new autobiography, the Lord loves Deion right back, though the salvation he offers third-person types seems different from what mere mortals can expect.
It does seem to be a stylistic thing in formal Chinese. I’ve come across a couple of articles about artists by the artist in question where they’ve referred to themselves in the third person throughout. And quite a number of politicians do the same, I’ve been told.
Illeism in everyday speech can have a variety of intentions depending on context. One common usage is to impart humility, a common practice in feudal societies and other societies where honorifics are important to observe (“Your servant awaits your orders”), as well as in master–slave relationships (“This slave needs to be punished”). Recruits in the military, mostly United States Marine Corps recruits, are also often made to refer to themselves in the third person, such as “the recruit,” in order to reduce the sense of individuality and enforce the idea of the group being more important than the self.[citation needed] The use of illeism in this context imparts a sense of lack of self, implying a diminished importance of the speaker in relation to the addressee or to a larger whole.
Conversely, in different contexts, illeism can be used to reinforce self-promotion, as used to sometimes comic effect by Bob Dole throughout his political career.[2] This was particularly made notable during the United States presidential election, 1996 and lampooned broadly in popular media for years afterwards.
Deepanjana Pal of Firstpost noted that speaking in the third person “is a classic technique used by generations of Bollywood scriptwriters to establish a character’s aristocracy, power and gravitas.”[3] Conversely, third person self referral can be associated with self-irony and not taking oneself too seriously (since the excessive use of pronoun “I” is often seen as a sign of narcissism and egocentrism[4]), as well as with eccentricity in general.
In certain Eastern religions, like Hinduism or Buddhism, this is sometimes seen as a sign of enlightenment, since by doing so, an individual detaches his eternal self (atman) from the body-related one (maya). Known illeists of that sort include Swami Ramdas,[5] Ma Yoga Laxmi,[6] Anandamayi Ma,[7] and Mata Amritanandamayi.[8] Jnana yoga actually encourages its practitioners to refer to themselves in the third person.[9]
Young children in Japan commonly refer to themselves by their own name (a habit probably picked up from their elders, who would normally refer to them by name). This is due to the normal Japanese way of speaking, where referring to another in the third person is considered more polite than using the Japanese words for “you”, like Omae. More explanation is given in Japanese pronouns, though as the children grow older they normally switch over to using first person references. Japanese idols also may refer to themselves in the third person so as to give off the feeling of childlike cuteness.
Jnana yoga is a concise practice made for intellectual people. It is the quickest path to the top but it is the steepest. The key to jnana yoga is to contemplate the inner self and find who our self is. Our self is Atman and by finding this we have found Brahman. Thinking in third person helps move us along the path because it helps us consider who we are from an objective point of view. As stated in the Upanishads, “In truth, who knows Brahman becomes Brahman.” (Novak 17).
Respond with non-reactive awareness: consider yourself a third-person observer who watches your own emotional responses arise and then dissipate. Don’t judge, don’t try to change yourself; just observe! In time this practice will begin to cultivate a third-person perspective inside yourself that sometimes is called the Inner Witness.[4]
Researchers at the University of Arizona found in a 2015 study that frequent use of first-person singular pronouns — I, me and my — is not, in fact, an indicator of narcissism.
Instead, this so-called “I-talk” may signal that someone is prone to emotional distress, according to a new, follow-up UA study forthcoming in the Journal of Personality and Social Psychology.
Research at other institutions has suggested that I-talk, though not an indicator of narcissism, may be a marker for depression. While the new study confirms that link, UA researchers found an even greater connection between high levels of I-talk and a psychological disposition of negative emotionality in general.
Negative emotionality refers to a tendency to easily become upset or emotionally distressed, whether that means experiencing depression, anxiety, worry, tension, anger or other negative emotions, said Allison Tackman, a research scientist in the UA Department of Psychology and lead author of the new study.
Tackman and her co-authors found that when people talk a lot about themselves, it could point to depression, but it could just as easily indicate that they are prone to anxiety or any number of other negative emotions. Therefore, I-talk shouldn’t be considered a marker for depression alone.
The simple act of silently talking to yourself in the third person during stressful times may help you control emotions without any additional mental effort than what you would use for first-person self-talk — the way people normally talk to themselves.
A first-of-its-kind study led by psychology researchers at Michigan State University and the University of Michigan indicates that such third-person self-talk may constitute a relatively effortless form of self-control. The findings are published online in Scientific Reports, a Nature journal.
Say a man named John is upset about recently being dumped. By simply reflecting on his feelings in the third person (“Why is John upset?”), John is less emotionally reactive than when he addresses himself in the first person (“Why am I upset?”).
“Essentially, we think referring to yourself in the third person leads people to think about themselves more similar to how they think about others, and you can see evidence for this in the brain,” said Jason Moser, MSU associate professor of psychology. “That helps people gain a tiny bit of psychological distance from their experiences, which can often be useful for regulating emotions.”
Some of the children were assigned to a “self-immersed condition”, akin to a control group, and before and during the task were told to reflect on how they were doing, asking themselves “Am I working hard?”. Other children were asked to reflect from a third-person perspective, asking themselves “Is James [insert child’s actual name] working hard?” Finally, the rest of the kids were in the Batman condition, in which they were asked to imagine they were either Batman, Bob The Builder, Rapunzel or Dora the Explorer and to ask themselves “Is Batman [or whichever character they were] working hard?”. Children in this last condition were given a relevant prop to help, such as Batman’s cape. Once every minute through the task, a recorded voice asked the question appropriate for the condition each child was in [Are you working hard? or Is James working hard? or Is Batman working hard?].
The six-year-olds spent more time on task than the four-year-olds (half the time versus about a quarter of the time). No surprise there. But across age groups, and apparently unrelated to their personal scores on mental control, memory, or empathy, those in the Batman condition spent the most time on task (about 55 per cent for the six-year-olds; about 32 per cent for the four-year-olds). The children in the self-immersed condition spent the least time on task (about 35 per cent of the time for the six-year-olds; just over 20 per cent for the four-year-olds) and those in the third-person condition performed in between.
In other words, the more the child could distance him or herself from the temptation, the better the focus. “Children who were asked to reflect on the task as if they were another person were less likely to indulge in immediate gratification and more likely to work toward a relatively long-term goal,” the authors wrote in the study called “The “Batman Effect”: Improving Perseverance in Young Children,” published in Child Development.
This underlines the problem we see with more and more of what passes for early childhood education these days– we’re not worried about whether the school is ready to appropriately handle the students, but instead are busy trying to beat three-, four- and five-year-olds into developmentally inappropriate states to get them “ready” for their early years of education. It is precisely and absolutely backwards. I can’t say this hard enough– if early childhood programs are requiring “increased demands” on the self-regulatory skills of kids, it is the programs that are wrong, not the kids. Full stop.
What this study offers is a solution that is more damning than the “problem” that it addresses. If a four-year-old child has to disassociate, to pretend that she is someone else, in order to cope with the demands of your program, your program needs to stop, today.
Because you know where else you hear this kind of behavior described? In accounts of victims of intense, repeated trauma. In victims of torture who talk about dealing by just pretending they aren’t even there, that someone else is occupying their body while they float away from the horror.
That should not be a description of How To Cope With Preschool.
Nor should the primary lesson of early childhood education be, “You can’t really cut it as yourself. You’ll need to be somebody else to get ahead in life.” I cannot even begin to wrap my head around what a destructive message that is for a small child.
And though psychiatrists acknowledge that almost anyone is capable of hallucinating a voice under certain circumstances, they maintain that the hallucinations that occur with psychoses are qualitatively different. “One shouldn’t place too much emphasis on the content of hallucinations,” says Jeffrey Lieberman, chairman of the psychiatry department at Columbia University. “When establishing a correct diagnosis, it’s important to focus on the signs or symptoms” of a particular disorder. That is, it’s crucial to determine how the voices manifest themselves. Voices that speak in the third person, echo a patient’s thoughts or provide a running commentary on his actions are considered classically indicative of schizophrenia.
While auditory hallucinations are considered a core psychotic symptom, central to the diagnosis of schizophrenia, it has long been recognized that persons who are not psychotic may also hear voices. There is an entrenched clinical belief that distinctions can be made between these groups, typically on the basis of the perceived location or the ‘third-person’ perspective of the voices. While it is generally believed that such characteristics of voices have significant clinical implications, and are important in the differential diagnosis between dissociative and psychotic disorders, there is no research evidence in support of this. Voices heard by persons diagnosed schizophrenic appear to be indistinguishable, on the basis of their experienced characteristics, from voices heard by persons with dissociative disorders or with no mental disorder at all. On this and other bases outlined below, we argue that hearing voices should be considered a dissociative experience, which under some conditions may have pathological consequences. In other words, we believe that, while voices may occur in the context of a psychotic disorder, they should not be considered a psychotic symptom.
The psychiatric and psychological literature has reached no settled consensus about why hallucinations occur and whether all perceptual “mistakes” arise from the same processes (for a general review, see Aleman & Laroi 2008). For example, many researchers have found that when people hear hallucinated voices, some of these people have actually been subvocalizing: They have been using muscles used in speech, but below the level of their awareness (Gould 1949, 1950). Other researchers have not found this inner speech effect; moreover, this hypothesis does not explain many of the odd features of the hallucinations associated with psychosis, such as hearing voices that speak in the second or third person (Hoffman 1986). But many scientists now seem to agree that hallucinations are the result of judgments associated with what psychologists call “reality monitoring” (Bentall 2003). This is not the process Freud described with the term reality testing, which for the most part he treated as a cognitive higher-level decision: the ability to distinguish between fantasy and the world as it is (e.g., he loves me versus he’s just not that into me). Reality monitoring refers to the much more basic decision about whether the source of an experience is internal to the mind or external in the world.
Originally, psychologists used the term to refer to judgments about memories: Did I really have that conversation with my boyfriend back in college, or did I just think I did? The work that gave the process its name asked what it was about memories that led someone to infer that these memories were records of something that had taken place in the world or in the mind (Johnson & Raye 1981). Johnson & Raye’s elegant experiments suggested that these memories differ in predictable ways and that people use those differences to judge what has actually taken place. Memories of an external event typically have more sensory details and more details in general. By contrast, memories of thoughts are more likely to include the memory of cognitive effort, such as composing sentences in one’s mind.
It’s worth pointing out that a significant portion of the non-clinical population experiences auditory hallucinations. Such hallucinations need not be negative in content, though as I understand it, the preponderance of AVH in schizophrenia is or becomes negative. […]
I’ve certainly experienced the “third man”, in a moment of vivid stress when I was younger. At the time, I thought it was God speaking to me in an encouraging and authoritative way! (I was raised in a very strict religious household.) But I wouldn’t be surprised if many of us have had similar experiences. These days, I have more often the cell-phone buzzing in my pocket illusion.
There are, I suspect, many reasons why the auditory system might be activated to give rise to auditory experiences that philosophers would define as hallucinations: recalling things in an auditory way, thinking in inner speech where this might be auditory in structure, etc. These can have positive influences on our ability to adapt to situations.
What continues to puzzle me about AVH in schizophrenia are some of its fairly consistent phenomenal properties: second or third-person voice, typical internal localization (though plenty of external localization) and negative content.
The Digital God, How Technology Will Reshape Spirituality by William Indick
pp. 74-75
Doubled Consciousness
Who is this third who always walks beside you?
When I count, there are only you and I together.
But when I look ahead up the white road
There is always another one walking beside you
Gliding wrapt in a brown mantle, hooded.
—T.S. Eliot, The Waste Land
The feeling of “doubled consciousness” 81 has been reported by numerous epileptics. It is the feeling of being outside of one’s self. The feeling that you are observing yourself as if you were outside of your own body, like an outsider looking in on yourself. Consciousness is “doubled” because you are aware of the existence of both selves simultaneously—the observer and the observed. It is as if the two halves of the brain temporarily cease to function as a single mechanism; but rather, each half identifies itself separately as its own self. 82 The doubling effect that occurs as a result of some temporal lobe epileptic seizures may lead to drastic personality changes. In particular, epileptics following seizures often become much more spiritual, artistic, poetic, and musical. 83 Art and music, of course, are processed primarily in the right hemisphere, as is poetry and the more lyrical, metaphorical aspects of language. In any artistic endeavor, one must engage in “doubled consciousness,” creating the art with one “I,” while simultaneously observing the art and the artist with a critically objective “other-I.” In The Great Gatsby, Fitzgerald expressed the feeling of “doubled consciousness” in a scene in which Nick Carraway, in the throes of profound drunkenness, looks out of a city window and ponders:
Yet high over the city our line of yellow windows must have contributed their share of human secrecy to the casual watcher in the darkening streets, and I was him too, looking up and wondering. I was within and without, simultaneously enchanted and repelled by the inexhaustible variety of life.
Doubled-consciousness, the sense of being both “within and without” of one’s self, is a moment of disconnection and disassociation between the two hemispheres of the brain, a moment when left looks independently at right and right looks independently at left, each recognizing each other as an uncanny mirror reflection of himself, but at the same time not recognizing the other as “I.”
The sense of doubled consciousness also arises quite frequently in situations of extreme physical and psychological duress. 84 In his book The Third Man Factor, John Geiger delineates the conditions associated with the perception of the “sensed presence”: darkness, monotony, barrenness, isolation, cold, hunger, thirst, injury, fatigue, and fear. 85 Shermer added sleep deprivation to this list, noting that Charles Lindbergh, on his famous cross–Atlantic flight, recorded the perception of “ghostly presences” in the cockpit, that “spoke with authority and clearness … giving me messages of importance unattainable in ordinary life.” 86 Sacks noted that doubled consciousness is not necessarily an alien or abnormal sensation; we all feel it, especially when we are alone, in the dark, in a scary place. 87 We all can recall a memory from childhood when we could palpably feel the presence of the monster hiding in the closet, or that indefinable thing in the dark space beneath our bed. The experience of the “sensed other” is common in schizophrenia, can be induced by certain drugs, is a central aspect of the “near death experience,” and is also associated with certain neurological disorders. 88
To speak of oneself in the third person; to express the wish to “find myself,” is to presuppose a plurality within one’s own mind. 89 There is consciousness, and then there is something else … an Other … who is nonetheless a part of our own mind, though separate from our moment-to-moment consciousness. When I make a statement such as: “I’m disappointed with myself because I let myself gain weight,” it is quite clear that there are at least two wills at work within one mind—one will that dictates weight loss and is disappointed—and another will that defies the former and allows the body to binge or laze. One cannot point at one will and say: “This is the real me and the other is not me.” They’re both me. Within each “I” there exists a distinct Other that is also “I.” In the mind of the believer—this double-I, this other-I, this sentient other, this sensed presence who is me but also, somehow, not me—how could this be anyone other than an angel, a spirit, my own soul, or God? Sacks recalls an incident in which he broke his leg while mountain climbing alone and had to descend the mountain despite his injury and the immense pain it was causing him. Sacks heard “an inner voice” that was “wholly unlike” his normal “inner speech”—a “strong, clear, commanding voice” that told him exactly what he had to do to survive the predicament, and how to do it. “This good voice, this Life voice, braced and resolved me.” Sacks relates the story of Joe Simpson, author of Touching the Void, who had a similar experience during a climbing mishap in the Andes. For days, Simpson trudged along with a distinctly dual sense of self. There was a distracted self that jumped from one random thought to the next, and then a clearly separate focused self that spoke to him in a commanding voice, giving specific instructions and making logical deductions. 90 Sacks also reports the experience of a distraught friend who, at the moment she was about to commit suicide, heard a “voice” tell her: “No, you don’t want to do that…” The male voice, which seemed to come from outside of her, convinced her not to throw her life away. She speaks of it as her “guardian angel.” Sacks suggested that this other voice may always be there, but it is usually inhibited. When it is heard, it’s usually as an inner voice, rather than an external one. 91 Sacks also reports that the “persistent feeling” of a “presence” or a “companion” that is not actually there is a common hallucination, especially among people suffering from Parkinson’s disease. Sacks is unsure if this is a side-effect of L-DOPA, the drug used to treat the disease, or if the hallucinations are symptoms of the neurological disease itself. He also noted that some patients were able to control the hallucinations to varying degrees. One elderly patient hallucinated a handsome and debonair gentleman caller who provided “love, attention, and invisible presents … faithfully each evening.” 92
The ancients were also clued up that the origins of mental instability were spiritual, but they perceived it differently. In The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes presents a startling thesis, based on an analysis of the language of the Iliad: that the ancient Greeks were not conscious in the same way that modern humans are. The ancient Greeks had no sense of “I” (people in Victorian England, too, would sometimes speak in the third person rather than say I, because the eternal God, YHWH, was known as the great “I AM”) with which to locate their mental processes. To them their inner thoughts were perceived as coming from the gods, which is why the characters in the Iliad find themselves in frequent communication with supernatural entities.
Jaynes’s description of consciousness, in relation to memory, proposes that what people believe to be rote recollections are actually concepts: the platonic ideals of their office, the view out of the window, et al. These contribute to one’s mental sense of place and position in the world. It is these memories that enable one to see oneself in the third person.
Consciousness not a copy of experience
Since Locke’s tabula rasa, it has been thought that consciousness records our experiences, to save them for possible later reflection. However, this is clearly false: most details of our experience are immediately lost when not given special notice. Recalling an arbitrary past event requires a reconstruction of memories. Interestingly, memories are often from a third-person perspective, which proves that they could not be a mere copy of experience.
The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes
pp. 347-350
Negatory Possession
There is another side to this vigorously strange vestige of the bicameral mind. And it is different from other topics in this chapter. For it is not a response to a ritual induction for the purpose of retrieving the bicameral mind. It is an illness in response to stress. In effect, emotional stress takes the place of the induction in the general bicameral paradigm just as in antiquity. And when it does, the authorization is of a different kind.
The difference presents a fascinating problem. In the New Testament, where we first hear of such spontaneous possession, it is called in Greek daemonizomai, or demonization. 10 And from that time to the present, instances of the phenomenon most often have that negatory quality connoted by the term. The why of the negatory quality is at present unclear. In an earlier chapter (II. 4) I have tried to suggest the origin of ‘evil’ in the volitional emptiness of the silent bicameral voices. And that this took place in Mesopotamia and particularly in Babylon, to which the Jews were exiled in the sixth century B.C., might account for the prevalence of this quality in the world of Jesus at the start of this syndrome.
But whatever the reasons, they must in the individual be similar to the reasons behind the predominantly negatory quality of schizophrenic hallucinations. And indeed the relationship of this type of possession to schizophrenia seems obvious.
Like schizophrenia, negatory possession usually begins with some kind of an hallucination. 11 It is often a castigating ‘voice’ of a ‘demon’ or other being which is ‘heard’ after a considerable stressful period. But then, unlike schizophrenia, probably because of the strong collective cognitive imperative of a particular group or religion, the voice develops into a secondary system of personality, the subject then losing control and periodically entering into trance states in which consciousness is lost, and the ‘demon’ side of the personality takes over.
Always the patients are uneducated, usually illiterate, and all believe heartily in spirits or demons or similar beings and live in a society which does. The attacks usually last from several minutes to an hour or two, the patient being relatively normal between attacks and recalling little of them. Contrary to horror fiction stories, negatory possession is chiefly a linguistic phenomenon, not one of actual conduct. In all the cases I have studied, it is rare to find one of criminal behavior against other persons. The stricken individual does not run off and behave like a demon; he just talks like one.
Such episodes are usually accompanied by twistings and writhings as in induced possession. The voice is distorted, often guttural, full of cries, groans, and vulgarity, and usually railing against the institutionalized gods of the period. Almost always, there is a loss of consciousness as the person seems the opposite of his or her usual self. ‘He’ may name himself a god, demon, spirit, ghost, or animal (in the Orient it is often ‘the fox’), may demand a shrine or to be worshiped, throwing the patient into convulsions if these are withheld. ‘He’ commonly describes his natural self in the third person as a despised stranger, even as Yahweh sometimes despised his prophets or the Muses sneered at their poets. 12 And ‘he’ often seems far more intelligent and alert than the patient in his normal state, even as Yahweh and the Muses were more intelligent and alert than prophet or poet.
As in schizophrenia, the patient may act out the suggestions of others, and, even more curiously, may be interested in contracts or treaties with observers, such as a promise that ‘he’ will leave the patient if such and such is done, bargains which are carried out as faithfully by the ‘demon’ as the sometimes similar covenants of Yahweh in the Old Testament. Somehow related to this suggestibility and contract interest is the fact that the cure for spontaneous stress-produced possession, exorcism, has never varied from New Testament days to the present. It is simply by the command of an authoritative person often following an induction ritual, speaking in the name of a more powerful god. The exorcist can be said to fit into the authorization element of the general bicameral paradigm, replacing the ‘demon.’ The cognitive imperatives of the belief system that determined the form of the illness in the first place determine the form of its cure.
The phenomenon does not depend on age, but sex differences, depending on the historical epoch, are pronounced, demonstrating its cultural expectancy basis. Of those possessed by ‘demons’ whom Jesus or his disciples cured in the New Testament, the overwhelming majority were men. In the Middle Ages and thereafter, however, the overwhelming majority were women. Also evidence for its basis in a collective cognitive imperative are its occasional epidemics, as in convents of nuns during the Middle Ages, in Salem, Massachusetts, in the eighteenth century, or those reported in the nineteenth century at Savoy in the Alps. And occasionally today.
The Emergence of Reflexivity in Greek Language and Thought by Edward T. Jeremiah
p. 3
Modernity’s tendency to understand the human being in terms of abstract grammatical relations, namely the subject and self, and also the ‘I’—and, conversely, the relative indifference of Greece to such categories—creates some of the most important semantic contrasts between our and Greek notions of the self.
p. 52
Reflexivisations such as the last, as well as those like ‘Know yourself’ which reconstitute the nature of the person, are entirely absent in Homer. So too are uses of the reflexive which reference some psychological aspect of the subject. Indeed the reference of reflexives directly governed by verbs in Homer is overwhelmingly bodily: ‘adorning oneself’, ‘covering oneself’, ‘defending oneself’, ‘debasing oneself physically’, ‘arranging themselves in a certain formation’, ‘stirring oneself’, and all the prepositional phrases. The usual reference for indirect arguments is the self interested in its own advantage. We do not find in Homer any of the psychological models of self-relation discussed by Lakoff.
Use of the Third Person for Self-Reference by Jesus and Yahweh by Rod Elledge
pp. 11-13
Viswanathan addresses illeism in Shakespeare’s works, designating it as “illeism with a difference.” He writes: “It [‘illeism with a difference’] is one by which the dramatist makes a character, speaking in the first person, refer to himself in the third person, not simply as a ‘he’, which would be illeism proper, a traditional grammatical mode, but by name.” He adds that the device is extensively used in Julius Caesar and Troilus and Cressida, and occasionally in Hamlet and Othello. Viswanathan notes the device, prior to Shakespeare, was used in the medieval theater simply to allow a character to announce himself and clarify his identity. Yet, he argues that, in the hands of Shakespeare, the device becomes “a masterstroke of dramatic artistry.” He notes four uses of this “illeism with a difference.” First, it highlights the character using it and his inner self. He notes that it provides a way of “making the character momentarily detach himself from himself, achieve a measure of dramatic (and philosophical) depersonalization, and create a kind of aesthetic distance from which he can contemplate himself.” Second, it reflects the tension between the character’s public and private selves. Third, the device “raises the question of the way in which the character is seen to behave and to order his very modes of feeling and thought in accordance with a rightly or wrongly conceived image or idea of himself.” Lastly, he notes the device tends to point toward the larger philosophical problem of man’s search for identity. Speaking of the use of illeism within Julius Caesar, Spevak writes that “in addition to the psychological and other implications, the overall effect is a certain stateliness, a classical look, a consciousness on the part of the actors that they are acting in a not so everyday context.”
Modern linguistic scholarship
Otto Jespersen notes various examples of the third-person self-reference including those seeking to reflect deference or politeness, adults talking to children as “papa” or “Aunt Mary” to be more easily understood, as well as the case of some writers who write “the author” or “this present writer” in order to avoid the mention of “I.” He notes Caesar as a famous example of “self-effacement [used to] produce the impression of absolute objectivity.” Yet, Head writes, in response to Jespersen, that since the use of the third person for self-reference
is typical of important personages, whether in autobiography (e.g. Caesar in De Bello Gallico and Captain John Smith in his memoirs) or in literature (Marlowe’s Faustus, Shakespeare’s Julius Caesar, Cordelia and Richard II, Lessing’s Saladin, etc.), it is actually an indication of special status and hence implies greater social distance than does the more commonly used first person singular.
Land and Kitzinger argue that “very often—but not always . . . the use of a third-person reference form in self-reference is designed to display that the speaker is talking about themselves as if from the perspective of another—either the addressee(s) . . . or a non-present other.” The linguist Laurence Horn, noting the use of illeism by various athletic and political celebrities, notes that “the celeb is viewing himself . . . from the outside.” Addressing what he refers to as “the dissociative third person,” he notes that an athlete or politician “may establish distance between himself (virtually never herself) and his public persona, but only by the use of his name, never a 3rd person pronoun.”
pp. 15-17
Illeism in Classical Antiquity
As referenced in the history of research, Kostenberger writes: “It may strike the modern reader as curious that Jesus should call himself ‘Jesus Christ’; however, self-reference in the third person was common in antiquity.” While Kostenberger’s statement is a brief comment in the context of a commentary and not a monographic study on the issue, his comment raises a critical question. Does a survey of the evidence reveal that Jesus’s use of illeism in this verse (and by implication elsewhere in the Gospels) reflects simply another example of a common mannerism in antiquity? […]
Early Evidence
From the fifth century BC to the time of Jesus the following historians refer to themselves in the third person in their historical accounts: Hecataeus (though the evidence is fragmentary), Herodotus, Thucydides, Xenophon, Polybius, Caesar, and Josephus. For the scope of this study this point in history (from the fifth century BC to the first century AD) is the primary focus. Yet, this feature was adopted from the earlier tendency in literature in which an author states his name as a seal or sphragis for his work. Herkommer notes the “self-introduction” (Selbstvorstellung) in the Homeric Hymn to Apollo, in choral poetry (Chorlyrik) such as that by the Greek poet Alkman (seventh century BC), and in poetic maxims (Spruchdichtung) such as those of the Greek poet Phokylides (seventh century BC). Yet, from the fifth century onward, this feature appears primarily in the works of Greek historians. In addition to early evidence (prior to the fifth century) of an author’s self-reference in his historiographic work, the survey of evidence also noted an early example of illeism within Homer’s Iliad. Because this ancient Greek epic poem reflects an early use of the third-person self-reference in a narrative context and offers a point of comparison to its use in later Greek historiography, this early example of the use of illeism is briefly addressed.
Marincola notes that the style of historical narrative that first appears in Herodotus is a legacy from Homer (ca. 850 BC). He notes that “as the writer of the most ‘authoritative’ third-person narrative, [Homer] provided a model not only for later poets, epic and otherwise, but also to the prose historians who, by way of Herodotus, saw him as their model and rival.” While Homer provided the authoritative example of third-person narrative, he also, centuries before the development of Greek historiography, used illeism in his epic poem the Iliad. Illeism occurs in the direct speech of Zeus (the king of the gods), Achilles (the “god-like” son of a king and goddess), and Hector (the mighty Trojan prince).
Zeus, addressing the assembled gods on Mt. Olympus, refers to himself as “Zeus, the supreme Master” […] and states how superior he is above all gods and men. Hector’s use of illeism occurs as he addresses the Greeks and challenges the best of them to fight against “good Hector” […]. Muellner notes in these instances of third person for self-reference (Zeus twice and Hector once) that “the personage at the top and center of the social hierarchy is asserting his superiority over the group . . . . In other words, these are self-aggrandizing third-person references, like those in the war memoirs of Xenophon, Julius Caesar, and Napoleon.” He adds that “the primary goal of this kind of third-person self-reference is to assert the status accruing to exceptional excellence.” Achilles refers to himself in the context of an oath (examples of which are reflected in the OT), yet his self-reference serves to emphasize his status in relation to the Greeks, and especially to King Agamemnon. Addressing Agamemnon, the general of the Greek armies, Achilles swears by his scepter and states that the day will come when the Greeks will long for Achilles […].
Homer’s choice to use illeism within the direct speech of these three characters contributes to an understanding of its potential rhetorical implications. In each case the character’s use of illeism serves to set him apart by highlighting his innate authority and superior status. Also, all three characters reflect divine and/or royal aspects (Zeus, king of gods; Achilles, son of a king and a goddess, and referred to as “god-like”; and Hector, son of a king). The examples of illeism in the Iliad, among the earliest evidence of illeism, reflect a usage that shares similarities with the illeism as used by Jesus and Yahweh. The biblical and Homeric examples each reflect illeism in direct speech within narrative discourse, and the self-reference serves to emphasize authority or status as well as a possible associated royal and/or divine aspect(s). Yet, the examples stand in contrast to the use of illeism by later historians. As will be addressed next, these ancient historians used the third-person self-reference as a literary device to give their historical accounts a sense of objectivity.
Women and Gender in Medieval Europe: An Encyclopedia edited by Margaret C. Schaus
“Mystics’ Writings” by Patricia Dailey p. 600
The question of scribal mediation is further complicated in that the mystic’s text is, in essence, a message transmitted through her, which must be transmitted to her surrounding community. Thus, the denuding of voice of the text, of a first-person narrative, goes hand in hand with the status of the mystic as “transcriber” of a divine message that does not bear the mystic’s signature, but rather God’s. In addition, the tendency to write in the third person in visionary narratives may draw from a longstanding tradition that stems from Paul in 2 Cor. of communicating visions in the third person, but at the same time, it presents a means for women to negotiate with conflicts with regard to authority or immediacy of the divine through a veiled distance or humility that conformed to a narrative tradition.
It is no accident that the term ‘autobiography’, entailing a special amalgam of ‘autos’, ‘bios’ and ‘graphe’ (oneself, life and writing), was first used in 1797 in the Monthly Review by a well-known essayist and polyglot, translator of German romantic literature, William Taylor of Norwich. However, the term ‘autobiographer’ was first extensively used by an English Romantic poet, one of the Lake Poets, Robert Southey. 1 This does not mean that no autobiographies were written before the beginning of the nineteenth century. The classical writers wrote about famous figures of public life, the Middle Ages produced educated writers who wrote about saints’ lives and from the Renaissance onward people wrote about their own lives. However, autobiography, as an auto-reflexive telling of one’s own life’s story, presupposes a special understanding of one’s ‘self’ and therefore, biographies and legends of Antiquity and the Middle Ages are fundamentally different from ‘modern’ autobiography, which postulates a truly autonomous subject, fully conscious of his/her own uniqueness. 2 Life-writing, whether in the form of biography or autobiography, occupied the central place in Romanticism. Autobiography would also often appear in disguise. One would immediately think of S. T. Coleridge’s Biographia Literaria (1817), which combines literary criticism and sketches from the author’s life and opinions, and Mary Wollstonecraft’s Short Residence in Sweden, Norway and Denmark (1796), which combines travel narrative and the author’s own difficulties of travelling as a woman.
When one thinks about the first ‘modern’ secular autobiography, it is impossible to avoid the name of Jean-Jacques Rousseau. He calls his first autobiography The Confessions, thus aligning himself with the long Western tradition of confessional writings inaugurated by St. Augustine (354 – 430 AD). Though St. Augustine confesses to the almighty God and does not really perceive his own life as significant, there is another dimension of Augustine’s legacy which is important for his Romantic inheritors: the dichotomies inherent in the Christian way of perceiving the world, namely the opposition of spirit/matter, higher/lower, eternal/temporal, immutable/changing become ultimately emanations of a single binary opposition, that of inner and outer (Taylor 1989: 128). The substance of St. Augustine’s piety is summed up by a single sentence from his Confessions:
“And how shall I call upon my God – my God and my Lord? For when I call on Him, I ask Him to come into me. And what place is there in me into which my God can come? (…) I could not therefore exist, could not exist at all, O my God, unless Thou wert in me.” (Confessions, book I, chapter 2, p.2, emphasis mine)
The step towards inwardness was for Augustine the step towards Truth, i.e. God, and as Charles Taylor explains, this turn inward was a decisive one in the Western tradition of thought. The ‘I’ or the first person standpoint becomes unavoidable thereafter. It was a long way from Augustine’s seeing these sources as residing in God to Rousseau’s pivotal turn to inwardness without recourse to God. Of course, one must not lose sight of the developments in continental philosophy pre-dating Rousseau’s work. René Descartes was the first to embrace Augustinian thinking at the beginning of the modern era, and he was responsible for the articulation of the disengaged subject: the subject asserting that the real locus of all experience is in his own mind. 3 With the empiricist philosophy of John Locke and David Hume, who claimed that we reach the knowledge of the surrounding world through disengagement and procedural reason, there is further development towards an idea of the autonomous subject. Although their teachings seemed to leave no place for subjectivity as we know it today, still they were a vital step in redirecting the human gaze from the heavens to man’s own existence.
2 Furthermore, the Middle Ages would not speak about such concepts as ‘the author’ and one’s ‘individuality’ and it is futile to seek in such texts the appertaining subject. When a Croatian fourteenth-century author, Hanibal Lucić, writes about his life in a short text called De regno Croatiae et Dalmatiae? Paulus de Paulo, the last words indicate that the author perceives his life as being insignificant and of no value. (See Zlatar 2000)
In addition, autobiography has the pejorative connotation in Arabic of madihu nafsihi wa muzakkiha (he or she who praises and recommends him- or herself). This phrase denotes all sorts of defects in a person or a writer: selfishness versus altruism, individualism versus the spirit of the group, arrogance versus modesty. That is why Arabs usually refer to themselves in formal speech in the third person plural, to avoid the use of the embarrassing ‘I.’ In autobiography, of course, one uses ‘I’ frequently.
Becoming Abraham Lincoln by Richard Kigel Preface, XI
A note about the quotations and sources: most of the statements were collected by William Herndon, Lincoln’s law partner and friend, in the years following Lincoln’s death. The responses came in original handwritten letters and transcribed interviews. Because of the low literacy levels of many of his subjects, sometimes these statements are difficult to understand. Often they used no punctuation and wrote in fragments of thoughts. Misspellings were common and names and places were often confused. “Lincoln” was sometimes spelled “Linkhorn” or “Linkern.” Lincoln’s grandmother “Lucy” was sometimes “Lucey.” Some respondents referred to themselves in third person. Lincoln himself did in his biographical writings.
p. 35
“From this place,” wrote Abe, referring to himself in the third person, “he removed to what is now Spencer County, Indiana, in the autumn of 1816, Abraham then being in his eighth [actually seventh] year. This removal was partly on account of slavery, but chiefly on account of the difficulty in land titles in Kentucky.”
Mirrors only became common in the nineteenth century; before that, they were luxury items owned only by the rich. Access to mirrors is a novelty, and likely a harmful one.
In Others In Mind: Social Origins of Self-Consciousness, Philippe Rochat describes an essential and tragic feature of our experience as humans: an irreconcilable gap between the beloved, special self as experienced in the first person, and the neutrally-evaluated self as experienced in the third person, imagined through the eyes of others. One’s first-person self image tends to be inflated and idealized, whereas the third-person self image tends to be deflated; reminders of this distance are demoralizing.
When people without access to mirrors (or clear water in which to view their reflections) are first exposed to them, their reaction tends to be very negative. Rochat quotes the anthropologist Edmund Carpenter’s description of showing mirrors to the Biamis of Papua New Guinea for the first time, a phenomenon Carpenter calls “the tribal terror of self-recognition”:
After a first frightening reaction, they became paralyzed, covering their mouths and hiding their heads – they stood transfixed looking at their own images, only their stomach muscles betraying great tension.
Why is their reaction negative, and not positive? It is that the first-person perspective of the self tends to be idealized compared to accurate, objective information; the more of this kind of information that becomes available (or unavoidable), the more each person will feel the shame and embarrassment from awareness of the irreconcilable gap between his first-person specialness and his third-person averageness.
There are many “mirrors”—novel sources of accurate information about the self—in our twenty-first century world. School is one such mirror; grades and test scores measure one’s intelligence and capacity for self-inhibition, but just as importantly, peers determine one’s “erotic ranking” in the social hierarchy, as the sociologist Randall Collins terms it. […]
There are many more “mirrors” available to us today; photography in all its forms is a mirror, and internet social networks are mirrors. Our modern selves are very exposed to third-person, deflating information about the idealized self. At the same time, says Rochat, “Rich contemporary cultures promote individual development, the individual expression and management of self-presentation. They foster self-idealization.”
We see immediately from this schema why the persons of grammar are minimally four and not three. It’s because we are fourfold beings and our reality is a fourfold structure, too, being constituted of two times and two spaces — past and future, inner and outer. The fourfold human and the fourfold cosmos grew up together. Wilber’s model can’t account for that at all.
So, what’s the problem here? Wilber seems to have omitted time and our experience of time as an irrelevancy. Time isn’t even represented in Wilber’s AQAL model. Only subject and object spaces. Therefore, the human form cannot be properly interpreted, for we have four faces, like some representations of the god Janus, that face backwards, forwards, inwards, and outwards, and we have attendant faculties and consciousness functions organised accordingly for mastery of these dimensions — Jung’s feeling, thinking, sensing, willing functions are attuned to a reality that is fourfold in terms of two times and two spaces. And the four basic persons of grammar — You, I, We, He or She — are the representation in grammar of that reality and that consciousness, that we are fourfold beings just as our reality is a fourfold cosmos.
Comparing Wilber’s model to Rosenstock-Huessy’s, I would have to conclude that Wilber’s model is “deficient integral” owing to its apparent omission of time and subsequently of the “I-thou” relationship in which the time factor is really pronounced. For the “I-It” (or “We-Its”) relation is a relation of spaces — inner and outer, while the “I-Thou” (or “We-thou”) relation is a relation of times.
It is perhaps not so apparent to English speakers especially that the “thou” or “you” form is connected with time future. Other languages, like German, still preserve the formal aspects of this. In old English you had to say “go thou!” or “be thou loving!”, and so on. In other words, the “thou” or “you” is most closely associated with the imperative form and that is the future addressing the past. It is a call to change one’s personal or collective state — what we call the “vocation” or “calling” is time future in dialogue with time past. Time past is represented in the “we” form. We is not plural “I’s”. It is constituted by some historical act, like a marriage or union or congregation of peoples or the sexes in which “the two shall become one flesh”. We is the collective person, historically established by some act. The people in “We the People” is a singularity and a unity, an historically constituted entity called “nation”. A bunch of autonomous “I’s” or egos never yet formed a tribe or a nation — or a commune for that matter. Nor a successful marriage.
Though “I-It” (or “We-Its”) might be permissible in referring to the relation of subject and object spaces, “we-thou” is the relation in which the time element is outstanding.
As Scott Preston writes in Panem et Circenses, “a large group of people who feel that they no longer have any effective stake or a just share in a particular system of economic or social relations aren’t going to feel any obligation to defend that system when it faces a crisis, or any sense even of belonging within it.”
We see this loss of trust in so many ways. As inequality goes up, so do the rates of social problems, from homicides to child abuse, from censorship to police brutality. The public becomes more outraged and the ruling elite become more authoritarian. But it’s the public that concerns me, as I’m not part of the ruling elite nor do I aspire to be. The public has lost faith in government, corporations, media, and increasingly the healthcare system as well — nearly all of the major institutions that hold together the social fabric. A society can’t survive long under these conditions. Sure, a society can turn toward the overtly authoritarian as China is doing, but even that requires public trust that the government in some basic sense has the public good or national interest in mind.
Then again, American society has been resilient up to this point. This isn’t the first time that the social order began fracturing. On more than one occasion, the ruling elite lost control of the narrative and almost entirely lost control of the reins of power. The US has a long history of revolts, often large-scale and violent, that started as soon as the country was founded (Shays’ Rebellion, Whiskey Rebellion, etc; see Spirit of ’76 & The Fight For Freedom Is the Fight To Exist: Independence and Interdependence). In their abject fear, look at how the ruling elite treated the Bonus Army. And veterans were to be feared. Black veterans came back from WWI with their guns still in their possession and they violently fought back against their oppressors. And after WWII, veterans rose up against corrupt local governments, in one case using military weapons to shoot up the courthouse (e.g., 1946 Battle of Athens).
The public losing trust in authority figures and institutions of power is not to be taken lightly. That is even more true with a country founded on revolution and that soon after fell into civil war. As with Shays’ Rebellion, the American Civil War was a continuation of the American Revolution. The cracks in the foundation remain, the issues unresolved. This has been a particular concern for the American oligarchs and plutocrats this past century, as mass uprisings and coups overturned numerous societies around the world. The key factor, though, is what Americans will do. Patriotic indoctrination can only go so far. Where will people turn for meaningful identities that are relevant to survival in an ever more harsh and punishing society, as stress and uncertainty continue to rise?
Even if the American public doesn’t turn against the system any time soon, when it comes under attack they might not feel in the mood to sacrifice themselves to defend it. Societies can collapse from revolt, but they can more easily collapse from indifference and apathy, a slow erosion of trust. “Not with a bang but with a whimper.” But maybe that wouldn’t be such a bad thing. It’s better than some of the alternatives. And it would be an opportunity for reinvention, for new identities.
* * *
7/26/19 – An interesting thing is that the oligarchs are so unconcerned. They see this situation as a good thing, as an opportunity. Everything is an opportunity to the opportunist and disaster capitalism is one endless opportunity for those who lack a soul.
They aren’t seeking to re-create early 20th century fascism. The loss of national identities is not an issue for them, even as they exploit and manipulate patriotism. Meanwhile, their own identities and sources of power have been offshored in a new form of extra-national governance, a deep state beyond all states. They are citizens of nowhere and the rest of us are left holding the bag.
“The oligarch’s interests always lie offshore: in tax havens and secrecy regimes. Paradoxically, these interests are best promoted by nationalists and nativists. The politicians who most loudly proclaim their patriotism and defence of sovereignty are always the first to sell their nations down the river. It is no coincidence that most of the newspapers promoting the nativist agenda, whipping up hatred against immigrants and thundering about sovereignty, are owned by billionaire tax exiles, living offshore” (George Monbiot, From Trump to Johnson, nationalists are on the rise – backed by billionaire oligarchs).
It’s not that old identities are merely dying. There are those seeking to snuff them out, to put them out of their misery. The ruling elite are decimating what holds society together. But in their own demented way, maybe they are unintentionally freeing us to become something else. They have the upper hand for the moment, but moments don’t last long. Even they realize disaster capitalism can’t be maintained. It’s why they constantly dream of somewhere to escape, whether on international waters or space colonies. Everyone is looking for a new identity. That isn’t to say all potential new identities will serve us well.
All of this is a strange scenario. And most people are simply lost. As old identities loosen, we lose our bearings, even or especially among the best of us. It is disorienting, another thing Scott Preston has been writing a lot about lately (Our Mental Meltdown: Mind in Dissolution). The modern self is splintering and this creates all kinds of self-deception and self-contradiction. As Preston often puts it, “the duplicity of our times — the double-think, the double-speak, the double-standards, and the double-bind” (Age of Revolutions). But don’t worry. The oligarchs too will find themselves caught in their own traps. Their fantasies of control are only that, fantasies.
The Amish are another example of a dietary ‘paradox’ that only seems paradoxical because of dietary confusion in nutrition science and official guidelines. When we look closely at what people actually eat, many populations that are the healthiest have diets that supposedly aren’t healthy, such as lots of meat and animal fat. There are so many exceptions that they look more like the rule (Blue Zones Dietary Myth).
Besides a few genetic disorders, the Amish are a healthy population (Wikipedia, Health among the Amish). They have low incidence of allergies, asthma, etc. Some of that could be partly explained through the hygiene hypothesis (Sara G. Miller, Why Amish Kids Get Less Asthma: It’s the Cows). Amish children are exposed to more variety of animals, plants, and microbes that help to develop and strengthen their immune systems. This exposure theory has been proposed for centuries, as it was easily observable in comparing rural and urban populations. Raw milk might be an additional protective factor (Kerry Grens, Amish farm kids remarkably immune to allergies: study). Whatever the cause, the Amish are healthier than even comparable populations such as North Dakota Hutterites and Swiss farmers.
This health advantage begins young. They have low rates of Cesarean sections and few birth complications (Fox News, Amish offers clues to lowering US C-section rate). Despite lack of prenatal care, their infant mortality rate is about the same as the general population. Vaginal births, by the way, are known to contribute to positive health outcomes. On top of that, Amish mothers do extended breastfeeding, and that breast milk certainly is nutritious, considering the diet of Amish mothers is nutrient-dense. This early good health then extends into old age (Jeffrey Kluger, Amish People Stay Healthy in Old Age. Here’s Their Secret). They have lower rates of Alzheimer’s and other forms of dementia (Jimmy Holder & Andrew C. Warren, Prevalence of Alzheimer’s Disease and Apolipoprotein E Allele Frequencies in the Old Order Amish). This might relate to lower rates of environmental toxins, food additives, etc, although it surely involves more than that. Considering their low incidence of allergy and asthma, that indicates there would be less inflammation and fewer autoimmune conditions. And that would offer neurocognitive protection against mental illness (Eric Haseltine, Amish Asthma Rates Offer Clues to Preventing Mental Illness). Related to this, suicide is far less common (Donald B. Kraybill et al, Suicide Patterns in a Religious Subculture, the Old Order Amish).
Another intriguing example of health is that the Amish get fewer cavities, even as they eat a fair amount of sugar while few floss or brush regularly (Jan Ziegler, Amish People Avoid Cavities Despite Poor Dental Habits). Weston A. Price already figured that one out. Most traditional people don’t have dental care and, nonetheless, have healthy teeth. It’s because of the fat-soluble vitamins that are necessary for maintaining tooth enamel and promoting remineralization. The dessert foods certainly don’t help the Amish, that is for sure. Still, though hunter-gatherers who eat more sugary foods (honey, tropical fruit, etc) show worse dental health, they don’t have as many cavities as seen among high-carb modern Westerners. High nutrition can only go so far, but it sure does help.
Along with far less obesity and diabetes, the low rate of cardiovascular disease also stands out because the Amish do have high cholesterol. But recent research shows that the mainstream understanding is wrong, as cholesterol is one of the most important factors of health (Robert DuBroff, A Reappraisal of the Lipid Hypothesis; & Anahad O’Connor, Supplements and Diets for Heart Health Show Limited Proof of Benefit). Yet because their cholesterol is high, mainstream doctors and officials are trying to get the Amish on statins (Cindy Stauffer, Why are Amish more at risk of having high cholesterol?). It is sheer idiocy. Cholesterol is not the cause of cardiovascular disease and, as most current studies demonstrate, statins don’t decrease overall mortality. In fact, reducing cholesterol can be severely harmful, such as causing neurocognitive problems, since the brain is dependent on cholesterol. For cardiovascular health, what we need to be looking at is inflammation markers, insulin resistance, and metabolic syndrome, along with overconsumption of omega-6 fatty acids and deficiencies in the fat-soluble vitamins.
Cancer rates among the Amish further demonstrate how mainstream advice has failed us. In one study, researchers “found that Amish dietary patterns do not meet most of the diet and cancer prevention guidelines published by American Institute for Cancer Research and others (9). Most cancer prevention guidelines emphasize minimizing calorically dense foods, eating a diet rich in fruits and vegetables (at least 5 servings per day), avoiding salt-preserved foods, and limiting alcohol consumption. With the exception of limiting alcohol intake, our data suggest that the Amish do not meet these guidelines” (Gebra B. Cuyun Carter et al, Dietary Intake, Food Processing, and Cooking Methods Among Amish and Non-Amish Adults Living in Ohio Appalachia: Relevance to Nutritional Risk Factors for Cancer). Yet the researchers couldn’t believe their own evidence and still concluded that the Amish “could benefit from dietary changes.”
It didn’t occur to the researchers that the cancer prevention guidelines could be wrong, instead of the traditional foods that humans have been eating for hundreds of thousands of years. Not only do the Amish have few processed foods and hence not as much propionate, glutamate, etc (The Agricultural Mind) but also they have an emphasis on animal foods (Food in Every Country, United States Amish and Pennsylvania Dutch). Traditionally for the Amish, animal foods were the center of their diet. They typically eat meat with every meal and eggs year round, they are known for their quality raw milk and cheese (full fat), and even the carbs they eat are cooked in lard or some other animal fat. Interestingly, the Amish eat fewer vegetables than the non-Amish. Maybe they are healthy because of this, rather than in spite of it.
The Amish have much higher energy intake and 4.3% higher saturated fat intake. Because they eat mostly what they grow in gardens and on pasture, they would be getting much more nutrient-dense foods, including omega-3s and fat-soluble vitamins. Interestingly, they have nothing against GMOs and pesticides (Andrew Porterfield, Amish use GMOs, pesticides yet cancer rates remain very low), but their simple living probably would still keep their toxin exposure low. Even though they like their pies and such, their diet overall is low in starchy carbs and sugar, and the pie crusts would be cooked with lard from pasture-raised animals with its fat-soluble vitamins. Plus, I suspect they are more likely to be eating fruits and vegetables that come from traditional cultivars that fewer people have problems with.
Also, because refrigerators and freezers are rare, their food preparation and storage is likewise traditional: slow-rising of breads, long-soaking of beans, and cooking of garden plants fresh from the garden; canning, pickling, and fermenting; et cetera. Look at Weston A. Price’s work from the early 1900s (Malnourished Americans; & Health From Generation To Generation). He found that populations following traditional diets, including rural Europeans, were far healthier and had low rates of infectious diseases, despite lack of healthcare and, of course, lack of vaccinations. Among the Amish, there may be some infectious diseases that could be prevented if there were a more consistent practice of vaccination (Melissa Jenco, Study: Low vaccination rate in Amish children linked to hospitalization), although exposure to outsiders might be the greatest infectious risk. The research on vaccinations overall is mixed and the conclusions not always clear (Dr. Kendrick On Vaccines). Even if their mortality from infectious diseases might be higher, as is the case with hunter-gatherers, their health otherwise is far greater. When infectious deaths along with accidental deaths are controlled for, hunter-gatherers live to about the same age as modern Westerners. The same is probably true of the Amish.
It’s hard to compare the Amish with other Blue Zones because places like Okinawa and Sardinia don’t have the same kind of isolated farming communities. The Blue Zones are different from each other in many ways, but for our purposes here their shared feature is how so many of them are dietary paradoxes in contradicting conventional thought and official guidelines. They do so many things that are claimed to be unhealthy and yet their health is far above average. Once we let go of false dietary beliefs, the paradox disappears.
Stuart McMillen is a crowdfunded cartoonist from Australia. Here is his Patreon page and his Youtube page. He covers major social issues. His work includes popular pieces such as War On Drugs and Rat Park (here is an article he wrote about the making of the latter). These cartoons are quick introductions to otherwise difficult topics.
There are multiple folktales about the tender senses of royalty, aristocrats, and other elites. The most well known example is “The Princess and the Pea”. In the Aarne-Thompson-Uther system of folktale categorization, it gets listed as type 704, about the search for a sensitive wife. That isn’t to say that all the narrative variants of elite sensitivity involve potential wives. Anyway, the man who made this particular story famous is Hans Christian Andersen, who published his version of it in 1835. He longed to be a part of the respectable class, but felt excluded. Some speculate that he projected his own class issues onto his slightly altered version of the folktale, something discussed in the Wikipedia article about the story:
“Wullschlager observes that in “The Princess and the Pea” Andersen blended his childhood memories of a primitive world of violence, death and inexorable fate, with his social climber’s private romance about the serene, secure and cultivated Danish bourgeoisie, which did not quite accept him as one of their own. Researcher Jack Zipes said that Andersen, during his lifetime, “was obliged to act as a dominated subject within the dominant social circles despite his fame and recognition as a writer”; Andersen therefore developed a feared and loved view of the aristocracy. Others have said that Andersen constantly felt as though he did not belong, and longed to be a part of the upper class.[11] The nervousness and humiliations Andersen suffered in the presence of the bourgeoisie were mythologized by the storyteller in the tale of “The Princess and the Pea”, with Andersen himself the morbidly sensitive princess who can feel a pea through 20 mattresses.[12] Maria Tatar notes that, unlike the folk heroine of his source material for the story, Andersen’s princess has no need to resort to deceit to establish her identity; her sensitivity is enough to validate her nobility. For Andersen, she indicates, “true” nobility derived not from an individual’s birth but from their sensitivity. Andersen’s insistence upon sensitivity as the exclusive privilege of nobility challenges modern notions about character and social worth. The princess’s sensitivity, however, may be a metaphor for her depth of feeling and compassion.[1] […] Researcher Jack Zipes notes that the tale is told tongue-in-cheek, with Andersen poking fun at the “curious and ridiculous” measures taken by the nobility to establish the value of bloodlines. He also notes that the author makes a case for sensitivity being the decisive factor in determining royal authenticity and that Andersen “never tired of glorifying the sensitive nature of an elite class of people”.[15]”
Even if that is true, there is more going on here than some guy working out his personal issues through fiction. This princess’s sensory sensitivity sounds like autism spectrum disorder, and I have a theory about that. Autism has been associated with certain foods like wheat, specifically refined flour in highly processed foods (The Agricultural Mind). And a high-carb diet in general causes numerous neurocognitive problems (Ketogenic Diet and Neurocognitive Health), along with other health conditions such as metabolic syndrome (Dietary Dogma: Tested and Failed) and insulin resistance (Coping Mechanisms of Health), atherosclerosis (Ancient Atherosclerosis?) and scurvy (Sailors’ Rations, a High-Carb Diet) — by the way, the rates of these diseases have been increasing over the generations and often first appearing among the affluent. Sure, grains have long been part of the diet, but the one grain most associated with the wealthy going back millennia was wheat, as it was harder to grow, which kept it in short supply and expensive. Indeed, it is wheat, not the other grains, that gets brought up in relation to autism. This is largely because of gluten, though other things have been pointed to.
It is relevant that the historical period in which these stories were written down was around when the first large grain surpluses were becoming common and so bread, white bread most of all, became a greater part of the diet. But this change was first seen among the upper classes. It’s too bad we don’t have cross-generational data on autism rates in terms of demographic and dietary breakdown, but it is interesting to note that neurasthenia, a 19th-century mental health condition also involving sensitivity, was seen as a disease of the middle-to-upper class (The Crisis of Identity), and this notion of the elite as sensitive was a romanticized ideal going back to the 1700s with what Jane Austen referred to as ‘sensibility’ (see Bryan Kozlowski’s The Jane Austen Diet, as quoted in the link immediately above). In that same historical period, others noted that schizophrenia was spreading along with civilization (e.g., Samuel Gridley Howe and Henry Maudsley; see The Invisible Plague by Edwin Fuller Torrey & Judy Miller) and I’d add the point that there appear to be some overlapping factors between schizophrenia and autism — besides gluten, some of the implicated factors are glutamate, exorphins, inflammation, etc. “It is unlikely,” writes William Davis, “that wheat exposure was the initial cause of autism or ADHD but, as with schizophrenia, wheat appears to be associated with worsening characteristics of the conditions” (Wheat Belly, p. 48).
For most of human history, crop failures and famine were a regular occurrence. And this most harshly affected the poor masses when grain and bread prices went up, leading to food riots and sometimes revolutions (e.g., French Revolution). Before the 1800s, grains were so expensive that, in order to make them affordable, breads were often adulterated with fillers or entirely replaced with grain substitutes, the latter referred to as “famine breads” and sometimes made with tree bark. Even when available, the average person might be spending most of their money on bread, as it was one of the most costly foods around and other foods weren’t always easily obtained.
Even so, grain being highly sought after certainly doesn’t imply that the average person was eating a high-carb diet, quite the opposite (A Common Diet). Food in general was expensive and scarce and, among grains, wheat was the least common. At times, this would have forced feudal peasants and later landless peasants onto a diet limited in both carbohydrates and calories, which would have meant a typically ketogenic state (Fasting, Calorie Restriction, and Ketosis), albeit far from an optimal way of achieving it. The further back in time one looks, the greater the prevalence of ketosis would have been (e.g., the Spartan and Mongol diets), maybe with the exception of the ancient Egyptians (Ancient Atherosclerosis?). In places like Ireland, Russia, etc, the lower classes remained on this poverty diet that was often a starvation diet well into the mid-to-late 1800s, although in the case of the Irish it was an artificially constructed famine, as the potato crop was essentially being stolen by the English and sold on the international market.
Yet, in America, the poor were fortunate in being able to rely on a meat-based diet because wild game was widely available and easily obtained, even in cities. That may have been true for many European populations as well during earlier feudalism, specifically prior to the peasants being restricted in hunting and trapping on the commons. This is demonstrated by how health improved after the fall of the Roman Empire (Malnourished Americans). During this earlier period, only the wealthy could afford high-quality bread and large amounts of grain-based foods in general. That meant highly refined and fluffy white bread that couldn’t easily be adulterated. Likewise, for the early centuries of colonialism, sugar was only available to the wealthy — in fact, it was a controlled substance typically only found in pharmacies. But for the elite who had access, sugary pastries and other starchy dessert foods became popular. White bread and pastries were status symbols. Sugar was so scarce that wealthy households kept it locked away so the servants couldn’t steal it. Even fruit was disproportionately eaten by the wealthy. A fruit pie would truly have been a luxury with all three above ingredients combined in a single delicacy.
Part of the context is that, although grain yields had been increasing during the early colonial era, there weren’t dependable surplus yields of grains before the 1800s. Until then, white bread, pastries, and such simply were not affordable to most people. Consumption of grains, along with other starchy carbs and sugar, rose with 19th century advancements in agriculture. Simultaneously, income was increasing and the middle class was growing. But even as yields increased, most of the created surplus grains went to feeding livestock, not to feeding the poor. Grains were perceived as cattle feed. Protein consumption increased more than did carbohydrate consumption, at least initially. The American population, in particular, didn’t see the development of a high-carb diet until much later, as related to US mass urbanization also happening later.
Coming to the end of the 19th century, there was the emergence of the mass diet of starchy and sugary foods, especially the spread of wheat farming and white bread. And, in the US, only by the 20th century did grain consumption finally surpass meat consumption. Following that, there have been growing rates of autism. Along with sensory sensitivity, autistics are well known for their pickiness about foods and for cravings for particular foods such as those made from highly refined wheat flour, from white bread to crackers. Yet the folktales in question were speaking to a still-living memory of an earlier time when these changes had yet to happen. Hans Christian Andersen first published “The Princess and the Pea” in 1835, but such stories had been told orally long before that, probably going back at least centuries, although we now know that some of these folktales have their origins millennia earlier, even into the Bronze Age. According to the Wikipedia article on “The Princess and the Pea”,
“The theme of this fairy tale is a repeat of that of the medieval Perso-Arabic legend of al-Nadirah.[6] […] Tales of extreme sensitivity are infrequent in world culture but a few have been recorded. As early as the 1st century, Seneca the Younger had mentioned a legend about a Sybaris native who slept on a bed of roses and suffered due to one petal folding over.[23] The 11th-century Kathasaritsagara by Somadeva tells of a young man who claims to be especially fastidious about beds. After sleeping in a bed on top of seven mattresses and newly made with clean sheets, the young man rises in great pain. A crooked red mark is discovered on his body and upon investigation a hair is found on the bottom-most mattress of the bed.[5] An Italian tale called “The Most Sensitive Woman” tells of a woman whose foot is bandaged after a jasmine petal falls upon it.”
I would take it as telling that, in the case of this particular folktale, it doesn’t appear to be as ancient as other examples. That would support my argument that the sensory sensitivity of autism might be caused by greater consumption of refined wheat, something that only began to appear late in the Axial Age and only became common much later. Even the few wealthy who did have access in ancient times were eating rather limited amounts of white bread. It might have required hitting a certain level of intake, not seen until modernity or closer to it, before the extreme autistic symptoms became noticeable among a larger number of the aristocracy and monarchy.
Do you know where the term refined comes from? Around 1826, the whole grain bread used by the military was considered superior for health to the white refined bread used by the aristocracy. Before the industrial revolution, refining flour was more labor-intensive and more expensive, so white bread was the staple loaf of the aristocracy. That’s why it was called “refined”.
Bread has always been political. For Romans, it helped define class; white bread was for aristocrats, while the darkest brown loaves were for the poor. Later, Jacobin radicals claimed white bread for the masses, while bread riots have been a perennial theme of populist uprisings. But the political meaning of the staff of life changed dramatically in the early twentieth-century United States, as Aaron Bobrow-Strain, who went on to write the book White Bread, explained in a 2007 paper. […]
Even before this industrialization of baking, white flour had had its critics, like cracker inventor Sylvester Graham. Now, dietary experts warned that white bread was, in the words of one doctor, “so clean a meal worm can’t live on it for want of nourishment.” Or, as doctor and radio host P.L. Clark told his audience, “the whiter your bread, the sooner you’re dead.”
Furthermore, one should not disregard the cultural context of food consumption. Habits may develop that prevent the attainment of a level of nutritional status commensurate with actual real income. For instance, the consumption of white bread or of polished rice, instead of whole-wheat bread or unpolished rice, might increase with income, but might detract from the body’s well-being. Insofar as cultural habits change gradually over time, significant lags could develop between income and nutritional status.
pp. 192-194
As consequence, per capita food consumption could have increased between 1660 and 1740 by as much as 50 percent. The fact that real wages were higher in the 1730s than at any time since 1537 indicates a high standard of living was reached. The increase in grain exports, from 2.8 million quintals in the first decade of the eighteenth century to 6 million by the 1740s, is also indicative of the availability of nutrients.
The remarkably good harvests were brought about by the favorable weather conditions of the 1730s. In England the first four decades of the eighteenth century were much warmer than the last decades of the previous century (Table 5.1). Even small differences in temperature may have important consequences for production. […] As a consequence of high yields the price of consumables declined by 14 percent in the 1730s relative to the 1720s. Wheat cost 30 percent less in the 1730s than it did in the 1660s. […] The increase in wheat consumption was particularly important because wheat was less susceptible to mold than rye. […]
There is direct evidence that the nutritional status of many populations was, indeed, improving in the early part of the eighteenth century, because human stature was generally increasing in Europe as well as in America (see Chapter 2). This is a strong indication that protein and caloric intake rose. In the British colonies of North America, an increase in food consumption—most importantly, of animal protein—in the beginning of the eighteenth century has been directly documented. Institutional menus also indicate that diets improved in terms of caloric content.
Changes in British income distribution conform to the above pattern. Low food prices meant that the bottom 40 percent of the distribution was gaining between 1688 and 1759, but by 1800 had declined again to the level of 1688. This trend is another indication that a substantial portion of the population that was at a nutritional disadvantage was doing better during the first half of the eighteenth century than it did earlier, but that the gains were not maintained throughout the century.
The Roots of Rural Capitalism: Western Massachusetts, 1780-1860 by Christopher Clark, p. 77
Livestock also served another role, as a kind of “regulator,” balancing the economy’s need for sufficiency and the problems of producing too much. In good years, when grain and hay were plentiful, surpluses could be directed to fattening cattle and hogs for slaughter, or for exports to Boston and other markets on the hoof. Butter and cheese production would also rise, for sale as well as for family consumption. In poorer crop years, however, with feedstuffs rarer, cattle and swine could be slaughtered in greater numbers for household and local consumption, or for export as dried meat.
p. 82
Increased crop and livestock production were linked. As grain supplies began to overtake local population increases, more corn in particular became available for animal feed. Together with hay, this provided sufficient feedstuffs for farmers in the older Valley towns to undertake winter cattle fattening on a regular basis, without such concern as they had once had for fluctuations in output near the margins of subsistence. Winter fattening for market became an established practice on more farms.
But food played an even larger role in the French Revolution just a few years later. According to Cuisine and Culture: A History of Food and People, by Linda Civitello, two of the most essential elements of French cuisine, bread and salt, were at the heart of the conflict; bread, in particular, was tied up with the national identity. “Bread was considered a public service necessary to keep the people from rioting,” Civitello writes. “Bakers, therefore, were public servants, so the police controlled all aspects of bread production.”
If bread seems a trifling reason to riot, consider that it was far more than something to sop up bouillabaisse for nearly everyone but the aristocracy—it was the main component of the working Frenchman’s diet. According to Sylvia Neely’s A Concise History of the French Revolution, the average 18th-century worker spent half his daily wage on bread. But when the grain crops failed two years in a row, in 1788 and 1789, the price of bread shot up to 88 percent of his wages. Many blamed the ruling class for the resulting famine and economic upheaval.
Read more: https://www.smithsonianmag.com/arts-culture/when-food-changed-history-the-french-revolution-93598442/
Through 1788 and into 1789 the gods seemed to be conspiring to bring on a popular revolution. A spring drought was followed by a devastating hail storm in July. Crops were ruined. There followed one of the coldest winters in French history. Grain prices skyrocketed. Even in the best of times, an artisan or factor might spend 40 percent of his income on bread. By the end of the year, 80 percent was not unusual. “It was the connection of anger with hunger that made the Revolution possible,” observed Schama. It was also envy that drove the Revolution to its violent excesses and destructive reform.
Take the Reveillon riots of April 1789. Reveillon was a successful Parisian wall-paper manufacturer. He was not a noble but a self-made man who had begun as an apprentice paper worker but now owned a factory that employed 400 well-paid operatives. He exported his finished products to England (no mean feat). The key to his success was technical innovation, machinery, the concentration of labor, and the integration of industrial processes, but for all these the artisans of his district saw him as a threat to their jobs. When he spoke out in favor of the deregulation of bread distribution at an electoral meeting, an angry crowd marched on his factory, wrecked it, and ransacked his home.
Only in the late nineteenth and twentieth century did large numbers of “our ancestors”–and obviously this depends on which part of the world they lived in–begin eating white bread. […]
Wheat bread was for the few. Wheat did not yield well (only seven or eight grains for one planted compared to corn that yielded dozens) and is fairly tricky to grow.
White puffy wheat bread was for even fewer. Whiteness was achieved by sieving out the skin of the grain (bran) and the germ (the bit that feeds the new plant). In a world of scarcity, this made wheat bread pricey. And puffy, well, that takes fairly skilled baking plus either yeast from beer or the kind of climate that sourdough does well in. […]
Between 1850 and 1950, the price of wheat bread, even white wheat bread, plummeted as a result of the opening up of new farms in the US and Canada, Argentina, Australia and other places, the mechanization of plowing and harvesting, the introduction of huge new flour mills, and the development of continuous flow bakeries.
In 1800 only half the British population could afford wheat bread. In 1900 everybody could.
In Georgian times the introduction of sieves made of Chinese silk helped to produce finer, whiter flour and white bread gradually became more widespread. […]
1757
A report accused bakers of adulterating bread by using alum, lime, chalk and powdered bones to keep it very white. Parliament banned alum and all other additives in bread but some bakers ignored the ban. […]
1815
The Corn Laws were passed to protect British wheat growers. The duty on imported wheat was raised and price controls on bread lifted. Bread prices rose sharply. […]
1826
Wholemeal bread, eaten by the military, was recommended as being healthier than the white bread eaten by the aristocracy.
1834
Rollermills were invented in Switzerland. Whereas stonegrinding crushed the grain, distributing the vitamins and nutrients evenly, the rollermill broke open the wheat berry and allowed easy separation of the wheat germ and bran. This process greatly eased the production of white flour but it was not until the 1870s that it became economic. Steel rollermills gradually replaced the old windmills and watermills.
1846
With large groups of the population near to starvation the Corn Laws were repealed and the duty on imported grain was removed. Importing good quality North American wheat enabled white bread to be made at a reasonable cost. Together with the introduction of the rollermill this led to the increase in the general consumption of white bread – for so long the privilege of the upper classes.
In many contexts Linné explained how people with different standing in society eat different types of bread. He wrote, “Wheat bread, the most excellent of all, is used only by high-class people”, whereas “barley bread is used by our peasants” and “oat bread is common among the poor”. He made a remark that “the upper classes use milk instead of water in the dough, as they wish to have a whiter and better bread, which thereby acquires a more pleasant taste”. He compared his own knowledge on the food habits of Swedish society with those mentioned in classical literature. Thus, according to Linné, Juvenal wrote that “a soft and snow-white bread of the finest wheat is given to the master”, while Galen condemned oat bread as suitable only for cattle, not for humans. Here Linné had to admit that it is, however, consumed in certain provinces in Sweden.
Linné was aware of and discussed the consequences of consuming less tasty and less satisfying bread, but he seems to have accepted as a fact that people belonging to different social classes should use different foods to satisfy their hunger. For example, he commented that “bran is more difficult to digest than flour, except for hard-labouring peasants and the likes, who are scarcely troubled by it”. The necessity of having to eat filling but less palatable bread was inevitable, but could even be positive from the nutritional point of view. “In Östergötland they mix the grain with flour made from peas and in Scania with vetch, so that the bread may be more nutritious for the hard-working peasants, but at the same time it becomes less flavoursome, drier and less pleasing to the palate.” And, “Soft bread is used mainly by the aristocracy and the rich, but it weakens the gums and teeth, which get too little exercise in chewing. However, the peasant folk who eat hard bread cakes generally have stronger teeth and firmer gums”.
It is intriguing that Linné did not find it necessary to discuss the consumption or effect on health of other bakery products, such as the sweet cakes, tarts, pies and biscuits served by the fashion-conscious upper class and the most prosperous bourgeois. Several cookery books with recipes for the fashionable pastry products were published in Sweden in the eighteenth century 14. The most famous of these, Hjelpreda i Hushållningen för Unga Fruentimmer by Kajsa Warg, published in 1755, included many recipes for sweet pastries 15. Linné mentioned only in passing that the addition of egg makes the bread moist and crumbly, and sugar and currants impart a good flavour.
The sweet and decorated pastries were usually consumed with wine or with the new exotic beverages, tea and coffee. It is probable that Linné regarded pastries as unnecessary luxuries, since expensive imported ingredients, sugar and spices, were indispensable in their preparation. […]
Linné emphasized that soft and fresh bread does not draw in as much saliva and thus remains undigested for a long time, “like a stone in the stomach”. He strongly warned against eating warm bread with butter. While it was “considered as a delicacy, there was scarcely another food that was more damaging for the stomach and teeth, for they were loosen’d by it and fell out”. By way of illustration he told an example reported by a doctor who lived in a town near Amsterdam. Most of the inhabitants of this town were bakers, who sold bread daily to the residents of Amsterdam and had the practice of attracting customers with oven-warm bread, sliced and spread with butter. According to Linné, this particular doctor was not surprised when most of the residents of this town “suffered from bad stomach, poor digestion, flatulence, hysterical afflictions and 600 other problems”. […]
Linné was not the first in Sweden to write about famine bread. Among his remaining papers in London there are copies from two official documents from 1696 concerning the crop failure in the northern parts of Sweden and the possibility of preparing flour from different roots, and an anonymous small paper which contained descriptions of 21 plants, the roots or leaves of which could be used for flour 10. These texts had obviously been studied by Linné with interest.
When writing about substitute breads, Linné formulated his aim as the following: “It will teach the poor peasant to bake bread with little or no grain in the circumstance of crop failure without destroying the body and health with unnatural foods, as often happens in the countryside in years of hardship” 10.
Linné’s idea for a publication on bread substitutes probably originated during his early journeys to Lapland and Dalarna, where grain substitutes were a necessity even in good years. Actually, bark bread was eaten in northern Sweden until the late nineteenth century 4. In the poorest regions of eastern and north-eastern Finland it was still consumed in the 1920s 26. […]
Bark bread has been used in the subarctic area since prehistoric times 4. According to Linné, no other bread was such a common famine bread. He described how in springtime the soft inner layer can be removed from debarked pine trees, cleaned of any remaining bark, roasted or soaked to remove the resin, and dried and ground into flour. Linné had obviously eaten bark bread, since he could say that “it tastes rather well, is however more bitter than other bread”. His view of bark bread was most positive but perhaps unrealistic: “People not only sustain themselves on this, but also often become corpulent of it, indeed long for it.” Linné’s high regard for bark bread was shared by many of his contemporaries, but not all. For example, Pehr Adrian Gadd, the first professor of chemistry in Turku (Åbo) Academy and one of the most prominent utilitarians in Finland, condemned bark bread as “useless, if not harmful to use” 28. In Sweden, Anders Johan Retzius, a professor in Lund and an expert on the economic and pharmacological potential of Swedish flora, called bark bread “a paltry food, with which they can hardly survive and of which they always after some time get a swollen body, pale and bluish skin, big and hard stomach, constipation and finally dropsy, which ends the misery” 4. […]
Linné’s investigations of substitutes for grain became of practical service when a failed harvest of the previous summer was followed by famine in 1757 10. Linné sent a memorandum to King Adolf Fredrik in the spring of 1757 and pointed out the risk to the health of the hungry people when they ignorantly chose unsuitable plants as a substitute for grain. He included a short paper on the indigenous plants which in the shortage of grain could be used in bread-making and other cooking. His Majesty immediately permitted this leaflet to be printed at public expense and distributed throughout the country 10. Soon Linné’s recipes using wild flora were read out in churches across Sweden. In Berättelse om The inhemska wäxter, som i brist af Säd kunna anwändas til Bröd- och Matredning, Linné 32 described the habitats and the popular names of about 30 edible wild plants, eight of which were recommended for bread-making.
* * *
I’ll just drop a couple of videos here for general info:
“We are all fragmented. There is no unitary self. We are all in pieces, struggling to create the illusion of a coherent ‘me’ from moment to moment.”
~ Charles Fernyhough
“Bicamerality hidden in plain sight.”
~ Andrew Bonci
“What I tell you in the dark, speak in the daylight; what is whispered in your ear, proclaim from the roofs.”
~ Matthew 10:27
Charlie Spedding
We are told red meat causes bowel cancer. Today @thetimes reports on surge in colon cancer among the young. But young people are eating less meat. How does @WHO explain that?
Louise Stephen
Fake news – there is big money behind the drive to get people off red meat and onto replacement products such as Beyond Meat.
Tim Noakes
Just possibly, cancer might have nutritional basis. Which seems at least an outside possibility since cancer is modern disease found rarely in peoples eating their traditional diets.
Fat is our Friend
“Leading a Western lifestyle, being overweight, and being sedentary are associated with an increased risk of colorectal cancer”… but I thought it was mostly down to red meat.😉
Tim Noakes
Is diverticulosis related in any way to bowel cancer? Recall that rise in colon cancer has occurred at same time that unproven Burkitt/Trowell hypothesis has been accepted as dogma. BT hypothesis holds that absence of dietary fibre causes colon cancer. So prevention = more fibre.
Guðmundur Jóhannsson
“There is no direct evidence of an effect of dietary fiber on colon cancer incidence… In a trial of ispaghula husk fiber, the intervention group actually had significantly more recurrent adenomas after 3 years” Does a high-fiber diet prevent colon cancer in at-risk patients?
by Linda French, MD & Susan Kendall, PhD
Harold Quinn
If, as seems likely, colonic carcinoma is significantly pathogenically driven, then more “prebiotic” might be expected to be carcinogenic in the dysbiotic gut but potentially anti-cancer in a situation of eubiosis. Seeking some ubiquitous impact of fibre for all seems unwise.