Clearing Away the Rubbish

“The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue”
~Richard Horton, editor in chief of The Lancet

“It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor”
~Dr. Marcia Angell, former editor in chief of NEJM

Back in September, a scientific paper was published in Clinical Cardiology, a peer-reviewed medical journal that is “an official journal of the American Society for Preventive Cardiology” (Wikipedia). It got a ton of attention from news media, social media, and the blogosphere. The reason for all the attention is that, in the conclusion, the authors claimed that low-carb diets had proven the least healthy over a one-year period:

“One-year lowered-carbohydrate diet significantly increases cardiovascular risks, while a low-to-moderate-fat diet significantly reduces cardiovascular risk factors. Vegan diets were intermediate. Lowered-carbohydrate dieters were least inclined to continue dieting after conclusion of the study. Reductions in coronary blood flow reversed with appropriate dietary intervention. The major dietary effect on atherosclerotic coronary artery disease is inflammation and not weight loss.”

It has recently been retracted, and it has come out that the lead author, Richard M. Fleming, has a long history of fraud going back to 2002, including two federal fraud convictions in 2009 to which he pleaded guilty. He has also since been debarred by the U.S. Food and Drug Administration. (But his closest brush with fame or infamy was leaking the medical records of Dr. Robert Atkins, a leak that fueled a smear campaign.) As for his co-authors: “Three of the authors work at Fleming’s medical imaging company in California, one is a deceased psychologist from Iowa, another is a pediatric nutritionist from New York and one is a Kellogg’s employee from Illinois. How this group was able to run a 12-month diet trial in 120 subjects is something of a mystery” (George Henderson). Even before the retraction, many wondered how it ever passed peer review, considering the low quality of the study: “This study has so many methodological holes in it that it has no real value” (Low Carb Studies BLOG).

But of course, none of that has been reported as widely as the paper originally was. So, most people who read about it still assume it is valid evidence. This is related to the replication crisis, as even researchers are often unaware of retractions (that is, when journals allow retractions to be published at all, something they are reluctant to do because it delegitimizes their authority). So, a lot of low-quality or, in some cases, deceptive research goes unchallenged and unverified, neither confirmed nor disconfirmed. It’s rare for any study to fall under the scrutiny of replication. If not for the lead author’s criminal background in the Fleming case, this probably would have been another paper that slipped past and was forgotten or else, without replication, repeatedly cited in future research. As such, bad research builds on bad research, creating the appearance of mounting evidence, when in reality it is a house of cards (consider the takedown of Ancel Keys and gang in the work of numerous authors: Gary Taubes’ Good Calories, Bad Calories; Nina Teicholz’s The Big Fat Surprise; Sally Fallon Morrell’s Nourishing Diets; et cetera).

This is why the systemic problem and failure is referred to as a crisis. Fairly or unfairly, the legitimacy of entire fields of science is being questioned. Even scientists are no longer certain which research is valid. The few attempts at determining the seriousness of the situation by replicating studies have found a surprisingly low replication rate. And this problem is worse in the medical field than in many other fields, partly because of the kind of funding involved and, more importantly, because of how few doctors are educated in statistics or trained in research methodology. It is even worse with nutrition, as the average doctor gets about half the questions wrong when asked about this topic, and keep in mind that so much of the nutritional research is done by doctors. An example of a problematic dietary study is that of Dr. Fleming himself. We’d be better off letting physicists and geologists do nutritional research.

There is more than a half century of research that conventional medical and dietary opinions are based upon. In some major cases, re-analysis of the data has shown completely opposite conclusions. For example, the most famous study by Ancel Keys blamed saturated fat for heart disease, while recent reappraisal of the data points to sugar as the stronger culprit. Meanwhile, no study has ever directly linked saturated fat to heart disease. The confusion has come about because, in the Standard American Diet (SAD), saturated fat and sugar have been conflated in the populations under study. Yet even in cases like that of Keys, where we now know what the data actually show, the original misleading conclusions are still referenced as authoritative.

The only time this crisis comes to attention is when the researcher gets attention. If Keys hadn’t been famous and Fleming hadn’t been a criminal, no one would have bothered with their research. Lots of research gets continually cited without much thought, as the authority of research accumulates over time through citation, which in turn encourages further citation. It’s similar to how legal precedents get set, even when the initial precedent was intentionally misinterpreted for that very purpose.

To dig through the original data, assuming it is available and one knows where to find it, is more work than most are willing to do. There is no glory or praise to be gained in doing it, nor will it promote one’s career or profit one’s bank account. If anything, there are plenty of disincentives in place, as academic careers in science depend on original research. Furthermore, private researchers working in corporations, for obvious reasons, tend to be even less open about their data, and that makes scrutiny even more difficult. If a company found its own research didn’t replicate, it would be the last in line to announce that to the world and instead would likely bury it where it would never be found.

There is no system in place to guard against the flaws of the system itself. And the news media is in an almost continual state of failure when it comes to scientific reporting. The crisis has been stewing for decades, occasionally mentioned but mostly suppressed, until now, when it has gotten so bad as to be undeniable. The internet has created alternative flows of information, and so much of the scrutiny, delayed for too long, is now coming from below. If this had happened at an earlier time, Fleming might have gotten away with it. But times have changed. And in crisis there is opportunity, or at the very least there is hope for open debate. So bring on the debate, just as soon as we clear away some of the rubbish.

* * *

Retracted: Long‐term health effects of the three major diets under self‐management with advice, yields high adherence and equal weight loss, but very different long‐term cardiovascular health effects as measured by myocardial perfusion imaging and specific markers of inflammatory coronary artery disease

The above article, published online on 27 September 2018 in Wiley Online Library (wileyonlinelibrary.com), has been withdrawn by agreement between the journal Editor in Chief, A. John Camm and Wiley Periodicals, Inc. The article has been withdrawn due to concerns with data integrity and an undisclosed conflict of interest by the lead author.

A convicted felon writes a paper on hotly debated diets. What could go wrong?
by Ivan Oransky, Retraction Watch

Pro-tip for journals and publishers: When you decide to publish a paper about a subject — say, diets — that you know will draw a great deal of scrutiny from vocal proponents of alternatives, make sure it’s as close to airtight as possible.

And in the event that the paper turns out not to be so airtight, write a retraction notice that’s not vague and useless.

Oh, and make sure the lead author of said study isn’t a convicted felon who pleaded guilty to healthcare fraud.

If only we were describing a hypothetical.

On second thought: A man of many talents — with a spotty scientific record
by Adam Marcus, Boston Globe

Richard M. Fleming may be a man of many talents, but his record as a scientist has been spotty. Fleming, who bills himself on Twitter as “PhD, MD, JD AND NOW Actor-Singer!!!”, was a co-author of a short-lived paper in the journal Clinical Cardiology purporting to find health benefits from a diet with low or modest amounts of fat. The paper came out in late September — just a day before the Food and Drug Administration banned Fleming from participating in any drug studies. Why? Two prior convictions for fraud in 2009.

It didn’t take long for others to begin poking holes in the new article. One researcher found multiple errors in the data and noted that the study evidently had been completed in 2002. The journal ultimately retracted the article, citing “concerns with data integrity and an undisclosed conflict of interest by the lead author.” But Fleming, who objected to the retraction, persevered. On Nov. 5, he republished the study in another journal — proving that grit, determination, and a receptive publisher are more important than a spotless resume.

Malnourished Americans

Prefatory Note

It would be easy to mistake this writing as a carnivore’s rhetoric against the evils of grains and agriculture. I’m a lot more agnostic on the issue than it might seem. But I do come off as strongly opinionated, based on decades of personal experience with bad eating habits and their consequences, and my dietary habits were no better when I was a vegetarian.

I’m not so much pro-meat as I am for healthy fats and oils, not only from animal sources but also from plants, with coconut oil and olive oil being two of my favorites. As long as you are getting adequate protein, from whatever source (including vegetarian foods), there is no absolute rule about protein intake. But hunter-gatherers on average eat more fats and oils than protein (and more than vegetables as well), whether the protein comes from meat or from seeds and nuts (though the protein and vegetables they get are of extremely high quality and, of course, nutrient-dense, along with much fiber). Too much protein with too little fat/oil causes rabbit sickness. It’s fat and oil that have higher satiety and, combined with low-carb ketosis, are amazing at eliminating food cravings, addictions, and over-eating.

Besides, I have nothing against plant-based foods. I eat more vegetables on the paleo diet than I did in the past, even when I was a vegetarian, and more than any vegetarian I know; not just more in quantity but also more in quality. Many paleo and keto dieters have embraced a plant-based diet with varying attitudes about meat and fat. Dr. Terry Wahls, a former vegetarian, reversed her symptoms of multiple sclerosis by formulating a paleo diet that includes massive loads of nutrient-dense vegetables, while adding in nutrient-dense animal foods as well (e.g., liver).

I’ve picked up three books lately that emphasize plants even further. One is The Essential Vegetarian Keto Cookbook, which is pretty much as the title describes: mostly recipes, with some introductory material about ketosis. Another book, Ketotarian by Dr. Will Cole, is likewise about keto vegetarianism, but with leniency toward fish consumption and ghee (the former not strictly vegetarian and the latter not strictly paleo). The most recent I got is The Paleo Vegetarian Diet by Dena Harris, another person with a lenient attitude toward diet. That is what I prefer in my tendency toward ideological impurity. About diet, I’m bi-curious or maybe multi-curious.

My broader perspective is that of traditional foods. This is largely based on the work of Weston A. Price, which I was introduced to long ago by way of the writings of Sally Fallon Morrell (formerly Sally Fallon). It is not a paleo diet in that agricultural foods are allowed, but its advocates share a common attitude with paleolists in valuing traditional nutrition and food preparation. Authors from both camps bond over their respect for Price’s work and so often reference those on the other side in their writings. I’m of the opinion, in line with traditional foods, that if you are going to eat agricultural foods then traditional preparation is all the more important (from long-fermented bread and fully soaked legumes to cultured dairy and raw aged cheese). Many paleolists share this opinion and some are fine with such things as ghee. My paleo commitment didn’t stop me from enjoying a white roll for Thanksgiving, adorning it with organic goat butter, and it didn’t kill me.

I’m not so much arguing against all grains in this post as I’m pointing out the problems found at the extreme end of dietary imbalance that we’ve reached this past century: industrialized and processed, denatured and toxic, grain-based/obsessed and high-carb-and-sugar. In the end, I’m a flexitarian who has come to see the immense benefits in the paleo approach, but I’m not attached to it as a belief system. I heavily weigh the best evidence and arguments I can find in coming to my conclusions. That is what this post is about. I’m not trying to tell anyone how to eat. I hope that heads off certain areas of potential confusion and criticism. So, let’s get to the meat of the matter.

Grain of Truth

Let me begin with a quote, share some related info, and then circle back around to putting the quote into context. The quote is from Grain of Truth by Stephen Yafa. It’s a random book I picked up at a secondhand store, and my attraction to it was that the author defends agriculture and grain consumption. I figured it would be a good balance to my other recent readings. Skimming it, I noticed one factoid that stuck out. In reference to new industrial milling methods that took hold in the late 19th century, he writes:

“Not until World War II, sixty years later, were measures taken to address the vitamin and mineral deficiencies caused by these grain milling methods. They caught the government’s attention only when 40 percent of the raw recruits drafted by our military proved to be so malnourished that they could not pass a physical and were declared unfit for duty.” (p. 17)

That is remarkable. He is talking about the now infamous highly refined flour, something that never existed before. Even commercial whole wheat breads today, with some fiber added back in, have little in common with what was traditionally made for millennia. My grandparents were of that particular generation that was so severely malnourished, and so that was the world into which my parents were born. The modern health decline that has gained mainstream attention began many generations back. Okay, so put that on the backburner.

Against the Grain

In a post by Dr. Malcolm Kendrick, I was having a discussion in the comments section (and, at the same time, I was having a related discussion on my own blog). Göran Sjöberg brought up James C. Scott’s book about the development of agriculture, Against the Grain — writing that, “This book is very much about the health deterioration, not least through epidemics partly due to compromised immune resistance, that occurred in the transition from hunting and gathering to sedentary mono-crop agriculture state level scale, first in Mesopotamia about five thousand years ago.”

Scott’s view has interested me for a while. I find compelling the way he connects grain farming, legibility, record-keeping, and taxation. There is a reason great empires were built on grain fields, not on potato patches or vegetable gardens, much less cattle ranching. Grain farming is easily observed and measured, tracked and recorded, and that meant it could be widely taxed to fund large centralized governments along with their armies and, later on, their police forces and intelligence agencies. The earliest settled societies arose prior to agriculture, but they couldn’t become major civilizations until the cultivation of grains.

Another commenter, Sasha, responded with what she considered important qualifications: “I think there are too many confounders in transition from hunter gatherers to agriculture to suggest that health deterioration is due to one factor (grains). And since it was members of upper classes who were usually mummified, they had vastly different lifestyles from that of hunter gatherers. IMO, you’re comparing apples to oranges… Also, grain consumption existed in hunter gatherers and probably intensified long before Mesopotamia 5 thousands years ago as wheat was domesticated around 9,000 BCE and millet around 6,000 BCE to use just two examples.”

It is true that pre-neolithic hunter-gatherers, in some cases, sporadically ate grains in small amounts, or at least we have evidence they were doing something with grains, though as far as we know they might have been mixing them with medicinal herbs or using them as a thickener for paints — it’s anyone’s guess. Assuming they were eating those traces of grains we’ve discovered, it surely was nowhere near the level of the neolithic agriculturalists. Furthermore, during the following millennia, grains were radically changed through cultivation. As for the Egyptian elite, they were eating more grains than anyone, as the farmers themselves were still forced to partly subsist on hunting, fishing, and gathering.

I’d take the argument much further forward into history. We know from records that, through the 19th century, Americans were eating more meat than bread. Vegetable and fruit consumption was also relatively low and mostly seasonal. Part of that is because gardening was difficult with so many pests. Besides, with so many natural areas around, hunting and gathering remained a large part of the American diet. Even in the cities, wild game was easily obtained at cheap prices. Into the 20th century, hunting and gathering was still important and sustained many families through the Great Depression and World War era when many commercial foods were scarce.

It was different in Europe, though. Mass urbanization happened centuries before it did in the United States. And not much European wilderness was left standing in recent history. But with the fall of the Roman Empire and heading into feudalism, many Europeans returned to a fair amount of hunting and gathering, during which time general health improved in the population. Restrictive laws about land use eventually made that difficult, and the land enclosure movement made it impossible for most Europeans.

Even so, all of that is fairly recent in the big scheme of things. It took many millennia of agriculture before it more fully replaced hunting, fishing, trapping, and gathering. In places like the United States, that change is well within living memory. When some of my ancestors immigrated here in the 1600s, Britain and Europe still relied on plenty of wild food procurement to support their populations. And once here, wild foods were even more plentiful and a lot less work than farming.

Many early American farmers didn’t grow food so much for their own diet as to sell it on the market, sometimes in the form of the popular grain-based alcohols. It was by making alcohol that rural farmers were able to get their product to market without it spoiling. I’m just speculating, but alcohol might have been the most widespread agricultural food of that era because water was often unsafe to drink.

Another commenter, Martin Back, made the same basic point: “Grain these days is cheap thanks to Big Ag and mechanization. It wasn’t always so. If the fields had to be ploughed by draught animals, and the grain weeded, harvested, and threshed by hand, the final product was expensive. Grain became a store of value and a medium of exchange. Eating grains was literally like eating money, so presumably they kept consumption to a minimum.”

In early agriculture, grain was more of a way to save wealth than a staple of the diet. It was saved for purposes of trade and also saved for hard times when no other food was available. What didn’t happen was to constantly consume grain-based foods every day and all day long — going from a breakfast with toast and cereal to lunch with a sandwich and maybe a salad with croutons, and then a snack of crackers in the afternoon before eating more bread or noodles for dinner.

Historical Examples

So, I am partly just speculating. But it’s informed speculation. I base my view on specific examples. The most obvious example is hunter-gatherers, poor by the standards of modern industrialization while maintaining great health, as long as their traditional way of life can be maintained. Many populations that are materially better off in terms of a capitalist society (access to comfortable housing, sanitation, healthcare, an abundance of food in grocery stores, etc.) are not better off in terms of chronic diseases.

As the main example I already mentioned, poor Americans have often been a quite healthy lot, as compared to other populations around the world. It is true that poor Americans weren’t particularly healthy in the early colonial period, specifically in Virginia because of indentured servitude. And it’s true that poor Americans today are fairly bad off because of the cheap industrialized diet. Yet for the couple of centuries or so in between, they were doing quite well in terms of health, with lots of access to nutrient-dense wild foods. That point is emphasized by looking at other similar populations at the time, such as back in Europe.

Let’s do some other comparisons. The poor in the Roman Empire did not do well, even when they weren’t enslaved. That was for many reasons, such as growing urbanization and its attendant health risks. When the Roman Empire fell, many of the urban centers collapsed. The poor returned to a more rural lifestyle that depended on a fair amount of wild foods. Studies done on their remains show their health improved during that time. Then at the end of feudalism, with the enclosure movement and the return of mass urbanization, health went back into decline.

Now I’ll consider the early Egyptians. I’m not sure if there is any info about the diet and health of poor Egyptians. But clearly the ruling class had far from optimal health. It’s hard to make comparisons between then and now, though, because it was an entirely different kind of society. The early Bronze Age civilizations were mostly small city-states that lacked much hierarchy. Early Egypt didn’t even have the most basic infrastructure, such as maintained roads and bridges. And the most recent evidence indicates that the pyramid workers weren’t slaves but instead worked freely and seem to have been fed fairly well, whatever that may or may not indicate about their socioeconomic status. The fact that the poor weren’t mummified leaves us with scant evidence that would more directly inform us.

On the other hand, no one can doubt that there have been plenty of poor populations who had truly horrific living standards with much sickness, suffering, and short lifespans. That is particularly true over the millennia as agriculture became ever more central, since that meant periods of abundance alternating with periods of deficiency and sometimes starvation, often combined with weakened immune systems and rampant sickness. That was less the case for the earlier small city-states with less population density and surrounded by the near constant abundance of wilderness areas.

As always, it depends on what are the specifics we are talking about. Also, any comparison and conclusion is relative.

My mother grew up in a family that hunted, and at the time there was a certain amount of access to natural areas for many Americans, something that helped a large part of the population get through the Great Depression and the world war era. Nonetheless, by the time of my mother’s childhood, overhunting had depleted most of the wild game (bison, bear, deer, etc. were no longer around), and so her family relied on less desirable foods such as squirrel, raccoon, and opossum; even the fish they ate were less than optimal, coming from waters highly polluted by the very factories and railroad her family worked in. So, the wild food opportunities weren’t nearly as good as they had been a half century earlier, much less in the prior centuries.

Not All Poverty is the Same

Being poor today means a lot of things that it didn’t mean in the past. The high rates of heavy metal toxicity seen today were rarely seen among previous poor populations. Today, 40% of global deaths are caused by air pollution, primarily affecting the poor, which is also extremely different from the past. Beyond that, inequality has grown larger than ever before, and it has been strongly correlated with high rates of stress, disease, homicide, and suicide. Such inequality is also seen in terms of climate change, droughts, refugee crises, and war/occupation.

Here is what Sasha wrote in response to me: “I agree with a lot of your points, except with your assertion that “the poor ate fairly well in many societies especially when they had access to wild sources of food”. I know how the poor ate in Russia in the beginning of the 20th century and how the poor eat now in the former Soviet republics and in India. Their diet is very poor even though they can have access to wild sources of food. I don’t know what the situation was for the poor in ancient Egypt but I would be very surprised if it was better than in modern day India or former Soviet Union.”

I’d imagine modern Russia has high inequality similar to the US. As for modern India, it is one of the most impoverished, densely populated, and malnourished societies around. And modern industrialization did major harm to Hindu Indians, because studies show that traditional vegetarians got a fair amount of nutrients from the insects that were mixed in with pre-modern agricultural goods. Both Russia and India have other problems related to neoliberalism that weren’t factors in the past. It’s an entirely different kind of poverty these days. Even if some Russians have some access to wild foods, I’m willing to bet they have nowhere near the access that was available in previous generations, centuries, and millennia.

Compare modern poverty to that of feudalism. At least in England, feudal peasants were guaranteed to be taken care of in hard times. The Church, a large part of local governance at the time, was tasked with feeding and taking care of the poor and needy, from orphans to widows. They were tight communities that took care of their own, something that no longer exists in most of the world where the individual is left to suffer and struggle. Present Social Darwinian conditions are not the norm for human societies across history. The present breakdown of families and communities is historically unprecedented.

Socialized Medicine & Externalized Costs
An Invisible Debt Made Visible
On Conflict and Stupidity
Inequality in the Anthropocene
Capitalism as Social Control

The Abnormal Norms of WEIRD Modernity

Everything about present populations is extremely abnormal. This is seen in diet as elsewhere. Let me return to the quote I began this post with. “Not until World War II, sixty years later, were measures taken to address the vitamin and mineral deficiencies caused by these grain milling methods. They caught the government’s attention only when 40 percent of the raw recruits drafted by our military proved to be so malnourished that they could not pass a physical and were declared unfit for duty.” * So, what had happened to the health of the American population?

Well, there were many changes. Overhunting, as I already said, made many wild game species extinct or eliminated them from local areas, such that my mother, born in a rural farm state, never saw a white-tailed deer growing up. Also, much earlier, after the Civil War, a new form of enclosure movement happened as laws were passed to prevent people, specifically the then-free blacks, from hunting and foraging wherever they wanted (early American laws had often protected the right of anyone to hunt, forage plants, collect timber, etc. from any land that was left open, whether or not it was owned by someone). The carryover from the feudal commons was finally and fully eliminated. It was also the end of the era of free-range cattle ranching, the end having come with the invention of barbed wire. Access to wild foods was further reduced by the creation and enforcement of protected lands (e.g., the federal park system), which very much targeted the poor who up to that point had relied upon wild foods for health and survival.

All of that was combined with mass urbanization and industrialization, with all their new forms of pollution, stress, and inequality. Processed foods were becoming more widespread at the time. Around the turn of the century, unhealthy, industrialized vegetable oils became heavily marketed and hence popular, replacing butter and lard. Also, muckraking about the meat industry scared Americans off from meat, and consumption precipitously dropped. As such, in the decades prior to World War II, the American diet had already shifted toward what we now know. A new young generation had grown up on that industrialized and processed diet, and those young people were the ones showing up as recruits for the military. This new diet, in such a short period, had caused mass malnourishment. It was a mass experiment that showed failure early on, and yet we continue the same basic experiment, not only continuing it but making it far worse.

Government officials and health authorities blamed it on bread production. Refined flour had become widely available because of industrialization. The refining removed all the nutrients that gave bread any health value. In response, there was a movement to fortify bread, initially enforced by federal law and later by state laws. That helped some, but obviously the malnourishment was caused by many other factors that weren’t appreciated by most at the time, even though this was the same period when Weston A. Price’s work was published. Nutritional science was young then, and most nutrients were still undiscovered or else unappreciated. Throwing a few lab-produced vitamins back into food barely scratches the surface of the nutrient-density that was lost.

Most Americans continue to have severe nutritional deficiencies. We don’t recognize this fact because being underdeveloped and sickly has become normalized, maybe even in the minds of most doctors and health officials. Besides, many of the worst symptoms don’t show up until decades later, often as chronic diseases of old age, although increasingly seen among the young. Far fewer Americans today would meet the health standards of World War recruits. It’s been a steady decline, despite the miracles of modern medicine in treating symptoms and delaying death.

* The data on the British show an even earlier shift into malnourishment, because imperial trade brought an industrialized diet sooner to the British population. Also, rural life with a greater diet of wild foods had more quickly disappeared, as compared to the US. The fate of the British in the late 1800s foreshadowed what would happen more than a half century later on the other side of the ocean.

Lore of Nutrition
by Tim Noakes
pp. 373-375

The mid-Victorian period between 1850 and 1880 is now recognised as the golden era of British health. According to P. Clayton and J. Rowbotham, 47 this was entirely due to the mid-Victorians’ superior diet. Farm-produced real foods were available in such surplus that even the working-class poor were eating highly nutritious foods in abundance. As a result, life expectancy in 1875 was equal to, or even better than, it is in modern Britain, especially for men (by about three years). In addition, the profile of diseases was quite different when compared to Britain today.

The authors conclude:

[This] shows that medical advances allied to the pharmaceutical industry’s output have done little more than change the manner of our dying. The Victorians died rapidly of infection and/or trauma, whereas we die slowly of degenerative disease. It reveals that with the exception of family planning, the vast edifice of twentieth century healthcare has not enabled us to live longer but has in the main merely supplied methods of suppressing the symptoms of degenerative disease which have emerged due to our failure to maintain mid-Victorian nutritional standards. 48

This mid-Victorians’ healthy diet included freely available and cheap vegetables such as onions, carrots, turnips, cabbage, broccoli, peas and beans; fresh and dried fruit, including apples; legumes and nuts, especially chestnuts, walnuts and hazelnuts; fish, including herring, haddock and John Dory; other seafood, including oysters, mussels and whelks; meat – which was considered ‘a mark of a good diet’ so that ‘its complete absence was rare’ – sourced from free-range animals, especially pork, and including offal such as brain, heart, pancreas (sweet breads), liver, kidneys, lungs and intestine; eggs from hens that were kept by most urban households; and hard cheeses.

Their healthy diet was therefore low in cereals, grains, sugar, trans fats and refined flour, and high in fibre, phytonutrients and omega-3 polyunsaturated fatty acids, entirely compatible with the modern Paleo or LCHF diets.

This period of nutritional paradise changed suddenly after 1875, when cheap imports of white flour, tinned meat, sugar, canned fruits and condensed milk became more readily available. The results were immediately noticeable. By 1883, the British infantry was forced to lower its minimum height for recruits by three inches; and by 1900, 50 per cent of British volunteers for the Boer War had to be rejected because of undernutrition. The changes would have been associated with an alteration in disease patterns in these populations, as described by Yellowlees (Chapter 2).

On Obesity and Malnourishment

There is no contradiction, by the way, between rampant nutritional deficiencies and the epidemic of obesity. Gary Taubes noted that the dramatic rise of obesity in America began early in the last century, which is to say that it is not a problem that came out of nowhere with the present younger generations. Americans have been getting fatter for a while now. Specifically, they were getting fatter while at the same time being malnourished, partly because of refined flour, which is as empty a carb as is possible.

Taubes emphasizes the point that this seeming paradox has often been observed among poor populations around the world: a lack of optimal nutrition that leads to ever more weight gain, sometimes with children being skinny to an unhealthy degree only to grow up to be fat. No doubt many Americans in the early 1900s were dealing with much poverty and the lack of nutritious foods that often goes with it. As for today, nutritional deficiencies look different because of enrichment, but they persist nonetheless in many other ways. Also, as Keith Payne argues in The Broken Ladder, growing inequality mimics poverty in the conflict and stress it causes. And inequality has everything to do with food quality, as seen with many poor areas being food deserts.

I’ll give you a small taste of Taubes’ discussion. It is from the introduction to one of his books, published a few years ago. If you read the book, look at the section immediately following the passage below. He gives examples of tribes that were poor, didn’t overeat, and did hard manual labor. Yet they were getting obese, even as nearby tribes sometimes remained a healthy weight. The only apparent difference was what they were eating, not how much they were eating. The populations that saw major weight gain had adopted a grain-based diet, typically because of government rations or government stores.

Why We Get Fat
by Gary Taubes
pp. 17-19

In 1934, a young German pediatrician named Hilde Bruch moved to America, settled in New York City, and was “startled,” as she later wrote, by the number of fat children she saw—“really fat ones, not only in clinics, but on the streets and subways, and in schools.” Indeed, fat children in New York were so conspicuous that other European immigrants would ask Bruch about it, assuming that she would have an answer. What is the matter with American children? they would ask. Why are they so bloated and blown up? Many would say they’d never seen so many children in such a state.

Today we hear such questions all the time, or we ask them ourselves, with the continual reminders that we are in the midst of an epidemic of obesity (as is the entire developed world). Similar questions are asked about fat adults. Why are they so bloated and blown up? Or you might ask yourself: Why am I?

But this was New York City in the mid-1930s. This was two decades before the first Kentucky Fried Chicken and McDonald’s franchises, when fast food as we know it today was born. This was half a century before supersizing and high-fructose corn syrup. More to the point, 1934 was the depths of the Great Depression, an era of soup kitchens, bread lines, and unprecedented unemployment. One in every four workers in the United States was unemployed. Six out of every ten Americans were living in poverty. In New York City, where Bruch and her fellow immigrants were astonished by the adiposity of the local children, one in four children were said to be malnourished. How could this be?

A year after arriving in New York, Bruch established a clinic at Columbia University’s College of Physicians and Surgeons to treat obese children. In 1939, she published the first of a series of reports on her exhaustive studies of the many obese children she had treated, although almost invariably without success. From interviews with her patients and their families, she learned that these obese children did indeed eat excessive amounts of food—no matter how much either they or their parents might initially deny it. Telling them to eat less, though, just didn’t work, and no amount of instruction or compassion, counseling, or exhortations—of either children or parents—seemed to help.

It was hard to avoid, Bruch said, the simple fact that these children had, after all, spent their entire lives trying to eat in moderation and so control their weight, or at least thinking about eating less than they did, and yet they remained obese. Some of these children, Bruch reported, “made strenuous efforts to lose weight, practically giving up on living to achieve it.” But maintaining a lower weight involved “living on a continuous semi-starvation diet,” and they just couldn’t do it, even though obesity made them miserable and social outcasts.

One of Bruch’s patients was a fine-boned girl in her teens, “literally disappearing in mountains of fat.” This young girl had spent her life fighting both her weight and her parents’ attempts to help her slim down. She knew what she had to do, or so she believed, as did her parents—she had to eat less—and the struggle to do this defined her existence. “I always knew that life depended on your figure,” she told Bruch. “I was always unhappy and depressed when gaining [weight]. There was nothing to live for.… I actually hated myself. I just could not stand it. I didn’t want to look at myself. I hated mirrors. They showed how fat I was.… It never made me feel happy to eat and get fat—but I never could see a solution for it and so I kept on getting fatter.”

pp. 33-34

If we look in the literature—which the experts have not in this case—we can find numerous populations that experienced levels of obesity similar to those in the United States, Europe, and elsewhere today but with no prosperity and few, if any, of the ingredients of Brownell’s toxic environment: no cheeseburgers, soft drinks, or cheese curls, no drive-in windows, computers, or televisions (sometimes not even books, other than perhaps the Bible), and no overprotective mothers keeping their children from roaming free.

In these populations, incomes weren’t rising; there were no labor-saving devices, no shifts toward less physically demanding work or more passive leisure pursuits. Rather, some of these populations were poor beyond our ability to imagine today. Dirt poor. These are the populations that the overeating hypothesis tells us should be as lean as can be, and yet they were not.

Remember Hilde Bruch’s wondering about all those really fat children in the midst of the Great Depression? Well, this kind of observation isn’t nearly as unusual as we might think.

How Americans Used to Eat

Below is a relevant passage. It puts into context how extremely unusual the high-carb, low-fat diet of these past few generations has been. This is partly what informed some of my thoughts. We so quickly forget that the present dominance of a grain-based diet wasn’t always the case, likely not even in most agricultural societies until quite recently. In fact, the earlier American diet is still within living memory, although those left to remember it are quickly dying off.

Let me explain why the history of diets matters. One of the arguments for forcing official dietary recommendations onto the entire population was the belief that Americans in a mythical past ate less meat, fat, and butter while eating more bread, legumes, and vegetables. This turns out to have been a trick of limited data.

We now know, from better data, that the complete opposite was the case. And we have further data showing that the rise of the conventional diet has coincided with the rise of obesity and chronic diseases. That isn’t to say eating more vegetables is bad for your health, but we do know that even as the average American intake of vegetables has gone up, so have all the diet-related health conditions. During this time, what went down was the consumption of all the traditional foods of the American diet going back to the colonial era: wild game, red meat, organ meat, lard, and butter — all the foods Americans ate in huge amounts prior to the industrialized diet.

What added to the confusion and misinterpretation of the evidence was timing. Diet and nutrition were first seriously studied right at the moment when, for most populations, they had already changed. That was the failure of Ancel Keys’ research on what came to be called the Mediterranean diet (see Sally Fallon Morrell’s Nourishing Diets). The population was recuperating from World War II, which had devastated its traditional way of life, including its diet. Keys took the post-war deprivation diet as being the historical norm, but the reality was far different. Cookbooks and other evidence from before the war show that this population used to eat higher levels of meat and fat, including saturated fat. So, the very people he focused on had grown up and spent most of their lives on a diet that was, at that moment, no longer available because of the disruption of the food system. What good health Keys observed came from a lifetime of eating a different diet. Combined with cherry-picking of data and biased analysis, Keys came to a conclusion that was as wrong as wrong could be.

Slightly earlier, Weston A. Price was able to see a different picture. He intentionally traveled to the places where traditional diets remained fully in place. And the devastation of World War II had yet to happen. Price came to the conclusion that what mattered most of all was nutrient-density. Sure, the vegetables eaten would have been of a higher quality than we get today, largely because they were heirloom cultivars grown on healthy soil. Nutrient-dense foods can only come from nutrient-dense soil, whereas today our food is nutrient-deficient because our soil is highly depleted. The same goes for animal foods. Animals pastured on healthy land will produce healthy dairy, eggs, meat, and fat; these foods will be high in omega-3s and the fat-soluble vitamins.

No matter whether it is coming from plant sources or animal sources, nutrient-density might be the most important factor of all. Fat is meaningful in this context because fat is where the fat-soluble vitamins are found and it is through fat that they are metabolized. In turn, the fat-soluble vitamins play a key role in the absorption and processing of numerous other nutrients, not to mention a key role in numerous functions in the body. Nutrient-density and fat-density go hand in hand in terms of general health. That is what early Americans were getting in eating so much wild food, not only wild game but also wild greens, fruit, and mushrooms. And nutrient-density is precisely what we are lacking today, as the nutrients have been intentionally removed to make more palatable commercial foods.

Once again, this has a class dimension, since the wealthier have more access to nutrient-dense foods. Few poor people could afford to shop at a high-end health food store, even if one were located near their home. But it was quite different in the past, when nutrient-dense foods were available to everyone and sometimes more available to the poor concentrated in rural areas. If we want to improve public health, the first thing we should do is return to this historical norm.

The Big Fat Surprise
by Nina Teicholz
pp. 123-131

Yet despite this shaky and often contradictory evidence, the idea that red meat is a principal dietary culprit has thoroughly pervaded our national conversation for decades. We have been led to believe that we’ve strayed from a more perfect, less meat-filled past. Most prominently, when Senator McGovern announced his Senate committee’s report, called Dietary Goals, at a press conference in 1977, he expressed a gloomy outlook about where the American diet was heading. “Our diets have changed radically within the past fifty years,” he explained, “with great and often harmful effects on our health.” Hegsted, standing at his side, criticized the current American diet as being excessively “rich in meat” and other sources of saturated fat and cholesterol, which were “linked to heart disease, certain forms of cancer, diabetes and obesity.” These were the “killer diseases,” said McGovern. The solution, he declared, was for Americans to return to the healthier, plant-based diet they once ate.

The New York Times health columnist Jane Brody perfectly encapsulated this idea when she wrote, “Within this century, the diet of the average American has undergone a radical shift away from plant-based foods such as grains, beans and peas, nuts, potatoes, and other vegetables and fruits and toward foods derived from animals—meat, fish, poultry, eggs and dairy products.” It is a view that has been echoed in literally hundreds of official reports.

The justification for this idea, that our ancestors lived mainly on fruits, vegetables, and grains, comes mainly from the USDA “food disappearance data.” The “disappearance” of food is an approximation of supply; most of it is probably being eaten, but much is wasted, too. Experts therefore acknowledge that the disappearance numbers are merely rough estimates of consumption. The data from the early 1900s, which is what Brody, McGovern, and others used, are known to be especially poor. Among other things, these data accounted only for the meat, dairy, and other fresh foods shipped across state lines in those early years, so anything produced and eaten locally, such as meat from a cow or eggs from chickens, would not have been included. And since farmers made up more than a quarter of all workers during these years, local foods must have amounted to quite a lot. Experts agree that this early availability data are not adequate for serious use, yet they cite the numbers anyway, because no other data are available. And for the years before 1900, there are no “scientific” data at all.

In the absence of scientific data, history can provide a picture of food consumption in the late eighteenth to nineteenth century in America. Although circumstantial, historical evidence can also be rigorous and, in this case, is certainly more far-reaching than the inchoate data from the USDA. Academic nutrition experts rarely consult historical texts, considering them to occupy a separate academic silo with little to offer the study of diet and health. Yet history can teach us a great deal about how humans used to eat in the thousands of years before heart disease, diabetes, and obesity became common. Of course we don’t remember now, but these diseases did not always rage as they do today. And looking at the food patterns of our relatively healthy early-American ancestors, it’s quite clear that they ate far more red meat and far fewer vegetables than we have commonly assumed.

Early-American settlers were “indifferent” farmers, according to many accounts. They were fairly lazy in their efforts at both animal husbandry and agriculture, with “the grain fields, the meadows, the forests, the cattle, etc, treated with equal carelessness,” as one eighteenth-century Swedish visitor described. And there was little point in farming since meat was so readily available.

The endless bounty of America in its early years is truly astonishing. Settlers recorded the extraordinary abundance of wild turkeys, ducks, grouse, pheasant, and more. Migrating flocks of birds would darken the skies for days. The tasty Eskimo curlew was apparently so fat that it would burst upon falling to the earth, covering the ground with a sort of fatty meat paste. (New Englanders called this now-extinct species the “doughbird.”)

In the woods, there were bears (prized for their fat), raccoons, bobolinks, opossums, hares, and virtual thickets of deer—so much that the colonists didn’t even bother hunting elk, moose, or bison, since hauling and conserving so much meat was considered too great an effort. IX

A European traveler describing his visit to a Southern plantation noted that the food included beef, veal, mutton, venison, turkeys, and geese, but he does not mention a single vegetable. Infants were fed beef even before their teeth had grown in. The English novelist Anthony Trollope reported, during a trip to the United States in 1861, that Americans ate twice as much beef as did Englishmen. Charles Dickens, when he visited, wrote that “no breakfast was breakfast” without a T-bone steak. Apparently, starting a day on puffed wheat and low-fat milk—our “Breakfast of Champions!”—would not have been considered adequate even for a servant.

Indeed, for the first 250 years of American history, even the poor in the United States could afford meat or fish for every meal. The fact that the workers had so much access to meat was precisely why observers regarded the diet of the New World to be superior to that of the Old. “I hold a family to be in a desperate way when the mother can see the bottom of the pork barrel,” says a frontier housewife in James Fenimore Cooper’s novel The Chainbearer.

Like the primitive tribes mentioned in Chapter 1, Americans also relished the viscera of the animal, according to the cookbooks of the time. They ate the heart, kidneys, tripe, calf sweetbreads (glands), pig’s liver, turtle lungs, the heads and feet of lamb and pigs, and lamb tongue. Beef tongue, too, was “highly esteemed.”

And not just meat but saturated fats of every kind were consumed in great quantities. Americans in the nineteenth century ate four to five times more butter than we do today, and at least six times more lard. X

In the book Putting Meat on the American Table, researcher Roger Horowitz scours the literature for data on how much meat Americans actually ate. A survey of eight thousand urban Americans in 1909 showed that the poorest among them ate 136 pounds a year, and the wealthiest more than 200 pounds. A food budget published in the New York Tribune in 1851 allots two pounds of meat per day for a family of five. Even slaves at the turn of the eighteenth century were allocated an average of 150 pounds of meat a year. As Horowitz concludes, “These sources do give us some confidence in suggesting an average annual consumption of 150–200 pounds of meat per person in the nineteenth century.”

About 175 pounds of meat per person per year! Compare that to the roughly 100 pounds of meat per year that an average adult American eats today. And of that 100 pounds of meat, more than half is poultry—chicken and turkey—whereas until the mid-twentieth century, chicken was considered a luxury meat, on the menu only for special occasions (chickens were valued mainly for their eggs). Subtracting out the poultry factor, we are left with the conclusion that per capita consumption of red meat today is about 40 to 70 pounds per person, according to different sources of government data—in any case far less than what it was a couple of centuries ago.

Yet this drop in red meat consumption is the exact opposite of the picture we get from public authorities. A recent USDA report says that our consumption of meat is at a “record high,” and this impression is repeated in the media. It implies that our health problems are associated with this rise in meat consumption, but these analyses are misleading because they lump together red meat and chicken into one category to show the growth of meat eating overall, when it’s just the chicken consumption that has gone up astronomically since the 1970s. The wider-lens picture is clearly that we eat far less red meat today than did our forefathers.

Meanwhile, also contrary to our common impression, early Americans appeared to eat few vegetables. Leafy greens had short growing seasons and were ultimately considered not worth the effort. They “appeared to yield so little nutriment in proportion to labor spent in cultivation,” wrote one eighteenth-century observer, that “farmers preferred more hearty foods.” Indeed, a pioneering 1888 report for the US government written by the country’s top nutrition professor at the time concluded that Americans living wisely and economically would be best to “avoid leafy vegetables,” because they provided so little nutritional content. In New England, few farmers even had many fruit trees, because preserving fruits required equal amounts of sugar to fruit, which was far too costly. Apples were an exception, and even these, stored in barrels, lasted several months at most.

It seems obvious, when one stops to think, that before large supermarket chains started importing kiwis from New Zealand and avocados from Israel, a regular supply of fruits and vegetables could hardly have been possible in America outside the growing season. In New England, that season runs from June through October or maybe, in a lucky year, November. Before refrigerated trucks and ships allowed the transport of fresh produce all over the world, most people could therefore eat fresh fruit and vegetables for less than half the year; farther north, winter lasted even longer. Even in the warmer months, fruit and salad were avoided, for fear of cholera. (Only with the Civil War did the canning industry flourish, and then only for a handful of vegetables, the most common of which were sweet corn, tomatoes, and peas.)

Thus it would be “incorrect to describe Americans as great eaters of either [fruits or vegetables],” wrote the historians Waverly Root and Richard de Rochemont. Although a vegetarian movement did establish itself in the United States by 1870, the general mistrust of these fresh foods, which spoiled so easily and could carry disease, did not dissipate until after World War I, with the advent of the home refrigerator.

So by these accounts, for the first two hundred and fifty years of American history, the entire nation would have earned a failing grade according to our modern mainstream nutritional advice.

During all this time, however, heart disease was almost certainly rare. Reliable data from death certificates is not available, but other sources of information make a persuasive case against the widespread appearance of the disease before the early 1920s. Austin Flint, the most authoritative expert on heart disease in the United States, scoured the country for reports of heart abnormalities in the mid-1800s, yet reported that he had seen very few cases, despite running a busy practice in New York City. Nor did William Osler, one of the founding professors of Johns Hopkins Hospital, report any cases of heart disease during the 1870s and eighties when working at Montreal General Hospital. The first clinical description of coronary thrombosis came in 1912, and an authoritative textbook in 1915, Diseases of the Arteries including Angina Pectoris, makes no mention at all of coronary thrombosis. On the eve of World War I, the young Paul Dudley White, who later became President Eisenhower’s doctor, wrote that of his seven hundred male patients at Massachusetts General Hospital, only four reported chest pain, “even though there were plenty of them over 60 years of age then.” XI About one fifth of the US population was over fifty years old in 1900. This number would seem to refute the familiar argument that people formerly didn’t live long enough for heart disease to emerge as an observable problem. Simply put, there were some ten million Americans of a prime age for having a heart attack at the turn of the twentieth century, but heart attacks appeared not to have been a common problem.

Was it possible that heart disease existed but was somehow overlooked? The medical historian Leon Michaels compared the record on chest pain with that of two other medical conditions, gout and migraine, which are also painful and episodic and therefore should have been observed by doctors to an equal degree. Michaels catalogs the detailed descriptions of migraines dating all the way back to antiquity; gout, too, was the subject of lengthy notes by doctors and patients alike. Yet chest pain is not mentioned. Michaels therefore finds it “particularly unlikely” that angina pectoris, with its severe, terrifying pain continuing episodically for many years, could have gone unnoticed by the medical community, “if indeed it had been anything but exceedingly rare before the mid-eighteenth century.” XII

So it seems fair to say that at the height of the meat-and-butter-gorging eighteenth and nineteenth centuries, heart disease did not rage as it did by the 1930s. XIII

Ironically—or perhaps tellingly—the heart disease “epidemic” began after a period of exceptionally reduced meat eating. The publication of The Jungle, Upton Sinclair’s fictionalized exposé of the meatpacking industry, caused meat sales in the United States to fall by half in 1906, and they did not revive for another twenty years. In other words, meat eating went down just before coronary disease took off. Fat intake did rise during those years, from 1909 to 1961, when heart attacks surged, but this 12 percent increase in fat consumption was not due to a rise in animal fat. It was instead owing to an increase in the supply of vegetable oils, which had recently been invented.

Nevertheless, the idea that Americans once ate little meat and “mostly plants”—espoused by McGovern and a multitude of experts—continues to endure. And Americans have for decades now been instructed to go back to this earlier, “healthier” diet that seems, upon examination, never to have existed.

Ketogenic Diet and Neurocognitive Health

Below is a passage from Ketotarian by Will Cole. It can be found in Chapter 1, titled “the ketogenic diet (for better and worse)”, on pp. 34-38 of the printed book (first edition) or pp. 28-31 of the Google ebook. I share it here because it is a great up-to-date summary of the value of the ketogenic diet. The ketogenic diet is the low-carb diet pushed to its furthest extent, where you burn fat instead of sugar; that is to say, the body prioritizes and more efficiently uses ketones in place of glucose.

The brain, in particular, prefers ketones. That is why I decided to share a passage specifically on neurological health, as diet and nutrition aren’t the first things most people think of for what often gets framed as mental health, which is typically treated with psychiatric medications. But considering the severely limited efficacy of entire classes of such drugs (e.g., antidepressants), maybe it’s time for a new paradigm of treatment.

The basic advantage of ketosis is that, until modernity, most humans for most of human evolution (and going back into hominid evolution) were largely dependent on a high-fat diet for normal functioning. This is indicated by how the body uses ketones more efficiently than glucose. What the body does with carbs and sugar, though, is either use them right away or store them as fat. This is why hunter-gatherers would, when possible, carb-load right before winter in order to fatten themselves up. We have applied this knowledge by using carbs to fatten up animals before slaughter.

Besides fattening up for winter in northern climes, hunter-gatherers focus most of their diet on fats and oils, in that, when available, they choose to eat far more fats and oils than they eat meat or vegetables. They do most of their hunting during the season when animals are fattest and, if they aren’t simply doing a mass slaughter, they specifically target the fattest individual animals. After the kill, they often throw the lean meat to the dogs or mix it with fat for later use (e.g., pemmican).

This is why, prior to agriculture, ketosis was the biological and dietary norm. Even farmers, until recent history, were largely dependent on supplementing their diet with hunting and gathering. Up until the 20th century, most Americans ate more meat than bread, while intake of vegetables and fruits was minor and mostly seasonal. The meat most Americans, including city-dwellers, were eating was wild game because of its abundance in nearby wilderness areas; and, going by cookbooks of the time, fats and oils were at the center of the diet.

Anyway, simply by reading the following passage, you will become better informed on this topic than not only the average American but, sadly, also the average American doctor. This isn’t the kind of info that is emphasized in medical schools, despite it being fairly well researched at this point (see the appended section of the author’s notes). “A study in the International Journal of Adolescent Medicine and Health assessed the basic nutrition and health knowledge of medical school graduates entering a pediatric residency program and found that, on average, they answered only 52 percent of eighteen questions correctly,” as referenced by Dr. Cole. He concluded that, “In short, most mainstream doctors would fail nutrition” (see previous post).

Knowledge is a good thing. And so here is some knowledge.

* * *

NEUROLOGICAL IMPROVEMENTS

Around 25 percent of your body’s cholesterol is found in your brain, (19) and remember, your brain is composed of 60 percent fat. (20) Think about that. Over half of your brain is fat! What we have been traditionally taught when it comes to “low-fat is best” ends up depriving your brain of the very thing it is made of. It’s not a coincidence that many of the potential side effects associated with statins—cholesterol-lowering drugs—are brain problems and memory loss. (21)

Your gut and brain actually form from the same fetal tissue in the womb and continue their special bond throughout your entire life through the gut-brain axis and the vagus nerve. Ninety-five percent of your happy neurotransmitter serotonin is produced and stored in your gut, so you can’t argue that your gut doesn’t influence the health of your brain. (22) The gut is known as the “second brain” in the medical literature, and a whole area of research known as the cytokine model of cognitive function is dedicated to examining how chronic inflammation and poor gut health can directly influence brain health. (23)

Chronic inflammation leads to not only increased gut permeability but blood-brain barrier destruction as well. When this protection is compromised, your immune system ends up working in overdrive, leading to brain inflammation. (24) Inflammation can decrease the firing rate of neurons in the frontal lobe of the brain in people with depression. (25) Because of this, antidepressants can be ineffective since they aren’t addressing the problem. And this same inflammatory oxidative stress in the hypothalamic cells of the brain is one potential factor of brain fog. (26)

Exciting emerging science is showing that a ketogenic diet can be more powerful than some of the strongest medications for brain-related problems such as autism, attention deficit/hyperactivity disorder (ADHD), bipolar disorder, schizophrenia, anxiety, and depression. (27) Through a ketogenic diet, we can not only calm brain-gut inflammation but also improve the gut microbiome. (28)

Ketones are also extremely beneficial because they can cross the blood-brain barrier and provide powerful fuel to your brain, providing mental clarity and improved mood. Their ability to cross the blood-brain barrier paired with their natural anti-inflammatory qualities provides incredible healing properties when it comes to improving traumatic brain injury (TBI) as well as neurodegenerative diseases. (29)

Medium-chain triglycerides (MCTs), found in coconuts (a healthy fat option in the Ketotarian diet), increase beta-hydroxybutyrate and are proven to enhance memory function in people with Alzheimer’s disease (30) as well as protect against neurodegeneration in people with Parkinson’s disease. (31) Diets rich in polyunsaturated fats, wild-caught fish specifically, are associated with a 60 percent decrease in Alzheimer’s disease. (32) Another study of people with Parkinson’s disease also found that the severity of their condition improved 43 percent after just one month of eating a ketogenic diet. (33) Studies have also shown that a ketogenic diet improves autism symptoms. (34) Contrast that with high-carb diets, which have been shown to increase the risk of Alzheimer’s disease and other neurodegenerative conditions. (35)

TBI or traumatic brain injury is another neurological area that can be helped through a ketogenic diet. When a person sustains a TBI, it can result in impaired glucose metabolism and inflammation, both of which are stabilized through a healthy high-fat ketogenic diet. (36)

Ketosis also increases the brain-derived-neurotrophic factor (BDNF), which protects existing neurons and encourages the growth of new neurons—another neurological benefit. (37)

In its earliest phases, modern ketogenic diet research was focused on treating epilepsy. (38) Children with epilepsy who ate this way were more alert, were more well behaved, and had more enhanced cognitive function than those who were treated with medication. (39) This is due to increased mitochondrial function, reduced oxidative stress, and increased gamma-aminobutyric acid (GABA) levels, which in turn helps reduce seizures. These mechanisms can also provide benefits for people with brain fog, anxiety, and depression. (40)

METABOLIC HEALTH

Burning ketones rather than glucose helps maintain balanced blood sugar levels, making the ketogenic way of eating particularly beneficial for people with metabolic disorders, diabetes, and weight-loss resistance.

Insulin resistance, the negative hormonal shift in metabolism that we mentioned earlier, is at the core of blood sugar problems and ends up wreaking havoc on the body, eventually leading to heart disease, weight gain, and diabetes. As we have seen, healthy fats are a stronger form of energy than glucose. The ketogenic diet lowers insulin levels and reduces inflammation as well as improving insulin receptor site sensitivity, which helps the body function the way it was designed. Early trial reports have shown that type 2 diabetes symptoms can be reversed in just ten weeks on the ketogenic diet! (41)

Fascinating research has been done correlating blood sugar levels and Alzheimer’s disease. In fact, so much so that the condition is now being referred to by some experts as type 3 diabetes . With higher blood sugar and increased insulin resistance comes more degeneration in the hippocampus, your brain’s memory center. (42) It’s because of this that people with type 1 and 2 diabetes have a higher risk of developing Alzheimer’s disease. This is another reason to get blood sugar levels balanced and have our brain burn ketones instead.

Notes:

* * *

I came across something interesting on the Ketogenic Forum, a discussion of a video. It’s about a Dateline report on the ketogenic diet from almost a quarter century ago, back when I was a senior in high school. So, not only has the ketogenic diet been known in the medical literature for about a century, but it has even shown up in mainstream reporting for decades. Yet ketogenic-oriented and related low-carb diets such as the paleo diet get called fad diets, and the low-carb diet has been well known for even longer, going back to the 19th century.

The Dateline show was about ketosis used as a treatment for serious medical conditions. But even though it was a well-known treatment for epilepsy, doctors apparently still weren’t commonly recommending it. In fact, the keto diet wasn’t even mentioned as an option by a national expert, who focused instead on endless drugs and even surgery. After doing his own research into his son’s seizures, the father discovered the keto diet in the medical literature. The doctor was asked why he didn’t recommend it for the child’s seizures when it was known to have the highest efficacy rate. The doctor essentially had no answer other than to say that there were more drugs he could try, even as he admitted that no drug comes close in comparison.

As one commenter put it, “Seems like even back then the Dr’s knew drugs would always trump diet even though the success rate of the keto diet was 50-70%. No drugs at the time could even come close to that. And the one doctor still insisted they should try even more drugs to help Charlie even after Keto. Ugh!” Everyone knows the diet works. It’s been proven beyond all doubt. But there is a simple problem. There is no profit to be made from an easy and effective non-pharmaceutical solution.

This doctor knew there was a better possibility to offer the family and chose not to mention it. The consequence of his medical malfeasance is that the kid may have ended up with permanent brain damage from the seizures and from the side effects of medications. The father was shocked and angry. You’d think cases like this would have woken up the medical community, right? Well, you’d be wrong if you thought so. A quarter of a century later, most doctors continue to act clueless that these kinds of diets can help numerous health conditions. It’s not that the information isn’t available, as many of these doctors knew about it even back then. But it simply doesn’t fit into conventional medicine, nor within the big drug and big insurance framework.


Most Mainstream Doctors Would Fail Nutrition

“A study in the International Journal of Adolescent Medicine and Health assessed the basic nutrition and health knowledge of medical school graduates entering a pediatric residency program and found that, on average, they answered only 52 percent of eighteen questions correctly. In short, most mainstream doctors would fail nutrition.”
~Dr. Will Cole

That is amazing. The point is emphasized by the fact that these are doctors fresh out of medical school. If they were never taught this info in the immediately preceding years of intensive education and training, they are unlikely to pick up more knowledge later in their careers. These young doctors are among the most well-educated people in the world, as few fields are as hard to enter and the drop-out rate of medical students is phenomenal. These graduates entering residency programs are among the smartest of Americans, the cream of the crop, having been taught at some of the best schools in the world. They are highly trained experts in their field, but obviously this expertise doesn’t include nutrition.

Think about this. Doctors are who most people turn to for serious health advice. They are the ultimate authority figures that the average person directly meets and talks to. If a cardiologist answered only 52 percent of questions on heart health correctly, would you follow her advice and let her do heart surgery on you? I’d hope not. In that case, why would you listen to the dietary opinion of the typical doctor who is ill-informed? Nutrition isn’t a minor part of health, that is for sure. It is the one area where an individual has some control over their life and so isn’t a mere victim of circumstance. Research shows that simple changes in diet and nutrition, not to mention lifestyle, can have dramatic results. Yet few people have that knowledge because most doctors and other officials, to put it bluntly, are ignorant. Anyone who points out this state of affairs in mainstream thought generally isn’t received with welcoming gratitude, much less friendly dialogue and rational debate.

In reading about the paleo diet, a pattern I’ve noticed is that few critics of it know what the diet is and what is advocated by those who adhere to it. It’s not unusual to see, following a criticism of the paleo diet, a description of dietary recommendations that are basically in line with the paleo diet. Their own caricature blinds them to the reality, obfuscating the common ground of agreement or shared concern. I’ve seen the same kind of pattern in the critics of many alternative views: genetic determinists against epigenetic researchers and social scientists, climate change denialists against climatologists, Biblical apologists against Jesus mythicists, Chomskyan linguists against linguistic relativists, etc. In such cases, there is always plenty of fear toward those posing a challenge, and so they are treated as the enemy to be attacked. And it is treated as a battle in which the spoils go to the victor, with those in dominance assuming they will be the victor.

After debating some people on a blog post by a mainstream doctor (Paleo-suckered), it became clear to me how attractive genetic determinism and biological essentialism are to many defenders of conventional medicine: the assumption that there isn’t much you can do about your health other than do what the doctor tells you and take your meds (these kinds of views may be on the decline, but they are far from down for the count). What bothers them isn’t limited to the paleo diet but extends seemingly to almost any diet as such, excluding official dietary recommendations. They see diet advocates as quacks, faddists, and cultists who are pushing an ideological agenda, and they feel like they are being blamed for their own ill health; from their perspective, it is unfair to tell someone they are capable of improving their diet, at least beyond the standard advice of eating your veggies and whole grains while gulping down your statins and shooting up your insulin.

As a side note, I’m reminded of how what often gets portrayed as alternative wasn’t always seen that way. Linguistic relativism was a fairly common view prior to the Chomskyan counter-revolution. Likewise, much of what gets promoted by the paleo diet was considered common sense in mainstream medical thought earlier last century and in the centuries prior (e.g., carbs are fattening, easily observed back in the day when most people lived on farms, as carbs were and still are how animals get fattened for the slaughter). In many cases, there are old debates that go in cycles. But the cycles are so long, often extending over centuries, that old views appear as if radically new and so easily dismissed as such.

Early Christian heresiologists admitted to the facts behind Jesus mythicism, but their only defense was that the devil had done it by planting parallels in prior religions. During the Enlightenment Age, many people kept bringing up these religious parallels, and this was part of mainstream debate. Yet it was suppressed with the rise of literal-minded fundamentalism during the modern era. And the battle between the Chomskyites, genetic determinists, etc. and their opponents is part of a cultural conflict that goes back at least to the ancient Greeks, to the competing approaches of Plato and Aristotle (Daniel Everett discusses this in Dark Matter of the Mind; see this post).

To return to the topic at hand, the notion of food as medicine, a premise of the paleo diet, also goes back to the ancient Greeks — in fact, it originates with the founder of modern medicine, Hippocrates (he is also credited with saying that “All disease begins in the gut,” a slight exaggeration of a common view about the importance of gut health, a key area of connection between the paleo diet and alternative medicine). What we now call functional medicine, treating people holistically, used to be the standard practice of family doctors for centuries and probably millennia, going back to medicine men and women. But this caring attitude and practice went by the wayside because it took time to spend with patients and insurance companies wouldn’t pay for it. Traditional healthcare that we now think of as alternative is maybe not possible within a for-profit model, but I’d say that is more of a criticism of the for-profit model than a criticism of traditional healthcare.

The dietary denialists love to dismiss the paleo lifestyle as a ‘fad diet’. But as Timothy Noakes argues, it is the least faddish diet around. It is based on research into what humans have been eating since the Paleolithic era and what hominids have been eating for millions of years. Even as a specific diet, it is the earliest official dietary recommendation given by medical experts. Back when it was popularized, it was called the Banting diet, and the only complaint the medical authorities had was not that it was wrong but that it was right and they disliked it being promoted in the popular literature, as they considered dietary advice to be their turf to defend. Timothy Noakes wrote that,

“Their first error is to label LCHF/Banting ‘the latest fashionable diet’; in other words, a fad. This is wrong. The Banting diet takes its name from an obese 19th-century undertaker, William Banting. First described in 1863, Banting is the oldest diet included in medical texts. Perhaps the most iconic medical text of all time, Sir William Osler’s The Principles and Practice of Medicine , published in 1892, includes the Banting/Ebstein diet as the diet for the treatment of obesity (on page 1020 of that edition). 13 The reality is that the only non-fad diet is the Banting diet; all subsequent diets, and most especially the low-fat diet that the UCT academics promote, are ‘the latest fashionable diets’.”
(Lore of Nutrition, p. 131)

The dominant paradigm maintains its dominance by convincing most people that what is perceived as ‘alternative’ was always that way or was a recent invention of radical thought. The risk the dominant paradigm takes is that, in attacking other views, it unintentionally acknowledges and legitimizes them. That happened in South Africa when the government spent hundreds of thousands of dollars attempting to destroy the career of Dr. Timothy Noakes, but because he was such a knowledgeable expert he was able to defend his medical views with scientific evidence. A similar thing happened when the Chomskyites viciously attacked the linguist Daniel Everett who worked in the field with native tribes, but it turned out he was a better writer with more compelling ideas and also had the evidence on his side. What the dogmatic assailants ended up doing, in both cases, was bringing academic and public attention to these challengers to the status quo.

Even though these attacks don’t always succeed, they are successful in setting examples. Even a pyrrhic victory is highly effective in demonstrating raw power in the short term. Not many doctors would be willing to risk their career as Timothy Noakes did, and even fewer would have the capacity to defend themselves to such an extent. It’s not only the government that might go after a doctor but also private litigators. And if a doctor doesn’t toe the line, that doctor can lose their job in a hospital or clinic, be denied Medicare reimbursement, be blacklisted from speaking at medical conferences, and face many other forms of punishment. That is what many challengers found in too loudly disagreeing with Ancel Keys and gang — they were effectively silenced and were no longer able to get funding to do research, even though the strongest evidence was on their side of the argument. Being shut out and becoming a pariah is not a happy place to be.

The establishment can be fearsome when it flexes its muscles. And watch out when it comes after you. The defenders of the status quo become even more dangerous precisely when they are at their weakest, like an injured and cornered animal that growls all the louder, and most people wisely keep their distance. But without fools willing to risk it all in testing whether the bark really is worse than the bite, nothing would change and the world would grind to a halt, as inertia settled into full authoritarian control. We are in such a time. I remember the era of Bush Jr. and how we headed into the following time of rope-a-dope hope-and-change. There was a palpable feeling of change in the air, and I could viscerally sense the gears clicking into place. Something had irrevocably changed, and it wasn’t fundamentally about anything going on in the halls of power but about something within society and the culture. It made me feel gleeful at the time, like scratching the exact right spot where it itches — ah, there it is! Outwardly, the world more or less appeared the same, but the public mood had clearly shifted.

The bluntness of reactionary right-wingers is caused by the very fact that the winds of change are turning against them. That is why they praise the crude ridicule of wannabe emperor Donald Trump. What in the past could have been ignored by those in the mainstream no longer can be ignored. And after being ignored, the next step toward potential victory is being attacked, which can be mistaken for loss even as it offers the hope for reversal of fortune. Attacks come in many forms, with a few examples already mentioned. Along with ridicule, there is defamation, character assassination, scapegoating, and straw man arguments; allegations of fraud, quackery, malpractice, or deviancy. These are attacks as preemptive defense, in the hope of enforcing submission and silence. This only works for so long, though. The tide can’t be held back forever.

The establishment is under siege and they know it. Their only hope is to be able to hold out long enough until the worst happens and they can drop the pretense and go full authoritarian. That is a risky gamble on their part and likely not to pay off, but it is the only hope they have of maintaining power. Desperation of mind breeds desperation of action. But it’s not as if a choice is being made. The inevitable result of a dominant paradigm is that it closes itself not only to all other possibilities but, more importantly, to even the imagination that something else is possible. Ideological realism becomes a reality tunnel. And insularity leads to intellectual laziness, as those who rule and those who support them have come to depend on a presumed authority as the gatekeeper of legitimacy. What they don’t notice or don’t understand is the slow erosion of authority and hence the loss of what Julian Jaynes called authorization. Their need to be absolutely right is no longer matched by their capacity to enforce their increasingly rigid worldview, their fragile and fraying ideological dogmatism.

This is why challengers to the status quo are in a different position, which makes the contest rather lopsided. There is a freedom to being outside the constraints of mainstream thought. An imbalance of power, in some ways, works in favor of those excluded from power, since they have all the world to gain and little to lose, meaning less to defend; this is shown in how outsiders, more easily than insiders, often can acknowledge where the other side is right and accept where points of commonality are to be found, that is to say, the challengers to power don’t have to be on the constant attack in the way that is required of defenders of the status quo (similar to how guerrilla fighters don’t have to defeat an empire, but simply not lose and wait it out). Trying to defeat ideological underdogs that have growing popular support is like the U.S. military trying to win a war in Vietnam or Afghanistan — they are on the wrong side of history. But systems of power don’t give up without a fight, and they are willing to sacrifice loads of money and many lives in fighting losing battles, if only to keep the enemies at bay for yet another day. And the zombie ideas these systems are built on are not easily eliminated. That is because they are highly infectious mind viruses that can continue to spread long after the original vector of disease has disappeared.

As such, the behemoth medical-industrial complex won’t be making any quick turns toward internal reform. Changes happen over generations. And for the moment, this generation of doctors and other healthcare workers were primarily educated and trained under the old paradigm. It’s the entire world most of them know. The system is a victim of its own success and so those working within the system are victimized again and again in their own indoctrination. It’s not some evil sociopathic self-interest that keeps the whole mess slogging along; after all, even doctors are suffering the same failed healthcare system as the rest of us and are dying of the same preventable diseases. All are sacrificed equally, all are food for the system’s hunger. When my mother brought my nephew for an appointment, the doctor was not trying to be a bad person when she made the bizarre and disheartening claim that all kids eat unhealthy and are sickly; i.e., there is nothing to do about it, just the way kids are. Working within the failed system, that is all she knows. The idea that sickness isn’t or shouldn’t be the norm was beyond her imagination.

It is up to the rest of us to imagine new possibilities and, in some cases, to resurrect old possibilities long forgotten. We can’t wait for a system to change when that system is indifferent to our struggles and suffering. We can’t wait for a future time when most doctors are well-educated on treating the whole patient, when officials are well-prepared for understanding and tackling systemic problems. Change will happen, as so many have come to realize, from the bottom up. There is no other way. Until that change happens, the best we can do is to take care of ourselves and take care of our loved ones. That isn’t about blame. It’s about responsibility, that is to say the ability to respond; and more importantly, the willingness to do so.

* * *

Ketotarian
by Dr. Will Cole
pp. 15-16

With the Hippocratic advice to “let food be thy medicine, and medicine thy food,” how far have we strayed that the words of the founder of modern medicine can actually be threatening to conventional medicine?

Today medical schools in the United States offer, on average, only about nineteen hours of nutrition education over four years of medical school.10 Only 29 percent of U.S. medical schools offer the recommended twenty-five hours of nutrition education.11 A study in the International Journal of Adolescent Medicine and Health assessed the basic nutrition and health knowledge of medical school graduates entering a pediatric residency program and found that, on average, they answered only 52 percent of eighteen questions correctly.12 In short, most mainstream doctors would fail nutrition. So if you were wondering why someone in functional medicine, outside conventional medicine, is writing a book on how to use food for optimal health, this is why.

Expecting health guidance from mainstream medicine is akin to getting gardening advice from a mechanic. You can’t expect someone who wasn’t properly trained in a field to give sound advice. Brilliant physicians in the mainstream model of care are trained to diagnose a disease and match it with a corresponding pharmaceutical drug. This medicinal matching game works sometimes, but it often leaves the patient with nothing but a growing prescription list and growing health problems.

With the strong influence that the pharmaceutical industry has on government and conventional medical policy, it’s no secret that using foods to heal the body is not a priority of mainstream medicine. You only need to eat hospital food once to know this truth. Even more, under current laws it is illegal to say that foods can heal. That’s right. The words treat, cure, and prevent are in effect owned by the Food and Drug Administration (FDA) and the pharmaceutical industry and can be used in the health care setting only when talking about medications. This is the Orwellian world we live in today; health problems are on the rise even though we spend more on health care than ever, and getting healthy is considered radical and often labeled as quackery.

10. K. Adams et al., “Nutrition Education in U.S. Medical Schools: Latest Update of a National Survey,” Academic Medicine 85, no. 9 (September 2010): 1537-1542, https://www.ncbi.nlm.nih.gov/pubmed/9555760.
11. K. Adams et al., “The State of Nutrition Education at US Medical Schools,” Journal of Biomedical Education 2015 (2015), Article ID 357627, 7 pages, http://dx.doi.org/10.1155/2015/357627.
12. M. Castillo et al., “Basic Nutrition Knowledge of Recent Medical Graduates Entering a Pediatric Residency Program,” International Journal of Adolescent Medicine and Health (2015): 357-361, doi: 10.1515/ijamh-2015-0019, https://www.ncbi.nlm.nih.gov/pubmed/26234947.

Scientific Failure and Self Experimentation

In 2005, John P. A. Ioannidis wrote “Why Most Published Research Findings Are False,” which was published in the journal PLoS Medicine. It is the most cited paper in that journal’s history and it has led to much discussion in the media. The paper presented a theoretical model, but its argument has since been well supported empirically — as Ioannidis explained in an interview with Julia Belluz:

“There are now tons of empirical studies on this. One field that probably attracted a lot of attention is preclinical research on drug targets, for example, research done in academic labs on cell cultures, trying to propose a mechanism of action for drugs that can be developed. There are papers showing that, if you look at a large number of these studies, only about 10 to 25 percent of them could be reproduced by other investigators. Animal research has also attracted a lot of attention and has had a number of empirical evaluations, many of them showing that almost everything that gets published is claimed to be “significant”. Nevertheless, there are big problems in the designs of these studies, and there’s very little reproducibility of results. Most of these studies don’t pan out when you try to move forward to human experimentation.

“Even for randomized controlled trials [considered the gold standard of evidence in medicine and beyond] we have empirical evidence about their modest replication. We have data suggesting only about half of the trials registered [on public databases so people know they were done] are published in journals. Among those published, only about half of the outcomes the researchers set out to study are actually reported. Then half — or more — of the results that are published are interpreted inappropriately, with spin favoring preconceptions of sponsors’ agendas. If you multiply these levels of loss or distortion, even for randomized trials, it’s only a modest fraction of the evidence that is going to be credible.”
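
To make that final multiplication concrete, here is a minimal sketch in Python of the compounding Ioannidis describes, using the rough “about half” figures from the quote above; the 0.5 values are illustrative placeholders rather than measured numbers:

# Rough illustration of how successive "about half" losses compound,
# following the chain Ioannidis sketches for randomized trials.
# The 0.5 values are placeholders taken from the quote, not measured data.
published = 0.5            # registered trials that end up published
outcomes_reported = 0.5    # pre-specified outcomes actually reported
interpreted_soundly = 0.5  # published results interpreted without spin

credible_fraction = published * outcomes_reported * interpreted_soundly
print(f"Credible fraction of the registered evidence: {credible_fraction:.1%}")
# -> Credible fraction of the registered evidence: 12.5%

Under those placeholder numbers, only about one in eight registered trials would survive all three filters intact, which is what Ioannidis means by “only a modest fraction of the evidence.”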

This is part of the replication crisis that has been known about for decades, although rarely acknowledged or taken seriously. And it is a crisis that isn’t limited to single studies — Ioannidis wrote that, “Possibly, the large majority of produced systematic reviews and meta-analyses are unnecessary, misleading, and/or conflicted” (from a paper reported in the Pacific Standard). The crisis cuts across numerous fields, from economics and genetics to neuroscience and psychology. But to my mind, medical research stands out. Evidence-based medicine is only as good as the available evidence — it has been “hijacked to serve agendas different from what it originally aimed for,” as stated by Ioannidis. (A great book on this topic, by the way, is Richard Harris’ Rigor Mortis.) Studies done by or funded by drug companies, for example, are more likely to come to positive results for efficacy and negative results for side effects. And because the government has severely decreased public funding since the Reagan administration, much of research is now linked to big pharma. In a Retraction Watch interview, Ioannidis says:

“Since clinical research that can generate useful clinical evidence has fallen off the radar screen of many/most public funders, it is largely left up to the industry to support it. The sales and marketing departments in most companies are more powerful than their R&D departments. Hence, the design, conduct, reporting, and dissemination of this clinical evidence becomes an advertisement tool. As for “basic” research, as I explain in the paper, the current system favors PIs who make a primary focus of their career how to absorb more money. Success in obtaining (more) funding in a fiercely competitive world is what counts the most. Given that much “basic” research is justifiably unpredictable in terms of its yield, we are encouraging aggressive gamblers. Unfortunately, it is not gambling for getting major, high-risk discoveries (which would have been nice), it is gambling for simply getting more money.”

I’ve become familiar with this collective failure through reading on diet and nutrition. Some of the key figures in that field, specifically Ancel Keys, were either intentionally fraudulent or really bad at science. Yet the basic paradigm of dietary recommendations that was instituted by Keys remains in place. The fact that Keys was so influential demonstrates the sad state of affairs. Ioannidis has also covered this area and come to similar dire conclusions. Along with Jonathan Schoenfeld, he considered the question “Is everything we eat associated with cancer?”

“After choosing fifty common ingredients out of a cookbook, they set out to find studies linking them to cancer rates – and found 216 studies on forty different ingredients. Of course, most of the studies disagreed with each other. Most ingredients had multiple studies claiming they increased and decreased the risk of getting cancer. Most of the statistical evidence was weak, and meta-analyses usually showed much smaller effects on cancer rates than the original studies.”
(Alex Reinhart, What have we wrought?)

That is a serious and rather personal issue, not an academic exercise. There is so much research out there that is bad, or else confused and conflicting. It’s nearly impossible for the average person to wade through it all and come to a certain conclusion. Researchers and doctors are as mired in it as the rest of us. Doctors, in particular, are busy people and don’t typically read anything beyond short articles and literature reviews, and even those they likely only skim in spare moments. Besides, most doctors aren’t trained in research and statistics anyhow. Even if they were better educated and informed, the science itself is in a far from optimal state and one can find all kinds of conclusions. Take the conflict between two prestigious British journals, the Lancet and the BMJ, the former arguing for statin use and the latter more circumspect. In the context of efficacy and side effects, the disagreement is over diverse issues and confounders of cholesterol, inflammation, atherosclerosis, heart disease, etc — all overlapping.

Recently, my dad went to his doctor, who said that research in respectable journals strongly supported statin use. Sure, that is true. But the opposite is equally true, in that there are also respectable journals that don’t support wide use of statins. It depends on which journals one chooses to read. My dad’s doctor didn’t have the time to discuss the issue, as that is the nature of the US medical system. So, probably not wanting to get caught up in fruitless debate, the doctor agreed to my dad stopping statins and seeing what happens. With researchers failing to come to a consensus, the patient is left to be a guinea pig in his own personal experiment. Because of the lack of good data, self-experimentation has become a central practice in diet and nutrition. There are so many opinions out there that, if one cares about one’s health, one is forced to try different approaches and find out what seems to work, even though this methodology is open to many pitfalls and hardly guarantees success. But the individual dealing with a major health concern often has no other choice, at least not until the science improves.

This isn’t necessarily a reason for despair. At least a public debate is now happening. Ioannidis, among others, sees the solution as not especially difficult (psychology, despite its own failings, might end up being key to improving research standards; and organizations are also being set up to promote better standards, including the Nutrition Science Initiative started by the science journalist Gary Taubes, someone often cited by those interested in alternative health views). We simply need to require greater transparency and accountability in the scientific process. That is to say, science should be democratic. The failure of science is directly related to the failure seen in politics and economics, tied to powerful forces of big money and other systemic biases. It is not so much a failure as it is a success toward ulterior motives. That needs to change.

* * *

Many scientific “truths” are, in fact, false
by Olivia Goldhill

Are most published research findings false?
by Erica Seigneur

The Decline Effect – Why Most Published Research Findings are False
by Paul Crichton

Beware those scientific studies—most are wrong, researcher warns
by Ivan Couronne

The Truthiness Of Scientific Research
by Judith Rich Harris

Is most published research really wrong?
by Geoffrey P Webb

Are Scientists Doing Too Much Research?
by Peter Bruce

Psychedelics and Language

“We cannot evolve any faster than we evolve our language because you cannot go to places that you cannot describe.”
~Terence McKenna

This post is a placeholder, as I work through some thoughts. Maybe the most central link between much of it is Terence McKenna’s stoned ape theory, which is about the evolution of consciousness as it relates to psychedelics and language. Related to McKenna’s view, there have been many observations of non-human animals imbibing a wide variety of mind-altering plants, often psychedelics. Giorgio Samorini, in Animals and Psychedelics, argues that this behavior is evolutionarily advantageous in that it induces lateral thinking.

Also, as McKenna points out, many psychedelics intensify the senses, a useful effect for hunting. Humans don’t only take drugs themselves for this purpose but also give them to their animals: “A classic case is indigenous people giving psychedelics to hunting dogs to enhance their abilities. A study published in the Journal of Ethnobiology, reports that at least 43 species of psychedelic plants have been used across the globe for boosting dog hunting practices. The Shuar, an indigenous people from Ecuador, include 19 different psychedelic plants in their repertoire for this purpose—including ayahuasca and four different types of brugmansia” (Alex K. Gearin, High Kingdom). So there are many practical reasons for using psychoactive drugs. Language might have been an unintended side effect.

There is another way to get to McKenna’s conclusion. David Lewis-Williams asserts that cave paintings are shamanic. He discusses the entoptic imagery that is common in trance, whether induced by psychedelics or by other means. This interpretation isn’t specifically about language, but that is where another theory can help us. Genevieve von Petzinger takes a different tack by speculating that the geometric signs on cave walls were a set of symbols, possibly a system of graphic communication and so maybe the origin of writing.

In exploring the sites for herself, she ascertained there were 32 signs found over a 30,000-year period in Europe. Some of the same signs were found outside of Europe as well. It’s the consistency and repetition that caught her attention. They weren’t random or idiosyncratic aesthetic flourishes. If we combine that with Lewis-Williams’ theory, we might have the development of proto-concepts, still attached to the concrete world but in the process of developing into something else. It would indicate that something fundamental about the human mind itself was changing.

I have my own related theory about the competing influence of psychedelics and addictive substances, the influence being not only on the mind but on society and so related to the emergence of civilization. I’m playing around with the observation that it might tell us much about civilization that, over time, addiction became more prevalent than psychedelics. I see the shift in this preference having become apparent sometime following the neolithic era, although becoming most noticeable in the Axial Age. Of course, language already existed at that point. Though maybe, as Julian Jaynes and others have argued, the use of language changed. I’ll speculate about all of that at a later time.

In the articles, passages, and links below, there are numerous overlapping ideas and topics. Here are some of the things that stood out to me, along with some of the thoughts on my mind while reading:

  • Synaesthesia, gesture, ritual, dance, sound, melody, music, poiesis, repetition (mimesis, meter, rhythm, rhyme, and alliteration, etc) vs repetition-compulsion;
  • formulaic vs grammatical language, poetry vs prose, concrete vs abstract, metaphor, and metonymy;
  • Aural and oral, listening and speaking, preliterate, epic storytelling, eloquence, verbosity, fluency, and graphomania;
  • enthralled, entangled, enactivated, embodied, extended, hypnosis, voices, voice-hearing, bundle theory of self, ego theory of self, authorization, and Logos;
  • Et cetera.

* * *

Animals on Psychedelics: Survival of the Trippiest
by Steven Kotler

According to Italian ethnobotanist Giorgio Samorini, in his 2001 Animals and Psychedelics, the risk is worth it because intoxication promotes what psychologist Edward de Bono once called lateral thinking — problem-solving through indirect and creative approaches. Lateral thinking is thinking outside the box, without which a species would be unable to come up with new solutions to old problems, without which a species would be unable to survive. De Bono thinks intoxication an important “liberating device,” freeing us from “rigidity of established ideas, schemes, divisions, categories and classifications.” Both Siegel and Samorini think animals use intoxicants for this reason, and they do so knowingly.

Don’t Be A Sea Squirt.
by Tom Morgan

It’s a feature of complex adaptive systems that a stable system is a precursor to a dead system. Something that runs the same routine day-after-day is typically a dying system. There’s evidence that people with depression are stuck in neurological loops that they can’t get out of. We all know what it’s like to be trapped in the same negative thought patterns. Life needs perpetual novelty to succeed. This is one of the reasons researchers think that psychedelics have proven effective at alleviating depression; they break our brains out of the same familiar neural pathways.

This isn’t a uniquely human trait, animals also engage in deliberate intoxication. In his book Animals and Psychedelics, Italian ethnobotanist Giorgio Samorini wrote ‘drug-seeking and drug-taking behavior, on the part of both humans and animals, enjoys an intimate connection with … depatterning.’ And thus dolphins get high on blowfish, elephants seek out alcohol and goats eat the beans of the mescal plant. They’re not just having fun, they’re expanding the possible range of their behaviours and breaking stale patterns. You’re not just getting wasted, you’re furthering the prospects of the species!*

Synesthesias, Synesthetic Imagination, and Metaphor in the Context of Individual Cognitive Development and Societal Collective Consciousness
by Harry Hunt

The continuum of synesthesias is considered in the context of evolution, childhood development, adult creativity, and related states of imaginative absorption, as well as the anthropology and sociology of “collective consciousness”. In Part I synesthesias are considered as part of the mid-childhood development of metacognition, based on a Vygotskian model of the internalization of an earlier animism and physiognomic perception, and as the precursor for an adult capacity for imaginative absorption central to creativity, metaphor, and the synesthetically based “higher states of consciousness” in spontaneous mystical experience, meditation, and psychedelic states. Supporting research is presented on childhood precocities of a fundamental synesthetic imagination that expands the current neuroscience of classical synesthetes into a broader, more spontaneous, and open-ended continuum of introspective cross modal processes that constitute the human self referential consciousness of “felt meaning”. In Part II Levi-Strauss’ analysis of the cross modal and synesthetic lattices underlying the mythologies of native peoples and their traditional animation thereby of surrounding nature as a self reflective metaphoric mirror, is illustrated by its partial survival and simplification in the Chinese I-Ching. Jung’s psychological analysis of the I-Ching, as a device for metaphorically based creative insight and as a prototype for the felt “synchronicities” underlying paranormal experience, is further extended into a model for a synesthetically and metaphorically based “collective consciousness”. This metaphorically rooted and coordinated social field is explicit in mythologically centered, shamanic peoples but rendered largely unconscious in modern societies that fail to further educate and train the first spontaneous synesthetic imaginings of mid-childhood.

Psychedelics and the Full-Fluency Phenomenon
by T.H.

Like me, the full-fluency phenomenon has been experienced by many other people who stutter while using psilocybin and MDMA, and unlike me, while using LSD as well. […]

There’s also potential for immediate recovery from stuttering following a single high dose experience. One well told account of this comes from Paul Stamets, the renowned mycologist, whose stuttering stopped altogether following his first psilocybin mushroom experience. To sustain such a high increase in fluency after the effects of the drug wear off is rare, but Paul’s story gives testimony to the possibility for it to occur.

Can Psychedelics Help You Learn New Languages?
by The Third Wave Podcast

Idahosa Ness runs “The Mimic Method,” a website that promises to help you learn foreign languages quickly by immersing you in their sounds and pronunciations. We talk to Idahosa about his experiences with cannabis and other psychedelics, and how they have improved his freestyle rapping, increased his motivation to learn new languages, and helped the growth of his business.

Marijuana and Divergent Thinking
by Jonah Lehrer

A new paper published in Psychiatry Research sheds some light on this phenomenon, or why smoking weed seems to unleash a stream of loose associations. The study looked at a phenomenon called semantic priming, in which the activation of one word allows us to react more quickly to related words. For instance, the word “dog” might lead to decreased reaction times for “wolf,” “pet” and “Lassie,” but won’t alter how quickly we react to “chair”.

Interestingly, marijuana seems to induce a state of hyper-priming, in which the reach of semantic priming extends outwards to distantly related concepts. As a result, we hear “dog” and think of nouns that, in more sober circumstances, would seem to have nothing in common. […]

Last speculative point: marijuana also enhances brain activity (at least as measured indirectly by cerebral blood flow) in the right hemisphere. The drug, in other words, doesn’t just suppress our focus or obliterate our ability to pay attention. Instead, it seems to change the very nature of what we pay attention to, flattening out our hierarchy of associations.

How the Brain Processes Language on Acid Is a Trip
by Madison Margolin

“Results showed that while LSD does not affect reaction times, people under LSD made more mistakes that were similar in meaning to the pictures they saw,” said lead author Dr. Neiloufar Family, a post-doc from the University of Kaiserslautern.

For example, participants who were dosed with acid would more often say “bus” or “train” when asked to identify a picture of a car, compared to those who ingested the placebo. These lexical mixups shed some light on how LSD affects semantic networks and the way the brain draws connections between different words or concepts.

“The effects of LSD on language can result in a cascade of associations that allow quicker access to far away concepts stored in the mind,” said Family, discussing the study’s implications for psychedelic-assisted psychotherapy. Moreover, she added, “inducing a hyper-associative state may have implications for the enhancement of creativity.”

New study shows LSD’s effects on language
by Technische Universität Kaiserslautern

This indicates that LSD seems to affect the mind’s semantic networks, or how words and concepts are stored in relation to each other. When LSD makes the network activation stronger, more words from the same family of meanings come to mind.

The results from this experiment can lead to a better understanding of the neurobiological basis of semantic network activation. Neiloufar Family explains further implication: “These findings are relevant for the renewed exploration of psychedelic psychotherapy, which are being developed for depression and other mental illnesses. The effects of LSD on language can result in a cascade of associations that allow quicker access to far away concepts stored in the mind.”

The many potential uses of this class of substances are under scientific debate. “Inducing a hyper-associative state may have implications for the enhancement of creativity,” Family adds. The increase in activation of semantic networks can lead distant or even subconscious thoughts and concepts to come to the surface.

A new harmonic language decodes the effects of LSD
by Oxford Neuroscience

Dr Selen Atasoy, the lead author of the study says: “The connectome harmonics we used to decode brain activity are universal harmonic waves, such as sound waves emerging within a musical instrument, but adapted to the anatomy of the brain. Translating fMRI data into this harmonic language is actually not different than decomposing a complex musical piece into its musical notes”. “What LSD does to your brain seems to be similar to jazz improvisation” says Atasoy, “your brain combines many more of these harmonic waves (connectome harmonics) spontaneously yet in a structured way, just like improvising jazz musicians play many more musical notes in a spontaneous, non-random fashion”.

“The presented method introduces a new paradigm to study brain function, one that links space and time in brain activity via the universal principle of harmonic waves. It also shows that this spatio-temporal relation in brain dynamics resides at the transition between order and chaos.” says Prof Gustavo Deco.

Dr. Robin Carhart-Harris adds: “Our findings reveal the first experimental evidence that LSD tunes brain dynamics closer to criticality, a state that is maximally diverse and flexible while retaining properties of order. This may explain the unusual richness of consciousness experienced under psychedelic drugs and the notion that they ‘expand consciousness’.”

Did Psilocybin Mushrooms Lead to Human Language?
by Chris Rhine

Numerous archaeological finds discovered depictions of psilocybin mushrooms in various places and times around the world. One such occasion found hallucinogenic mushrooms from works produced 7,000 to 9,000 years ago in the Sahara Desert, as stated in Giorgio Samorini’s article, “The Oldest Representations of Hallucinogenic Mushrooms in the World.” Samorini concluded, “This Saharan testimony would demonstrate that the use of hallucinogens originates in the Paleolithic period and is invariably included within mystico-religious contexts and rituals.”

Some of early man’s first drawings include the ritualization of a plant as a sign—possibly a tribute to the substance that helped in the written sign’s development.

Are Psychedelic Hallucinations Actually Metaphorical Perceptions?
by Michael Fortier

The brain is constantly attempting to predict what is going on in the world. Because it happens in a dark environment with reduced sensory stimulation, the ayahuasca ritual dampens bottom-up signaling (sensory information becomes scarcer). If you are facing a tree in daylight and your brain wrongly guesses that there is an electric pole in front you, bottom-up prediction errors will quickly correct the wrong prediction—i.e., the lookout will quickly and successfully warn the helmsman. But if the same happens in the dark, bottom-up prediction errors will be sparser and vaguer, and possibly not sufficient enough to correct errors—as it were, the lookout’s warning will be too faint to reach the helmsman. As ayahuasca introduces noise in the brain processes,6 and because bottom-up corrections cannot be as effective as usual, hallucinations appear more easily. So, on the one hand, the relative sensory deprivation of the environment in which the ayahuasca ritual takes place, and the absence of bodily motion, both favor the occurrence of hallucinations.

Furthermore, the ayahuasca ritual does include some sensory richness. The songs, the perfume, and the tobacco stimulate the brain in multiple ways. Psychedelic hallucinogens are known to induce synesthesia7 and to increase communication between areas and networks of the brain that do not usually communicate with each other.8 It is hence no surprise that the shamans’ songs are able to shape people’s visions. If one sensory modality is noisier or fainter than others, its role in perception will be downplayed.9 This is what happens with ayahuasca: Given that not much information can be gathered by the visual modality, most of the prediction errors that contribute to the shaping of conscious perception are those coming from the auditory and olfactory modalities. The combination of synesthetic processing with the increased weight attributed to non-visual senses enables shamans to “drive” people’s visions.

The same mechanisms explain the shamans’ recommendation that perfume should be sprayed or tobacco blown when one is faced with a bad spirit. Conscious perception—e.g., vision of a spirit—is the result of a complex tradeoff between top-down predictions and bottom-up prediction errors. If you spray a huge amount of perfume or blow wreaths of smoke around you, your brain will receive new and reliable information from the olfactory modality. Under psychedelics, sensory modalities easily influence one another; as a result, a sudden olfactory change amounts to sending prediction errors to upper regions of the brain. Conscious perception is updated accordingly: as predicted by the shamans’ recommendation, the olfactory change dissolves the vision of bad spirits.
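
Fortier’s lookout-and-helmsman story is essentially a precision-weighted update: perception is a compromise between the top-down prediction and the bottom-up evidence, weighted by how reliable each is. The snippet below is only a minimal single-variable sketch of that idea (the Gaussian form and the numbers are illustrative assumptions, not the article’s model); it shows how dampening the precision of sensory evidence lets a wrong prior dominate what is perceived:

    # Minimal precision-weighted update: one prior belief, one sensory sample.
    # Precision = 1/variance; the posterior is a precision-weighted average.

    def perceive(prior_mean, prior_precision, sensory_mean, sensory_precision):
        """Combine a top-down prediction with bottom-up evidence."""
        posterior_precision = prior_precision + sensory_precision
        return (prior_precision * prior_mean +
                sensory_precision * sensory_mean) / posterior_precision

    # The brain's wrong guess ("electric pole", coded 1.0) versus what the
    # senses report ("tree", coded 0.0).
    prior_guess, reality = 1.0, 0.0

    # Daylight: precise sensory evidence corrects the wrong prediction.
    print(perceive(prior_guess, 1.0, reality, 10.0))  # ~0.09, close to "tree"

    # Darkness / dampened bottom-up signaling: imprecise evidence barely
    # corrects the prior, so the erroneous prediction dominates perception.
    print(perceive(prior_guess, 1.0, reality, 0.1))   # ~0.91, close to "pole"

In these terms, the shamans’ perfume-and-tobacco advice amounts to injecting a burst of high-precision evidence through a different sensory channel, which updates the percept and dissolves the vision.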

In its classical sense, hallucination refers to sensory content that is not caused by objects of the world. The above description of the ayahuasca ritual demonstrates that psychedelic visions are not, in the classical sense of the term, hallucinations. Indeed, the content of the visions is tightly tied to the environment: A change of melody in a song or an olfactory change can completely transform the content of the visions. Ayahuasca visions are not caused by hypothetical supernatural entities living in a parallel world, nor are they constructed independently of the mundane objects of the world. What are they, then? They are metaphorical perceptions.

In everyday life, melodic and olfactory changes cannot affect vision much. However, because ayahuasca experience is profoundly synesthetic and intermodal, ayahuasca visions are characteristically metaphorical: A change in one sensory modality easily affects another modality. Ayahuasca visions are not hallucinations, since they are caused by real objects and events; for example, a cloud of perfume. It is more accurate to define them as metaphorical perceptions: they are loose intermodal interpretations of things that are really there.

Michael Pollan on the science of how psychedelics can ‘shake your snow globe’
interview with Michael Pollan

We know that, for example, the so-called classic psychedelics like psilocybin, LSD, DMT, and mescaline – these activate a certain receptor, a serotonin receptor. And so we know that they are the key that fits that lock. But beyond that, there’s a cascade of effects that happens.

The observed effect is that, if you do brain imaging of people who are tripping, you find some very interesting patterns of activity in the brain – specifically in something called the default mode network, which is a very important hub in the brain, linking parts of the cerebral cortex to deeper, older areas having to do with memory and emotion. This network is kind of a regulator of all brain activities. One neuroscientist called it ‘the conductor of the neural symphony,’ and it’s deactivated by psychedelics, which is very interesting because the assumption going in was that they would see lots of strange activity everywhere in the brain because there’s such fireworks in the experience, but in fact, this particular network almost goes offline.

Now what is this network responsible for? Well, in addition to being this transportation hub for signals in the brain, it is involved with self-reflection. It’s where we go to ruminate or mind-wander – thinking about the past or thinking about the future – therefore worrying takes place here. Our sense of self, if it can be said to have an address at all, resides in this particular brain network. So this is a very interesting clue to how psychedelics affect the brain and how they create the psychological experience, the experience in the mind, that is so transformative.

When it goes offline, parts of the brain that don’t ordinarily communicate with one another strike up conversations. And those connections may represent what people feel during the psychedelic experience as things like synaesthesia. Synaesthesia is when one sense gets cross-wired with another. And so you suddenly smell musical notes or taste things that you see.

It may produce insights. It may produce new metaphors – literally connecting the dots in new ways. Now there I’m being speculative – I’m going a little beyond what we’ve established – we know there are new connections, but we don’t know what’s happening with them, or which of them endure. But the fact is, the brain is temporarily rewired. And that rewiring – whether the new connections actually produce the useful material, or it’s just the shaking up of the system, ‘shaking the snow globe,’ as one of the neuroscientists put it – is what’s therapeutic. It is a reboot of the brain.

If you think about, you know, mental illnesses such as depression, addiction, and anxiety, many of them involve these loops of thought that we can’t control and we get stuck on these stories we tell ourselves – that we can’t get through the next hour without a drink, or we’re worthless and unworthy of love. We get stuck in these stories. This temporarily dissolves those stories and gives us a chance to write new stories.

Terence McKenna Collection

The mutation-inducing influence of diet on early humans and the effect of exotic metabolites on the evolution of their neurochemistry and culture is still unstudied territory. The early hominids’ adoption of an omnivorous diet and their discovery of the power of certain plants were decisive factors in moving early humans out of the stream of animal evolution and into the fast-rising tide of language and culture. Our remote ancestors discovered that certain plants, when self-administered, suppress appetite, diminish pain, supply bursts of sudden energy, confer immunity against pathogens, and synergize cognitive activities. These discoveries set us on the long journey to self-reflection. Once we became tool-using omnivores, evolution itself changed from a process of slow modification of our physical form to a rapid definition of cultural forms by the elaboration of rituals, languages, writing, mnemonic skills, and technology.

Food of the Gods
by Terence McKenna
pp. 24-29

Because scientists were unable to explain this tripling of the human brain size in so short a span of evolutionary time, some of the early primate paleontologists and evolutionary theorists predicted and searched for evidence of transitional skeletons. Today the idea of a “missing link” has largely been abandoned. Bipedalism, binocular vision, the opposable thumb, the throwing arm: all have been put forth as the key ingredient in the mix that caused self-reflecting humans to crystallize out of the caldron of competing hominid types and strategies. Yet all we really know is that the shift in brain size was accompanied by remarkable changes in the social organization of the hominids. They became users of tools, fire, and language. They began the process as higher animals and emerged from it 100,000 years ago as conscious, self-aware individuals.

THE REAL MISSING LINK

My contention is that mutation-causing, psychoactive chemical compounds in the early human diet directly influenced the rapid reorganization of the brain’s information-processing capacities. Alkaloids in plants, specifically the hallucinogenic compounds such as psilocybin, dimethyltryptamine (DMT), and harmaline, could be the chemical factors in the protohuman diet that catalyzed the emergence of human self-reflection. The action of hallucinogens present in many common plants enhanced our information processing activity, or environmental sensitivity, and thus contributed to the sudden expansion of the human brain size. At a later stage in this same process, hallucinogens acted as catalysts in the development of imagination, fueling the creation of internal stratagems and hopes that may well have synergized the emergence of language and religion.

In research done in the late 1960s, Roland Fischer gave small amounts of psilocybin to graduate students and then measured their ability to detect the moment when previously parallel lines became skewed. He found that performance ability on this particular task was actually improved after small doses of psilocybin.5

When I discussed these findings with Fischer, he smiled after explaining his conclusions, then summed up, “You see what is conclusively proven here is that under certain circumstances one is actually better informed concerning the real world if one has taken a drug than if one has not.” His facetious remark stuck with me, first as an academic anecdote, later as an effort on his part to communicate something profound. What would be the consequences for evolutionary theory of admitting that some chemical habits confer adaptive advantage and thereby become deeply scripted in the behavior and even genome of some individuals?

THREE BIG STEPS FOR THE HUMAN RACE

In trying to answer that question I have constructed a scenario, some may call it fantasy; it is the world as seen from the vantage point of a mind for which the millennia are but seasons, a vision that years of musing on these matters has moved me toward. Let us imagine for a moment that we stand outside the surging gene swarm that is biological history, and that we can see the interwoven consequences of changes in diet and climate, which must certainly have been too slow to be felt by our ancestors. The scenario that unfolds involves the interconnected and mutually reinforcing effects of psilocybin taken at three different levels. Unique in its properties, psilocybin is the only substance, I believe, that could yield this scenario.

At the first, low, level of usage is the effect that Fischer noted: small amounts of psilocybin, consumed with no awareness of its psychoactivity while in the general act of browsing for food, and perhaps later consumed consciously, impart a noticeable increase in visual acuity, especially edge detection. As visual acuity is at a premium among hunter-gatherers, the discovery of the equivalent of “chemical binoculars” could not fail to have an impact on the hunting and gathering success of those individuals who availed themselves of this advantage. Partnership groups containing individuals with improved eyesight will be more successful at feeding their offspring. Because of the increase in available food, the offspring within such groups will have a higher probability of themselves reaching reproductive age. In such a situation, the out-breeding (or decline) of non-psilocybin-using groups would be a natural consequence.

Because psilocybin is a stimulant of the central nervous system, when taken in slightly larger doses, it tends to trigger restlessness and sexual arousal. Thus, at this second level of usage, by increasing instances of copulation, the mushrooms directly favored human reproduction. The tendency to regulate and schedule sexual activity within the group, by linking it to a lunar cycle of mushroom availability, may have been important as a first step toward ritual and religion. Certainly at the third and highest level of usage, religious concerns would be at the forefront of the tribe’s consciousness, simply because of the power and strangeness of the experience itself. This third level, then, is the level of the full-blown shamanic ecstasy. The psilocybin intoxication is a rapture whose breadth and depth is the despair of prose. It is wholly Other and no less mysterious to us than it was to our mushroom-munching ancestors. The boundary-dissolving qualities of shamanic ecstasy predispose hallucinogen-using tribal groups to community bonding and to group sexual activities, which promote gene mixing, higher birth rates, and a communal sense of responsibility for the group offspring.

At whatever dose the mushroom was used, it possessed the magical property of conferring adaptive advantages upon its archaic users and their group. Increased visual acuity, sexual arousal, and access to the transcendent Other led to success in obtaining food, sexual prowess and stamina, abundance of offspring, and access to realms of supernatural power. All of these advantages can be easily self-regulated through manipulation of dosage and frequency of ingestion. Chapter 4 will detail psilocybin’s remarkable property of stimulating the language-forming capacity of the brain. Its power is so extraordinary that psilocybin can be considered the catalyst to the human development of language.

STEERING CLEAR OF LAMARCK

An objection to these ideas inevitably arises and should be dealt with. This scenario of human emergence may seem to smack of Lamarckism, which theorizes that characteristics acquired by an organism during its lifetime can be passed on to its progeny. The classic example is the claim that giraffes have long necks because they stretch their necks to reach high branches.

This straightforward and rather common-sense idea is absolutely anathema among neo-Darwinians, who currently hold the high ground in evolutionary theory. Their position is that mutations are entirely random and that only after the mutations are expressed as the traits of organisms does natural selection mindlessly and dispassionately fulfill its function of preserving those individuals upon whom an adaptive advantage had been conferred.

Their objection can be put like this: While the mushrooms may have given us better eyesight, sex, and language when eaten, how did these enhancements get into the human genome and become innately human? Nongenetic enhancements of an organism’s functioning made by outside agents retard the corresponding genetic reservoirs of those facilities by rendering them superfluous. In other words, if a necessary metabolite is common in available food, there will not be pressure to develop a trait for endogenous expression of the metabolite. Mushroom use would thus create individuals with less visual acuity, language facility, and consciousness. Nature would not provide those enhancements through organic evolution because the metabolic investment required to sustain them wouldn’t pay off, relative to the tiny metabolic investment required to eat mushrooms. And yet today we all have these enhancements, without taking mushrooms. So how did the mushroom modifications get into the genome?

The short answer to this objection, one that requires no defense of Lamarck’s ideas, is that the presence of psilocybin in the hominid diet changed the parameters of the process of natural selection by changing the behavioral patterns upon which that selection was operating. Experimentation with many types of foods was causing a general increase in the numbers of random mutations being offered up to the process of natural selection, while the augmentation of visual acuity, language use, and ritual activity through the use of psilocybin represented new behaviors. One of these new behaviors, language use, previously only a marginally important trait, was suddenly very useful in the context of new hunting and gathering lifestyles. Hence psilocybin inclusion in the diet shifted the parameters of human behavior in favor of patterns of activity that promoted increased language; acquisition of language led to more vocabulary and an expanded memory capacity. The psilocybin-using individuals evolved epigenetic rules or cultural forms that enabled them to survive and reproduce better than other individuals. Eventually the more successful epigenetically based styles of behavior spread through the populations along with the genes that reinforce them. In this fashion the population would evolve genetically and culturally.

As for visual acuity, perhaps the widespread need for corrective lenses among modern humans is a legacy of the long period of “artificial” enhancement of vision through psilocybin use. After all, atrophy of the olfactory abilities of human beings is thought by one school to be a result of a need for hungry omnivores to tolerate strong smells and tastes, perhaps even carrion. Trade-offs of this sort are common in evolution. The suppression of keenness of taste and smell would allow inclusion of foods in the diet that might otherwise be passed over as “too strong.” Or it may indicate something more profound about our evolutionary relationship to diet. My brother Dennis has written:

The apparent atrophy of the human olfactory system may actually represent a functional shift in a set of primitive, externally directed chemo-receptors to an interiorized regulatory function. This function may be related to the control of the human pheromonal system, which is largely under the control of the pineal gland, and which mediates, on a subliminal level, a host of psycho-sexual and psycho-social interactions between individuals. The pineal tends to suppress gonadal development and the onset of puberty, among other functions, and this mechanism may play a role in the persistence of neonatal characteristics in the human species. Delayed maturation and prolonged childhood and adolescence play a critical role in the neurological and psychological development of the individual, since they provide the circumstances which permit the post-natal development of the brain in the early, formative years of childhood. The symbolic, cognitive and linguistic stimuli that the brain experiences during this period are essential to its development and are the factors that make us the unique, conscious, symbol-manipulating, language-using beings that we are.

Neuroactive amines and alkaloids in the diet of early primates may have played a role in the biochemical activation of the pineal gland and the resulting adaptations.

pp. 46-60

HUMAN COGNITION

All the unique characteristics and preoccupations of human beings can be summed up under the heading of cognitive activities: dance, philosophy, painting, poetry, sport, meditation, erotic fantasy, politics, and ecstatic self-intoxication. We are truly Homo sapiens, the thinking animal; our acts are all a product of the dimension that is uniquely ours, the dimension of cognitive activity. Of thought and emotion, memory and anticipation. Of Psyche.

From observing the ayahuasca-using people of the Upper Amazon, it became very clear to me that shamanism is often intuitively guided group decision making. The shamans decide when the group should move or hunt or make war. Human cognition is an adaptive response that is profoundly flexible in the way it allows us to manage what in other species are genetically programmed behaviors.

We alone live in an environment that is conditioned not only by the biological and physical constraints to which all species are subject but also by symbols and language. Our human environment is conditioned by meaning. And meaning lies in the collective mind of the group.

Symbols and language allow us to act in a dimension that is “supranatural,” outside the ordinary activities of other forms of organic life. We can actualize our cultural assumptions, alter and shape the natural world in the pursuit of ideological ends and according to the internal model of the world that our symbols have empowered us to create. We do this through the elaboration of ever more effective, and hence ever more destructive, artifacts and technologies, which we feel compelled to use.

Symbols allow us to store information outside of the physical brain. This creates for us a relationship to the past very different from that of our animal companions. Finally, we must add to any analysis of the human picture the notion of self-directed modification of activity. We are able to modify our behavior patterns based on a symbolic analysis of past events, in other words, through history. Through our ability to store and recover information as images and written records, we have created a human environment as much conditioned by symbols and languages as by biological and environmental factors.

TRANSFORMATIONS OF MONKEYS

The evolutionary breakouts that led to the appearance of language and, later, writing are examples of fundamental, almost ontological, transformations of the hominid line. Besides providing us with the ability to code data outside the confines of DNA, cognitive activities allow us to transmit information across space and time. At first this amounted merely to the ability to shout a warning or a command, really little more than a modification of the cry of alarm that is a familiar feature of the behavior of social animals. Over the course of human history this impulse to communicate has motivated the elaboration of ever more effective communication techniques. But by our century, this basic ability has turned into the all-pervasive communications media, which literally engulf the space surrounding our planet. The planet swims through a self-generated ocean of messages. Telephone calls, data exchanges, and electronically transmitted entertainment create an invisible world experienced as global informational simultaneity. We think nothing of this; as a culture we take it for granted.

Our unique and feverish love of word and symbol has given us a collective gnosis, a collective understanding of ourselves and our world that has survived throughout history until very recent times. This collective gnosis lies behind the faith of earlier centuries in “universal truths” and common human values. Ideologies can be thought of as meaning-defined environments. They are invisible, yet they surround us and determine for us, though we may never realize it, what we should think about ourselves and reality. Indeed they define for us what we can think.

The rise of globally simultaneous electronic culture has vastly accelerated the rate at which we each can obtain information necessary to our survival. This and the sheer size of the human population as a whole have brought to a halt our physical evolution as a species. The larger a population is, the less impact mutations will have on the evolution of that species. This fact, coupled with the development of shamanism and, later, scientific medicine, has removed us from the theater of natural selection. Meanwhile libraries and electronic data bases have replaced the individual human mind as the basic hardware providing storage for the cultural data base. Symbols and languages have gradually moved us away from the style of social organization that characterized the mute nomadism of our remote ancestors and have replaced that archaic model with the vastly more complicated social organization characteristic of an electronically unified planetary society. As a result of these changes, we ourselves have become largely epigenetic, meaning that much of what we are as human beings is no longer in our genes but in our culture.

THE PREHISTORIC EMERGENCE OF HUMAN IMAGINATION

Our capacity for cognitive and linguistic activity is related to the size and organization of the human brain. Neural structures concerned with conceptualization, visualization, signification, and association are highly developed in our species. Through the act of speaking vividly, we enter into a flirtation with the domain of the imagination. The ability to associate sounds, or the small mouth noises of language, with meaningful internal images is a synesthetic activity. The most recently evolved areas of the human brain, Broca’s area and the neocortex, are devoted to the control of symbol and language processing.

The conclusion universally drawn from these facts is that the highly organized neurolinguistic areas of our brain have made language and culture possible. Where the search for scenarios of human emergence and social organization is concerned, the problem is this: we know that our linguistic abilities must have evolved in response to enormous evolutionary pressures, but we do not know what these pressures were.

Where psychoactive plant use was present, hominid nervous systems over many millennia would have been flooded by hallucinogenic realms of strange and alien beauty. However, evolutionary necessity channels the organism’s awareness into a narrow cul-de-sac where ordinary reality is perceived through the reducing valve of the senses. Otherwise, we would be rather poorly adapted for the rough-and-tumble of immediate existence. As creatures with animal bodies, we are aware that we are subject to a range of immediate concerns that we can ignore only at great peril. As human beings we are also aware of an interior world, beyond the needs of the animal body, but evolutionary necessity has placed that world far from ordinary consciousness.

PATTERNS AND UNDERSTANDING

Consciousness has been called “awareness of awareness” and is characterized by novel associations and connections among the various data of experience. Consciousness is like a super nonspecific immune response. The key to the working of the immune system is the ability of one chemical to recognize, to have a key-in-lock relationship with, another. Thus both the immune system and consciousness represent systems that learn, recognize, and remember.

As I write this I think of what Alfred North Whitehead said about understanding, that it is apperception of pattern as such. This is also a perfectly acceptable definition of consciousness. Awareness of pattern conveys the feeling that attends understanding. There presumably can be no limit to how much consciousness a species can acquire, since understanding is not a finite project with an imaginable conclusion, but rather a stance toward immediate experience. This appears self-evident from within a world view that sees consciousness as analogous to a source of light. The more powerful the light, the greater the surface area of darkness revealed. Consciousness is the moment-to-moment integration of the individual’s perception of the world. How well, one could almost say how gracefully, an individual accomplishes this integration determines that individual’s unique adaptive response to existence.

We are masters not only of individual cognitive activity, but, when acting together, of group cognitive activity as well. Cognitive activity within a group usually means the elaboration and manipulation of symbols and language. Although this occurs in many species, within the human species it is especially well developed. Our immense power to manipulate symbols and language gives us our unique position in the natural world. The power of our magic and our science arises out of our commitment to group mental activity, symbol sharing, meme replication (the spreading of ideas), and the telling of tall tales.

The idea, expressed above, that ordinary consciousness is the end product of a process of extensive compression and filtration, and that the psychedelic experience is the antithesis of this construction, was put forward by Aldous Huxley. In analyzing his experiences with mescaline, Huxley wrote:

I find myself agreeing with the eminent Cambridge philosopher, Dr. C. D. Broad, “that we should do well to consider the suggestion that the function of the brain and nervous system and sense organs is in the main eliminative and not productive.” The function of the brain and nervous system is to protect us from being overwhelmed and confused by this mass of largely useless and irrelevant knowledge, by shutting out most of what we should otherwise perceive or remember at any moment, and leaving only that very small and special selection which is likely to be practically useful. According to such a theory, each one of us is potentially Mind at Large. But in so far as we are animals, our business is at all costs to survive. To make biological survival possible, Mind at Large has to be funnelled through the reducing valve of the brain and nervous system. What comes out at the other end is a measly trickle of the kind of consciousness which will help us to stay alive on the surface of this particular planet. To formulate and express the contents of this reduced awareness, man has invented and endlessly elaborated those symbol-systems and implicit philosophies which we call languages. Every individual is at once the beneficiary and the victim of the linguistic tradition into which he has been born. That which, in the language of religion, is called “this world” is the universe of reduced awareness, expressed, and, as it were, petrified by language. The various “other worlds” with which human beings erratically make contact are so many elements in the totality of the awareness belonging to Mind at Large …. Temporary by-passes may be acquired either spontaneously, or as the result of deliberate “spiritual exercises,”. . . or by means of drugs.

What Huxley did not mention was that drugs, specifically the plant hallucinogens, can reliably and repeatedly open the floodgates of the reducing valve of consciousness and expose the individual to the full force of the howling Tao. The way in which we internalize the impact of this experience of the Unspeakable, whether encountered through psychedelics or other means, is to generalize and extrapolate our world view through acts of imagination. These acts of imagination represent our adaptive response to information concerning the outside world that is conveyed to us by our senses. In our species, culture-specific, situation-specific syntactic software in the form of language can compete with and sometimes replace the instinctual world of hard-wired animal behavior. This means that we can learn and communicate experience and thus put maladaptive behaviors behind us. We can collectively recognize the virtues of peace over war, or of cooperation over struggle. We can change.

As we have seen, human language may have arisen when primate organizational potential was synergized by plant hallucinogens. The psychedelic experience inspired us to true self-reflective thought in the first place and then further inspired us to communicate our thoughts about it.

Others have sensed the importance of hallucinations as catalysts of human psychic organization. Julian Jaynes’s theory, presented in his controversial book The Origin of Consciousness in the Breakdown of the Bicameral Mind, makes the point that major shifts in human self-definition may have occurred even in historical times. He proposes that through Homeric times people did not have the kind of interior psychic organization that we take for granted. Thus, what we call ego was for Homeric people a “god.” When danger threatened suddenly, the god’s voice was heard in the individual’s mind; an intrusive and alien psychic function was expressed as a kind of metaprogram for survival called forth under moments of great stress. This psychic function was perceived by those experiencing it as the direct voice of a god, of the king, or of the king in the afterlife. Merchants and traders moving from one society to another brought the unwelcome news that the gods were saying different things in different places, and so cast early seeds of doubt. At some point people integrated this previously autonomous function, and each person became the god and reinterpreted the inner voice as the “self” or, as it was later called, the “ego.”

Jaynes’s theory has been largely dismissed. Regrettably his book on the impact of hallucinations on culture, though 467 pages in length, manages to avoid discussion of hallucinogenic plants or drugs nearly entirely. By this omission Jaynes deprived himself of a mechanism that could reliably drive the kind of transformative changes he saw taking place in the evolution of human consciousness.

CATALYZING CONSCIOUSNESS

The impact of hallucinogens in the diet has been more than psychological; hallucinogenic plants may have been the catalysts for everything about us that distinguishes us from other higher primates, for all the mental functions that we associate with humanness. Our society more than others will find this theory difficult to accept, because we have made pharmacologically obtained ecstasy a taboo. Like sexuality, altered states of consciousness are taboo because they are consciously or unconsciously sensed to be entwined with the mysteries of our origin, with where we came from and how we got to be the way we are. Such experiences dissolve boundaries and threaten the order of the reigning patriarchy and the domination of society by the unreflecting expression of ego. Yet consider how plant hallucinogens may have catalyzed the use of language, the most unique of human activities.

One has, in a hallucinogenic state, the incontrovertible impression that language possesses an objectified and visible dimension, which is ordinarily hidden from our awareness. Language, under such conditions, is seen, is beheld, just as we would ordinarily see our homes and normal surroundings. In fact our ordinary cultural environment is correctly recognized, during the experience of the altered state, as the bass drone in the ongoing linguistic business of objectifying the imagination. In other words, the collectively designed cultural environment in which we all live is the objectification of our collective linguistic intent.

Our language-forming ability may have become active through the mutagenic influence of hallucinogens working directly on organelles that are concerned with the processing and generation of signals. These neural substructures are found in various portions of the brain, such as Broca’s area, that govern speech formation. In other words, opening the valve that limits consciousness forces utterance, almost as if the word is a concretion of meaning previously felt but left unarticulated. This active impulse to speak, the “going forth of the word,” is sensed and described in the cosmogonies of many peoples.

Psilocybin specifically activates the areas of the brain concerned with processing signals. A common occurrence with psilocybin intoxication is spontaneous outbursts of poetry and other vocal activity such as speaking in tongues, though in a manner distinct from ordinary glossolalia. In cultures with a tradition of mushroom use, these phenomena have given rise to the notion of discourse with spirit doctors and supernatural allies. Researchers familiar with the territory agree that psilocybin has a profoundly catalytic effect on the linguistic impulse.

Once activities involving syntactic self-expression were established habits among early human beings, the continued evolution of language in environments where mushrooms were scarce or unavailable permitted a tendency toward the expression and emergence of the ego. If the ego is not regularly and repeatedly dissolved in the unbounded hyperspace of the Transcendent Other, there will always be a slow drift away from the sense of self as part of nature’s larger whole. The ultimate consequence of this drift is the fatal ennui that now permeates Western civilization.

The connection between mushrooms and language was brilliantly anticipated by Henry Munn in his essay “The Mushrooms of Language”: “Language is an ecstatic activity of signification. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. The spontaneity the mushrooms liberate is not only perceptual, but linguistic. For the shaman, it is as if existence were uttering itself through him.”

THE FLESH MADE WORD

The evolutionary advantages of the use of speech are both obvious and subtle. Many unusual factors converged at the birth of human language. Obviously speech facilitates communication and cognitive activity, but it also may have had unanticipated effects on the whole human enterprise.

Some neurophysiologists have hypothesized that the vocal vibration associated with human use of language caused a kind of cleansing of the cerebrospinal fluid. It has been observed that vibrations can precipitate and concentrate small molecules in the spinal fluid, which bathes and continuously purifies the brain. Our ancestors may have, consciously or unconsciously, discovered that vocal sound cleared the chemical cobwebs out of their heads. This practice may have affected the evolution of our present-day thin skull structure and proclivity for language. A self-regulated process as simple as singing might well have positive adaptive advantages if it also made the removal of chemical waste from the brain more efficient. The following excerpt supports this provocative idea:

Vibrations of human skull, as produced by loud vocalization, exert a massaging effect on the brain and facilitate elution of metabolic products from the brain into the cerebrospinal fluid (CSF) . . . . The Neanderthals had a brain 15% larger than we have, yet they did not survive in competition with modern humans. Their brains were more polluted, because their massive skulls did not vibrate and therefore the brains were not sufficiently cleaned. In the evolution of the modern humans the thinning of cranial bones was important.

As already discussed, hominids and hallucinogenic plants must have been in close association for a long span of time, especially if we want to suggest that actual physical changes in the human genome resulted from the association. The structure of the soft palate in the human infant and timing of its descent is a recent adaptation that facilitates the acquisition of language. No other primate exhibits this characteristic. This change may have been a result of selective pressure on mutations originally caused by the new omnivorous diet.

WOMEN AND LANGUAGE

Women, the gatherers in the Archaic hunter-gatherer equation, were under much greater pressure to develop language than were their male counterparts. Hunting, the prerogative of the larger male, placed a premium on strength, stealth, and stoic waiting. The hunter was able to function quite well on a very limited number of linguistic signals, as is still the case among hunting peoples such as the !Kung or the Maku.

For gatherers, the situation was different. Those women with the largest repertoire of communicable images of foods and their sources and secrets of preparation were unquestionably placed in a position of advantage. Language may well have arisen as a mysterious power possessed largely by women: women who spent much more of their waking time together (and, usually, talking) than did men, women who in all societies are seen as group-minded, in contrast to the lone male image, which is the romanticized version of the alpha male of the primate troop.

The linguistic accomplishments of women were driven by a need to remember and describe to each other a variety of locations and landmarks as well as numerous taxonomic and structural details about plants to be sought or avoided. The complex morphology of the natural world propelled the evolution of language toward modeling of the world beheld. To this day a taxonomic description of a plant is a Joycean thrill to read: “Shrub 2 to 6 feet in height, glabrous throughout. Leaves mostly opposite, some in threes or uppermost alternate, sessile, linear-lanceolate or lanceolate, acute or acuminate. Flowers solitary in axils, yellow, with aroma, pedicellate. Calyx campanulate, petals soon caducous, obovate” and so on for many lines.

The linguistic depth women attained as gatherers eventually led to a momentous discovery: the discovery of agriculture. I call it momentous because of its consequences. Women realized that they could simply grow a restricted number of plants. As a result, they learned the needs of only those few plants, embraced a sedentary lifestyle, and began to forget the rest of nature they had once known so well.

At that point the retreat from the natural world began, and the dualism of humanity versus nature was born. As we will soon see, one of the places where the old goddess culture died, Çatal Hüyük, in present-day Anatolian Turkey, is the very place where agriculture may have first arisen. At places like Çatal Hüyük and Jericho, humans and their domesticated plants and animals became for the first time physically and psychologically separate from the life of untamed nature and the howling unknown. Use of hallucinogens can only be sanctioned in hunting and gathering societies. When agriculturists use these plants, they are unable to get up at dawn the morning after and go hoe the fields. At that point, corn and grain become gods: gods that symbolize domesticity and hard labor. These replace the old goddesses of plant-induced ecstasy.

Agriculture brings with it the potential for overproduction, which leads to excess wealth, hoarding, and trade. Trade leads to cities; cities isolate their inhabitants from the natural world. Paradoxically, more efficient utilization of plant resources through agriculture led to a breaking away from the symbiotic relationship that had bound human beings to nature. I do not mean this metaphorically. The ennui of modernity is the consequence of a disrupted quasi-symbiotic relationship between ourselves and Gaian nature. Only a restoration of this relationship in some form is capable of carrying us into a full appreciation of our birthright and sense of ourselves as complete human beings.

HABIT AS CULTURE AND RELIGION

At regular intervals that were probably lunar, the ordinary activities of the small nomadic group of herders were put aside. Rains usually followed the new moon in the tropics, making mushrooms plentiful. Gatherings took place at night; night is the time of magical projection and hallucinations, and visions are more easily obtained in darkness. The whole clan was present from oldest to youngest. Elders, especially shamans, usually women but often men, doled out each person’s dose. Each clan member stood before the group and reflectively chewed and swallowed the body of the Goddess before returning to his or her place in the circle. Bone flutes and drums wove within the chanting. Line dances with heavy foot stamping channeled the energy of the first wave of visions. Suddenly the elders signal silence.

In the motionless darkness each mind follows its own trail of sparks into the bush while some people keen softly. They feel fear, and they triumph over fear through the strength of the group. They feel relief mingled with wonder at the beauty of the visionary expanse; some spontaneously reach out to those nearby in simple affection and an impulse for closeness or in erotic desire. An individual feels no distance between himself or herself and the rest of the clan or between the clan and the world. Identity is dissolved in the higher wordless truth of ecstasy. In that world, all divisions are overcome. There is only the One Great Life; it sees itself at play, and it is glad.

The impact of plants on the evolution of culture and consciousness has not been widely explored, though a conservative form of this notion appears in R. Gordon Wasson’s The Road to Eleusis. Wasson does not comment on the emergence of self-reflection in hominids, but does suggest hallucinogenic mushrooms as the causal agent in the appearance of spiritually aware human beings and the genesis of religion. Wasson feels that omnivorous foraging humans would have sooner or later encountered hallucinogenic mushrooms or other psychoactive plants in their environment:

As man emerged from his brutish past, thousands of years ago, there was a stage in the evolution of his awareness when the discovery of the mushroom (or was it a higher plant?) with miraculous properties was a revelation to him, a veritable detonator to his soul, arousing in him sentiments of awe and reverence, and gentleness and love, to the highest pitch of which mankind is capable, all those sentiments and virtues that mankind has ever since regarded as the highest attribute of his kind. It made him see what this perishing mortal eye cannot see. How right the Greeks were to hedge about this Mystery, this imbibing of the potion with secrecy and surveillance! . . . Perhaps with all our modern knowledge we do not need the divine mushroom anymore. Or do we need them more than ever? Some are shocked that the key even to religion might be reduced to a mere drug. On the other hand, the drug is as mysterious as it ever was: “like the wind that comes we know not whence nor why.” Out of a mere drug comes the ineffable, comes ecstasy. It is not the only instance in the history of humankind where the lowly has given birth to the divine.

Scattered across the African grasslands, the mushrooms would be especially noticeable to hungry eyes because of their inviting smell and unusual form and color. Once having experienced the state of consciousness induced by the mushrooms, foraging humans would return to them repeatedly, in order to reexperience their bewitching novelty. This process would create what C. H. Waddington called a “creode,” a pathway of developmental activity, what we call a habit.

ECSTASY

We have already mentioned the importance of ecstasy for shamanism. Among early humans a preference for the intoxication experience was ensured simply because the experience was ecstatic. “Ecstatic” is a word central to my argument and preeminently worthy of further attention. It is a notion that is forced on us whenever we wish to indicate an experience or a state of mind that is cosmic in scale. An ecstatic experience transcends duality; it is simultaneously terrifying, hilarious, awe-inspiring, familiar, and bizarre. It is an experience that one wishes to have over and over again.

For a minded and language-using species like ourselves, the experience of ecstasy is not perceived as simple pleasure but, rather, is incredibly intense and complex. It is tied up with the very nature of ourselves and our reality, our languages, and our imaginings of ourselves. It is fitting, then, that it is enshrined at the center of shamanic approaches to existence. As Mircea Eliade pointed out, shamanism and ecstasy are at root one concern:

This shamanic complex is very old; it is found, in whole or in part, among the Australians, the archaic peoples of North and South America, in the polar regions, etc. The essential and defining element of shamanism is ecstasy: the shaman is a specialist in the sacred, able to abandon his body and undertake cosmic journeys “in the spirit” (in trance). “Possession” by spirits, although documented in a great many shamanisms, does not seem to have been a primary and essential element. Rather, it suggests a phenomenon of degeneration; for the supreme goal of the shaman is to abandon his body and rise to heaven or descend into hell, not to let himself be “possessed” by his assisting spirits, by demons or the souls of the dead; the shaman’s ideal is to master these spirits, not to let himself be “occupied” by them.

Gordon Wasson added these observations on ecstasy:

In his trance the shaman goes on a far journey, to the place of the departed ancestors, or the nether world, or there where the gods dwell, and this wonderland is, I submit, precisely where the hallucinogens take us. They are a gateway to ecstasy. Ecstasy in itself is neither pleasant nor unpleasant. The bliss or panic into which it plunges you is incidental to ecstasy. When you are in a state of ecstasy, your very soul seems scooped out from your body and away it goes. Who controls its flight: Is it you, or your “subconscious,” or a “higher power”? Perhaps it is pitch dark, yet you see and hear more clearly than you have ever seen or heard before. You are at last face to face with Ultimate Truth: this is the overwhelming impression (or illusion) that grips you. You may visit Hell, or the Elysian fields of Asphodel, or the Gobi desert, or Arctic wastes. You know awe, you know bliss, and fear, even terror. Everyone experiences ecstasy in his own way, and never twice in the same way. Ecstasy is the very essence of shamanism. The neophyte from the great world associates the mushrooms primarily with visions, but for those who know the Indian language of the shaman the mushrooms “speak” through the shaman. The mushroom is the Word: es habla, as Aurelio told me. The mushroom bestows on the curandero what the Greeks called Logos, the Aryan Vac, Vedic Kavya, “poetic potency,” as Louis Renou put it. The divine afflatus of poetry is the gift of the entheogen. The textual exegete skilled only in dissecting the cruces of the verses lying before him is of course indispensable and his shrewd observations should have our full attention, but unless gifted with Kavya, he does well to be cautious in discussing the higher reaches of Poetry. He dissects the verses but knows not ecstasy, which is the soul of the verses.

The Magic Language of the Fourth Way
by Pierre Bonnasse
pp. 228-234

Speech, just like sacred medicine, forms the basis of the shamanic path in that it permits us not only to see but also to do. Ethnobotany, the science that studies man as a function of his relationship to the plants around him, offers us new paths of reflection, explaining our relationship to language from a new angle that reconsiders all human evolution in a single movement. It now appears clear that the greatest power of the shaman, that master of ecstasy, resides in his mastery of the magic word stimulated by the ingestion of modifiers of consciousness.

For the shaman, language produces reality, our world being made of language. Terence McKenna, in his revolutionary endeavor to rethink human evolution, shows how plants have been able to influence the development of humans and animals. 41 He explains why farming and the domestication of animals as livestock were a great step forward in our cultural evolution: It was at this moment, according to him, that we were able to come into contact with the Psilocybe mushroom, which grows on and around dung. He supports the idea that “mutation-causing, psychoactive chemical compounds in the early human diet directly influenced the rapid reorganization of the brain’s information-processing capacities.” 42 Further, because “thinking about human evolution ultimately means thinking about the evolution of human consciousness,” he supports the thesis that psychedelic plants “may well have synergized the emergence of language and religion.” 43

Studies undertaken by Fischer have shown that weak doses of psilocybin can improve certain types of mental performance while making the investigator more aware of the real world. McKenna distinguishes three degrees of effects of psilocybin: improvement of visual acuity, increase of sexual excitation, and, at higher doses, “certainly . . . religious concerns would be at the forefront of the tribe’s consciousness, simply because of the power and strangeness of the experience itself.” 44 Because “the psilocybin intoxication is a rapture whose breadth and depth is the despair of prose,” it is entirely clear to McKenna that shamanic ecstasy, characterized by its “boundary-dissolving qualities,” played a crucial role in the evolution of human consciousness, which, according to him, can be attributed to “psilocybin’s remarkable property of stimulating the language-forming capacity of the brain.” Indeed, “[i]ts power is so extraordinary that psilocybin can be considered the catalyst to the human development of language.” 45 In response to the neo-Darwinist objection, McKenna states that “the presence of psilocybin in the hominid diet changed the parameters of the process of natural selection by changing the behavioral patterns upon which that selection was operating,” and that “the augmentation of visual acuity, language use, and ritual activity through the use of psilocybin represented new behaviors.” 46

Be that as it may, it is undeniable that the unlimiters of consciousness, as Charles Duits calls them, have a real impact upon linguistic activity in that they strongly stimulate the emergence of speech. If, according to McKenna’s theories, “psilocybin inclusion in the diet shifted the parameters of human behavior in favor of patterns of activity that promoted increased language,” resulting in “more vocabulary and an expanded memory capacity,” 47 then it seems obvious that the birth of poetry, literature, and all the arts came about ultimately through the fantastic encounter between humans and the magic mushroom—a primordial plant, the “umbilical cord linking us to the feminine spirit of the planet,” and thence, inevitably, to poetry. Rich in behavioral and evolutionary consequences, the mushroom, in its dynamic relationship to the human being, propelled us toward higher cultural levels developing parallel to self-reflection. 48

This in no way means that this level of consciousness is inherent in all people, but it must be observed that the experience in itself leads to a gaining of consciousness which, in order to be preserved and maintained, requires rigorous and well-directed work on ourselves. This being said, the experience allows us to observe this action in ourselves in order to endeavor to understand its subtle mechanisms. Terence McKenna writes,

Of course, imagining these higher states of self-reflection is not easy. For when we seek to do this we are acting as if we expect language to somehow encompass that which is, at present, beyond language, or translinguistic. Psilocybin, the hallucinogen unique to mushrooms, is an effective tool in this situation. Psilocybin’s main synergistic effect seems ultimately to be in the domain of language. It excites vocalization; it empowers articulation; it transmutes language into something that is visibly beheld. It could have had an impact on the sudden emergence of consciousness and language use in early humans. We literally may have eaten our way to higher consciousness. 49

If we espouse this hypothesis, then speaking means evoking and repeating the primordial act of eating the sacred medicine. Ethnobotanists insist upon the role of the human brain in the accomplishment of this process, pinpointing precisely the relevant area of activity, which, in Gurdjieffian terms, is located in the center of gravity of the intellectual center: “Our capacity for cognitive and linguistic activity is related to the size and organization of the human brain. . . . The most recently evolved areas of the human brain, Broca’s area and the neocortex, are devoted to the control of symbol and language processing.” 50 It thus appears that these are the areas of the brain that have allowed for the emergence of language and culture. Yet McKenna adds, “our linguistic abilities must have evolved in response to enormous evolutionary pressures,” though we do not know the nature of these pressures. According to him, it is this “immense power to manipulate symbols and language” that “gives us our unique position in the natural world.” 51 This is obvious, in that speech and consciousness, inextricably linked, are solely the property of humans. Thus it seems logical that the plants known as psychoactive must have been the catalysts “for everything about us that distinguishes us from other higher primates, for all the mental functions that we associate with humanness,” 52 with the primary position being held by language, “the most unique of human activities,” and the catalyst for poetic and literary activity.

Under the influence of an unlimiter, we have the incontrovertible impression that language possesses an objectified and visible dimension that is ordinarily hidden from our awareness. Under such conditions, language is seen and beheld just as we would ordinarily see our homes and normal surroundings. In fact, during the experience of the altered state, our ordinary cultural environment is recognized correctly as the bass drone in the ongoing linguistic business of objectifying the imagination. In other words, the collectively designed cultural environment in which we all live is the objectification of our collective linguistic intent.

Our language-forming ability may have become active through the mutagenic influence of hallucinogens working directly on organelles that are concerned with the processing and generation of signals. These neural substructures are found in various portions of the brain, such as Broca’s area, that govern speech formation. In other words, opening the valve that limits consciousness forces utterance, almost as if the word is a concretion of meaning previously felt but left unarticulated. This active impulse to speak, the “going forth of the word,” is sensed and described in the cosmogonies of many peoples.

Psilocybin specifically activates the areas of the brain concerned with processing signals. A common occurrence with psilocybin intoxication is spontaneous outbursts of poetry and other vocal activity such as speaking in tongues, though in a manner distinct from ordinary glossolalia. In cultures with a tradition of mushroom use, these phenomena have given rise to the notion of discourse with spirit doctors and supernatural allies. Researchers familiar with the territory agree that psilocybin has a profoundly catalytic effect on the linguistic impulse. 53

Here we are touching upon the higher powers of speech—spontaneous creations, outbursts of poetry and suprahuman communications—which are part of the knowledge of the shamans and “sorcerers” who, through years of rigorous education, have become highly perceptive of these phenomena, which elude the subjective consciousness. In his essay “The Mushrooms of Language,” Henry Munn points to the direct links existing between the states of ecstasy and language: “Language is an ecstatic activity of signification. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. . . . The spontaneity they liberate is not only perceptual, but linguistic . . . For the shaman, it is as if existence were uttering itself through him.” 54

In the 1920s, the Polish writer S. I. Witkiewicz, who attributed crucial importance to verbal creation, showed how peyote (he was one of the first people in Europe to experiment with it, or, at least, one of the first to give an account of doing so) acts upon the actual creation of words and also intervenes in the structure of sentences themselves: “. . . [I]t must also be remarked that peyote, perhaps by reason of the desire one has to capture with words that which cannot be captured, creates conceptual neologisms that belong to it alone and twists sentences in order to adapt their constructions to the frightening dimensions of its bizarrification . . .” 55 Peyote also gives those who ingest it a desire to create “new combinations of meanings.” Witkiewicz distinguishes three categories of objects in his visions: dead objects, moving objects, and living creatures. Regarding this last category, he distinguishes the “real” living creatures from the “fantastical” living creatures, which “discourage any attempt at description.” This is the moment when peyote intervenes: when those who wish to describe find themselves facing the limits of language. Peyote does not break through these limits; it simply shows that they do not exist, that they are hallucinations of the ordinary consciousness, that they are illusory, a mirage of tradition and the history of language.

The lucidogen—as it is called by Charles Duits, who created other neologisms for describing his experience with the sacred cactus—shows that life is present in everything, including speech, and he proves it. Sometimes, peyote leads us to the signifiers that escape us, always in order better to embrace the signified. Witkiewicz, pushing the phenomenon to the extreme limits of the senses and the sensible, insists:

I must draw attention to the fact that under the influence of peyote, one wants to make up neologisms. One of my friends, the most normal man in the world where language is concerned, in a state of trance and powerless to come to grips with the strangeness of these visions which defied all combinations of normal words, described them thus: “Pajtrakaly symforove i kondjioul v trykrentnykh pordeliansach.” I devised many formulas of this type on the night when I went to bed besieged by visions. I remember only this one. There is therefore nothing surprising in the fact that I, who have such inclinations even under normal conditions, should sometimes be driven to create some fancy word in order to attempt to disentangle and sort out the infernal vortex of creatures that unfurled upon me all night long from the depths of the ancient world of peyote. 56

Here, we cannot help but remember René Daumal’s experience, reported in “Le souvenir déterminant”: Under the influence of carbon tetrachloride, he pronounced with difficulty: “approximately: temgouf temgouf drr . . .” Henry Munn makes a similar remark after having taken part in shamanic rituals: “The mushroom session of language creates the words for phenomena without name.” 57 Sacred plants (and some other substances) are neologens, meaning they produce or generate neologisms from the attempts made at description by the subjects who consume them. This new word, this neologism created by circumstance, appears to be suited for this linguistic reality. We now have a word to designate this particular phenomenon pushing us against the limits of language, which in fact are revealed to be illusory.

Beyond this specific case, what is it that prevents us from creating new words whenever it appears necessary? Witkiewicz, speaking of language and life, defends the writer’s right to take liberties with the rules and invent new words. “Although certain professors insist on clinging to their own tripe,” he writes, “language is a living thing, even if it has always been considered a mummy, even if it has been thought impermissible to change anything in it. We can only imagine what literature, poetry, and even this accursed and beloved life would look like otherwise.” 58 Peyote not only incites us to this, but also, more forcefully, exercising a mysterious magnetic attraction toward a sort of supreme meaning beyond language and shaking up conventional signifiers and beings alike, peyote acts directly upon the heart of speech within the body of language. In this sense, it takes part actively and favorably in the creation of the being, the new and infinitely renewed human who, after a death that is more than symbolic, is reborn to new life. It is also very clear, in light of this example, that psilocybin alone does not explain everything, and that all lucidogenic substances work toward this same opening, this same outpouring of speech. McKenna writes:

Languages appear invisible to the people who speak them, yet they create the fabric of reality for their users. The problem of mistaking language for reality in the everyday world is only too well known. Plant use is an example of a complex language of chemical and social interactions. Yet most of us are unaware of the effects of plants on ourselves and our reality, partly because we have forgotten that plants have always mediated the human cultural relationship to the world at large. 59

pp. 238-239

It is interesting to note this dimension of speech specific to shamans, this inspired, active, healing speech. “It is not I who speak,” Heraclitus said, “it is the word.” The receptiveness brought about by an increased level of consciousness allows us not only to understand other voices, but also, above all, to express them in their entire magical substance. “Language is an ecstatic activity of signification. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. . . . The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, of the logos in activity.” 72

The shamanic paroxysm is therefore the mastery of the word, the mastery of the sacred songs very often inspired by the powers that live in plants—which instruct us, making us receptive to phenomena that escape the ordinary consciousness. The shaman becomes a channel through which subtle energies can pass. Because of the mystic intoxication, he becomes the instrument for spirits that express themselves through him. Hence the word tzo —“says”—which punctuates the phrases of the Mazatec shaman in her communication with the “little growing things”: “Says, says, says. It is said. I say. Who says! We say, man says, language says, being and existence say.” 73 “The inspired man,” writes the Mexican poet Octavio Paz in an essay on Breton, “the man who speaks the truth, says nothing that is his own: Through his mouth, it is the language that speaks.” 74

The language thus regains its primordial power, its creative force and Orphic value, which determine all true poetry, for, as Duits writes, poetry—which is born in the visionary experience—is nothing other than “the language of the gods.” There is nothing phantasmagoric, hallucinated, or illusory about this speech. “[W]ords are materializations of consciousness; language is a privileged vehicle of our relation to reality,” writes Munn. Because poetry carries the world, it is the language of power, a tool in the service of knowledge and action. The incantatory repetition of names, for example, an idea we have already touched upon in our discussion of prayer, acts upon the heart of the being. “The shaman has a conception of poesis in its original sense as an action: words themselves are medicine.” 75 The words—used in their sacred dimension—work toward the transmutation of being, the healing of the spirit, our development, but in order for it to be effective, the magic word must be born from a direct confrontation with the experience, because experience alone is a safe reserve for truth. Knowledge is not enough; only those who have eaten are in a position to understand, only those who have heard and seen are in a position to say. If speech goes farther than the eye, it is because it has the power of doing. “Though the psychedelic experience produced by the mushrooms is of heightened perceptivity,” Munn writes, “the I say is of privileged importance to the I see.” 76 Psychedelic speech is speech of power, revealing the spirit.

Darwin’s Pharmacy
by Richard M. Doyle
pp. 8-23

Rhetoric is the practice of learning and teaching eloquence, persuasion, and information architecture by revealing the choices of expression or interpretation open to any given rhetor, viewer, listener, or reader. Robert Anton Wilson offers a definition of rhetoric by example when he focuses on the word “reality” in his book Cosmic Trigger:

“Reality” is a word in the English language which happens to be (a) a noun and (b) singular. Thinking in the English language (and in cognate Indo-European languages) therefore subliminally programs us to conceptualize “reality” as one block-like entity, sort of like a huge New York skyscraper, in which every part is just another “room” within the same building. This linguistic program is so pervasive that most people cannot “think” outside it at all, and when one tries to offer a different perspective they imagine one is talking gibberish. (iii) […]

Mitchell’s vision offers perhaps an equally startling irony: it was only by taking on a literally extraterrestrial perspective that the moon walker overcame alienated perception.5 […]

“Thus, perception is not an object but rather the label for a nonlinear process involving an object, a percipient and information.” (Mitchell n.d.; emphasis mine) […]

Like the mind apprehending it, information “wants to be free” if only because it is essentially “not an object,” but rather “the label for a nonlinear process involving an object, a percipient and information.”6 It is worth noting that Mitchell’s experience induces a desire to comprehend, an impulse that is not only the desire to tell the story of his ecodelic imbrication but a veritable symptom of it.7 […]

What are psychedelics such that they seem to persuade humans of their interconnection with an ecosystem?

Terence McKenna’s 1992 book recursively answered this query with a title: Food of the Gods. Psychedelics, McKenna argued, were important vectors in the evolution of consciousness and spiritual practice. In his “shaggy primate story,” McKenna argued that psilocybin mushrooms were a “genome-shaping power” integral to the evolution of human consciousness. On this account, human consciousness—the only instance we know of where one part of the ecosystem is capable of reflecting on itself as a self and acting on the result—was “bootstrapped” by its encounter with the astonishing visions of high-dose psilocybin, an encounter with the Transcendental Other McKenna dubbed “a glimpse of the peacock angel.” Hence for McKenna, psychedelics are both a food fit for the gods and a food that, in scrambling the very distinction between food and drug, man and god, engenders less transcendence than immanence—each is recursively implicated, nested, in the other. […]

Evolutionarily speaking the emergence of widespread animal life on earth is not separable from a “mutualistic” economy of plants, pollinators, and seed dispersers.

The basis for the spectacular radiations of animals on earth today is clearly the resources provided by plants. They are the major primary producers, autotrophically energizing planet Earth…the new ecological relationships of flowering plants resulted in colonizing species with population structures conducive to rapid evolutionary change. (Price, 4)

And if mammalian and primate evolution is enmeshed in a systemic way with angiosperms (flowering plants), so too have humans and other primates been constantly constituted by interaction with plants. […]

Navigating our implication with both plants and their precipitates might begin, then, with the startling recognition of plants as an imbricated power, a nontrivial vector in the evolution of Homo sapiens, a power against which we have waged war. “Life is a rhizome,” wrote Carl Jung, our encrypted ecological “shadow” upon which we manifest as Homo sapiens, whose individuation is an interior folding or “involution” that increases, rather than decreases, our entanglement with any given ecosystem. […]

In other words, psychedelics are (a suppressed) part of evolution. As Italian ethnobotanist Giorgio Samorini put it, “the drug phenomenon is a natural phenomenon, while the drug problem is a cultural problem” (87). […]

Indeed, even DMT, an endogenous and very real product of the human brain, has been “scheduled” by the federal government. DMT would be precisely, by most first person accounts, “the most potent hallucinogen on sale in Haight or Ashbury or Telegraph Avenue” and is a very real attribute of our brains as well as plant ecology. We are all “holding” a Schedule One psychedelic—our own brains, wired for ecodelia, are quite literally against the law. […]

The first principle of harm reduction with psychedelics is therefore this: one must pay attention to set and setting, the organisms for whom and context in which the psychedelic experience unfolds. For even as the (re)discovery of psychedelics by twentieth-century technoscience suggested to many that consciousness was finally understandable via a molecular biology of the brain, this apex of reductionism also fostered the recognition that the effects of psychedelics depend on much more than neurochemistry.23 If ecodelics can undoubtedly provoke the onset of an extra-ordinary state of mind, they do so only on the condition of an excessive response-ability, a responsiveness to rhetorical conditions—the sensory and symbolic framework in which they are assayed. Psychologists Ralph Metzner and Timothy Leary made this point most explicitly in their discussion of session “programming,” the sequencing of text, sound, and sensation that seemed to guide, but not determine, the content of psychedelic experiences:

It is by now a well-known fact that psychedelic drugs may produce religious, aesthetic, therapeutic or other kinds of experiences depending on the set and setting…. Using programming we try to control the content of a psychedelic experience in specific desired directions. (5; reversed order)

Leary, Metzner, and many others have provided much shared code for such programming, but all of these recipes are bundled with an unavoidable but difficult to remember premise: an extraordinary sensitivity to initial rhetorical conditions characterizes psychedelic “drug action.” […]

Note that the nature of the psychedelic experience is contingent upon its rhetorical framing—what Leary, Metzner, and Richard Alpert characterized in The Psychedelic Experience as “the all-determining character of thought” in psychedelic experience. The force of rhetorical conditions here is immense— for Huxley it is the force linking premise to conclusion:

“No, I couldn’t control it. If one began with fear and hate as the major premise, one would have to go on to the conclusion.” (Ibid.)

Rhetorical technologies structure and enable fundamentally different kinds of ecodelic experiences. If the psychonaut “began” with different premises, different experiences would ensue.

pp. 33-37

Has this coevolution of rhetorical practices and humans ceased? This book will argue that psychedelic compounds have already been vectors of technoscientific change, and that they have been effective precisely because they are deeply implicated in the history of human problem solving. Our brains, against the law with their endogenous production of DMT, regularly go ecodelic and perceive dense interconnectivity. The human experience of radical interconnection with an ecosystem becomes a most useful snapshot of the systemic breakdowns between “autonomous” organisms necessary to sexual reproduction, and, not incidentally, they render heuristic information about the ecosystem as an ecosystem, amplifying human perception of the connections in their environment and allowing those connections to be mimed and investigated. This increased interconnection can be spurred simply by providing a different vision of the environment. Psychologist Roland Fischer noted that some aspects of visual acuity were heightened under the influence of psilocybin, and his more general theory of perception suggests that this acuity emerges out of a shift in sensory-motor ratios.

For Fischer the very distinction between “hallucination” and “perception” resides in the ratio between sensory data and motor control. Hallucination, for Fischer, is that which cannot be verified in three-dimensional Euclidean space. Hence Fischer differentiates hallucination from perception based not on truth or falsehood, but on a capacity to interact: if a subject can interact with a sensation, and at least work toward verifying it in their lived experience, navigating the shift in sensory-motor ratios, then the subject has experienced something on the order of perception. Such perception is easily fooled and is often false, but it appears to be sufficiently connective to our ecosystems to allow for human survival and sufficiently excitable for sexually selected fitness. If a human subject cannot interact with a sensation, Fischer applies the label “hallucination” for the purpose of creating a “cartography of ecstatic states.”

Given the testimony of psychonauts about their sense of interconnection, Fischer’s model suggests that ecodelic experience tunes perception through a shift of sensory-motor ratios toward an apprehension of, and facility for, interconnection: the econaut becomes a continuum between inside and outside. […] speech itself might plausibly emerge as nothing other than a symptom and practice of early hominid use of ecodelics.

pp. 51-52

It may seem that the visions—as opposed to the description of set and setting or even affect and body load—described in the psychonautic tradition elude this pragmatic dynamic of the trip report. Heinrich Klüver, writing in the 1940s, and Benny Shannon, writing in the early twenty-first century, both suggest that the forms of psychedelic vision (for mescaline and ayahuasca respectively) are orderly and consistent even while they are indescribable. Visions, then, would seem to be messages without a code (Barthes) whose very consistency suggested content.

Hence this general consensus on the “indescribableness” (Ellis) of psychedelic experience still yields its share of taxonomies as well as the often remarkable textual treatments of the “retinal circus” that has become emblematic of psychedelic experience. The geometric, fractal, and arabesque visuals of trip reports would seem to be little more than pale snapshots of the much sought after “eye candy” of visual psychedelics such as LSD, DMT, 2C-I, and mescaline. Yet as deeply participatory media technologies, psychedelics involve a learning curve capable of “going with” and accepting a diverse array of phantasms that challenge the beholder and her epistemology, ontology, and identity. Viewed with the requisite detachment, such visions can effect transformation in the observing self, as it finds itself nested within an imbricated hierarchy: egoic self observed by ecstatic Atman which apprehends itself as Brahman reverberating and recoiling back onto ego. Many contemporary investigators of DMT, for example, expect and often encounter what Terence McKenna described as the “machine elves,” elfin entities seemingly tinkering with the ontological mechanics of an interdimension, so much so that the absence of such entities is itself now a frequent aspect of trip reportage and skeptics assemble to debunk elfin actuality (Kent 2004).

p. 63

While synesthesia is classically treated as a transfer or confusion of distinct perceptions, as in the tactile and gustatory conjunction of “sharp cheese,” more recent work in neurobiology by V. S. Ramachandran and others suggests that this mixture is fundamental to language itself—the move from the perceptual to the signifying, in this view, is itself essentially synesthetic. Rather than an odd symptom of a sub-population, then, synesthesia becomes fundamental to any act of perception or communication, an attribute of realistic perception rather than a pathological deviation from it.

pp. 100-126

Rhetorical practices are practically unavoidable on the occasion of death, and scholars in the history of rhetoric and linguistics have both opined that it was as a practice of mourning that rhetoric emerged as a recognizable and repeatable practice in the “West.” […] It is perhaps this capacity of some rhetorical practices to induce and manage the breakdown of borders—such as those between male and female, life and death, silence and talk—that deserves the name “eloquence.” Indeed, the Oxford English Dictionary reminds us that it is the very difference between silence and speech that eloquence manages: a. Fr. éloquent, ad. L. ēloquent-em, pr. pple., f. ēloquī to speak out.2 […]

And despite Huxley’s concern that such an opening of the doors of (rhetorical) perception would be biologically “useless,” properly Darwinian treatments of such ordeals of signification would place them squarely within the purview of sexual selection—the competition for mates. If psychedelics such as the west African plant Iboga are revered for “breaking open the head,” it may be because we are rather more like stags butting heads than we are ordinarily comfortable putting into language (Pinchbeck 2004, cover). And our discomfort and fascination ensues, because sexual selection is precisely where sexual difference is at stake rather than determined. A gradient, sexuality is, of course, not a binary form but is instead an enmeshed involutionary zone of recombination: human reproduction takes place in a “bardo” or between space that is neither male nor female nor even, especially, human. Indeed, sex probably emerged as a technique for exploring the space of all possible genotypes, breaking the symmetry of an asexual reproduction and introducing the generative “noise” of sexuality with which Aldous Huxley’s flowers resonated. In this context, psychedelics become a way of altering the context of discursive signaling within which human reproduction likely evolved, a sensory rather than “extra-sensory” sharing of information about fitness.

Doctors of the Word

In an ecstatic treatment of Mazatec mushroom intoxication, Henry Munn casts the curanderas as veritable Sophists whose inebriation is marked by an incessant speaking:

The shamans who eat them, their function is to speak, they are the speakers who chant and sing the truth, they are the oral poets of their people, the doctors of the word, they who tell what is wrong and how to remedy it, the seers and oracles, the ones possessed by the voice. (Munn, 88)

Given the contingency of psychedelic states on the rhetorical conditions under which they are used, it is perhaps not surprising that the Mazatec, who have used the “little children” of psilocybin for millennia, have figured out how to modulate and even program psilocybin experience with rhetorical practices. But the central role enjoyed by rhetoricians here—those doctors of the word—should not obscure the difficulty of the shaman/rhetorician’s task: “possessed by the voice,” such curanderas less control psychedelic experience than consistently give themselves over to it. They do not wield ecstasy, but are taught by it. Munn’s mushroom Sophists are athletes of “negative capability,” nineteenth-century poet John Keats’s term for the capacity to endure uncertainty. Hence the programming of ecodelic experience enables not control but a practiced flexibility within ritual, a “jungle gym” for traversing the transhuman interpellation. […]

Fundamental to shamanic rhetoric is the uncertainty clustering around the possibility of being an “I,” an uncertainty that becomes the very medium in which shamanic medicine emerges. While nothing could appear more straightforward than the relationship between the one who speaks and the subject of the sentence “I speak,” Munn writes, sampling Heraclitus, “It is not I who speak…it is the logos.” This sense of being less in dialogue with a voice than a conduit for language itself leads Munn toward the concept of “ecstatic signification.”

Language is an ecstatic activity of signification…. Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth from the contact of the intention of articulation with the matter of experience. At times it is as if one were being told what to say, for the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings. Message fields of communication with the world, others, and one’s self are disclosed by the mushrooms. (Ibid., 88-89)

If these practices are “ecstatic,” they are so in the strictest of fashions. While recent usage tends to conjoin the “ecstatic” with enjoyment, its etymology suggests an ontological bifurcation—a “being beside oneself” in which the very location, if not existence, of a self is put into disarray and language takes on an unpredictable and lively agency: “words leap to mind, one after another.”3 This displacement suggests that the shaman hardly governs the speech and song she seemingly produces, but is instead astonished by its fluent arrival. Yet this surprise does not give way to panic, and the intoxication increases rather than retards fluency—if anything, Munn’s description suggests that for the Mazatec (and, perhaps, for Munn) psilocybin is a rhetorical adjunct that gives the speaker, singer, listener, eater access to “message fields of communication.” How might we make sense of this remarkable claim? What mechanisms would allow a speaker to deploy intoxication for eloquence?

Classically speaking, rhetoric has treated human discourse as a tripartite affair, a threefold mixture of ethos, an appeal based on character; logos, an appeal based on the word; and pathos, an appeal to or from the body.4 Numerous philosophers and literary critics since Jacques Derrida have decried the Western fascination with the logos, and many scholars have looked to the rich traditions of rhetoric for modalities associated with other offices of persuasion, deliberation, and transformation. But Munn’s account asks us to recall yet another forgotten rhetorical practice—a pharmacopeia of rhetorical adjuncts drawn from plant, fungus, and geological sources. In the context of the Mazatec, the deliberate and highly practiced ingestion of mushrooms serves to give the rhetor access not to individually created statements or acts of persuasion, but to “fields” of communication where rhetorical practice calls less for a “subject position” than it does a capacity to abide multiplicity—the combination and interaction, at the very least, of human and plant.

Writer, philosopher, and pioneering psychonaut Walter Benjamin noted that his experiments with hashish seemed to induce a “speaking out,” a lengthening of his sentences: “One is very much struck by how long one’s sentences are” (20). Longer sentences, of course, are not necessarily more eloquent in any ordinary sense than short ones, since scholars, readers, and listeners find that eloquence inheres in a response to any given rhetorical context. Indeed, Benjamin’s own telegraphic style in his hashish protocols becomes extraordinary, rare, and paradoxical given his own claim for long sentences in a short note. Yet Benjamin’s account does remind us that ecodelics often work on and with the etymological sense of “eloquence,” a “speaking out,” an outburst of language, a provocation to language. Benjamin reported that it was through language that material forms could be momentarily transformed: “The word ‘ginger’ is uttered and suddenly in place of the desk there is a fruit stand” (ibid., 21).

And yet if language and, indeed, the writing table, is the space where hashish begins to resonate for Benjamin, it does so only by making itself available to continual lacunae, openings and closings where, among other things, laughter occurs. For precisely as they are telegraphic, the hashish protocols of Benjamin create a series of non sequiturs: […]

Hashish, then, is an assassin of referentiality, inducing a butterfly effect in thought. In Benjamin, cannabis induces a parataxis wherein sentences less connect to each other through an explicit semantics than resonate together and summon coherence in the bardos between one statement and another. It is the silent murmur between sentences that is consistent while the sentences continually differentiate until, through repetition, an order appears: “You follow the same paths of thought as before. Only, they appear strewn with roses.”

For a comparable practice in classical rhetoric linking “intoxication” with eloquence, we return to Delphi, where the oracles made predictions persuasive even to the always skeptical Socrates, predictions whose oracular ecodelic speech was rendered through the invisible but inebriating “atmosphere” of ethylene gases—a geological rhetoric. Chemist Albert Hofmann, classicist Carl Ruck, ethnobotanist Jonathan Ott, and others have made a compelling case that at Eleusis, where Socrates, well before Bartleby, “preferred not” to go, the Greek Mysteries were delivered in the context of an ecodelic beverage, perhaps one derived from fermented grain or the ergot-laden sacrament kykeon, chemically analogous to LSD.5 These Mystery rites occasioned a very specific rhetorical practice—silence—since participants were forbidden from describing the kykeon or its effects. But silence, too, is a rhetorical practice, and one can notice that such a prohibition functions rhetorically not only to repress but also to intensify a desire to “speak out” of the silence that must come before and after Eleusis.

And Mazatec curandera Maria Sabina is explicit that indeed it is not language or even its putative absence, silence, that is an adjunct or “set and setting” for the mushrooms. Rather, the mushrooms themselves are a languaging, eloquence itself, a book that presents itself and speaks out:

At other times, God is not like a man: He is the Book. A Book that is born from the earth, a sacred Book whose birth makes the world shake. It is the Book of God that speaks to me in order for me to speak. It counsels me, it teaches me, it tells me what I have to say to men, to the sick, to life. The Book appears and I learn new words.6

Crucial to this “speaking” is the way in which Maria Sabina puts it. Densely interactive and composed of repetition, the rhetorical encounter with the mushroom is more than informative; it is pedagogical and transformative: “The Book appears and I learn new words.” The earth shakes with vitality, manifesting the mushroom orator.7 Like any good teacher, the mushrooms work with rhythms, repetitions that not only reinforce prior knowledge but induce one to take leave of it. “It counsels me, it teaches me.” The repetition of which and through which Maria Sabina speaks communicates more than knowledge, but allows for its gradual arrival, a rhythm of coming into being consonant and perhaps even resonant with the vibrations of the Earth, that scene of continual evolutionary transformation.

More than a supplement or adjunct to the rhetor, the mushroom is a transformer. Mary Barnard maps out a puppetry of flesh that entails becoming a transducer of the mushroom itself: “The mushroom-deity takes possession of the shaman’s body and speaks with the shaman’s lips. The shaman does not say whether the sick child will live or die; the mushroom says” (248).

Nor are reports of psilocybin’s effects as a rhetorical adjunct peculiar to Munn or even the Mazatec tradition. Over a span of ten years, psychologist Roland Fischer and his colleagues at Ohio State University tested the effects of psilocybin on linguistic function. Fischer articulated “the hallucination-perception continuum,” wherein hallucinations would be understood less as failed images of the real than virtual aspects of reality not verifiable in the “Euclidean” space projected by the human sensorium. Fischer, working with the literary critic Colin Martindale, located in the human metabolism of psilocybin (and its consequent rendering into psilocin) linguistic symptoms isomorphic to the epics of world literature. Psilocybin, Fischer and Martindale argued, provoked an increase in the “primary process content” of writing composed under the influence of psilocybin. Repetitious and yet corresponding to the very rhetorical structure of epics, psilocybin can thus be seen to be prima facie adjuncts to an epic eloquence, a “speaking out” that leaves rhetorical patterns consistent with the epic journey (Martindale and Fisher).

And in this journey, it is often language itself that is exhausted—there is a rhythm in the epic structure between the prolix production of primary process content and its interruption. Sage Ramana Maharshi described mouna, a “state which transcends speech and thought,” as the state that emerges only when “silence prevails.” […]

A more recent study conducted of high-dose psilocybin experience among international psychonauts suggested that over 35 percent of subjects heard what they called “the logos” after consuming psilocybin mushrooms.

Based on the responses to the question of the number of times psilocybin was taken, the study examined approximately 3,427 reported psilocybin experiences (n = 118). Of the total questionnaire responses (n = 128), 35.9% (n = 46) of the participants reported having heard a voice(s) with psilocybin use, while 64.0% (n = 82) of the participants stated that they had not. (Beach) […]
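As a rough arithmetic check on the figures quoted above (a minimal sketch in Python, using only the counts reported by Beach; the variable names are mine), the reported percentages and the implied average number of experiences per respondent do line up:

# Consistency check on the figures quoted from Beach (values taken from the quote above).
total_responses = 128   # total questionnaire responses (n = 128)
heard_voice = 46        # participants reporting having heard a voice(s) with psilocybin use
no_voice = 82           # participants reporting no voice
experiences = 3427      # approximate total reported psilocybin experiences
respondents = 118       # respondents who reported how many times they had taken psilocybin

print(round(100 * heard_voice / total_responses, 1))  # 35.9, as reported
print(round(100 * no_voice / total_responses, 1))     # 64.1 (the paper rounds to 64.0%)
print(round(experiences / respondents))               # about 29 experiences per respondent

In other words, the “over 35 percent” figure rests on 46 of 128 respondents, drawn from a pool averaging roughly 29 psilocybin experiences each.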

Inevitably, this flow fluctuates between silence and discourse. Michaux’s experiments with psychedelics rendered the now recognizable symptoms of graphomania, silence, and rhetorical amplification. In Miserable Miracle, one of the three books he wrote “with mescaline,” Michaux testifies to a strange transformation into a Sophist:

For the first time I understood from within that animal, till now so strange and false, that is called an orator. I seemed to feel how irresistible must be the propensity for eloquence in certain people. Mesc. acted in such a way that it gave me the desire to make proclamations. On what? On anything at all. (81)11

Hence, while their spectrum of effects is wide ranging and extraordinarily sensitive to initial rhetorical conditions, psychedelics are involved in an intense inclination to speak unto silence, to write and sing in a time not limited to the physical duration of the sacramental effect, and this involvement with rhetorical practice—the management of the plume, the voice, and the breath—appears to be essential to the nature of psychedelics; they are compounds whose most persistent symptoms are rhetorical. […]

Crucial to Krippner’s analysis, though, is the efficacy of psychedelics in peeling away these strata of rhetorical practice. As some layers of perception wither, others are amplified:

In one experiment (Jarvik et al. 1955), subjects ingested one hundred micrograms of LSD and demonstrated an increase in their ability to quickly cancel out words on a page of standardized material, but a decreased ability to cancel out individual letters. The drug seemed to facilitate the perceptions of meaningful language units while it interfered with the visual perception of non-meaningful ones. (Krippner, 220)

Krippner notes that the LSD functioned here as a perceptual adjunct, somehow tuning the visual perception toward increased semantic and hence rhetorical efficacy. This intensified visual perception of language no doubt yielded the familiar swelling of font most associated with psychedelic art and pioneered by the psychedelic underground press (such as the San Francisco Oracle.) By amplifying the visual aspect of font—whose medium is the psychedelic message—this psychedelic innovation remixes the alphabet itself, as more information (the visual, often highly sensory swelling of font) is embedded in a given sequence of (otherwise syntactic and semantic) symbols. More information is compressed into font precisely by working with the larger-scale context of any given message rather than its content. This apprehension of larger-scale contexts for any given data may be the very signature of ecodelic experience. Krippner reports that this sensory amplification even reached dimensional thresholds, transforming texts:

Earlier, I had tasted an orange and found it the most intense, delightful taste sensation I had ever experienced. I tried reading a magazine as I was “coming down,” and felt the same sensual delight in moving my eye over the printed page as I had experienced when eating the orange. The words stood out in three dimensions. Reading had never been such a sheer delight and such a complete joy. My comprehension was excellent. I quickly grasped the intent of the author and felt that I knew exactly what meaning he had tried to convey. (221)

Rather than a cognitive modulation, then, psychedelics in Krippner’s analysis seem to affect language function through an intensification of sensory attention on and through language, “a complete joy.” One of Krippner’s reports concerned a student attempting to learn German. The student reported becoming fascinated with the language in a most sensory fashion, noting that it was the “delicacy” of the language that allowed him to, well, “make sense” of it and indulge his desire to “string” together language:

The thing that impressed me at first was the delicacy of the language.…Before long, I was catching on even to the umlauts. Things were speeding up like mad, and there were floods of associations.…Memory, of course, is a matter of association and boy was I ever linking up to things! I had no difficulty recalling words he had given me—in fact, I was eager to string them together. In a couple of hours after that, I was even reading some simple German, and it all made sense. (Ibid.)

Krippner reports that by the end of his LSD session, the student “had fallen in love with German” (222). Krippner rightly notes that this “falling” is anything but purely verbal, and hypothesizes that psychedelics are adjuncts to “non-verbal training”: “The psychedelic session as non-verbal training represents a method by which an individual can attain a higher level of linguistic maturity and sophistication” (225).

What could be the mechanism of such a “non-verbal” training? The motor-control theory of language suggests that language is bootstrapped and developed out of the nonlinguistic rhythms of the ventral premotor system, whose orderly patterns provided the substrate of differential repetition necessary to the arbitrary configuration and reconfiguration of linguistic units. Neuroscientist V. S. Ramachandran describes the discovery of “mirror neurons” by Giaccamo Rizzolati. Rizzolati

recorded from the ventral premotor area of the frontal lobes of monkeys and found that certain cells will fire when a monkey performs a single, highly specific action with its hand: pulling, pushing, tugging, grasping, picking up and putting a peanut in the mouth etc. different neurons fire in response to different actions. One might be tempted to think that these are motor “command” neurons, making muscles do certain things; however, the astonishing truth is that any given mirror neuron will also fire when the monkey in question observes another monkey (or even the experimenter) performing the same action, e.g. tasting a peanut! (Ramachandran)

Here the distinction between observing and performing an action is confused, as watching a primate pick up a peanut becomes indistinguishable from picking up the peanut, at least from the perspective of an EEG. Such neurological patterns are not arbitrary, linked as they are to the isomorphic patterns that are the developmentally articulated motor control system of the body. This may explain how psychedelics can, according to Krippner, allow for the perceptual discernment of meaningful units. By releasing the attention from the cognitive self or ego, human subjects can focus their attention on the orderly structures “below” conscious awareness and distributed across their embodiment and environments. Robin Allot has been arguing for the motor theory of language evolution since the 1980s:

In the evolution of language, shapes or objects seen, sounds heard, and actions perceived or performed, generated neural motor programs which, on transfer to the vocal apparatus, produced words structurally correlated with the perceived shapes, objects, sounds and actions. (1989)

These perceived shapes, objects, sounds, and actions, of course, include the sounds, smells, visions, and actions continually transmitted by ecosystems and the human body itself, and by focusing the attention on them, we browse for patterns not yet articulated by our embodiment. Significantly, as neuroscientist Ramachandran points out, this “mirror neuron” effect seems to occur only when other living systems are involved:

When people move their hands a brain wave called the MU wave gets blocked and disappears completely. Eric Altschuller, Jamie Pineda, and I suggested at the Society for Neurosciences in 1998 that this suppression was caused by Rizzolati’s mirror neuron system. Consistent with this theory we found that such a suppression also occurs when a person watches someone else moving his hand but not if he watches a similar movement by an inanimate object.

Hence, in this view, language evolves and develops precisely by nonverbal means in interaction with other living systems, as the repetitions proper to language iterate on the basis of a prior repetition—the coordinated movements necessary to survival that are coupled to neurological patterns and linked to an animate environment. By blocking the “throttling embrace of the self,” ecodelics perhaps enable a resonance between the mind and nature not usually available to the attention. This resonance creates a continuum between words and things even as it appears to enable the differentiation between meaningful and nonmeaningful units: […]

This continuum between the abstract character of language and its motor control system is consistent with Krippner’s observation that “at the sensory level, words are encoded and decoded in highly unusual ways” (238). This differential interaction with the sensory attributes of language includes an interaction with rhythms and puns common to psychedelic experience, a capacity to become aware of a previously unobserved difference and connection. Puns are often denounced as, er, punishing a reader’s sense of taste, but in fact they set up a field of resonance and association between previously distinct terms, a nonverbal connection of words. In a highly compressed fashion, puns transmit novel information in the form of a meshed relation between terms that would otherwise remain, often for cultural or taboo reasons, radically distinct.12 This punning involves a tuning of a word toward another meaning, a “troping” or bending of language toward increased information through nonsemantic means such as rhyming. This induction of eloquence and its sensory perception becomes synesthetic as an oral utterance becomes visual: […]

Hence, if it is fair to characterize some psychedelic experiences as episodes of rhetorical augmentation, it is nonetheless necessary to understand rhetoric as an ecological practice, one which truly works with all available means of persuasion (Aristotle), human or otherwise, to increase the overall dissipation of energy in any given ecology. One “goes for broke,” attempting the hopeless task of articulating psychedelics in language until exhausting language of any possible referential meaning and becoming silent. By locating “new” information only implicit in a given segment of language and not semantically available to awareness, a pun increases the informational output of an ecosystem featuring humans. This seems to feedback, […]

Paired with an apprehension of the logos, this tuning in to ecodelia suggests that in “ego death,” many psychonauts experience a perceived awareness of what Vernadsky called the noösphere, the effects of their own consciousness on their ecosystem, about which they incessantly cry out: “Will we listen in time?”

In the introduction, I noted that the ecodelic adoption of this non-local and hence distributed perspective of the biosphere was associated with the apprehension of the cosmos as an interconnected whole, and with the language of “interpellation” I want to suggest that this sense of interconnection often appears in psychonautic testimony as a “calling out” by our evolutionary context. […]

The philosopher Louis Althusser used the language of “interpellation” to describe the function of ideology and its purchase on an individual subject to it, and he treats interpellation as precisely such a “calling out.” Rather than a vague overall system involving the repression of content or the production of illusion, ideology for Althusser functions through its ability to become an “interior” rhetorical force that is the very stuff of identity, at least any identity subject to being “hailed” by any authority it finds itself response-able to. I turn to that code commons Wikipedia for Althusser’s most memorable treatment of this concept:

Memorably, Althusser illustrates this with the concept of “hailing” or “interpellation.” He uses the example of an individual walking in a street: upon hearing a policeman shout “Hey you there!”, the individual responds by turning round, and in this simple movement of his body he is transformed into a subject. The person being hailed recognizes himself as the subject of the hail, and knows to respond.14

This sense of “hailing” and unconscious “turning” is appropriate to the experience of ecodelic interconnection I am calling “the transhuman interpellation.” Shifting back and forth between the nonhuman perspectives of the macro and the micro, one is hailed by the tiniest of details or largest of overarching structures as reminders of the way we are always already linked to the “evolutionary heritage that bonds all living things genetically and behaviorally to the biosphere” (Roszak et al., 14). And when we find, again and again, that such an interpellation by a “teacher” or other plant entity (à la the logos) is associated not only with eloquence but also with healing,15 we perhaps aren’t surprised by a close-up view of the etymology of “healing.” The Oxford English Dictionary traces it from the Teutonic “heilen,” which links it to “helig” or “holy.” And the alluvial flow of etymology connects “hailing” and “healing” in something more than a pun:

A Com. Teut. vb.: OE. hǽlan = OFris. hêla, OS. hêlian (MDu. hêlen, heilen, Du. heelen, LG. helen), OHG. heilan (Ger. heilen), ON. heil (Sw. hela, Da. hele), Goth. hailjan, deriv. of hail-s, OTeut. *hailo-z, OS. Hál: see HALE, WHOLE.16

Hailed by the whole, one can become healed through ecodelic practice precisely because the subject turns back on who they thought they were, becoming aware of the existence of a whole, a system in which everything “really is” connected—the noösphere. Such a vision can be discouraging and even frightening to the phantasmically self-birthed ego, who feels not guilt but a horror of exocentricity. It appears impossible to many of us that anything hierarchically distinct, and larger and more complex than Homo sapiens—such as Gaia—could exist, and so we often cry out as one in the wilderness, in amazement and repetition.

Synesthesia, and Psychedelics, and Civilization! Oh My!
Were cave paintings an early language?

Choral Singing and Self-Identity
Music and Dance on the Mind
Development of Language and Music
Spoken Language: Formulaic, Musical, & Bicameral
“Beyond that, there is only awe.”
“First came the temple, then the city.”
The Spell of Inner Speech
Language and Knowledge, Parable and Gesture

Ancient Atherosclerosis?

In reading about health, mostly about diet and nutrition, I regularly come across studies that are either poorly designed or poorly interpreted. The conclusions don’t always follow from the data or there are so many confounders that other conclusions can’t be discounted. Then the data gets used by dietary ideologues.

There is a major reason I appreciate the dietary debate among proponents of traditional, ancestral, paleo, low-carb, ketogenic, and other related views (anti-inflammatory diets, autoimmune diets, etc., such as the Wahls Protocol for multiple sclerosis and the Bredesen Protocol for Alzheimer’s). This area of alternative debate leans heavily on questioning conventional certainties by digging deep into the available evidence. These diets seem to attract people capable of changing their minds, or maybe it is simply that many people who eventually come to these unconventional views do so only after having already tried numerous other diets.

For example, Dr. Terry Wahls is a clinical professor of Internal Medicine, Epidemiology, and Neurology at the University of Iowa as well as Associate Chief of Staff at a Veterans Affairs hospital. She was as conventional as doctors come until she developed multiple sclerosis, began researching and experimenting, and eventually became a practitioner of functional medicine. She also went from being a hardcore vegetarian following mainstream dietary advice (avoiding saturated fats, eating whole grains and legumes, etc.) to embracing an essentially nutrient-dense paleo diet; her neurologist at the Cleveland Clinic had referred her to Dr. Loren Cordain’s paleo research at Colorado State University. Since that time, she has done medical research and, having recently procured funding, is conducting a study to further test her diet.

Her experimental attitude, both personal and scientific, is common among those interested in these kinds of diets and in functional medicine. Such an attitude is necessary when one steps outside of conventional wisdom, something Dr. Wahls felt she had to do to save her own life — a health crisis being the kind of motivation that leads many people to try a paleo, keto, or similar diet after trying all else (these approaches include protocols for serious illnesses, such as the medical use of ketosis to treat epileptic seizures). Because it contradicts the professional opinion of respected authorities (e.g., the American Heart Association), a diet like this tends to be an option of last resort for most people, something they come to after much failure and worsening health. That breeds a certain mentality.

On the other hand, it should be unsurprising that people raised on mainstream views and who hold onto those views long into adulthood (and long into their careers) tend not to be people willing to entertain alternative views, no matter what the evidence indicates. This includes those working in the medical field. Some ask, why are doctors so stupid? As Dr. Michael Eades explains, it’s not that they’re stupid but that many of them are ignorant; to put it more nicely, they’re ill-informed. They simply don’t know because, like so many others, they are repeating what they’ve been told by other authority figures. And the fact of the matter is most doctors never learned much about certain topics in the first place: “A study in the International Journal of Adolescent Medicine and Health assessed the basic nutrition and health knowledge of medical school graduates entering a pediatric residency program and found that, on average, they answered only 52 percent of the eighteen questions correctly. In short, most mainstream doctors would fail nutrition” (Dr. Will Cole, Ketotarian).

The reason people stick to the known, even when it is wrong, is that it is familiar and so it feels safe (and, because of liability, healthcare workers and health insurance companies prefer what is perceived as safe). Doctors, like everyone else, depend on heuristics to deal with a complex world. And doctors, more than most people, are too busy to explore the large amounts of data out there, much less analyze it carefully for themselves.

This may relate to why most doctors tend not to make the best researchers, not to dismiss those attempting to do quality research. For that reason, you might think scientific researchers who aren’t doctors would be different from doctors. But that obviously isn’t always the case because, if it were, Ancel Keys’ low-quality research wouldn’t have dominated professional dietary advice for more than half a century. Keys wasn’t a medical professional or even trained in nutrition; rather, he was educated in a wide variety of other fields (economics, political science, zoology, oceanography, biology, and physiology), with his earliest research done on the physiology of fish.

I came across yet another example of this, one less extreme than that of Keys and different in that at least some of the paper’s authors are medical doctors. The paper in question lists 19 authors. It is “Atherosclerosis across 4000 years of human history: the Horus study of four ancient populations,” peer-reviewed and published in 2013 in the highly respectable journal The Lancet (Keys’ work, one might note, was also highly respectable). This study on atherosclerosis was widely reported in mainstream news outlets and received much attention from those critical of paleo diets, who offered it as a final nail in the coffin, absolute proof that ancient people were as unhealthy as we are.

The 19 authors conclude that, “atherosclerosis was common in four preindustrial populations, including a preagricultural hunter-gatherer population, and across a wide span of human history. It remains prevalent in contemporary human beings. The presence of atherosclerosis in premodern human beings suggests that the disease is an inherent component of human ageing and not characteristic of any specific diet or lifestyle.” There you have it. Heart disease is simply in our genetics — so take your statin meds like your doctor tells you to do, just shut up and quit asking questions, quit looking at all the contrary evidence.

But even ignoring all else, does the evidence from this paper support their conclusion? No. It doesn’t require much research or thought to see how weak the presented case is. In the paper itself, on multiple occasions including in the second table, they admit that three out of four of the populations were farmers who ate a largely agricultural diet and, of course, lived an agricultural lifestyle. At most, these examples can speak to the conditions of the Neolithic but not the Paleolithic. Of these three, only one was transitioning from an earlier foraging lifestyle, but like the other two it was eating a higher-carb diet from the foods it farmed. Also, the best known example of the bunch, the Egyptians, particularly points to the problems of an agricultural diet — as described by Michael Eades in Obesity in ancient Egypt:

“[S]everal thousand years ago when the future mummies roamed the earth their diet was a nutritionist’s nirvana. At least a nirvana for all the so-called nutritional experts of today who are recommending a diet filled with whole grains, fresh fruits and vegetables, and little meat, especially red meat. Follow such a diet, we’re told, and we will enjoy abundant health.

“Unfortunately, it didn’t work that way for the Egyptians. They followed such a diet simply because that’s all there was. There was no sugar – it wouldn’t be produced for another thousand or more years. The only sweet was honey, which was consumed in limited amounts. The primary staple was a coarse bread made of stone-ground, whole wheat. Animals were used as beasts of burden and were valued much more for the work they could do than for the meat they could provide. The banks of the Nile provided fertile soil for growing all kinds of fruits and vegetables, all of which were a part of the low-fat, high-carbohydrate Egyptian diet. And there were no artificial sweeteners, artificial coloring, artificial flavors, preservatives, or any of the other substances that are part of all the manufactured foods we eat today.

“Were the nutritionists of today right about their ideas of the ideal diet, the ancient Egyptians should have had abundant health. But they didn’t. In fact, they suffered pretty miserable health. Many had heart disease, high blood pressure, diabetes and obesity – all the same disorders that we experience today in the ‘civilized’ Western world. Diseases that Paleolithic man, our really ancient ancestors, appeared to escape.”

With unintentional humor, the authors of the paper note that, “None of the cultures were known to be vegetarian.” No shit. Maybe that is because until late in the history of agriculture there were no vegetarians and for good reason. As Weston Price noted, there is a wide variety of possible healthy diets as seen in traditional communities. Yet for all his searching for a healthy traditional community that was strictly vegan or even vegetarian, he could never find any; the closest examples were those that relied largely on such things as insects and grubs because of a lack of access to larger sources of protein and fat. On the other hand, the most famous vegetarian population, Hindu Indians, have one of the shortest lifespans (to be fair, though, that could be for other reasons such as poverty-related health issues).

Interestingly, there apparently has never been a study done comparing a herbivore diet and a carnivore diet, although one study touched on it while not quite eliminating all plants from the latter. As for fat, there is no evidence that it is problematic (vegetable oils are another issue), if anything the opposite: “In a study published in the Lancet, they found that people eating high quantities of carbohydrates, which are found in breads and rice, had a nearly 30% higher risk of dying during the study than people eating a low-carb diet. And people eating high-fat diets had a 23% lower chance of dying during the study’s seven years of follow-up compared to people who ate less fat” (Alice Park, The Low-Fat vs. Low-Carb Diet Debate Has a New Answer); and “The Mayo Clinic published a study in the Journal of Alzheimer’s Disease in 2012 demonstrating that in individuals favoring a high-carb diet, risk for mild cognitive impairment was increased by 89%, contrasted to those who ate a high-fat diet, whose risk was decreased by 44%” (WebMD interview of Dr. David Perlmutter). Yet the respectable authorities tell us that fat is bad for our health, making it paradoxical that many fat-gluttonous societies have better health. There are so many paradoxes, according to conventional thought, that one begins to wonder if conventional thought is the real paradox.

Now let me discuss the one group, the Unangan, that at first glance stands out from the rest. The authors describe them as “five Unangan people living in the Aleutian Islands of modern day Alaska (ca 1756–1930 CE, one excavation site).” Those mummies are far different from those of the other populations, which came much earlier in history. Four of the Unangan died around 1900 and one around 1850. Why does that matter? Because their entire world was being turned on its head at that time. The authors claim that, “The Unangan’s diet was predominately marine, including seals, sea lions, sea otters, whale, fish, sea urchins, and other shellfish and birds and their eggs. They were hunter-gatherers living in barabaras, subterranean houses to protect against the cold and fierce winds.” They base this claim on the assumption that these particular mummified Unangan had been eating the same diet as their ancestors for thousands of years, but the evidence points in the opposite direction.

Questioning this assumption, Jeffery Gerber explains that, “During life (before 1756–1930 CE) not more than a few short hundred years ago, the 5 Unangan/Aleut mummies were hardly part of an isolated group. The Fur Seal industry exploded in the 18th century bringing outside influence, often violent, from countries including Russia and Europe. These mummies during life, were probably exposed to foods (including sugar) different from their traditional diet and thus might not be representative of their hunter-gatherer origins” (Mummies, Clogged Arteries and Ancient Junk Food). One might add that, whatever Western foods may have been introduced, we do know of another factor — the Government of Nunavut’s official website states that, “European whalers regularly travelled to the Arctic in the late 17th and 18th century. When they visited, they introduced tobacco to Inuit.” Why is that significant? Tobacco is a known risk factor for atherosclerosis. Gideon Mailer and Nicola Hale, in their book Decolonizing the Diet, elaborate on the colonial history of the region (pp. 162-171):

“On the eve of Western contact, the indigenous population of present-day Alaska numbered around 80,000. They included the Alutiiq and Unangan communities, more commonly defined as Aleuts, Inupiat and Yupiit, Athabaskans, and the Tinglit and Haida groups. Most groups suffered a stark demographic decline from the mid-eighteenth century to the mid-nineteenth century, during the period of extended European — particularly Russian — contact. Oral traditions among indigenous groups in Alaska described whites as having taken hunting grounds from other related communities, warning of a similar fate to their own. The Unangan community, numbering more than 12,000 at contact, declined by around 80 percent by 1860. By as early as the 1820s, as Jacobs has described, “The rhythm of life had changed completely in the Unangan villages now based on the exigencies of the fur trade rather than the subsistence cycle, meaning that often villages were unable to produce enough food to keep them through the winter.” Here, as elsewhere, societal disruption was most profound in the nutritional sphere, helping account for the failure to recover population numbers following disease epidemics.

“In many parts of Alaska, Native American nutritional strategies and ecological niches were suddenly disrupted by the arrival of Spanish and Russian settlers. “Because,” as Saunt has pointed out “it was extraordinarily difficult to extract food from the challenging environment,” in Alaska and other Pacific coastal communities, “any disturbance was likely to place enormous stress on local residents.” One of indigenous Alaska’s most important ecological niches centered on salmon access points. They became steadily more important between the Paleo-Eskimo era around 4,200 years ago and the precontact period, but were increasingly threatened by Russian and American disruptions from the 1780s through the nineteenth century. Dependent on nutrients and omega fatty acids such as DHA from marine resources such as salmon, Aleut and Alutiiq communities also required other animal products, such as intestines, to prepare tools and waterproof clothing to take advantage of fishing seasons. Through the later part of the eighteenth century, however, Russian fur traders and settlers began to force them away from the coast with ruthless efficiency, even destroying their hunting tools and waterproof apparatus. The Russians were clear in their objectives here, with one of their men observing that the Native American fishing boats were “as indispensable as the plow and the horse for the farmer.”

“Here we are provided with another tragic case study, which allows us to consider the likely association between disrupted access to omega-3 fatty acids such as DHA and compromised immunity. We have already noted the link between DHA, reduced inflammation and enhanced immunity in the millennia following the evolution of the small human gut and the comparatively larger human brain. Wild animals, but particularly wild fish, have been shown to contain far higher proportions of omega-3 fatty acids than the food sources that apparently became more abundant in Native American diets after European contact, including in Alaska. Fat-soluble vitamins and DHA are abundantly found in fish eggs and fish fats, which were prized by Native Americans in the Northwest and Great Lakes regions, in the marine life used by California communities, and perhaps more than anywhere else, in the salmon products consumed by indigenous Alaskan communities. […]

“In Alaska, where DHA and vitamin D-rich salmon consumption was central to precontact subsistence strategies, alongside the consumption of nutrient-dense animal products and the regulation of metabolic hormones through periods of fasting or even through the efficient use of fatty acids or ketones for energy, disruptions to those strategies compromised immunity among those who suffered greater incursions from Russian and other European settlers through the first half of the nineteenth century.

“A collapse in sustainable subsistence practices among the Aleuts of Alaska exacerbated population decline during the period of Russian contact. The Russian colonial regime from the 1740s to 1840s destroyed Aleut communities through open warfare and by attacking and curtailing their nutritional resources, such as sea otters, which Russians plundered to supply the Chinese market for animal skins. Aleuts were often forced into labor, and threatened by the regular occurrence of Aleut women being taken as hostages. Curtailed by armed force, Aleuts were often relocated to the Pribilof Islands or to California to collect seals and sea otters. The same process occurred as Aleuts were co-opted into Russian expansion through the Aleutian Islands, Kodiak Island and into the southern coast of Alaska. Suffering murder and other atrocities, Aleuts provided only one use to Russian settlers: their perceived expertise in hunting local marine animals. They were removed from their communities, disrupting demography further and preventing those who remained from accessing vital nutritional resources due to the discontinuation of hunting frameworks. Colonial disruption, warfare, captivity and disease were accompanied by the degradation of nutritional resources. Aleut population numbers declined from 18,000 to 2,000 during the period of Russian occupation in the first half of the nineteenth century. A lag between the first period of contact and the intensification of colonial disruption demonstrates the role of contingent interventions in framing the deleterious effects of epidemics, including the 1837-38 smallpox epidemic in the region. Compounding these problems, communities used to a relatively high-fat and low-fructose diet were introduced to alcohol by the Russians, to the immediate detriment of their health and well-being.”

The traditional hunter-gatherer diet, as Mailer and Hale describe it, was high in the nutrients that protect against inflammation. The loss of these nutrients and the simultaneous decimation of the population was a one-two punch. Without the nutrients, their immune systems were compromised. And with their immune systems compromised, they were prone to all kinds of health conditions, probably including heart disease, which of course is related to inflammation. Weston A. Price, in Nutrition and Physical Degeneration, observed that morbidity and mortality of health conditions such as heart disease rise and fall with the seasons, following precisely the growth and dying away of vegetation throughout the year (which varies by region as do the morbidity and mortality rates; the regions of comparison were in the United States and Canada). He was able to track this down to the change in fat soluble vitamins, specifically vitamin D, in dairy. When fresh vegetation was available, cows ate it and so produced more of these nutrients and presumably more omega-3s at the same time.

Prior to colonization, the Unangan would have had access to even higher levels of these protective nutrients year round. The most nutritious dairy taken from the springtime wouldn’t come close in comparison to the nutrient profile of wild game. I don’t know why anyone would be shocked that, like agricultural populations, hunter-gatherers also experience worsening health after loss of wild resources. Yet the authors of the mummy study act like they made a radical discovery that throws to the wind every doubt anyone ever had about simplistic mainstream thought. It turns out, they seem to be declaring, that we are all victims of genetic determinism after all and so toss out your romantic fairy tales about healthy primitives from the ancient world. The problem is all the evidence that undermines their conclusion, including the evidence that they present in their own paper, that is, when it is interpreted in full context.

As if responding to the researchers, Mailer and Hale write (p. 186): “Conditions such as diabetes are thus often associated with heart disease and other syndromes, given their inflammatory component. They now make up a huge proportion of treatment and spending in health services on both sides of the Atlantic. Yet policy makers and researchers in those same health services often respond to these conditions reactively rather than proactively — as if they were solely genetically determined, rather than arising due to external nutritional factors. A similarly problematic pattern of analysis, as we have noted, has led scholars to ignore the central role of nutritional change in Native American population loss after European contact, focusing instead on purportedly immutable genetic differences.”

There is another angle related to the above but somewhat at a tangent. I’ll bring it up because the research paper mentions it in passing as a factor to be considered: “All four populations lived at a time when infections would have been a common aspect of daily life and the major cause of death. Antibiotics had yet to be developed and the environment was non-hygienic. In 20th century hunter-foragers-horticulturalists, about 75% of mortality was attributed to infections, and only 10% from senescence. The high level of chronic infection and inflammation in premodern conditions might have promoted the inflammatory aspects of atherosclerosis.”

This is familiar territory for me, as I’ve been reading much about inflammation and infections. The authors are presenting the old view of the immune system, as opposed to that of functional medicine, which looks at the entire human. An example of the latter is the hygiene hypothesis, which argues it is exposure to microbes that strengthens the immune system, and there has been much evidence in support of it (such as children raised with animals or on farms being healthier as adults). The researchers above are making an opposing argument that is contradicted by populations remaining healthy when lacking modern medicine, as long as they maintain a traditional diet and lifestyle in a healthy ecosystem, including living soil that hasn’t been depleted by intensive farming.

This isn’t only about agriculturalists versus hunter-gatherers. The distinction between populations goes deeper into culture and environment. Weston A. Price discovered this simple truth in finding healthy populations among both agriculturalists and hunter-gatherers, but it was specific populations under specific conditions. Also, at the time when he traveled in the early 20th century, there were still traditional communities living in isolation in Europe. One example is the Loetschental Valley in Switzerland, which he visited on two separate trips in the consecutive years of 1931 and 1932 — as he writes of it:

“We were told that the physical conditions that would not permit people to obtain modern foods would prevent us from reaching them without hardship. However, owing to the completion of the Loetschberg Tunnel, eleven miles long, and the building of a railroad that crosses the Loetschental Valley, at a little less than a mile above sea level, a group of about 2,000 people had been made easily accessible for study, shortly prior to 1931. Practically all the human requirements of the people in that valley, except a few items like sea salt, have been produced in the valley for centuries.”

He points out that, “Notwithstanding the fact that tuberculosis is the most serious disease of Switzerland, according to a statement given me by a government official, a recent report of inspection of this valley did not reveal a single case.” In Switzerland and other countries, he found an “association of dental caries and tuberculosis.” The commonality was early life development, as underdeveloped and maldeveloped bone structure led to diverse issues: crowded teeth, smaller skull size, misaligned features, and what was called tubercular chest. And that was an outward sign of deeper and more systemic developmental issues, including malnutrition, inflammation, and the immune system:

“Associated with a fine physical condition the isolated primitive groups have a high level of immunity to many of our modern degenerative processes, including tuberculosis, arthritis, heart disease, and affections  of the internal organs. When, however, these individuals have lost this high level of physical excellence a definite lowering in their resistance to the modern degenerative processes has taken place. To illustrate, the narrowing of the facial and dental arch forms of the children of the modernized parents, after they had adopted the white man’s food, was accompanied by an increase in susceptibility to pulmonary tuberculosis.”

Any population that lost its traditional way of life became prone to disease. But this could often as easily be reversed by having the diseased individual return to healthy conditions. In discussing Dr. Josef Romig, Price said that, “Growing out of his experience, in which he had seen large numbers of the modernized Eskimos and Indians attacked with tuberculosis, which tended to be progressive and ultimately fatal as long as the patients stayed under modernized living conditions, he now sends them back when possible to primitive conditions and to a primitive diet, under which the death rate is very much lower than under modernized  conditions. Indeed, he reported that a great majority of the afflicted recover under the primitive type of living and nutrition.”

The point made by Mailer and Hale was earlier made by Price. As seen with pre-contact Native Alaskans, the isolated traditional residents of the Loetschental Valley had nutritious diets. Price explained that he “arranged to have samples of food, particularly dairy products, sent to me about twice a month, summer and winter. These products have been tested for their mineral and vitamin contents, particularly the fat-soluble activators. The samples were found to be high in vitamins and much higher than the average samples of commercial dairy products in America and Europe, and in the lower areas of Switzerland.” Whether fat and organ meats from marine animals or dairy from pastured alpine cows, the key is high levels of fat soluble vitamins and, of course, omega-3 fatty acids procured from a pristine environment (healthy soil and clean water with no toxins, farm chemicals, hormones, etc). It also helped that both populations ate much that was raw, which maintains the high nutrient content that is partly destroyed through heat.

Some might find it hard to believe that what you eat can determine whether or not you get a serious disease like tuberculosis. Conventional medicine tells us that the only thing that protects us is either avoiding contact or vaccination. But this view is being seriously challenged, as Mailer and Hale make clear (p. 164): “Several studies have focused on the link between Vitamin D and the health outcomes of individuals infected with tuberculosis, taking care to discount other causal factors and to avoid determining causation merely through association. Given the historical occurrence of the disease among indigenous people after contact, including in Alaska, those studies that have isolated the contingency of immunity on active Vitamin D are particularly pertinent to note. In biochemical experiments, the presence of the active form of vitamin D has been shown to have a crucial role in the destruction of Mycobacterium tuberculosis by macrophages. A recent review has found that tuberculosis patients tend to retain a lower-than-average vitamin D status, and that supplementation of the nutrient improved outcomes in most cases.” As an additional thought, the popular tuberculosis sanatoriums, some in the Swiss Alps, were attractive because “it was believed that the climate and above-average hours of sunshine had something to do with it” (Jo Fahy, A breath of fresh air for an alpine village). What does sunlight help the body to produce? Vitamin D.

As an additional perspective, James C. Scott, in Against the Grain, writes that, “Virtually every infectious disease caused by micro-organisms and specifically adapted to Homo sapiens has arisen in the last ten thousand years, many of them in the last five thousand years as an effect of ‘civilisation’: cholera, smallpox, measles, influenza, chickenpox, and perhaps malaria.” It is not only that agriculture introduces new diseases but also makes people susceptible to them. That might be true, as Scott suggests, even of a disease like malaria. The Piraha are more likely to die of malaria than anything else, but that might not have been true in the past. Let me offer a speculation by connecting to the mummy study.

The Ancestral Puebloans, one of the groups in the mummy study, were at the time farming maize (corn) and squash while foraging pine nuts, seeds, amaranth (grain), and grasses. How does this compare to the more recent Piraha? A 1948 Smithsonian publication, Handbook of South American Indians (ed. Julian H. Steward), reported that, “The Piraha grew maize, sweet manioc (macaxera), a kind of yellow squash (jurumum), watermelon, and cotton” (p. 267). So it turns out that, like the Ancestral Puebloans, the Piraha have been on their way toward a more agricultural lifestyle for a while. I also noted that the same publication added the detail that the Piraha “did not drink rum,” but by the time Daniel Everett met the Piraha in 1978 traders had already introduced them to alcohol and it had become an occasional problem. Not only were they becoming agricultural but also Westernized, two factors that likely contributed to decreased immunity.

Like other modern hunter-gatherers, the Piraha have been affected by the Neolithic Revolution and are in many ways far different from Paleolithic hunter-gatherers. Ancient dietary habits are shown in the analysis of ancient bones — M.P. Richards writes that, “Direct evidence from bone chemistry, such as the measurement of the stable isotopes of carbon and nitrogen, do provide direct evidence of past diet, and limited studies on five Neanderthals from three sites, as well as a number of modern Palaeolithic and Mesolithic humans indicates the importance of animal protein in diets. There is a significant change in the archaeological record associated with the introduction of agriculture worldwide, and an associated general decline in health in some areas. However, there is a rapid increase in population associated with domestication of plants, so although in some regions individual health suffers after the Neolithic revolution, as a species humans have greatly expanded their population worldwide” (A brief review of the archaeological evidence for Palaeolithic and Neolithic subsistence). This is further supported in the analysis of coprolites. “Studies of ancient human coprolites, or fossilized human feces, dating anywhere from three hundred thousand to as recent as fifty thousand years ago, have revealed essentially a complete lack of any plant material in the diets of the subjects studied (Bryant and Williams-Dean 1975),” Nora Gedgaudas tells us in Primal Body, Primal Mind (p. 39).

This diet changed as humans entered our present interglacial period with its warmer temperatures and greater abundance of vegetation, which was lacking during the Paleolithic Period: “There was far more plant material in the diets of our more recent ancestors than our more ancient hominid ancestors, due to different factors” (Gedgaudas, p. 37). Following the earlier megafauna mass extinction, it wasn’t only agriculturalists but also hunter-gatherers who began to eat more plants and in many cases make use of cultivated plants (either that they cultivated or that they adopted from nearby agriculturalists). To emphasize how drastic was this change, this loss of abundant meat and fat, consider the fact that humans have yet to regain the average height and skull size of Paleolithic humans.

The authors of the mummy study didn’t even attempt to look at the data of Paleolithic humans. The populations compared are entirely from the past few millennia. And the only hunter-gatherer group included was post-contact. So, why are the authors so confident in their conclusion? I presume they were simply trying to get published and get media attention in a highly competitive market of academic scholarship. These people obviously aren’t stupid, but they had little incentive to fully inform themselves either. All the info I shared in this post I was able to gather in about half an hour with a few web searches, not exactly difficult academic research. It’s amazing the info that is easily available these days, for those who want to find it.

Let me make one last point. The mummy study isn’t without its merits. The paper mentions other evidence that remains to be explained: “We also considered the reliability and previous work of the authors. Autopsy studies done as long ago as the mid-19th century showed atherosclerosis in ancient Egyptians. Also, in more recent times, Zimmerman undertook autopsies and described atherosclerosis in the mummies of two Unangan men from the same cave as our Unangan mummies and of an Inuit woman who lived around 400 CE. A previous study using CT scanning showed atherosclerotic calcifications in the aorta of the Iceman, who is believed to have lived about 3200 BCE and was discovered in 1991 in a high snowfield on the Italian-Austrian border.”

Let’s break that down. Further examples of Egyptian mummies are irrelevant, as their diet was so strikingly similar to the idealized Western diet recommended by mainstream doctors, dieticians, and nutritionists. That leaves the rest to account for. The older Unangan mummies are far more interesting, and any meaningful paper would have led with that piece of data, but even then it wouldn’t mean what the authors think it means. Atherosclerosis is one small factor and not necessarily as significant as assumed. From a functional medicine perspective, it’s the whole picture that matters in how the body actually functions and in the health that results. If so, atherosclerosis might not indicate the same thing for all populations. In Nourishing Diets, Morell writes that (pp. 124-5),

“Critics have pointed out that Keys omitted from his study many areas of the world where consumption of animal foods is high and deaths from heart attack are low, including France — the so-called French paradox. But there is also a Japanese paradox. In 1989, Japanese scientists returned to the same two districts that Keys had studied. In an article titled “Lessons from Science from the Seven Countries Study,” they noted that per capita consumption of rice had declined, while consumption of fats, oils, meats, poultry, dairy products and fruit had all increased. […]

“During the postwar period of increased animal consumption, the Japanese average height increased three inches and the age-adjusted death rate from all causes declined from 17.6 to 7.4 per 1,000 per year. Although the rates of hypertension increased, stroke mortality declined markedly. Deaths from cancer also went down in spite of the consumption of animal foods.

“The researchers also noted — and here is the paradox — that the rate of myocardial infarction (heart attack) and sudden death did not change during this period, in spite of the fact that the Japanese weighed more, had higher blood pressure and higher cholesterol levels, and ate more fat, beef and dairy foods.”

Right here in the United States, we have our own ‘paradox’ as well. Good Calories, Bad Calories by Gary Taubes makes a compelling argument that, based on the scientific research, there is no strong causal link between atherosclerosis and coronary heart disease. Nina Teicholz has also written extensively about this, such as in her book The Big Fat Surprise; and in an Atlantic piece (How Americans Got Red Meat Wrong) she lays out some of the evidence showing that Americans in the 19th century, as compared to the following century, ate more meat and fat while they ate fewer vegetables and fruits. Nonetheless: “During all this time, however, heart disease was almost certainly rare. Reliable data from death certificates is not available, but other sources of information make a persuasive case against the widespread appearance of the disease before the early 1920s.” Whether or not earlier Americans had high rates of atherosclerosis, there is strong evidence indicating they did not have high rates of heart disease, of strokes and heart attacks. The health crisis for these conditions, as Teicholz notes, didn’t take hold until the very moment meat and animal fat consumption took a nosedive. So what gives?

The takeaway is this. We have no reason to assume that atherosclerosis in the present or in the past can tell us much of anything about general health. Even ignoring the fact that none of the mummies studied was from a high protein and high fat Paleo population, we can make no meaningful interpretations of the presence of atherosclerosis among some of the individuals. Going by modern data, there is no reason to jump to the conclusion that they had high mortality rates because of it. Quite likely, they died from completely unrelated health issues. A case in point is that of the Masai, around which there is much debate in interpreting the data. George V. Mann and others wrote a paper, Atherosclerosis in the Masai, that demonstrated the complexity:

“The hearts and aortae of 50 Masai men were collected at autopsy. These pastoral people are exceptionally active and fit and they consume diets of milk and meat. The intake of animal fat exceeds that of American men. Measurements of the aorta showed extensive atherosclerosis with lipid infiltration and fibrous changes but very few complicated lesions. The coronary arteries showed intimal thickening by atherosclerosis which equaled that of old U.S. men. The Masai vessels enlarge with age to more than compensate for this disease. It is speculated that the Masai are protected from their atherosclerosis by physical fitness which causes their coronary vessels to be capacious.”

Put this in the context provided in What Causes Heart Disease? by Sally Fallon Morell and Mary Enig: “The factors that initiate a heart attack (or a stroke) are twofold. One is the pathological buildup of abnormal plaque, or atheromas, in the arteries, plaque that gradually hardens through calcification. Blockage most often occurs in the large arteries feeding the heart or the brain. This abnormal plaque or atherosclerosis should not be confused with the fatty streaks and thickening that is found in the arteries of both primitive and industrialized peoples throughout the world. This thickening is a protective mechanism that occurs in areas where the arteries branch or make a turn and therefore incur the greatest levels of pressure from the blood. Without this natural thickening, our arteries would weaken in these areas as we age, leading to aneurysms and ruptures. With normal thickening, the blood vessel usually widens to accommodate the change. But with atherosclerosis the vessel ultimately becomes more narrow so that even small blood clots may cause an obstruction.”

A distinction is being made here that maybe wasn’t being made in the mummy study. What gets measured as atherosclerosis could correlate to diverse health conditions and consequences in various populations across dietary lifestyles, regional environments, and historical and prehistorical periods. Finding atherosclerosis in an individual, especially a mummy, might not tell us any useful info about overall health.

Just for good measure, let’s tackle the last piece of remaining evidence the authors mention: “A previous study using CT scanning showed atherosclerotic calcifications in the aorta of the Iceman, who is believed to have lived about 3200 BCE and was discovered in 1991 in a high snowfield on the Italian-Austrian border.” Calling him Iceman, to most ears, sounds similar to calling an ancient person a caveman — implying that he was a hunter, for it is hard to grow plants on ice. In response, Paul Mabry, writing in Did Meat Eating Make Ancient Hunter Gatherers Get Heart Disease, shows what was left out of the research paper:

“Sometimes the folks trying to discredit hunter-gatherer diets bring in Ötzi, “The Iceman” a frozen human found in the Tyrolean Mountains on the border between Austria and Italy that also had plaques in his heart arteries. He was judged to be 5300 years old making his era about 3400 BCE. Most experts feel agriculture had reached Europe almost 700 years before that according to this article. And Ötzi himself suggests they are right. Here’s a quote from the Wikipedia article on Ötzi’s last meal (a sandwich): “Analysis of Ötzi’s intestinal contents showed two meals (the last one consumed about eight hours before his death), one of chamois meat, the other of red deer and herb bread. Both were eaten with grain as well as roots and fruits. The grain from both meals was a highly processed einkorn wheat bran,[14] quite possibly eaten in the form of bread. In the proximity of the body, and thus possibly originating from the Iceman’s provisions, chaff and grains of einkorn and barley, and seeds of flax and poppy were discovered, as well as kernels of sloes (small plumlike fruits of the blackthorn tree) and various seeds of berries growing in the wild.[15] Hair analysis was used to examine his diet from several months before. Pollen in the first meal showed that it had been consumed in a mid-altitude conifer forest, and other pollens indicated the presence of wheat and legumes, which may have been domesticated crops. Pollen grains of hop-hornbeam were also discovered. The pollen was very well preserved, with the cells inside remaining intact, indicating that it had been fresh (a few hours old) at the time of Ötzi’s death, which places the event in the spring. Einkorn wheat is harvested in the late summer, and sloes in the autumn; these must have been stored from the previous year.””

Once again, we are looking at the health issues of someone eating an agricultural diet. It’s amazing that the authors, 19 of them, apparently all agreed that diet has nothing to do with a major component of health. That is patently absurd. To the credit of The Lancet, they published a criticism of this conclusion (though these critics repeat their own preferred conventional wisdom in their view on saturated fat) — Atherosclerosis in ancient populations by Gino Fornaciari and Raffaele Gaeta:

“The development of vascular calcification is related not only to atherosclerosis but also to conditions such as disorders of calcium-phosphorus metabolism, diabetes, chronic microinflammation, and chronic renal insufficiency.

“Furthermore, stating that atherosclerosis is not characteristic of any specific diet or lifestyle, but an inherent component of human ageing is not in agreement with recent studies demonstrating the importance of diet and physical activity.5 If atherosclerosis only depended on ageing, it would not have been possible to diagnose it in a young individual, as done in the Horus study.1

“Finally, classification of probable atherosclerosis on the basis of the presence of a calcification in the expected course of an artery seems incorrect, because the anatomy can be strongly altered by post-mortem events. The walls of the vessels might collapse, dehydrate, and have the appearance of a calcific thickening. For this reason, the x-ray CT pattern alone is insufficient and diagnosis should be supported by histological study.”

As far as I know, this didn’t lead to a retraction of the paper. Nor did this criticism receive the attention that the paper itself was given. None of the people who praised the paper bothered to point out the criticism, at least not among what I came across. Anyway, how did this weakly argued paper based on faulty evidence get published in the first place? And then how does it get spread by so many as if proven fact?

This is the uphill battle faced by anyone seeking to offer an alternative perspective, especially on diet. This makes meaningful debate next to impossible. That won’t stop those like me from slowly chipping away at the vast edifice of the dominant paradigm. On a positive note, it helps when the evidence used against an alternative view, after reinterpretation, ends up being strong evidence in favor of it.

Straw Men in the Linguistic Imaginary

“For many of us, the idea that the language we speak affects how we think might seem self-evident, hardly requiring a great deal of scientific proof. However, for decades, the orthodoxy of academia has held categorically that the language a person speaks has no effect on the way they think. To suggest otherwise could land a linguist in such trouble that she risked her career. How did mainstream academic thinking get itself in such a straitjacket?”
~ Jeremy Lent, The Patterning Instinct

Portraying the Sapir-Whorf hypothesis as linguistic determinism is a straw man fallacy. It’s false to speak of a Sapir-Whorf hypothesis at all as no such hypothesis was ever proposed by Edward Sapir and Benjamin Lee Whorf. Interestingly, it turns out that researchers have since found examples of what could be called linguistic determinism or at least very strong linguistic relativity, although still apparently rare (similar to how examples of genetic determinism are rare). But that is neither here nor there, considering Sapir and Whorf didn’t argue for linguistic determinism, no matter how you quote-mine their texts. The position of relativity, similar to social constructivism, is the wholesale opposite of rigid determinism — besides, linguistic relativism wasn’t even a major focus of Sapir’s work even as he influenced Whorf.

Turning their view into a caricature of determinism was an act of projection. It was the anti-relativists who were arguing for biology determining language, from Noam Chomsky’s language module in the brain to Brent Berlin and Paul Kay’s supposedly universal color categories. It was masterful rhetoric to turn the charge onto those holding the moderate position in order to dress them up as ideological extremists and charlatans. And with Sapir and Whorf gone after early deaths, they weren’t around to defend themselves and to deny what was claimed on their behalf.

Even Whorf’s sometimes strongly worded view of relativity, by today’s standards and knowledge in the field, doesn’t sound particularly extreme. If anything, to those informed of the most up-to-date research, denying such obvious claims would now sound absurd. How did so many become so disconnected from simple truths of human experience that anyone who dared speak these truths could be ridiculed and dismissed out of hand? For generations, relativists stating common sense criticisms of race realism were dismissed in a similar way, and they were often the same people (cultural relativity and linguistic relativity in American scholarship were influenced by Franz Boas) — the argument tying them together is that relativity in expression and embodiment of our shared humanity (think of it more in terms of Daniel Everett’s dark matter of the mind) is based on a complex and flexible set of universal potentials, such that universalism neither requires nor indicates essentialism. Yet why do we go on clinging to so many forms of determinism, essentialism, and nativism, including those ideas advocated by many of Sapir and Whorf’s opponents?

We are in a near impossible situation. Essentialism has been a cornerstone of modern civilization, most of all in its WEIRD varieties. Relativity simply can’t be fully comprehended, much less tolerated, within the dominant paradigm, although as Leavitt argues it resonates with the emphasis on language found in Romanticism, which was a previous response to essentialism. As for linguistic determinism, even if it were true beyond a few exceptional cases, it is by and large an untestable hypothesis at present and so scientifically meaningless within WEIRD science. WEIRD researchers exist in a civilization that has become dominated by WEIRD societies with nearly all alternatives destroyed or altered beyond their original form. There is nowhere to stand outside of the WEIRD paradigm, especially not for the WEIRDest of the WEIRD researchers doing most of the research.

If certain thoughts are unthinkable within WEIRD culture and language, we have no completely alien mode of thought by which to objectively assess the WEIRD, as imperialism and globalization have left no society untouched. There is no way for us to even think about what might be unthinkable, much less research it. This double bind goes right over the heads of most people, even over the heads of some relativists who fear being disparaged if they don’t outright deny any possibility of the so-called strong Sapir-Whorf hypothesis. That such a hypothesis potentially could describe reality to a greater extent than we’d prefer is, for most people infected with the WEIRD mind virus and living within the WEIRD monocultural reality tunnel, itself an unthinkable thought.

It is unthinkable and, in its fullest form, fundamentally untestable. And so it is terra incognita within the collective mind. The response is typically either uncomfortable irritation or nervous laughter. Still, the limited evidence in support of linguistic determinism points to the possibility of it being found in other as-yet unexplored areas — maybe a fair amount of evidence already exists that will later be reinterpreted when a new frame of understanding becomes established or when someone, maybe generations later, looks at it with fresh eyes. History is filled with moments when something shifted, allowing the incomprehensible and unspeakable to become a serious public debate, sometimes a new social reality. Determinism in all of its varieties seems a generally unfruitful path of research, although in its linguistic form it is compelling as a thought experiment in showing how little we know and can know, how severely constrained our imaginative capacities are.

We don’t look in the darkness where we lost what we are looking for because the light is better elsewhere. But what would we find if we did search the shadows? Whether or not we discovered proof for linguistic determinism, we might stumble across all kinds of other inconvenient evidence pointing toward ever more radical and heretical thoughts. Linguistic relativity and determinism might end up playing a central role less because of the bold answers offered than because of the questions they dared to ask. Maybe, in thinking about determinism, we could come to a more profound insight into relativity — after all, a complex enough interplay of seemingly deterministic factors would for all appearances be relativistic, that is to say, what seems to be linear causation could, when lines of causation are interwoven, lead to emergent properties. The relativistic whole, in that case, presumably would be greater than the deterministic parts.

Besides, it always depends on perspective. Consider Whorf, who “has been rejected both by cognitivists as a relativist and by symbolic and postmodern anthropologists as a determinist and essentialist” (John Leavitt, Linguistic Relativities, p. 193; Leavitt’s book goes into immense detail about all of the misunderstanding and misinterpretation, much of it because of intellectual laziness or hubris but some of it motivated by ideological agendas; the continuing and consistent wrongheadedness makes it difficult to not take much of it as arguing in bad faith). It’s not always clear what the debate is supposed to be about. Ironically, such terms as ‘determinism’ and ‘relativity’ are relativistic in their use while, in how we use them, determining how we think about the issues and how we interpret the evidence. There is no way to take ourselves out of the debate itself, for our own humanity is what we are trying to place under the microscope, causing us tremendous psychological contortions in maintaining whatever worldview we latch onto.

There is less distance between linguistic relativity and linguistic determinism than is typically assumed. The former says we are only limited by habit of thought and all it entails within culture and relationships. Yet habits of thought can be so powerful as to essentially determine social orders for centuries and millennia. Calling this mere ‘habit’ hardly does it justice. In theory, a society isn’t absolutely determined to be the way it is nor for those within it to behave the way they do, but in practice extremely few individuals ever escape the gravity pull of habitual conformity and groupthink (i.e., Jaynesian self-authorization is more a story we tell ourselves than an actual description of behavior).

So, yes, in terms of genetic potential and neuroplasticity, there was nothing directly stopping Bronze Age Egyptians from starting an industrial revolution and there is nothing stopping a present-day Piraha from becoming a Harvard professor of mathematics — still, the probability of such things happening is next to zero. Consider the rare individuals in our own society who break free of the collective habits of our society, as they usually either end up homeless or institutionalized, typically with severely shortened lives. To not go along with the habits of your society is to be deemed insane, incompetent, and/or dangerous. Collective habits within a social order involve systematic enculturation, indoctrination, and enforcement. The power of language — even if only relativistic — over our minds is one small part of the cultural system, albeit an important part.

We don’t need to go that far with our argument, though. However you want to slice it, there is plenty of evidence that remains to be explained. And the evidence has become overwhelming and, to many, disconcerting. The debate over the validity of the theory of linguistic relativity is over. But the opponents of the theory have had two basic strategies to contain their loss and keep the debate on life support. They conflate linguistic relativity with linguistic determinism and dismiss it as laughably false. Or they concede that linguistic relativity is partly correct but argue that it’s insignificant in influence, as if they never denied it and simply were unimpressed.

“This is characteristic: one defines linguistic relativity in such an extreme way as to make it seem obviously untrue; one is then free to acknowledge the reality of the data at the heart of the idea of linguistic relativity – without, until quite recently, proposing to do any serious research on these data.” (John Leavitt, Linguistic Relativities, p. 166)

Either way, essentialists maintain their position as if no serious challenge was posed. The evidence gets lost in the rhetoric, as the evidence keeps growing.

Still, there is something more challenging that also gets lost in debate, even when evidence is acknowledged. What motivated someone like Whorf wasn’t intellectual victory and academic prestige. There was a sense of human potential locked behind habit. That is why it was so important to study foreign cultures with their diverse languages, not only for the sake of knowledge but to be confronted by entirely different worldviews. Essentialists are on the old imperial path of Whiggish Enlightenment, denying differences by proclaiming that all things Western are the norm of humanity and reality, sometimes taken as a universal ideal state or the primary example by which to measure all else… an ideology that easily morphs into yet darker specters:

“Any attempt to speak of language in general is illusory; the (no doubt French or English) philosopher who does so is merely elevating his own mother tongue to the status of a universal standard (p. 3). See how the discourse of diversity can be turned to defend racism and fascism! I suppose by now this shouldn’t surprise us – we’ve seen so many examples of it at the end of the twentieth and beginning of the twenty-first century.” (John Leavitt, Linguistic Relativities, p. 161)

In this light, it should be unsurprising that the essentialist program presented in Chomskyan linguistics was supported and funded by the Pentagon (their specific interest in this case being about human-computer interface in eliminating messy human error; in studying the brain as a computer, it was expected that the individual human mind could be made more amenable to a computerized system of military action and its accompanying chain-of-command). Essentialism makes promises that are useful for systems of effective control as part of a larger technocratic worldview of social control.

The essentialist path we’ve been on has left centuries of destruction in its wake. But from the humbling vista opening onto further possibilities, the relativists offer not a mere scientific theory but a new path for humanity or rather they throw light onto the multiple paths before us. In offering respect and openness toward the otherness of others, we open ourselves toward the otherness within our own humanity. The point is that, though trapped in linguistic cultures, the key to our release is also to be found in the same place. But this requires courage and curiosity, a broadening of the moral imagination.

Let me end on a note of irony. In comparing linguistic cultures, Joseph Needham wrote that, “Where Western minds asked ‘what essentially is it?’, Chinese minds asked ‘how is it related in its beginnings, functions, and endings with everything else, and how ought we to react to it?’” This was quoted by Jeremy Lent in The Patterning Instinct (p. 206; quote originally from: Science and Civilization in China, vol. 2, History of Scientific Thought, pp. 199-200). Lent makes clear that this has everything to do with language. Chinese language embodies ambiguity and demands contextual understanding, whereas Western or more broadly Indo-European language elicits abstract essentialism.

So, it is a specific linguistic culture of essentialism that influences, if not entirely determines, the Western predisposition to see language as essentialist rather than as relative. And it is this very essentialism that causes many Westerners, especially abstract-minded intellectuals, to be blind to the fact that essentialism is linguistically and culturally contingent, not essential to human nature and neurocognitive functioning. That is the irony. This essentialist belief system is itself further proof of linguistic relativism.

 

* * *

The Patterning Instinct
by Jeremy Lent
pp. 197-205

The ability of these speakers to locate themselves in a way that is impossible for the rest of us is only the most dramatic in an array of discoveries that are causing a revolution in the world of linguistics. Researchers point to the Guugu Yimithirr as prima facie evidence supporting the argument that the language you speak affects how your cognition develops. As soon as they learn their first words, Guugu Yimithirr infants begin to structure their orientation around the cardinal directions. In time, their neural connections get wired accordingly until this form of orientation becomes second nature, and they no longer even have to think about where north, south, east, and west are.3 […]

For many of us, the idea that the language we speak affects how we think might seem self-evident, hardly requiring a great deal of scientific proof. However, for decades, the orthodoxy of academia has held categorically that the language a person speaks has no effect on the way they think. To suggest otherwise could land a linguist in such trouble that she risked her career. How did mainstream academic thinking get itself in such a straitjacket?4

The answer can be found in the remarkable story of one charismatic individual, Benjamin Whorf. In the early twentieth century, Whorf was a student of anthropologist-linguist Edward Sapir, whose detailed study of Native American languages had caused him to propose that a language’s grammatical structure corresponds to patterns of thought in its culture. “We see and hear and otherwise experience very largely as we do,” Sapir suggested, “because the language habits of our community predispose certain choices of interpretation.”5

Whorf took this idea, which became known as the Sapir-Whorf hypothesis, to new heights of rhetoric. The grammar of our language, he claimed, affects how we pattern meaning into the natural world. “We cut up and organize the spread and flow of events as we do,” he wrote, “largely because, through our mother tongue, we are parties to an agreement to do so, not because nature itself is segmented in exactly that way for all to see.”6 […]

Whorf was brilliant but highly controversial. He had a tendency to use sweeping generalizations and dramatic statements to drive home his point. “As goes our segmentation of the face of nature,” he wrote, “so goes our physics of the Cosmos.” Sometimes he went beyond the idea that language affects how we think to a more strident assertion that language literally forces us to think in a certain way. “The forms of a person’s thoughts,” he proclaimed, “are controlled by inexorable laws of pattern of which he is unconscious.” This rhetoric led people to interpret the Sapir-Whorf hypothesis as a theory of linguistic determinism, claiming that people’s thoughts are inevitably determined by the structure of their language.8

A theory of rigid linguistic determinism is easy to discredit. All you need to do is show a Hopi Indian capable of thinking in terms of past, present, and future, and you’ve proven that her language didn’t ordain how she was able to think. The more popular the Sapir-Whorf theory became, the more status could be gained by any researcher who poked holes in it. In time, attacking Sapir-Whorf became a favorite path to academic tenure, until the entire theory became completely discredited.9

In place of the Sapir-Whorf hypothesis arose what is known as the nativist view, which argues that the grammar of language is innate to humankind. As discussed earlier, the theory of universal grammar, proposed by Noam Chomsky in the 1950s and popularized more recently by Steven Pinker, posits that humans have a “language instinct” with grammatical rules coded into our DNA. This theory has dominated the field of linguistics for decades. “There is no scientific evidence,” writes Pinker, “that languages dramatically shape their speakers’ ways of thinking.” Pinker and other adherents to this theory, however, are increasingly having to turn a blind eye—not just to the Guugu Yimithirr but to the accumulating evidence of a number of studies showing the actual effects of language on people’s patterns of thought.10 […]

Psychologist Peter Gordon saw an opportunity to test the most extreme version of the Sapir-Whorf hypothesis with the Pirahã. If language predetermined patterns of thought, then the Pirahã should be unable to count, in spite of the fact that they show rich intelligence in other forms of their daily life. He performed a number of tests with the Pirahã over a two-year period, and his results were convincing: as soon as the Pirahã had to deal with a set of objects beyond three, their counting performance disintegrated. His study, he concludes, “represents a rare and perhaps unique case for strong linguistic determinism.”12

The Guugu Yimithirr, at one end of the spectrum, show the extraordinary skills a language can give its speakers; the Pirahã, at the other end, show how necessary language is for basic skills we take for granted. In between these two extremes, an increasing number of researchers are demonstrating a wide variety of more subtle ways the language we speak can influence how we think.

One set of researchers illustrated how language affects perception. They used the fact that the Greek language has two color terms—ghalazio and ble—that distinguish light and dark blue. They tested the speed with which Greek speakers and English speakers could distinguish between these two different colors, even when they weren’t being asked to name them, and discovered the Greeks were significantly faster.13

Another study demonstrates how language helps structure memory. When bilingual Mandarin-English speakers were asked in English to name a statue of someone with a raised arm looking into the distance, they were more likely to name the Statue of Liberty. When they were asked the same question in Mandarin, they named an equally famous Chinese statue of Mao with his arm raised.14

One intriguing study shows English and Spanish speakers remembering accidental events differently. In English, an accident is usually described in the standard subject-verb-object format of “I broke the bottle.” In Spanish, a reflexive verb is often used without an agent, such as “La botella se rompió”—“the bottle broke.” The researchers took advantage of this difference, asking English and Spanish speakers to watch videos of different intentional and accidental events and later having them remember what happened. Both groups had similar recall for the agents involved in intentional events. However, when remembering the accidental events, English speakers recalled the agents better than the Spanish speakers did.15

Language can also have a significant effect in channeling emotions. One researcher read the same story to Greek-English bilinguals in one language and, then, months later, in the other. Each time, he interviewed them about their feelings in response to the story. The subjects responded differently to the story depending on its language, and many of these differences could be attributed to specific emotion words available in one language but not the other. The English story elicited a sense of frustration in readers, but there is no Greek word for frustration, and this emotion was absent in responses to the Greek story. The Greek version, however, inspired a sense of stenahoria in several readers, an emotion loosely translated as “sadness/discomfort/suffocation.” When one subject was asked why he hadn’t mentioned stenahoria after his English reading of the story, he answered that he cannot feel stenahoria in English, “not just because the word doesn’t exist but because that kind of situation would never arise.”16 […]

Marketing professor David Luna has performed tests on people who are not just bilingual but bicultural—those who have internalized two different cultures—which lend support to this model of cultural frames. Working with people immersed equally in both American and Hispanic cultures, he examined their responses to various advertisements and newspaper articles in both languages and compared them to those of bilinguals who were only immersed in one culture. He reports that biculturals, more than monoculturals, would feel “like a different person” when they spoke different languages, and they accessed different mental frames depending on the cultural context, resulting in shifts in their sense of self.25

In particular, the use of root metaphors, embedded so deeply in our consciousness that we don’t even notice them, influences how we define our sense of self and apply meaning to the world around us. “Metaphor plays a very significant role in determining what is real for us,” writes cognitive linguist George Lakoff. “Metaphorical concepts…structure our present reality. New metaphors have the power to create a new reality.”26

These metaphors enter our minds as infants, as soon as we begin to talk. They establish neural pathways that are continually reinforced until, just like the cardinal directions of the Guugu Yimithirr, we use our metaphorical constructs without even recognizing them as metaphors. When a parent, for example, tells a child to “put that out of your mind,” she is implicitly communicating a metaphor of the MIND AS A CONTAINER that should hold some things and not others.27

When these metaphors are used to make sense of humanity’s place in the cosmos, they become the root metaphors that structure a culture’s approach to meaning. Hunter-gatherers, as we’ve seen, viewed the natural world through the root metaphor of GIVING PARENT, which gave way to the agrarian metaphor of ANCESTOR TO BE PROPITIATED. Both the Vedic and Greek traditions used the root metaphor of HIGH IS GOOD to characterize the source of ultimate meaning as transcendent, while the Chinese used the metaphor of PATH in their conceptualization of the Tao. These metaphors become hidden in plain sight, since they are used so extensively that people begin to accept them as fundamental structures of reality. This, ultimately, is how culture and language reinforce each other, leading to a deep persistence of underlying structures of thought from one generation to the next.28

Linguistic Relativities
by John Leavitt
pp. 138-142

Probably the most famous statement of Sapir’s supposed linguistic determinism comes from “The Status of Linguistics as a Science,” a talk published in 1929:

Human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of a particular language which has become the medium of expression for their society. It is quite an illusion to imagine that one adjusts to reality essentially without the use of language, and that language is merely an incidental means of solving specific problems of communication or reflection. The fact of the matter is that the “real world” is to a large extent unconsciously built up on the language habits of the group. No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are different worlds, not merely the same world with different labels attached … We see and hear and otherwise experience very largely as we do because the language habits of our community predispose certain choices of interpretation. (Sapir 1949: 162)

This is the passage that is most commonly quoted to demonstrate the putative linguistic determinism of Sapir and of his student Whorf, who cites some of it (1956: 134) at the beginning of “The Relation of Habitual Thought and Behavior to Language,” a paper published in a Sapir Festschrift in 1941. But is this linguistic determinism? Or is it the statement of an observed reality that must be dealt with? Note that the passage does not say that it is impossible to translate between different languages, nor to convey the same referential content in both. Note also that there is a piece missing here, between “labels attached” and “We see and hear.” In fact, the way I have presented it, with the three dots, is how this passage is almost always presented (e.g., Lucy 1992a: 22); otherwise, the quote usually ends at “labels attached.” If we look at what has been elided, we find two examples, coming in a new paragraph immediately after “attached.” In a typically Sapirian way, one is poetic, the other perceptual. He begins:

The understanding of a simple poem, for instance, involves not merely an understanding of the single words in their average significance, but a full comprehension of the whole life of the community as it is mirrored in the words, or as it is suggested by the overtones.

So the apparent claim of linguistic determinism is to be illustrated by – a poem (Friedrich 1979: 479–80), and a simple one at that! In light of this missing piece of the passage, what Sapir seems to be saying is not that language determines thought, but that language is part of social reality, and so is thought, and to understand either a thought or “a green thought in a green shade” you need to consider the whole.

The second example is one of the relationship of terminology to classification:

Even comparatively simple acts of perception are very much more at the mercy of the social patterns called words than we might suppose. If one draws some dozen lines, for instance, of different shapes, one perceives them as divisible into such categories as “straight,” “crooked,” “curved,” “zigzag” because of the classificatory suggestiveness of the linguistic terms themselves. We see and hear …

Again, is Sapir here arguing for a determination of thought by language or simply observing that in cases of sorting out complex data, one will tend to use the categories that are available? In the latter case, he would be suggesting to his audience of professionals (the source is a talk given to a joint meeting of the Linguistic Society of America and the American Anthropological Association) that such phenomena may extend beyond simple classification tasks.

Here it is important to distinguish between claims of linguistic determinism and the observation of the utility of available categories, an observation that in itself in no way questions the likely importance of the non-linguistic salience of input or the physiological component of perception. Taken in the context of the overall Boasian approach to language and thought, this is clearly the thrust of Sapir’s comments here. Remember that this was the same man who did the famous “Study on Phonetic Symbolism,” which showed that there are what appear to be universal psychological reactions to certain speech sounds (his term is “symbolic feeling-significance”), regardless of the language or the meaning of the word in which these sounds are found (in Sapir 1949). This evidence against linguistic determinism, as it happens, was published the same year as “The Status of Linguistics as a Science,” but in the Journal of Experimental Psychology.3

The metaphor Sapir uses most regularly for the relation of language patterning to thought is not that of a constraint, but of a road or groove that is relatively easy or hard to follow. In Language, he proposed that languages are “invisible garments” for our spirits; but at the beginning of the book he had already questioned this analogy: “But what if language is not so much a garment as a prepared road or groove?” (p. 15); grammatical patterning provides “grooves of expression, (which) have come to be felt as inevitable” (p. 89; cf. Erickson et al. 1997: 298). One important thing about a road is that you can get off it; of a groove, that you can get out of it. We will see that this kind of wording permeates Whorf’s formulations as well. […]

Since the early 1950s, Sapir’s student Benjamin Lee Whorf (1897–1941) has most often been presented as the very epitome of extreme cognitive relativism and linguistic determinism. Indeed, as the name attached to the “linguistic determinism hypothesis,” a hypothesis almost never evoked but to be denied, Whorf has become both the best-known ethnolinguist outside the field itself and one of the great straw men of the century. This fate is undeserved; he was not a self-made straw man, as Marshall Sahlins once called another well-known anthropologist. While Whorf certainly maintained what he called a principle of linguistic relativity, it is clear from reading Language, Thought, and Reality, the only generally available source of his writings, published posthumously in 1956, and even clearer from still largely unpublished manuscripts, that he was also a strong universalist who accepted the general validity of modern science. With some re-evaluations since the early 1990s (Lucy 1992a; P. Lee 1996), we now have a clearer idea of what Whorf was about.

In spite of sometimes deterministic phraseology, Whorf presumed that much of human thinking and perception was non-linguistic and universal across languages. In particular, he admired Gestalt psychology (P. Lee 1996) as a science giving access to general characteristics of human perception across cultures and languages, including the lived experiences that lie behind the forms that we label time and space. He puts this most clearly in discussions of the presumably universal perception of visual space:

A discovery made by modern configurative or Gestalt psychology gives us a canon of reference for all observers, irrespective of their languages or scientific jargons, by which to break down and describe all visually observable situations, and many other situations, also. This is the discovery that visual perception is basically the same for all normal persons past infancy and conforms to definite laws. (Whorf 1956: 165)

Whorf clearly believed there was a real world out there, although, enchanted by quantum mechanics and relativity theory, he also believed that this was not the world as we conceive it, nor that every human being conceives it habitually in the same way.

Whorf also sought and proposed general descriptive principles for the analysis of languages of the most varied type. And along with Sapir, he worked on sound symbolism, proposing the universality of feeling-associations to certain speech sounds (1956: 267). Insofar as he was a good disciple of Sapir and Boas, Whorf believed, like them, in the universality of cognitive abilities and of some fundamental cognitive processes. And far from assuming that language determines thought and culture, Whorf wrote in the paper for the Sapir volume that

I should be the last to pretend that there is anything so definite as “a correlation” between culture and language, and especially between ethnological rubrics such as “agricultural, hunting,” etc., and linguistic ones like “inflected,” “synthetic,” or “isolating.” (pp. 138–9)

p. 146

For Whorf, certain scientific disciplines – elsewhere he names “relativity, quantum theory, electronics, catalysis, colloid chemistry, theory of the gene, Gestalt psychology, psychoanalysis, unbiased cultural anthropology, and so on” (1956: 220), as well as non-Euclidean geometry and, of course, descriptive linguistics – were exemplary in that they revealed aspects of the world profoundly at variance with the world as modern Westerners habitually assume it to be, indeed as the members of any human language and social group habitually assume it to be.

Since Whorf was concerned with linguistic and/or conceptual patterns that people almost always follow in everyday life, he has often been read as a determinist. But as John Lucy pointed out (1992a), Whorf’s critiques clearly bore on habitual thinking, what it is easy to think; his ethical goal was to force us, through learning about other languages, other ways of foregrounding and linking aspects of experience, to think in ways that are not so easy, to follow paths that are not so familiar. Whorf’s argument is not fundamentally about constraint, but about the seductive force of habit, of what is “easily expressible by the type of symbolic means that language employs” (“Model,” 1956: 55) and so easy to think. It is not about the limits of a given language or the limits of thought, since Whorf presumes, Boasian that he is, that any language can convey any referential content.

Whorf’s favorite analogy for the relation of language to thought is the same as Sapir’s: that of tracks, paths, roads, ruts, or grooves. Even Whorf’s most determinist-sounding passages, which are also the ones most cited, sound very different if we take the implications of this analogy seriously: “Thinking … follows a network of tracks laid down in the given language, an organization which may concentrate systematically upon certain phases of reality … and may systematically discard others featured by other languages. The individual is utterly unaware of this organization and is constrained completely within its unbreakable bonds” (1956: 256); “we dissect nature along lines laid down by our native languages” (p. 213). But this is from the same essay in which Whorf asserted the universality of “ways of linking experiences … basically alike for all persons”; and this completely constrained individual is evidently the unreflective (utterly unaware) Mr. Everyman (Schultz 1990), and the very choice of the analogy of traced lines or tracks, assuming that they are not railway tracks – that they are not is suggested by all the other road and path metaphors – leaves open the possibility of getting off the path, if only we had the imagination and the gumption to do it. We can cut cross-country. In the study of an exotic language, he wrote, “we are at long last pushed willy-nilly out of our ruts. Then we find that the exotic language is a mirror held up to our own” (1956: 138). How can Whorf be a determinist, how can he see us as forever trapped in these ruts, if the study of another language is sufficient to push us, kicking and screaming perhaps, out of them?

The total picture, then, is not one of constraint or determinism. It is, on the other hand, a model of powerful seduction: the seduction of what is familiar and easy to think, of what is intellectually restful, of what makes common sense.7 The seduction of the habitual pathway, based largely on laziness and fear of the unknown, can, with work, be resisted and broken. Somewhere in the back of Whorf’s mind may have been the allegory of the broad, fair road to Hell and the narrow, difficult path to Heaven beloved of his Puritan forebears. It makes us think of another New England Protestant: “Two roads diverged in a wood, and I, / I took the one less travelled by, / and that has made all the difference.”

The recognition of the seduction of the familiar implies a real ethical program:

It is the “plainest” English which contains the greatest number of unconscious assumptions about nature … Western culture has made, through language, a provisional analysis of reality and, without correctives, holds resolutely to that analysis as final. The only correctives lie in all those other tongues which by aeons of independent evolution have arrived at different, but equally logical, provisional analyses. (1956: 244)

Learning non-Western languages offers a lesson in humility and awe in an enormous multilingual world:

We shall no longer be able to see a few recent dialects of the Indo-European family, and the rationalizing techniques elaborated from their patterns, as the apex of the evolution of the human mind, nor their present wide spread as due to any survival from fitness or to anything but a few events of history – events that could be called fortunate only from the parochial point of view of the favored parties. They, and our own thought processes with them, can no longer be envisioned as spanning the gamut of reason and knowledge but only as one constellation in a galactic expanse. (p. 218)

The breathtaking sense of sudden vaster possibility, of the sky opening up to reveal a bigger sky beyond, may be what provokes such strong reactions to Whorf. For some, he is simply enraging or ridiculous. For others, reading Whorf is a transformative experience, and there are many stories of students coming to anthropology or linguistics largely because of their reading of Whorf (personal communications; Alford 2002).

pp. 167-168

[T]he rise of cognitive science was accompanied by a restating of what came to be called the “Sapir–Whorf hypothesis” in the most extreme terms. Three arguments came to the fore repeatedly:

Determinism. The Sapir–Whorf hypothesis says that the language you speak, and nothing else, determines how you think and perceive. We have already seen how false a characterization this is: the model the Boasians were working from was only deterministic in cases of no effort, of habitual thought or speaking. With enough effort, it is always possible to change your accent or your ideas.

Hermeticism. The Sapir–Whorf hypothesis maintains that each language is a sealed universe, expressing things that are inexpressible in another language. In such a view, translation would be impossible and Whorf’s attempt to render Hopi concepts in English an absurdity. In fact, the Boasians presumed, rather, that languages were not sealed worlds, but that they were to some degree comparable to worlds, and that passing between them required effort and alertness.

Both of these characterizations are used to set up a now classic article on linguistic relativity by the psychologist Eleanor Rosch (1974):

Are we “trapped” by our language into holding a particular “world view”? Can we never really understand or communicate with speakers of a language quite different from our own because each language has molded the thought of its people into mutually incomprehensible world views? Can we never get “beyond” language to experience the world “directly”? Such issues develop from an extreme form of a position sometimes known as “the Whorfian hypothesis” … and called, more generally, the hypothesis of “linguistic relativity.” (Rosch 1974: 95)

Rosch begins the article noting how intuitively right the importance of language differences first seemed to her, then spends much of the rest of it attacking this initial intuition.

Infinite variability. A third common characterization is that Boasian linguistics holds that, in Martin Joos’s words, “languages can differ from each other without limit and in unpredictable ways” (Joos 1966: 96). This would mean that the identification of any language universal would disprove the approach. In fact, the Boasians worked with the universals that were available to them – these were mainly derived from psychology – but opposed what they saw as the unfounded imposition of false universals that in fact reflected only modern Western prejudices. Joos’s hostile formulation has been cited repeatedly as if it were official Boasian doctrine (see Hymes and Fought 1981: 57).

For over fifty years, these three assertions have largely defined the received understanding of linguistic relativity. Anyone who has participated in discussions and/or arguments about the “Whorfian hypothesis” has heard them over and over again.

pp. 169-173

In the 1950s, anthropologists and psychologists were interested in experimentation and the testing of hypotheses on what was taken to be the model of the natural sciences. At a conference on language in culture, Harry Hoijer (1954) first named a Sapir–Whorf hypothesis that language influences thought.

To call something a hypothesis is to propose to test it, presumably using experimental methods. This task was taken on primarily by psychologists. A number of attempts were made to prove or disprove experimentally that language influences thought (see Lucy 1992a: 127–78; P. Brown 2006). Both “language” and “thought” were narrowed down to make them more amenable to experiment: the aspect of language chosen was usually the lexicon, presumably the easiest aspect to control in an experimental setting; thought was interpreted to mean perceptual discrimination and cognitive processing, aspects of thinking that psychologists were comfortable testing for. Eric Lenneberg defined the problem posed by the “Sapir–Whorf hypothesis” as that of “the relationship that a particular language may have to its speakers’ cognitive processes … Does the structure of a given language affect the thoughts (or thought potential), the memory, the perception, the learning ability of those who speak that language?” (1953: 463). Need I recall that Boas, Sapir, and Whorf went out of their way to deny that different languages were likely to be correlated with strengths and weaknesses in cognitive processes, i.e., in what someone is capable of thinking, as opposed to the contents of habitual cognition? […]

Berlin and Kay started by rephrasing Sapir and Whorf as saying that the search for semantic universals was “fruitless in principle” because “each language is semantically arbitrary relative to every other language” (1969: 2; cf. Lucy 1992a: 177–81). If this is what we are calling linguistic relativity, then if any domain of experience, such as color, is identified in recognizably the same way in different languages, linguistic relativity must be wrong. As we have seen, this fits the arguments of Weisgerber and Bloomfield, but not of Sapir or Whorf. […]

A characteristic study was reported recently in my own university’s in-house newspaper under the title “Language and Perception Are Not Connected” (Baril 2004). The article starts by saying that according to the “Whorf–Sapir hypothesis … language determines perception,” and therefore that “we should not be able to distinguish differences among similar tastes if we do not possess words for expressing their nuances, since it is language that constructs the mode of thought and its concepts … According to this hypothesis, every language projects onto its speakers a system of categories through which they see and interpret the world.” The hypothesis, we are told, has been “disconfirmed since the 1970s” by research on color. The article reports on the research of Dominic Charbonneau, a graduate student in psychology. Intrigued by recent French tests in which professional sommeliers, with their elaborate vocabulary, did no better than regular ignoramuses in distinguishing among wines, Charbonneau carried out his own experiment on coffee – this is, after all, a French-speaking university, and we take coffee seriously. Francophone students were asked to distinguish among different coffees; like most of us, they had a minimal vocabulary for distinguishing them (words like “strong,” “smooth,” “dishwater”). The participants made quite fine distinctions among the eighteen coffees served, well above the possible results of chance, showing that taste discrimination does not depend on vocabulary. Conclusion: “Concepts must be independent of language, which once again disconfirms the Sapir–Whorf hypothesis” (my italics). And this of course would be true if there were such a hypothesis, if it was primarily about vocabulary, and if it said that vocabulary determines perception.

We have seen that Bloomfield and his successors in linguistics maintained the unlimited arbitrariness of color classifications, and so could have served as easy straw men for the cognitivist return to universals. But what did Boas, Sapir, Whorf, or Lee actually have to say about color? Did they in fact claim that color perception or recognition or memory was determined by vocabulary? Sapir and Lee are easy: as far as I have been able to ascertain, neither one of them talked about color at all. Steven Pinker attributes a relativist and determinist view of color classifications to Whorf:

Among Whorf’s “kaleidoscopic flux of impressions,” color is surely the most eye-catching. He noted that we see objects in different hues, depending on the wavelengths of the light they reflect, but that the wavelength is a continuous dimension with nothing delineating red, yellow, green, blue, and so on. Languages differ in their inventory of color words … You can fill in the rest of the argument. It is language that puts the frets in the spectrum. (Pinker 1994: 61–2)

No he didn’t. Whorf never noted anything like this in any of his published work, and Pinker gives no indication of having gone through Whorf’s unpublished papers. As far as I can ascertain, Whorf talks about color in two places; in both he is saying the opposite of what Pinker says he is saying.

pp. 187-188

The 1950s through the 1980s saw the progressive triumph of universalist cognitive science. From the 1980s, one saw the concomitant rise of relativistic postmodernism. By the end of the 1980s there had been a massive return to the old split between universalizing natural sciences and their ancillary social sciences on the one hand, particularizing humanities and their ancillary cultural studies on the other. Some things, in the prevailing view, were universal, others so particular as to call for treatment as fiction or anecdote. Nothing in between was of very much interest, and North American anthropology, the discipline that had been founded upon and achieved a sort of identity in crossing the natural-science/humanities divide, faced an identity crisis. Symptomatically, one noticed many scholarly bookstores disappearing their linguistics sections into “cognitive science,” their anthropology sections into “cultural studies.”

In this climate, linguistic relativity was heresy, Whorf, in particular, a kind of incompetent Antichrist. The “Whorfian hypothesis” of linguistic relativism or determinism became a topos of any anthropology textbook, almost inevitably to be shown to be silly. Otherwise serious linguists and psychologists (e.g., Pinker 1994: 59–64) continued to dismiss the idea of linguistic relativity with an alacrity suggesting alarm and felt free to heap posthumous personal vilification on Whorf, the favorite target, for his lack of official credentials, in some really surprising displays of academic snobbery. Geoffrey Pullum, to take only one example, calls him a “Connecticut fire prevention inspector and weekend language-fancier” and “our man from the Hartford Fire Insurance Company” (Pullum 1989 [1991]: 163). This comes from a book with the subtitle Irreverent Essays on the Study of Language. But how irreverent is it to make fun of somebody almost everybody has been attacking for thirty years?

The Language Myth: Why Language Is Not an Instinct
by Vyvyan Evans
pp. 195-198

Who’s afraid of the Big Bad Whorf?

Psychologist Daniel Casasanto has noted, in an article whose title gives this section its heading, that some researchers find Whorf’s principle of linguistic relativity to be threatening.6 But why is Whorf such a bogeyman for some? And what makes his notion of linguistic relativity such a dangerous idea?

The rationalists fear linguistic relativity – the very idea of it – and they hate it, with a passion: it directly contradicts everything they stand for – if relativism is anywhere near right, then the rationalist house burns down, or collapses, like a tower of cards without a foundation. And this fear and loathing in parts of the Academy can often, paradoxically, be highly irrational indeed. Relativity is often criticised without argumentative support, or ridiculed, just for the audacity of existing as an intellectual idea to begin with. Jerry Fodor, more candid than most about his irrational fear, just hates it. He says: “The thing is: I hate relativism. I hate relativism more than I hate anything else, excepting, maybe, fiberglass powerboats.”7 Fodor continues, illustrating further his irrational contempt: “surely, surely, no one but a relativist would drive a fiberglass powerboat”.8

Fodor’s objection is that relativism overlooks what he deems to be “the fixed structure of human nature”.9 Mentalese provides the fixed structure – as we saw in the previous chapter. If language could interfere with this innate set of concepts, then the fixed structure would no longer be fixed – anathema to a rationalist.

Others are more coy, but no less damning. Pinker’s strategy is to set up straw men, which he then eloquently – but mercilessly – ridicules.10 But don’t be fooled, there is no serious argument presented – not on this occasion. Pinker takes an untenable and extreme version of what he claims Whorf said, and then pokes fun at it – a common modus operandi employed by those who are afraid. Pinker argues that Whorf was wrong because he equated language with thought: that Whorf assumes that language causes or determines thought in the first place. This is the “conventional absurdity” that Pinker refers to in the first of his quotations above. For Pinker, Whorf was either romantically naïve about the effects of language, or, worse, like the poorly read and ill-educated, credulous.

But this argument is a classic straw man: it is set up to fail, being made of straw. Whorf never claimed that language determined thought. As we shall see, the thesis of linguistic determinism, which nobody believes, and which Whorf explicitly rejected, was attributed to him long after his death. But Pinker has bought into the very myths peddled by the rationalist tradition for which he is cheerleader-in-chief, and which lives in fear of linguistic relativity. In the final analysis, the language-as-instinct crowd should be afraid, very afraid: linguistic relativity, once and for all, explodes the myth of the language-as-instinct thesis.

The rise of the Sapir–Whorf hypothesis

Benjamin Lee Whorf became interested in linguistics in 1924, and studied it, as a hobby, alongside his full-time job as an engineer. In 1931, Whorf began to attend university classes on a part-time basis, studying with one of the leading linguists of the time, Edward Sapir.11 Amongst other things covered in his teaching, Sapir touched on what he referred to as “relativity of concepts … [and] the relativity of the form of thought which results from linguistic study”.12 The notion of the relativistic effect of different languages on thought captured Whorf’s imagination; and so he became captivated by the idea that he was to develop and become famous for. Because Whorf’s claims have often been disputed and misrepresented since his death, let’s see exactly what his formulation of his principle of linguistic relativity was:

Users of markedly different grammars are pointed by their grammars toward different types of observations and different evaluations of externally similar acts of observation, and hence are not equivalent as observers but must arrive at somewhat different views of the world.13

Indeed, as pointed out by the Whorf scholar, Penny Lee, post-war research rarely ever took Whorf’s principle, or his statements, as their starting point.14 Rather, his writings were, on the contrary, ignored, and his ideas largely distorted.15

For one thing, the so-called ‘Sapir–Whorf hypothesis’ was not due to either Sapir or Whorf. Sapir – whose research was not primarily concerned with relativity – and Whorf were lumped together: the term ‘Sapir–Whorf hypothesis’ was coined in the 1950s, over ten years after both men had died – Sapir died in 1939, and Whorf in 1941.16 Moreover, Whorf’s principle emanated from an anthropological research tradition; it was not, strictly speaking, a hypothesis. But, in the 1950s, psychologists Eric Lenneberg and Roger Brown sought to test empirically the notion of linguistic relativity. And to do so, they reformulated it in such a way that it could be tested, producing two testable formulations.17 One, the so-called ‘strong version’ of relativity, holds that language causes a cognitive restructuring: language causes or determines thought. This is otherwise known as linguistic determinism, Pinker’s “conventional absurdity”. The second hypothesis, which came to be known as the ‘weak version’, claims instead that language influences a cognitive restructuring, rather than causing it. But neither formulation of the so-called ‘Sapir–Whorf hypothesis’ was due to Whorf, or Sapir. Indeed, on the issue of linguistic determinism, Whorf was explicit in arguing against it, saying the following:

The tremendous importance of language cannot, in my opinion, be taken to mean necessarily that nothing is back of it of the nature of what has traditionally been called ‘mind’. My own studies suggest, to me, that language, for all its kingly role, is in some sense a superficial embroidery upon deeper processes of consciousness, which are necessary before any communication, signalling, or symbolism whatsoever can occur.18

This demonstrates that, in point of fact, Whorf actually believed in something like the ‘fixed structure’ that Fodor claims is lacking in relativity. The delicious irony arising from it all is that Pinker derides Whorf on the basis of the ‘strong version’ of the Sapir–Whorf hypothesis: linguistic determinism – language causes thought. But this strong version was a hypothesis not created by Whorf, but imagined by rationalist psychologists who were dead set against Whorf and linguistic relativity anyway. Moreover, Whorf explicitly disagreed with the thesis that was posthumously attributed to him. The issue of linguistic determinism became, incorrectly and disingenuously, associated with Whorf, growing in the rationalist sub-conscious like a cancer – Whorf was clearly wrong, they reasoned.

In more general terms, defenders of the language-as-instinct thesis have taken a leaf out of the casebook of Noam Chomsky. If you thought that academics play nicely, and fight fair, think again. Successful ideas are the currency, and they guarantee tenure, promotion, influence and fame; and they allow the successful academic to attract Ph.D. students who go out and evangelise, and so help to build intellectual empires. The best defence against ideas that threaten is ridicule. And, since the 1950s, until the intervention of John Lucy in the 1990s – whom I discuss below – relativity was largely dismissed; the study of linguistic relativity was, in effect, off-limits to several generations of researchers.

The Bilingual Mind, And What it Tells Us about Language and Thought
by Aneta Pavlenko
pp. 27-32

1.1.2.4 The real authors of the Sapir-Whorf hypothesis and the invisibility of scientific revolutions

The invisibility of bilingualism in the United States also accounts for the disappearance of multilingual awareness from discussions of Sapir’s and Whorf’s work, which occurred when the two scholars passed away – both at a relatively young age – and their ideas landed in the hands of others. The posthumous collections brought Sapir’s (1949) and Whorf’s (1956) insights to the attention of the wider public (including, inter alia, young Thomas Kuhn) and inspired the emergence of the field of psycholinguistics. But the newly minted psycholinguists faced a major problem: it had never occurred to Sapir and Whorf to put forth testable hypotheses. Whorf showed how linguistic patterns could be systematically investigated through the use of overt categories marked systematically (e.g., number in English or gender in Russian) and covert categories marked only in certain contexts (e.g., gender in English), yet neither he nor Sapir ever elaborated the meaning of ‘different observations’ or ‘psychological correlates’.

Throughout the 1950s and 1960s, scholarly debates at conferences, summer seminars and in academic journals attempted to correct this ‘oversight’ and to ‘systematize’ their ideas (Black, 1959; Brown & Lenneberg, 1954; Fishman, 1960; Hoijer, 1954a; Lenneberg, 1953; Osgood & Sebeok, 1954; Trager, 1959). The term ‘the Sapir-Whorf hypothesis’ was first used by linguistic anthropologist Harry Hoijer (1954b) to refer to the idea “that language functions, not simply as a device for reporting experience, but also, and more significantly, as a way of defining experience for its speakers” (p. 93). The study of SWH, in Hoijer’s view, was supposed to focus on structural and semantic patterns active in a given language. This version, probably closest to Whorf’s own interest in linguistic classification, was soon replaced by an alternative, developed by psychologists Roger Brown and Eric Lenneberg, who translated Sapir’s and Whorf’s ideas into two ‘testable’ hypotheses (Brown & Lenneberg, 1954; Lenneberg, 1953). The definitive form of the dichotomy was articulated in Brown’s (1958) book Words and Things:

linguistic relativity holds that where there are differences of language there will also be differences of thought, that language and thought covary. Determinism goes beyond this to require that the prior existence of some language pattern is either necessary or sufficient to produce some thought pattern. (p. 260)

In what follows, I will draw on Kuhn’s ([1962] 2012) insights to discuss four aspects of this radical transformation of Sapir’s and Whorf’s ideas into the SWH: (a) it was a major change of paradigm, that is, of shared assumptions, research foci, and methods, (b) it erased multilingual awareness, (c) it created a false dichotomy, and (d) it proceeded unacknowledged.

The change of paradigm was necessitated by the desire to make complex notions, articulated by linguistic anthropologists, fit experimental paradigms in psychology. Yet ideas don’t travel easily across disciplines: Kuhn ([1962] 2012) compares a dialog between scientific communities to intercultural communication, which requires skillful translation if it is to avoid communication breakdowns. Brown and Lenneberg’s translation was not skillful and while their ideas moved the study of language and cognition forward, they departed from the original arguments in several ways (for discussion, see also Levinson, 2012; Lucy, 1992a; Lee, 1996).

First, they shifted the focus of the inquiry from the effects of obligatory grammatical categories, such as tense, to lexical domains, such as color, that had a rather tenuous relationship to linguistic thought (color differentiation was, in fact, discussed by Boas and Whorf as an ability not influenced by language). Secondly, they shifted from concepts as interpretive categories to cognitive processes, such as perception or memory, that were of little interest to Sapir and Whorf, and proposed to investigate them with artificial stimuli, such as Munsell chips, that hardly reflect habitual thought. Third, they privileged the idea of thought potential (and, by implication, what can be said) over Sapir’s and Whorf’s concerns with obligatory categories and habitual thought (and, by definition, with what is said). Fourth, they missed the insights about the illusory objectivity of one’s own language and replaced the interest in linguistic thought with independent ‘language’ and ‘cognition’. Last, they substituted Humboldt’s, Sapir’s and Whorf’s interest in multilingual awareness with a hypothesis articulated in monolingual terms.

A closer look at Brown’s (1958) book shows that he was fully aware of the existence of bilingualism and of the claims made by bilingual speakers of Native American languages that “thinking is different in the Indian language” (p. 232). His recommendation in this case was to distrust those who have the “unusual” characteristic of being bilingual:

There are few bilinguals, after all, and the testimony of those few cannot be uncritically accepted. There is a familiar inclination on the part of those who possess unusual and arduously obtained experience to exaggerate its remoteness from anything the rest of us know. This must be taken into account when evaluating the impressions of students of Indian languages. In fact, it might be best to translate freely with the Indian languages, assimilating their minds to our own. (Brown, 1958: 233)

The testimony of German–English bilinguals – akin to his own collaborator Eric Heinz Lenneberg – was apparently another matter: the existence of “numerous bilingual persons and countless translated documents” was, for Brown (1958: 232), compelling evidence that the German mind is “very like our own”. Alas, Brown’s (1958) contradictory treatment of bilingualism and the monolingual arrogance of the recommendations ‘to translate freely’ and ‘to assimilate Indian minds to our own’ went unnoticed by his colleagues. The result was the transformation of a fluid and dynamic account of language into a rigid, static false dichotomy.

When we look back, the attribution of the idea of linguistic determinism to multilinguals interested in language evolution and the evolution of the human mind makes little sense. Yet the replacement of the open-ended questions about implications of linguistic diversity with two ‘testable’ hypotheses had a major advantage – it was easier to argue about and to digest. And it was welcomed by scholars who, like Kay and Kempton (1984), applauded the translation of Sapir’s and Whorf’s convoluted passages into direct prose and felt that Brown and Lenneberg “really said all that was necessary” (p. 66) and that the question of what Sapir and Whorf actually thought was interesting but “after all less important than the issue of what is the case” (p. 77). In fact, by the 1980s, Kay and Kempton were among the few who could still trace the transformation to the two psychologists. Their colleagues were largely unaware of it because Brown and Lenneberg concealed the radical nature of their reformulation by giving Sapir and Whorf ‘credit’ for what should have been the Brown-Lenneberg hypothesis.

We might never know what prompted this unusual scholarly modesty – a sincere belief that they were simply ‘improving’ Sapir and Whorf or the desire to distance themselves from the hypothesis articulated only to be ‘disproved’. For Kuhn ([1962] 2012), this is science as usual: “it is just this sort of change in the formulation of questions and answers that accounts, far more than novel empirical discoveries, for the transition from Aristotelian to Galilean and from Galilean to Newtonian dynamics” (p. 139). He also points to the hidden nature of many scientific revolutions concealed by textbooks that provide the substitute for what they had eliminated and make scientific development look linear, truncating the scientists’ knowledge of the history of their discipline. This is precisely what happened with the SWH: the newly minted hypothesis took on a life of its own, multiplying and reproducing itself in myriads of textbooks, articles, lectures, and popular media, and moving the discussion further and further away from Sapir’s primary interest in ‘social reality’ and Whorf’s central concern with ‘habitual thought’.

The transformation was facilitated by four common academic practices that allow us to manage the ever-increasing amount of literature in the ever-decreasing amount of time: (a) simplification of complex arguments (which often results in misinterpretation); (b) reduction of original texts to standard quotes; (c) reliance on other people’s exegeses; and (d) uncritical reproduction of received knowledge. The very frequency of this reproduction made the SWH a ‘fact on the ground’, accepted as a valid substitution for the original ideas. The new terms of engagement became part of habitual thought in the Ivory Tower and to this day are considered obligatory by many academics who begin their disquisitions on linguistic relativity with a nod towards the sound-bite version of the ‘strong’ determinism and ‘weak’ relativity. In Kuhn’s ([1962] 2012) view, this perpetuation of a new set of shared assumptions is a key marker of a successful paradigm change: “When the individual scientist can take a paradigm for granted, he need no longer, in his major works, attempt to build his field anew, starting from first principles and justifying the use of each concept introduced” (p. 20).

Yet the false dichotomy reified in the SWH – and the affective framing of one hypothesis as strong and the other as weak – moved the goalposts and reset the target and the standards needed to achieve it, giving scholars a clear indication of which hypothesis they should address. This preference, too, was perpetuated by countless researchers who, like Langacker (1976: 308), dismissed the ‘weak’ version as obviously true but uninteresting and extolled ‘the strongest’ as “the most interesting version of the LRH” but also as “obviously false”. And indeed, the research conducted on Brown’s and Lenneberg’s terms failed to ‘prove’ linguistic determinism and instead revealed ‘minor’ language effects on cognition (e.g., Brown & Lenneberg, 1954; Lenneberg, 1953) or no effects at all (Heider, 1972). The studies by Gipper (1976)4 and Malotki (1983) showed that even Whorf’s core claims, about the concept of time in Hopi, may have been misguided.5 This ‘failure’ too became part of the SWH lore, with textbooks firmly stating that “a strong version of the Whorfian hypothesis cannot be true” (Foss & Hakes, 1978: 393).

By the 1980s, there emerged an implicit consensus in US academia that Whorfianism was “a bête noire, identified with scholarly irresponsibility, fuzzy thinking, lack of rigor, and even immorality” (Lakoff, 1987: 304). This consensus was shaped by the political climate supportive of the notion of ‘free thought’ yet hostile to linguistic diversity, by educational policies that reinforced monolingualism, and by the rise of cognitive science and meaning-free linguistics that replaced the study of meaning with the focus on structures and universals. Yet the implications of Sapir’s and Whorf’s ideas continued to be debated (e.g., Fishman, 1980, 1982; Kay & Kempton, 1984; Lakoff, 1987; Lucy & Shweder, 1979; McCormack & Wurm, 1977; Pinxten, 1976) and in the early 1990s the inimitable Pinker decided to put the specter of the SWH to bed once and for all. Performing a feat reminiscent of Humpty Dumpty, Pinker (1994) made the SWH ‘mean’ what he wanted it to mean, namely “the idea that thought is the same thing as language” (p. 57). Leaving behind Brown’s (1958) articulation with its modest co-variation, he replaced it in the minds of countless undergraduates with

the famous Sapir-Whorf hypothesis of linguistic determinism, stating that people’s thoughts are determined by the categories made available by their language, and its weaker version, linguistic relativity, stating that differences among languages cause differences in the thoughts of their speakers. (Pinker, 1994: 57)

And lest they still thought that there is something to it, Pinker (1994) told them that it is “an example of what can be called a conventional absurdity” (p. 57) and “it is wrong, all wrong” (p. 57). Ironically, this ‘obituary’ for the SWH coincided with the neo-Whorfian revival, through the efforts of several linguists, psychologists, and anthropologists – most notably Gumperz and Levinson (1996), Lakoff (1987), Lee (1996), Lucy (1992a, b), and Slobin (1991, 1996a) – who were willing to buck the tide, to engage with the original texts, and to devise new methods of inquiry. This work will form the core of the chapters to come but for now I want to emphasize that the received belief in the validity of the terms of engagement articulated by Brown and Lenneberg and their attribution to Sapir and Whorf is still pervasive in many academic circles and evident in the numerous books and articles that regurgitate the SWH as the strong/weak dichotomy. The vulgarization of Whorf’s views bemoaned by Fishman (1982) also continues in popular accounts, and I fully agree with Pullum (1991) who, in his own critique of Whorf, noted:

Once the public has decided to accept something as an interesting fact, it becomes almost impossible to get the acceptance rescinded. The persistent interestingness and symbolic usefulness overrides any lack of factuality. (p. 159)

Popularizers of academic work continue to stigmatize Whorf through comments such as “anyone can estimate the time of day, even the Hopi Indians; these people were once attributed with a lack of any conception of time by a book-bound scholar, who had never met them” (Richards, 1998: 44). Even respectable linguists perpetuate the strawman version of “extreme relativism – the idea that there are no facts common to all cultures and languages” (Everett, 2012: 201) or make cheap shots at “the most notorious of the con men, Benjamin Lee Whorf, who seduced a whole generation into believing, without a shred of evidence, that American Indian languages lead their speakers to an entirely different conception of reality from ours” (Deutscher, 2010: 21). This assertion is then followed by a statement that while the link between language, culture, and cognition “seems perfectly kosher in theory, in practice the mere whiff of the subject today makes most linguists, psychologists, and anthropologists recoil” because the topic “carries with it a baggage of intellectual history which is so disgraceful that the mere suspicion of association with it can immediately brand anyone a fraud” (Deutscher, 2010: 21).

Such comments are not just an innocent rhetorical strategy aimed at selling more copies: the uses of hyperbole (most linguists, psychologists, and anthropologists; mere suspicion of association), affect (disgraceful, fraud, recoil, embarrassment), misrepresentation (disgraceful baggage of intellectual history), straw-man arguments and reductio ad absurdum as a means of persuasion have played a major role in manufacturing the false consent in the history of ideas that Deutscher (2010) finds so ‘disgraceful’ (readers interested in the dirty tricks used by scholars should read the expert description by Pinker, 2007: 89–90). What is particularly interesting is that both Deutscher (2010) and Everett (2012) actually marshal evidence in support of Whorf’s original arguments. Their attempt to do so while distancing themselves from Whorf would have fascinated Whorf, for it reveals two patterns of habitual thought common in English-language academia: the uncritical adoption of the received version of the SWH and the reliance on the metaphor of ‘argument as war’ (Tannen, 1998), i.e., an assumption that each argument has ‘two sides’ (not one or three), that these sides should be polarized in either/or terms, and that in order to present oneself as a ‘reasonable’ author, one should exaggerate the alternatives and then occupy the ‘rational’ position in between. Add to this the reductionism common for trade books and the knowledge that criticism sells better than praise, and you get Whorf as a ‘con man’.

Dark Matter of the Mind
by Daniel L. Everett
Kindle Locations 352-373

I am here particularly concerned with difference, however, rather than sameness among the members of our species—with variation rather than homeostasis. This is because the variability in dark matter from one society to another is fundamental to human survival, arising from and sustaining our species’ ecological diversity. The range of possibilities produces a variety of “human natures” (cf. Ehrlich 2001). Crucial to the perspective here is the concept-apperception continuum. Concepts can always be made explicit; apperceptions less so. The latter result from a culturally guided experiential memory (whether conscious or unconscious or bodily). Such memories can be not only difficult to talk about but often ineffable (see Majid and Levinson 2011; Levinson and Majid 2014). Yet both apperception and conceptual knowledge are uniquely determined by culture, personal history, and physiology, contributing vitally to the formation of the individual psyche and body.

Dark matter emerges from individuals living in cultures and thereby underscores the flexibility of the human brain. Instincts are incompatible with flexibility. Thus special care must be given to evaluating arguments in support of them (see Blumberg 2006 for cogent criticisms of many purported examples of instincts, as well as the abuse of the term in the literature). If we have an instinct to do something one way, this would impede learning to do it another way. For this reason it would surprise me if creatures higher on the mental and cerebral evolutionary scale—you and I, for example—did not have fewer rather than more instincts. Humans, unlike cockroaches and rats—two other highly successful members of the animal kingdom—adapt holistically to the world in which they live, in the sense that they can learn to solve problems across environmental niches, then teach their solutions and reflect on these solutions. Cultures turn out to be vital to this human adaptational flexibility—so much so that the most important cognitive question becomes not “What is in the brain?” but “What is the brain in?” (That is, in what individual, residing in what culture does this particular brain reside?)

The brain, by this view, was designed to be as close to a blank slate as was possible for survival. In other words, the views of Aristotle, Sapir, Locke, Hume, and others better fit what we know about the nature of the brain and human evolution than the views of Plato, Bastian, Freud, Chomsky, Tooby, Pinker, and others. Aristotle’s tabula rasa seems closer to being right than is currently fashionable to suppose, especially when we answer the pointed question, what is left in the mind/brain when culture is removed?

Most of the lessons of this book derive from the idea that our brains (including our emotions) and our cultures are related symbiotically through the individual, and that neither supervenes on the other. In this framework, nativist ideas often are superfluous.

Kindle Locations 3117-3212

Science, we might say, ought to be exempt from dark matter. Yet that is much harder to claim than to demonstrate. […] To take a concrete example of a science, we focus on linguistics, because this discipline straddles the borders between the sciences, humanities, and social sciences. The basic idea to be explored is this: because counterexamples and exceptions are culturally determined in linguistics, as in all sciences, scientific progress is the output of cultural values. These values differ even within the same discipline (e.g., linguistics), however, and can lead to different notions of progress in science. To mitigate this problem, therefore, to return to linguistics research as our primary example, our inquiry should be informed by multiple theories, with a focus on languageS rather than Language. To generalize, this would mean a focus on the particular rather than the general in many cases. Such a focus (in spite of the contrast between this and many scientists’ view that generalizations are the goal of science) develops a robust empirical basis while helping to distinguish local theoretical culture from broader, transculturally agreed-upon desiderata of science— an issue that theories of language, in a way arguably more extreme than in other disciplines, struggle to tease apart.

The reason that a discussion of science and dark matter is important here is to probe the significance and meaning of dark matter, culture, and psychology in the more comfortable, familiar territory of the reader, to understand that what we are contemplating here is not limited to cultures unlike our own, but affects every person, every endeavor of Homo sapiens, even the hallowed enterprise of science. This is not to say that science is merely a cultural illusion. This chapter has nothing to do with postmodernist epistemological relativity. But it does aim to show that science is not “pure rationality,” autonomous from its cultural matrix. […]

Whether we classify an anomaly as counterexample or exception depends on our dark matter— our personal history plus cultural values, roles, and knowledge structures. And the consequences of our classification are also determined by culture and dark matter. Thus, by social consensus, exceptions fall outside the scope of the statements of a theory or are explicitly acknowledged by the theory to be “problems” or “mysteries.” They are not immediate problems for the theory. Counterexamples, on the other hand, by social consensus render a statement false. They are immediately acknowledged as (at least potential) problems for any theory. Once again, counterexamples and exceptions are the same etically, though they are nearly polar opposites emically. Each is defined relative to a specific theoretical tradition, a specific set of values, knowledge structures, and roles— that is, a particular culture.

One bias that operates in theories, the confirmation bias, is the cultural value that a theory is true and therefore that experiments are going to strengthen it, confirm it, but not falsify it. Anomalies appearing in experiments conducted by adherents of a particular theory are much more likely to be interpreted as exceptions that might require some adjustments of the instruments, but nothing serious in terms of the foundational assumptions of the theory. On the other hand, when anomalies turn up in experiments by opponents of a theory, there will be a natural bias to interpret these as counterexamples that should lead to the abandonment of the theory. Other values that can come into play for the cultural/theoretical classification of an anomaly as a counterexample or an exception include “tolerance for cognitive dissonance,” a value of the theory that says “maintain that the theory is right and, at least temporarily, set aside problematic facts,” assuming that they will find a solution after the passage of a bit of time. Some theoreticians call this tolerance “Galilean science”— the willingness to set aside all problematic data because a theory seems right. Fair enough. But when, why, and for how long a theory seems right in the face of counterexamples is a cultural decision, not one that is based on facts alone. We have seen that the facts of a counterexample and an exception can be exactly the same. Part of the issue of course is that data, like their interpretations, are subject to emicization. We decide to see data with a meaning, ignoring the particular variations that some other theory might seize on as crucial. In linguistics, for example, if a theory (e.g., Chomskyan theory) says that all relevant grammatical facts stop at the boundary of the sentence, then related facts at the level of paragraphs, stories, and so on, are overlooked.

The cultural and dark matter forces determining the interpretation of anomalies in the data that lead one to abandon a theory and another to maintain it themselves create new social situations that confound the intellect and the sense of morality that often is associated with the practice of a particular theory. William James (1907, 198) summed up some of the reactions to his own work, as evidence of these reactions to the larger field of intellectual endeavors: “I fully expect to see the pragmatist view of truth run through the classic stages of a theory’s career. First, you know, a new theory is attacked as absurd; then it is admitted to be true, but obvious and insignificant; finally it is seen to be so important that its adversaries claim that they themselves discovered it.”

In recent years, due to my research and claims regarding the grammar of the Amazonian Pirahã— that this language lacks recursion— I have been called a charlatan and a dull wit who has misunderstood. It has been (somewhat inconsistently) further claimed that my results are predicted (Chomsky 2010, 2014); it has been claimed that an alternative notion of recursion, Merge, was what the authors had in mind in saying that recursion is the foundation of human languages; and so on. And my results have been claimed to be irrelevant.

* * *

Beyond Our Present Knowledge
Useful Fictions Becoming Less Useful
Essentialism On the Decline
Is the Tide Starting to Turn on Genetics and Culture?
Blue on Blue
The Chomsky Problem
Dark Matter of the Mind
What is the Blank Slate of the Mind?
Cultural Body-Mind
How Universal Is The Mind?
The Psychology and Anthropology of Consciousness
On Truth and Bullshit

Inequality in the Anthropocene

This post was inspired by an article on the possibility of increasing suicides because of climate change. What occurred to me is that all the social and psychological problems seen with climate change are also seen with inequality (as shown in decades of research) and, to a lesser extent, with extreme poverty — although high poverty with low inequality isn’t necessarily problematic at all (e.g., the physically and psychologically healthy hunter-gatherers who are poor in terms of material wealth and private property).

Related to this, I noticed in one article that a study was mentioned about the chances of war increasing when detrimental weather events are combined with ethnic diversity. And that reminded me of the research that showed diversity only leads to lowered trust when combined with segregation. A major problem with climate-related refugee crises is that they increase segregation, such as refugee camps and immigrant ghettoization. That segregation will lead to further conflict and destruction of the social fabric, which in turn will promote further segregation — a vicious cycle that will be hard to pull out of before the crash, especially as the environmental conditions lead to droughts, famines, and plagues.

As economic and environmental conditions worsen, there are some symptoms that will become increasingly apparent and problematic. Based on the inequality and climatology research, we should expect increased stress, anxiety, fear, xenophobia, bigotry, suicide, homicide, aggressive behavior, short-term thinking, reactionary politics, and generally crazy and bizarre behavior. This will likely result in civil unrest, violent conflict, race wars, genocides, terrorism, militarization, civil wars, revolutions, international conflict, resource-based wars, world wars, authoritarianism, ethno-nationalism, right-wing populism, etc.

The only defense against this will be a strong, courageous left-wing response. That would require eliminating not only the derangement of the GOP but also the corruption of the DNC by replacing both with a genuinely democratic and socialist movement. Otherwise, our society will descend into collective madness and our entire civilization will be under existential threat. There is no other option.

* * *

The Great Acceleration and the Great Divergence: Vulnerability in the Anthropocene
by Rob Nixon

Most Anthropocene scholars date the new epoch to the late-eighteenth-century beginnings of industrialization. But there is a second phase to the Anthropocene, the so-called great acceleration, beginning circa 1950: an exponential increase in human-induced changes to the carbon cycle and nitrogen cycle and in ocean acidification, global trade, and consumerism, as well as the rise of international forms of governance like the World Bank and the IMF.

However, most accounts of the great acceleration fail to position it in relation to neoliberalism’s recent ascent, although most of the great acceleration has occurred during the neoliberal era. One marker of neoliberalism has been a widening chasm of inequality between the superrich and the ultrapoor: since the late 1970s, we have been living through what Timothy Noah calls “the great divergence.” Noah’s subject is the economic fracturing of America, the new American gilded age, but the great divergence has scarred most societies, from China and India to Indonesia, South Africa, Nigeria, Italy, Spain, Ireland, Costa Rica, Jamaica, Australia, and Bangladesh.

My central problem with the dominant mode of Anthropocene storytelling is its failure to articulate the great acceleration to the great divergence. We need to acknowledge that the grand species narrative of the Anthropocene—this geomorphic “age of the human”—is gaining credence at a time when, in society after society, the idea of the human is breaking apart economically, as the distance between affluence and abandonment is increasing. It is time to remold the Anthropocene as a shared story about unshared resources. When we examine the geology of the human, let us also pay attention to the geopolitics of the new stratigraphy’s layered assumptions.

Neoliberalism loves watery metaphors: the trickle-down effect, global flows, how a rising tide lifts all boats. But talk of a rising tide raises other specters: the coastal poor, who will never get storm-surge barriers; Pacific Islanders in the front lines of inundation; Arctic peoples, whose livelihoods are melting away—all of them exposed to the fallout from Anthropocene histories of carbon extraction and consumption in which they played virtually no part.

We are not all in this together
by Ian Angus

So the 21st century is being defined by a combination of record-breaking inequality with record-breaking climate change. That combination is already having disastrous impacts on the majority of the world’s people. The line is not only between rich and poor, or comfort and poverty: it is a line between survival and death.

Climate change and extreme weather events are not devastating a random selection of human beings from all walks of life. There are no billionaires among the dead, no corporate executives living in shelters, no stockbrokers watching their children die of malnutrition. Overwhelmingly, the victims are poor and disadvantaged. Globally, 99 percent of weather disaster casualties are in developing countries, and 75 percent of them are women.

The pattern repeats at every scale. Globally, the South suffers far more than the North. Within the South, the very poorest countries, mostly in Africa south of the Sahara, are hit hardest. Within each country, the poorest people—women, children, and the elderly—are most likely to lose their homes and livelihoods from climate change, and most likely to die.

The same pattern occurs in the North. Despite the rich countries’ overall wealth, when hurricanes and heatwaves hit, the poorest neighborhoods are hit hardest, and within those neighborhoods the primary victims are the poorest people.

Chronic hunger, already a severe problem in much of the world, will be made worse by climate change. As Oxfam reports: “The world’s most food-insecure regions will be hit hardest of all.”

Unchecked climate change will lock the world’s poorest people in a downward spiral, leaving hundreds of millions facing malnutrition, water scarcity, ecological threats, and loss of livelihood. Children will be among the primary victims, and the effects will last for lifetimes: studies in Ethiopia, Kenya, and Niger show that being born in a drought year increases a child’s chances of being irreversibly stunted by 41 to 72 percent.

Environmental racism has left black Americans three times more likely to die from pollution
By Bartees Cox

Without a touch of irony, the EPA celebrated Black History Month by publishing a report that finds black communities face dangerously high levels of pollution. African Americans are more likely to live near landfills and industrial plants that pollute water and air and erode quality of life. Because of this, more than half of the 9 million people living near hazardous waste sites are people of color, and black Americans are three times more likely to die from exposure to air pollutants than their white counterparts.

The statistics provide evidence for what advocates call “environmental racism.” Communities of color aren’t suffering by chance, they say. Rather, these conditions are the result of decades of indifference from people in power.

Environmental racism is dangerous. Trump’s EPA doesn’t seem to care.
by P.R. Lockhart

Studies have shown that black and Hispanic children are more likely to develop asthma than their white peers, as are poor children, with research suggesting that higher levels of smog and air pollution in communities of color are a factor. A 2014 study found that people of color live in communities that have more nitrogen dioxide, a pollutant that exacerbates asthma.

The EPA’s own research further supported this. Earlier this year, a paper from the EPA’s National Center for Environmental Assessment found that when it comes to air pollutants that contribute to issues like heart and lung disease, black people are exposed to 1.5 times more of the pollutant than white people, while Hispanic people were exposed to about 1.2 times the amount of non-Hispanic whites. People in poverty had 1.3 times the exposure of those not in poverty.

Trump’s EPA Concludes Environmental Racism Is Real
by Vann R. Newkirk II

Late last week, even as the Environmental Protection Agency and the Trump administration continued a plan to dismantle many of the institutions built to address those disproportionate risks, researchers embedded in the EPA’s National Center for Environmental Assessment released a study indicating that people of color are much more likely to live near polluters and breathe polluted air. Specifically, the study finds that people in poverty are exposed to more fine particulate matter than people living above poverty. According to the study’s authors, “results at national, state, and county scales all indicate that non-Whites tend to be burdened disproportionately to Whites.”

The study focuses on particulate matter, a group of both natural and manmade microscopic suspensions of solids and liquids in the air that serve as air pollutants. Anthropogenic particulates include automobile fumes, smog, soot, oil smoke, ash, and construction dust, all of which have been linked to serious health problems. Particulate matter was named a known definite carcinogen by the International Agency for Research on Cancer, and it’s been named by the EPA as a contributor to several lung conditions, heart attacks, and possible premature deaths. The pollutant has been implicated in both asthma prevalence and severity, low birth weights, and high blood pressure.

As the study details, previous works have also linked disproportionate exposure to particulate matter and America’s racial geography. A 2016 study in Environment International found that long-term exposure to the pollutant is associated with racial segregation, with more highly segregated areas suffering higher levels of exposure. A 2012 article in Environmental Health Perspectives found that overall levels of particulate matter exposure for people of color were higher than those for white people. That article also provided a breakdown of just what kinds of particulate matter counts in the exposures. It found that while differences in overall particulate matter by race were significant, differences for some key particles were immense. For example, Hispanics faced rates of chlorine exposure that are more than double those of whites. Chronic chlorine inhalation is known for degrading cardiac function.

The conclusions from scientists at the National Center for Environmental Assessment not only confirm that body of research, but advance it in a top-rate public-health journal. They find that black people are exposed to about 1.5 times more particulate matter than white people, and that Hispanics had about 1.2 times the exposure of non-Hispanic whites. The study found that people in poverty had about 1.3 times more exposure than people above poverty. Interestingly, it also finds that for black people, the proportion of exposure is only partly explained by the disproportionate geographic burden of polluting facilities, meaning the magnitude of emissions from individual factories appears to be higher in minority neighborhoods.

These findings join an ever-growing body of literature that has found that both polluters and pollution are often disproportionately located in communities of color. In some places, hydraulic-fracturing oil wells are more likely to be sited in those neighborhoods. Researchers have found the presence of benzene and other dangerous aromatic chemicals to be linked to race. Strong racial disparities are suspected in the prevalence of lead poisoning.

It seems that almost anywhere researchers look, there is more evidence of deep racial disparities in exposure to environmental hazards. In fact, the idea of environmental justice—or the degree to which people are treated equally and meaningfully involved in the creation of the human environment—was crystallized in the 1980s with the aid of a landmark study illustrating wide disparities in the siting of facilities for the disposal of hazardous waste. Leaders in the environmental-justice movement have posited—in places as prestigious and rigorous as United Nations publications and numerous peer-reviewed journals—that environmental racism exists as the inverse of environmental justice, when environmental risks are allocated disproportionately along the lines of race, often without the input of the affected communities of color.

The idea of environmental racism is, like all mentions of racism in America, controversial. Even in the age of climate change, many people still view the environment mostly as a set of forces of nature, one that cannot favor or disfavor one group or another. And even those who recognize that the human sphere of influence shapes almost every molecule of the places in which humans live, from the climate to the weather to the air they breathe, are often loath to concede that racism is a factor. To many people, racism often connotes purposeful decisions by a master hand, and many see existing segregation as a self-sorting or poverty problem. Couldn’t the presence of landfills and factories in disproportionately black neighborhoods have more to do with the fact that black people tend to be disproportionately poor and thus live in less desirable neighborhoods?

But last week’s study throws more water on that increasingly tenuous line of thinking. While it lacks the kind of complex multivariate design that can really disentangle the exact effects of poverty and race, the finding that race has a stronger effect on exposure to pollutants than poverty indicates that something beyond just the concentration of poverty among black people and Latinos is at play. As the study’s authors write: “A focus on poverty to the exclusion of race may be insufficient to meet the needs of all burdened populations.” Their finding that the magnitude of pollution seems to be higher in communities of color than the number of polluters suggests, indicates that regulations and business decisions are strongly dependent on whether people of color are around. In other words, they might be discriminatory.

This is a remarkable finding, and not only because it could provide one more policy linkage to any number of health disparities, from heart disease to asthma rates in black children that are double those of white children. But the study also stands as an implicit rebuke to the very administration that allowed its release.

Violence: Categories & Data, Causes & Demographics

Most violent crime correlates to social problems in general. Most social problems in general correlate to economic factors such as poverty, but even more so to inequality. And in a country like the US, most economic factors correlate to social disadvantage and racial oppression, from economic segregation (redlining, sundown towns, etc) to environmental racism (ghettos located in polluted urban areas, high toxicity rates among minorities, etc) — consider how areas with historically high rates of slavery at present have higher levels of poverty and inequality, impacting not just blacks but also whites living in those communities.

Socialized Medicine & Externalized Costs

About 40 percent of deaths worldwide are caused by water, air and soil pollution, concludes a Cornell researcher. Such environmental degradation, coupled with the growth in world population, are major causes behind the rapid increase in human diseases, which the World Health Organization has recently reported. Both factors contribute to the malnourishment and disease susceptibility of 3.7 billion people, he says.

Percentages of Suffering and Death

Even accepting the data that Pinker uses, it must be noted that he isn’t including all violent deaths. Consider economic sanctions and neoliberal exploitation, vast poverty and inequality forcing people to work long hours in unsafe and unhealthy conditions, covert operations to overthrow governments and destabilize regions, anthropogenic climate change with its disasters, environmental destruction and ecosystem collapse, loss of arable land and food sources, pollution and toxic dumps, etc. All of this would involve food scarcity, malnutrition, starvation, droughts, rampant disease, refugee crises, diseases related to toxicity and stress, etc., along with all kinds of other consequences for people living in desperation and squalor.

This has all been intentionally caused through governments, corporations, and other organizations seeking power and profit while externalizing costs and harm. In my lifetime, the fatalities from this large-scale, often slow violence and intergenerational trauma could add up to hundreds of millions or maybe billions of lives cut short. Plus, as neoliberal globalization worsens inequality, there is a direct link to higher rates of homicides, suicides, and stress-related diseases for the most impacted populations. Yet none of these deaths would be counted as violent, no matter how horrific the experience was for the victims. And those like Pinker adding up the numbers would never have to acknowledge this overwhelming reality of suffering. It can’t be seen in the official data on violence, as the causes are disconnected from the effects. But why should only a small part of the harm and suffering get counted as violence?

Learning to Die in the Anthropocene: Reflections on the End of a Civilization
by Roy Scranton
Kindle Locations 860-888 (see here)

Consider: Once among the most modern, Westernized nations in the Middle East, with a robust, highly educated middle class, Iraq has been blighted for decades by imperialist aggression, criminal gangs, interference in its domestic politics, economic liberalization, and sectarian feuding. Today it is being torn apart between a corrupt petrocracy, a breakaway Kurdish enclave, and a self-declared Islamic fundamentalist caliphate, while a civil war in neighboring Syria spills across its borders. These conflicts have likely been caused in part and exacerbated by the worst drought the Middle East has seen in modern history. Since 2006, Syria has been suffering crippling water shortages that have, in some areas, caused 75 percent crop failure and wiped out 85 percent of livestock, left more than 800,000 Syrians without a livelihood, and sent hundreds of thousands of impoverished young men streaming into Syria’s cities. 90 This drought is part of long-term warming and drying trends that are transforming the Middle East. 91 Not just water but oil, too, is elemental to these conflicts. Iraq sits on the fifth-largest proven oil reserves in the world. Meanwhile, the Islamic State has been able to survive only because it has taken control of most of Syria’s oil and gas production. We tend to think of climate change and violent religious fundamentalism as isolated phenomena, but as Retired Navy Rear Admiral David Titley argues, “you can draw a very credible climate connection to this disaster we call ISIS right now.” 92

A few hundred miles away, Israeli soldiers spent the summer of 2014 killing Palestinians in Gaza. Israel has also been suffering drought, while Gaza has been in the midst of a critical water crisis exacerbated by Israel’s military aggression. The International Committee for the Red Cross reported that during summer 2014, Israeli bombers targeted Palestinian wells and water infrastructure. 93 It’s not water and oil this time, but water and gas: some observers argue that Israel’s “Operation Protective Edge” was intended to establish firmer control over the massive Leviathan natural gas field, discovered off the coast of Gaza in the eastern Mediterranean in 2010. 94

Meanwhile, thousands of miles to the north, Russian-backed separatists fought fascist paramilitary forces defending the elected government of Ukraine, which was also suffering drought. 95 Russia’s role as an oil and gas exporter in the region and the natural gas pipelines running through Ukraine from Russia to Europe cannot but be key issues in the conflict. Elsewhere, droughts in 2014 sent refugees from Guatemala and Honduras north to the US border, devastated crops in California and Australia, and threatened millions of lives in Eritrea, Somalia, Ethiopia, Sudan, Uganda, Afghanistan, India, Morocco, Pakistan, and parts of China. Across the world, massive protests and riots have swept Bosnia and Herzegovina, Venezuela, Brazil, Turkey, Egypt, and Thailand, while conflicts rage on in Colombia, Libya, the Central African Republic, Sudan, Nigeria, Yemen, and India. And while the world burns, the United States has been playing chicken with Russia over control of Eastern Europe and the melting Arctic, and with China over control of Southeast Asia and the South China Sea, threatening global war on a scale not seen in seventy years. This is our present and future: droughts and hurricanes, refugees and border guards, war for oil, water, gas, and food.

Donald Trump Is the First Demagogue of the Anthropocene
by Robinson Meyer

First, climate change could easily worsen the inequality that has already hollowed out the Western middle class. A recent analysis in Nature projected that the effects of climate change will reduce the average person’s income by 23 percent by the end of the century. The U.S. Environmental Protection Agency predicts that unmitigated global warming could cost the American economy $200 billion this century. (Some climate researchers think the EPA undercounts these estimates.)

Future consumers will not register these costs so cleanly, though—there will not be a single climate-change debit exacted on everyone’s budgets at year’s end. Instead, the costs will seep in through many sources: storm damage, higher power rates, real-estate depreciation, unreliable and expensive food. Climate change could get laundered, in other words, becoming just one more symptom of a stagnant and unequal economy. As quality of life declines, and insurance premiums rise, people could feel that they’re being robbed by an aloof elite.

They won’t even be wrong. It’s just that due to the chemistry of climate change, many members of that elite will have died 30 or 50 years prior. […]

Malin Mobjörk, a senior researcher at the Stockholm International Peace Research Institute, recently described a “growing consensus” in the literature that climate change can raise the risk of violence. And the U.S. Department of Defense already considers global warming a “threat multiplier” for national security. It expects hotter temperatures and acidified oceans to destabilize governments and worsen infectious pandemics.

Indeed, climate change may already be driving mass migrations. Last year, the Democratic presidential candidate Martin O’Malley was mocked for suggesting that a climate-change-intensified drought in the Levant—the worst drought in 900 years—helped incite the Syrian Civil War, thus kickstarting the Islamic State. The evidence tentatively supports him. Since the outbreak of the conflict, some scholars have recognized that this drought pushed once-prosperous farmers into Syria’s cities. Many became unemployed and destitute, aggravating internal divisions in the run-up to the war. […]

They were not disappointed. Heatwaves, droughts, and other climate-related exogenous shocks do correlate to conflict outbreak—but only in countries primed for conflict by ethnic division. In the 30-year period, nearly a quarter of all ethnic-fueled armed conflict coincided with a climate-related calamity. By contrast, in the set of all countries, war only correlated to climatic disaster about 9 percent of the time.

“We cannot find any evidence for a generalizable trigger relationship, but we do find evidence for some risk enhancement,” Schleussner told me. In other words, climate disaster will not cause a war, but it can influence whether one begins.

Why climate change is very bad for your health
by Geordan Dickinson Shannon

Ecosystems

We don’t live in isolation from other ecosystems. From large-scale weather events, through to the food we eat daily, right down to the minute organisms colonising our skin and digestive systems, we live and breathe in co-dependency with our environment.

A change in the delicate balance of micro-organisms has the potential to lead to disastrous effects. For example, microbial proliferation – which is predicted in warmer temperatures driven by climate change – may lead to more enteric infections (caused by viruses and bacteria that enter the body through the gastrointestinal tract), such as salmonella food poisoning and increased cholera outbreaks related to flooding and warmer coastal and estuarine water.

Changes in temperature, humidity, rainfall, soil moisture and sea-level rise, caused by climate change, are also affecting the transmission of dangerous insect-borne infectious diseases. These include malaria, dengue, Japanese encephalitis, chikungunya, West Nile virus, lymphatic filariasis, plague, tick-borne encephalitis, Lyme disease, rickettsioses, and schistosomiasis.

Through climate change, the pattern of human interaction will likely change and so will our interactions with disease-spreading insects, especially mosquitoes. The World Health Organisation has also stressed the impact of climate change on the reproductive, survival and bite rates of insects, as well as their geographic spread.

Climate refugees

Perhaps the most disastrous effect of climate change on human health is the emergence of large-scale forced migration from the loss of local livelihoods and weather events – something that is recognised by the United Nations High Commission on Human Rights. Sea-level rise, decreased crop yield, and extreme weather events will force many people from their lands and livelihoods, while refugees in vulnerable areas also face amplified conditions such as fewer food supplies and more insect-borne diseases. And those who are displaced put a significant health and economic burden on surrounding communities.

The International Red Cross estimates that there are more environmental refugees than political. Around 36m people were displaced by natural disasters in 2009; a figure that is predicted to rise to more than 50m by 2050. In one worst-case scenario, as many as 200m people could become environmental refugees.

Not a level playing field

Climate change has emerged as a major driver of global health inequalities. As J. Timmons Roberts, professor of Environmental Studies and Sociology at Brown University, put it:

Global warming is all about inequality, both in who will suffer most its effects and in who created the problem in the first place.

Global climate change further polarises the haves and the have-nots. The Intergovernmental Panel on Climate Change predicts that climate change will hit poor countries hardest. For example, the loss of healthy life years in low-income African countries is predicted to be 500 times that in Europe. The number of people in the poorest countries most vulnerable to hunger is predicted by Oxfam International to increase by 20% in 2050. And many of the major killers affecting developing countries, such as malaria, diarrhoeal illnesses, malnutrition and dengue, are highly sensitive to climate change, which would place a further disproportionate burden on poorer nations.

Most disturbingly, countries with weaker health infrastructure – generally situated in the developing world – will be the least able to cope with the effects of climate change. The world’s poorest regions don’t yet have the technical, economic, or scientific capacity to prepare or adapt.

Predictably, those most vulnerable to climate change are not those who contribute most to it. China, the US, and the European Union combined have contributed more than half the world’s total carbon dioxide emissions in the last few centuries. By contrast, and unfairly, countries that contributed the least carbon emissions (measured in per capita emissions of carbon dioxide) include many African nations and small Pacific islands – exactly those countries which will be least prepared and most affected by climate change.

Here’s Why Climate Change Will Increase Deaths by Suicide
by Francis Vergunst, Helen Louise Berry & Massimiliano Orri

Suicide is already among the leading causes of death worldwide. For people aged 15-55 years, it is among the top five causes of death. Worldwide nearly one million people die by suicide each year — more than all deaths from war and murder combined.

Using historical temperature records from the United States and Mexico, the researchers showed that suicide rates increased by 0.7 per cent in the U.S. and by 2.1 per cent in Mexico when the average monthly temperatures rose by 1 C.

The researchers calculated that if global temperatures continue to rise at these rates, between now and 2050 there could be 9,000 to 40,000 additional suicides in the U.S. and Mexico alone. This is roughly equivalent to the number of additional suicides that follow an economic recession.

Spikes during heat waves

It has been known for a long time that suicide rates spike during heat waves. Hotter weather has been linked with higher rates of hospital admissions for self-harm, suicide and violent suicides, as well as increases in population-level psychological distress, particularly in combination with high humidity.

Another recent study, which combined the results of previous research on heat and suicide, concluded there is “a significant and positive association between temperature rises and incidence of suicide.”

Why this is so remains unclear. There is a well-documented link between rising temperatures and interpersonal violence, and suicide could be understood as an act of violence directed at oneself. Lisa Page, a researcher in psychology at King’s College London, notes:

“While speculative, perhaps the most promising mechanism to link suicide with high temperatures is a psychological one. High temperatures have been found to lead individuals to behave in a more disinhibited, aggressive and violent manner, which might in turn result in an increased propensity for suicidal acts.”

Hotter temperatures are taxing on the body. They cause an increase in the stress hormone cortisol, reduce sleep quality and disrupt people’s physical activity routines. These changes can reduce well-being and increase psychological distress.

Disease, water shortages, conflict and war

The effects of hotter temperatures on suicides are symptomatic of a much broader and more expansive problem: the impact of climate change on mental health.

Climate change will increase the frequency and severity of heat waves, droughts, storms, floods and wildfires. It will extend the range of infectious diseases such as Zika virus, malaria and Lyme disease. It will contribute to food and water shortages and fuel forced migration, conflict and war.

These events can have devastating effects on people’s health, homes and livelihoods and directly impact psychological health and well-being.

But effects are not limited to people who suffer direct losses — for example, it has been estimated that up to half of Hurricane Katrina survivors developed post-traumatic stress disorder even when they had suffered no direct physical losses.

The feelings of loss that follow catastrophic events, including a sense of loss of safety, can erode community well-being and further undermine mental health resilience.

The Broken Ladder
by Keith Payne
pp. 3-4 (see here)

[W]hen the level of inequality becomes too large to ignore, everyone starts acting strange.

But they do not act strange in just any old way. Inequality affects our actions and our feelings in the same systematic, predictable fashion again and again. It makes us shortsighted and prone to risky behavior, willing to sacrifice a secure future for immediate gratification. It makes us more inclined to make self-defeating decisions. It makes us believe weird things, superstitiously clinging to the world as we want it to be rather than as it is. Inequality divides us, cleaving us into camps not only of income but also of ideology and race, eroding our trust in one another. It generates stress and makes us all less healthy and less happy.

Picture a neighborhood full of people like the ones I’ve described above: shortsighted, irresponsible people making bad choices; mistrustful people segregated by race and by ideology; superstitious people who won’t listen to reason; people who turn to self-destructive habits as they cope with the stress and anxieties of their daily lives. These are the classic tropes of poverty and could serve as a stereotypical description of the population of any poor inner-city neighborhood or depressed rural trailer park. But as we will see in the chapters ahead, inequality can produce these tendencies even among the middle class and wealthy individuals.

pp. 119-120 (see here)

But how can something as abstract as inequality or social comparisons cause something as physical as health? Our emergency rooms are not filled with people dropping dead from acute cases of inequality. No, the pathways linking inequality to health can be traced through specific maladies, especially heart disease, cancer, diabetes, and health problems stemming from obesity. Abstract ideas that start as macroeconomic policies and social relationships somehow get expressed in the functioning of our cells.

To understand how that expression happens, we have to first realize that people from different walks of life die different kinds of deaths, in part because they live different kinds of lives. We saw in Chapter 2 that people in more unequal states and countries have poor outcomes on many health measures, including violence, infant mortality, obesity and diabetes, mental illness, and more. In Chapter 3 we learned that inequality leads people to take greater risks, and uncertain futures lead people to take an impulsive, live fast, die young approach to life. There are clear connections between the temptation to enjoy immediate pleasures versus denying oneself for the benefit of long-term health. We saw, for example, that inequality was linked to risky behaviors. In places with extreme inequality, people are more likely to abuse drugs and alcohol, more likely to have unsafe sex, and so on. Other research suggests that living in a high-inequality state increases people’s likelihood of smoking, eating too much, and exercising too little.

Essentialism On the Decline

Before getting to the topic of essentialism, let me take an indirect approach. In reading about paleolithic diets and traditional foods, a recurring theme is inflammation, specifically as it relates to the health of the gut-brain network and immune system.

The paradigm change this signifies is that seemingly separate diseases with different diagnostic labels often have underlying commonalities. They share overlapping sets of causal and contributing factors, biological processes, and symptoms. This is why simple dietary changes can have a profound effect on numerous health conditions. For some, the diseased state expresses as mood disorders, for others as autoimmune disorders, and for still others as something else entirely, but there are immense commonalities between them all. The differences have more to do with how dysbiosis and dysfunction happen to develop, where they take hold in the body, and so what symptoms are experienced.

From a paleo diet perspective in treating both patients and her own multiple sclerosis, Terry Wahls gets at this point in a straightforward manner (p. 47): “In a very real sense, we all have the same disease because all disease begins with broken, incorrect biochemistry and disordered communication within and between our cells. […] Inside, the distinction between these autoimmune diseases is, frankly, fairly arbitrary”. In How Emotions Are Made, Lisa Feldman Barrett wrote (Kindle Locations 3834-3850):

“Inflammation has been a game-changer for our understanding of mental illness. For many years, scientists and clinicians held a classical view of mental illnesses like chronic stress, chronic pain, anxiety, and depression. Each ailment was believed to have a biological fingerprint that distinguished it from all others. Researchers would ask essentialist questions that assume each disorder is distinct: “How does depression impact your body? How does emotion influence pain? Why do anxiety and depression frequently co-occur?” 9

“More recently, the dividing lines between these illnesses have been evaporating. People who are diagnosed with the same-named disorder may have greatly diverse symptoms— variation is the norm. At the same time, different disorders overlap: they share symptoms, they cause atrophy in the same brain regions, their sufferers exhibit low emotional granularity, and some of the same medications are prescribed as effective.

“As a result of these findings, researchers are moving away from a classical view of different illnesses with distinct essences. They instead focus on a set of common ingredients that leave people vulnerable to these various disorders, such as genetic factors, insomnia, and damage to the interoceptive network or key hubs in the brain (chapter 6). If these areas become damaged, the brain is in big trouble: depression, panic disorder, schizophrenia, autism, dyslexia, chronic pain, dementia, Parkinson’s disease, and attention deficit hyperactivity disorder are all associated with hub damage. 10

“My view is that some major illnesses considered distinct and “mental” are all rooted in a chronically unbalanced body budget and unbridled inflammation. We categorize and name them as different disorders, based on context, much like we categorize and name the same bodily changes as different emotions. If I’m correct, then questions like, “Why do anxiety and depression frequently co-occur?” are no longer mysteries because, like emotions, these illnesses do not have firm boundaries in nature.”

What jumped out at me was the conventional view of disease as essentialist, and hence the related essentialism in biology and psychology. This is exemplified by genetic determinism, such as it informs race realism. It’s easy for most well-informed people to dismiss race realists, but essentialism takes on much more insidious forms that are harder to detect and root out. When scientists claimed to find a gay gene, some gay men quickly took this genetic determinism as a defense against the fundamentalist view that homosexuality is a choice and a sin. It turned out that there was no gay gene (by the way, this incident demonstrated how, in reacting to reactionaries, even leftist activists can be drawn into the reactionary mind). Not only is there no gay gene, but there are also no simple and absolute gender divisions at all — as I previously explained (Is the Tide Starting to Turn on Genetics and Culture?):

“Recent research has taken this even further in showing that neither sex nor gender is binary (1, 2, 3, 4, & 5), as genetics and its relationship to environment, epigenetics, and culture is more complex than was previously realized. It’s far from uncommon for people to carry genetics of both sexes, even multiple DNA. It has to do with diverse interlinking and overlapping causal relationships. We aren’t all that certain at this point what ultimately determines the precise process of conditions, factors, and influences in how and why any given gene expresses or not and how and why it expresses in a particular way.”

The attraction of essentialism is powerful. And as shown in numerous cases, the attraction can be found across the political spectrum, as it offers a seemingly strong defense by diverting attention away from other factors. Similar to the gay gene, many people defend neurodiversity as if some people are simply born a particular way and therefore we can’t and shouldn’t seek to do anything to change or improve their condition, much less cure it or prevent it in future generations.

For example, those on the high-functioning end of the autism spectrum will occasionally defend their condition as being gifted in their ability to think and perceive differently. That is fine as far as it goes, but from a scientific perspective we still should find it concerning that conditions like this are on a drastic rise and it can’t be explained merely by greater rates of diagnosis. Whether or not one believes the world would be a better place with more people with autism, this shouldn’t be left as a fatalistic vision of an evolutionary leap, especially considering most on the autism spectrum aren’t high functioning — instead, we should try to understand why it is happening and what it means.

Researchers have found that there are prospective causes to be studied. Consider propionate, a substance discussed by Alanna Collen (10% Human, p. 83): “although propionate was an important compound in the body, it was also used as a preservative in bread products – the very foods many autistic children crave. To top it all off, clostridia species are known to produce propionate. In itself, propionate is not ‘bad’, but MacFabe began to wonder whether autistic children were getting an overdose.” This might explain why antibiotics helped many with autism, as they would have knocked back the clostridia population that was boosting propionate. To emphasize this point, when rodents were injected with propionate, they exhibited the precise behaviors of autism and they too showed inflammation in the brain. The fact that autistics often have brain inflammation, an unhealthy condition, is strong evidence that autism shouldn’t be taken as mere neurodiversity (and, among autistics, the commonality of inflammation-related gut issues emphasizes this point).

There is no doubt that genetic determinism, like the belief in an eternal soul, can be comforting. We identify with our genes, as we inherit them and are born with them. But to speak of inflammation or propionate or whatever makes it seem like we are victims of externalities. And it means we aren’t isolated individuals to be blamed or to take credit for who we are. To return to Collen (pp. 88-89):

“In health, we like to think we are the products of our genes and experiences. Most of us credit our virtues to the hurdles we have jumped, the pits we have climbed out of, and the triumphs we have fought for. We see our underlying personalities as fixed entities – ‘I am just not a risk-taker’, or ‘I like things to be organised’ – as if these are a result of something intrinsic to us. Our achievements are down to determination, and our relationships reflect the strength of our characters. Or so we like to think.

“But what does it mean for free will and accomplishment, if we are not our own masters? What does it mean for human nature, and for our sense of self? The idea that Toxoplasma, or any other microbe inhabiting your body, might contribute to your feelings, decisions and actions, is quite bewildering. But if that’s not mind-bending enough for you, consider this: microbes are transmissible. Just as a cold virus or a bacterial throat infection can be passed from one person to another, so can the microbiota. The idea that the make-up of your microbial community might be influenced by the people you meet and the places you go lends new meaning to the idea of cultural mind-expansion. At its simplest, sharing food and toilets with other people could provide opportunity for microbial exchange, for better or worse. Whether it might be possible to pick up microbes that encourage entrepreneurship at a business school, or a thrill-seeking love of motorbiking at a race track, is anyone’s guess for now, but the idea of personality traits being passed from person to person truly is mind-expanding.”

This goes beyond the personal level, which makes the implications all the more troubling. Our respective societies, communities, etc might be heavily influenced by environmental factors that we can’t see. A ton of research shows the tremendous impact of parasites, heavy metal toxins, food additives, farm chemicals, hormones, hormone mimics, hormone disruptors, etc. Entire regions might be shaped by even a single species of parasite, such as how higher rates of Toxoplasma gondii infection in New England are directly correlated with higher rates of neuroticism (see What do we inherit? And from whom? & Uncomfortable Questions About Ideology).

Essentialism, though still popular, has taken numerous major hits in recent years. It once was the dominant paradigm and went largely unquestioned. Consider how, early last century, respectable fields and frameworks such as anthropology, linguistic relativity, and behaviorism suggested that humans were largely products of environmental and cultural factors. This was the original basis of the attack on racism and race realism. In linguistics, Noam Chomsky overturned this view in positing the essentialist belief that, though never observed, much less proven, there must exist within the human brain a language module with a universal grammar. It was able to defeat and replace the non-essentialist theories because it was more satisfying to the WEIRD ideologies that were becoming a greater force in an increasingly WEIRD society.

Ever since Plato, Western civilization has been drawn toward the extremes of essentialism (as part of the larger Axial Age shift toward abstraction and idealism). Yet there has also long been a countervailing force (even among the ancients, non-essentialist interpretations were common; consider group identity: here, here, here, here, and here). It wasn’t predetermined that essentialism would be so victorious as to have nearly obliterated the memory of all alternatives. It fit the spirit of the times for this past century, but now the public mood is shifting again. It’s no accident that, as social democracy and socialism regain favor, environmentalist explanations are making a comeback. But this is merely the revival of a particular Western tradition of thought, a tradition that is centuries old.

I was reminded of this in reading Liberty in America’s Founding Moment by Howard Schwartz. It’s an interesting shift of gears, since Schwartz doesn’t write about anything related to biology, health, or science. But he does indirectly get at an environmentalist critique, which comes out in his analysis of David Hume (1711-1776). I’ve mostly thought of Hume in terms of his bundle theory of the self, possibly borrowed from Buddhism, which he might have learned about from Christian missionaries returning from the East. However he came to it, the bundle theory argued that there is no singular coherent self, contrary to a central tenet of traditional Christian theology. Still, heretical views of the self were hardly new — some detect a possible Western precursor of Humean bundle theory in the ideas of Baruch Spinoza (1632-1677).

Whatever its origins in Western thought, environmentalism has been challenging essentialism since the Enlightenment. And in the case of Hume, there is an early social constructionist view of society and politics, one in which what motivates people isn’t some fixed essence. This puts a different spin on things, as Hume’s writings were widely read during the revolutionary era when the United States was founded. Thomas Jefferson, among others, was familiar with Hume and highly recommended his work. Hume represented the position opposite to John Locke’s. We are now returning to this old battle of ideas.