The Language of Heritability

“The Minnesota twin study raised questions about the depth and pervasiveness of qualities specified by genes: Where in the genome, exactly, might one find the locus of recurrent nightmares or of fake sneezes? Yet it provoked an equally puzzling converse question: Why are identical twins different? Because, you might answer, fate impinges differently on their bodies. One twin falls down the crumbling stairs of her Calcutta house and breaks her ankle; the other scalds her thigh on a tipped cup of coffee in a European station. Each acquires the wounds, calluses, and memories of chance and fate. But how are these changes recorded, so that they persist over the years? We know that the genome can manufacture identity; the trickier question is how it gives rise to difference.”
~Siddhartha Mukherjee, Same But Different

If genetics is the words in a dictionary, then epigenetics is the creative force that forms those words into a library of books. Even when identical twins carry the exact same words in their genomic code, those words can be expressed in starkly different ways. Each gene’s expression depends on its relationship to numerous other genes, potentially thousands, and all of those genes together are modulated by the epigenome.

The epigenome itself can be altered by individual and environmental factors (type of work, exercise, and injuries; traumatic abuse, chronic stress, and prejudice; smoking, drinking, and malnutrition; clean or polluted air, water, and soil; availability of green spaces, socioeconomic class, and level of inequality; etc). Those changes can then be passed on across multiple generations (e.g., the grandchildren of famine victims having higher obesity rates). This applies even to complex behaviors being inherited (e.g., the grandchildren of mice conditioned to fear a cherry blossom scent still startling at that scent, despite never having experienced the shocks their grandparents did).

What is rarely understood is that heritability rates don’t refer to genetics alone. They speak to the entire package of influences. We don’t only inherit genes; we also inherit epigenetic markers and environmental conditions, all of the confounders that make twin studies next to useless. Heritability is only meaningful at a population level and can say nothing directly about individual people or individual factors such as a specific gene. And at the population level, research has shown that behavioral and cultural traits can persist over centuries, seemingly set in motion by distant historical events of which living memory has long since disappeared, the memory lingering instead in some combination of heritable factors.
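To make concrete what “only meaningful at a population level” means, here is a minimal sketch, in Python with invented numbers, of heritability as a ratio of variances. None of the values come from any actual study; the point is only that the statistic belongs to the population as a whole, not to any person in it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: each phenotype is the sum of a genetic value,
# an environmental value, and noise. All numbers are made up.
n = 10_000
genetic = rng.normal(0, 1.0, n)      # variation contributed by genotype
environment = rng.normal(0, 1.5, n)  # variation contributed by environment
noise = rng.normal(0, 0.5, n)
phenotype = genetic + environment + noise

# Heritability, in the broad statistical sense, is the share of phenotypic
# variance attributable to genetic variance -- a property of this population.
h2 = genetic.var() / phenotype.var()
print(f"heritability in this simulated population: {h2:.2f}")

# Narrow the environmental differences and the "heritability" of the very
# same genes goes up, because the denominator shrinks.
environment2 = rng.normal(0, 0.5, n)
phenotype2 = genetic + environment2 + noise
print(f"same genes, more uniform environment: {genetic.var() / phenotype2.var():.2f}")
```

The same simulated genotypes produce a very different heritability number as soon as the environmental spread changes, which is exactly why the statistic transfers so poorly between societies.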

Even if epigenetic changes lasted for only several generations (in at least some species they last much longer), social conditions could continually reinforce them so that they effectively become permanently set. And the epigenetics, in predisposing social behaviors, would feed back into the very conditions that maintain it, a vicious cycle. Or think of the centuries-long history of racism in the United States, where evidence shows racism remains pervasive, systemic, and institutional, in which case the heritability is partly being enforced upon an oppressed underclass by those with wealth, privilege, and power. That wealth, power, and privilege are likewise heritable, as is the entire social order. No one part can be disentangled from the rest, for none of us are separate from the world that we are born into.

Now consider that any given disease, behavior, or personality trait might be determined by thousands of genes, thousands of epigenetic markers, and thousands of external factors. Changing any single part of that puzzle might rearrange the entire result, even leading to a completely opposite expression. The epigenome determines not only whether a gene is expressed but how it is expressed, because it determines which words are used in the genomic dictionary and how those words are linked into sentences, paragraphs, and chapters. So, one gene might be correlated as heritable with something in a particular society while correlated with something else entirely in a different society. The same gene could potentially have immensely varied outcomes, in the way the same word can be found in hundreds of thousands of books. Many of the same words are found in both Harry Potter and Hamlet, but that doesn’t help us to understand what makes one book different from the other. This is a useful metaphor, although an aspect of it might be quite literal considering what has been shown in the research on linguistic relativity.

There is no part of our lives not touched by language in shaping thought and affect, perception and behavior. Rather than a Chomskyan language organ that we inherit, maybe language is partly passed on through the way epigenetics ties together genes and environment. Even our scientific way of thinking about such issues probably leaves epigenetic markers that might predispose our children and grandchildren to think scientifically as well. What I’m describing in this post is a linguistically-filtered narrative upheld by a specific Jaynesian voice of authorization in our society. Our way of speaking and understanding changes us, even at a biological level. We are incapable of standing back from the very thing about which we speak. In fact, it has been the language of scientific reductionism that has made it so difficult to come to this new insight into human nature, that we are complex beings in a complex world. And that scientific reductionism has been a central component of the entire ruling paradigm, which continues to resist this challenging view.

Epigenetics can last across generations, but it can also be changed in a single lifetime. For centuries, we enforced upon the world, often violently and through language, an ideology of genetic determinism and race realism. The irony is that the creation of this illusion of an inevitable and unalterable social order was only possible through the elite’s control of environmental conditions and hence epigenetic factors. Yet as soon as this enforcement ends, the illusion drifts away like fog dissipated by a strong wind, and through clearer vision the actual landscape is revealed, a patchwork of possible pathways. We are constantly re-created by our inheritance, biological and environmental, and in turn we re-create the social order we find. But with new ways of speaking will come new ways of perceiving and acting in the world, and from that a different kind of society could form.

* * *

The Ending of the Nature vs Nurture Debate
Heritability & Inheritance, Genetics & Epigenetics, Etc
Identically Different: A Scientist Changes His Mind
Epigenetic Memory and the Mind
Inherited Learned Behavior
Epigenetics, the Good and the Bad
Trauma, Embodied and Extended
Facing Shared Trauma and Seeking Hope
Society: Precarious or Persistent?
Plowing the Furrows of the Mind

What If (Almost) Every Gene Affects (Almost) Everything?
by Ed Yong

But Evan Boyle, Yang Li, and Jonathan Pritchard from Stanford University think that this framework doesn’t go far enough.

They note that researchers often assume that those thousands of weakly-acting genetic variants will all cluster together in relevant genes. For example, you might expect that height-associated variants will affect genes that control the growth of bones. Similarly, schizophrenia-associated variants might affect genes that are involved in the nervous system. “There’s been this notion that for every gene that’s involved in a trait, there’d be a story connecting that gene to the trait,” says Pritchard. And he thinks that’s only partly true.

Yes, he says, there will be “core genes” that follow this pattern. They will affect traits in ways that make biological sense. But genes don’t work in isolation. They influence each other in large networks, so that “if a variant changes any one gene, it could change an entire gene network,” says Boyle. He believes that these networks are so thoroughly interconnected that every gene is just a few degrees of separation away from every other. Which means that changes in basically any gene will ripple inwards to affect the core genes for a particular trait.

The Stanford trio call this the “omnigenic model.” In the simplest terms, they’re saying that most genes matter for most things.

More specifically, it means that all the genes that are switched on in a particular type of cell—say, a neuron or a heart muscle cell—are probably involved in almost every complex trait that involves those cells. So, for example, nearly every gene that’s switched on in neurons would play some role in defining a person’s intelligence, or risk of dementia, or propensity to learn. Some of these roles may be starring parts. Others might be mere cameos. But few genes would be left out of the production altogether.

This might explain why the search for genetic variants behind complex traits has been so arduous. For example, a giant study called… er… GIANT looked at the genomes of 250,000 people and identified 700 variants that affect our height. As predicted, each has a tiny effect, raising a person’s stature by just a millimeter. And collectively, they explain just 16 percent of the variation in heights that you see in people of European ancestry.
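To get an intuition for the “few degrees of separation” claim in the excerpt above, here is a rough sketch in which an invented random network stands in for a real gene regulatory network; the node and edge counts are placeholders, not biological data.

```python
import networkx as nx

# Toy "gene regulatory network" standing in for the real thing: the node and
# edge counts below are illustrative only.
n_genes = 2_000
avg_degree = 8  # each gene influencing a handful of others
G = nx.gnm_random_graph(n_genes, n_genes * avg_degree // 2, seed=42)

# Compute path lengths on the largest connected component to be safe.
largest = max(nx.connected_components(G), key=len)
H = G.subgraph(largest)

print(f"genes in the main component: {H.number_of_nodes()}")
print(f"average degrees of separation: {nx.average_shortest_path_length(H):.1f}")
# For sparse random networks this lands near ln(n) / ln(avg_degree), i.e. only
# a few regulatory hops between almost any pair of genes.
```

Even in a sparsely wired network, almost any perturbation is only a handful of steps away from any “core gene,” which is the intuition behind the ripple effect Boyle and Pritchard describe.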

An Enormous Study of the Genes Related to Staying in School
by Ed Yong

Over the past five years, Benjamin has been part of an international team of researchers identifying variations in the human genome that are associated with how many years of education people get. In 2013, after analyzing the DNA of 101,000 people, the team found just three of these genetic variants. In 2016, they identified 71 more after tripling the size of their study.

Now, after scanning the genomes of 1,100,000 people of European descent—one of the largest studies of this kind—they have a much bigger list of 1,271 education-associated genetic variants. The team—which includes Peter Visscher, David Cesarini, James Lee, Robbee Wedow, and Aysu Okbay—also identified hundreds of variants that are associated with math skills and performance on tests of mental abilities.

The team hasn’t discovered “genes for education.” Instead, many of these variants affect genes that are active in the brains of fetuses and newborns. These genes influence the creation of neurons and other brain cells, the chemicals these cells secrete, the way they react to new information, and the way they connect with each other. This biology affects our psychology, which in turn affects how we move through the education system.

This isn’t to say that staying in school is “in the genes.” Each genetic variant has a tiny effect on its own, and even together, they don’t control people’s fates. The team showed this by creating a “polygenic score”—a tool that accounts for variants across a person’s entire genome to predict how much formal education they’re likely to receive. It does a lousy job of predicting the outcome for any specific individual, but it can explain 11 percent of the population-wide variation in years of schooling.

That’s terrible when compared with, say, weather forecasts, which can correctly predict about 95 percent of the variation in day-to-day temperatures.
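As a back-of-the-envelope illustration of what a polygenic score is doing (not the team’s actual method or data), here is a minimal sketch: every variant gets a tiny weight, the weighted allele counts are summed across the genome, and the noise level is tuned by hand so that the score explains roughly 11 percent of the outcome’s variance. The effect sizes and counts are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical people and variants; values are placeholders, not study data.
n_people, n_variants = 1_000, 1_271
genotypes = rng.integers(0, 3, size=(n_people, n_variants))  # 0/1/2 copies of each variant
effects = rng.normal(0, 0.01, n_variants)                    # tiny per-variant weights

polygenic_score = genotypes @ effects  # weighted sum across the whole genome

# Simulate an outcome where the score explains only a small slice of variance,
# as in the article: informative at the population level, lousy for individuals.
outcome = polygenic_score + rng.normal(0, polygenic_score.std() * 2.85, n_people)
r = np.corrcoef(polygenic_score, outcome)[0, 1]
print(f"variance explained (r^2): {r**2:.2f}")  # roughly 0.11 with this noise level
```

The score sorts a population reasonably well while saying very little about any single person, which is the contrast the excerpt draws with weather forecasts.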

Complex grammar of the genomic language
from Science Daily

Each gene has a regulatory region that contains the instructions controlling when and where the gene is expressed. This gene regulatory code is read by proteins called transcription factors that bind to specific ‘DNA words’ and either increase or decrease the expression of the associated gene.

Under the supervision of Professor Jussi Taipale, researchers at Karolinska Institutet have previously identified most of the DNA words recognised by individual transcription factors. However, much like in a natural human language, the DNA words can be joined to form compound words that are read by multiple transcription factors. However, the mechanism by which such compound words are read has not previously been examined. Therefore, in their recent study in Nature, the Taipale team examines the binding preferences of pairs of transcription factors, and systematically maps the compound DNA words they bind to.

Their analysis reveals that the grammar of the genetic code is much more complex than that of even the most complex human languages. Instead of simply joining two words together by deleting a space, the individual words that are joined together in compound DNA words are altered, leading to a large number of completely new words.

“Our study identified many such words, increasing the understanding of how genes are regulated both in normal development and cancer,” says Arttu Jolma. “The results pave the way for cracking the genetic code that controls the expression of genes.”

Dietary Risk Factors for Heart Disease and Cancer

Based on a study of 42 European countries, a recent scientific paper reported that, “the highest CVD [cardiovascular disease] prevalence can be found in countries with the highest carbohydrate consumption, whereas the lowest CVD prevalence is typical of countries with the highest intake of fat and protein.” And that, “The positive effect of low-carbohydrate diets on CVD risk factors (obesity, blood lipids, blood glucose, insulin, blood pressure) is already apparent in short-term clinical trials lasting 3–36 months (58) and low-carbohydrate diets also appear superior to low-fat diets in this regard (36, 37).” Basically, for heart health, this would suggest eating more full-fat dairy, eggs, meat, and fish while eating less starch, sugar, and alcohol. That is to say, follow a low-carb diet. It doesn’t mean eating just any low-carb diet, though, for the focus is on animal foods.

By the way, when you dig into the actual history of the Blue Zones (healthy, long-lived populations), what you find is that their traditional diets included large portions of animal foods, including animal fat (Blue Zones Dietary Myth, Eat Beef and Bacon!, Ancient Greek View on Olive Oil as Part of the Healthy Mediterranean Diet). The longest-lived society in the entire world, in fact, is also the one with the highest meat consumption per capita, even more than Americans. What society is that? Hong Kong. In general, nutrition studies in Asia have long shown that those eating more meat have the best health outcomes. This contradicts earlier Western research, as we’re dealing with how the healthy user effect manifests differently according to culture. But even in the West, the research is increasingly falling in line with the Eastern research, such as with the study I quoted above. And that study is far from being the only one (Are ‘vegetarians’ or ‘carnivores’ healthier?).

This would apply to both meat-eaters and vegetarians, as even vegetarians could put greater emphasis on nutrient-dense animal foods. It is specifically saturated fat and animal proteins that were most strongly associated with better health, both of which could be obtained from dairy and eggs. Vegans, on the other hand, would obviously be deficient in this area. But certain plant foods (tree nuts, olives, citrus fruits, low-glycemic vegetables, and wine, though not distilled beverages) also showed some benefit. As for the plant foods specifically associated with greater risk of heart disease, strokes, etc, these were the ones high in carbohydrates, such as grains. Unsurprisingly, sunflower oil was a risk factor, probably related to seed oils being inflammatory and oxidative (not to mention mutagenic); but oddly, onions were likewise implicated, if only weakly. Other foods showed up in the data, but the above were the most interesting and important.

Such correlations, of course, can’t prove causation. But it fits the accumulating evidence: “These findings strikingly contradict the traditional ‘saturated fat hypothesis’, but in reality, they are compatible with the evidence accumulated from observational studies that points to both high glycaemic index and high glycaemic load (the amount of consumed carbohydrates × their glycaemic index) as important triggers of CVDs. The highest glycaemic indices (GI) out of all basic food sources can be found in potatoes and cereal products, which also have one of the highest food insulin indices (FII) that betray their ability to increase insulin levels.” All of that seems straightforward, according to the overall data from nutrition studies (see: Uffe Ravnskov, Richard Smith, Robert Lustig, Eric Westman, Ben Bikman, Gary Taubes, Nina Teicholz, etc). About saturated fat not being linked to CVD risk, Andrew Mente discusses a meta-analysis he worked on and another meta-analysis by another group of researchers, Siri-Tarino PW et al (New Evidence Reveals that Saturated Fat Does Not Increase the Risk of Cardiovascular Disease). Likewise, many experts no longer see cholesterol as a culprit either (Uffe Ravnskov et al, LDL-C does not cause cardiovascular disease: a comprehensive review of the current literature).
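Since the passage quoted above defines glycaemic load as the carbohydrates consumed multiplied by their glycaemic index, a tiny worked example may help. The food values below are rough textbook approximations, not figures from the study, and I use the conventional division by 100 so that one unit of load corresponds to about a gram of glucose.

```python
# Quick illustration of the glycaemic load idea: GI x grams of carbohydrate
# per serving, scaled by 100. Food values are approximate and illustrative.
foods = {
    # name: (glycaemic index, grams of carbohydrate per typical serving)
    "baked potato": (85, 30),
    "white bread (2 slices)": (75, 30),
    "lentils (1 cup)": (30, 40),
    "walnuts (1 oz)": (15, 4),
}

for name, (gi, carbs_g) in foods.items():
    glycaemic_load = gi * carbs_g / 100
    print(f"{name:24s} GI={gi:3d} carbs={carbs_g:2d}g -> load={glycaemic_load:.1f}")
```

The arithmetic makes the paper’s point visible: starchy staples carry a heavy load per serving, while low-glycemic plant foods like tree nuts barely register.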

Yet one other odd association was discovered: “In fact, our ecological comparison of cancer incidence in 39 European countries (for 2012; (59)) can bring another important argument. Current rates of cancer incidence in Europe are namely the exact geographical opposite of CVDs (see Fig. 28). In sharp contrast to CVDs, cancer correlates with the consumption of animal food (particularly animal fat), alcohol, a high dietary protein quality, high cholesterol levels, high health expenditure, and above average height. These contrasting patterns mirror physiological mechanisms underlying physical growth and the development of cancer and CVDs (60). The best example of this health paradox is again that of French men, who have the lowest rates of CVD mortality in Europe, but the highest rates of cancer incidence. In other words, cancer and CVDs appear to express two extremes of a fundamental metabolic disbalance that is related to factors such as cholesterol and IGF-1 (insulin-like growth factor).”

That is an argument people have made, but it’s largely been theoretical. In response, others have argued the opposite position (High vs Low Protein, Too Much Protein?, Gundry’s Plant Paradox and Saladino’s Carnivory, Carcinogenic Grains). It’s true that, for example, eating meat increases IGF-1, at least temporarily. Then again, eating in general does the same. And on a diet low enough in carbs, it’s been shown in studies that people naturally reduce their calorie intake, which would reduce IGF-1. And for really low-carb, the ketogenic diet is specifically defined as being low in animal protein while higher in fat. A low-carb diet is not necessarily a high-animal-protein diet, especially when combined with intermittent fasting such as OMAD (one meal a day) with long periods of downregulated IGF-1. Also, this study didn’t appear to include plant proteins in the data, so we don’t know if eating lots of soy, hemp protein powder, etc would show similar results; nuts were mentioned in the report as being similar to meat in correlating with CVD health but, as far as I know, were not mentioned in terms of cancer. What would make animal proteins more carcinogenic than plant proteins or, for that matter, plant carbohydrates? The hypothetical mechanism is not clear.

This anomaly would’ve been more interesting if the authors had surveyed the research literature. It’s hard to know what to make of it since other studies have pointed to the opposite conclusion, that the risks of these two are closely linked, rather than being inversely associated: “Epidemiologically, a healthy lifestyle lessens the risk of both cardiovascular disease and cancer, as first found in the Nurses’ Health study” (Lionel Opie, Cancer and cardiovascular disease; see Rob M. Van Dam, Combined impact of lifestyle factors on mortality). “Research has shown there are interrelationships among type 2 diabetes, heart disease, and cancer. These interrelationships may seem coincidental and based only on the fact these conditions share common risk factors. However, research suggests these diseases may relate to one another in multiple ways and that nutrition and lifestyle strategies used to prevent and manage these diseases overlap considerably” (Karen Collins, The Cancer, Diabetes, and Heart Disease Link).

Yet other researchers did find the same inverse relationship: “We herein report that, based on two separate medical records analysis, an inverse correlation between cancer and atherosclerosis” (Matthew Li et al, If It’s Not One Thing, It’s Another). But there was an additional point: “We believe that the anti-inflammatory aspect of cancer’s pan-inflammatory response plays an important role towards atherosclerotic attenuation.” Interesting! In that case, one of the key causal mechanisms to be considered is inflammation. Some diets high in animal proteins would be inflammatory, such as the Standard American Diet, whereas others would be anti-inflammatory. Eliminating seed oils (e.g., sunflower oil) would by itself reduce inflammation. Reducing starches and sugar would help as well. So, is it the meat that increases cancer or is it what the meat is being cooked in or eaten with? That goes back to the healthy and unhealthy user effects.

As this confounding factor is central, we might want to consider the increasingly common view that inflammation is involved in nearly every major disease. “For example, inflammation causes or is a causal link in many health problems or otherwise seen as an indicator of health deterioration (arthritis, depression, schizophrenia, etc), but inflammation itself isn’t the fundamental cause since it is a protective response itself to something else (allergens, leaky gut, etc). Or as yet another example, there is the theory that cholesterol plaque in arteries doesn’t cause the problem but is a response to it, as the cholesterol is essentially forming a scab in seeking to heal injury. Pointing at cholesterol would be like making accusations about firefighters being present at fires” (Coping Mechanisms of Health).

What exacerbates or moderates inflammation will be pivotal to overall health (Essentialism On the Decline), especially the nexus of disease called metabolic syndrome/derangement or what used to be called syndrome X: insulin resistance, diabetes, obesity, heart disease, strokes, etc. In fact, other researchers point directly to inflammation as being a common factor of CVD and cancer: “Although commonly thought of as two separate disease entities, CVD and cancer possess various similarities and possible interactions, including a number of similar risk factors (e.g. obesity, diabetes), suggesting a shared biology for which there is emerging evidence. While chronic inflammation is an indispensible feature of the pathogenesis and progression of both CVD and cancer, additional mechanisms can be found at their intersection” (Ryan J. Koene et al, Shared Risk Factors in Cardiovascular Disease and Cancer). But how inflammation manifests as disease, whether as CVD or cancer or as arthritis, depression, Alzheimer’s, etc, might depend on the specific conditions.

This is the major downfall of nutrition studies, as the experts in the field find themselves hopelessly mired in a replication crisis. There is too much contradictory research and, when much of the research has been repeated, it simply did not replicate. That is to say much of it is simply wrong or misinterpreted. And as few have attempted to replicate much of it, we aren’t entirely sure what is valid and what is not. That further problematizes meta-analyses, despite how potentially powerful that tool can be when working with quality research. The study I’ve been discussing here was an ecological study, and that has its limitations. The researchers couldn’t disentangle all the major confounding factors, much less control for them in the first place, as they were working with data across decades that came from separate countries. Even so, it’s interesting and useful info to consider. And keep in mind that almost all official dietary recommendations are based on observational (associative, correlative, epidemiological) studies with far fewer controls. This is the nature of the entire field of nutrition studies, as long-term randomized and controlled studies on humans are next to impossible to do.

So, as always, qualifications must be made. The study’s authors state that, “In items of smaller importance (e.g. distilled beverages, sunflower oil, onions), the results are less persuasive and their interpretation is not always easy and straightforward. Similar to observational studies, our ecological study reflects ‘real-world data’ and cannot always separate mutual interactions among the examined variables. Therefore, the reliance on bivariate correlations could lead to misleading conclusions. However, some of these findings can be used as a starting point of medical hypotheses, whose validity can be investigated in controlled clinical trials.” Nonetheless, “The reasonably high accuracy of the input data, combined with some extremely high correlations, together substantially increase the likelihood of true causal relationships, especially when the results concern principal components of food with high consumption rates, and when they can be supported by other sources.”

This data is meaningful in offering strong supporting evidence. The finding about animal foods and starchy foods is the main takeaway, however tentative the conclusion may be for real-world application, at least when taking this evidence in isolation. But the inverse correlation of CVD risk and cancer risk stands out and probably indicates confounders across populations, and that would be fertile territory for other researchers to explore. The main importance of this study is less in the specifics and more in how it further challenges the broad paradigm that has dominated nutrition studies for the past half century or so. The most basic point is that the diet-heart hypothesis simply doesn’t make sense of the evidence and it never really did. When the hypothesis was first argued, heart disease was going up precisely at the moment saturated fat intake was going down, since seed oils had replaced lard as the main fat source in the decades prior. Interestingly, lard has been a common denominator among most long-lived populations, from the Okinawans to Rosetans (Ancient Greek View on Olive Oil as Part of the Healthy Mediterranean Diet, Blue Zones Dietary Myth).

This study is further support for a new emerging understanding, as seen with the American Heart Association backing off from its earlier position (Slow, Quiet, and Reluctant Changes to Official Dietary Guidelines). Fat is not the enemy of humanity, as seen with the high-fat ketogenic diet where fat is used as the primary fuel instead of carbohydrates (Ketogenic Diet and Neurocognitive Health, The Ketogenic Miracle Cure, The Agricultural Mind). In fact, we wouldn’t be here without fat, as it is the evolutionary and physiological norm, specifically in terms of low-carb (Is Ketosis Normal?, “Is keto safe for kids?”). Indeed, it used to be common knowledge that too many carbohydrates are unhealthy (American Heart Association’s “Fat and Cholesterol Counter” (1991)). Consensus on this shifted a half century ago, the last time low-carb diets were still part of mainstream thought, and now we are shifting back the other way. The old consensus will be new again.

* * *

Carbohydrates, not animal fats, linked to heart disease across 42 European countries
by Keir Watson

Key findings

  • Cholesterol levels were tightly correlated to the consumption of animal fats and proteins – Countries consuming more fat and protein from animal sources had higher incidence of raised cholesterol
  • Raised cholesterol correlated negatively with CVD risk – Countries with higher levels of raised cholesterol had fewer cases of CVD deaths and a lower incidence of CVD risk factors
  • Carbohydrates correlated positively with CVD risk – the more carbohydrates consumed (and especially those with high GI such as starches) the more CVD
  • Fat and Protein correlated negatively with CVD risk – Countries consuming more fat and protein from animal and plant sources had less CVD. The authors speculate that this is because increasing fat and protein in the diet generally displaces carbohydrates.

Food consumption and the actual statistics of cardiovascular diseases: an epidemiological comparison of 42 European countries
Pavel Grasgruber,* Martin Sebera, Eduard Hrazdira, Sylva Hrebickova, and Jan Cacek

Results

We found exceptionally strong relationships between some of the examined factors, the highest being a correlation between raised cholesterol in men and the combined consumption of animal fat and animal protein (r=0.92, p<0.001). The most significant dietary correlate of low CVD risk was high total fat and animal protein consumption. Additional statistical analyses further highlighted citrus fruits, high-fat dairy (cheese) and tree nuts. Among other non-dietary factors, health expenditure showed by far the highest correlation coefficients. The major correlate of high CVD risk was the proportion of energy from carbohydrates and alcohol, or from potato and cereal carbohydrates. Similar patterns were observed between food consumption and CVD statistics from the period 1980–2000, which shows that these relationships are stable over time. However, we found striking discrepancies in men’s CVD statistics from 1980 and 1990, which can probably explain the origin of the ‘saturated fat hypothesis’ that influenced public health policies in the following decades.

Conclusion

Our results do not support the association between CVDs and saturated fat, which is still contained in official dietary guidelines. Instead, they agree with data accumulated from recent studies that link CVD risk with the high glycaemic index/load of carbohydrate-based diets. In the absence of any scientific evidence connecting saturated fat with CVDs, these findings show that current dietary recommendations regarding CVDs should be seriously reconsidered. […]

Irrespective of the possible limitations of the ecological study design, the undisputable finding of our paper is the fact that the highest CVD prevalence can be found in countries with the highest carbohydrate consumption, whereas the lowest CVD prevalence is typical of countries with the highest intake of fat and protein. The polarity between these geographical patterns is striking. At the same time, it is important to emphasise that we are dealing with the most essential components of the everyday diet.

Health expenditure – the main confounder in this study – is clearly related to CVD mortality, but its influence is not apparent in the case of raised blood pressure or blood glucose, which depend on the individual lifestyle. It is also difficult to imagine that health expenditure would be able to completely reverse the connection between nutrition and all the selected CVD indicators. Therefore, the strong ecological relationship between CVD prevalence and carbohydrate consumption is a serious challenge to the current concepts of the aetiology of CVD.

The positive effect of low-carbohydrate diets on CVD risk factors (obesity, blood lipids, blood glucose, insulin, blood pressure) is already apparent in short-term clinical trials lasting 3–36 months (58) and low-carbohydrate diets also appear superior to low-fat diets in this regard (36, 37). However, these findings are still not reflected by official dietary recommendations that continue to perpetuate the unproven connection between saturated fat and CVDs (25). Understandably, because of the chronic nature of CVDs, the evidence for the connection between carbohydrates and CVD events/mortality comes mainly from longitudinal observational studies and there is a lack of long-term clinical trials that would provide definitive proof of such a connection. Therefore, our data based on long-term statistics of food consumption can be important for the direction of future research.

In fact, our ecological comparison of cancer incidence in 39 European countries (for 2012; (59)) can bring another important argument. Current rates of cancer incidence in Europe are namely the exact geographical opposite of CVDs (see Fig. 28). In sharp contrast to CVDs, cancer correlates with the consumption of animal food (particularly animal fat), alcohol, a high dietary protein quality, high cholesterol levels, high health expenditure, and above average height. These contrasting patterns mirror physiological mechanisms underlying physical growth and the development of cancer and CVDs (60). The best example of this health paradox is again that of French men, who have the lowest rates of CVD mortality in Europe, but the highest rates of cancer incidence. In other words, cancer and CVDs appear to express two extremes of a fundamental metabolic disbalance that is related to factors such as cholesterol and IGF-1 (insulin-like growth factor).

Besides total fat and protein consumption, the most likely preventive factors emerging in our study include fruits (particularly citrus fruits), wine, high-fat dairy products (especially cheese), sources of plant fat (tree nuts, olives), and potentially even vegetables and other low-glycaemic plant sources, provided that they substitute high-glycaemic foods. Many of these foodstuffs are the traditional components of the ‘Mediterranean diet’, which again strengthens the meaningfulness of our results. The factor analysis (Factor 3) also highlighted coffee, soybean oil and fish & seafood, but except for the fish & seafood, the rationale of this finding is less clear, because coffee is strongly associated with fruit consumption and soybean oil is used for various culinary purposes. Still, some support for the preventive role of coffee does exist (61) and hence, this observation should not be disregarded.

Similar to the “Mediterranean diet”, the Dietary Approaches to Stop Hypertension (DASH) diet, which is based mainly on fruits, vegetables, and low-fat dairy, also proved to be quite effective (62). However, our data indicate that the consumption of low-fat dairy may not be an optimal strategy. Considering the unreliability of observational studies highlighting low-fat dairy and the existence of strong bias regarding the intake of saturated fat, the health effect of various dairy products should be carefully tested in controlled clinical studies. In any case, our findings indicate that citrus fruits, high-fat dairy (such as cheese) and tree nuts (walnuts) constitute the most promising components of a prevention diet.

Among other potential triggers of CVDs, we should especially stress distilled beverages, which consistently correlate with CVD risk, in the absence of any relationship with health expenditure. The possible role of sunflower oil and onions is much less clear. Although sunflower oil consistently correlates with stroke mortality in the historical comparison and creates very productive regression models with some correlates of the actual CVD mortality, it is possible that both these food items mirror an environment that is deficient in some important factors correlating negatively with CVD risk.

A very important case is that of cereals because whole grain cereals are often propagated as CVD prevention. It is true that whole grain cereals are usually characterised by lower GI and FII values than refined cereals, and their benefits have been documented in numerous observational studies (63), but their consumption is also tied with a healthy lifestyle. All the available clinical trials have been of short duration and have produced inconsistent results indicating that the possible benefits are related to the substitution of refined cereals for whole grain cereals, and not because of whole grain cereals per se (64, 65). Our study cannot differentiate between refined and unrefined cereals, but both are highly concentrated sources of carbohydrates (~70–75% weight, ~80–90% energy) and cereals also make up ~50% of CA energy intake in general. To use an analogy with smoking, a switch from unfiltered to filtered cigarettes can reduce health risks, but this fact does not mean that filtered cigarettes should be propagated as part of a healthy lifestyle. In fact, even some unrefined cereals [such as the ‘whole-meal bread’ tested by Bao et al. (32)] have high glycaemic and insulin indices, and the values are often unpredictable. Therefore, in the light of the growing evidence pointing to the negative role of carbohydrates, and considering the lack of any association between saturated fat and CVDs, we are convinced that the current recommendations regarding diet and CVDs should be seriously reconsidered.

The Madness of Drugs

There is always a question of what is making the world so crazy. And it’s not exactly a new question. “Cancer, like insanity,” Stanislas Tanchou wrote in 1843, “seems to increase with the progress of civilization.” Or go back earlier to 1809, the year Thomas Paine died and Abraham Lincoln was born, when John Haslam noted how common this worry about civilization going off the rails had become: “The alarming increase in Insanity, as might naturally be expected, has incited many persons to an investigation of this disease.” (For background, see: The Crisis of Identity.)

Was it changes of diet with the introduction of sugar, the first surplus yields of wheat, and a high-carb diet in general? If not the food itself, could it be food additives such as glutamate and propionate? Was it the pollution from industrialization, such as the chemicals in our food supply from industrial agriculture and industrial production, the pollution in the air we breathe and the water we drink, and the spikes of toxic exposure as lead was introduced into new products? Was it urbanization, with 97% of the world’s population still rural at the beginning of the 19th century and the majority of Westerners having moved to cities a few generations later? Or was it the consequence of urbanization and industrialization as seen with increasing inequality of wealth, resources, and power that put the entire society under strain?

I’ve entertained all those possibilities over the years. And I’m of the opinion that they’re all contributing factors. Strong evidence can be shown for each one. But modernity saw another change as well. It was the era of science, and that shaped medicine, especially drugs. In general, drugs became more common, whether medicinally or recreationally, even something so simple as the colonial trade of sugar and tobacco. Then later there were hardcore drugs like opium and cocaine that became increasingly common over the 19th century.

The 20th century, of course, pushed this to a whole new level. Drugs were everywhere. Consider the keto diet, which in the 1920s showed promise as a treatment or even cure for epileptic seizures, but shortly after that the drug companies came up with medications and the keto research dried up, even though those medications never came close to being as effective and some of them caused permanent harm to the patient, something rarely admitted by doctors (see the story of Charlie Abrahams, son of the Hollywood producer Jim Abrahams). Drugs seemed more scientific, and modern humanity had fallen under the thrall of scientism. As the DuPont advertising slogan went, “Better Living Through Chemistry.”

It was irrelevant that most of the drugs never lived up to the hype, as the hype was endless. As research has shown, the placebo effect makes each new pharmaceutical seem effective, until shortly afterward the drug companies invent another drug and, unsurprisingly, the old drug stops showing the benefits it did previously. Our hopes and fantasies are projected onto the next equivalent of a sugar pill and the placebo effect just goes on and on, as do the profits.

That isn’t to dismiss the actual advancements of science. But we now know that even the drugs that are beneficial to some people, from antidepressants to statins, are overprescribed and may be harming more people than they are helping. Part of this is because our scientific knowledge has been lacking, sometimes actively suppressed. It turns out that depression is not a neurotransmitter deficiency, nor is cholesterol simply bad for the body. Drugs that mess with the body in fundamental ways often have severe side effects, and the drug companies have gone to extreme lengths to hide the consequences, as their profit model depends upon an ignorant and unquestioning population of citizen-consumers.

This is not a minor issue. The evidence points to statins making some people irritable to the point of violence, and there is a statistically significant increase of violent death among statin users. That is on top of an increase of neurocognitive decline in general, as the brain requires cholesterol to function normally. Or consider how some painkillers might also be disrupting the physiological mechanisms underlying empathy, and so heavy regular usage might contribute to sociopathy. It’s unsurprising that psychiatric medications can change behavior and personality, but no one expects such dire consequences when going to the drugstore to pick up their medication for asthma or whatever.

We are living in an era when patients, in many cases, can’t trust their own doctors. There is no financial incentive to honestly inform patients so that they can make rational choices based on balancing the benefits and harms. We know the immense influence drug companies have over doctors through legal forms of bribery, from paid vacations to free meals and supplies. It’s related to why not only patients but also most doctors are kept in the dark. It just so happens that drug company funding of medical school curricula and continuing education for doctors doesn’t include education about effective dietary and lifestyle changes that are inexpensive or even free (i.e., no profit). This is why most doctors fail a basic test of nutritional knowledge. That needs to change.

This problem is just one among many. As I pointed out, there are many factors throwing gasoline on the fire. Whatever the causes, the diseases of civilization, including but not limited to mental illness, are worsening with every generation, and this is a centuries-old trend. It’s interesting that this has happened simultaneously with the rise of science. It was the hubris of the scientific mindset (and related technological industrialization) that has caused much of the harm, but it is also because of science that we are beginning to understand the harm we’ve done and what exactly are the causal mechanisms behind it. We must demand that science be turned into a tool not of private interest but of public good.

* * *

The medications that change who we are
by Zaria Gorvett

They’ve been linked to road rage, pathological gambling, and complicated acts of fraud. Some make us less neurotic, and others may even shape our social relationships. It turns out many ordinary medications don’t just affect our bodies – they affect our brains. Why? And should there be warnings on packets? […]

According to Golomb, this is typical – in her experience, most patients struggle to recognise their own behavioural changes, let alone connect them to their medication. In some instances, the realisation comes too late: the researcher was contacted by the families of a number of people, including an internationally renowned scientist and a former editor of a legal publication, who took their own lives.

We’re all familiar with the mind-bending properties of psychedelic drugs – but it turns out ordinary medications can be just as potent. From paracetamol (known as acetaminophen in the US) to antihistamines, statins, asthma medications and antidepressants, there’s emerging evidence that they can make us impulsive, angry, or restless, diminish our empathy for strangers, and even manipulate fundamental aspects of our personalities, such as how neurotic we are.

In most people, these changes are extremely subtle. But in some they can also be dramatic. […]

But Golomb’s most unsettling discovery isn’t so much the impact that ordinary drugs can have on who we are – it’s the lack of interest in uncovering it. “There’s much more of an emphasis on things that doctors can easily measure,” she says, explaining that, for a long time, research into the side-effects of statins was all focused on the muscles and liver, because any problems in these organs can be detected using standard blood tests.

This is something that Dominik Mischkowski, a pain researcher from Ohio University, has also noticed. “There is a remarkable gap in the research actually, when it comes to the effects of medication on personality and behaviour,” he says. “We know a lot about the physiological effects of these drugs – whether they have physical side effects or not, you know. But we don’t understand how they influence human behaviour.” […]

In fact, DeRubeis, Golomb and Mischkowski are all of the opinion that the drugs they’re studying will continue to be used, regardless of their potential psychological side-effects. “We are human beings, you know,” says Mischkowski. “We take a lot of stuff that is not necessarily always good in every circumstance. I always use the example of alcohol, because it’s also a painkiller, like paracetamol. We take it because we feel that it has a benefit for us, and it’s OK as long as you take it in the right circumstances and you don’t consume too much.”

But in order to minimise any undesirable effects and get the most out of the staggering quantities of medications that we all take each day, Mischkowski reiterates that we need to know more. Because at the moment, he says, how they are affecting the behaviour of individuals – and even entire societies – is largely a mystery.

Multiple Sclerosis and Carnivore Diet

Dr. Terry Wahls reversed the symptoms of multiple sclerosis in herself and, through a clinical study, in others. Her Wahls Protocol includes a ketogenic diet with nutrient-dense animal foods, but it also requires massive loads of vegetables. Yet others on the carnivore diet, which typically also means ketosis, have experienced similar improvements of multiple sclerosis symptoms, either reversal or stabilization.

Maybe all those vegetables were irrelevant. The only other factors would be either the ketosis or all the animal-based nutrition. There would be an easy way to test this. Do a study with multiple groups: 1) ketogenic omnivores (Wahls Protocol), 2) ketogenic vegans, 3) ketogenic carnivores, and 4) non-ketogenic carnivores (add in enough dairy to remain out of ketosis). Control for all other main factors. Find out the results.

My suspicion is that it’s the combination of ketosis and animal-based nutrition, together with the elimination of plant anti-nutrients, although any of these alone might show some benefits. Right now, almost all we have to go by are the many people experimenting. The studies Wahls is doing are useful (she is on her second study), but that research is only preliminary.

As further evidence of the anecdotal variety, Mikhaila Peterson has successfully treated her own autoimmune disorder. It’s not multiple sclerosis, but all autoimmune disorders have some similarities. In going ketogenic, inflammation is being eliminated. And in going carnivore, plant anti-nutrients are being eliminated. These two factors also promote other things as well, but simply what they eliminate might be most key. Inflammation, in particular, is understood in its connection to autoimmune disorders.

Jaynesian Linguistic Relativity

  • “All of these concrete metaphors increase enormously our powers of perception of the world about us and our understanding of it, and literally create new objects. Indeed, language is an organ of perception, not simply a means of communication.”
  • “The lexicon of language, then, is a finite set of terms that by metaphor is able to stretch out over an infinite set of circumstances, even to creating new circumstances thereby.”
  • “The bicameral mind with its controlling gods was evolved as a final stage of the evolution of language. And in this development lies the origin of civilization.”
  • “For if consciousness is based on language, then it follows that it is of much more recent origin than has been heretofore supposed. Consciousness come after language! The implications of such a position are extremely serious.”
  • “But there’s no doubt about it, Whorfian hypothesis is true for some of the more abstract concepts we have. Certainly, in that sense, I would certainly be a Whorfian. But I don’t think Whorf went far enough.”
    ~Julian Jaynes

Julian Jaynes, in The Origin of Consciousness in the Breakdown of the Bicameral Mind, makes statements that obviously express a view of linguistic relativity, also known as the Sapir-Whorf hypothesis or Whorfian hypothesis, whether or not he intended the related strong form of linguistic determinism, although the above quotes do indicate the strong form. Edward Sapir and Benjamin Lee Whorf, by the way, weren’t necessarily arguing for the determinism that was later ascribed to them or at least to Whorf (Straw Men in the Linguistic Imaginary). Yet none of Jaynes’ writings ever directly refer to this other field of study or the main thinkers involved, even though it is one of the closest fields to his own hypothesis on language and metaphor in relation to perception, cognition, and behavior. It’s also rare to see this connection come up in the writings of any Jaynesian scholars. There apparently isn’t even a single mention, even in passing, in the discussion forum at the official site of the Julian Jaynes Society (no search results were found for: Edward Sapir, Benjamin Lee Whorf, Sapir-Whorf, Whorfian, Whorfianism, linguistic relativity, linguistic relativism, or linguistic determinism), although I found a few writings elsewhere that touch upon this area of overlap (see end of post). Other than by me, the topic finally came up when someone linked to an article about linguistic relativity in the Facebook group dedicated to his book (also see below).

Limiting ourselves to published work, the one and only significant exception I’ve found is a passing mention from Brian J. McVeigh in his book The “Other” Psychology of Julian Jaynes: “Also, since no simple causal relation between language and interiorized mentation exists, an examination of how a lexicon shapes psychology is not necessarily a Sapir-Whorfian application of linguistic theory.” But since Sapir and Whorf didn’t claim a simple causal relation, this leads me to suspect that McVeigh isn’t overly familiar with their scholarship or widely read in the more recent research. But if I’m misunderstanding him and he has written more fully elsewhere about this, I’d love to read it (owning some of his books, I do enjoy and highly respect McVeigh’s work, as I might consider him the leading Jaynesian scholar). When I brought this up in a Julian Jaynes Facebook group, Paul Otteson responded that, “my take on linguistic relativism and determinism is that they are obvious.” But obviously, it isn’t obvious to many others, including some Jaynesian scholars who are academic experts on linguistic analysis of texts and culture, as is the case with McVeigh. “For many of us,” Jeremy Lent wrote in The Patterning Instinct, “the idea that the language we speak affects how we think might seem self-evident, hardly requiring a great deal of scientific proof. However, for decades, the orthodoxy of academia has held categorically that the language a person speaks has no effect on the way they think. To suggest otherwise could land a linguist in such trouble that she risked her career. How did mainstream academic thinking get itself in such a straitjacket?” (quoted in Straw Men in the Linguistic Imaginary).

Jaynes focused heavily on how metaphors shape an experience of interiorized and narratized space, i.e., a specific way of perceiving space and time in relation to identity. More than relevant is the fact that, in linguistic relativity research, how language shapes spatial and temporal perception has also been a key area of study. Linguistic relativity has gained compelling evidence in recent decades. And several great books have been written exploring and summarizing the evidence: Vyvyan Evans’s The Language Myth, Guy Deutscher’s Through the Language Glass, Benjamin K. Bergen’s Louder Than Words, Aneta Pavlenko’s The Bilingual Mind, Jeremy Lent’s The Patterning Instinct, Caleb Everett’s Linguistic Relativity and Numbers and the Making of Us (maybe include Daniel L. Everett’s Dark Matter of the Mind, Language: The Cultural Tool, and How Language Began). This would be a fruitful area for Jaynesian thought, not to mention it would help it to break out into wider scholarly interest. The near silence is surprising because of the natural affinity between the two groups of thinkers. (Maybe I’m missing something. Does anyone know of a Jaynesian scholar exploring linguistic relativity, a linguistic relativity scholar studying Jaynesianism, or any similar crossover?)

What makes it odd to me is that Jaynes was clearly influenced by linguistic relativity, if not directly then indirectly. Franz Boas’ theories on language and culture shaped linguistic relativists along with the thinkers read by Jaynes, specifically Ruth Benedict. Jaynes was caught up in a web of influences that brought him into the sphere of linguistic relativity and related anthropological thought, along with philology, much of it going back to Boas: “Julian Jaynes had written about the comparison of shame and guilt cultures. He was influenced in this by E. R. Dodds (and Bruno Snell). Dodds in turn based some of his own thinking about the Greeks on the work of Ruth Benedict, who originated the shame and guilt culture comparison in her writings on Japan and the United States. Benedict, like Margaret Mead, had been taught by Franz Boas. Boas developed some of the early anthropological thinking that saw societies as distinct cultures” (My Preoccupied Mind: Blogging and Research).

Among these thinkers, there is an interesting Jungian influence as well: “Boas founded a school of thought about the primacy of culture, the first major challenge to race realism and eugenics. He gave the anthropology field new direction and inspired a generation of anthropologists. This was the same era during which Jung was formulating his own views. As with Jung before him, Jaynes drew upon the work of anthropologists. Both also influenced anthropologists, but Jung’s influence of course came earlier. Even though some of these early anthropologists were wary of Jungian psychology, such as archetypes and collective unconscious, they saw personality typology as a revolutionary framework (those influenced also included the likes of Edward Sapir and Benjamin Lee Whorf, both coming out of the Boasian lineage, with Boas maybe being the source of introducing linguistic relativity into American thought). Through personality types, it was possible to begin understanding what fundamentally made one mind different from another, a necessary factor in distinguishing one culture from another” (The Psychology and Anthropology of Consciousness). The following is from Jung and the Making of Modern Psychology, Sonu Shamdasani (Kindle Locations 4706-4718):

“The impact of Jung’s typology on Ruth Benedict may be found in her concept of Apollonian and Dionysian culture patterns which she first put forward in 1928 in “Psychological Types in the cultures of the Southwest,” and subsequently elaborated in Patterns of Culture. Mead recalled that their conversations on this topic had in part been shaped by Sapir and Goldenweiser’s discussion of Jung’s typology in Toronto in 1924 as well as by Seligman’s article cited above (1959, 207). In Patterns of Culture, Benedict discussed Wilhelm Worringer’s typification of empathy and abstraction, Oswald Spengler’s of the Apollonian and the Faustian and Friedrich Nietzsche’s of the Apollonian and the Dionysian. Conspicuously, she failed to cite Jung explicitly, though while criticizing Spengler, she noted that “It is quite as convincing to characterize our cultural type as thoroughly extravert … as it is to characterize it as Faustian” (1934, 54-55). One gets the impression that Benedict was attempting to distance herself from Jung, despite drawing some inspiration from his Psychological Types.

“In her autobiography, Mead recalls that in the period that led up to her Sex and Temperament, she had a great deal of discussion with Gregory Bateson concerning the possibility that aside from sex difference, there were other types of innate differences which “cut across sex lines” (1973, 216). She stated that: “In my own thinking I drew on the work of Jung, especially his fourfold scheme for grouping human beings as psychological types, each related to the others in a complementary way” (217). Yet in her published work, Mead omitted to cite Jung’s work. A possible explanation for the absence of citation of Jung by Benedict and Mead, despite the influence of his typological model, was that they were developing diametrically opposed concepts of culture and its relation to the personality to Jung’s. Ironically, it is arguably through such indirect and half-acknowledged conduits that Jung’s work came to have its greatest impact upon modern anthropology and concepts of culture. This short account of some anthropological responses to Jung may serve to indicate that when Jung’s work was engaged with by the academic community, it was taken to quite different destinations, and underwent a sea change.”

As part of the intellectual world that shaped Jaynes’ thought, this Jungian line of influence feeds into the Boasian line of influence. But interestingly, in the Jaynesian sphere, the Jungian side of things is the least obvious component. Certainly, Jaynes didn’t see the connection, despite Jung’s Jaynesian-like comments about consciousness long before Jaynes wrote about it in 1976. Jung, writing in 1960, stated that, “There is in my opinion no tenable argument against the hypothesis that psychic functions which today seem conscious to us were once unconscious and yet worked as if they were conscious” (On the Nature of the Psyche; see post). And four years later he wrote that, “Consciousness is a very recent acquisition of nature” (Man and His Symbols; see post). In distancing himself from Jung, Jaynes was somewhat critical, though not dismissive: “Jung had many insights indeed, but the idea of the collective unconscious and of the archetypes has always seemed to me to be based on the inheritance of acquired characteristics, a notion not accepted by biologists or psychologists today” (quoted by Philip Ardery in “Ramifications of Julian Jaynes’s theory of consciousness for traditional general semantics”). His criticism was inaccurate, though, since Jung’s actual position was that, “It is not, therefore, a question of inherited ideas but of inherited possibilities of ideas” (What is the Blank Slate of the Mind?). So, in actuality, Jaynes’ view on this point appears to be right in line with that of Jung. This further emphasizes the unacknowledged Jungian influence.

I never see this kind of thing come up in Jaynesian scholarship. It makes me wonder how many Jaynesian scholars recognize the intellectual debt they owe to Boas and his students, including Sapir and Whorf. More than a half century before Jaynes published his book, a new way of thinking was paving the way. Jaynes didn’t come out of nowhere. Then again, neither did Boas. There are earlier linguistic philosophers such as Wilhelm von Humboldt — from On Language (1836):

“Via the latter, qua character of a speech-sound, a pervasive analogy necessarily prevails in the same language; and since a like subjectivity also affects language in the same nation, there resides in every language a characteristic world-view. As the individual sound stands between man and the object, so the entire language steps in between him and the nature that operates, both inwardly and outwardly, upon him. He surrounds himself with a world of sounds, so as to take up and process within himself the world of objects. These expressions in no way outstrip the measure of the simple truth. Man lives primarily with objects, indeed, since feeling and acting in him depend on his presentations, he actually does so exclusively, as language presents them to him. By the same act whereby he spins language out of himself, he spins himself into it, and every language draws about the people that possesses it a circle whence it is possible to exit only by stepping over at once into the circle of another one. To learn a foreign language should therefore be to acquire a new standpoint in the world-view hitherto possessed, and in fact to a certain extent is so, since every language contains the whole conceptual fabric and mode of presentation of a portion of mankind.”

The development of thought over time is always fascinating. But schools of thought too easily become narrow and insular over time, forgetting their own roots and becoming isolated from related areas of study. The Boasian lineage and Jaynesian theory have ever since been developing separately but in parallel. Maybe it’s time for them to merge back together or, at the very least, cross-pollinate.

To be fair, linguistic relativity has come up ever so slightly elsewhere in Jaynesian scholarship. As a suggestion, Marcel Kuijsten pointed to “John Limber’s chapter “Language and Consciousness” in Reflections on the Dawn of Consciousness”. I looked at that Limber piece. He does discuss this broad area of study involving language, thought, and consciousness. But as far as I can tell (based on doing an ebook search for relevant terms), he nowhere discusses Boas, Sapir, or Whorf. At best, he makes an indirect and brief mention of “pre-Whorfian advocates” without even bothering to mention, much less detail, Whorfian advocates or where they came from and how there is a line of influence from Boas to Jaynes. It’s an even more passing comment than McVeigh’s. It is found in note 82: “For reviews of non-Jaynesian ideas on inner speech and consciousness, see Sokolov (1972), Kucaj (1982), Dennett (1991), Nørretranders (1998), and Morin (2005). Vygotsky, of course, was somewhat of a Marxist and probably took something from Marx’s (1859) often cited “It is not the consciousness of men that determines their being, but, on the contrary, their social being that determines their consciousness.” Vygotsky was also influenced by various pre-Whorfian advocates of linguistic relativity. I say “Vygotsky as inspiration” because I have not as yet found much of substance in any of his writings on consciousness beyond that of the Marx quote above. (Several of his papers are available online at http://www.marxists.org.)” So, apparently in the entire Jaynesian literature and commentary, there are only two minuscule acknowledgements that linguistic relativists exist at all (nor much reference to similar thinkers like the Marxist Lev Vygotsky; or consider Marx’s theory of species-being; also note the omission of Alfred Korzybski’s General Semantics). Considering that Jaynes was making an argument for linguistic relativity, possibly going so far as linguistic determinism, whether or not he knew it and thought about it that way, this oversight really gets me thinking.

That was where my thought ended, until serendipity brought forth a third example. It is in a passage from one of McVeigh’s more recent books, Discussions with Julian Jaynes (2016). In the June 5, 1991 session of their talks, almost a couple of decades after the publication of his book, Jaynes spoke to McVeigh about this:
McVeigh: “The first thing I want to ask you about is language. Because in our book, language plays an important role, specifically metaphors. And what would you say to those who would accuse you of being too Whorfian? Or how would you handle the charge that you’re saying it is language that determines thought in your book? Or would you agree with the statement, “As consciousness developed, language changed to reflect this transformation?” So, in other words, how do you handle this [type of] old question in linguistics, “Which comes first, the chicken or the egg?””
Jaynes: “Well, you see Whorf applies to some things and doesn’t apply to others, and it’s being carried to a caricature state when somebody, let’s say, shows [a people perceives colors] and they don’t have words for colors. That’s supposed to disprove Whorf. That’s absolutely ridiculous. Because after all, animals, fish have very good color vision. But there’s no doubt about it, Whorfian hypothesis is true for some of the more abstract concepts we have. Certainly, in that sense, I would certainly be a Whorfian. But I don’t think Whorf went far enough. That’s what I used to say. I’m trying to think of the way I would exactly say it. I don’t know. For example, his discussion of time I think it is very appropriate. Indeed, there wouldn’t be such a thing as time without consciousness. No concept of it.”
Jaynes bluntly stated, “I would certainly be a Whorfian.” He said this in response to a direct question McVeigh asked him about being accused of being a Whorfian. There was no dancing around it. Jaynes apparently thought it was obvious enough to not require further explanation. That makes it all the more odd that McVeigh, a Jaynesian scholar who has spent his career studying language, has never since pointed out this intriguing detail. After all, if Jaynes was a Whorfian by his own admission and McVeigh is a Jaynesian scholar, then doesn’t it automatically follow that McVeigh, in studying Jaynesianism, is studying Whorfianism?

That still leaves plenty of room for interpretation. It’s not clear what Jaynes’ full position on the Sapir-Whorf hypothesis was. Remarkably, he not only identified as a Whorfian but then suggested that he went beyond Whorf. I don’t know what that means, but it does get one wondering. Whorf wasn’t offering any coherent and overarching explanatory theory in the way that Jaynes did. Rather, the Sapir-Whorf hypothesis is more basic in simply suggesting language can influence and maybe sometimes determine thought, perception, and behavior. That is more of a general framework of research that potentially could apply to a wide variety of theories. I’d argue it not only partly but entirely applies to Jaynes’ theory as well — as neither Sapir nor Whorf, as far as I know, were making any assertions for or against the role of language in the formation of consciousness. Certainly, Jaynesian consciousness, or the bicameral mind before it, would not be precluded by the Sapir-Whorf linguistic paradigm. Specifically in identifying as Whorfian, Jaynes agrees that the “Whorfian hypothesis is true for some of the more abstract concepts we have.” What does he mean by ‘abstract’ in this context? I don’t recall any of the scholarly and popular texts on linguistic relativity ever describing the power of language as being limited to abstractions. Then again, neither did Jaynes directly state it is limited in this fashion, even as he does not elaborate on any other applications. However, McVeigh interpreted his words as implying such a limitation — in the introduction of the book, McVeigh wrote that “he argues that the relation between words and concepts is not one of simple causation and that the Whorfian hypothesis only works for certain abstract notions. In other words, the relation between language and conscious interiority is subtle and complex.” Well, I’m no expert on the writings of Whorf, but my sense is that Whorf would not necessarily disagree with that assessment. One of the best sources of evidence for such subtlety and complexity might be found in linguistic relativity, a growing field of research. It is the area of overlap that remains terra incognita. I’m not sure anyone knows the details of how linguistic relativity might apply to Jaynesian consciousness as metaphorical mindspace, nor how it might apply the other way around.

* * *

Though it has been reworked a bit, I wrote much of the above about a year ago in the Facebook group Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind. And I just now shared a variation of my thoughts in another post to the same group. This link between the Jaynesian and the Whorfian (along with the Boasian, Marxian, Jungian, etc.) has been on my mind for a while, but it was hard to write about since few others have written about it. There is a fairly large literature of Jaynesian scholarship and an even more vast literature of linguistic relativity research. Yet even passing references to the two together are rare. Below are the few examples I could find on the entire world wide web.

Language and thought: A Jaynesian Perspective
by Rachel Williams, Minds and Brains

The Future of Philosophy of Mind
by Rachel Williams, Minds and Brains

Recursion, Linguistic Evolution, Consciousness, the Sapir-Whorf Hypothesis, and I.Q.
by Gary Williams, New Amsterdam Paleoconservative

Rhapsody on Blue
by Chad Hill, the HipCrime Vocab
(a regular commenter on the Facebook group)

Why ancient civilizations couldn’t see the color blue
posted by J Nickolas FitzGerald, Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind Facebook group

* * *

Out of curiosity, I did some less extensive searches, in relation to Julian Jaynes, for some other thinkers, specifically Lev Vygotsky and Alfred Korzybski. The latter only showed up to a significant degree in a single scholarly article on Jaynes’ work (Philip Ardery, Ramifications of Julian Jaynes’s Theory of Consciousness for Traditional General Semantics), although Charles Eisenstein does mention the two thinkers in the same passage of his book The Ascent of Humanity, without making any direct connection or comparison. Greater relevance is found with Vygotsky, and indeed he does come up more often, including several times on the official Julian Jaynes Society website and also in two of the collections of Jaynesian scholarship.

Two of the mentions of Vygotsky on the website are Books Related to Jaynes’s Bicameral Mind Theory and Supplementary Material (for Reflections on the Dawn of Consciousness), with the third offering some slight commentary — Marcel Kuijsten’s Critique 13, from Critiques and Responses: Part 2, where he writes: “For the vast differences between consciousness as described by Jaynes, Dennett, Carruthers, Vygotsky, and others – which is linguistically based and uniquely human – vs. non-linguistic animal cognition, see Peter Carruthers, Language, Thought and Consciousness, Jose Luis Bermudez, Ch. 9, “The Limits of Thinking Without Words,” in Thinking without Words, Lev Vygotsky, Thought and Language, Daniel Dennett, Kinds of Minds, etc.” In the introduction to The Julian Jaynes Collection, Marcel Kuijsten discusses Jaynes’s first hypothesis that consciousness is based on language. Vygotsky is mentioned in passing while explaining the views of another scholar:

“The debate over the importance of language for consciousness has a long history and has seen renewed interest in recent years. While many theorists continue to assume that infants are born conscious (confusing consciousness with sense perception), the work of child psychologist Philip Zelazo strongly supports Jaynes’s argument that consciousness develops in children over time through the acquisition of language. Building on the work of the early twentieth century Russian psychologists Lev Vygotsky and Alexander Luria and the Swiss psychologist Jean Piaget, Zelazo and his colleagues propose a model for the development of consciousness in children that highlights the importance of the interaction between thought and language. Zelazo describes “four major age-related increases” in consciousness in children and corresponding increases in children’s ability to spatialize time. Zelazo’s fourth stage, reflective consciousness, corresponds roughly to Jaynes’s definition of consciousness, whereas Zelazo’s first stage, minimal consciousness, describes what Jaynes would term reactivity or basic sense perception.”

A slightly fuller, if brief, comment on Vygotsky is found in The “Other” Psychology of Julian Jaynes. The author, Brian J. McVeigh, writes that, “An important intellectual descendant of Volkerpsychologie took root in the Soviet Union with the work of the cultural-historical approach of Lev Vygotsky (1896-1934) (1998), Alexander Luria (1902-77) (1976), and Aleksei Leontiev (1903-79) (1978, 2005 [1940]). Vygotsky and Luria (1993 [1930]) emphasized the inherently social nature of mind, language, and thought. Higher mental processes are complex and self-regulating, social in origin, mediated, and “conscious and voluntary in their mode of functioning” (cited in Meshcheriakov 2000; 43; see Wertsch 1985, 1991).”

Interestingly, Rachel Williams, in the above-linked post The Future of Philosophy of Mind, also brings up Vygotsky. “Julian Jaynes has already cleared the underbrush to prepare the way for social-linguistic constructivism,” she explains. “And not your Grandpa’s neutered Sapir-Whorf hypothesis either. I’m talking about the linguistic construction of consciousness and higher-order thought itself. In other words, Vygotsky, not Whorf.” So, she obviously thinks Vygotsky is of utmost importance. I must admit that I’m actually not all that familiar with Vygotsky, but I am familiar with how influential he has been on the thought of others. I have greater interest in Korzybski by way of my appreciation for William S. Burroughs’ views of “word virus” and “Control”.

* * *

It should be mentioned that Jaynesian scholarship, in general, is immense in scope. Look at any of the books put out on the topic and you’ll be impressed. Those like Kuijsten and McVeigh are familiar and conversant with a wide variety of scholars and texts. But for whatever reason, certain thinkers haven’t shown up much on their intellectual radars. As for the likes of Vygotsky and Korzybski, I’m less surprised that they don’t appear as often in Jaynesian scholarship. Though influential, knowledge of them is limited, and I don’t generally see them come up in consciousness studies more broadly. Sapir and Whorf, on the other hand, have had a much larger impact and, over time, their influence has continuously grown. Linguistic relativity has gained a respectability that Jaynesian scholarship still lacks.

I sometimes suspect that Jaynesian scholars are still too worried about respectability, as black sheep in the academic world. Few serious intellectuals took Jaynes seriously, and that still is the case. That also used to be true of Sapir and Whorf, but it has changed. Linguistic relativity, with improved research, has recovered the higher status it had earlier last century. That is the difference for Jaynesian scholarship, which never was respectable. I think that is why linguistic relativity got so easily ignored or dismissed. Jaynesian scholars might’ve been worried about aligning their own theories with another field of study that was, for a generation of scholars, heavily criticized and considered taboo. The lingering stigma of ‘strong’ Whorfianism as linguistic determinism, with its implication that we aren’t entirely isolated, autonomous, self-determined free agents, is still not acceptable in the mainstream thought of this hyper-individualistic society. But one would think Jaynesian scholars would be sympathetic, as the same charge of heresy is lodged against them.

Whatever motivated Jaynesian scholars in the past, it is definitely long past time to change tack. Linguistic relativity is an area of real-world research that could put Jaynes’ theory to a falsifiable test and potentially demonstrate its validity. Simply for practical reasons, those wishing to promote Jaynes’ work might be wise to piggyback on these obvious connections into more mainstream thought, such as mining the work of the popular Daniel Everett and his son Caleb Everett. That would draw Jaynesian scholarship into one of the main battles in all of linguistics, the debate between Daniel Everett and Noam Chomsky about recursion. There is a great opening for bringing attention to Jaynes — discuss why recursion is relevant to consciousness studies in general and Jaynesian consciousness in particular. Or better yet, show the commonalities between Jaynes and Jung, considering Jung is one of the most popular thinkers in the Western world. And as I’ve argued in great detail, such larger context has everything to do with the cultural and cognitive differences demonstrated by linguistic relativity.

In general, Jaynesian studies has been trapped in an intellectual backwater. There has yet to be a writer to popularize Jaynes’ views as they apply to the larger world and present society, from politics to culture, from the economy to environmentalism, from media to entertainment. Even among intellectuals and academics, it remains largely unknown and even less understood. This is beginning to change, though. HBO’s Westworld did more than anything to bring Jaynes’ ideas to a larger audience that otherwise would never come across such strange insights into human nature. Placing this radical theory within a science fiction narrative makes it less daunting and threatening to status quo thought. There is nothing like a story to slip a meme past the psychological defenses. Now that a seed has been planted, may it grow in the public mind.

Let me add that my pointed jabs at the Jaynesian world come from a place of love. Jaynes is one of the main inspirations of my thought. And I enjoy reading Jaynesian scholarship more than that of just about any other field. I just want to see it expand, to become even more impressive. Besides, I’ve never been one for respectability, whether in politics or intellectual pursuits. Still, I couldn’t help but feel kind of bad about writing this post. It could be perceived as if all I was doing was complaining. And I realize that my sense of respect for Jaynesian scholars might be less than obvious to someone casually reading it (I tried to remedy that by clarifying my position in the main text above). I didn’t intend it as an attack on those scholars I have learned so much from. But I felt a need to communicate something, even if all I accomplished for the moment was making an observation.

It’s true that, instead of complaining about the omission of linguistic relativity, I could make a positive contribution by simply writing about how linguistic relativity applies to Jaynesian scholarship. If others haven’t shown the connections, the evidence, and the examples, well then maybe I should. And I probably will, eventually. But it might take a while before I get around to that project. When I do, it could be a partial continuation of, or tangent from, my ongoing theorizing about symbolic conflation and such — that is a tough nut I’ve been trying to crack for years. Still, the omission of linguistic relativity itself somehow seemed significant in my mind. I’m not sure why. This post is basically a way of setting forth a problem to be solved. The significance is that linguistic relativity would offer real-world examples of how Jaynesian views of consciousness, authorization, narratization, etc. might apply to our everyday experience. It would help explain why such complex analysis, intellectually brilliant as it is, is relevant at all to our actual lives.

Ancel Keys, One of Lewis Terman’s Termites

Unless you are seriously interested in diet and nutrition, you’ve probably never heard the name of Ancel Keys (1904-2004). Yet he was one of the most influential men of the 20th century, at least within the areas of nutrition studies, government food policy, and official dietary recommendations. He developed the so-called ‘Mediterranean diet’, although it could more accurately be called a post-war scarcity and austerity diet, since we now know it has little in common with the pre-war traditional Mediterranean diet that prioritized lard and not olive oil. Because of his public campaign against animal fats and his research on heart disease, he was sometimes referred to in the press as ‘Dr. Cholesterol’, despite not being a medical doctor. He was academically successful and had a scientific background but, oddly considering his career path, he had absolutely zero formal education and professional training in nutrition studies or in medicine. Instead, his extended higher education included chemistry, economics, political science, zoology, oceanography, biology, and physiology.

His career as a scientific researcher started in 1931 with a study on the physiology of fish and eels, his main area of expertise at the time, whereas his first work in diet and nutrition happened later on by the accident of historical circumstances. The US military sought to develop prepared rations for soldiers and, as no one else at the University of Minnesota wanted this lowly assignment, Keys, at the bottom of the totem pole, saw it as an opportunity and took advantage of it to promote his career. Given his lack of requisite knowledge and expertise, according to his colleague Dr. Elsworth Buskirk, “he was told to go home and leave such things to the professionals,” but he persisted in obtaining funds and came up with something that met specifications (From Harvard to Minnesota: Keys to our History). This became what is known as the K-Ration. During the Second World War, he did much other work for the military, and that paved the way for his entering the field of nutrition studies. It was through the military that he did research on humans, with much of it focusing on extreme conditions of stress, from high altitudes to starvation. This led to a study on vitamin supplementation during that time period and, after the war, a prospective dietary study in 1947.

Yet Keys wouldn’t fully enter the fray of nutrition studies until the 1970s. He was about 70 years old when, in his battle with the British sugar researcher John Yudkin, he finally became a major contender in scientific debates. His controversial Seven Countries Study, although launched in the late 1950s, wasn’t published in full until decades later in 1978, almost 40 years after his first involvement in animal research. The height of his career extended into his 80s, giving him many decades to mentor students, allies, and followers to carry on his crusade. He had a towering intellect and charismatic personality that gave him the capacity to demolish opponents in debate and helped him to dominate the media and political battlefield. Think of Keys as a smarter version of Donald Trump, as seen in an instinct for media manipulation of public perception, maybe related to Keys’ geographic and familial proximity to Hollywood: “As the nephew of silent screen star Lon Chaney, Keys also filmed all of his scientific work and was a first-rate publicist, frequently writing for popular audiences” (Sarah W. Tracey, Ancel Keys). He was a creature of the mass media that took hold during his lifetime.

Though now largely forgotten by the general public, Keys once was a famous figure whose picture was found on the covers of national magazines, from Time to Life. He personally associated and politically allied himself with many powerful politicians, health experts, and leading scientists. Whether or not you know of him, his work and advocacy shaped the world most of us were born into, and he had a direct impact on the modern food system and healthcare practices that have touched us all. A half century ago, his fame was comparable to that of Dr. John Harvey Kellogg (1852-1943), a Seventh Day Adventist and eugenicist from an earlier generation who also worked in the field of diet and nutrition, having been one of the earliest vegans, having invented breakfast cereal, and having operated a sanitarium that was popular among the elite: politicians, movie stars, writers, and artists. Dr. Kellogg preached against race mixing, warned of race degeneracy, and, to promote eugenics, co-founded the Race Betterment Foundation that held several national conferences. He advocated the development of a “eugenic registry” to ensure “proper breeding pairs” that would produce “racial thoroughbreds,” but for inferior couplings he advised sterilization of “defectives.” Though coming from different ideological perspectives, Keys and Kellogg were the twin forces in shaping anti-fat ideology and, in this scapegoating of animal fats, shifted the blame away from sugar, which was the actual cause behind metabolic syndrome (obesity, diabetes, heart disease, fatty liver, etc.). That misdirection sent nutrition studies down a blind alley and misled public policymakers, a quagmire we are still in the middle of.

To be fair, it must be clarified that Keys never showed any proclivities toward eugenics, but I bring it up because there is a connection to be explored. As a child, he had tested as having a high IQ. After Keys’ parents “signed him up while he was a student at Berkeley High School,” according to Richard C. Paddock (The Secret IQ Diaries), he was given entrance into a study done by Lewis Terman (1877-1956), a noted psychologist who, like Dr. Kellogg, was an early 20th century racist: “He joined and served as a high ranking member in many eugenic organizations (the Human Betterment Foundation, the American Eugenics Society, and the Eugenics Research Association), and worked alongside many others (such as the American Institute of Family Relations and the California Bureau of Juvenile Research)” (Ben Maldonado, Eugenics on the Farm: Lewis Terman). In studying and working with gifted youths like Keys, Terman sought to prove the hypothesis of social Darwinism through eugenics (‘good genes’). He believed that such an ideological vision could be made manifest through a genetically superior intellectual elite who, if promoted and supported and given all the advantages a society could offer, would develop into a paternalistic ruling class of enlightened aristocracy with the potential of becoming humanity’s salvation as visionaries, creative geniuses, and brilliant leaders. It was a humble aspiration to remake all of society from the ground up.

This attitude, bigoted and socially conservative (e.g., prejudice against “sexual deviancy” in seeking to enforce traditional gender roles), was far from uncommon in the Progressive Era. Keep in mind that, at the time, ‘progressivism’ wasn’t solely or even always primarily identified with social liberalism. Among the strongest supporters of Progressivism were Evangelicals, Mormons, Klansmen, Jim Crow leaders, white supremacists, WASP elites, military imperialists, and fascists — think of one of the most famous of Progressive leaders, President Theodore Roosevelt, who was a racist and imperialist; and even his distant cousin, the Progressive President Franklin Delano Roosevelt, was not without racist and imperialist inclinations. Progress back then had a different connotation, and many of these American eugenicists were a direct inspiration to Adolf Hitler and other Nazi leaders. After all, the enactment of progressive Manifest Destiny was still playing out in the last of the Indian Wars all the way into the 1930s, before the remaining free Indians were finally put down. The proto-neocon Civilizing Project was long and arduous and more than a bit bloody. This ideology continued even after the defeat of the Nazis, as sterilization of perceived inferiors in the United States was still practiced for decades following the end of the Second World War, all the way into the 1970s. Eugenics has been persistent, to say the least.

Inspired by this idealistic, if demented and distorted, ideology of evolutionary advancement and Whiggish progress, Terman developed the Stanford-Binet IQ test. During the First World War, he worked with the military to implement the first mass testing of intelligence. His own IQ test was an initial attempt to scientifically measure what is now called general intelligence or the g factor, for which he popularized the term “intelligence quotient” (IQ), a mysterious essence that many at the time believed to be inherent to the individual psyche from birth, as genetically inherited from one’s parents. The Stanford-Binet was a measure of academic ability or what today we might think of as ‘aptitude’ — specifically assessing attention, memory, and verbal skill in measuring ability in arithmetical reasoning, sentence completion, logics, synonyms-antonyms, symbol-digit testing, vocabulary, analogies, comparisons, and general information. The focus was on crystallized intelligence, but it was also culturally biased and coded for socioeconomic class.

The Stanford-Binet was modeled after the intelligence test of the French psychologist Alfred Binet. There was a significant difference, though. Binet used his test to identify those most in need in order to help them improve, whereas Terman saw these ‘deficient’ children as a danger to society that should be eliminated, quite literally with sterilization — this had real world application and consequences: “Terman’s test was also used regularly to determine who should be sterilized in the name of eugenics: individuals with an IQ of under 70 (deemed feebleminded) were targeted for sterilization by the state, such as in the famous case of Carrie Buck. In the United States, over 600,000 people were sterilized by the state for eugenic reasons, often because of IQ test results. For many eugenicists, Terman’s research finally presented a way to efficiently and “objectively” judge the eugenic worth of human lives” (Ben Maldonado, Eugenics on the Farm: Lewis Terman). Instead of helping the poor and disadvantaged, he hoped to use his own adaptation of Binet’s test to identify the smart kids so as to ensure they would become high achievers in gaining the success and respect they supposedly deserved. This was a response to his own childhood struggles as a sickly nerd growing up among other farm kids in rural Indiana.

By the way, this was the specific area that later on would become the stronghold of the Second Klan, with the Indiana Grand Dragon D.C. Stephenson having set up base in Terman’s old hometown. The Second Klan rose to power at the very moment the adult Terman, having left Indiana, began his eugenicist project of IQ testing. That was no coincidence. Following upon a period of moral panic, there was a mix of fear and hope about the future and, central to public debate, the perceived threat to the survival of the white race was a major concern (The Crisis of Identity). The purpose of eugenics was basically to show that the right kind of people were a special breed of humans who, in eliminating what held back their genetic potential, would rise up to make America great again and so return Western Civilization to its previous glorious heights. The agenda, of course, wasn’t to create a fair and objective measure of human worth and human potential, for the assumptions it was built upon presupposed the race and class of people who, by definition, were the best of the best. Terman was simply seeking to prove what he already ‘knew’ as a true believer in social, moral, mental, and racial hygiene.

With this hope in mind, Terman began in 1921 to gather a large group of children who scored high on his IQ test, a total of 1,521 subjects, including the teenage Ancel Keys. His selection process was highly subjective and idiosyncratic. It just so happened that, among a total sample of 168,000 students, Terman included only 6 Japanese-Americans, 2 African-Americans, 1 Native American, and 1 Mexican-American. The vast majority of those chosen were white, urban, and middle class boys largely drawn from the college towns and suburbs of Northern California. These were known as Terman’s kids or ‘Termites’. Betraying scientific objectivity, he intervened in the lives of his subjects, sometimes openly but also behind the scenes. He followed these subjects into adulthood to find out how they turned out and to ensure they gained advantages, such as writing letters of recommendation for college entrance, job applications, and professional contacts. The eugenics project was not a passive endeavor of neutral scientific observation.

Whatever is to be thought of it, there is no doubt that the study of Terman’s children was the first and maybe only time a hypothesis of social Darwinian eugenics was so fully tested at such an ambitious level. Across all scientific fields, there is no other longitudinal study that has lasted so long and, as some of the subjects remain alive, there are scientists carrying on the work to this day, with the last of the surviving Termites still dutifully filling out the surveys sent to them (Ancel Keys remained a participant until his death in 2004, two months shy of his 101st birthday). One has to give Terman credit for having dared to scientifically test his belief system in a falsifiable study, however much he ignored the problems with confounding factors. He put his convictions on the line, although Hitler was even more ambitious in using war as a test of sorts, forcing an end result of either total domination or total destruction, to prove or disprove the hypothesis of German racial supremacy. I guess we can be grateful that Terman took the less violent approach of scientific analysis, one that didn’t require the vast desolation of battlefields and doctors experimenting on unwilling victims in concentration camps.

Terman’s decades-long experiment, continuing as it did into the post-war period, ended in failure by his own standards of expectation. Before his death in 1956, he was able to see how few of the children grew up to amount to much, beyond many of them becoming moderately successful middle class professionals, although a few attained some prominence: “Among some of the original participants of the Terman study was famed educational psychologist Lee Cronbach, “I Love Lucy” writer Jess Oppenheimer, child psychologist Robert Sears, scientist Ancel Keys, and over 50 others who had since become faculty members at colleges and universities” (Kendra Cherry, Are People With High IQs More Successful?). In Cradles of Eminence, Victor and Muriel Goertzel analyzed the Termites according to eminence, defined as having multiple biographies written about someone without their being either royalty or a sports star. It turns out none of Terman’s subjects had even a single biography written about them. Crystallized intelligence, at best, moderately predicted being professionally successful and conforming well within the social order. However, once later tests removed the cultural and class biases, IQ tests stopped being useful for predicting even this much. When environmental factors and family background are controlled for, almost all IQ differences disappear. A lower IQ rich person is more likely to be successful than a higher IQ poor person. Surprise, surprise!

Interestingly, in comparing the Termites to their peers, “two children who were tested but didn’t make the cut — William Shockley and Luis Alvarez — went on to win the Nobel Prize in Physics. According to Hastorf, none of the Terman kids ever won a Nobel or Pulitzer” (Mitchell Leslie, The Vexing Legacy of Lewis Terman). It’s ironic that Shockley later followed Terman’s example by also becoming a eugenicist and, through his friendship with Terman’s son Frederick, was hired as a professor at Stanford, where the senior Terman had done his scientific work, the reason his IQ test was called the Stanford-Binet. Shockley and Frederick Terman came to be known as the fathers of Silicon Valley, having developed the high tech start-up model and having played a central role in bringing in the massive Pentagon funding that has defined and dominated the American tech industry ever since (e.g., Jeff Bezos sitting on a Pentagon board and with numerous government contracts). Social Darwinism, intellectual elitism, and paternalistic technocracy remain the ascendant ideology of Silicon Valley tech bros and the capitalist class of entrepreneurial philanthropists who seek to shape society with their gifted genius, not to mention their wealth (e.g., Bill Gates and the Gates Foundation).

Lewis Terman privately admitted that some of his strongest bigoted views were wrong, but unlike many other eugenicists he never publicly recanted his earlier racism. Nonetheless, he was honest enough to conclude that a pillar of eugenicist dogma was flat-out wrong, stating that, “At any rate, we have seen that intellect and achievement are far from perfectly correlated.” Of the 730 subjects he was able to follow into adulthood, he divided them into three groups: only 20% were categorized in Group A, those he deemed successful; an equal number, 20%, he judged to fall into Group C, the failures; and the remaining 60% fell into the middle Group B, which included those working in positions “as humble as those of policemen, seaman, typist and filing clerk.” That is rather unimpressive. Writing about this, one person noted that, “The ones among Group A overwhelmingly were from the upper class. The Cs were majorly from the lower class. Majority in the group had careers that were quite ordinary. […] Sociologist Pitirim Sorokin, in his critique of the study, argued showing that Terman’s selected group of children with high IQs did about as well as a random group of children selected from similar family backgrounds would have done.”

Beyond the unsurprising prediction that wealthier people with better chances have better outcomes, the predictive ability of his IQ test was completely off the mark. The Termites, for all their test-taking ability, showed no advantages over the general population. The IQ test did demonstrate academic ability, for whatever that is worth. Among Termites, the rate of college graduates was extremely high (70%, ten times that of their peers), but on average they still were only getting B grades in their classes and a college degree didn’t translate to greater real world accomplishment. They were smart, even if no more successful than their socioeconomic equivalents. If they were wealthy, they did as well as other wealthy people. And if they weren’t wealthy, then they followed the typical path of underachievement. Supposed superior genetics offered no protective advantages beyond the social, racial, and economic privileges given or denied in the lottery of birth.

Even among the successful Termites, there was nothing unusual to be praised. “Rebels were scarce among the Termites, and Henry David Thoreau’s different drummer would have found few followers,” wrote Shurkin in Terman’s Kids. “They did not change life; they accepted it as it came and conquered it.” As good test-takers and students, they were the ultimate conformists, well-lubricated cogs in the machine. They knew how to play the game to win, but the game they played, that of mainstream success and conventional respectability, had rules they followed. These weren’t the types to rock the boat. Rather, Termites were simply well-educated sheep (see: A Ruling Elite of Well-Educated Sheep; & William Deresiewicz, Excellent Sheep). “This is unsurprising,” Elizabeth Svoboda points out, “given that the kinds of people who ace aptitude tests are, by definition, those specialising at jumping through the hoops that society has set up. If you believe that your entire purpose on Earth is to finish the course, chances are you’ll remain within its boundaries at all costs” (The broad, ragged cut).

As expected, the single greatest factor is environment. It’s not so much who we are, as if we had an inborn psychological profile where character is fate, since who we are depends on where we are (Dorsa Amir, Personality is not only about who but also where you are). And we know from epigenetic research that it also matters where our parents, grandparents, great-grandparents, and those further back were, as environmental factors carry forward in our family inheritance, such as the grandchildren of famine victims having higher rates of obesity. The world is complex and humans are shaped by it. Despite Terman’s ideological failure, many aptitude tests were based upon this model. Our entire education system has since been redesigned to teach to such tests and to serve as a filtering process for educational advancement, on the assumption of a pseudo-meritocratic dogma not all that different from Terman’s eugenicist dream of a better humanity.

As with fascism, the dangers and harms of eugenics linger on within our institutions and within our minds. We are trapped within false and misleading systems of ideological realism. That isn’t particularly smart of us, as individuals and as a society. We’d be better off promoting the development and opportunities of the majority (James Haywood Rolling, Jr., Swarm Intelligence), rather than investing almost all of society’s resources in a privileged elite who we desperately hope will be our salvation. Considering the national and global failure among the ruling class and capitalist plutocrats, maybe we should create a citizenry that can solve its own problems. Basically, maybe we should take democracy seriously, really and fully try it for the first time, not as superficially inspiring rhetoric to cling to in the darkness but as a lived reality. As ambitious experiments go, democracy is definitely worth attempting.

Up to this point, the democratic experiment has been more of a hypothesis waiting to be tested. The oligarchic and filthy rich American ruling elite, for some reason, have never been in favor of testing the potential for self-governance among the American people. Eugenics has been more thoroughly researched over the past century than have liberty and freedom. That speaks volumes about American society. But it isn’t only about eugenics, as authoritarian elitism and paternalism have taken many other forms. Let’s bring it back to Ancel Keys. Even though he was one of the Elect personally groomed by Lewis Terman to be a leading member of the master race, Keys rejected “Terman’s hereditarian bias” and thought that “personal will … is a greater factor in success than inherited intelligence” (Richard C. Paddock, The Secret IQ Diaries).

Even so, it appears that Keys carried on the sense of personal superiority that Terman helped instill in him. As part of the supposed meritocracy, he didn’t feel a need to humbly make scientific advancements in the workmanlike fashion of careful research and cautious analysis. He had such immense confidence in knowing he was right and that inferior minds were wrong that he saw no need for scientific debate and, instead, used his political power and media influence to effectively shut down debate by silencing his opponents. As a self-identified genius imbued with noblesse oblige (with great power comes great responsibility), he wanted to change the world and had the zealous conviction to enforce his will upon others. It was irrelevant that he dismissed the idea that his elitist worth was based on genetics, as it made no practical difference what he believed justified his dogmatic mission of dietary evangelism (The Creed of Ancel Keys). Following in the footsteps of Lewis Terman, he aspired to be a paternalistic technocrat who would save the lesser folk from their wrong thinking and behavior. He simply knew what was right.

Ancel Keys, in embracing his role as part of the wise ruling class, ended up being the greatest success story of Lewis Terman’s eugenics project. He also demonstrated its failure, in that it turns out that being smart is not enough. He was brilliant in his aggressive displays of intellectual prowess and he was successful in his professional achievement by climbing the ladder of power and prestige, but he was neither a creative genius nor a visionary leader. Instead of thinking outside of the box, he forced everyone else into the box of his ideological biases, which imposed the stunting effect of groupthink on several generations of scientific researchers and health experts, nutritionists and doctors. Maybe we should be unsurprised by this unhappy result (Quickie Post — Young Prodigies Usually Do Not Turn into Paradigm-Shifting Geniuses).

It could be argued that, at least in this case, the name of ‘Termite’ was aptly descriptive of the harm caused to society. Now we are all suffering for it in the tragedy of our ever worsening public health crisis. And as if that weren’t bad enough, we have a new generation of paternalistic overlords who are repeating the same mistake in once again trying to enforce dietary dogma from on high (Dietary Dictocrats of EAT-Lancet), led by Walter Willett, the direct heir of Ancel Keys. The experiment of elite rule goes on and on.

* * *

The broad, ragged cut
by Elizabeth Svoboda

Despite initial resistance, the public accepted the notion of a test-driven meritocracy because it twined together two established strands of thought: first, that the spoils should go to the declared winner, and second, that high-performers’ abilities should be harnessed for the good of the nation. ‘To each according to their ability’ became the tacit watchword, a neat variant of the Marxist injunction ‘to each according to their need’.

The first aptitude-testers promoted the idea that each person had an innate, more-or-less fixed intellectual capacity. In the context of the early 20th century’s growing eugenics movement, the tests were often deployed to justify widespread racial discrimination. Terman claimed that what he called borderline deficient scores on the Stanford-Binet were ‘very, very common among Spanish-Indian and Mexican families of the Southwest and also among Negroes’. ‘Children of this group should be segregated into separate classes,’ he wrote in 1916. ‘They cannot master abstractions but they can often be made into efficient workers … From a eugenic point of view they constitute a grave problem because of their unusually prolific breeding.’ In Terman’s mind, then, low IQ scores were simply and unarguably the result of objective deficiency.

We now understand just how wrong that notion was. Today, many psychologists understand IQ and aptitude tests to be ‘culture-bound’ to one degree or another – that is, they evaluate abilities prized in the dominant Western culture, such as sorting items into categories, and can privilege those raised in that milieu. Such inequities have persisted despite attempts to make the tests fairer to those from non-dominant cultures.

As the US marinated in social Darwinism after the First World War, the government began devising its own sinister solution to the ‘grave problem’ of which Terman had warned. The US Supreme Court case Buck v Bell in 1927 ruled for compulsory sterilisation of the ‘feeble-minded’ in the name of public welfare. For more than four decades thereafter, US states sterilised thousands of people with low IQ scores; a disproportionate number of victims were nonwhite. In later years, though aptitude tests’ eugenic roots would fade from view, the ranking of test-takers according to perceived social value would continue unabated.

The History of Eugenics in America, Part II
by Steven Vigdor and Tim Londergan

In light of our current knowledge of nutrition and fitness, we now view J.H. Kellogg’s practices as a combination of exceptional insight, mixed with positively bizarre notions on medicine and health. The field of eugenics also combined significant advances in applied science with a set of misguided, and in some cases tragic, biases and prejudices.

J.H. Kellogg was an enthusiastic proponent of eugenics, and in 1911 he established the Race Betterment Foundation in Michigan. That Foundation held three national conferences on Race Betterment in 1914, 1915 and 1928. The Race Betterment congresses allowed advocates of eugenics to share their suggestions for the most effective practices that would lead to maintaining or improving ‘racial purity.’ Kellogg himself had a complicated relationship with the notion of racial purity, particularly with respect to blacks. He and his wife had no children, so over the course of their lives they raised a large number of foster children; this included a number of black youths.

On the other hand, Kellogg was a strong supporter of segregation and a firm believer that different races should not mix. Here Kellogg adopted a common theme from the eugenics movement that Nordics, Mediterraneans, Alpines, Mongolians and blacks all represented different ‘races.’ Kellogg warned of “the rapid increase of race degeneracy, especially in recent times,” and urged the adoption of steps that he claimed would result in the “creation of a new and superior human race.”

With his characteristic energy and ambition, Kellogg proposed a multi-step plan to save the U.S. from a calamitous fate. His plan included “a thoroughgoing health survey to be conducted in every community every five years, free medical dispensaries for the afflicted, the inspection of schools and schoolchildren, health education, prohibition of the sale of alcohol and tobacco, strict marriage laws in every state, and the establishment of experiment stations [devoted] to investigating the laws of heredity in plants, animals, and humans.”

A central feature of Kellogg’s plan was the creation of a ‘eugenic registry’ that would establish criteria for ‘proper breeding pairs.’ The idea was that individuals would provide their credentials to a central clearinghouse. Males who met the highest standards for racial ‘fitness’ would be paired with similarly ‘fit’ females and encouraged to marry (the idea was clearly inspired by similar matings with ‘pedigreed’ dogs and ‘bloodlines’ for horses).

Kellogg proposed central record-keeping offices for family pedigrees and the establishment of contests for ‘best babies’ and ‘fittest families.’ A few years later, such contests became common at state fairs across the U.S., as we will describe in the next section. In addition to the fairly sinister aspect of ‘racial purity,’ such contests also placed an emphasis on wellness, and offered useful tips on healthy diets and nutrition for young children. […]

The study of intelligence testing was then taken up by scientists such as Lewis Terman and Robert Yerkes. Terman, a psychologist at Stanford, made major revisions in Binet’s tests. He organized the tests into two parts. Part A included sections on arithmetical reasoning, sentence completion, logics, a synonym-antonym section, and a symbol-digit test. Part B included sections involving sentence completion, vocabulary, analogies, comparisons, and general information. Terman and associates tried out their tests on numerous cohorts of school children. Their aim was to determine the average performance of children in each grade from 3 to 8, and to administer the test to as many students as possible. They also performed numerous statistical tests, and arranged the grading to achieve an average of 100 for every grade, with a standard deviation of 15. The resulting “Stanford-Binet” test fairly rapidly became the standard in the field.

Terman was quite candid about his motives for universal testing. “It is safe to predict that in the near future intelligence tests will bring tens of thousands of those high-grade defectives under the surveillance and protection of society. This will ultimately result in curtailing the reproduction of feeble-mindedness and in the elimination of an enormous amount of crime, pauperism, and industrial inefficiency.” So, while Binet had insisted that his tests be administered only to provide assistance in improving the skills of slow learners, Terman and his hereditarian brethren were determined to identify, isolate and stigmatize precisely this group of children. Terman had no doubt that his tests represented measurements of innate intelligence, and that intelligence was almost entirely determined by heredity. “The children of successful and cultured parents test higher than children from wretched and ignorant homes for the simple reason that their heredity is better.”

Terman also recommended that businesses use IQ tests in hiring decisions. He argued that “substantial success” as a leader required an IQ of at least 115 to 120. Furthermore, people with IQs below 100 should not be hired for demanding or high-paying jobs. Terman was even more specific: people with IQ below 75 should only be qualified for menial tasks, and the 75-85 level for semi-skilled labor. People with an IQ of 85 or lower should be tracked into vocational schools, so that they would not leave school and “drift easily into the ranks of the anti-social or join the army of Bolshevik discontents.”

The Vexing Legacy of Lewis Terman
by Mitchell Leslie

Terman, who had grown up gifted himself, was gathering evidence to squelch the popular stereotype of brainy, “bookish” children as frail oddballs doomed to social isolation. He wanted to show that most smart kids were robust and well-adjusted — that they were, in fact, born leaders who ought to be identified early and cultivated for their rightful roles in society.

Though the more than 1,000 youngsters enrolled in his study didn’t know it at the time, they were embarking on a lasting relationship. As Terman poked around in their lives with his inquisitive surveys, “he fell in love with those kids,” explains Albert Hastorf, emeritus professor of psychology. To the group he always called “my gifted children” — even after they grew up — Terman became mentor, confidant, guidance counselor and sometimes guardian angel, intervening on their behalf. In doing so, he crashed through the glass that is supposed to separate scientists from subjects, undermining his own data. But Terman saw no conflict in nudging his protégés toward success, and many of them later reflected that being a “Terman kid” had indeed shaped their self-images and changed the course of their lives. […]

A story of a different kind emerges from Terman’s own writings — a disturbing tale of the beliefs of a pioneer in psychology. Lewis Terman was a loving mentor, yes, but his ardent promotion of the gifted few was grounded in a cold-blooded, elitist ideology. Especially in the early years of his career, he was a proponent of eugenics, a social movement aiming to improve the human “breed” by perpetuating certain allegedly inherited traits and eliminating others. While championing the intelligent, he pushed for the forced sterilization of thousands of “feebleminded” Americans. Later in life, Terman backed away from eugenics, but he never publicly recanted his beliefs. […]

Many who did well in their fields had received no boost from Terman beyond an occasional pat on the back and the knowledge that they’d qualified for his study. For others, like Dmytryk, Terman’s intervention was life-changing. We’ll never know all that he did for his kids, Hastorf notes. But it’s clear that Terman helped several get into Stanford and other universities. He dispatched numerous letters of recommendation mentioning that individuals took part in his project. And one time, early in World War II, he apparently pulled strings on behalf of a family of Japanese-Americans in his study. Fearing they were about to be interned, they wrote to Terman for help. He sent a letter assuring the federal government of their loyalty and arguing against internment. The family remained free.

From a scientific standpoint, Terman’s personal involvement seems foolish because it probably skewed his results. “It’s what you’d expect a mentor to do, but it’s bad science,” Hastorf says. As a conscientious researcher whose work got him elected to the National Academy of Sciences, Terman should have known better — but he wasn’t the first or last to slip. Indeed, the temptation to meddle is an occupational hazard among longitudinal researchers, says Glen Elder Jr., a sociologist at the University of North Carolina. A certain degree of intimacy develops, he explains, because “we’re living in their lives and they’re living in ours.”

It’s difficult to gauge Terman’s influence on the kids because so many are deceased or still anonymous. One survivor willing to speak on the record is Russell Robinson, a retired engineer and former director of aeronautical research at NASA Ames. He was a high school student in Santa Monica when, he recalls, “someone in the school system tapped me on the shoulder and said, ‘Dr. Terman would like to test you, if you’re willing.'” Robinson, now 92 and living in Los Altos, doesn’t think being in the study significantly changed his life, but he did draw confidence from knowing that Terman thought highly of him. Several times during his career, he mentally invoked Terman to shore up his self-image. “Research is a strange business — in a sense, you’re out there alone,” he says. “Sometimes, the problems got so complex I would ask myself, Am I up to this? Then I would think, Dr. Terman thought I was.”

Others have echoed that sentiment, Hastorf says. In fact, the study meant so much to some of the subjects that the Terman project now runs entirely on their bequests.

Several Terman kids have cited a negative impact on their lives. Some complained of being saddled with an unfair burden to succeed, Hastorf says, while others thought that being dubbed geniuses at an early age made them cocky and complacent. For better or worse, a quarter of the men and almost a third of the women said they felt that being a Terman kid had changed their lives. And since Terman often did his meddling behind the scenes, others may have been influenced without ever realizing it.

His support of the gifted was heartfelt, but an equally fundamental part of Terman’s social plan was controlling the people at the other end of the intelligence scale. Both were aims of eugenics, a movement that gained momentum early in the 20th century.

The eugenicists of Terman’s day held that people of different races, nationalities and classes were born with immutable differences in intelligence, character and hardiness, and that these genetic disparities called for an “aristogenic” caste system. Traits like feeblemindedness, frailty, emotional instability and “shiftlessness,” they believed, were controlled by single genes and could be easily eliminated by controlling the reproduction of the “unfit.” In the United States, the movement peddled a topsy-turvy form of Darwinism, claiming that the “fittest” (defined as well-to-do whites of Northern European ancestry) were reproducing too slowly and in danger of being overwhelmed by the inferior lower strata of society. America was jeopardized from within, eugenicists warned, by the rapid proliferation of people lacking intelligence and moral fiber. From without, the threat was the unchecked arrival of immigrants from southern and eastern Europe. Together these groups would drag down the national stock.

Terman’s letters and published writings show that he shared these beliefs and argued for measures to reverse society’s perceived deterioration. He was a member of the prominent eugenics societies of the day. “It is more important,” he wrote in 1928, “for man to acquire control over his biological evolution than to capture the energy of the atom.” Yet he wasn’t a renegade howling from the fringe. Eugenics was “hugely popular in America and Europe among the ‘better sort’ before Hitler gave it a bad name,” as journalist Nicholas Lemann puts it. Luminaries who supported at least part of the early eugenic agenda include George Bernard Shaw, Theodore Roosevelt, Margaret Sanger, Calvin Coolidge and Oliver Wendell Holmes Jr. In fact, Terman sat on the boards of two eugenics organizations with Stanford’s first president, David Starr Jordan.

Early eugenicists managed to push through several laws. Thirty-three states, including California, passed measures requiring sterilization of the feebleminded. As a result, more than 60,000 men and women in mental institutions were sterilized — most against their will and some thinking they were getting an emergency appendectomy. In 1924, Congress set quotas that drastically cut immigration from eastern and southern Europe. Though pressure to stem immigration had come from many sources, including organized labor, the quotas had an undeniably racist taint. Terman cheered these efforts.

During the 1930s, as the brutality of Nazi policies and the scientific errors of eugenic doctrines became clearer, the eugenics movement withered in the United States and Terman inched away from his harshest views. Later in life, he told friends he regretted some of his statements about “inferior races.” But unlike several prominent intelligence-testers, such as psychologist Henry Goddard and SAT creator Carl Brigham, Terman never publicly recanted.

At least one eugenic measure proved as stubborn as he was. News of the Nazis’ mass sterilization program did not put an end to the practice in the United States, where sterilizations of the mentally ill and retarded continued well into the 1970s.

Terman left a difficult legacy. On one hand, his work inspired almost all the innovations we use today to challenge bright students and enrich their education. As he followed the lives of intelligent kids, he also became their best publicist, battling a baseless prejudice. As a scientist, he devised methods for assessing our minds and behaviors, helping put the field of psychology on an empirical and quantitative foundation. He was one of Stanford’s first nationally prominent scholars, and as a department chair for two decades, he transformed the psychology department from a languid backwater into an energetic, top-ranked program. He established the longitudinal method and generated an archive of priceless data. Longitudinal studies have “become the laboratory of the social sciences” and are growing in importance as the population ages, UNC sociologist Elder observes.

On the other hand, as biographer Minton points out, the very qualities that made Terman a groundbreaking scientist — his zeal, his confidence — also made him dogmatic, unwilling to accept criticism or to scrutinize his hereditarian views. A similar paradox existed in his social agenda. Terman was a visionary whose disturbing eugenic positions and loving treatment of the gifted grew out of the same dream for an American meritocracy.

“He was a very nice guy, but I have some things I would argue with him about,” Hastorf declares. His conclusion is that Terman was as much a product of his time as a force for change — and that, like many powerful thinkers, he was complex, contradictory and not always admirable.

The Parable of the Talents
by Scott Alexander
from the comment section:

Harald K says:
“The IQ pioneers were social reformers who wanted to reduce human suffering.”

Oh sure. By turning as much decision-making as possible over to themselves, or by resisting efforts to take away the privileges they already had, i.e. egalitarian efforts. I don’t hold much faith in the good will of US eugenicists, any more than in that of their German cousins. The decision of which other people’s genes deserve to survive into the next generation is one about which every human is hopelessly biased, and every such decision is hopelessly corrupt.

Sure, many socialists were fooled too by the eugenicists’ crocodile tears for humanity, but it’s an inherently and irreparably selfish practice, morally compatible only with an every-man-for-himself, might-makes-right morality. I could have told them (and many DID tell them).

Binet can get a pass, sort of. His concern was mainly about who would do well in the French school system. Goddard imported the Binet test before Terman turned it into the first IQ test, so Goddard was hardly “the man who brought IQ tests to America”. I wonder how anyone can look at Goddard’s Wikipedia page for arguments that he had such noble intentions and overlook how he argued that Americans were unfit for democracy, or how he let first- and second-class passengers skip the intelligence testing demanded of immigrants at Ellis Island.

IQ tests were invented in America by Lewis Terman. From the moment Terman touched the test, it was conscripted into the service of racism and elitism.

“Does that sound like radical antihumanism? Nope.”

Yes, it does. Note how it promotes the welfare of humanity in the abstract, at the expense of concrete humans living here and now. But as I’ve argued, even that is just a fig leaf for the crudest power-grab a biological human can possibly make.

Harald K says:
“And ended up making a tool that predicts that the Chinese and Jews should be doing it instead of them.”

Ah, here it becomes relevant that the IQ of today isn’t really Terman’s IQ. Today’s tests make Chinese people look good, but Terman’s test didn’t. It didn’t try to be culturally independent at all, so if you administered it to a Chinese person, he’d score horribly. There were even questions that obviously coded for social class, like where you would go to buy certain products.

It was in response to such criticism that they gradually tried to make the tests more independent of culture and language. It was not such a great sacrifice for them to open up to the possibility that some groups may on average do slightly better than your group, once the tests had scientifically established that they, individually, were superior beings.

But as they did so, the tests became less useful for predicting success. (It turns out upper-class white kids are more successful than kids who go to the liquor store to buy sugar, even if the latter kids are otherwise clever. Who knew?)

Elof Carlson says:
There are several difficulties with using a single number to measure intelligence, on a spectrum running from retarded to genius. Issue one is the diversity of talents. As you point out, musical genius is not correlated with IQ-test genius, because there are many people in the 160-plus range who have little music appreciation or talent. My mentor, HJ Muller, had a 165 IQ as measured by Anne Roe, but he had no ear for music. The same might be true for artistic expression among museum-quality artists. It might also be true for creativity. The second issue is the role of home environment. This varies a lot. In general, those in poverty have lower IQ scores than those who have wealthy home environments. Premeds who take Kaplan MCAT courses do better than those who do not. Those who go to elite private schools do better than those who go to public high schools. Having a private tutor helps even more. The wealthy can afford such luxuries for their children. The poor cannot.

A book that changed my mind about the usefulness of IQ scores was Cradles of Eminence by Victor and Muriel Goertzel. They wanted to compare Terman’s study of 1,000 high-IQ California kids with the eminent. They defined eminence as having two or more biographies written about a person who is not royalty or a sports figure. They found that none of the Terman kids had biographies written about them. They mostly became health professionals, CEOs, lawyers, engineers, and solid, contented middle-class adults. Those who did have biographies written about them often had unstable middle-class homes (e.g., a neurotic or psychotic parent, an alcoholic parent, a financial collapse in business leading downward in social class, a parent who was a zealot for a cause). The Goertzels argued that it was the conflict at home (the parents were nevertheless loving to their children) that led these students to the creative activities that set them apart. The Terman kids were teachers’ pets, loved school, and aced all their tests. The Goertzel biographees often disliked school (they were bored by it) and were often misinterpreted by their teachers as lazy, mentally disturbed, or nonconforming. Very few of the high-IQ Terman kids were in the arts or wrote fiction. Many of the Goertzel biographees had careers in the arts (though a majority of both groups chose science careers). None of the Terman kids won a Nobel or Pulitzer. Several of the Goertzel biographees did.

I hope you will read that book and comment on it. I believe IQ measures effectiveness in test-taking. That may be innate. It certainly has value in predicting who gets into medical school or who succeeds academically. I believe creativity is independent of IQ score, and no one has developed an objective, quantitative measure of creativity in whatever field people excel.

Are ‘vegetarians’ or ‘carnivores’ healthier?

Nutrition research has been plagued with problems. Most of the research in the past was extremely low quality. Few other fields would allow such weak research to be published in peer-reviewed journals. Yet for generations, epidemiological (observational and correlational) studies were the norm in nutrition science. This kind of research is fine for preliminary exploration, for formulating new hypotheses to test, but it is entirely useless for proving or disproving any given hypothesis. Shockingly, almost all medical advice and government recommendations on diet and nutrition are based on this superficial and misleading level of evidence.

The main problem is that there has been little, if any, control of confounding factors. Also, the comparisons used were pathetically weak. It turns out that, in studies, almost any dietary protocol or change improves health compared to a standard American diet (SAD) or other standard industrialized diets based on processed foods: refined carbs (particularly wheat), added sugar (particularly high-fructose corn syrup), omega-6 seed oils (inflammatory, oxidative, and mutagenic), food additives (from glutamate to propionate), and nutrient-deficient, chemical-drenched agricultural crops (glyphosate among the worst). Assuming the dog got decent food, even eating dog shit would be better for your health than SAD.

Stating that veganism or the Mediterranean diet is healthier than what most people eat really tells us nothing at all. That is even more true when the healthy user effect is not controlled for, as is typically the case. When people on these diets are compared to typical meat eaters, the ‘carnivores’ are also eating tons of carbs, sugar, and seed oils with their meat (buns, french fries, pop, etc.; seed oils for cooking and in sauces; not to mention snacking all day on chips, crackers, cookies, and candy). The average meat-eater consumes far more non-animal foods than animal foods, and most processed junk food is made mostly or entirely with vegan ingredients. So why do the animal foods get all the blame? And why does saturated fat get blamed when, starting back in the 1930s, seed oils replaced lard as the main source of cooking fat?
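A toy simulation makes the healthy user effect easy to see (purely illustrative, with made-up numbers, not data from any study): a hidden health-consciousness trait drives both who adopts the “healthy-labeled” diet and who avoids disease, so the diet looks protective even though, in this model, it does nothing at all.

```python
# Purely illustrative simulation of healthy-user confounding; all numbers are made up.
import random

random.seed(0)

def simulate_person():
    health_conscious = random.random() < 0.5
    # Health-conscious people are more likely to adopt the diet being studied...
    on_diet = random.random() < (0.8 if health_conscious else 0.2)
    # ...and also exercise, avoid smoking, see doctors, etc., which drives the outcome.
    disease_risk = 0.10 if health_conscious else 0.30
    return on_diet, random.random() < disease_risk

people = [simulate_person() for _ in range(100_000)]
on = [sick for diet, sick in people if diet]
off = [sick for diet, sick in people if not diet]
print(f"disease rate on the diet:  {sum(on) / len(on):.3f}")   # lower
print(f"disease rate off the diet: {sum(off) / len(off):.3f}")  # higher, yet the diet did nothing
```

An observational study that doesn’t account for that hidden trait will credit the diet with the difference, which is the kind of problem the matched-sample design discussed below tries, imperfectly, to address.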

If scientists in this field were genuinely curious, intellectually humble, not ideologically blinded, and unbiased by big food and big farm funding, they would make honest and fair comparisons to a wide variety of optimally designed diets. Nutritionists have known about low-carb, keto, and carnivore diets for about a century. The desire to research these diets, however, has been slim to none. The first-ever study of the carnivore diet, including a fully meat-based version, is happening right now. To give some credit, research has slowly been improving. I came across a 2013 study that compared four diets: “vegetarian, carnivorous diet rich in fruits and vegetables, carnivorous diet less rich in meat, and carnivorous diet rich in meat” (Nathalie T. Burkert et al., Nutrition and Health – The Association between Eating Behavior and Various Health Parameters: A Matched Sample Study).

It’s still kind of amusing that the researchers called a “diet rich in fruits and vegetables” and a “diet less rich in meat” carnivorous. If people are mostly eating plant foods or otherwise not eating much meat, how exactly is that carnivorous in any meaningful and practical sense? Only one of the four diets was carnivorous in the sense the average person would understand it, as a diet largely based on animal foods. Even then, the study doesn’t include a carnivorous diet entirely based on animal foods. Those carnivores eating a “diet rich in meat” might still be eating plenty of processed junk food, their meat might still be cooked or slathered in harmful seed oils and come with a bun, and they might still be washing it down with sugary drinks. A McDonald’s Big Mac meal could count as part of a diet rich in meat, simply because meat represents the greatest portion of its weight and calories. Even if their diet were only 5-10% unhealthy plant foods, it could still be doing severe damage to their health. One can fit a fairly large amount of carbs, seed oils, etc. into a relatively small portion of the diet.

I’m reminded of research that defines a “low-carb diet” as any carb intake of 40% or below, even though other studies show that 40% is about the highest carb intake found among hunter-gatherers. As high and low are relative concepts in defining carb intake, what is considered a meat-rich diet is relative as well. I doubt these studied carnivorous “diets rich in meat” include as high a proportion of animal foods as is found in the diets of the Inuit, the Masai, early Americans, and Paleolithic humans. So what is actually being compared and tested? It’s not clear. This was further confounded by the way vegans, vegetarians, and pescetarians (fish-eaters) were combined into a single group mislabeled as ‘vegetarian’. Vegetarians and pescetarians technically could eat a primarily animal-based diet if they so chose (dairy, eggs, and/or fish), and I know plenty of vegetarians who eat more cheese than they do fruits and vegetables. Nonetheless, at least these researchers were making a better comparison than most studies. They did try to control for other confounders, such as by pairing each person on a plant-based diet with “a subject of the same sex, age, and SES [socioeconomic status]” from each of the other three diets.
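For what that 40% cutoff means in practice, here is a small worked example (my own arithmetic, assuming the cutoff refers to percent of total calories, using the standard 4 kcal/g for carbohydrate and protein and 9 kcal/g for fat):

```python
# Illustrative arithmetic only; the gram amounts below are hypothetical.
def carb_percent(carb_g, protein_g, fat_g):
    """Percent of total calories coming from carbohydrate (standard Atwater factors)."""
    carb_kcal = carb_g * 4
    total_kcal = carb_kcal + protein_g * 4 + fat_g * 9
    return 100 * carb_kcal / total_kcal

# A hypothetical day of eating: 200 g carbs, 100 g protein, 80 g fat
print(round(carb_percent(200, 100, 80)))  # ~42% of calories from carbs
```

By that measure, a diet most people would never call low-carb can still slip under a 40% threshold, which is part of why such labels need to be read carefully.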

What were the results? Vegetarians, compared to those on the most meat-based of the diets, had worse outcomes for numerous health conditions: asthma, allergies, diabetes, cataracts, tinnitus, cardiac infarction, bronchitis, sacrospinal complaints, osteoporosis, gastric or intestinal ulcer, cancer, migraine, mental illness (anxiety disorder or depression), and “other chronic conditions.” There were only a few health conditions where the plant-based dieters fared better. For example, the so-called ‘vegetarians’ had lower rates of hypertension than the carnivores rich in meat and those less rich in meat, although higher rates than the carnivores rich in fruits and vegetables (i.e., more typical omnivores).

These results do tell us something about the diets, though. If the carnivorous diets were low enough in starchy and sugary plant foods and low enough in dairy, they would be ketogenic, which in studies is known to lower blood pressure, and so they would show a lower rate of hypertension. The hypertension result therefore indicates that none of these diets were low-carb, much less very low-carb (ketogenic). The plant-based dieters in this study also had lower rates of stroke and arthritis, other benefits seen on a ketogenic diet, which further demonstrates that this study wasn’t comparing high-carb against low-carb, as one might expect from how the diets were described in the paper. That is to say, the researchers didn’t include a category for a ketogenic carnivore diet or even a ketogenic omnivore diet, much less a ketogenic ‘vegetarian’ diet as a control. Keep in mind that keto-carnivore is one of the most common forms of the intentional carnivore diet, and that plant-based keto is probably more popular right now than keto-carnivore. The point is that these unexpected results are examples of the complications caused by confounding factors.

The only other result that showed an advantage for the ‘vegetarians’ was less urinary incontinence, which simply means they didn’t have to pee as often. I haven’t a clue what that might mean. If we were talking about low-carb and keto, I’d suspect that the increased urination on the ‘carnivorous’ diets was related to decreased water retention (i.e., bloating) and hence the water loss that happens as metabolism shifts toward fat-burning. But since we can be confident that such a diet wasn’t included in the study, these results remain anomalous. Of all the things that meat gets blamed for, I’ve never heard anyone suggest that it causes most people to urinate incessantly. That is odd. Anyway, it’s not exactly a life-threatening condition, even if it were caused by carnivory. It might have something to do with higher fat combined with higher carbs, in the way that this combination also contributes to obesity, whereas high-fat/low-carb and low-fat/high-carb do not predispose one to fat gain. The ‘vegetarianism’ in this study was conflated with a low-fat diet, but all four categories apparently involved varying degrees of higher carb intake.

The basic conclusion is that ‘vegetarians’, including vegans and pescetarians, have on average poorer health across the board, with a few possible exceptions. In particular, they suffer more from chronic diseases and report higher impairment from health disorders. Also, not only these ‘vegetarians’ but also meat-eaters who ate a largely plant-based diet (“rich in fruits and vegetables”) consult doctors more often, even as ‘vegetarians’ are inconsistent about preventative healthcare such as check-ups and vaccinations. Furthermore, “subjects with a lower animal fat intake demonstrate worse health care practices,” whatever that exactly means. Generally, ‘vegetarians’ “have a lower quality of life.”

These are interesting results since the researchers were controlling for such things as wealth and poverty, and so it wasn’t an issue of access to healthcare or the quality of one’s environment or level of education. The weakness is that no data were gathered on the macronutrient ratios of the subjects’ diets, and no testing was done on the micronutrient content of the food or potential deficiencies in the individuals. Based on these results, no conclusions can be drawn about causal direction or mechanisms, but they do agree with other research that finds similar results, including for other health conditions, such as greater infertility among vegans and vegetarians. Any single one of these results, especially something like infertility, points toward serious health concerns involving deeper systemic disease and disorder within the body.

But what really stands out is the high rate of mental illness among ‘vegetarians’ (about 10%), twice as high as among average meat-eaters (about 5%), which is to say average Westerners, and that against the background of the Western world having experienced a drastic rise in mental illness over the past couple of centuries. And the only mental illnesses considered in this study were depression and anxiety. The percentage would be much higher if all other psychiatric conditions and neurocognitive disorders were included (personality disorders, psychosis, psychopathy, Alzheimer’s, ADHD, autism, learning disabilities, etc). Think about that: a large number of people on a plant-based diet are struggling at the most basic level of functioning, something I personally understand from decades of chronic depression on the SAD diet. Would you willingly choose a diet that carried a high probability of causing mental health struggles and suffering, neurocognitive issues and decline?

To put this study in context, listen to what Dr. Paul Saladino, trained in psychiatry and internal medicine, has to say in the following video. Jump to around the 19-minute mark, where he goes into the nutritional angle of a carnivore diet. And by carnivore he means fully carnivore, which, if dairy is restricted as it is in his own eating, would also mean ketogenic. A keto-carnivore diet has never been studied. Hopefully, that will change soon. Until then, we have brilliant minds like Dr. Saladino’s to dig into the best evidence presently available.

Ancient Greek View on Olive Oil as Part of the Healthy Mediterranean Diet

“I may instance olive oil, which is mischievous to all plants, and generally most injurious to the hair of every animal with the exception of man, but beneficial to human hair and to the human body generally; and even in this application (so various and changeable is the nature of the benefit), that which is the greatest good to the outward parts of a man, is a very great evil to his inward parts: and for this reason physicians always forbid their patients the use of oil in their food, except in very small quantities, just enough to extinguish the disagreeable sensation of smell in meats and sauces.”
~Socrates’ dialogue with Protagoras

So what did ancient Greeks most often use for cooking? They preferred animal fat, most likely lard. Pigs have a much higher amount of fat than most other animals. And pigs are easy to raise under almost any conditions: cold and hot, fields and forests, plains and mountains, mainlands and islands. Because of this, lard is one of the few common features in traditional societies, including the longest-lived populations.

That was true of the ancient Greeks, but it has been true ever since in many parts of the world, especially in Europe but also in Asia. This continued to be true as the Western world expanded with colonialism and new societies formed. As Nina Teicholz notes, “saturated fats of every kind were consumed in great quantities. Americans in the nineteenth century ate four to five times more butter than we do today, and at least six times more lard” (The Big Fat Surprise).

To return to the Greeks, the modern population is not following a traditional diet. Prior to the World War era, pork and lard were abundant in the Greek diet. But during wartime, the pig population was decimated by violence, the disruption of the food system, and the confiscation of pigs to feed the military.

The same thing happened to the most pig-obsessed culture in history, the long-lived Okinawans, when the Japanese during WWII stole or killed all of their pigs; the Japanese perceived these shamanistic rural people on an isolated island to be a separate race and so treated them as less worthy. The Okinawans’ independence depended on their raising pigs, and that was taken away from them.

When the Greeks and Okinawans were studied after the war, the diet observed was not the diet that had existed earlier, the traditional lard-based and nutrient-dense diet that most of the population had spent most of their lives eating. They were long-lived not because of the lack of lard but because lard had once been abundant.

So something like olive oil, once used primarily as lamp fuel, was turned to as a replacement for the lost lard. Olive oil was a poverty food used out of necessity, not out of preference. It is a great credit to modern marketing and propaganda that olive oil has been sold as a healthy oil when, in fact, most olive oil bought in the store is rancid. Olive oil is essentially a fruit juice, which is why it can’t be kept long before going bad, and maybe why it gained a bad reputation in ancient Greece.

Lard and other animal fats, on the other hand, because they are heavily saturated, have a long shelf life and don’t readily oxidize when used for cooking. Also, unlike vegetable oils, animal fats from pastured animals are filled with fat-soluble vitamins and omega-3 fatty acids, essential to health and longevity. How did this traditional knowledge, going back to the ancient world, get lost in a single generation of devastating war?

* * *

American Heart Association’s “Fat and Cholesterol Counter” (1991)

Even the harm from hydrogenated fat gets blamed on saturated fat, since the hydrogenation process turns some small portion of it saturated; this ignores the heavy damage and inflammatory response caused by oxidation (both in the industrial processing and in cooking). Not to mention that those hydrogenated industrial seed oils are filled with omega-6 fatty acids, the main reason they are so inflammatory. Saturated fat, on the other hand, is not inflammatory at all. This obsession with saturated fat is so strange. It never made any sense from a scientific perspective. When the obesity epidemic began, along with all that went with it, the consumption of saturated fat by Americans had already been dropping steadily for decades, ever since the invention of industrial seed oils in the late 1800s and the fear about meat stoked by Upton Sinclair’s muckraking novel about the meatpacking industry, The Jungle.

The amount of saturated fat and red meat in the diet has declined over the past century, replaced with those industrial seed oils and lean white meat, along with fruits and vegetables — all of which have been increasing.** Chicken, in particular, replaced beef, and what stands out about chicken is that, like those industrial seed oils, it is high in the inflammatory omega-6 fatty acids. How could saturated fat be causing the greater rates of heart disease and such when people were eating less of it? This scapegoating wasn’t only unscientific but blatantly irrational. All of this was known way back when Ancel Keys went on his anti-fat crusade (The Creed of Ancel Keys). It wasn’t a secret. And it required cherry-picked data and convoluted rationalizations to explain away.

Worse than removing saturated fat when it’s not a health risk is the fact that it is actually an essential nutrient for health: “How much total saturated do we need? During the 1970s, researchers from Canada found that animals fed rapeseed oil and canola oil developed heart lesions. This problem was corrected when they added saturated fat to the animals diets. On the basis of this and other research, they ultimately determined that the diet should contain at least 25 percent of fat as saturated fat. Among the food fats that they tested, the one found to have the best proportion of saturated fat was lard, the very fat we are told to avoid under all circumstances!” (Millie Barnes, The Importance of Saturated Fats for Biological Functions).

It is specifically lard that has been most removed from the diet, and this is significant as lard was central to the American diet until this past century: “Pre-1936 shortening is comprised mainly of lard while afterward, partially hydrogenated oils came to be the major ingredient” (Nina Teicholz, The Big Fat Surprise, p. 95); “Americans in the nineteenth century ate four to five times more butter than we do today, and at least six times more lard” (p. 126). And what about the Mediterranean people who supposedly are so healthy because of their love of olive oil? “Indeed, in historical accounts going back to antiquity, the fat more commonly used in cooking in the Mediterranean, among peasants and the elite alike, was lard” (p. 217).

Jason Prall notes that long-lived populations ate “lots of meat” and, specifically, “They all ate pig. I think pork was the only common animal that we saw in the places that we went” (Longevity Diet & Lifestyle Caught On Camera w/ Jason Prall). The famously long-lived Okinawans also partake of everything from pigs, such that their entire culture and religion was centered around pigs (Blue Zones Dietary Myth). Lard, in case you didn’t know, comes from pigs. Pork and lard are found in so many diets for the simple reason that pigs can live in diverse environments, from mountainous forests to tangled swamps to open fields, and they are a food source available year round.

Blue Zones Dietary Myth

And one of the animal foods so often overlooked is lard: “In the West, the famous Roseto Pennsylvanians also were great consumers of red meat and saturated fat. Like traditional Mediterraneans, they ate more lard than olive oil (olive oil was too expensive for everyday cooking and too much in demand for other uses: fuel, salves, etc). Among long-lived societies, one of the few commonalities was lard, as pigs are adaptable creatures that can be raised almost anywhere” (Eat Beef and Bacon!). […]

Looking back at their traditional diet, Okinawans have not consumed many grains, added sugars, industrial vegetable oils, or highly processed foods, and they still eat less rice than other Japanese: “Before 1949 the Okinawans ate NO Wheat and little rice” (Julianne Taylor, The Okinawan secret to health and longevity – no wheat?). Also, similar to the Mediterranean people (another population studied after the devastation of WWII), who didn’t use much olive oil until recently, Okinawans traditionally cooked everything in lard that would have come from nutrient-dense pigs, the fat being filled with omega-3s and fat-soluble vitamins. And consider that about half the fat in lard is monounsaturated, the same kind of fat that is deemed healthy in olive oil.

“According to gerontologist Kazuhiko Taira, the most common cooking fat used traditionally in Okinawa is a monounsaturated fat-lard. Although often called a “saturated fat,” lard is 50 percent monounsaturated fat (including small amounts of health-producing antimicrobial palmitoleic acid), 40 percent saturated fat and 10 percent polyunsaturated. Taira also reports that healthy and vigorous Okinawans eat 100 grams each of pork and fish each day [7]” (Wikipedia, Longevity in Okinawa).

It’s not only the fat, though. As with most traditional populations, Okinawans ate all parts of the animal, including the nutritious organ meat (and the skin, ears, eyes, brains, etc). By the way, besides pork, they also ate goat meat. There would have been a health benefit from their eating some of their meat raw (e.g., goat) or fermented (e.g., fish), as some nutrients are destroyed in cooking. The small amount of soy that Okinawans ate in the past was mostly tofu fermented for several months, and fermentation is one of those healthy preparation techniques widely used in traditional societies. They do eat some unfermented tofu as well, but I’d point out that it typically is, or used to be, fried in lard. […]

The most popular form of pork in the early 1900s was tonkatsu, which, by the way, was originally fried in animal fat according to an 1895 cookbook (butter in that recipe, but probably lard before that early period of Westernization). “Several dedicated tonkatsu restaurants cropped up around the 1920s to ’40s, with even more opening in the ’50s and ’60s, after World War II — the big boom period for tonkatsu. […] During the Great Depression of the 1930s, a piece of tonkatsu, which could be bought freshly cooked from the butcher, became the ultimate affordable payday treat for the poor working class. The position of tonkatsu as everyman food was firmly established.” This pork-heavy diet was what most Japanese were eating prior to World War II, but it wouldn’t survive the conflict, as food deprivation came to afflict the population long afterwards.

Comment by gp

I just finished reading The Blue Zones and enjoyed it very much, but I was wondering about something that was not addressed in great detail. All of the diets discussed other than the Adventists (Sardinia, Okinawa and Nicoya) include lard, which I understand is actually used in significant quantities in some or all of those places. You describe (Nicoyan) Don Faustino getting multiple 2-liter bottles filled with lard at the market. Does he do this every week, and if so, what is he using all of that lard for? In Nicoya and Sardinia, eggs and dairy appear to play a large role in the daily diet. Your quote from Philip Wagner indicates that the Nicoyans were eating eggs three times a day (sometimes fried in lard), in addition to some kind of milk curd.

The Blue Zones Solutions by Dan Buettner
by Julia Ross (another version on the author’s website)

As in The Blue Zones, his earlier paean to the world’s traditional diets and lifestyles, author Buettner’s new book begins with detailed descriptions of centenarians preparing their indigenous cuisines. He finishes off these introductory tales with a description of a regional Costa Rican diet filled with eggs, cheese, meat and lard, which he dubs “the best longevity diet in the world.”

Then Buettner turns to how we’re to adapt this, and his other model eating practices, into our current lives. At this point he suddenly presents us with a twenty-first century pesco-vegan regimen that is the opposite of the traditional food intake that he has just described in loving detail. He wants us to fast every twenty-four hours by eating only during an eight-hour period each day. He wants us to eat almost no meat, poultry, eggs or dairy products at any time. Aside from small amounts of olive oil, added fats are not even mentioned, except to be warned against.

How Much Soy Do Okinawans Eat?
by Kaayla Daniel

There are other credibility problems with the Okinawa Centenarian Study, at least as interpreted in the author’s popular books. In 2001, Dr. Suzuki reported in the Asia Pacific Journal of Clinical Nutrition that “monounsaturates” were the principal fatty acids in the Okinawan diet. In the popular books, this was translated into a recommendation for canola oil, a genetically modified version of rapeseed oil developed in Canada that could not possibly have become a staple of anyone’s diet before the 1980s. According to gerontologist Kazuhiko Taira, the most common cooking fat used traditionally in Okinawa is a very different monounsaturated fat-lard. Although often called a “saturated fat,” lard is 50 percent monounsaturated fat (including small amounts of health-producing antimicrobial palmitoleic acid), 40 percent saturated fat and 10 percent polyunsaturated. Taira also reports that healthy and vigorous Okinawans eat 100 grams each of pork and fish each day. Thus, the diet of the long-lived Okinawans is actually very different from the kind of soy-rich vegan diet that Robbins recommends.

Nourishing Diets:
How Paleo, Ancestral and Traditional Peoples Really Ate

by Sally Fallon Morell
pp. 263-270
(a version of the following can be found here)

From another source,7 we learn that:

Traditional foods of Okinawa are extremely varied, remarkably nutrient-dense as are all traditional foods and strictly moderated with the philosophy of hara hachi bu [eat until you are 80 percent full]. While the diet of Okinawa is, indeed, plant-based it is most certainly not “low fat” as has been posited by some writer-researchers about the native foods of Okinawa. Indeed, all those stir fries of bitter melon and fresh vegetables found in Okinawan bowls are fried in lard and seasoned with sesame oil. I remember fondly that a slab of salt pork graced every bowl of udon I slurped up while living on the island. Pig fat is not, as you can imagine, a low-fat food yet the Okinawans are fond of it. Much of the fat consumed is pastured as pigs are commonly raised at home in the gardens of Okinawan homes. Pork and lard, like avocado and olive oil, are a remarkably good source of monounsaturated fatty acid and, if that pig roots around on sunny days, it is also a remarkably good source of vitamin D.

The diet of Okinawa also includes considerably more animal products and meat—usually in the form of pork—than that of the mainland Japanese or even the Chinese. Goat and chicken play a lesser, but still important, role in Okinawan cuisine. Okinawans average about 100 grams or one modest portion of meat per person per day. Animal foods are important on Okinawa and, like all food, play a role in the population’s general health, well-being and longevity. Fish plays an important role in the cooking of Okinawa as well. Seafoods eaten are various and numerous—with Okinawans averaging about 200 grams of fish per day.

Buettner implies that the Okinawans do not eat much fish, but in fact, they eat quite a lot, just not as much as Japanese mainlanders.

The Okinawan diet became a subject of interest after the publication of a 1996 article in Health Magazine about the work of gerontologist Kazuhiko Taira,8 who described the Okinawan diet as “very healthy—and very, very greasy.” The whole pig is eaten, he noted, everything from “tails to nails.” Local menus offer boiled pig’s feet, entrail soup and shredded ears. Pork is marinated in a mixture of soy sauce, ginger, kelp and small amounts of sugar, then sliced or chopped for stir-fry dishes. Okinawans eat about 100 grams of meat per day—compared to 70 grams in Japan and just over 20 grams in China—and at least an equal amount of fish, for a total of about 200 grams per day, compared to 280 grams per person per day of meat and fish in America. Lard—not vegetable oil—is used in cooking. […]

What’s clear is that the real Okinawan longevity diet is an embarrassment to modern diet gurus. The diet was and is greasy and good, with the largest proportion of calories coming from pork and pork fat, and many additional calories from fish; those who reach old age eat more animal protein and fat than those who don’t. Maybe that’s what gives the Okinawans the attitudes that Buettner so admires, “an affable smugness” that makes it easy to “enjoy today’s simple pleasures.”

Hara Hachi Bu: Lessons from Okinawa
by Jenny McGruther

Traditional foods of Okinawa are extremely varied, remarkably nutrient-dense as are all traditional foods and strictly moderated with the philosophy of hara hachi bu. While the diet of Okinawa is, indeed, plant-based it is most certainly not “low fat” as has been posited by some writer-researchers about the native foods of Okinawa. Indeed, all those stir-fries of bitter melon and fresh vegetables found in Okinawan bowls are fried in lard and seasoned with sesame oil. I remember fondly that a slab of salt pork graced every bowl of udon I slurped up while living on the island. Pig fat is not, as you can imagine, a low-fat food yet the Okinawans are fond of it. Much of the fat consumed is pastured as pigs are commonly raised at home in the gardens of Okinawan homes. Pork and lard, like avocado and olive oil, are a remarkably good source of monounsaturated fatty acid and, if that pig roots around on sunny days, it is also a remarkably good source of vitamin D.

Epilepsy Not Treated With Ketosis

Silas Weir Mitchell was a famous doctor who first learned about neurological disease during his service in the American Civil War. He is most well known for his views on hysteria and neurasthenia, but he was considered an expert on other neurological conditions as well. One area in which he was respected was the treatment of epilepsy, for which he preferred to use drugs. “Despite the prevalent views on lifestyle modification as a treatment for epilepsy during this time period, as well as Mitchell’s own development of the “rest cure” for certain disease states, he was not a proponent of these types of interventions for epilepsy” (David B. Burkholder & Christopher J. Boes, Silas Weir Mitchell on Epilepsy Therapy in the Late 19th to Early 20th Centuries).

In his writings on neurasthenia, he had articulated a common view of that disease in terms of nerves and energy, libido and sexuality. And he applied a similar understanding to epilepsy: “Still, in Mitchell’s first discussion of amyl nitrite as an abortive therapy, he clearly agreed with a common thought of the day by attributing the patient’s epilepsy to sexual vices, stating he had partaken in “…great excess, and that the punishment was distinctly born of the offence” (Burkholder & Boes). But in 1912, he questioned his prior causal explanations, writing that, “It is conceivable that in nerve centres normal or abnormal substances may accumulate until they result in irritative symptoms and discharges of neural energy. But how then could this sequence be arrested by a mere sensory stimulation, like a ligature on an arm, or by abruptly dilating the cerebral vessels with amyl? The explosions would only be put off for the minute; the activating poison would remain.” These doubts were expressed when he was in his early 80s, after a long career in medicine.

Still, he never suspected that diet or lifestyle played any role. This is strange, considering his professional experience using diet and lifestyle for those suffering from neurasthenia, a neurological disorder like epilepsy. Even in his theorizing, the factors he considered for the two conditions overlapped to some degree, both in specific details and in general framework. Yet for epilepsy, he somehow couldn’t make the same connection between physical health and mental health. Meanwhile, others were attempting to make such connections. There was much experimentation going on with epilepsy, including dietary protocols.

William Spratling, in Epilepsy and Its Treatment (1904), partly shared Mitchell’s assessment, writing that he had “been unable to determine that different foods have any specific effect on epilepsy itself beyond that which they have on the organism in general.” That didn’t stop him from suggesting a mixed, balanced diet that, though it did not exclude carbohydrates, told epileptics to eat moderately and slowly while avoiding pastries, alcohol, and over-sweetened drinks. In certain extreme cases, he went even further, asserting that, “Foods should be in liquid form and highly nutritious from the start. Various preparations of milk, eggs, and beef extracts may be given; but plain peptonized milk is by far the best food of all. It should be given often and in small amounts.”

Spratling’s professional advice for treatment in some cases could potentially have been ketogenic, if not in a systematic manner. The same might also have been true of Sir J. Russell Reynolds’ even earlier 1862 epileptic protocol of avoiding “Salted meats, pastry, preserved vegetables, and cheese” (Epilepsy: Its Symptoms, Treatment, and Relation to Other Chronic Convulsive Diseases). Besides openly advocated low-carb diets like that of William Banting, many scientific experts, medical practitioners, public intellectuals, and popular writers of that era flirted around the edges of restricting starches and sugar for various reasons, though not to treat epilepsy. That is significant, since the average diet was already far lower in carbohydrates than what was seen in the following generations. Some patients would have found relief from seizures through ketosis without realizing what had helped them. The seeming randomness of who did and who did not experience improvements must have been frustrating to doctors of the time.

In 1914, two years after having fallen into self-questioning, Mitchell died without having learned of an effective treatment. Only a few years later, in 1921, came the discovery of dietary ketosis (Rollin Woodyatt) and of the medical use of a ketogenic diet for epileptic seizures (Russell Wilder), although ketosis had been induced for this purpose through fasting as far back as 500 BC. Despite this failure, like so many others, Mitchell approached the territory of a ketogenic diet while entirely missing it, such as in his recommendations of meat and dairy for neurasthenics, which potentially could have put a patient into a state of ketosis. He came so close, though. After graduating from medical college in 1851, he moved to Paris and spent a year studying under Claude Bernard. About a decade later, the British Dr. William Harvey heard Bernard speak about the relationship between diet and diabetes, and he used this information to formulate a low-carb diet to help his patient William Banting lose weight. Banting then popularized this diet, but at that point it had already been in use by others going back to the 1790s.

During Mitchell’s lifetime, most Americans would have still followed a diet where carbohydrates were a small portion of meals and a small percentage of calories. It’s probable that the majority of the population during the 19th century was regularly in a state of ketosis, as the common diet back then consisted of mostly animal foods — what Nina Teicholz describes as the “meat-and-butter-gorging eighteenth and nineteenth centuries” (The Big Fat Surprise; see context of quote in Malnourished Americans). Mitchell himself might have experienced ketosis at different points in his life without realizing it. This wouldn’t have been an unusual thing for most of human existence, if not from a low-carb diet then from caloric restriction, intermittent eating, and fasting — ketosis isn’t exactly hard to achieve in a traditional setting. For example, it used to be standard for Americans to eat only one meal a day (Abigail Carroll, Three Squares) and that was in the context of a labor-intensive rural lifestyle. Sugary cereals, Pop-Tarts, etc were not available for breakfast. And snacking all day on crackers, chips, and cookies simply was not an option.

It’s interesting to note that meat-and-butter, or rather meat-and-milk, was what Mitchell, working as a doctor in the 19th century, told his neurasthenic patients to eat. But for unexplained reasons, he didn’t advise the same or a similar eating pattern for his epileptic patients. Low-carb and animal-based dieting was popular in his lifetime and was used by many doctors for various conditions, often for obesity but far from limited to that. It’s odd that no one made the connection between the ancient practice of fasting for epileptic seizures and the 19th-century practice of potentially ketogenic diets. No one managed to figure this out until the 1920s, and not for a lack of experimentation with diverse alimentary regimens. Then, when it was finally discovered, after a short period of research it was mostly forgotten again for another three-quarters of a century. Yet even now drugs remain the primary treatment for epilepsy, despite ketosis being the most effective, not to mention safest, treatment; and, if ketosis is the normal state of physiological functioning, we might call it a cure for many people.

Being “mostly vegan” is like being “a little pregnant.”

A large number of vegans and vegetarians, according to available data, occasionally cheat by eating various animal foods. I know a vegan who eats fish, which technically would make her a pescetarian, but she is attached to identifying as vegan. The majority of those who try these “plant-based” diets, the data also show, don’t maintain them beyond a short period of time. It’s a small minority that remains on such restrictive diets.

It’s yet to be demonstrated whether veganism, strictly maintained with no instance of cheating, can even be sustained beyond a single generation. That infertility is so common among vegans (and also vegetarians) is an indicator that long-term survival is unlikely. That is similar to what Francis M. Pottenger Jr. discovered when cats were fed a diet contrary to the one they evolved eating. “By the third generation, they didn’t reach adulthood. There was no generation after that” (Health From Generation To Generation).

Another researcher from earlier in the last century, Weston A. Price, studied healthy populations following traditional diets. In his search, he traveled to every continent and specifically looked for groups adhering to an entirely plant-based diet, but he could find no example anywhere in the world. He did find cannibals. Every healthy population ate large amounts of high-quality animal foods, though typically not long pig.

* * *

Interpreting the Work of Dr. Weston A. Price
from Weston A. Price Foundation

One of the purposes of Price’s expedition to the South Seas was to find, if possible, “plants or fruits which together, without the use of animal products, were capable of providing all of the requirements of the body for growth and for maintenance of good health and a high state of physical efficiency.”12 What he found was a population that put great value on animal foods–wild pig and seafood–even groups living inland on some of the larger islands. Even the agricultural tribes in Africa consumed insects and small fish–and these groups were not as robust as the tribes that hunted, fished or kept herds.

“It is significant,” said Price, “that I have as yet found no group that was building and maintaining good bodies exclusively on plant foods. A number of groups are endeavoring to do so with marked evidence of failure.”13

12. Weston A. Price, Nutrition and Physical Degeneration, PPNF, p 109.
13. Weston A. Price, Nutrition and Physical Degeneration, PPNF, p 282.

Studies of Relationships Between Nutritional Deficiencies and (a) Facial and Dental Arch Deformities and (b) Loss of Immunity to Dental Caries Among South Sea Islanders and Florida Indians.
by Weston A. Price

The native foods in practically all the South Sea Islands consisted of a combination of two types; namely, plant foods and sea foods. The former included the roots and tops of several tubers and a variety of fruits. The sea foods consisted chiefly of small forms, both hard- and soft-shelled, and invertebrates, together with fish of various types.

One of the purposes of this trip was to find, if possible, native dietaries consisting entirely of plant foods which were competent for providing all the factors needed for complete and normal physical development without the use of any animal tissues or product.

A special effort was accordingly made to penetrate deeply into the interior of the two largest Islands where the inhabitants were living quite remote from the sea, with the hope that groups of individuals would be found living solely on a vegetarian diet. Not only were no individuals or groups found, even in the interior, who were not frequently receiving shell fish from the sea, but I was informed that they recognized that they could not live over three months in good health without getting something from the sea. A native interpreter informed me that this had been one of the principal causes of bitter warfare between the hill tribes and coast tribes of that and all of the Pacific Islands, since the hill people could not exist without some sea foods to supplement their abundant and rich vegetable diet of the mountain country.

He informed me also that even during the periods of bitter warfare the people from the mountain district would come down to the sea during the night and place in caches delicious plants which grew only at the higher altitudes. They would return the following night to obtain the sea foods that were placed in the same caches by the people from the sea. He stated that even during warfare these messengers would not be captured or disturbed.

This guide and many others explained to me that cannibalism had its origin in the recognition by the hill people that the livers and other organs of their enemies from the coast provided the much needed chemicals which were requisite to supplement the plant foods. Several highly informed sons of cannibals and a few who acknowledged that they had eaten “long pig” advised me that it was common knowledge that the people who had lived by the sea and who had been able to obtain lots of sea foods, particularly the fishermen, were especially sought for staying a famine. One native told me of having left an Island where he was engaged in fishing because of a tip that came to him that his life was in danger because of his occupation.

Weston Price Looked for Vegans But Found Only Cannibals
by Christopher Masterjohn

This experience is, in part, a testament to the extraordinary nutrition packed into shellfish.  Melissa McEwen recently wrote about this in her post “Being Shellfish,” where she noted that some shellfish are not only more nutritious than meat, but exhibit such a dearth of evidence for sentience and the capacity for suffering that some otherwise-vegans argue that eating shellfish is consistent with the basic ethics of veganism. […]

Ultimately, what this story makes especially clear is that there is an enormous difference between a small amount of animal products and no animal products.  Being “mostly vegan” is like being “a little pregnant.”  As I pointed out in my response to Dr. T. Colin Campbell and my review of Dr. Joel Fuhrman’s Eat to Live, animal products that constitute two percent or ten percent of a person’s diet may make or break the healthfulness of the diet, especially if that small percentage is something incredibly nutrient-dense like clams or oysters.

If someone achieves vibrant health on a vegan diet, I will be happy for them.  We should face the facts, however, that humans with limited access to animal products have often gone to great lengths to include at least some animal products in their diet.  And they’ve done that for a reason.