Carcinogenic Grains

In understanding human health, we have to look at all factors as a package deal. Our gut-brain is a system, as is our entire mind-body. Our relationships, lifestyle, the environment around us — all of it is inseparable. This is true even if we limit ourselves to diet alone. It’s not simply calories in/calories out, macronutrient ratios, or anything else along those lines. It is the specific foods eaten, in which combinations, and in the context of stress, toxins, epigenetic inheritance, gut health, and so much else, that determine what effects manifest in the individual.

There are numerous examples of this. But I’ll stick to a simple one, which involves several factors and the relationship between them. First, red meat is associated with cancer and heart disease. Yet causation is hard to prove, as red meat consumption is associated with many other foods in the standard American diet, such as added sugars and vegetable oils in processed foods. The association might be based on confounding factors that are culture-specific, which can explain why we find societies with heavy meat consumption and little cancer.

So, what else might be involved? We have to consider what red meat is being eaten with, at least in the standard American diet that is used as a control in most research. There are, of course, the added sugars and vegetable oils — they are seriously bad for health and may explain much of the confusion. Saturated fat intake has been dropping since the early 1900s and, in its place, there has been a steady rise in the use of vegetable oils; we now know that highly heated and hydrogenated vegetable oils do severe damage. Also, some of the original research that blamed saturated fat, when re-analyzed, found that sugar had the stronger correlation with heart disease.

Saturated fat, like cholesterol, has been wrongly accused. This misunderstanding has, over multiple generations at this point, led to the early deaths of at least hundreds of millions of people worldwide, as dozens of the wealthiest and most powerful countries enforced it through their official dietary recommendations, which transformed the world’s food system. Like eggs, red meat became the fall guy.

Such things as heart disease are related to obesity, and conventional wisdom tells us that fat makes us fat. Is that true? Not exactly or directly. I was amused to discover that a scientific report commissioned by the British government in 1846 (Experimental Researches on the Food of Animals, and the Fattening of Cattle: With Remarks on the Food of Man. Based Upon Experiments Undertaken by Order of the British Government by Robert Dundas Thomson) concluded that “The present experiments seem to demonstrate that the fat of animals cannot be produced from the oil of the food.” Fat doesn’t make people fat, and it has been observed for centuries that low-carb, meat-eating populations tend to be slim.

So, in most cases, what does cause fat accumulation? It is fat combined with plenty of carbs and sugar that is guaranteed to make us fat; that is to say, fat in the presence of glucose, since the two compete as fuel sources.

Think about what an American meal with red meat looks like. A plate might have a steak with some rolls or slices of bread, combined with a potato and maybe some starchy ‘vegetables’ like corn, peas, or lima beans. Or there will be a hamburger with a bun, a side of fries, and a large sugary drink (‘diet’ drinks are no better, as we now know artificial sweeteners fool the body and so are just as likely to make you fat and diabetic). What is the common factor? Red meat combined with wheat or some other grain, as part of a diet drenched in carbs and sugar (and all of it cooked in or slathered with vegetable oils).

Most Americans have a far greater total intake of carbs, sugar, and vegetable oils than of red meat and saturated fat. The preferred meat of Americans these days is chicken, with fish also being popular. So why do red meat and saturated fat continue to be blamed for the worsening rates of heart disease and metabolic disease? It’s simply not rational, based on the established facts in the field of diet and nutrition. That isn’t to claim that too much red meat couldn’t be problematic. It depends on the total diet. Also, Americans have the habit of grilling their red meat, and grilling increases carcinogens. That could be avoided by not charring one’s meat, but the same caution applies to not burning (or frying) anything one eats, including white meat and plant foods. In terms of this one factor, you’d be better off eating beef roasted with vegetables than going with a plant-based meal that includes foods like french fries, fried okra, grilled vegetable shish kabobs, etc.

Considering all of that, what exactly is the cause of cancer that keeps showing up in epidemiological studies? Sarah Ballantyne has some good answers to that (see quoted passage below). It’s not so much about red meat itself as it is about what red meat is eaten with. The crux of the matter is that Americans eat more starchy carbs, mostly refined flour, than they do vegetables. What Ballantyne explains is that two of the potential causes of cancer associated with red meat only occur in a diet deficient in vegetables and abundant in grains. It is the total diet as seen in the American population that is the cause of high rates of cancer.

Just as a heavy meat diet without grains is not problematic, a heavy carb diet without grains is not necessarily problematic either. Some of the healthiest populations eat lots of carbs like sweet potatoes, but you won’t find any healthy population that eats as many grains as Americans do. There are many issues with grains considered in isolation (read the work of David Perlmutter or any number of writers on the paleo diet), but it is grains combined with certain other foods that particularly contribute to health concerns.

Then again, some of this is about proportion. For most of the history of agriculture, humans ate small amounts of grains as an occasional food. Grains tended to be stored for hard times or for trade, or else turned into alcohol to be mixed with water from unclean sources. The shift to large amounts of grains made into refined flour is an evolutionarily unique dilemma our bodies aren’t designed to handle. The first accounts of white bread are found in texts from slightly over two millennia ago, and most Westerners couldn’t afford white bread until the past few centuries, when industrialized milling began. Before that, people tended to eat the foods that were available and didn’t mix them as much (e.g., eating fruits and vegetables in season). Hamburgers were invented only about a century ago. The constant combining of red meat and grains is not something we are adapted for. Perhaps it shouldn’t surprise us that harm to our health results.

Red meat can be a net loss to health or a net gain. It depends not on the red meat, but what is and isn’t eaten with it. Other factors matter as well. Health can’t be limited to a list of dos and don’ts, even if such lists have their place in the context of more detailed knowledge and understanding. The simplest solution is to eat as most humans ate for hundreds of thousands of years, and more than anything else that means avoiding grains. Even without red meat, many people have difficulties with grains.

Let’s return to the context of evolution. Hominids have been eating fatty red meat for millions of years (early humans having prized red meat from blubbery megafauna until their mass extinction), and yet meat-eating hunter-gatherers rarely get cancer, heart disease, or any of the other modern ailments. How long ago was it when the first humans ate grains? About 12 thousand years ago. Most humans on the planet never touched a grain until the past few millennia. And fewer still included grains with almost every snack and meal until the past few generations. So, what is this insanity of government dietary recommendations putting grains as the base of the food pyramid? Those grains are feeding the cancerous microbes, and doing much else that is harmful.

In conclusion, is red meat bad for human health? It depends. Red meat that is charred or heavily processed, combined with wheat and other carbs, lots of sugar and vegetable oils, and few nutritious vegetables, well, that would be a shitty diet that will inevitably lead to horrible health consequences. Then again, the exact same diet minus the red meat would still be a recipe for disease and early death. Yet under other conditions, red meat can be part of a healthy diet. Even a ton of pasture-raised red meat (with plenty of nutrient-dense organ meats) combined with an equal amount of organic vegetables (grown on healthy soil, bought locally, and eaten in season), to the exclusion of grains (especially refined flour) and with limited intake of all the other crap, would be one of the healthiest diets you could eat.

On the other hand, if you are addicted to grains, as many are, and can’t imagine a world without them, you would be wise to avoid red meat entirely. Assuming you have any concerns about cancer, you should choose one or the other but not both. I would note, though, that there are many other reasons to avoid grains, while there are no other known reasons to avoid red meat, at least as far as serious health concerns go, although some people exclude red meat for other reasons such as digestion issues. The point is that whether or not you eat red meat is a personal choice (based on taste, ethics, etc.), not so much a health choice, as long as we separate out grains. That is all we can say for certain based on present scientific knowledge.

* * *

We’ve known about this for years now. Isn’t it interesting that no major health organization, scientific institution, corporate news outlet, or government agency has ever warned the public about the risk factors of carcinogenic grains? Instead, we get major propaganda campaigns to eat more grains because that is where the profit is for big ag, big food, and big oil (that makes farm chemicals and transports the products of big ag and big food). How convenient! It’s nice to know that corporate profit is more important than public health.

But keep listening to those who tell you that cows are destroying the world, even though there are fewer cows in North America than there once were buffalo. Yeah, monocultural GMO crops immersed in deadly chemicals that destroy soil and deplete nutrients are going to save us, not the kind of traditional grazing land that has existed for tens of millions of years. So, sure, we could go on producing massive yields of grains in a utopian fantasy beloved by technocrats and plutocrats that further disconnects us from the natural world and our evolutionary origins, an industrial food system dependent on turning the whole world into endless monocrops denatured of all other life, making entire regions into ecological deserts that push us further into mass extinction. Or we could return to traditional ways of farming and living, with a more traditional diet largely of animal foods (meat, fish, eggs, dairy, etc.) balanced with an equal amount of vegetables, the original hunter-gatherer diet.

Our personal health is important. And it is intimately tied to the health of the earth. Civilization as we know it was built on grains. That wasn’t necessarily a problem when grains were a small part of the diet and populations were small. But is it still a sustainable socioeconomic system as part of a healthy ecological system? No, it isn’t. So why do we continue to do more of the same that caused our problems in the hope that it will solve our problems? As we think about how different parts of our diet work together to create conditions of disease or health, we need to begin thinking this way about our entire world.

* * *

Paleo Principles
by Sarah Ballantyne

While this often gets framed as an argument for going vegetarian or vegan, it’s actually a reflection of the importance of eating plenty of plant foods along with meat. When we take a closer look at these studies, we see something extraordinarily interesting: the link between meat and cancer tends to disappear once the studies adjust for vegetable intake. Even more exciting, when we examine the mechanistic links between meat and cancer, it turns out that many of the harmful (yes, legitimately harmful!) compounds of meat are counteracted by protective compounds in plant foods.

One major mechanism linking meat to cancer involves heme, the iron-containing compound that gives red meat its color (in contrast to the nonheme iron found in plant foods). Where heme becomes a problem is in the gut: the cells lining the digestive tract (enterocytes) metabolize it into cytotoxic compounds (meaning toxic to living cells), which can then damage the gut barrier (specifically the colonic mucosa; see page 67), cause cell proliferation, and increase fecal water toxicity—all of which raise cancer risk. Yikes! In fact, part of the reason red meat is linked with cancer far more often than with white meat could be due to their differences in heme content; white meat (poultry and fish) contains much, much less.

Here’s where vegetables come to the rescue! Chlorophyll, the pigment in plants that makes them green, has a molecular structure that’s very similar to heme. As a result, chlorophyll can block the metabolism of heme in the intestinal tract and prevent those toxic metabolites from forming. Instead of turning into harmful by-products, heme ends up being metabolized into inert compounds that are no longer toxic or damaging to the colon. Animal studies have demonstrated this effect in action: one study on rats showed that supplementing a heme-rich diet with chlorophyll (in the form of spinach) completely suppressed the pro-cancer effects of heme. All the more reason to eat a salad with your steak.

Another mechanism involves L-carnitine, an amino acid that’s particularly abundant in red meat (another candidate for why red meat seems to disproportionately increase cancer risk compared to other meats). When we consume L-carnitine, our intestinal bacteria metabolize it into a compound called trimethylamine (TMA). From there, the TMA enters the bloodstream and gets oxidized by the liver into yet another compound, trimethylamine-N-oxide (TMAO). This is the one we need to pay attention to!

TMAO has been strongly linked to cancer and heart disease, possibly due to promoting inflammation and altering cholesterol transport. Having high levels of it in the bloodstream could be a major risk factor for some chronic diseases. So is this the nail in the coffin for meat eaters?

Not so fast! An important study on this topic published in 2013 in Nature Medicine sheds light on what’s really going on. This paper had quite a few components, but one of the most interesting has to do with gut bacteria. Basically, it turns out that the bacteria group Prevotella is a key mediator between L-carnitine consumption and having high TMAO levels in our blood. In this study, the researchers found that participants with gut microbiomes dominated by Prevotella produced the most TMA (and therefore TMAO, after it reached the liver) from the L-carnitine they ate. Those with microbiomes high in Bacteroides rather than Prevotella saw dramatically less conversion to TMA and TMAO.

Guess what Prevotella loves to snack on? Grains! It just so happens that people with high Prevotella levels tend to be those who eat grain-based diets (especially whole grain), since this bacterial group specializes in fermenting the type of polysaccharides abundant in grain products. (For instance, we see extremely high levels of Prevotella in populations in rural Africa that rely on cereals like millet and sorghum.) At the same time, Prevotella doesn’t seem to be associated with a high intake of non-grain plant sources, such as fruit and vegetables.

So is it really the red meat that’s a problem . . . or is it the meat in the context of a grain-rich diet? Based on the evidence we have so far, it seems that grains (and the bacteria that love to eat them) are a mandatory part of the L-carnitine-to-TMAO pathway. Ditch the grains, embrace veggies, and our gut will become a more hospitable place for red meat!

* * *

Georgia Ede has a detailed article about the claim of meat causing cancer. In it, she provides several useful summaries of and quotes from the scientific literature.

WHO Says Meat Causes Cancer?

In November 2013, 23 cancer experts from eight countries gathered in Norway to examine the science related to colon cancer and red/processed meat. They concluded:

“…the interactions between meat, gut and health outcomes such as CRC [colorectal cancer] are very complex and are not clearly pointing in one direction…. Epidemiological and mechanistic data on associations between red and processed meat intake and CRC are inconsistent and underlying mechanisms are unclear… Better biomarkers of meat intake and of cancer occurrence and updated food composition databases are required for future studies.” 1) To read the full report: http://www.ncbi.nlm.nih.gov/pubmed/24769880 [open access]

Translation: we don’t know if meat causes colorectal cancer. Now THAT is a responsible, honest, scientific conclusion.

How the WHO?

How could the WHO have come to such a different conclusion than this recent international gathering of cancer scientists? As you will see for yourself in my analysis below, the WHO made the following irresponsible decisions:

  1. The WHO cherry-picked studies that supported its anti-meat conclusions, ignoring those that showed either no connection between meat and cancer or even a protective effect of meat on colon cancer risk. These neutral and protective studies were specifically mentioned within the studies cited by the WHO (which makes one wonder whether the WHO committee members actually read the studies referenced in its own report).
  2. The WHO relied heavily on dozens of “epidemiological” studies (which by their very nature are incapable of demonstrating a cause and effect relationship between meat and cancer) to support its claim that meat causes cancer.
  3. The WHO cited a mere SIX experimental studies suggesting a possible link between meat and colorectal cancer, four of which were conducted by the same research group.
  4. THREE of the six experimental studies were conducted solely on RATS. Rats are not humans and may not be physiologically adapted to high-meat diets. All rats were injected with powerful carcinogenic chemicals prior to being fed meat. Yes, you read that correctly.
  5. Only THREE of the six experimental studies were human studies. All were conducted with a very small number of subjects and were seriously flawed in more than one important way. Examples of flaws include using unreliable or outdated biomarkers and/or failing to include proper controls.
  6. Some of the theories put forth by the WHO about how red/processed meat might cause cancer are controversial or have already been disproved. These theories were discredited within the texts of the very same studies cited to support the WHO’s anti-meat conclusions, again suggesting that the WHO committee members either didn’t read these studies or deliberately omitted information that didn’t support the WHO’s anti-meat position.

Does it matter whether the WHO gets it right or wrong about meat and cancer? YES.

“Strong media coverage and ambiguous research results could stimulate consumers to adapt a ‘safety first’ strategy that could result in abolishment of red meat from the diet completely. However, there are reasons to keep red meat in the diet. Red meat (beef in particular) is a nutrient dense food and typically has a better ratio of N6:N3-polyunsaturated fatty acids and significantly more vitamin A, B6 and B12, zinc and iron than white meat (compared values from the Dutch Food Composition Database 2013, raw meat). Iron deficiencies are still common in parts of the populations in both developing and industrialized countries, particularly pre-school children and women of childbearing age (WHO)… Red meat also contains high levels of carnitine, coenzyme Q10, and creatine, which are bioactive compounds that may have positive effects on health.” 2)

The bottom line is that there is no good evidence that unprocessed red meat increases our risk for cancer. Fresh red meat is a highly nutritious food which has formed the foundation of human diets for nearly two million years. Red meat is a concentrated source of easily digestible, highly bioavailable protein, essential vitamins and minerals. These nutrients are more difficult to obtain from plant sources.

It makes no sense to blame an ancient, natural, whole food for the skyrocketing rates of cancer in modern times. I’m not interested in defending the reputation of processed meat (or processed foods of any kind, for that matter), but even the science behind processed meat and cancer is unconvincing, as I think you’ll agree. […]

Regardless, even if you believe in the (non-existent) power of epidemiological studies to provide meaningful information about nutrition, more than half of the 29 epidemiological studies did NOT support the WHO’s stance on unprocessed red meat and colorectal cancer.

It is irresponsible and misleading to include this random collection of positive and negative epidemiological studies as evidence against meat.

The following quote is taken from one of the experimental studies cited by the WHO. The authors of the study begin their paper with this striking statement:

“In puzzling contrast with epidemiological studies, experimental studies do not support the hypothesis that red meat increases colorectal cancer risk. Among the 12 rodent studies reported in the literature, none demonstrated a specific promotional effect of red meat.” 3)

[Oddly enough, none of these twelve “red meat is fine” studies, which the authors went on to list and describe within the text of the introduction to this article, were included in the WHO report].

I cannot emphasize enough how common it is to see statements like this in scientific papers about red meat. Over and over again, researchers see that epidemiology suggests a theoretical connection between some food and some health problem, so they conduct experiments to test the theory and find no connection. This is why our nutrition headlines are constantly changing. One day eggs are bad for you, the next day they’re fine. Epidemiologists are forever sending well-intentioned scientists on time-consuming, expensive wild goose chases, trying to prove that meat is dangerous, when all other sources–from anthropology to physiology to biochemistry to common sense—tell us that meat is nutritious and safe.

* * *

Below is a good discussion between Dr. Steven Gundry and Dr. Paul Saladino. It’s an uncommon dialogue. Even though Gundry is known for warning against the harmful substances in plant foods, he has shifted toward a plant-based diet, now also warning against too much animal food, or at least too much protein (an issue about IGF-1 that is not relevant to this post). As for Saladino, he is a carnivore and so takes Gundry’s argument against plants to a whole other level. Saladino sees no problem with meat, of course. And his view contradicts what Gundry writes about in his most recent book, The Longevity Paradox.

Anyway, they got onto the topic of TMAO. Saladino points out that fish contains more fully formed TMAO than red meat produces in combination with grain-loving Prevotella. Even vegetables produce TMAO. So, why is beef being scapegoated? It’s pure ignorant idiocy. To further this point, Saladino explained that he has tested the microbiomes of his patients on the carnivore diet, and they come up low in Prevotella. He doesn’t think TMAO is the danger people claim it is. But even if it were, the single safest diet might be the carnivore diet.

Gundry didn’t even disagree. He pointed out that he did testing on patients of his who are long-term vegans now in their 70s. They had extremely high levels of TMAO. He sent their lab results to the Cleveland Clinic for an opinion. The experts there refused to believe that it was possible and so dismissed the evidence. That is the power of dietary ideology when it forms a self-enclosed reality tunnel. Red meat is bad and vegetables are good. The story changes over time. It’s the saturated fat. No, it’s the TMAO. Then it will be something else. Always looking for a rationalization to uphold the preferred dogma.

Malnourished Americans

Prefatory Note

It would be easy to mistake this writing for a carnivore’s rhetoric against the evils of grains and agriculture. I’m a lot more agnostic on the issue than it might seem. But I do come across as strongly opinionated, from decades of personal experience with bad eating habits and their consequences, and my dietary habits were no better when I was a vegetarian.

I’m not so much pro-meat as I am for healthy fats and oils, not only from animal sources but also from plants, with coconut oil and olive oil being two of my favorites. As long as you are getting adequate protein, from whatever source (including vegetarian foods), there is no absolute rule about protein intake. But hunter-gatherers on average do eat more fats and oils than protein (and more than vegetables as well), whether the protein comes from meat or from seeds and nuts (though the protein and vegetables they get are of extremely high quality and, of course, nutrient dense, along with much fiber). Too much protein with too little fat and oil causes rabbit starvation. It’s fat and oil that have higher satiety and, combined with low-carb ketosis, are amazing at eliminating food cravings, addictions, and over-eating.

Besides, I have nothing against plant-based foods. I eat more vegetables on the paleo diet than I did in the past, even when I was a vegetarian, more than any vegetarian I know as well; not just more in quantity but also more in quality. Many paleo and keto dieters have embraced a plant-based diet with varying attitudes about meat and fat. Dr. Terry Wahls, a former vegetarian, reversed her symptoms of multiple sclerosis by formulating a paleo diet that includes massive loads of nutrient-dense vegetables, while adding in nutrient-dense animal foods as well (e.g., liver).

I’ve picked up three books lately that emphasize plants even further. One is The Essential Vegetarian Keto Cookbook, which is pretty much as the title describes it, mostly recipes with some introductory material about ketosis. Another book, Ketotarian by Dr. Will Cole, is likewise about keto vegetarianism, but with leniency toward fish consumption and ghee (the former not strictly vegetarian and the latter not strictly paleo). The most recent I got is The Paleo Vegetarian Diet by Dena Harris, another person with a lenient attitude toward diet. That is what I prefer in my tendency toward ideological impurity. About diet, I’m bi-curious or maybe multi-curious.

My broader perspective is that of traditional foods. This is largely based on the work of Weston A. Price, which I was introduced to long ago by way of the writings of Sally Fallon Morrell (formerly Sally Fallon). It is not a paleo diet in that agricultural foods are allowed, but its advocates share a common attitude with paleolists in the valuing of traditional nutrition and food preparation. Authors from both camps bond over their respect for Price’s work and so often reference those on the other side in their writings. I’m of the opinion, in line with traditional foods, that if you are going to eat agricultural foods then traditional preparation is all the more important (from long-fermented bread and fully soaked legumes to cultured dairy and raw aged cheese). Many paleolists share this opinion and some are fine with such things as ghee. My paleo commitment didn’t stop me from enjoying a white roll for Thanksgiving, adorning it with organic goat butter, and it didn’t kill me.

I’m not so much arguing against all grains in this post as I’m pointing out the problems found at the extreme end of dietary imbalance that we’ve reached this past century: industrialized and processed, denatured and toxic, grain-based/obsessed and high-carb-and-sugar. In the end, I’m a flexitarian who has come to see the immense benefits in the paleo approach, but I’m not attached to it as a belief system. I heavily weigh the best evidence and arguments I can find in coming to my conclusions. That is what this post is about. I’m not trying to tell anyone how to eat. I hope that heads off certain areas of potential confusion and criticism. So, let’s get to the meat of the matter.

Grain of Truth

Let me begin with a quote, share some related info, and then circle back around to putting the quote into context. The quote is from Grain of Truth by Stephen Yafa. It’s a random book I picked up at a secondhand store, and my attraction to it was that the author is defending agriculture and grain consumption. I figured it would be a good balance to my other recent readings. Skimming it, I found one factoid that stuck out. In reference to new industrial milling methods that took hold in the late 19th century, he writes:

“Not until World War II, sixty years later, were measures taken to address the vitamin and mineral deficiencies caused by these grain milling methods. They caught the government’s attention only when 40 percent of the raw recruits drafted by our military proved to be so malnourished that they could not pass a physical and were declared unfit for duty.” (p. 17)

That is remarkable. He is talking about the now infamous highly refined flour, something that never existed before. Even commercial whole wheat breads today, with some fiber added back in, have little in common with what was traditionally made for millennia. My grandparents were of that particular generation that was so severely malnourished, and so that was the world into which my parents were born. The modern health decline that has gained mainstream attention began many generations back. Okay, so put that on the backburner.

Against the Grain

In a post by Dr. Malcolm Kendrick, I was having a discussion in the comments section (and, at the same time, I was having a related discussion on my own blog). Göran Sjöberg brought up James C. Scott’s book about the development of agriculture, Against the Grain — writing that, “This book is very much about the health deterioration, not least through epidemics partly due to compromised immune resistance, that occurred in the transition from hunting and gathering to sedentary mono-crop agriculture state level scale, first in Mesopotamia about five thousand years ago.”

Scott’s view has interested me for a while. I find compelling the way he connects grain farming, legibility, record-keeping, and taxation. There is a reason great empires were built on grain fields, not on potato patches or vegetable gardens, much less cattle ranching. Grain farming is easily observed and measured, tracked and recorded, and that meant it could be widely taxed to fund large centralized governments along with their armies and, later on, their police forces and intelligence agencies. The earliest settled societies arose prior to agriculture, but they couldn’t become major civilizations until the cultivation of grains.

Another commenter, Sasha, responded with what she considered important qualifications: “I think there are too many confounders in transition from hunter gatherers to agriculture to suggest that health deterioration is due to one factor (grains). And since it was members of upper classes who were usually mummified, they had vastly different lifestyles from that of hunter gatherers. IMO, you’re comparing apples to oranges… Also, grain consumption existed in hunter gatherers and probably intensified long before Mesopotamia 5 thousands years ago as wheat was domesticated around 9,000 BCE and millet around 6,000 BCE to use just two examples.”

It is true that pre-neolithic hunter-gatherers, in some cases, sporadically ate grains in small amounts, or at least we have evidence they were doing something with grains, though as far as we know they might have been mixing them with medicinal herbs or using them as a thickener for paints — it’s anyone’s guess. Assuming they were eating those traces of grains we’ve discovered, it surely was nowhere near the level of the neolithic agriculturalists. Furthermore, during the following millennia, grains were radically changed through cultivation. As for the Egyptian elite, they were eating more grains than anyone, as farmers were still forced to partly subsist from hunting, fishing, and gathering.

I’d take the argument much further forward into history. We know from records that, through the 19th century, Americans were eating more meat than bread. Vegetable and fruit consumption was also relatively low and mostly seasonal. Part of that is because gardening was difficult with so many pests. Besides, with so many natural areas around, hunting and gathering remained a large part of the American diet. Even in the cities, wild game was easily obtained at cheap prices. Into the 20th century, hunting and gathering was still important and sustained many families through the Great Depression and World War era when many commercial foods were scarce.

It was different in Europe, though. Mass urbanization happened centuries before it did in the United States. And not much European wilderness was left standing in recent history. But with the fall of the Roman Empire and heading into feudalism, many Europeans returned to a fair amount of hunting and gathering, during which time general health improved in the population. Restrictive laws about land use eventually made that difficult, and the land enclosure movement made it impossible for most Europeans.

Even so, all of that is fairly recent in the big scheme of things. It took many millennia of agriculture before it more fully replaced hunting, fishing, trapping, and gathering. In places like the United States, that change is well within living memory. When some of my ancestors immigrated here in the 1600s, Britain and Europe still relied on plenty of wild food procurement to support their populations. And once here, wild foods were even more plentiful and a lot less work than farming.

Many early American farmers didn’t grow food so much for their own diet as to sell it on the market, sometimes in the form of the popular grain-based alcohols. It was by making alcohol that rural farmers were able to get their product to market without it spoiling. I’m just speculating, but alcohol might have been the most widespread agricultural food of that era because water was often unsafe to drink.

Another commenter, Martin Back, made the same basic point: “Grain these days is cheap thanks to Big Ag and mechanization. It wasn’t always so. If the fields had to be ploughed by draught animals, and the grain weeded, harvested, and threshed by hand, the final product was expensive. Grain became a store of value and a medium of exchange. Eating grains was literally like eating money, so presumably they kept consumption to a minimum.”

In early agriculture, grain was more of a way to save wealth than a staple of the diet. It was saved for purposes of trade and also saved for hard times when no other food was available. What didn’t happen was to constantly consume grain-based foods every day and all day long — going from a breakfast with toast and cereal to lunch with a sandwich and maybe a salad with croutons, and then a snack of crackers in the afternoon before eating more bread or noodles for dinner.

Historical Examples

So, I am partly just speculating. But it’s informed speculation. I base my view on specific examples. The most obvious example is hunter-gatherers, poor by the standards of modern industrialization while maintaining great health, as long as their traditional way of life can be maintained. Many populations that are materially better off in terms of a capitalist society (access to comfortable housing, sanitation, healthcare, an abundance of food in grocery stores, etc) are not better off in terms of chronic diseases.

As the main example I already mentioned, poor Americans have often been a quite healthy lot, as compared to other populations around the world. It is true that poor Americans weren’t particularly healthy in the early colonial period, specifically in Virginia because of indentured servitude. And it’s true that poor Americans today are fairly bad off because of the cheap industrialized diet. Yet for the couple of centuries or so in between, they were doing quite well in terms of health, with lots of access to nutrient-dense wild foods. That point is emphasized by looking at other similar populations at the time, such as back in Europe.

Let’s do some other comparisons. The poor in the Roman Empire did not do well, even when they weren’t enslaved. That was for many reasons, such as growing urbanization and its attendant health risks. When the Roman Empire fell, many of the urban centers collapsed. The poor returned to a more rural lifestyle that depended on a fair amount of wild foods. Studies done on their remains show their health improved during that time. Then at the end of feudalism, with the enclosure movement and the return of mass urbanization, health went back on a decline.

Now I’ll consider the early Egyptians. I’m not sure if there is any info about the diet and health of poor Egyptians. But clearly the ruling class had far from optimal health. It’s hard to make comparisons between then and now, though, because it was an entirely different kind of society. The early Bronze Age civilizations were mostly small city-states that lacked much hierarchy. Early Egypt didn’t even have the most basic infrastructure such as maintained roads and bridges. And the most recent evidence indicates that the pyramid workers weren’t slaves but instead worked freely and seem to have been fed fairly well, whatever that may or may not indicate about their socioeconomic status. The fact that the poor weren’t mummified leaves us with scant evidence that would more directly inform us.

On the other hand, no one can doubt that there have been plenty of poor populations who had truly horrific living standards with much sickness, suffering, and short lifespans. That is particularly true over the millennia as agriculture became ever more central, since that meant periods of abundance alternating with periods of deficiency and sometimes starvation, often combined with weakened immune systems and rampant sickness. That was less the case for the earlier small city-states with less population density and surrounded by the near constant abundance of wilderness areas.

As always, it depends on the specifics we are talking about. And any comparison or conclusion is relative.

My mother grew up in a family that hunted, and at the time there was a certain amount of access to natural areas for many Americans, something that helped a large part of the population get through the Great Depression and the world war era. Nonetheless, by the time of my mother’s childhood, overhunting had depleted most of the wild game (bison, bear, deer, etc. were no longer around), and so her family relied on less desirable foods such as squirrel, raccoon, and opossum; even the fish they ate was less than optimal because it came from waters heavily polluted by the very factories and railroad her family worked for. So, the wild food opportunities weren’t nearly as good as they had been a half century earlier, much less in the prior centuries.

Not All Poverty is the Same

Being poor today means a lot of things that it didn’t mean in the past. The high rates of heavy metal toxicity today have rarely been seen among previous poor populations. By some estimates, 40 percent of global deaths today are linked to pollution, primarily affecting the poor, which is also extremely different from the past. Beyond that, inequality has grown larger than ever before, and that has been strongly correlated with high rates of stress, disease, homicides, and suicides. Such inequality is also seen in terms of climate change, droughts, refugee crises, and war/occupation.

Here is what Sasha wrote in response to me: “I agree with a lot of your points, except with your assertion that “the poor ate fairly well in many societies especially when they had access to wild sources of food”. I know how the poor ate in Russia in the beginning of the 20th century and how the poor eat now in the former Soviet republics and in India. Their diet is very poor even though they can have access to wild sources of food. I don’t know what the situation was for the poor in ancient Egypt but I would be very surprised if it was better than in modern day India or former Soviet Union.”

I’d imagine modern Russia has high inequality similar to the US. As for modern India, it is one of the most impoverished, densely populated, and malnourished societies around. And modern industrialization did major harm to Hindu Indians, because studies show that traditional vegetarians got a fair amount of nutrients from the insects that were mixed in with pre-modern agricultural goods. Both Russia and India have other problems related to neoliberalism that weren’t a factor in the past. It’s an entirely different kind of poverty these days. Even if some Russians have some access to wild foods, I’m willing to bet they have nowhere near the access that was available in previous generations, centuries, and millennia.

Compare modern poverty to that of feudalism. At least in England, feudal peasants were guaranteed to be taken care of in hard times. The Church, a large part of local governance at the time, was tasked with feeding and taking care of the poor and needy, from orphans to widows. They were tight communities that took care of their own, something that no longer exists in most of the world where the individual is left to suffer and struggle. Present Social Darwinian conditions are not the norm for human societies across history. The present breakdown of families and communities is historically unprecedented.

Socialized Medicine & Externalized Costs
An Invisible Debt Made Visible
On Conflict and Stupidity
Inequality in the Anthropocene
Capitalism as Social Control

The Abnormal Norms of WEIRD Modernity

Everything about present populations is extremely abnormal. This is seen in diet as elsewhere. Let me return to the quote I began this post with. “Not until World War II, sixty years later, were measures taken to address the vitamin and mineral deficiencies caused by these grain milling methods. They caught the government’s attention only when 40 percent of the raw recruits drafted by our military proved to be so malnourished that they could not pass a physical and were declared unfit for duty.” * So, what had happened to the health of the American population?

Well, there were many changes. Overhunting, as I already said, made many wild game species extinct or eliminated them from local areas, such that my mother, born in a rural farm state, never saw a white-tailed deer growing up. Also, much earlier, after the Civil War, a new form of enclosure movement happened as laws were passed to prevent people, specifically the then free blacks, from hunting and foraging wherever they wanted (early American laws often protected the rights of anyone to hunt, forage plants, collect timber, etc. from any land that was left open, whether or not it was owned by someone). The carryover from the feudal commons was finally and fully eliminated. It was also the end of the era of free-range cattle ranching, the end having come with the invention of barbed wire. Access to wild foods was further reduced by the creation and enforcement of protected lands (e.g., the federal park system), which very much targeted the poor who up to that point had relied upon wild foods for health and survival.

All of that was combined with mass urbanization and industrialization, with all of their new forms of pollution, stress, and inequality. Processed foods were becoming more widespread at the time. Around the turn of the century, unhealthy and industrialized vegetable oils became heavily marketed and hence popular, replacing butter and lard. Also, muckraking about the meat industry scared Americans off from meat, and consumption precipitously dropped. As such, in the decades prior to World War II, the American diet had already shifted toward what we now know. A new young generation had grown up on that industrialized and processed diet, and those young people were the ones showing up as recruits for the military. This new diet in such a short period had caused mass malnourishment. It was a mass experiment that showed failure early on, and yet we continue the same basic experiment, not only continuing it but making it far worse.

Government officials and health authorities blamed it on bread production. Refined flour had become widely available because of industrialization. This removed all the nutrients that gave any health value to bread. In response, there was a movement to fortify bread, initially enforced by federal law and later by state laws. That helped some, but obviously the malnourishment was caused by many other factors that weren’t appreciated by most at the time, even though this was the same period when Weston A. Price’s work was published. Nutritional science was young at the time, and most nutrients were still undiscovered or else unappreciated. Throwing a few lab-produced vitamins back into food barely scratches the surface of the nutrient-density that was lost.

Most Americans continue to have severe nutritional deficiencies. We don’t recognize this fact because being underdeveloped and sickly has become normalized, maybe even in the minds of most doctors and health officials. Besides, many of the worst symptoms don’t show up until decades later, often as chronic diseases of old age, although increasingly seen among the young. Far fewer Americans today would meet the health standards of World War recruits. It’s been a steady decline, despite the miracles of modern medicine in treating symptoms and delaying death.

* The data on the British show an even earlier shift into malnourishment, because imperial trade brought an industrialized diet sooner to the British population. Also, rural life with a greater diet of wild foods had more quickly disappeared, as compared to the US. The fate of the British in the late 1800s showed what would happen more than a half century later on the other side of the ocean.

Lore of Nutrition
by Tim Noakes
pp. 373-375

The mid-Victorian period between 1850 and 1880 is now recognised as the golden era of British health. According to P. Clayton and J. Rowbotham, 47 this was entirely due to the mid-Victorians’ superior diet. Farm-produced real foods were available in such surplus that even the working-class poor were eating highly nutritious foods in abundance. As a result, life expectancy in 1875 was equal to, or even better, than it is in modern Britain, especially for men (by about three years). In addition, the profile of diseases was quite different when compared to Britain today.

The authors conclude:

[This] shows that medical advances allied to the pharmaceutical industry’s output have done little more than change the manner of our dying. The Victorians died rapidly of infection and/or trauma, whereas we die slowly of degenerative disease. It reveals that with the exception of family planning, the vast edifice of twentieth century healthcare has not enabled us to live longer but has in the main merely supplied methods of suppressing the symptoms of degenerative disease which have emerged due to our failure to maintain mid-Victorian nutritional standards. 48

This mid-Victorians’ healthy diet included freely available and cheap vegetables such as onions, carrots, turnips, cabbage, broccoli, peas and beans; fresh and dried fruit, including apples; legumes and nuts, especially chestnuts, walnuts and hazelnuts; fish, including herring, haddock and John Dory; other seafood, including oysters, mussels and whelks; meat – which was considered ‘a mark of a good diet’ so that ‘its complete absence was rare’ – sourced from free-range animals, especially pork, and including offal such as brain, heart, pancreas (sweet breads), liver, kidneys, lungs and intestine; eggs from hens that were kept by most urban households; and hard cheeses.

Their healthy diet was therefore low in cereals, grains, sugar, trans fats and refined flour, and high in fibre, phytonutrients and omega-3 polyunsaturated fatty acids, entirely compatible with the modern Paleo or LCHF diets.

This period of nutritional paradise changed suddenly after 1875, when cheap imports of white flour, tinned meat, sugar, canned fruits and condensed milk became more readily available. The results were immediately noticeable. By 1883, the British infantry was forced to lower its minimum height for recruits by three inches; and by 1900, 50 per cent of British volunteers for the Boer War had to be rejected because of undernutrition. The changes would have been associated with an alteration in disease patterns in these populations, as described by Yellowlees (Chapter 2).

On Obesity and Malnourishment

There is no contradiction, by the way, between rampant nutritional deficiencies and the epidemic of obesity. Gary Taubes noted that the dramatic rise of obesity in America began early in the last century, which is to say that it is not a problem that came out of nowhere with the present younger generations. Americans have been getting fatter for a while now. Specifically, they were getting fatter while at the same time being malnourished, partly because of refined flour, about as empty a carb as is possible.

Taubes emphasizes the point that this seeming paradox has often been observed among poor populations around the world: a lack of optimal nutrition that leads to ever more weight gain, sometimes with children being skinny to an unhealthy degree only to grow up to be fat. No doubt many Americans in the early 1900s were dealing with much poverty and the lack of nutritious foods that often goes with it. As for today, nutritional deficiencies are different because of enrichment, but they persist nonetheless in many other ways. Also, as Keith Payne argues in The Broken Ladder, growing inequality mimics poverty in the conflict and stress it causes. And inequality has everything to do with food quality, as seen with many poor areas being food deserts.

I’ll give you a small taste of Taubes’s discussion. It is from the introduction to one of his books, published a few years ago. If you read the book, look at the section immediately following the passage below. He gives examples of tribes that were poor, didn’t overeat, and did hard manual labor. Yet they were getting obese, even as nearby tribes sometimes remained a healthy weight. The only apparent difference was what they were eating, not how much they were eating. The populations that saw major weight gain had adopted a grain-based diet, typically because of government rations or government stores.

Why We Get Fat
by Gary Taubes
pp. 17-19

In 1934, a young German pediatrician named Hilde Bruch moved to America, settled in New York City, and was “startled,” as she later wrote, by the number of fat children she saw—“really fat ones, not only in clinics, but on the streets and subways, and in schools.” Indeed, fat children in New York were so conspicuous that other European immigrants would ask Bruch about it, assuming that she would have an answer. What is the matter with American children? they would ask. Why are they so bloated and blown up? Many would say they’d never seen so many children in such a state.

Today we hear such questions all the time, or we ask them ourselves, with the continual reminders that we are in the midst of an epidemic of obesity (as is the entire developed world). Similar questions are asked about fat adults. Why are they so bloated and blown up? Or you might ask yourself: Why am I?

But this was New York City in the mid-1930s. This was two decades before the first Kentucky Fried Chicken and McDonald’s franchises, when fast food as we know it today was born. This was half a century before supersizing and high-fructose corn syrup. More to the point, 1934 was the depths of the Great Depression, an era of soup kitchens, bread lines, and unprecedented unemployment. One in every four workers in the United States was unemployed. Six out of every ten Americans were living in poverty. In New York City, where Bruch and her fellow immigrants were astonished by the adiposity of the local children, one in four children were said to be malnourished. How could this be?

A year after arriving in New York, Bruch established a clinic at Columbia University’s College of Physicians and Surgeons to treat obese children. In 1939, she published the first of a series of reports on her exhaustive studies of the many obese children she had treated, although almost invariably without success. From interviews with her patients and their families, she learned that these obese children did indeed eat excessive amounts of food—no matter how much either they or their parents might initially deny it. Telling them to eat less, though, just didn’t work, and no amount of instruction or compassion, counseling, or exhortations—of either children or parents—seemed to help.

It was hard to avoid, Bruch said, the simple fact that these children had, after all, spent their entire lives trying to eat in moderation and so control their weight, or at least thinking about eating less than they did, and yet they remained obese. Some of these children, Bruch reported, “made strenuous efforts to lose weight, practically giving up on living to achieve it.” But maintaining a lower weight involved “living on a continuous semi-starvation diet,” and they just couldn’t do it, even though obesity made them miserable and social outcasts.

One of Bruch’s patients was a fine-boned girl in her teens, “literally disappearing in mountains of fat.” This young girl had spent her life fighting both her weight and her parents’ attempts to help her slim down. She knew what she had to do, or so she believed, as did her parents—she had to eat less—and the struggle to do this defined her existence. “I always knew that life depended on your figure,” she told Bruch. “I was always unhappy and depressed when gaining [weight]. There was nothing to live for.… I actually hated myself. I just could not stand it. I didn’t want to look at myself. I hated mirrors. They showed how fat I was.… It never made me feel happy to eat and get fat—but I never could see a solution for it and so I kept on getting fatter.”

pp. 33-34

If we look in the literature—which the experts have not in this case—we can find numerous populations that experienced levels of obesity similar to those in the United States, Europe, and elsewhere today but with no prosperity and few, if any, of the ingredients of Brownell’s toxic environment: no cheeseburgers, soft drinks, or cheese curls, no drive-in windows, computers, or televisions (sometimes not even books, other than perhaps the Bible), and no overprotective mothers keeping their children from roaming free.

In these populations, incomes weren’t rising; there were no labor-saving devices, no shifts toward less physically demanding work or more passive leisure pursuits. Rather, some of these populations were poor beyond our ability to imagine today. Dirt poor. These are the populations that the overeating hypothesis tells us should be as lean as can be, and yet they were not.

Remember Hilde Bruch’s wondering about all those really fat children in the midst of the Great Depression? Well, this kind of observation isn’t nearly as unusual as we might think.

How Americans Used to Eat

Below is a relevant passage. It puts into context how extremely unusual has been the high-carb, low-fat diet these past few generations. This is partly what informed some of my thoughts. We so quickly forget that the present dominance of a grain-based diet wasn’t always the case, likely not even in most agricultural societies until quite recently. In fact, the earlier American diet is still within living memory, although those left to remember it are quickly dying off.

Let me explain why the history of diets matters. One of the arguments for forcing official dietary recommendations onto the entire population was the belief that Americans in a mythical past ate less meat, fat, and butter while eating more bread, legumes, and vegetables. This turns out to have been a trick of limited data.

We now know, from better data, that the complete opposite was the case. And we have further data showing that the rise of the conventional diet has coincided with the rise of obesity and chronic diseases. That isn’t to say eating more vegetables is bad for your health, but we do know that even as the average American intake of vegetables has gone up, so have all the diet-related health conditions. During this time, what went down was the consumption of all the traditional foods of the American diet going back to the colonial era: wild game, red meat, organ meat, lard, and butter — all the foods Americans ate in huge amounts prior to the industrialized diet.

What added to the confusion and misinterpretation of the evidence was a matter of timing. Diet and nutrition were first seriously studied right at the moment when, for most populations, the diet had already changed. That was the failure of Ancel Keys’s research on what came to be called the Mediterranean diet (see Sally Fallon Morell’s Nourishing Diets). The population he studied was recuperating from World War II, which had devastated its traditional way of life, including its diet. Keys took the post-war deprivation diet as the historical norm, but the reality was far different. Cookbooks and other evidence from before the war show that this population used to eat higher levels of meat and fat, including saturated fat. So the very people he focused on had grown up and spent most of their lives on a diet that was, at that moment, no longer available because of the disruption of the food system. What good health Keys observed came from a lifetime of eating a different diet. Combined with cherry-picking of data and biased analysis, Keys came to a conclusion that was as wrong as wrong could be.

Slightly earlier, Weston A. Price was able to see a different picture. He intentionally traveled to the places where traditional diets remained fully in place, before the devastation of World War II had happened. Price came to the conclusion that what mattered most of all was nutrient density. Sure, the vegetables eaten would have been of a higher quality than we get today, largely because they were heirloom cultivars grown in healthy soil. Nutrient-dense foods can only come from nutrient-dense soil, whereas today our food is nutrient-deficient because our soil is highly depleted. The same goes for animal foods. Animals pastured on healthy land will produce healthy dairy, eggs, meat, and fat; these foods will be high in omega-3s and the fat-soluble vitamins.

No matter whether it comes from plant sources or animal sources, nutrient density might be the most important factor of all. Fat is meaningful in this context because fat is where the fat-soluble vitamins are found and it is through fat that they are metabolized. In turn, the fat-soluble vitamins play a key role in the absorption and processing of numerous other nutrients, not to mention in numerous functions of the body. Nutrient density and fat density go hand in hand in terms of general health. That is what early Americans were getting in eating so much wild food, not only wild game but also wild greens, fruit, and mushrooms. And nutrient density is precisely what we are lacking today, as the nutrients have been intentionally removed to make more palatable commercial foods.

Once again, this has a class dimension, since the wealthier have more access to nutrient-dense foods. Few poor people could afford to shop at a high-end health food store, even if one were located near their home. But it was quite different in the past, when nutrient-dense foods were available to everyone and sometimes more available to the poor, who were concentrated in rural areas. If we want to improve public health, the first thing we should do is return to this historical norm.

The Big Fat Surprise
by Nina Teicholz
pp. 123-131

Yet despite this shaky and often contradictory evidence, the idea that red meat is a principal dietary culprit has thoroughly pervaded our national conversation for decades. We have been led to believe that we’ve strayed from a more perfect, less meat-filled past. Most prominently, when Senator McGovern announced his Senate committee’s report, called Dietary Goals, at a press conference in 1977, he expressed a gloomy outlook about where the American diet was heading. “Our diets have changed radically within the past fifty years,” he explained, “with great and often harmful effects on our health.” Hegsted, standing at his side, criticized the current American diet as being excessively “rich in meat” and other sources of saturated fat and cholesterol, which were “linked to heart disease, certain forms of cancer, diabetes and obesity.” These were the “killer diseases,” said McGovern. The solution, he declared, was for Americans to return to the healthier, plant-based diet they once ate.

The New York Times health columnist Jane Brody perfectly encapsulated this idea when she wrote, “Within this century, the diet of the average American has undergone a radical shift away from plant-based foods such as grains, beans and peas, nuts, potatoes, and other vegetables and fruits and toward foods derived from animals—meat, fish, poultry, eggs and dairy products.” It is a view that has been echoed in literally hundreds of official reports.

The justification for this idea, that our ancestors lived mainly on fruits, vegetables, and grains, comes mainly from the USDA “food disappearance data.” The “disappearance” of food is an approximation of supply; most of it is probably being eaten, but much is wasted, too. Experts therefore acknowledge that the disappearance numbers are merely rough estimates of consumption. The data from the early 1900s, which is what Brody, McGovern, and others used, are known to be especially poor. Among other things, these data accounted only for the meat, dairy, and other fresh foods shipped across state lines in those early years, so anything produced and eaten locally, such as meat from a cow or eggs from chickens, would not have been included. And since farmers made up more than a quarter of all workers during these years, local foods must have amounted to quite a lot. Experts agree that this early availability data are not adequate for serious use, yet they cite the numbers anyway, because no other data are available. And for the years before 1900, there are no “scientific” data at all.

In the absence of scientific data, history can provide a picture of food consumption in the late eighteenth to nineteenth century in America. Although circumstantial, historical evidence can also be rigorous and, in this case, is certainly more far-reaching than the inchoate data from the USDA. Academic nutrition experts rarely consult historical texts, considering them to occupy a separate academic silo with little to offer the study of diet and health. Yet history can teach us a great deal about how humans used to eat in the thousands of years before heart disease, diabetes, and obesity became common. Of course we don’t remember now, but these diseases did not always rage as they do today. And looking at the food patterns of our relatively healthy early-American ancestors, it’s quite clear that they ate far more red meat and far fewer vegetables than we have commonly assumed.

Early-American settlers were “indifferent” farmers, according to many accounts. They were fairly lazy in their efforts at both animal husbandry and agriculture, with “the grain fields, the meadows, the forests, the cattle, etc, treated with equal carelessness,” as one eighteenth-century Swedish visitor described. And there was little point in farming since meat was so readily available.

The endless bounty of America in its early years is truly astonishing. Settlers recorded the extraordinary abundance of wild turkeys, ducks, grouse, pheasant, and more. Migrating flocks of birds would darken the skies for days. The tasty Eskimo curlew was apparently so fat that it would burst upon falling to the earth, covering the ground with a sort of fatty meat paste. (New Englanders called this now-extinct species the “doughbird.”)

In the woods, there were bears (prized for their fat), raccoons, bobolinks, opossums, hares, and virtual thickets of deer—so much that the colonists didn’t even bother hunting elk, moose, or bison, since hauling and conserving so much meat was considered too great an effort.

A European traveler describing his visit to a Southern plantation noted that the food included beef, veal, mutton, venison, turkeys, and geese, but he does not mention a single vegetable. Infants were fed beef even before their teeth had grown in. The English novelist Anthony Trollope reported, during a trip to the United States in 1861, that Americans ate twice as much beef as did Englishmen. Charles Dickens, when he visited, wrote that “no breakfast was breakfast” without a T-bone steak. Apparently, starting a day on puffed wheat and low-fat milk—our “Breakfast of Champions!”—would not have been considered adequate even for a servant.

Indeed, for the first 250 years of American history, even the poor in the United States could afford meat or fish for every meal. The fact that the workers had so much access to meat was precisely why observers regarded the diet of the New World to be superior to that of the Old. “I hold a family to be in a desperate way when the mother can see the bottom of the pork barrel,” says a frontier housewife in James Fenimore Cooper’s novel The Chainbearer.

Like the primitive tribes mentioned in Chapter 1, Americans also relished the viscera of the animal, according to the cookbooks of the time. They ate the heart, kidneys, tripe, calf sweetbreads (glands), pig’s liver, turtle lungs, the heads and feet of lamb and pigs, and lamb tongue. Beef tongue, too, was “highly esteemed.”

And not just meat but saturated fats of every kind were consumed in great quantities. Americans in the nineteenth century ate four to five times more butter than we do today, and at least six times more lard.

In the book Putting Meat on the American Table, researcher Roger Horowitz scours the literature for data on how much meat Americans actually ate. A survey of eight thousand urban Americans in 1909 showed that the poorest among them ate 136 pounds a year, and the wealthiest more than 200 pounds. A food budget published in the New York Tribune in 1851 allots two pounds of meat per day for a family of five. Even slaves at the turn of the eighteenth century were allocated an average of 150 pounds of meat a year. As Horowitz concludes, “These sources do give us some confidence in suggesting an average annual consumption of 150–200 pounds of meat per person in the nineteenth century.”

About 175 pounds of meat per person per year! Compare that to the roughly 100 pounds of meat per year that an average adult American eats today. And of that 100 pounds of meat, more than half is poultry—chicken and turkey—whereas until the mid-twentieth century, chicken was considered a luxury meat, on the menu only for special occasions (chickens were valued mainly for their eggs). Subtracting out the poultry factor, we are left with the conclusion that per capita consumption of red meat today is about 40 to 70 pounds per person, according to different sources of government data—in any case far less than what it was a couple of centuries ago.

Yet this drop in red meat consumption is the exact opposite of the picture we get from public authorities. A recent USDA report says that our consumption of meat is at a “record high,” and this impression is repeated in the media. It implies that our health problems are associated with this rise in meat consumption, but these analyses are misleading because they lump together red meat and chicken into one category to show the growth of meat eating overall, when it’s just the chicken consumption that has gone up astronomically since the 1970s. The wider-lens picture is clearly that we eat far less red meat today than did our forefathers.

Meanwhile, also contrary to our common impression, early Americans appeared to eat few vegetables. Leafy greens had short growing seasons and were ultimately considered not worth the effort. They “appeared to yield so little nutriment in proportion to labor spent in cultivation,” wrote one eighteenth-century observer, that “farmers preferred more hearty foods.” Indeed, a pioneering 1888 report for the US government written by the country’s top nutrition professor at the time concluded that Americans living wisely and economically would be best to “avoid leafy vegetables,” because they provided so little nutritional content. In New England, few farmers even had many fruit trees, because preserving fruits required equal amounts of sugar to fruit, which was far too costly. Apples were an exception, and even these, stored in barrels, lasted several months at most.

It seems obvious, when one stops to think, that before large supermarket chains started importing kiwis from New Zealand and avocados from Israel, a regular supply of fruits and vegetables could hardly have been possible in America outside the growing season. In New England, that season runs from June through October or maybe, in a lucky year, November. Before refrigerated trucks and ships allowed the transport of fresh produce all over the world, most people could therefore eat fresh fruit and vegetables for less than half the year; farther north, winter lasted even longer. Even in the warmer months, fruit and salad were avoided, for fear of cholera. (Only with the Civil War did the canning industry flourish, and then only for a handful of vegetables, the most common of which were sweet corn, tomatoes, and peas.)

Thus it would be “incorrect to describe Americans as great eaters of either [fruits or vegetables],” wrote the historians Waverly Root and Richard de Rochemont. Although a vegetarian movement did establish itself in the United States by 1870, the general mistrust of these fresh foods, which spoiled so easily and could carry disease, did not dissipate until after World War I, with the advent of the home refrigerator.

So by these accounts, for the first two hundred and fifty years of American history, the entire nation would have earned a failing grade according to our modern mainstream nutritional advice.

During all this time, however, heart disease was almost certainly rare. Reliable data from death certificates is not available, but other sources of information make a persuasive case against the widespread appearance of the disease before the early 1920s. Austin Flint, the most authoritative expert on heart disease in the United States, scoured the country for reports of heart abnormalities in the mid-1800s, yet reported that he had seen very few cases, despite running a busy practice in New York City. Nor did William Osler, one of the founding professors of Johns Hopkins Hospital, report any cases of heart disease during the 1870s and eighties when working at Montreal General Hospital. The first clinical description of coronary thrombosis came in 1912, and an authoritative textbook in 1915, Diseases of the Arteries including Angina Pectoris, makes no mention at all of coronary thrombosis. On the eve of World War I, the young Paul Dudley White, who later became President Eisenhower’s doctor, wrote that of his seven hundred male patients at Massachusetts General Hospital, only four reported chest pain, “even though there were plenty of them over 60 years of age then.” About one fifth of the US population was over fifty years old in 1900. This number would seem to refute the familiar argument that people formerly didn’t live long enough for heart disease to emerge as an observable problem. Simply put, there were some ten million Americans of a prime age for having a heart attack at the turn of the twentieth century, but heart attacks appeared not to have been a common problem.

Was it possible that heart disease existed but was somehow overlooked? The medical historian Leon Michaels compared the record on chest pain with that of two other medical conditions, gout and migraine, which are also painful and episodic and therefore should have been observed by doctors to an equal degree. Michaels catalogs the detailed descriptions of migraines dating all the way back to antiquity; gout, too, was the subject of lengthy notes by doctors and patients alike. Yet chest pain is not mentioned. Michaels therefore finds it “particularly unlikely” that angina pectoris, with its severe, terrifying pain continuing episodically for many years, could have gone unnoticed by the medical community, “if indeed it had been anything but exceedingly rare before the mid-eighteenth century.”

So it seems fair to say that at the height of the meat-and-butter-gorging eighteenth and nineteenth centuries, heart disease did not rage as it did by the 1930s.

Ironically—or perhaps tellingly—the heart disease “epidemic” began after a period of exceptionally reduced meat eating. The publication of The Jungle, Upton Sinclair’s fictionalized exposé of the meatpacking industry, caused meat sales in the United States to fall by half in 1906, and they did not revive for another twenty years. In other words, meat eating went down just before coronary disease took off. Fat intake did rise during those years, from 1909 to 1961, when heart attacks surged, but this 12 percent increase in fat consumption was not due to a rise in animal fat. It was instead owing to an increase in the supply of vegetable oils, which had recently been invented.

Nevertheless, the idea that Americans once ate little meat and “mostly plants”—espoused by McGovern and a multitude of experts—continues to endure. And Americans have for decades now been instructed to go back to this earlier, “healthier” diet that seems, upon examination, never to have existed.

Plowing the Furrows of the Mind

One of the best books I read this past year is The Invisible History of the Human Race by Christine Kenneally. The book covers the type of data that HBDers (human biodiversity advocates) and other hereditarians tend to ignore. Kenneally shows how powerful environment is in shaping thought, perception, and behavior.

What really intrigued me is how persistent patterns can be once set into place. Old patterns get disrupted by violence such as colonialism and mass trauma such as slavery. In the place of the old, something new takes form. But this process isn’t always violent. In some cases, technological innovation can change an entire society.

This is true even for a technology as simple as the plow. Just imagine what impact more complex technologies like computers and the internet will have on society in the coming generations and centuries. Also, over this past century or so, we have seen a greater change to agriculture than has perhaps occurred in all the rest of civilization. Agriculture is becoming industrialized and technologized.

What new social system is being created? How long will it take to become established as a new stable order?

We live in a time of change and we can’t see the end of it. We are like the people who lived when the use of plows first began to spread. All that we know, as was all they knew, is that we are in the midst of change. This inevitably creates fear and anxiety. It is a crisis that has the potential to be more transformative than a world war. It is a force that will be both destructive and creative, but either way it is unpredictable.

* * * *

The Invisible History of the Human Race:
How DNA and History Shape Our Identities and Our Futures
by Christine Kenneally
Kindle Locations 2445-2489

Catastrophic events like the plague or slavery are not the only ones that echo down the generations. Widespread and deeply held beliefs can be traced to apparently benign events too, like the invention of technology. In the 1970s the Danish economist Ester Boserup argued that the invention of the plow transformed the way men and women viewed themselves. Boserup’s idea was that because the device changed how farming communities labored, it also changed how people thought about labor itself and about who should be responsible for it.

The main farming technology that existed when the plow was introduced was shifting cultivation. Using a plow takes a lot of upper-body strength and manual power, whereas shifting cultivation relies on handheld tools like hoes and does not require as much strength. As communities took up the plow, it was most effectively used by stronger individuals, and these were most often men. In societies that used shifting cultivation, both men and women used the technology. Of course, the plow was invented not to exclude women but to make cultivation faster and easier in areas where crops like wheat, barley, and teff were grown over large, flat tracts of land in deep soil. Communities living where sorghum and millet grew best—typically in rocky soil—continued to use the hoe. Boserup believed that after the plow forced specialization of labor, with men in the field and women remaining in the home, people formed the belief—after the fact—that this arrangement was how it should be and that women were best suited to home life.

Boserup made a solid historical argument, but no one had tried to measure whether beliefs about innate differences between men and women across the world could really be mapped according to whether their ancestors had used the plow. Nathan Nunn read Boserup’s ideas in graduate school, and ten years later he and some colleagues decided to test them.

Once again Nunn searched for ways to measure the Old World against the new. He and his colleagues divided societies up according to whether they used the plow or shifting cultivation. They gathered current data about male and female lives, including how much women in different societies worked in public versus how much they worked in the home, how often they owned companies, and the degree to which they participated in politics. They also measured public attitudes by comparing responses to statements in the World Value Survey like “When jobs are scarce, men should have more right to a job than a woman.”

Nunn found that if you asked an individual whose ancestors grew wheat about his beliefs regarding women’s place, it was much more likely that his notion of gender equality would be weaker than that of someone whose ancestors had grown sorghum or millet. Where the plow was used there was greater gender inequality and women were less common in the workforce. This was true even in contemporary societies in which most of the subjects would never even have seen a plow, much less used one, and in societies where plows today are fully mechanized to the point that a child of either gender would be capable of operating one.

Similar research in the cultural inheritance of psychology has explored the difference between cultures in the West and the East. Many studies have found evidence for more individualistic, analytic ways of thought in the West and more interdependent and holistic conceptions of the self and cooperation in the East. But in 2014 a team of psychologists investigated these differences in populations within China based on whether the culture in question traditionally grew wheat or rice. Comparing cultures within China rather than between the East and West enabled the researchers to remove many confounding factors, like religion and language.

Participants underwent a series of tests in which they paired two of three pictures. In previous studies the way a dog, a rabbit, and a carrot were paired differed according to whether the subject was from the West or the East. The Eastern subjects tended to pair the rabbit with a carrot, which was thought to be the more holistic, relational solution. The Western subjects paired the dog and the rabbit, which is more analytic because the animals belong in the same category. In another test subjects drew pictures of themselves and their friends. Previous studies had shown that westerners drew themselves larger than their friends. Another test surveyed how likely people were to privilege friends over strangers; typically Eastern cultures score higher on this measure.

In all the tests the researchers found that, independent of a community’s wealth or its exposure to pathogens or to other cultures, the people whose ancestors grew rice were much more relational in their thinking than the people whose ancestors were wheat growers. Other measures pointed at differences between the two groups. For example, people from a wheat-growing culture divorced significantly more often than people from a rice-growing culture, a pattern that echoes the difference in divorce rates between the West and the East. The findings were true for people who live in rice and wheat communities today regardless of their occupation; even when subjects had nothing to do with the production of crops, they still inherited the cultural predispositions of their farming forebears.

The differences between the cultures are attributed to the different demands of the two kinds of agriculture. Rice farming depends on complicated irrigation and the cooperation of farmers around the use of water. It also requires twice the amount of labor that is necessary for wheat, so rice-growing communities often stagger the planting of crops in order that all their members can help with the harvest. Wheat farming, by contrast, doesn’t need complicated irrigation or systems of cooperation among growers.

The implication of these studies is that the way we see the world and act in it—whether the end result is gender inequality or trusting strangers—is significantly shaped by internal beliefs and norms that have been passed down in families and small communities. It seems that these norms are even taken with an individual when he moves to another country. But how might history have such a powerful impact on families, even when they have moved away from the place where that history, whatever it was, took place?