A Food Revolution Worthy of the Name!

“Our success with carbohydrates, however, has had a serious downside: a worldwide plague of obesity, diabetes and other diet-related diseases.”
~Gerald C. Nelson

The conventional view on diet promoted by establishment figures and institutions is based on the idea that all calories are equal. In dieting and fat loss, this has meant promoting a philosophy of calorie-in/calorie-out, which translates into calorie counting and calorie restriction. Recent research has cast serious doubt on this largely untested hypothesis that has for so long guided public health recommendations.

There is also a larger background to this issue. The government has spent immense sums of money promoting and subsidizing the high-carb diet. For example, it has put decades of funding into research on growing higher-yield staples of wheat, corn, and rice. But it has never done anything comparable for healthy foods that are nutrient-dense and low-carb. This promotion of high-yield crops through industrialized farming has denatured the soil and the food grown on it. That is doubly problematic, since these high-carb staples are low in nutrient density even when grown on healthy soil.

This mentality of obsessing over food as calories is severely dysfunctional. It ignores the human reality of how our bodies function. And it ignores widespread human experience. Calorie-restricted diets are well known to have among the lowest rates of compliance and success. It doesn’t matter how many or how few calories you try to eat as long as the food you are eating is of such low quality. Your hunger and cravings will drive you on as your body seeks the nutrition it lacks.

As I’ve eaten more nutrient-dense foods as part of a diet that is ketogenic and paleo, my hunger has decreased and my cravings have disappeared. I certainly don’t consume more calories than before and possibly far less, not that I’m counting. I no longer overeat and I find fasting easy. Maybe so many people overeat to the point of getting fat because the food system produces mostly empty calories and processed carbs. It’s what’s available and cheapest, and the food industry is brilliant at making its products as addictive as possible. The average person in our society is endlessly hungry while their body is not getting what it needs. It’s a vicious cycle of decline.

I remember how I was for most of my life until quite recently, with decades as a sugar addict and a junk food junkie. I was always hungry and always snacking. Carbs and sugar kept my blood sugar and serotonin levels on a constant roller coaster of highs and lows, and wrecked my physical and mental health in the process. It wasn’t a happy state. And anyone telling me, in my deepest and darkest depressive funk, that I should count and restrict my calories would not have been helpful. What I needed was more of the right kinds of calories, those filled with healthy fats and fat-soluble vitamins along with so much else. My body was starving from malnourishment even when I was overeating and, despite regular exercise, eventually gaining weight.

We don’t need to grow more food to feed the world but to grow better food that nourishes everyone at least to a basic level, considering how many diseases even in rich countries are caused by nutrient deficiencies (e.g., Dr. Terry Wahls reversed multiple sclerosis symptoms in herself, in patients, and in clinical subjects by increasing nutrient density). The same amount of food produced, if nutrient-dense, could feed many more people. We already have enough food and will continue to have enough food for the foreseeable future. The equal and fair distribution of food is a separate issue. The problem isn’t producing a greater quantity; what we desperately need is greater quality. But that is difficult because our industrial farming has harmed the health of the soil and denatured our food supply.

The U.S. government pays some farmers not to grow anything because the market is flooded with too much food. At the same time, it pays other farmers to grow more of crops like corn, something I know from living in Iowa, the corn capital of the world. Subsidizing the production of processed carbs and high-fructose corn syrup is sickening and killing us, and that is setting aside the problems with ethanol. Just as important, it also wastes limited resources that could be put to better use.

We have become disconnected in so many ways. Scientific research and government policies disconnected from human health. An entire civilization disconnected from the earth we depend upon. And the modern mind disconnected from our own bodies, to the point of being alienated from what should be the most natural thing in the world, that of eating. When we are driven by cravings, our bodies are seeking something essential and needed. There is a good reason we’re attracted to things that taste sweet, salty, and fatty/oily. In natural whole foods, these flavors indicate something is nutrient-dense. But we fool the body by eating nutrient-deficient processed foods grown on poor soil. And then we create dietary ideologies that tell us this is normal.

What if we could feed more people with less land? And what if we could do so in a way that brought optimal and sustainable health to individuals, society, and the earth? Now that would be a food revolution worthy of the name!

* * *

The global food problem isn’t what you think
by Gerald C. Nelson 

Here’s what we found:

Under even the worst conditions, there will be enough food, if we define “enough” as meaning sufficient calories, on average, for everyone — with 2,000 calories per day as the standard requirement. . . . [T]he post-World War II Green Revolution efforts to boost the productivity of staples such as wheat and rice have been so successful that we are now awash in carbohydrates. And because so much has already been invested in improving the productivity of these crops, solid yield gains will likely continue for the next few decades. The productivity enhancements have also made them more affordable relative to other foods that provide more of the other needed nutrients.

Our success with carbohydrates, however, has had a serious downside: a worldwide plague of obesity, diabetes and other diet-related diseases. The World Health Organization reports that in 2014, there were 462 million underweight adults worldwide but more than 600 million who were obese — nearly two-thirds of them in developing countries. And childhood obesity is rising much faster in poorer countries than in richer ones.

Meanwhile, micronutrient shortages such as Vitamin A deficiency are already causing blindness in somewhere between 250,000 and 500,000 children a year and killing half of them within 12 months of them losing their sight. Dietary shortages of iron, zinc, iodine and folate all have devastating health effects.

These statistics point to the need for more emphasis on nutrients other than carbohydrates in our diets. And in this area, our findings are not reassuring.

Malnourished Americans

Prefatory Note

It would be easy to mistake this writing for a carnivore’s rhetoric against the evils of grains and agriculture. I’m a lot more agnostic on the issue than it might seem. But I do come across as strongly opinionated, out of decades of personal experience with bad eating habits and their consequences, and my dietary habits were no better when I was a vegetarian.

I’m not so much pro-meat as I am for healthy fats and oils, not only from animal sources but also from plants, with coconut oil and olive oil being two of my favorites. As long as you are getting adequate protein, from whatever source (including vegetarian foods), there is no absolute rule about protein intake. But hunter-gatherers on average do eat more fats and oils than protein (and more than vegetables as well), whether the protein comes from meat or from seeds and nuts (though the protein and vegetables they get are of extremely high quality and, of course, nutrient-dense, along with much fiber). Too much protein with too little fat or oil causes rabbit starvation. It’s fat and oil that have higher satiety and, combined with low-carb ketosis, are amazing at eliminating food cravings, addictions, and overeating.

Besides, I have nothing against plant-based foods. I eat more vegetables on the paleo diet than I did in the past, even when I was a vegetarian, and more than any vegetarian I know; not just more in quantity but also more in quality. Many paleo and keto dieters have embraced a plant-based diet with varying attitudes about meat and fat. Dr. Terry Wahls, a former vegetarian, reversed her symptoms of multiple sclerosis by formulating a paleo diet that includes massive loads of nutrient-dense vegetables, while adding in nutrient-dense animal foods as well (e.g., liver).

I’ve picked up three books lately that emphasize plants even further. One is The Essential Vegetarian Keto Cookbook, which is pretty much as the title describes it, mostly recipes with some introductory material about ketosis. Another book, Ketotarian by Dr. Will Cole, is likewise about keto vegetarianism, but with leniency toward fish consumption and ghee (the former not strictly vegetarian and the latter not strictly paleo). The most recent I got is The Paleo Vegetarian Diet by Dena Harris, another person with a lenient attitude toward diet. That is what I prefer in my tendency toward ideological impurity. About diet, I’m bi-curious or maybe multi-curious.

My broader perspective is that of traditional foods. This is largely based on the work of Weston A. Price, which I was introduced to long ago by way of the writings of Sally Fallon Morell (formerly Sally Fallon). It is not a paleo diet in that agricultural foods are allowed, but its advocates share a common attitude with paleolists in valuing traditional nutrition and food preparation. Authors from both camps bond over their respect for Price’s work and often reference those on the other side in their writings. I’m of the opinion, in line with traditional foods, that if you are going to eat agricultural foods then traditional preparation is all the more important (from long-fermented bread and fully soaked legumes to cultured dairy and raw aged cheese). Many paleolists share this opinion and some are fine with such things as ghee. My paleo commitment didn’t stop me from enjoying a white roll for Thanksgiving, adorning it with organic goat butter, and it didn’t kill me.

I’m not so much arguing against all grains in this post as I’m pointing out the problems found at the extreme end of dietary imbalance that we’ve reached this past century: industrialized and processed, denatured and toxic, grain-based/obsessed and high-carb-and-sugar. In the end, I’m a flexitarian who has come to see the immense benefits in the paleo approach, but I’m not attached to it as a belief system. I heavily weigh the best evidence and arguments I can find in coming to my conclusions. That is what this post is about. I’m not trying to tell anyone how to eat. I hope that heads off certain areas of potential confusion and criticism. So, let’s get to the meat of the matter.

Grain of Truth

Let me begin with a quote, share some related info, and then circle back around to putting the quote into context. The quote is from Grain of Truth by Stephen Yafa. It’s a random book I picked up at a secondhand store, and what attracted me to it was that the author defends agriculture and grain consumption. I figured it would be a good balance to my other recent readings. Skimming it, I found one factoid that stuck out. In reference to new industrial milling methods that took hold in the late 19th century, he writes:

“Not until World War II, sixty years later, were measures taken to address the vitamin and mineral deficiencies caused by these grain milling methods. They caught the government’s attention only when 40 percent of the raw recruits drafted by our military proved to be so malnourished that they could not pass a physical and were declared unfit for duty.” (p. 17)

That is remarkable. He is talking about the now infamous highly refined flour, something that never existed before. Even commercial whole wheat breads today, with some fiber added back in, have little in common with what was traditionally made for millennia. My grandparents were of that particular generation that was so severely malnourished, and so that was the world into which my parents were born. The modern health decline that has gained mainstream attention began many generations back. Okay, so put that on the backburner.

Against the Grain

In a post by Dr. Malcolm Kendrick, I was having a discussion in the comments section (and, at the same time, I was having a related discussion on my own blog). Göran Sjöberg brought up James C. Scott’s book about the development of agriculture, Against the Grain — writing that, “This book is very much about the health deterioration, not least through epidemics partly due to compromised immune resistance, that occurred in the transition from hunting and gathering to sedentary mono-crop agriculture state level scale, first in Mesopotamia about five thousand years ago.”

Scott’s view has interested me for a while. I find compelling the way he connects grain farming, legibility, record-keeping, and taxation. There is a reason great empires were built on grain fields, not on potato patches or vegetable gardens, much less cattle ranching. Grain farming is easily observed and measured, tracked and recorded, and that meant it could be widely taxed to fund large centralized governments along with their armies and, later on, their police forces and intelligence agencies. The earliest settled societies arose prior to agriculture, but they couldn’t become major civilizations until the cultivation of grains.

Another commenter, Sasha, responded with what she considered important qualifications: “I think there are too many confounders in transition from hunter gatherers to agriculture to suggest that health deterioration is due to one factor (grains). And since it was members of upper classes who were usually mummified, they had vastly different lifestyles from that of hunter gatherers. IMO, you’re comparing apples to oranges… Also, grain consumption existed in hunter gatherers and probably intensified long before Mesopotamia 5 thousands years ago as wheat was domesticated around 9,000 BCE and millet around 6,000 BCE to use just two examples.”

It is true that pre-Neolithic hunter-gatherers, in some cases, sporadically ate grains in small amounts, or at least we have evidence they were doing something with grains, though for all we know they might have been mixing them with medicinal herbs or using them as a thickener for paints — it’s anyone’s guess. Assuming they were eating those traces of grains we’ve discovered, it surely was nowhere near the level of the Neolithic agriculturalists. Furthermore, during the following millennia, grains were radically changed through cultivation. As for the Egyptian elite, they were eating more grains than anyone, since farmers were still forced to partly subsist on hunting, fishing, and gathering.

I’d take the argument much further forward into history. We know from records that, through the 19th century, Americans were eating more meat than bread. Vegetable and fruit consumption was also relatively low and mostly seasonal. Part of that is because gardening was difficult with so many pests. Besides, with so many natural areas around, hunting and gathering remained a large part of the American diet. Even in the cities, wild game was easily obtained at cheap prices. Into the 20th century, hunting and gathering was still important and sustained many families through the Great Depression and World War era when many commercial foods were scarce.

It was different in Europe, though. Mass urbanization happened centuries before it did in the United States. And not much European wilderness was left standing in recent history. But with the fall of the Roman Empire and heading into feudalism, many Europeans returned to a fair amount of hunting and gathering, during which time general health improved in the population. Restrictive laws about land use eventually made that difficult, and the land enclosure movement made it impossible for most Europeans.

Even so, all of that is fairly recent in the big scheme of things. It took many millennia of agriculture before it more fully replaced hunting, fishing, trapping, and gathering. In places like the United States, that change is well within living memory. When some of my ancestors immigrated here in the 1600s, Britain and Europe still relied heavily on the procuring of wild foods to support their populations. And once here, wild foods were even more plentiful and a lot less work than farming.

Many early American farmers didn’t grow food so much for their own diet as to sell it on the market, sometimes in the form of the popular grain-based alcohols. It was by making alcohol that rural farmers were able to get their product to market without it spoiling. I’m just speculating, but alcohol might have been the most widespread agricultural food of that era, since water was often unsafe to drink.

Another commenter, Martin Back, made the same basic point: “Grain these days is cheap thanks to Big Ag and mechanization. It wasn’t always so. If the fields had to be ploughed by draught animals, and the grain weeded, harvested, and threshed by hand, the final product was expensive. Grain became a store of value and a medium of exchange. Eating grains was literally like eating money, so presumably they kept consumption to a minimum.”

In early agriculture, grain was more of a way to save wealth than a staple of the diet. It was saved for purposes of trade and also saved for hard times when no other food was available. What didn’t happen was to constantly consume grain-based foods every day and all day long — going from a breakfast with toast and cereal to lunch with a sandwich and maybe a salad with croutons, and then a snack of crackers in the afternoon before eating more bread or noodles for dinner.

Historical Examples

So, I am partly just speculating. But it’s informed speculation. I base my view on specific examples. The most obvious example is hunter-gatherers, poor by the standards of modern industrialization while maintaining great health, as long as their traditional way of life can be maintained. Many populations that are materially better off in terms of a capitalist society (access to comfortable housing, sanitation, healthcare, an abundance of food in grocery stores, etc.) are not better off in terms of chronic diseases.

As the main example I already mentioned, poor Americans have often been a quite healthy lot, as compared to other populations around the world. It is true that poor Americans weren’t particularly healthy in the early colonial period, specifically in Virginia because of indentured servitude. And it’s true that poor Americans today are fairly bad off because of the cheap industrialized diet. Yet for the couple of centuries or so in between, they were doing quite well in terms of health, with lots of access to nutrient-dense wild foods. That point is emphasized by looking at other similar populations at the time, such as back in Europe.

Let’s do some other comparisons. The poor in the Roman Empire did not do well, even when they weren’t enslaved. That was for many reasons, such as growing urbanization and its attendant health risks. When the Roman Empire fell, many of the urban centers collapsed. The poor returned to a more rural lifestyle that depended on a fair amount of wild foods. Studies done on their remains show their health improved during that time. Then at the end of feudalism, with the enclosure movement and the return of mass urbanization, health went back into decline.

Now I’ll consider the early Egyptians. I’m not sure if there is any info about the diet and health of poor Egyptians. But clearly the ruling class had far from optimal health. It’s hard to make comparisons between then and now, though, because it was an entirely different kind of society. The early Bronze Age civilizations were mostly small city-states that lacked much hierarchy. Early Egypt didn’t even have the most basic infrastructure such as maintained roads and bridges. And the most recent evidence indicates that the pyramid workers weren’t slaves but instead worked freely and seem to have been fed fairly well, whatever that may or may not indicate about their socioeconomic status. The fact that the poor weren’t mummified leaves us with scant evidence that would more directly inform us.

On the other hand, no one can doubt that there have been plenty of poor populations who had truly horrific living standards with much sickness, suffering, and short lifespans. That is particularly true over the millennia as agriculture became ever more central, since that meant periods of abundance alternating with periods of deficiency and sometimes starvation, often combined with weakened immune systems and rampant sickness. That was less the case for the earlier small city-states with less population density and surrounded by the near constant abundance of wilderness areas.

As always, it depends on the specifics we are talking about. Also, any comparison and conclusion is relative.

My mother grew up in a family that hunted, and at the time many Americans still had a certain amount of access to natural areas, something that helped a large part of the population get through the Great Depression and world war era. Nonetheless, by the time of my mother’s childhood, overhunting had depleted most of the wild game (bison, bear, deer, etc. were no longer around), and so her family relied on less desirable foods such as squirrel, raccoon, and opossum; even the fish they ate were less than optimal, coming from waters highly polluted by the very factories and railroad her family worked for. So the wild food opportunities weren’t nearly as good as they had been a half century earlier, much less in the prior centuries.

Not All Poverty is the Same

Being poor today means a lot of things that it didn’t mean in the past. The high rates of heavy metal toxicity seen today have rarely been seen among previous poor populations. Today 40% of global deaths are attributed to pollution, primarily affecting the poor, which is also extremely different from the past. Beyond that, inequality has grown larger than ever before, and that has been strongly correlated with high rates of stress, disease, homicides, and suicides. Such inequality is also seen in terms of climate change, droughts, refugee crises, and war/occupation.

Here is what Sasha wrote in response to me: “I agree with a lot of your points, except with your assertion that “the poor ate fairly well in many societies especially when they had access to wild sources of food”. I know how the poor ate in Russia in the beginning of the 20th century and how the poor eat now in the former Soviet republics and in India. Their diet is very poor even though they can have access to wild sources of food. I don’t know what the situation was for the poor in ancient Egypt but I would be very surprised if it was better than in modern day India or former Soviet Union.”

I’d imagine modern Russia has high inequality similar to the US. As for modern India, it is one of the most impoverished, densely populated, and malnourished societies around. And modern industrialization did major harm to Hindu Indians, because studies show that traditional vegetarians got a fair amount of nutrients from the insects that were mixed in with pre-modern agricultural goods. Both Russia and India have other problems related to neoliberalism that weren’t factors in the past. It’s an entirely different kind of poverty these days. Even if some Russians have some access to wild foods, I’m willing to bet they have nowhere near the access that was available in previous generations, centuries, and millennia.

Compare modern poverty to that of feudalism. At least in England, feudal peasants were guaranteed to be taken care of in hard times. The Church, a large part of local governance at the time, was tasked with feeding and taking care of the poor and needy, from orphans to widows. They were tight communities that took care of their own, something that no longer exists in most of the world where the individual is left to suffer and struggle. Present Social Darwinian conditions are not the norm for human societies across history. The present breakdown of families and communities is historically unprecedented.

Socialized Medicine & Externalized Costs
An Invisible Debt Made Visible
On Conflict and Stupidity
Inequality in the Anthropocene
Capitalism as Social Control

The Abnormal Norms of WEIRD Modernity

Everything about present populations is extremely abnormal. This is seen in diet as elsewhere. Let me return to the quote I began this post with. “Not until World War II, sixty years later, were measures taken to address the vitamin and mineral deficiencies caused by these grain milling methods. They caught the government’s attention only when 40 percent of the raw recruits drafted by our military proved to be so malnourished that they could not pass a physical and were declared unfit for duty.” * So, what had happened to the health of the American population?

Well, there were many changes. Overhunting, as I already said, drove many wild game species extinct or eliminated them from local areas, such that my mother, born into a rural farm state, never saw a white-tailed deer growing up. Also, much earlier, after the Civil War, a new form of enclosure movement happened as laws were passed to prevent people, specifically the then free blacks, from hunting and foraging wherever they wanted (early American laws often protected the rights of anyone to hunt, forage plants, collect timber, etc. from any land that was left open, whether or not it was owned by someone). The carryover from the feudal commons was finally and fully eliminated. It was also the end of the era of free-range cattle ranching, the end having come with the invention of barbed wire. Access to wild foods was further reduced by the creation and enforcement of protected lands (e.g., the federal park system), which very much targeted the poor who up to that point had relied upon wild foods for health and survival.

All of that was combined with mass urbanization and industrialization, with all of their new forms of pollution, stress, and inequality. Processed foods were becoming more widespread at the time. Around the turn of the century, unhealthy industrialized vegetable oils became heavily marketed and hence popular, replacing butter and lard. Also, muckraking about the meat industry scared Americans off meat, and consumption dropped precipitously. As such, in the decades prior to World War II, the American diet had already shifted toward what we now know. A new young generation had grown up on that industrialized and processed diet, and those young people were the ones showing up as recruits for the military. This new diet in such a short period had caused mass malnourishment. It was a mass experiment that showed failure early on, and yet we continue the same basic experiment, not only continuing it but making it far worse.

Government officials and health authorities blamed it on bread production. Refined flour had become widely available because of industrialization, and this removed all the nutrients that gave bread any health value. In response, there was a movement to fortify bread, initially enforced by federal law and later by state laws. That helped some, but obviously the malnourishment was caused by many other factors that weren’t appreciated by most at the time, even though this was the same period when Weston A. Price’s work was published. Nutritional science was young at the time, and most nutrients were still undiscovered or else unappreciated. Throwing a few lab-produced vitamins back into food barely scratches the surface of the nutrient density that was lost.

Most Americans continue to have severe nutritional deficiencies. We don’t recognize this fact because being underdeveloped and sickly has become normalized, maybe even in the minds of most doctors and health officials. Besides, many of the worst symptoms don’t show up until decades later, often as chronic diseases of old age, although increasingly seen among the young. Far fewer Americans today would meet the health standards of World War recruits. It’s been a steady decline, despite the miracles of modern medicine in treating symptoms and delaying death.

* The data on the British show an even earlier shift into malnourishment, because imperial trade brought an industrialized diet sooner to the British population. Also, rural life with a greater diet of wild foods had more quickly disappeared, as compared to the US. The fate of the British in the late 1800s foreshadowed what would happen more than a half century later on the other side of the ocean.

Lore of Nutrition
by Tim Noakes
pp. 373-375

The mid-Victorian period between 1850 and 1880 is now recognised as the golden era of British health. According to P. Clayton and J. Rowbotham, 47 this was entirely due to the mid-Victorians’ superior diet. Farm-produced real foods were available in such surplus that even the working-class poor were eating highly nutritious foods in abundance. As a result, life expectancy in 1875 was equal to, or even better, than it is in modern Britain, especially for men (by about three years). In addition, the profile of diseases was quite different when compared to Britain today.

The authors conclude:

[This] shows that medical advances allied to the pharmaceutical industry’s output have done little more than change the manner of our dying. The Victorians died rapidly of infection and/or trauma, whereas we die slowly of degenerative disease. It reveals that with the exception of family planning, the vast edifice of twentieth century healthcare has not enabled us to live longer but has in the main merely supplied methods of suppressing the symptoms of degenerative disease which have emerged due to our failure to maintain mid-Victorian nutritional standards. 48

This mid-Victorians’ healthy diet included freely available and cheap vegetables such as onions, carrots, turnips, cabbage, broccoli, peas and beans; fresh and dried fruit, including apples; legumes and nuts, especially chestnuts, walnuts and hazelnuts; fish, including herring, haddock and John Dory; other seafood, including oysters, mussels and whelks; meat – which was considered ‘a mark of a good diet’ so that ‘its complete absence was rare’ – sourced from free-range animals, especially pork, and including offal such as brain, heart, pancreas (sweet breads), liver, kidneys, lungs and intestine; eggs from hens that were kept by most urban households; and hard cheeses.

Their healthy diet was therefore low in cereals, grains, sugar, trans fats and refined flour, and high in fibre, phytonutrients and omega-3 polyunsaturated fatty acids, entirely compatible with the modern Paleo or LCHF diets.

This period of nutritional paradise changed suddenly after 1875, when cheap imports of white flour, tinned meat, sugar, canned fruits and condensed milk became more readily available. The results were immediately noticeable. By 1883, the British infantry was forced to lower its minimum height for recruits by three inches; and by 1900, 50 per cent of British volunteers for the Boer War had to be rejected because of undernutrition. The changes would have been associated with an alteration in disease patterns in these populations, as described by Yellowlees (Chapter 2).

On Obesity and Malnourishment

There is no contradiction, by the way, between rampant nutritional deficiencies and the epidemic of obesity. Gary Taubes noted that the dramatic rise of obesity in America began early in the last century, which is to say that it is not a problem that came out of nowhere with the present younger generations. Americans have been getting fatter for a while now. Specifically, they were getting fatter while at the same time being malnourished, partly because of refined flour, which is about as empty a carb as is possible.

Taubes emphasizes the point that this seeming paradox has often been observed among poor populations around the world: a lack of optimal nutrition that leads to ever more weight gain, sometimes with children being skinny to an unhealthy degree only to grow up to be fat. No doubt many Americans in the early 1900s were dealing with much poverty and the lack of nutritious foods that often goes with it. As for today, nutritional deficiency looks different because of enrichment, but it persists nonetheless in many other ways. Also, as Keith Payne argues in The Broken Ladder, growing inequality mimics poverty in the conflict and stress it causes. And inequality has everything to do with food quality, as seen with many poor areas being food deserts.

I’ll give you a small taste of Taubes’s discussion. It is from the introduction to one of his books, published a few years ago. If you read the book, look at the section immediately following the passage below. He gives examples of tribes that were poor, didn’t overeat, and did hard manual labor. Yet they were getting obese, even as nearby tribes sometimes remained at a healthy weight. The only apparent difference was what they were eating, not how much they were eating. The populations that saw major weight gain had adopted a grain-based diet, typically because of government rations or government stores.

Why We Get Fat
by Gary Taubes
pp. 17-19

In 1934, a young German pediatrician named Hilde Bruch moved to America, settled in New York City, and was “startled,” as she later wrote, by the number of fat children she saw—“really fat ones, not only in clinics, but on the streets and subways, and in schools.” Indeed, fat children in New York were so conspicuous that other European immigrants would ask Bruch about it, assuming that she would have an answer. What is the matter with American children? they would ask. Why are they so bloated and blown up? Many would say they’d never seen so many children in such a state.

Today we hear such questions all the time, or we ask them ourselves, with the continual reminders that we are in the midst of an epidemic of obesity (as is the entire developed world). Similar questions are asked about fat adults. Why are they so bloated and blown up? Or you might ask yourself: Why am I?

But this was New York City in the mid-1930s. This was two decades before the first Kentucky Fried Chicken and McDonald’s franchises, when fast food as we know it today was born. This was half a century before supersizing and high-fructose corn syrup. More to the point, 1934 was the depths of the Great Depression, an era of soup kitchens, bread lines, and unprecedented unemployment. One in every four workers in the United States was unemployed. Six out of every ten Americans were living in poverty. In New York City, where Bruch and her fellow immigrants were astonished by the adiposity of the local children, one in four children were said to be malnourished. How could this be?

A year after arriving in New York, Bruch established a clinic at Columbia University’s College of Physicians and Surgeons to treat obese children. In 1939, she published the first of a series of reports on her exhaustive studies of the many obese children she had treated, although almost invariably without success. From interviews with her patients and their families, she learned that these obese children did indeed eat excessive amounts of food—no matter how much either they or their parents might initially deny it. Telling them to eat less, though, just didn’t work, and no amount of instruction or compassion, counseling, or exhortations—of either children or parents—seemed to help.

It was hard to avoid, Bruch said, the simple fact that these children had, after all, spent their entire lives trying to eat in moderation and so control their weight, or at least thinking about eating less than they did, and yet they remained obese. Some of these children, Bruch reported, “made strenuous efforts to lose weight, practically giving up on living to achieve it.” But maintaining a lower weight involved “living on a continuous semi-starvation diet,” and they just couldn’t do it, even though obesity made them miserable and social outcasts.

One of Bruch’s patients was a fine-boned girl in her teens, “literally disappearing in mountains of fat.” This young girl had spent her life fighting both her weight and her parents’ attempts to help her slim down. She knew what she had to do, or so she believed, as did her parents—she had to eat less—and the struggle to do this defined her existence. “I always knew that life depended on your figure,” she told Bruch. “I was always unhappy and depressed when gaining [weight]. There was nothing to live for.… I actually hated myself. I just could not stand it. I didn’t want to look at myself. I hated mirrors. They showed how fat I was.… It never made me feel happy to eat and get fat—but I never could see a solution for it and so I kept on getting fatter.”

pp. 33-34

If we look in the literature—which the experts have not in this case—we can find numerous populations that experienced levels of obesity similar to those in the United States, Europe, and elsewhere today but with no prosperity and few, if any, of the ingredients of Brownell’s toxic environment: no cheeseburgers, soft drinks, or cheese curls, no drive-in windows, computers, or televisions (sometimes not even books, other than perhaps the Bible), and no overprotective mothers keeping their children from roaming free.

In these populations, incomes weren’t rising; there were no labor-saving devices, no shifts toward less physically demanding work or more passive leisure pursuits. Rather, some of these populations were poor beyond our ability to imagine today. Dirt poor. These are the populations that the overeating hypothesis tells us should be as lean as can be, and yet they were not.

Remember Hilde Bruch’s wondering about all those really fat children in the midst of the Great Depression? Well, this kind of observation isn’t nearly as unusual as we might think.

How Americans Used to Eat

Below is a relevant passage. It puts into context how extremely unusual has been the high-carb, low-fat diet these past few generations. This is partly what informed some of my thoughts. We so quickly forget that the present dominance of a grain-based diet wasn’t always the case, likely not even in most agricultural societies until quite recently. In fact, the earlier American diet is still within living memory, although those left to remember it are quickly dying off.

Let me explain why the history of diets matters. One of the arguments for forcing official dietary recommendations onto the entire population was the belief that Americans in a mythical past ate less meat, fat, and butter while eating more bread, legumes, and vegetables. This turns out to have been a trick of limited data.

We now know, from better data, that the complete opposite was the case. And we have further data showing that the rise of the conventional diet has coincided with the rise of obesity and chronic diseases. That isn’t to say eating more vegetables is bad for your health, but we do know that even as the average American’s intake of vegetables has gone up, so have all the diet-related health conditions. During this time, what went down was the consumption of all the traditional foods of the American diet going back to the colonial era: wild game, red meat, organ meat, lard, and butter — all the foods Americans ate in huge amounts prior to the industrialized diet.

What added to the confusion and misinterpretation of the evidence had to do with timing. Diet and nutrition were first seriously studied right at the moment when, for most populations, they had already changed. That was the failure of Ancel Keys’s research on what came to be called the Mediterranean diet (see Sally Fallon Morell’s Nourishing Diets). The population he studied was recuperating from World War II, which had devastated their traditional way of life, including their diet. Keys took the post-war deprivation diet as being the historical norm, but the reality was far different. Cookbooks and other evidence from before the war show that this population used to eat higher levels of meat and fat, including saturated fat. So, the very people he focused on had grown up and spent most of their lives on a diet that was at that moment no longer available because of the disruption of the food system. What good health Keys observed came from a lifetime of eating a different diet. Combined with cherry-picking of data and biased analysis, Keys came to a conclusion that was as wrong as wrong could be.

Slightly earlier, Weston A. Price was able to see a different picture. He intentionally traveled to the places where traditional diets remained fully in place. And the devastation of World War II had yet to happen. Price came to the conclusion that what mattered most of all was nutrient density. Sure, the vegetables eaten would have been of a higher quality than we get today, largely because they were heirloom cultivars grown on healthy soil. Nutrient-dense foods can only come from nutrient-dense soil, whereas today our food is nutrient-deficient because our soil is highly depleted. The same goes for animal foods. Animals pastured on healthy land will produce healthy dairy, eggs, meat, and fat; these foods will be high in omega-3s and the fat-soluble vitamins.

No matter whether it is coming from plant sources or animal sources, nutrient density might be the most important factor of all. Fat is meaningful in this context because fat is where the fat-soluble vitamins are found and it is through fat that they are metabolized. And in turn, the fat-soluble vitamins play a key role in the absorption and processing of numerous other nutrients, not to mention a key role in numerous functions in the body. Nutrient density and fat density go hand in hand in terms of general health. That is what early Americans were getting in eating so much wild food, not only wild game but also wild greens, fruit, and mushrooms. And nutrient density is precisely what we are lacking today, as the nutrients have been intentionally removed to make more palatable commercial foods.

Once again, this has a class dimension, since the wealthy have more access to nutrient-dense foods. Few poor people could afford to shop at a high-end health food store, even if one were located near their home. But it was quite different in the past, when nutrient-dense foods were available to everyone and sometimes more available to the poor concentrated in rural areas. If we want to improve public health, the first thing we should do is return to this historical norm.

The Big Fat Surprise
by Nina Teicholz
pp. 123-131

Yet despite this shaky and often contradictory evidence, the idea that red meat is a principal dietary culprit has thoroughly pervaded our national conversation for decades. We have been led to believe that we’ve strayed from a more perfect, less meat-filled past. Most prominently, when Senator McGovern announced his Senate committee’s report, called Dietary Goals, at a press conference in 1977, he expressed a gloomy outlook about where the American diet was heading. “Our diets have changed radically within the past fifty years,” he explained, “with great and often harmful effects on our health.” Hegsted, standing at his side, criticized the current American diet as being excessively “rich in meat” and other sources of saturated fat and cholesterol, which were “linked to heart disease, certain forms of cancer, diabetes and obesity.” These were the “killer diseases,” said McGovern. The solution, he declared, was for Americans to return to the healthier, plant-based diet they once ate.

The New York Times health columnist Jane Brody perfectly encapsulated this idea when she wrote, “Within this century, the diet of the average American has undergone a radical shift away from plant-based foods such as grains, beans and peas, nuts, potatoes, and other vegetables and fruits and toward foods derived from animals—meat, fish, poultry, eggs and dairy products.” It is a view that has been echoed in literally hundreds of official reports.

The justification for this idea, that our ancestors lived mainly on fruits, vegetables, and grains, comes mainly from the USDA “food disappearance data.” The “disappearance” of food is an approximation of supply; most of it is probably being eaten, but much is wasted, too. Experts therefore acknowledge that the disappearance numbers are merely rough estimates of consumption. The data from the early 1900s, which is what Brody, McGovern, and others used, are known to be especially poor. Among other things, these data accounted only for the meat, dairy, and other fresh foods shipped across state lines in those early years, so anything produced and eaten locally, such as meat from a cow or eggs from chickens, would not have been included. And since farmers made up more than a quarter of all workers during these years, local foods must have amounted to quite a lot. Experts agree that this early availability data are not adequate for serious use, yet they cite the numbers anyway, because no other data are available. And for the years before 1900, there are no “scientific” data at all.

In the absence of scientific data, history can provide a picture of food consumption in the late eighteenth to nineteenth century in America. Although circumstantial, historical evidence can also be rigorous and, in this case, is certainly more far-reaching than the inchoate data from the USDA. Academic nutrition experts rarely consult historical texts, considering them to occupy a separate academic silo with little to offer the study of diet and health. Yet history can teach us a great deal about how humans used to eat in the thousands of years before heart disease, diabetes, and obesity became common. Of course we don’t remember now, but these diseases did not always rage as they do today. And looking at the food patterns of our relatively healthy early-American ancestors, it’s quite clear that they ate far more red meat and far fewer vegetables than we have commonly assumed.

Early-American settlers were “indifferent” farmers, according to many accounts. They were fairly lazy in their efforts at both animal husbandry and agriculture, with “the grain fields, the meadows, the forests, the cattle, etc, treated with equal carelessness,” as one eighteenth-century Swedish visitor described. And there was little point in farming since meat was so readily available.

The endless bounty of America in its early years is truly astonishing. Settlers recorded the extraordinary abundance of wild turkeys, ducks, grouse, pheasant, and more. Migrating flocks of birds would darken the skies for days. The tasty Eskimo curlew was apparently so fat that it would burst upon falling to the earth, covering the ground with a sort of fatty meat paste. (New Englanders called this now-extinct species the “doughbird.”)

In the woods, there were bears (prized for their fat), raccoons, bobolinks, opossums, hares, and virtual thickets of deer—so much that the colonists didn’t even bother hunting elk, moose, or bison, since hauling and conserving so much meat was considered too great an effort. IX

A European traveler describing his visit to a Southern plantation noted that the food included beef, veal, mutton, venison, turkeys, and geese, but he does not mention a single vegetable. Infants were fed beef even before their teeth had grown in. The English novelist Anthony Trollope reported, during a trip to the United States in 1861, that Americans ate twice as much beef as did Englishmen. Charles Dickens, when he visited, wrote that “no breakfast was breakfast” without a T-bone steak. Apparently, starting a day on puffed wheat and low-fat milk—our “Breakfast of Champions!”—would not have been considered adequate even for a servant.

Indeed, for the first 250 years of American history, even the poor in the United States could afford meat or fish for every meal. The fact that the workers had so much access to meat was precisely why observers regarded the diet of the New World to be superior to that of the Old. “I hold a family to be in a desperate way when the mother can see the bottom of the pork barrel,” says a frontier housewife in James Fenimore Cooper’s novel The Chainbearer.

Like the primitive tribes mentioned in Chapter 1, Americans also relished the viscera of the animal, according to the cookbooks of the time. They ate the heart, kidneys, tripe, calf sweetbreads (glands), pig’s liver, turtle lungs, the heads and feet of lamb and pigs, and lamb tongue. Beef tongue, too, was “highly esteemed.”

And not just meat but saturated fats of every kind were consumed in great quantities. Americans in the nineteenth century ate four to five times more butter than we do today, and at least six times more lard. X

In the book Putting Meat on the American Table, researcher Roger Horowitz scours the literature for data on how much meat Americans actually ate. A survey of eight thousand urban Americans in 1909 showed that the poorest among them ate 136 pounds a year, and the wealthiest more than 200 pounds. A food budget published in the New York Tribune in 1851 allots two pounds of meat per day for a family of five. Even slaves at the turn of the eighteenth century were allocated an average of 150 pounds of meat a year. As Horowitz concludes, “These sources do give us some confidence in suggesting an average annual consumption of 150–200 pounds of meat per person in the nineteenth century.”

About 175 pounds of meat per person per year! Compare that to the roughly 100 pounds of meat per year that an average adult American eats today. And of that 100 pounds of meat, more than half is poultry—chicken and turkey—whereas until the mid-twentieth century, chicken was considered a luxury meat, on the menu only for special occasions (chickens were valued mainly for their eggs). Subtracting out the poultry factor, we are left with the conclusion that per capita consumption of red meat today is about 40 to 70 pounds per person, according to different sources of government data—in any case far less than what it was a couple of centuries ago.

Yet this drop in red meat consumption is the exact opposite of the picture we get from public authorities. A recent USDA report says that our consumption of meat is at a “record high,” and this impression is repeated in the media. It implies that our health problems are associated with this rise in meat consumption, but these analyses are misleading because they lump together red meat and chicken into one category to show the growth of meat eating overall, when it’s just the chicken consumption that has gone up astronomically since the 1970s. The wider-lens picture is clearly that we eat far less red meat today than did our forefathers.

Meanwhile, also contrary to our common impression, early Americans appeared to eat few vegetables. Leafy greens had short growing seasons and were ultimately considered not worth the effort. They “appeared to yield so little nutriment in proportion to labor spent in cultivation,” wrote one eighteenth-century observer, that “farmers preferred more hearty foods.” Indeed, a pioneering 1888 report for the US government written by the country’s top nutrition professor at the time concluded that Americans living wisely and economically would be best to “avoid leafy vegetables,” because they provided so little nutritional content. In New England, few farmers even had many fruit trees, because preserving fruits required equal amounts of sugar to fruit, which was far too costly. Apples were an exception, and even these, stored in barrels, lasted several months at most.

It seems obvious, when one stops to think, that before large supermarket chains started importing kiwis from New Zealand and avocados from Israel, a regular supply of fruits and vegetables could hardly have been possible in America outside the growing season. In New England, that season runs from June through October or maybe, in a lucky year, November. Before refrigerated trucks and ships allowed the transport of fresh produce all over the world, most people could therefore eat fresh fruit and vegetables for less than half the year; farther north, winter lasted even longer. Even in the warmer months, fruit and salad were avoided, for fear of cholera. (Only with the Civil War did the canning industry flourish, and then only for a handful of vegetables, the most common of which were sweet corn, tomatoes, and peas.)

Thus it would be “incorrect to describe Americans as great eaters of either [fruits or vegetables],” wrote the historians Waverly Root and Richard de Rochemont. Although a vegetarian movement did establish itself in the United States by 1870, the general mistrust of these fresh foods, which spoiled so easily and could carry disease, did not dissipate until after World War I, with the advent of the home refrigerator.

So by these accounts, for the first two hundred and fifty years of American history, the entire nation would have earned a failing grade according to our modern mainstream nutritional advice.

During all this time, however, heart disease was almost certainly rare. Reliable data from death certificates is not available, but other sources of information make a persuasive case against the widespread appearance of the disease before the early 1920s. Austin Flint, the most authoritative expert on heart disease in the United States, scoured the country for reports of heart abnormalities in the mid-1800s, yet reported that he had seen very few cases, despite running a busy practice in New York City. Nor did William Osler, one of the founding professors of Johns Hopkins Hospital, report any cases of heart disease during the 1870s and eighties when working at Montreal General Hospital. The first clinical description of coronary thrombosis came in 1912, and an authoritative textbook in 1915, Diseases of the Arteries including Angina Pectoris, makes no mention at all of coronary thrombosis. On the eve of World War I, the young Paul Dudley White, who later became President Eisenhower’s doctor, wrote that of his seven hundred male patients at Massachusetts General Hospital, only four reported chest pain, “even though there were plenty of them over 60 years of age then.” XI About one fifth of the US population was over fifty years old in 1900. This number would seem to refute the familiar argument that people formerly didn’t live long enough for heart disease to emerge as an observable problem. Simply put, there were some ten million Americans of a prime age for having a heart attack at the turn of the twentieth century, but heart attacks appeared not to have been a common problem.

Was it possible that heart disease existed but was somehow overlooked? The medical historian Leon Michaels compared the record on chest pain with that of two other medical conditions, gout and migraine, which are also painful and episodic and therefore should have been observed by doctors to an equal degree. Michaels catalogs the detailed descriptions of migraines dating all the way back to antiquity; gout, too, was the subject of lengthy notes by doctors and patients alike. Yet chest pain is not mentioned. Michaels therefore finds it “particularly unlikely” that angina pectoris, with its severe, terrifying pain continuing episodically for many years, could have gone unnoticed by the medical community, “if indeed it had been anything but exceedingly rare before the mid-eighteenth century.” XII

So it seems fair to say that at the height of the meat-and-butter-gorging eighteenth and nineteenth centuries, heart disease did not rage as it did by the 1930s. XIII

Ironically—or perhaps tellingly—the heart disease “epidemic” began after a period of exceptionally reduced meat eating. The publication of The Jungle, Upton Sinclair’s fictionalized exposé of the meatpacking industry, caused meat sales in the United States to fall by half in 1906, and they did not revive for another twenty years. In other words, meat eating went down just before coronary disease took off. Fat intake did rise during those years, from 1909 to 1961, when heart attacks surged, but this 12 percent increase in fat consumption was not due to a rise in animal fat. It was instead owing to an increase in the supply of vegetable oils, which had recently been invented.

Nevertheless, the idea that Americans once ate little meat and “mostly plants”—espoused by McGovern and a multitude of experts—continues to endure. And Americans have for decades now been instructed to go back to this earlier, “healthier” diet that seems, upon examination, never to have existed.

Most Mainstream Doctors Would Fail Nutrition

“A study in the International Journal of Adolescent Medicine and Health assessed the basic nutrition and health knowledge of medical school graduates entering a pediatric residency program and found that, on average, they answered only 52 percent of eighteen questions correctly. In short, most mainstream doctors would fail nutrition.”
~Dr. Will Cole

That is amazing. The point is emphasized by the fact that these are doctors fresh out of medical school. If they were never taught this information during the immediately preceding years of intensive education and training, they are unlikely to pick up much more of it later in their careers. These young doctors are among the best-educated people in the world; few fields are as hard to enter, and the drop-out rate of medical students is phenomenal. These graduates entering residency programs are among the smartest of Americans, the cream of the crop, having been taught at some of the best schools in the world. They are highly trained experts in their field, but obviously that expertise doesn’t include nutrition.

Think about this. Doctors are where most people turn for serious health advice. They are the ultimate authority figures that the average person directly meets and talks to. If a cardiologist answered only 52 percent of questions on heart health correctly, would you follow her advice and let her do heart surgery on you? I’d hope not. In that case, why would you listen to the dietary opinion of the typical, ill-informed doctor? Nutrition isn’t a minor part of health, that is for sure. It is the one area where an individual has some control over their life and so isn’t a mere victim of circumstance. Research shows that simple changes in diet and nutrition, not to mention lifestyle, can have dramatic results. Yet few people have that knowledge because most doctors and other officials, to put it bluntly, are ignorant. Anyone who points out this state of affairs in mainstream thought generally isn’t received with welcoming gratitude, much less friendly dialogue and rational debate.

In reading about the paleo diet, a pattern I’ve noticed is that few of its critics know what the diet is and what is advocated by those who adhere to it. It’s not unusual to see, following a criticism of the paleo diet, a description of dietary recommendations that are basically in line with the paleo diet. Their own caricature blinds them to the reality, obfuscating the common ground of agreement or shared concern. I’ve seen the same kind of pattern in the critics of many alternative views: genetic determinists against epigenetic researchers and social scientists, climate change denialists against climatologists, Biblical apologists against Jesus mythicists, Chomskyan linguists against linguistic relativists, etc. In such cases, there is always plenty of fear toward those posing a challenge, and so they are treated as the enemy to be attacked. And it is treated as a battle in which the spoils go to the victor, with those in dominance assuming they will be the victors.

After debating some people on a blog post by a mainstream doctor (Paleo-suckered), it became clear to me how attractive genetic determinism and biological essentialism are to many defenders of conventional medicine: the assumption that there isn’t much you can do about your health other than do what the doctor tells you and take your meds (these kinds of views may be on the decline, but they are far from down for the count). What bothers them isn’t limited to the paleo diet but extends seemingly to almost any diet as such, excluding official dietary recommendations. They see diet advocates as quacks, faddists, and cultists who are pushing an ideological agenda, and they feel like they are being blamed for their own ill health; from their perspective, it is unfair to tell someone they are capable of improving their diet, at least beyond the standard advice of eating your veggies and whole grains while gulping down your statins and shooting up your insulin.

As a side note, I’m reminded of how what often gets portrayed as alternative wasn’t always seen that way. Linguistic relativism was a fairly common view prior to the Chomskyan counter-revolution. Likewise, much of what gets promoted by the paleo diet was considered common sense in mainstream medical thought earlier in the last century and in the centuries prior (e.g., that carbs are fattening was easily observed back when most people lived on farms, as carbs were and still are how animals get fattened for slaughter). In many cases, these are old debates that go in cycles. But the cycles are so long, often extending over centuries, that old views appear as if radically new and so are easily dismissed as such.

Early Christian heresiologists admitted to the parallels that Jesus mythicism points to, but their only defense was that the devil did it, planting those parallels in prior religions. During the Enlightenment Age, many people kept bringing up these religious parallels and this was part of mainstream debate. Yet it was suppressed with the rise of literal-minded fundamentalism during the modern era. Then there is the battle between the Chomskyites, genetic determinists, etc. and their opponents, which is part of a cultural conflict that goes back at least to the ancient Greeks, to the divide between the approaches of Plato and Aristotle (Daniel Everett discusses this in Dark Matter of the Mind; see this post).

To return to the topic at hand, the notion of food as medicine, a premise of the paleo diet, also goes back to the ancient Greeks; in fact, it originates with the founder of modern medicine, Hippocrates (he is also credited with saying that “All disease begins in the gut,” a slight exaggeration of a common view about the importance of gut health, a key area of connection between the paleo diet and alternative medicine). What we now call functional medicine, treating people holistically, used to be the standard practice of family doctors for centuries and probably millennia, going back to medicine men and women. But this caring attitude and practice went by the wayside because it took time to spend with patients and insurance companies wouldn’t pay for it. Traditional healthcare that we now think of as alternative is maybe not possible within a for-profit model, but I’d say that is more of a criticism of the for-profit model than a criticism of traditional healthcare.

The dietary denialists love to dismiss the paleo lifestyle as a ‘fad diet’. But as Timothy Noakes argues, it is the least faddish diet around. It is based on research into what humans have been eating since the Paleolithic era and what hominids have been eating for millions of years. Even as a specific diet, it is among the earliest official dietary recommendations given by medical experts. Back when it was popularized, it was called the Banting diet, and the only complaint the medical authorities had was not that it was wrong but that it was right and they disliked it being promoted in the popular literature, as they considered dietary advice to be their turf to defend. Timothy Noakes wrote that,

“Their first error is to label LCHF/Banting ‘the latest fashionable diet’; in other words, a fad. This is wrong. The Banting diet takes its name from an obese 19th-century undertaker, William Banting. First described in 1863, Banting is the oldest diet included in medical texts. Perhaps the most iconic medical text of all time, Sir William Osler’s The Principles and Practice of Medicine , published in 1892, includes the Banting/Ebstein diet as the diet for the treatment of obesity (on page 1020 of that edition). 13 The reality is that the only non-fad diet is the Banting diet; all subsequent diets, and most especially the low-fat diet that the UCT academics promote, are ‘the latest fashionable diets’.”
(Lore of Nutrition, p. 131)

The dominant paradigm maintains its dominance by convincing most people that what is perceived as ‘alternative’ was always that way or was a recent invention of radical thought. The risk the dominant paradigm takes is that, in attacking other views, it unintentionally acknowledges and legitimizes them. That happened in South Africa when the government spent hundreds of thousands of dollars attempting to destroy the career of Dr. Timothy Noakes, but because he was such a knowledgeable expert he was able to defend his medical views with scientific evidence. A similar thing happened when the Chomskyites viciously attacked the linguist Daniel Everett who worked in the field with native tribes, but it turned out he was a better writer with more compelling ideas and also had the evidence on his side. What the dogmatic assailants ended up doing, in both cases, was bringing academic and public attention to these challengers to the status quo.

Even though these attacks don’t always succeed, they are successful in setting examples. Even a pyrrhic victory is highly effective in demonstrating raw power in the short term. Not many doctors would be willing to risk their career as Timothy Noakes did, and even fewer would have the capacity to defend themselves to such an extent. It’s not only the government that might go after a doctor but also private litigators. And if a doctor doesn’t toe the line, that doctor can lose their job in a hospital or clinic, be denied Medicare reimbursement, be blacklisted from speaking at medical conferences, and face many other forms of punishment. That is what many challengers found in too loudly disagreeing with Ancel Keys and his gang: they were effectively silenced and were no longer able to get funding to do research, even though the strongest evidence was on their side of the argument. Being shut out and becoming a pariah is not a happy place to be.

The establishment can be fearsome when they flex their muscles. And watch out when they come after you. The defenders of the status quo become even more dangerous precisely when they are at their weakest, like an injured and cornered animal that growls all the louder, and most people wisely keep their distance. But without fools to risk it all in testing whether the bark really is worse than the bite, nothing would change and the world would grind to a halt, as inertia settled into full authoritarian control. We are in such a time. I remember the era of Bush Jr. and the following time of rope-a-dope hope-and-change that we headed into. There was a palpable feeling of change in the air and I could viscerally sense the gears clicking into place. Something had irrevocably changed, and it wasn’t fundamentally about anything going on in the halls of power but about something within society and the culture. It made me feel gleeful at the time, like scratching the exact right spot where it itches: ah, there it is! Outwardly, the world more or less appeared the same, but the public mood had clearly shifted.

The bluntness of reactionary right-wingers is caused by the very fact that the winds of change are turning against them. That is why they praise the crude ridicule of wannabe emperor Donald Trump. What in the past could have been ignored by those in the mainstream no longer can be ignored. And after being ignored, the next step toward potential victory is being attacked, which can be mistaken for loss even as it offers the hope for reversal of fortune. Attacks come in many forms, with a few examples already mentioned. Along with ridicule, there is defamation, character assassination, scapegoating, and straw man arguments; allegations of fraud, quackery, malpractice, or deviancy. These are attacks as preemptive defense, in the hope of enforcing submission and silence. This only works for so long, though. The tide can’t be held back forever.

The establishment is under siege and they know it. Their only hope is to hold out long enough until the worst happens and they can drop the pretense and go full authoritarian. That is a risky gamble on their part and not likely to pay off, but it is the only hope they have of maintaining power. Desperation of mind breeds desperation of action. But it’s not as if a choice is being made. The inevitable result of a dominant paradigm is that it closes itself not only to all other possibilities but, more importantly, to even the imagination that something else is possible. Ideological realism becomes a reality tunnel. And insularity leads to intellectual laziness, as those who rule and those who support them have come to depend on a presumed authority as gatekeepers of legitimacy. What they don’t notice or don’t understand is the slow erosion of authority and hence the loss of what Julian Jaynes called authorization. Their need to be absolutely right is no longer matched by their capacity to enforce their increasingly rigid worldview, their fragile and fraying ideological dogmatism.

This is why challengers to the status quo are in a different position, making the contest rather lopsided. There is a freedom to being outside the constraints of mainstream thought. An imbalance of power, in some ways, works in favor of those excluded from power, since they have all the world to gain and little to lose, meaning less to defend. This shows in how outsiders, more easily than insiders, can often acknowledge where the other side is right and accept where points of commonality are to be found; that is to say, the challengers to power don’t have to be on the constant attack in the way that is required of defenders of the status quo (similar to how guerrilla fighters don’t have to defeat an empire, but simply have to not lose and wait it out). Trying to defeat ideological underdogs that have growing popular support is like the U.S. military trying to win a war in Vietnam or Afghanistan: they are on the wrong side of history. But systems of power don’t give up without a fight, and they are willing to sacrifice loads of money and many lives fighting losing battles, if only to keep the enemies at bay for yet another day. And the zombie ideas these systems are built on are not easily eliminated. That is because they are highly infectious mind viruses that can continue to spread long after the original vector of disease has disappeared.

As such, the behemoth medical-industrial complex won’t be making any quick turns toward internal reform. Changes happen over generations. And for the moment, this generation of doctors and other healthcare workers was primarily educated and trained under the old paradigm. It’s the entire world most of them know. The system is a victim of its own success, and so those working within the system are victimized again and again in their own indoctrination. It’s not some evil sociopathic self-interest that keeps the whole mess slogging along; after all, even doctors are suffering the same failed healthcare system as the rest of us and are dying of the same preventable diseases. All are sacrificed equally, all are food for the system’s hunger. When my mother brought my nephew for an appointment, the doctor was not trying to be a bad person when she made the bizarre and disheartening claim that all kids eat unhealthy diets and are sickly; i.e., there is nothing to be done about it, that is just the way kids are. Working within the failed system, that is all she knows. The idea that sickness isn’t or shouldn’t be the norm was beyond her imagination.

It is up to the rest of us to imagine new possibilities and, in some cases, to resurrect old possibilities long forgotten. We can’t wait for a system to change when that system is indifferent to our struggles and suffering. We can’t wait for a future time when most doctors are well-educated on treating the whole patient, when officials are well-prepared for understanding and tackling systemic problems. Change will happen, as so many have come to realize, from the bottom up. There is no other way. Until that change happens, the best we can do is to take care of ourselves and take care of our loved ones. That isn’t about blame. It’s about responsibility, that is to say the ability to respond; and more importantly, the willingness to do so.

* * *

Ketotarian
by Dr. Will Cole
pp. 15-16

With the Hippocratic advice to “let food be thy medicine, and medicine thy food,” how far have we strayed that the words of the founder of modern medicine can actually be threatening to conventional medicine?

Today medical schools in the United States offer, on average, only about nineteen hours of nutrition education over four years of medical school.10 Only 29 percent of U.S. medical schools offer the recommended twenty-five hours of nutrition education.11 A study in the International Journal of Adolescent Medicine and Health assessed the basic nutrition and health knowledge of medical school graduates entering a pediatric residency program and found that, on average, they answered only 52 percent of eighteen questions correctly.12 In short, most mainstream doctors would fail nutrition. So if you were wondering why someone in functional medicine, outside conventional medicine, is writing a book on how to use food for optimal health, this is why.

Expecting health guidance from mainstream medicine is akin to getting gardening advice from a mechanic. You can’t expect someone who wasn’t properly trained in a field to give sound advice. Brilliant physicians in the mainstream model of care are trained to diagnose a disease and match it with a corresponding pharmaceutical drug. This medicinal matching game works sometimes, but it often leaves the patient with nothing but a growing prescription list and growing health problems.

With the strong influence that the pharmaceutical industry has on government and conventional medical policy, it’s no secret that using foods to heal the body is not a priority of mainstream medicine. You only need to eat hospital food once to know this truth. Even more, under current laws it is illegal to say that foods can heal. That’s right. The words treat, cure, and prevent are in effect owned by the Food and Drug Administration (FDA) and the pharmaceutical industry and can be used in the health care setting only when talking about medications. This is the Orwellian world we live in today; health problems are on the rise even though we spend more on health care than ever, and getting healthy is considered radical and often labeled as quackery.

10. K. Adams et al., “Nutrition Education in U.S. Medical Schools: Latest Update of a National Survey,” Academic Medicine 85, no. 9 (September 2010): 1537-1542, https://www.ncbi.nlm.nih.gov/pubmed/9555760.
11. K. Adams et al., “The State of Nutrition Education at US Medical Schools,” Journal of Biomedical Education 2015 (2015), Article ID 357627, 7 pages, http://dx.doi.org/10.1155/2015/357627.
12. M. Castillo et al., “Basic Nutrition Knowledge of Recent Medical Graduates Entering a Pediatric Residency Program,” International Journal of Adolescent Medicine and Health: 357-361, doi: 10.1515/ijamh-2015-0019, https://www.ncbi.nlm.nih.gov/pubmed/26234947.

Scientific Failure and Self-Experimentation

In 2005, John P. A. Ioannidis published “Why Most Published Research Findings Are False” in the journal PLoS Medicine. It is the most cited paper in that journal’s history and has led to much discussion in the media. The paper presented a theoretical model, but its conclusions have since been well supported empirically, as Ioannidis explained in an interview with Julia Belluz:

“There are now tons of empirical studies on this. One field that probably attracted a lot of attention is preclinical research on drug targets, for example, research done in academic labs on cell cultures, trying to propose a mechanism of action for drugs that can be developed. There are papers showing that, if you look at a large number of these studies, only about 10 to 25 percent of them could be reproduced by other investigators. Animal research has also attracted a lot of attention and has had a number of empirical evaluations, many of them showing that almost everything that gets published is claimed to be “significant”. Nevertheless, there are big problems in the designs of these studies, and there’s very little reproducibility of results. Most of these studies don’t pan out when you try to move forward to human experimentation.

“Even for randomized controlled trials [considered the gold standard of evidence in medicine and beyond] we have empirical evidence about their modest replication. We have data suggesting only about half of the trials registered [on public databases so people know they were done] are published in journals. Among those published, only about half of the outcomes the researchers set out to study are actually reported. Then half — or more — of the results that are published are interpreted inappropriately, with spin favoring preconceptions of sponsors’ agendas. If you multiply these levels of loss or distortion, even for randomized trials, it’s only a modest fraction of the evidence that is going to be credible.”
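
To make that compounding concrete, here is a back-of-the-envelope sketch of my own (not Ioannidis’s exact figures), which simply multiplies the rough “about half” survival rate he cites at each stage: registration to publication, outcome reporting, and sound interpretation.

```python
# Back-of-the-envelope sketch of the compounding Ioannidis describes.
# The 0.5 figures are rough assumptions taken from his "about half"
# phrasing in the interview, not exact published estimates.

stages = {
    "registered trial gets published": 0.5,
    "published trial reports its pre-specified outcomes": 0.5,
    "reported results are interpreted without spin": 0.5,
}

credible_fraction = 1.0
for stage, survival in stages.items():
    credible_fraction *= survival
    print(f"after '{stage}': {credible_fraction:.0%} of trials remain credible")

# With roughly 50% loss at each of the three stages, only about one eighth
# of randomized trials end up as credible, well-reported, fairly
# interpreted evidence.
```

Under those rough assumptions, the surviving fraction is 0.5 × 0.5 × 0.5, or about 12 percent, which is what Ioannidis seems to mean by “only a modest fraction.”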

This is part of the replication crisis that has been known about for decades, although rarely acknowledged or taken seriously. And it is a crisis that isn’t limited to single studies. Ioannidis wrote that, “Possibly, the large majority of produced systematic reviews and meta-analyses are unnecessary, misleading, and/or conflicted” (from a paper reported on in the Pacific Standard). The crisis cuts across numerous fields, from economics and genetics to neuroscience and psychology. But to my mind, medical research stands out. Evidence-based medicine is only as good as the available evidence; it has been “hijacked to serve agendas different from what it originally aimed for,” as Ioannidis puts it. (A great book on this topic, by the way, is Richard Harris’ Rigor Mortis.) Studies done or funded by drug companies, for example, are more likely to come to positive results for efficacy and negative results for side effects. And because the government has severely decreased public funding since the Reagan administration, much research is now linked to big pharma. In a Retraction Watch interview, Ioannidis said:

“Since clinical research that can generate useful clinical evidence has fallen off the radar screen of many/most public funders, it is largely left up to the industry to support it. The sales and marketing departments in most companies are more powerful than their R&D departments. Hence, the design, conduct, reporting, and dissemination of this clinical evidence becomes an advertisement tool. As for “basic” research, as I explain in the paper, the current system favors PIs who make a primary focus of their career how to absorb more money. Success in obtaining (more) funding in a fiercely competitive world is what counts the most. Given that much “basic” research is justifiably unpredictable in terms of its yield, we are encouraging aggressive gamblers. Unfortunately, it is not gambling for getting major, high-risk discoveries (which would have been nice), it is gambling for simply getting more money.”

I’ve become familiar with this collective failure through reading on diet and nutrition. Some of the key figures in that field, specifically Ancel Keys, were either intentionally fraudulent or really bad at science. Yet the basic paradigm of dietary recommendations that was instituted by Keys remains in place. The fact that Keys was so influential demonstrates the sad state of affairs. Ioannidis has also covered this area and come to similar dire conclusions. Along with Jonathan Schoenfeld, he considered the question “Is everything we eat associated with cancer?”

“After choosing fifty common ingredients out of a cookbook, they set out to find studies linking them to cancer rates – and found 216 studies on forty different ingredients. Of course, most of the studies disagreed with each other. Most ingredients had multiple studies claiming they increased and decreased the risk of getting cancer. Most of the statistical evidence was weak, and meta-analyses usually showed much smaller effects on cancer rates than the original studies.”
(Alex Reinhart, What have we wrought?)

That is a serious and rather personal issue, not an academic exercise. There is so much research out there that is bad, or else confused and conflicting. It’s nearly impossible for the average person to wade through it all and come to a certain conclusion. Researchers and doctors are as mired in it as the rest of us. Doctors, in particular, are busy people who don’t typically read anything beyond short articles and literature reviews, and even those they likely only skim in spare moments. Besides, most doctors aren’t trained in research and statistics anyway. Even if they were better educated and informed, the science itself is in a far from optimal state and one can find all kinds of conclusions. Take the conflict between two prestigious British journals, the Lancet and the BMJ, the former arguing for statin use and the latter more circumspect. In the context of efficacy and side effects, the disagreement is over diverse issues and confounders involving cholesterol, inflammation, atherosclerosis, heart disease, etc., all of them overlapping.

Recently, my dad went to his doctor, who said that research in respectable journals strongly supported statin use. Sure, that is true. But the opposite is equally true, in that there are also respectable journals that don’t support wide use of statins. It depends on which journals one chooses to read. My dad’s doctor didn’t have the time to discuss the issue, as that is the nature of the US medical system. So, probably not wanting to get caught up in fruitless debate, the doctor agreed to my dad stopping statins and seeing what happens. With researchers failing to reach consensus, the patient is left to be a guinea pig in his own personal experiment. Because of the lack of good data, self-experimentation has become a central practice in diet and nutrition. There are so many opinions out there that, if one cares about one’s health, one is forced to try different approaches and find out what seems to work, even though this methodology is open to many pitfalls and hardly guarantees success. But the individual person dealing with a major health concern often has no other choice, at least not until the science improves.

This isn’t necessarily a reason for despair. At least a public debate is now happening. Ioannidis, among others, sees the solution as not being all that difficult (psychology, despite its own failings, might end up being key in improving research standards; and organizations are being set up to promote better standards, including the Nutrition Science Initiative started by the science journalist Gary Taubes, someone often cited by those interested in alternative health views). We simply need to require greater transparency and accountability in the scientific process. That is to say, science should be democratic. The failure of science is directly related to the failure seen in politics and economics, related to powerful forces of big money and other systemic biases. It is not so much a failure as it is a success toward ulterior motives. That needs to change.

* * *

Many scientific “truths” are, in fact, false
by Olivia Goldhill

Are most published research findings false?
by Erica Seigneur

The Decline Effect – Why Most Published Research Findings are False
by Paul Crichton

Beware those scientific studies—most are wrong, researcher warns
by Ivan Couronne

The Truthiness Of Scientific Research
by Judith Rich Harris

Is most published research really wrong?
by Geoffrey P Webb

Are Scientists Doing Too Much Research?
by Peter Bruce

Health From Generation To Generation

Traveling around the world, Weston A. Price visited numerous traditional communities, some of them hunter-gatherer and others agricultural, including some rural communities in Europe. This was earlier in the last century, when industrialization had yet to take hold in most places, a very different time in terms of diet, even in the Western world.

What he found was how healthy these people were, whether they consumed more or less meat, and whether or not they ate dairy, although none were vegetarian (the typical pre-agricultural diet was about one-third to two-thirds animal products, often with a large part of it saturated fat). The commonality is that they ate nutrient-dense foods, much of them raw, fermented, or prepared traditionally (the single most nutrient-dense food being organ meats). As a dentist, the first thing Price looked for was dental health. A common feature of these traditional societies was well-developed jaws and bone structure, straight uncrowded teeth, few cavities, facial symmetry, etc. These people never saw a dentist or orthodontist, didn’t brush or floss, and yet their teeth were in excellent condition into old age.

This obviously was not the case with Price’s own American patients, who didn’t follow a traditional diet and lifestyle. And when he visited prisons, he found that bone development and dental health were far worse, as indicators of worse general health and, by implication, worse neurocognitive health (on a related note, testing has shown that prisoners have higher rates of lead toxicity, which harms health in diverse ways). Between malnutrition and toxicity, it is unsurprising that there are so many mentally ill people housed in prisons, especially after psychiatric institutions were closed down.

Another early figure in researching diet and health was Francis M. Pottenger Jr., an American doctor. While working as a full-time assistant at a sanatorium, he did a study on cats. He fed some cats a raw food diet, some a cooked food diet, and another group got some of both. He too observed that the cooked food diet caused developmental problems of bone and dental structure. The results were worse than that, though. For the cats fed cooked food, the health of each subsequent generation declined even further. By the third generation, the cats didn’t reach adulthood. There was no generation after that.

I was reading about this at work. In my normal excitement about learning something new, I shared the info with a coworker, a guy who has some interest in health but is a conventional thinker. He immediately looked for reasons why it couldn’t be true, such as claiming that the many generations of cats kept as pets disprove Pottenger’s observations. Otherwise, so the argument goes, domestic cats would presumably have gone extinct by now.

That was easy to counter, considering most pet cats were born strays that ate raw food or were born to parents that were strays. As for purebred cats, I’m sure breeders have already figured out that a certain amount of raw food (or supplementation with the enzymes, microbes, etc. that normally would be found in raw food) is necessary for long-term feline health. Like processed human food, processed pet food is heavily fortified with added nutrients, which likely counteracts some of the negative consequences of a cooked food diet. Pottenger’s cats weren’t eating fortified cooked food, but neither were the cats fed raw food getting any extra nutrients.

The thing is that, prior to industrialization, food was never fortified. All the nutrients humans (and cats) needed not only to survive but to thrive were available in a traditional, natural diet. The fact that we have to fortify foods and take multivitamins is evidence of something severely wrong with the modern, industrialized food system. But that fortification only lessens the health problems slightly. Even Pottenger’s cats on a cooked food diet with some raw food added didn’t avoid severely decreased health. Considering the emerging health crisis, the same appears to be true of humans.

The danger we face is that the effects are cumulative across generations, the further we get from a traditional diet. We are only a few generations into the modern Western diet. Most humans were still consuming raw milk and other traditional foods not that long ago. Earlier in the last century, the majority of Americans were rural and had access to fresh organic food from gardens and farms, including raw milk from pastured cows and fertile eggs from pastured chickens (pastured meaning high in omega-3s).

Even living in a large city, one of my grandfathers kept rabbits and chickens for much of his life and kept a garden into his old age. That means my mother was raised with quite a bit of healthy food, as was my father living in a small town surrounded by farms. My brothers and I are the first generation in our family to eat a fully modern industrialized diet from childhood. And indeed, we have more mental/neurocognitive health problems than the generations before. I had a debilitating learning disorder diagnosed in elementary school and severe depression clearly showing in 7th grade, one brother had stuttering and anxiety attacks early on, and my oldest brother had severe allergies in childhood that went untreated for years and since then has had a host of ailments (also, at least one of my brothers and I have suspected undiagnosed Asperger’s or something like that, but such conditions weren’t being diagnosed when we were in school). One thing to keep in mind is that my brothers and I are members of the generation that received one of the highest dosages of lead toxicity in childhood, prior to environmental regulations limiting lead pollution; and research has directly and strongly correlated that to higher rates of criminality, suicide, homicide, aggressive behavior, impulse control problems, lowered IQ, and stunted neurocognitive development (also many physical health conditions).

The trend of decline seems to be continuing. My nieces and nephews eat almost nothing but heavily processed foods, way more than my brothers and I had in our own childhoods, and the produce they do eat is mostly from nutrient-depleted soil, along with being filled with farm chemicals and hormones — all of this having continuously worsened these past decades. They are constantly sick (often every few weeks) and, even though still in grade school, all have multiple conditions such as Asperger’s, learning disorder, obsessive-compulsion, failure to thrive, asthma, joint pain, etc.

If sugar was heroin, my nephew could be fairly called a junky (regularly devouring bags of candy and on more than one occasion eating a plain bowl of sugar; one step short of snorting powdered sugar and mainlining high fructose corn syrup). And in making these observations, I speak from decades of experience as a junkfood junky, most of all a sugar addict, though never quite to the same extreme. My nieces too have a tremendous intake of sugar and simple carbs, as their families’ vegetarianism doesn’t emphasize vegetables (since going on the paleo diet, I’ve been eating more organic nutrient-dense vegetables and other wholesome foods than my brothers and their families combined) — yet their diet fits well into the Standard American Diet (SAD) and, as the USDA suggests, they get plenty of grains. I wouldn’t be surprised if one or all of them already has pre-diabetes and likely will get diabetes before long, as is becoming common in their generation. The body simply can only take so much harm. I know the damage done to my own body and mind from growing up in this sick society and I hate to see even worse happening to the generations following.

To emphasize this point, the testing of newborn babies in the United States shows that they’ve already accumulated on average more than 200 synthetic chemicals from within the womb; and then imagine all the further chemicals they get from the breast milk of their unhealthy mothers along with all kinds of crap in formulas and in their environments (e.g., carcinogenic fire retardants that they breathe 24/7). Lead toxicity has decreased since my own childhood and that is a good thing, but thousands of new toxins and other chemicals have replaced it. On top of that, the hormones, hormone mimics, and hormone disruptors add to dysbiosis and disease — some suggesting this is a cause of puberty’s greater variance than in past generations, either coming earlier or later depending on gender and other factors (maybe partly explaining the reversal and divergence of educational attainment for girls and boys). Added to this mix, this is the first generation of human guinea pigs to be heavily medicated from childhood, much of it medications that have been shown to permanently alter neurocognitive development.

A major factor in many modern diseases is inflammation. This has many causes from leaky gut to toxicity, the former related to diet and often contributing to the latter (in how the leaky gut allows molecules to more easily cross the gut lining and get into the bloodstream where they can freely travel throughout the body — causing autoimmune disorders, allergies, asthma, rheumatoid arthritis, depression, etc). But obesity is another main cause of inflammation. And one might note that, when the body is overloaded and not functioning optimally, excess toxins are stored in fat cells — which makes losing weight even more difficult as toxins are released back into the body, and if not flushed out causing one to feel sick and tired.

It’s not simply bad lifestyle choices. We are living in unnatural and often outright toxic conditions. Many of the symptoms that we categorize as diseases are the body’s attempt to make the best of a bad situation. All of this adds up to dysfunction across society. Our healthcare system is already too expensive for most people to afford. And the largest part of public funding for healthcare is going to diabetes alone. But the saddest part is the severe decrease in quality of life, as the rate of mood and personality disorders skyrockets. It’s not just diet. For whatever reason (toxins? stress?), with greater urbanization have come greater levels of schizophrenia and psychosis. And autism, a rare condition in the past, has become highly prevalent (by the way, one of the treatments shown to be effective for autism is a paleo/keto diet, which is also effective for autoimmune conditions among much else).

It’s getting worse and worse, generation after generation. Imagine what this means in terms of epigenetics and transgenerational trauma, as nutritional deficits and microbiotic decimation accumulate, exacerbated by a society driven mad through inequality and instability, stress and anxiety. If not for the nutrients added to our nutrient-poor food and the supplements added to our unhealthy diet, we’d already be dying out as a society and our civilization would’ve collapsed along with it (maybe similar to how some conjecture the Roman Empire weakened as lead toxicity increased in the population). Under these conditions, that children are our future may not be an affirmation of hope. Nor may these children be filled with gratitude once they’ve reached adulthood and come to realize what we did to them and to the world we left them. On the other hand, we aren’t forced to embrace fatalism and cynicism. We already know what to do to turn all of these problems around. And we don’t lack the money or other resources to do what needs to be done. All that we are waiting for is public demand and political will, although that might first require our society reaching a point of existential crisis… we are getting close.

The stumbling block is that there is no profit in the ‘healthcare’ industry for advocating, promoting, incentivizing, and ensuring a healthy diet and healthy conditions for a healthy population. Quite the opposite. If disease profiteering were made illegal, there would be trillions of dollars of lost profit every year. Disease is the reality of capitalist realism, a diseased economic system and social order. This collective state of sickliness has become the norm, and vested interests will go to great lengths to defend the status quo. But most of those who benefit from the dysfunctional and destructive system never have to give it much thought. When my mother brought my nephew to the doctor, she pointed out how he is constantly sick and constantly eating a poor diet. The doctor’s response was that this is ‘normal’ for kids (these days), which might be true, but the doctor should be shocked and shamed by that admission. As apathy takes hold and we lose a sense of hope, low standards fall ever lower.

We can’t rely upon the established authority figures in seeking better health for ourselves, our families, and our communities. We know what we need to do. It might not be easy to make such massive changes when everything in society is going against you. And no doubt it is more expensive to eat healthy when the unhealthiest foods (e.g., high fructose corn syrup) are being subsidized by the government. It’s no accident that buying off the dollar menu at a fast food restaurant is cheaper than cooking a healthy meal at home. Still, if you are willing to go to the effort (and it is worth the effort), a far healthier diet is possible for many within a limited budget. That is assuming you don’t live in a food desert. But even in that case, there is a movement to create community gardens in poor neighborhoods, people providing for themselves what neither the government nor the economy will provide.

Revolutions always begin from the bottom up. Or, failing that, the foundations of our society will crumble as the health of our citizenry declines. It’s a decision we must make, individually and collectively: a choice between two divergent paths leading to separate possible futures. Though we have so far chosen suicidal self-destruction, we remain free to choose the other option. As Thomas Paine said, “We have it in our power to begin the world over again.”

* * *

Primal Nutrition
by Ron Schmid, ND
pp. 99-100

Parallels Between Pottenger’s and Price’s Work

While the experiments of McCarrison and Pottenger show the value of raw foods in keeping animals remarkably healthy, one might wonder about the relevance to human needs. Cats are carnivores, humans omnivores, and while the animals’ natural diet is raw, humans have cooked some foods for hundreds of thousands of years. But humans, cats, and guinea pigs are all mammals. And while the human diet is omnivorous, foods of animal origin (some customarily eaten raw) have always formed a substantial and essential part of it.

Problems in cats eating cooked foods provided parallels with the human populations Weston Price studied; the cats developed the same diseases as humans eating refined foods. The deficient generation of cats developed the same dental malformations that children of people eating modernized foods developed, including narrowing of dental arches with attendant crowding of teeth, underbites and overbites, and protruding and crooked teeth. The shape of the cat’s skull and even the entire skeleton became abnormal in severe cases, with concomitant marked behavioral changes.

Price observed these same physical and behavioral changes in both native and modern cultures eating refined foods. These changes accompanied the adoption by a culture of refined foods. In native cultures eating entirely according to traditional wisdom resulted in strength of character and relative freedom from the moral problems of modern cultures. In modern cultures, studies of populations of prisons, reformatories, and homes for the mentally delayed revealed that a large majority of individuals residing there (often approaching 100 percent) had marked abnormalities of the dental arch, often with accompanying changes in the shape of the skull.

This was not coincidence; thinking is a biological process, and abnormal changes in the shape of the skull from one generation to the next can contribute to changes in brain functions and thus in behavior. The behavioral changes in deficient cats were due to changes in nutrition. This was the only variable in Pottenger’s carefully controlled experiments. As with physical degenerative changes, parallels with human populations cannot help but suggest themselves, although the specific nature of the relationship is beyond the scope of this discussion.

Human beings do not have the same nutritional requirements as cats, but whatever else each needs, there is strong empirical evidence that both need a significant amount of certain high-quality raw foods to reproduce and function efficiently.

pp. 390-393

Certain groups of these cats were fed quality, fresh, undenatured food and others were fed varying degrees of denatured and processed food, then the effects were observed over several generations. The results from the inferior diets were not so startling for the first-generation animals but markedly and progressively so in subsequent generations. From the second generation on, the cats that were fed processed and denatured diets showed increasing levels of structural deformities, birth defects, stress-driven behaviors, vulnerability to illness, allergies, reduced learning ability, and, finally, major reproductive problems. When Pottenger attempted to reverse the effects in the genetically weakened and vulnerable later-generation animals with greatly improved diet, he found it took fully four generations for the cats to return to normal.

The reflections that Pottenger’s work casts on the health issues and dietary habits of modern-day society are glaring and inescapable. […]

Pottenger’s work has shown us that progressive generations with poor dietary habits result in increasingly more vulnerable progeny and that each subsequent generation with unhealthy dietary habits results in impaired resistance to disease, increasingly poor health and vitality, impaired mental and cognitive health, and impaired capacity to reproduce. It is all part of what we are seeing in our epidemic levels of poor health and the overwhelming rates of autism, violence, attentional disorders, childhood (and adult) behavioral problems, mental illness, fertility issues, and birth defects.