It would be easy to mistake this writing for a carnivore’s rhetoric against the evils of grains and agriculture. I’m a lot more agnostic on the issue than it might seem. But I do come across as strongly opinionated, the result of decades of personal experience with bad eating habits and their consequences; and my dietary habits were no better when I was a vegetarian.
I’m not so much pro-meat as I am pro healthy fats and oils, not only from animal sources but also from plants, with coconut oil and olive oil being two of my favorites. As long as you are getting adequate protein, from whatever source (including vegetarian foods), there is no absolute rule about where it must come from. But hunter-gatherers on average eat more fats and oils than protein (and more than vegetables as well), whether the protein comes from meat or from seeds and nuts (though the protein and vegetables they get are of extremely high quality and, of course, nutrient dense, along with much fiber). Too much protein with too little fat or oil causes rabbit starvation. It’s fat and oil that have higher satiety and, combined with low-carb ketosis, are amazing at eliminating food cravings, addictions, and over-eating.
Besides, I have nothing against plant-based foods. I eat more vegetables on the paleo diet than I did in the past, even when I was a vegetarian, and more than any vegetarian I know; not just more in quantity but also in quality. Many paleo and keto dieters have embraced a plant-based diet with varying attitudes about meat and fat. Dr. Terry Wahls, a former vegetarian, reversed her symptoms of multiple sclerosis by formulating a paleo diet that includes massive loads of nutrient-dense vegetables, while adding in nutrient-dense animal foods as well (e.g., liver).
I’ve picked up three books lately that emphasize plants even further. One is The Essential Vegetarian Keto Cookbook, which is pretty much as the title describes: mostly recipes, with some introductory material about ketosis. Another, Ketotarian by Dr. Will Cole, is likewise about keto vegetarianism, but with leniency toward fish consumption and ghee (the former not strictly vegetarian and the latter not strictly paleo). The most recent I got is The Paleo Vegetarian Diet by Dena Harris, another person with a lenient attitude toward diet. That is what I prefer in my tendency toward ideological impurity. About diet, I’m bi-curious or maybe multi-curious.
My broader perspective is that of traditional foods. This is largely based on the work of Weston A. Price, which I was introduced to long ago by way of the writings of Sally Fallon Morell (formerly Sally Fallon). It is not a paleo diet in that agricultural foods are allowed, but its advocates share with paleolists a common valuing of traditional nutrition and food preparation. Authors from both camps bond over their respect for Price’s work and often reference those on the other side in their writings. I’m of the opinion, in line with traditional foods, that if you are going to eat agricultural foods then traditional preparation is all the more important (from long-fermented bread and fully soaked legumes to cultured dairy and raw aged cheese). Many paleolists share this opinion and some are fine with such things as ghee. My paleo commitment didn’t stop me from enjoying a white roll for Thanksgiving, adorned with organic goat butter, and it didn’t kill me.
I’m not so much arguing against all grains in this post as I’m pointing out the problems found at the extreme end of dietary imbalance that we’ve reached this past century: industrialized and processed, denatured and toxic, grain-based/obsessed and high-carb-and-sugar. In the end, I’m a flexitarian who has come to see the immense benefits in the paleo approach, but I’m not attached to it as a belief system. I heavily weigh the best evidence and arguments I can find in coming to my conclusions. That is what this post is about. I’m not trying to tell anyone how to eat. I hope that heads off certain areas of potential confusion and criticism. So, let’s get to the meat of the matter.
Grain of Truth
Let me begin with a quote, share some related info, and then circle back around to putting the quote into context. The quote is from Grain of Truth by Stephen Yafa. It’s a random book I picked up at a secondhand store and my attraction to it was that the author is defending agriculture and grain consumption. I figured it would be a good balance to my other recent readings. Skimming it, one factoid stuck out. In reference to new industrial milling methods that took hold in the late 19th century, he writes:
“Not until World War II, sixty years later, were measures taken to address the vitamin and mineral deficiencies caused by these grain milling methods. They caught the government’s attention only when 40 percent of the raw recruits drafted by our military proved to be so malnourished that they could not pass a physical and were declared unfit for duty.” (p. 17)
That is remarkable. He is talking about the now infamous highly refined flour, something that never existed before. Even commercial whole wheat breads today, with some fiber added back in, have little in common with what was traditionally made for millennia. My grandparents were of that particular generation that was so severely malnourished, and so that was the world into which my parents were born. The modern health decline that has gained mainstream attention began many generations back. Okay, so put that on the backburner.
Against the Grain
In a post by Dr. Malcolm Kendrick, I was having a discussion in the comments section (and, at the same time, I was having a related discussion on my own blog). Göran Sjöberg brought up James C. Scott’s book about the development of agriculture, Against the Grain — writing that, “This book is very much about the health deterioration, not least through epidemics partly due to compromised immune resistance, that occurred in the transition from hunting and gathering to sedentary mono-crop agriculture state level scale, first in Mesopotamia about five thousand years ago.”
Scott’s view has interested me for a while. I find compelling the way he connects grain farming, legibility, record-keeping, and taxation. There is a reason great empires were built on grain fields, not on potato patches or vegetable gardens, much less cattle ranching. Grain farming is easily observed and measured, tracked and recorded, and that meant it could be widely taxed to fund large centralized governments along with their armies and, later on, their police forces and intelligence agencies. The earliest settled societies arose prior to agriculture, but they couldn’t become major civilizations until the cultivation of grains.
Another commenter, Sasha, responded with what she considered important qualifications: “I think there are too many confounders in transition from hunter gatherers to agriculture to suggest that health deterioration is due to one factor (grains). And since it was members of upper classes who were usually mummified, they had vastly different lifestyles from that of hunter gatherers. IMO, you’re comparing apples to oranges… Also, grain consumption existed in hunter gatherers and probably intensified long before Mesopotamia 5 thousands years ago as wheat was domesticated around 9,000 BCE and millet around 6,000 BCE to use just two examples.”
It is true that pre-neolithic hunter-gatherers, in some cases, sporadically ate grains in small amounts, or at least we have evidence they were doing something with grains, though as far as we know they might have been mixing them with medicinal herbs or using them as a thickener for paints; it’s anyone’s guess. Assuming they were eating those traces of grains we’ve discovered, it surely was nowhere near the level of the neolithic agriculturalists. Furthermore, during the following millennia, grains were radically changed through cultivation. As for the Egyptian elite, they were eating more grains than anyone, since farmers were still forced to partly subsist on hunting, fishing, and gathering.
I’d take the argument much further forward into history. We know from records that, through the 19th century, Americans were eating more meat than bread. Vegetable and fruit consumption was also relatively low and mostly seasonal. Part of that is because gardening was difficult with so many pests. Besides, with so many natural areas around, hunting and gathering remained a large part of the American diet. Even in the cities, wild game was easily obtained at cheap prices. Into the 20th century, hunting and gathering was still important and sustained many families through the Great Depression and World War era when many commercial foods were scarce.
It was different in Europe, though. Mass urbanization happened centuries before it did in the United States. And not much European wilderness was left standing in recent history. But with the fall of the Roman Empire and heading into feudalism, many Europeans returned to a fair amount of hunting and gathering, during which time general health improved in the population. Restrictive laws about land use eventually made that difficult, and the land enclosure movement made it impossible for most Europeans.
Even so, all of that is fairly recent in the big scheme of things. It took many millennia of agriculture before it more fully replaced hunting, fishing, trapping, and gathering. In places like the United States, that change is well within living memory. When some of my ancestors immigrated here in the 1600s, Britain and Europe still procured plenty of wild foods to support their populations. And once here, wild foods were even more plentiful and a lot less work than farming.
Many early American farmers didn’t grow food so much for their own diet as for sale on the market, sometimes in the form of the popular grain-based alcohols. It was by making alcohol that rural farmers were able to get their product to market without it spoiling. I’m just speculating, but alcohol might have been the most widespread agricultural food of that era, because water was often unsafe to drink.
Another commenter, Martin Back, made the same basic point: “Grain these days is cheap thanks to Big Ag and mechanization. It wasn’t always so. If the fields had to be ploughed by draught animals, and the grain weeded, harvested, and threshed by hand, the final product was expensive. Grain became a store of value and a medium of exchange. Eating grains was literally like eating money, so presumably they kept consumption to a minimum.”
In early agriculture, grain was more of a way to save wealth than a staple of the diet. It was saved for purposes of trade and also saved for hard times when no other food was available. What didn’t happen was to constantly consume grain-based foods every day and all day long — going from a breakfast with toast and cereal to lunch with a sandwich and maybe a salad with croutons, and then a snack of crackers in the afternoon before eating more bread or noodles for dinner.
So, I am partly just speculating. But it’s informed speculation, and I base my view on specific examples. The most obvious example is hunter-gatherers, poor by the standards of modern industrialization while maintaining great health, as long as their traditional way of life can be maintained. Many populations that are materially better off in terms of a capitalist society (access to comfortable housing, sanitation, healthcare, an abundance of food in grocery stores, etc.) are not better off in terms of chronic diseases.
As the main example I already mentioned, poor Americans have often been a quite healthy lot, as compared to other populations around the world. It is true that poor Americans weren’t particularly healthy in the early colonial period, specifically in Virginia because of indentured servitude. And it’s true that poor Americans today are fairly bad off because of the cheap industrialized diet. Yet for the couple of centuries or so in between, they were doing quite well in terms of health, with lots of access to nutrient-dense wild foods. That point is emphasized by looking at other similar populations at the time, such as back in Europe.
Let’s do some other comparisons. The poor in the Roman Empire did not do well, even when they weren’t enslaved. That was for many reasons, such as growing urbanization and its attendant health risks. When the Roman Empire fell, many of the urban centers collapsed. The poor returned to a more rural lifestyle that depended on a fair amount of wild foods. Studies done on their remains show their health improved during that time. Then at the end of feudalism, with the enclosure movement and the return of mass urbanization, health went back on a decline.
Now I’ll consider the early Egyptians. I’m not sure if there is any info about the diet and health of poor Egyptians. But clearly the ruling class had far from optimal health. It’s hard to make comparisons between then and now, though, because it was an entirely different kind of society. The early Bronze Age civilizations were mostly small city-states that lacked much hierarchy. Early Egypt didn’t even have the most basic infrastructure, such as maintained roads and bridges. And the most recent evidence indicates that the pyramid workers weren’t slaves but instead worked freely and seem to have been fed fairly well, whatever that may or may not indicate about their socioeconomic status. The fact that the poor weren’t mummified leaves us with scant evidence that would more directly inform us.
On the other hand, no one can doubt that there have been plenty of poor populations who had truly horrific living standards with much sickness, suffering, and short lifespans. That is particularly true over the millennia as agriculture became ever more central, since that meant periods of abundance alternating with periods of deficiency and sometimes starvation, often combined with weakened immune systems and rampant sickness. That was less the case for the earlier small city-states with less population density and surrounded by the near constant abundance of wilderness areas.
As always, it depends on what are the specifics we are talking about. Also, any comparison and conclusion is relative.
My mother grew up in a family that hunted, and at the time many Americans had a certain amount of access to natural areas, something that helped a large part of the population get through the Great Depression and world war era. Nonetheless, by the time of my mother’s childhood, overhunting had depleted most of the wild game (bison, bear, deer, etc. were no longer around), and so her family relied on less desirable foods such as squirrel, raccoon, and opossum; even the fish they ate were less than optimal, coming from waters polluted by the very factories and railroad her family worked in. So the wild food opportunities weren’t nearly as good as they had been a half century earlier, much less in the prior centuries.
Not All Poverty is the Same
Being poor today means a lot of things that it didn’t mean in the past. The high rates of heavy metal toxicity today were rarely seen among previous poor populations. Today 40% of global deaths are caused by air pollution, primarily affecting the poor, which is also extremely different from the past. Beyond that, inequality has grown larger than ever before, and it is strongly correlated with high rates of stress, disease, homicide, and suicide. Such inequality is also seen in terms of climate change, droughts, refugee crises, and war and occupation.
Here is what Sasha wrote in response to me: “I agree with a lot of your points, except with your assertion that “the poor ate fairly well in many societies especially when they had access to wild sources of food”. I know how the poor ate in Russia in the beginning of the 20th century and how the poor eat now in the former Soviet republics and in India. Their diet is very poor even though they can have access to wild sources of food. I don’t know what the situation was for the poor in ancient Egypt but I would be very surprised if it was better than in modern day India or former Soviet Union.”
I’d imagine modern Russia has high inequality similar to the US. As for modern India, it is one of the most impoverished, densely populated, and malnourished societies around. And modern industrialization did major harm to Hindu Indians, since studies show that traditional vegetarians got a fair amount of nutrients from the insects mixed in with pre-modern agricultural goods. Both Russia and India have other problems related to neoliberalism that weren’t factors in the past. It’s an entirely different kind of poverty these days. Even if some Russians have some access to wild foods, I’m willing to bet they have nowhere near the access that was available in previous generations, centuries, and millennia.
Compare modern poverty to that of feudalism. At least in England, feudal peasants were guaranteed to be taken care of in hard times. The Church, a large part of local governance at the time, was tasked with feeding and taking care of the poor and needy, from orphans to widows. They were tight communities that took care of their own, something that no longer exists in most of the world where the individual is left to suffer and struggle. Present Social Darwinian conditions are not the norm for human societies across history. The present breakdown of families and communities is historically unprecedented.
Socialized Medicine & Externalized Costs
An Invisible Debt Made Visible
On Conflict and Stupidity
Inequality in the Anthropocene
Capitalism as Social Control
The Abnormal Norms of WEIRD Modernity
Everything about present populations is extremely abnormal. This is seen in diet as elsewhere. Let me return to the quote I began this post with. “Not until World War II, sixty years later, were measures taken to address the vitamin and mineral deficiencies caused by these grain milling methods. They caught the government’s attention only when 40 percent of the raw recruits drafted by our military proved to be so malnourished that they could not pass a physical and were declared unfit for duty.” * So, what had happened to the health of the American population?
Well, there were many changes. Overhunting, as I already said, made many wild game species extinct or eliminated them from local areas, such that my mother, born in a rural farm state, never saw a white-tailed deer growing up. Also, much earlier, after the Civil War, a new form of enclosure movement happened as laws were passed to prevent people, specifically the then-free blacks, from hunting and foraging wherever they wanted (early American laws often protected the right of anyone to hunt, forage plants, collect timber, etc. from any land that was left open, whether or not it was owned by someone). The carryover from the feudal commons was finally and fully eliminated. It was also the end of the era of free-range cattle ranching, the end having come with the invention of barbed wire. Access to wild foods was further reduced by the creation and enforcement of protected lands (e.g., the federal park system), which very much targeted the poor who up to that point had relied upon wild foods for health and survival.
All of that was combined with mass urbanization and industrialization, with all their new forms of pollution, stress, and inequality. Processed foods were becoming more widespread at the time. Around the turn of the century, unhealthy industrialized vegetable oils became heavily marketed and hence popular, replacing butter and lard. Also, muckraking about the meat industry scared Americans away from meat, and consumption precipitously dropped. As such, in the decades prior to World War II, the American diet had already shifted toward what we know now. A new young generation had grown up on that industrialized and processed diet, and those young people were the ones showing up as recruits for the military. This new diet, in such a short period, had caused mass malnourishment. It was a mass experiment that showed failure early on, and yet we continue the same basic experiment, not only continuing it but making it far worse.
Government officials and health authorities blamed it on bread production. Refined flour had become widely available because of industrialization, and refining removed all the nutrients that gave bread any health value. In response, there was a movement to fortify bread, initially enforced by federal law and later by state laws. That helped some, but obviously the malnourishment was caused by many other factors that weren’t appreciated by most at the time, even though this was the same period when Weston A. Price’s work was published. Nutritional science was young then, and most nutrients were still undiscovered or else unappreciated. Throwing a few lab-produced vitamins back into food barely scratches the surface of the nutrient density that was lost.
Most Americans continue to have severe nutritional deficiencies. We don’t recognize this fact because being underdeveloped and sickly has become normalized, maybe even in the minds of most doctors and health officials. Besides, many of the worst symptoms don’t show up until decades later, often as chronic diseases of old age, although increasingly seen among the young. Far fewer Americans today would meet the health standards of World War recruits. It’s been a steady decline, despite the miracles of modern medicine in treating symptoms and delaying death.
* The data on the British show an even earlier shift toward malnourishment, because imperial trade brought an industrialized diet to the British population sooner. Also, rural life with a greater diet of wild foods had disappeared more quickly there than in the US. The fate of the British in the late 1800s showed what would happen more than a half century later on the other side of the ocean.
Lore of Nutrition
by Tim Noakes
The mid-Victorian period between 1850 and 1880 is now recognised as the golden era of British health. According to P. Clayton and J. Rowbotham, this was entirely due to the mid-Victorians’ superior diet. Farm-produced real foods were available in such surplus that even the working-class poor were eating highly nutritious foods in abundance. As a result, life expectancy in 1875 was equal to, or even better than, it is in modern Britain, especially for men (by about three years). In addition, the profile of diseases was quite different when compared to Britain today.
The authors conclude:
[This] shows that medical advances allied to the pharmaceutical industry’s output have done little more than change the manner of our dying. The Victorians died rapidly of infection and/or trauma, whereas we die slowly of degenerative disease. It reveals that with the exception of family planning, the vast edifice of twentieth century healthcare has not enabled us to live longer but has in the main merely supplied methods of suppressing the symptoms of degenerative disease which have emerged due to our failure to maintain mid-Victorian nutritional standards.
This mid-Victorians’ healthy diet included freely available and cheap vegetables such as onions, carrots, turnips, cabbage, broccoli, peas and beans; fresh and dried fruit, including apples; legumes and nuts, especially chestnuts, walnuts and hazelnuts; fish, including herring, haddock and John Dory; other seafood, including oysters, mussels and whelks; meat – which was considered ‘a mark of a good diet’ so that ‘its complete absence was rare’ – sourced from free-range animals, especially pork, and including offal such as brain, heart, pancreas (sweet breads), liver, kidneys, lungs and intestine; eggs from hens that were kept by most urban households; and hard cheeses.
Their healthy diet was therefore low in cereals, grains, sugar, trans fats and refined flour, and high in fibre, phytonutrients and omega-3 polyunsaturated fatty acids, entirely compatible with the modern Paleo or LCHF diets.
This period of nutritional paradise changed suddenly after 1875, when cheap imports of white flour, tinned meat, sugar, canned fruits and condensed milk became more readily available. The results were immediately noticeable. By 1883, the British infantry was forced to lower its minimum height for recruits by three inches; and by 1900, 50 per cent of British volunteers for the Boer War had to be rejected because of undernutrition. The changes would have been associated with an alteration in disease patterns in these populations, as described by Yellowlees (Chapter 2).
On Obesity and Malnourishment
There is no contradiction, by the way, between rampant nutritional deficiencies and the epidemic of obesity. Gary Taubes noted that the dramatic rise of obesity in America began early in the last century, which is to say that it is not a problem that came out of nowhere with the present younger generations. Americans have been getting fatter for a while now. Specifically, they were getting fatter while at the same time being malnourished, partly because of refined flour, as empty a carb as is possible.
Taubes emphasizes the point that this seeming paradox has often been observed among poor populations around the world: a lack of optimal nutrition that leads to ever more weight gain, sometimes with children being skinny to an unhealthy degree only to grow up to be fat. No doubt many Americans in the early 1900s were dealing with much poverty and the lack of nutritious foods that often goes with it. As for today, nutritional deficiencies are different because of enrichment, but they persist nonetheless in many other ways. Also, as Keith Payne argues in The Broken Ladder, growing inequality mimics poverty in the conflict and stress it causes. And inequality has everything to do with food quality, as seen with many poor areas being food deserts.
I’ll give you a small taste of Taubes’s discussion. It is from the introduction to one of his books, published a few years ago. If you read the book, look at the section immediately following the passage below. He gives examples of tribes that were poor, didn’t overeat, and did hard manual labor. Yet they were getting obese, even as nearby tribes sometimes remained at a healthy weight. The only apparent difference was what they were eating, not how much they were eating. The populations that saw major weight gain had adopted a grain-based diet, typically because of government rations or government stores.
Why We Get Fat
by Gary Taubes
In 1934, a young German pediatrician named Hilde Bruch moved to America, settled in New York City, and was “startled,” as she later wrote, by the number of fat children she saw—“really fat ones, not only in clinics, but on the streets and subways, and in schools.” Indeed, fat children in New York were so conspicuous that other European immigrants would ask Bruch about it, assuming that she would have an answer. What is the matter with American children? they would ask. Why are they so bloated and blown up? Many would say they’d never seen so many children in such a state.
Today we hear such questions all the time, or we ask them ourselves, with the continual reminders that we are in the midst of an epidemic of obesity (as is the entire developed world). Similar questions are asked about fat adults. Why are they so bloated and blown up? Or you might ask yourself: Why am I?
But this was New York City in the mid-1930s. This was two decades before the first Kentucky Fried Chicken and McDonald’s franchises, when fast food as we know it today was born. This was half a century before supersizing and high-fructose corn syrup. More to the point, 1934 was the depths of the Great Depression, an era of soup kitchens, bread lines, and unprecedented unemployment. One in every four workers in the United States was unemployed. Six out of every ten Americans were living in poverty. In New York City, where Bruch and her fellow immigrants were astonished by the adiposity of the local children, one in four children were said to be malnourished. How could this be?
A year after arriving in New York, Bruch established a clinic at Columbia University’s College of Physicians and Surgeons to treat obese children. In 1939, she published the first of a series of reports on her exhaustive studies of the many obese children she had treated, although almost invariably without success. From interviews with her patients and their families, she learned that these obese children did indeed eat excessive amounts of food—no matter how much either they or their parents might initially deny it. Telling them to eat less, though, just didn’t work, and no amount of instruction or compassion, counseling, or exhortations—of either children or parents—seemed to help.
It was hard to avoid, Bruch said, the simple fact that these children had, after all, spent their entire lives trying to eat in moderation and so control their weight, or at least thinking about eating less than they did, and yet they remained obese. Some of these children, Bruch reported, “made strenuous efforts to lose weight, practically giving up on living to achieve it.” But maintaining a lower weight involved “living on a continuous semi-starvation diet,” and they just couldn’t do it, even though obesity made them miserable and social outcasts.
One of Bruch’s patients was a fine-boned girl in her teens, “literally disappearing in mountains of fat.” This young girl had spent her life fighting both her weight and her parents’ attempts to help her slim down. She knew what she had to do, or so she believed, as did her parents—she had to eat less—and the struggle to do this defined her existence. “I always knew that life depended on your figure,” she told Bruch. “I was always unhappy and depressed when gaining [weight]. There was nothing to live for.… I actually hated myself. I just could not stand it. I didn’t want to look at myself. I hated mirrors. They showed how fat I was.… It never made me feel happy to eat and get fat—but I never could see a solution for it and so I kept on getting fatter.”
If we look in the literature—which the experts have not in this case—we can find numerous populations that experienced levels of obesity similar to those in the United States, Europe, and elsewhere today but with no prosperity and few, if any, of the ingredients of Brownell’s toxic environment: no cheeseburgers, soft drinks, or cheese curls, no drive-in windows, computers, or televisions (sometimes not even books, other than perhaps the Bible), and no overprotective mothers keeping their children from roaming free.
In these populations, incomes weren’t rising; there were no labor-saving devices, no shifts toward less physically demanding work or more passive leisure pursuits. Rather, some of these populations were poor beyond our ability to imagine today. Dirt poor. These are the populations that the overeating hypothesis tells us should be as lean as can be, and yet they were not.
Remember Hilde Bruch’s wondering about all those really fat children in the midst of the Great Depression? Well, this kind of observation isn’t nearly as unusual as we might think.
How Americans Used to Eat
Below is a relevant passage from Nina Teicholz’s book, The Big Fat Surprise. It puts into context how extremely unusual the high-carb, low-fat diet of these past few generations has been. This is partly what informed some of my thoughts. We so quickly forget that the present dominance of a grain-based diet wasn’t always the case, likely not even in most agricultural societies until quite recently. In fact, the earlier American diet is still within living memory, although those left to remember it are quickly dying off.
Let me explain why the history of diets matters. One of the arguments for forcing official dietary recommendations onto the entire population was the belief that Americans in a mythical past ate less meat, fat, and butter while eating more bread, legumes, and vegetables. This turns out to have been a trick of limited data.
We now know, from better data, that the complete opposite was the case. And further data shows that the rise of the conventional diet has coincided with a rise in obesity and chronic diseases. That isn’t to say eating more vegetables is bad for your health, but we do know that even as the average American intake of vegetables has gone up, so have all the diet-related health conditions. During this time, what went down was the consumption of all the traditional foods of the American diet going back to the colonial era: wild game, red meat, organ meat, lard, and butter — all the foods Americans ate in huge amounts prior to the industrialized diet. I noticed a couple of comments about this. Someone going by the username kaakitwitaasota shared data:
“Robert J. Gordon’s The Rise and Fall of American Growth devotes a section to changes in the American diet. In 1870, the average American ate 74 pounds of beef and veal, 123 pounds of pork, 15 pounds of chicken and turkey and a pound of lamb, for a grand total of 212 pounds of meat a year, falling to 141 pounds by 1940. 1940 represents a mid-century nadir, because meat consumption climbs again after the war: in 2010 the average American ate just barely less meat than in 1870, but much, much more of it was poultry (about 60 pounds of beef, 47 of pork, a pound of lamb and 96 pounds of chicken and turkey).
“Meat consumption was low in the rest of the world, but the comparative prosperity of the US, though abysmal by the standards of today, was borne out in a high consumption of meat. (Though, admittedly, much of that meat was not bought–the American farmer had much more land than his European counterpart, and pigs will generally look after themselves given the space.)
“I wonder about nut and fish consumption, though. Fish + shellfish consumption was only about 11 pounds a person in 1940, and I doubt it was much higher in 1870, though I don’t know for certain.”
Then another comment, going by blkadder, added that, “It’s even more complex than that though. In addition to the type of feed, you need to look at the fat content of the meat. Due in part to the fat-is-bad madness a lot of the fat has been bred out of pork in the U.S. as an example which further complicates analysis.”
Others have covered this same time period of American dietary history. Writing in the New York Times, Jane Ziegelman describes the American love of meat going back centuries (America’s Obsession With Cheap Meat):
“In her 1832 travel book, “Domestic Manners of the Americans,” the English writer Frances Trollope describes the breathtaking quantities of food on American dinner tables. Even tea, she reports, is a “massive meal,” a lavish spread of many cakes and breads and “ham, turkey, hung beef, apple sauce and pickled oysters.”
“Equally impressive to this foreign observer were the carnivorous tendencies of her American hosts. “They consume an extraordinary amount of bacon,” she writes, while “ham and beefsteaks appear morning, noon and night.”
“Americans were indiscriminate in their love for animal protein. Beef, pork, lamb and mutton were all consumed with relish. However, as pointed out by the food historian Harvey Levenstein, it was beef, the form of protein preferred by the upper class, “that reigned supreme in status.” With the opening of the Western frontier in the mid-19th century, increased grazing land for cattle lowered beef prices, making it affordable for the working class.
“Dietary surveys conducted at the turn of the 20th century by Wilbur Atwater, father of American nutrition, revealed that even laborers were able to have beefsteak for breakfast. As Atwater was quick to point out, a high-protein diet set American workers apart from their European counterparts. On average, Americans ate a phenomenal 147 pounds of meat a year; Italians, by contrast, consumed 24.
““Doubtless,” Atwater wrote, “we live and work more intensely than people do in Europe.” The “vigor, ambition and hopes for higher things” that distinguished the American worker, he argued, was fed by repeated helpings of T-bone and sirloin steak.”
That calculation might be way off, an underestimation if anything. The majority of Americans remained rural, largely working as farmers, into the early 20th century. All we have data for is the food that was shipped across state lines, as no one kept nation-wide records on food that was produced, bought, and consumed locally. Until quite recently, most of the food Americans ate either was grown at home or grown at a nearby farm, not to mention much food that was hunted, trapped, fished and gathered. So, that 147 pounds of meat probably was only the tip of the iceberg.
What added to the confusion and misinterpretation of the evidence had to do with timing. Diet and nutrition were first seriously studied right at the moment when, for most populations, the diet had already changed. That was the failure of Ancel Keys’s research on what came to be called the Mediterranean diet (see Sally Fallon Morrell’s Nourishing Diets). The population was recuperating from World War II, which had devastated their traditional way of life, including their diet. Keys took the post-war deprivation diet as being the historical norm, but the reality was far different. Cookbooks and other evidence from before the war showed that this population used to eat higher levels of meat and fat, including saturated fat. So, the very people he focused on had grown up and spent most of their lives on a diet that was at the moment no longer available because of the disruption of the food system. What good health Keys observed came from a lifetime of eating a different diet. Combined with cherry-picking of data and biased analysis, Keys came to a conclusion that was as wrong as wrong could be.
Slightly earlier, Weston A. Price was able to see a different picture. He intentionally traveled to the places where traditional diets remained fully in place. And the devastation of World War II had yet to happen. Price came to the conclusion that what mattered most of all was nutrient-density. Sure, the vegetables eaten would have been of a higher quality than we get today, largely because they were heirloom cultivars grown on healthy soil. Nutrient-dense foods can only come from nutrient-dense soil, whereas today our food is nutrient-deficient because our soil is highly depleted. The same goes for animal foods. Animals pastured on healthy land will produce healthy dairy, eggs, meat, and fat; these foods will be high in omega-3s and the fat-soluble vitamins.
No matter if it is coming from plant sources or animal sources, nutrient-density might be the most important factor of all. Fat is meaningful in this context because fat is where the fat-soluble vitamins are found and through fat that they are metabolized. And in turn, the fat-soluble vitamins play a key role in the absorption and processing of numerous other nutrients, not to mention a key role in numerous functions in the body. Nutrient-density and fat-density go hand in hand in terms of general health. That is what early Americans were getting in eating so much wild food, not only wild game but also wild greens, fruit, and mushrooms. And nutrient-density is precisely what we are lacking today, as the nutrients have been intentionally removed to make more palatable commercial foods.
Once again, this has a class dimension, since the wealthier have more access to nutrient-dense foods. Few poor people could afford to shop at a high-end health food store, even if one was located near their home. But it was quite different in the past, when nutrient-dense foods were available to everyone and sometimes more available to the poor, who were concentrated in rural areas. If we want to improve public health, the first thing we should do is return to this historical norm.
Now, in ending, here is the longer passage aforementioned:
The Big Fat Surprise
by Nina Teicholz
Yet despite this shaky and often contradictory evidence, the idea that red meat is a principal dietary culprit has thoroughly pervaded our national conversation for decades. We have been led to believe that we’ve strayed from a more perfect, less meat-filled past. Most prominently, when Senator McGovern announced his Senate committee’s report, called Dietary Goals, at a press conference in 1977, he expressed a gloomy outlook about where the American diet was heading. “Our diets have changed radically within the past fifty years,” he explained, “with great and often harmful effects on our health.” Hegsted, standing at his side, criticized the current American diet as being excessively “rich in meat” and other sources of saturated fat and cholesterol, which were “linked to heart disease, certain forms of cancer, diabetes and obesity.” These were the “killer diseases,” said McGovern. The solution, he declared, was for Americans to return to the healthier, plant-based diet they once ate.
The New York Times health columnist Jane Brody perfectly encapsulated this idea when she wrote, “Within this century, the diet of the average American has undergone a radical shift away from plant-based foods such as grains, beans and peas, nuts, potatoes, and other vegetables and fruits and toward foods derived from animals—meat, fish, poultry, eggs and dairy products.” It is a view that has been echoed in literally hundreds of official reports.
The justification for this idea, that our ancestors lived mainly on fruits, vegetables, and grains, comes mainly from the USDA “food disappearance data.” The “disappearance” of food is an approximation of supply; most of it is probably being eaten, but much is wasted, too. Experts therefore acknowledge that the disappearance numbers are merely rough estimates of consumption. The data from the early 1900s, which is what Brody, McGovern, and others used, are known to be especially poor. Among other things, these data accounted only for the meat, dairy, and other fresh foods shipped across state lines in those early years, so anything produced and eaten locally, such as meat from a cow or eggs from chickens, would not have been included. And since farmers made up more than a quarter of all workers during these years, local foods must have amounted to quite a lot. Experts agree that this early availability data are not adequate for serious use, yet they cite the numbers anyway, because no other data are available. And for the years before 1900, there are no “scientific” data at all.
In the absence of scientific data, history can provide a picture of food consumption in the late eighteenth to nineteenth century in America. Although circumstantial, historical evidence can also be rigorous and, in this case, is certainly more far-reaching than the inchoate data from the USDA. Academic nutrition experts rarely consult historical texts, considering them to occupy a separate academic silo with little to offer the study of diet and health. Yet history can teach us a great deal about how humans used to eat in the thousands of years before heart disease, diabetes, and obesity became common. Of course we don’t remember now, but these diseases did not always rage as they do today. And looking at the food patterns of our relatively healthy early-American ancestors, it’s quite clear that they ate far more red meat and far fewer vegetables than we have commonly assumed.
Early-American settlers were “indifferent” farmers, according to many accounts. They were fairly lazy in their efforts at both animal husbandry and agriculture, with “the grain fields, the meadows, the forests, the cattle, etc, treated with equal carelessness,” as one eighteenth-century Swedish visitor described. And there was little point in farming since meat was so readily available.
The endless bounty of America in its early years is truly astonishing. Settlers recorded the extraordinary abundance of wild turkeys, ducks, grouse, pheasant, and more. Migrating flocks of birds would darken the skies for days. The tasty Eskimo curlew was apparently so fat that it would burst upon falling to the earth, covering the ground with a sort of fatty meat paste. (New Englanders called this now-extinct species the “doughbird.”)
In the woods, there were bears (prized for their fat), raccoons, bobolinks, opossums, hares, and virtual thickets of deer—so much that the colonists didn’t even bother hunting elk, moose, or bison, since hauling and conserving so much meat was considered too great an effort. IX
A European traveler describing his visit to a Southern plantation noted that the food included beef, veal, mutton, venison, turkeys, and geese, but he does not mention a single vegetable. Infants were fed beef even before their teeth had grown in. The English novelist Anthony Trollope reported, during a trip to the United States in 1861, that Americans ate twice as much beef as did Englishmen. Charles Dickens, when he visited, wrote that “no breakfast was breakfast” without a T-bone steak. Apparently, starting a day on puffed wheat and low-fat milk—our “Breakfast of Champions!”—would not have been considered adequate even for a servant.
Indeed, for the first 250 years of American history, even the poor in the United States could afford meat or fish for every meal. The fact that the workers had so much access to meat was precisely why observers regarded the diet of the New World to be superior to that of the Old. “I hold a family to be in a desperate way when the mother can see the bottom of the pork barrel,” says a frontier housewife in James Fenimore Cooper’s novel The Chainbearer.
Like the primitive tribes mentioned in Chapter 1, Americans also relished the viscera of the animal, according to the cookbooks of the time. They ate the heart, kidneys, tripe, calf sweetbreads (glands), pig’s liver, turtle lungs, the heads and feet of lamb and pigs, and lamb tongue. Beef tongue, too, was “highly esteemed.”
And not just meat but saturated fats of every kind were consumed in great quantities. Americans in the nineteenth century ate four to five times more butter than we do today, and at least six times more lard. X
In the book Putting Meat on the American Table, researcher Roger Horowitz scours the literature for data on how much meat Americans actually ate. A survey of eight thousand urban Americans in 1909 showed that the poorest among them ate 136 pounds a year, and the wealthiest more than 200 pounds. A food budget published in the New York Tribune in 1851 allots two pounds of meat per day for a family of five. Even slaves at the turn of the eighteenth century were allocated an average of 150 pounds of meat a year. As Horowitz concludes, “These sources do give us some confidence in suggesting an average annual consumption of 150–200 pounds of meat per person in the nineteenth century.”
About 175 pounds of meat per person per year! Compare that to the roughly 100 pounds of meat per year that an average adult American eats today. And of that 100 pounds of meat, more than half is poultry—chicken and turkey—whereas until the mid-twentieth century, chicken was considered a luxury meat, on the menu only for special occasions (chickens were valued mainly for their eggs). Subtracting out the poultry factor, we are left with the conclusion that per capita consumption of red meat today is about 40 to 70 pounds per person, according to different sources of government data—in any case far less than what it was a couple of centuries ago.
Yet this drop in red meat consumption is the exact opposite of the picture we get from public authorities. A recent USDA report says that our consumption of meat is at a “record high,” and this impression is repeated in the media. It implies that our health problems are associated with this rise in meat consumption, but these analyses are misleading because they lump together red meat and chicken into one category to show the growth of meat eating overall, when it’s just the chicken consumption that has gone up astronomically since the 1970s. The wider-lens picture is clearly that we eat far less red meat today than did our forefathers.
Meanwhile, also contrary to our common impression, early Americans appeared to eat few vegetables. Leafy greens had short growing seasons and were ultimately considered not worth the effort. They “appeared to yield so little nutriment in proportion to labor spent in cultivation,” wrote one eighteenth-century observer, that “farmers preferred more hearty foods.” Indeed, a pioneering 1888 report for the US government written by the country’s top nutrition professor at the time concluded that Americans living wisely and economically would be best to “avoid leafy vegetables,” because they provided so little nutritional content. In New England, few farmers even had many fruit trees, because preserving fruits required equal amounts of sugar to fruit, which was far too costly. Apples were an exception, and even these, stored in barrels, lasted several months at most.
It seems obvious, when one stops to think, that before large supermarket chains started importing kiwis from New Zealand and avocados from Israel, a regular supply of fruits and vegetables could hardly have been possible in America outside the growing season. In New England, that season runs from June through October or maybe, in a lucky year, November. Before refrigerated trucks and ships allowed the transport of fresh produce all over the world, most people could therefore eat fresh fruit and vegetables for less than half the year; farther north, winter lasted even longer. Even in the warmer months, fruit and salad were avoided, for fear of cholera. (Only with the Civil War did the canning industry flourish, and then only for a handful of vegetables, the most common of which were sweet corn, tomatoes, and peas.)
Thus it would be “incorrect to describe Americans as great eaters of either [fruits or vegetables],” wrote the historians Waverly Root and Richard de Rochemont. Although a vegetarian movement did establish itself in the United States by 1870, the general mistrust of these fresh foods, which spoiled so easily and could carry disease, did not dissipate until after World War I, with the advent of the home refrigerator.
So by these accounts, for the first two hundred and fifty years of American history, the entire nation would have earned a failing grade according to our modern mainstream nutritional advice.
During all this time, however, heart disease was almost certainly rare. Reliable data from death certificates is not available, but other sources of information make a persuasive case against the widespread appearance of the disease before the early 1920s. Austin Flint, the most authoritative expert on heart disease in the United States, scoured the country for reports of heart abnormalities in the mid-1800s, yet reported that he had seen very few cases, despite running a busy practice in New York City. Nor did William Osler, one of the founding professors of Johns Hopkins Hospital, report any cases of heart disease during the 1870s and eighties when working at Montreal General Hospital. The first clinical description of coronary thrombosis came in 1912, and an authoritative textbook in 1915, Diseases of the Arteries including Angina Pectoris, makes no mention at all of coronary thrombosis. On the eve of World War I, the young Paul Dudley White, who later became President Eisenhower’s doctor, wrote that of his seven hundred male patients at Massachusetts General Hospital, only four reported chest pain, “even though there were plenty of them over 60 years of age then.” XI About one fifth of the US population was over fifty years old in 1900. This number would seem to refute the familiar argument that people formerly didn’t live long enough for heart disease to emerge as an observable problem. Simply put, there were some ten million Americans of a prime age for having a heart attack at the turn of the twentieth century, but heart attacks appeared not to have been a common problem.
Was it possible that heart disease existed but was somehow overlooked? The medical historian Leon Michaels compared the record on chest pain with that of two other medical conditions, gout and migraine, which are also painful and episodic and therefore should have been observed by doctors to an equal degree. Michaels catalogs the detailed descriptions of migraines dating all the way back to antiquity; gout, too, was the subject of lengthy notes by doctors and patients alike. Yet chest pain is not mentioned. Michaels therefore finds it “particularly unlikely” that angina pectoris, with its severe, terrifying pain continuing episodically for many years, could have gone unnoticed by the medical community, “if indeed it had been anything but exceedingly rare before the mid-eighteenth century.” XII
So it seems fair to say that at the height of the meat-and-butter-gorging eighteenth and nineteenth centuries, heart disease did not rage as it did by the 1930s. XIII
Ironically—or perhaps tellingly—the heart disease “epidemic” began after a period of exceptionally reduced meat eating. The publication of The Jungle, Upton Sinclair’s fictionalized exposé of the meatpacking industry, caused meat sales in the United States to fall by half in 1906, and they did not revive for another twenty years. In other words, meat eating went down just before coronary disease took off. Fat intake did rise during those years, from 1909 to 1961, when heart attacks surged, but this 12 percent increase in fat consumption was not due to a rise in animal fat. It was instead owing to an increase in the supply of vegetable oils, which had recently been invented.
Nevertheless, the idea that Americans once ate little meat and “mostly plants”—espoused by McGovern and a multitude of experts—continues to endure. And Americans have for decades now been instructed to go back to this earlier, “healthier” diet that seems, upon examination, never to have existed.
24 thoughts on “Malnourished Americans”
People at Canada Post in 2018 “weren’t slaves but instead worked freely” before, during a rotating strike, and after legislation came down to return them to work.
Christmas is on the line.
I hope the company gets us Swiss Chalet chicken this year. The Egyptian workers got beer!
All very interesting.
One factor I’d add for medical context, especially in the 18th century, is the use of mercury and opium. It would be interesting to reconstruct the effects they had not just on the patients but on the methodology of the doctors in terms of how they created narratives of normal vs abnormal and then how that impacted diet.
I also always find it interesting to consider people at the upper end of the social scale – say Jefferson/Franklin etc., all of whom lived into their 70s and 80s despite a host of issues from excessive drinking, meat-heavy diets, and of course use of opium. George Washington was prescribed mercury steam baths to treat his “gummas” – a medical term for the tumors caused by syphilis. (The mercury is what caused his teeth to fall out.) Yet he lived to old age.
Of course these were wealthy men, but ironically, as medicine was so awful then, it’s a wonder they made it past fifty.
My favorite modern example of the seeming roll of the genetic dice is Sinatra: smoked, drank, did drugs, hung with Kennedys and other mobsters, married and divorced multiple times, had a high-stress life, and lived into his 80s.
Diet? Genetics? Both I assume.
I know a man in his mid 70s who smokes a lot of cigs, drinks, eats a mix of junk food and veggies, red meat, etc. He could drop at any moment and yet it wouldn’t surprise me to see him live another ten years – granted, a lot of that is down to modern meds (he takes multiple meds for blood pressure and other issues).
These are random notions – mostly about not having a solid idea about what’s at work in all of this beyond a few clear culprits. I eat a lot of fruit and vegetables and try to avoid certain things, but at the same time I assume most of what I eat could just as easily have been sprayed at some point with something that is eventually going to be toxic (American strawberries, for example, are dosed with a pesticide that causes cancer and can’t be washed off no matter what you do with them – so, sadly, no more strawberries – but I’ve eaten enough over the years that for all I know I’ll get cancer because of it).
Somewhat related: https://www.latimes.com/science/sciencenow/la-sci-sn-plague-ancestor-strain-20181206-story.html
Interesting information about early cities and the nexus of food, animals, pests, and health.
There are many factors. Beyond diet and genetics, there are thousands of factors of epigenetics, environment, toxins, parasites, microbiome, hygiene hypothesis, etc. With epigenetics, it’s not only your own health but also the health of your parents, grandparents, etc. No one knows how far the effect can go back. But we do know that one generation experiencing famine can lead to increased obesity for at least the two following generations.
The hygiene hypothesis has great potential for explaining much as well, as we are learning how powerfully microbes and parasites affect us. I have a stronger immune system than my brothers, sisters-in-law, and nieces and nephews. The difference in my childhood was how much time I spent in filthy creeks and how many cuts I got. My immune system got a major workout early on. Microbes have also been shown to influence mood and neurocognitive health.
Toxins might have a more complex role than how we normally think of them. When we talk about toxins today, it is in the context of major toxic exposure and overload, involving untold numbers of strange chemicals in our food, water, and in our houses. Fire retardant on bedding that you breathe every night is a known carcinogen. In the past, some toxins here and there maybe normally weren’t a big deal because general health was less compromised, in terms of nutritional deficiency and chronic diseases. Certainly, 40% of the population wasn’t dying from air pollution.
What has changed is nearly every single factor. Those factors combined create health risks not seen with any of them alone. It’s the entire context of health that has been turned on its head. Dr. Dale Bredesen, in a clinical study, has shown that by combining something like 36 factors he was able to reverse the symptoms of Alzheimer’s. But no single factor alone has shown any clear improvement. We humans are micro-ecosystems as part of larger social systems and natural environments.
Let me respond a bit to the article:
“The authors also suggest that the plague made its way from these settlements to the small Swedish farming village thanks to a vast trade network that was made possible by the recent expansion of animal-pulled wagons. As the disease spread along trade routes throughout the continent, it could have caused the Neolithic decline.”
Blue glass beads made in Early Egypt have been found as far as Northern Europe. That was during the Bronze Age when the Great Pyramid was built. The Egyptians were the only society that had perfected making blue glass and so it was highly prized as a trade good.
“In addition, the increased use of the wheel would not necessarily have helped spread plague to the extent that the authors suggest, he said. The DNA sequence of the Y. pestis that was found in Sweden suggests it was not yet capable of being easily transmitted by fleas living on infected rats that might stow away on a cart. The strain present in the Neolithic Era would have spread from person to person at a much slower rate, so it wouldn’t have impacted great swaths of the population at once.”
I might note that early trade was much more extensive than usually assumed. It wasn’t only or primarily the wagon that was behind the trade networks. Besides marauding, Vikings were known as great traders. There was a vast network of waterways they used to transport goods by ship all over Europe, the Levant, and Asia. The Vikings got around and surely they often acted as carriers of diseases. A single carrier who didn’t die could travel over vast areas and potentially have directly infected thousands of people and, after further spreading from person to person, could have killed millions. Fleas wouldn’t be necessary.
“Johannes Krause, a biochemist and director of the Max Planck Institute for the Science of Human History in Germany agreed that there was not enough evidence to suggest the ancient strain evolved in the mega-settlements. “This is wild speculation,” Krause said.”
It’s not exactly wild speculation. We already know that early dense populations increased all kinds of infectious diseases. It would be rather unsurprising if plague began the same way. Some suspect that even a few wild infectious diseases like malaria may have originated in settlements before escaping into the larger mosquito population.
When we wrote this post, we were still more strongly under the sway of a looser paleo viewpoint, which balances plant and animal foods in the form of traditional omnivory. Basically, the paleo diet is Sally Fallon Morrell’s traditional foods but without the grains, legumes, and dairy (other than maybe butter and ghee). What both have in common is the influence of Weston A. Price’s work on fat-soluble vitamins as the key to health.
We still take much wisdom from the paleo and traditional foods approach. But our respect for nutrient-dense animal foods, particularly nose-to-tail, has grown further over time. Our knowledge about and experience with animal-based diets, including the carnivore diet, is greater now than it was several years ago. If we were to rewrite the post now, we would further emphasize that aspect of fatty animal foods and we would prioritize it earlier in the post.
Our dietary paradigm has become fully centered on the nutrient-density of an animal-based diet. We see that as the main distinction. There are diets that include fatty animal foods and increase their intake, and there are those that decrease them or entirely exclude them. This is why we’ve come to see lacto-ovo-vegetarianism as having more in common with the carnivore diet than with veganism and near-vegan vegetarianism.
The grain issue is still important. Much of the changes in civilization these past centuries and millennia hinges on grain agriculture. The first appearance of surplus grain yields in 19th century Europe and America was pivotal, and it was magnified by the increased colonial trade of sugar cane and then the increased local production of sugar from beets. But what really turned the world on its head was how, over this past century, carbs and seed oils came to replace a large swath of animal foods.
It wasn’t only what increased but what decreased. That point was made in this post. But it could’ve been stated earlier and with greater force. If the American diet had maintained a high intake of red meat and wild-caught animal foods, if traditional agricultural practices had remained in place in grain farming, and if seed oils had never replaced animal fats as the main source of fatty acids, then even the rise of an ever greater grain-based and high-carb diet wouldn’t have led to our present extreme malnourishment.
Grains can be problematic. But there is a specific context to what has made them so severely damaging in the standard American diet. Traditional cultivars of grain had less gluten. Traditional breads were slowly fermented that further reduced gluten. And actual whole grains (not the fake whole grains of commercial breads) were a whole other beast. Then it makes a big difference if grain products are filled with sugar or not, made with and slathered with seed oils and margarines or lard and butter.
It’s not clear that grains are necessarily, inevitably, and always problematic, or specifically worse than any other category of plant foods. Since learning more about plant anti-nutrients, we would now note that the increase of vegetables and fruits in the American diet over recent generations is also a big piece of the puzzle. Simply eating more leafy greens or whatever, as paleo advises, may or may not be helpful. What matters most is simply the degree to which animal-based nutrition is included and the degree to which various plant foods interfere with it.
As a comparison, it would be interesting to look at societies that were high-carb and grain-based but didn’t have industrialized diets of seed oils, added sugar, food additives, farm chemicals, etc. What needs to be isolated are the grains themselves, while controlling for other confounding factors. The clearest example of this was the ancient Egyptians, who ate more grains than any other ancient society and, indeed, had the predictably high rate of the diseases of civilization before any other population. But how much of that had to do with the grains themselves, and how much with what grains were replacing, as populations grew and became urbanized?