Neolithic Troubles

Born Expecting the Pleistocene
by Mark Seely
p. 31

Not our natural habitat

The mismatch hypothesis

Our bodies including our brains—and thus our behavioral predispositions—have evolved in response to very specific environmental and social conditions. Many of those environmental and social conditions no longer exist for most of us. Our physiology and our psychology, all of our instincts and in-born social tendencies, are based on life in small semi-nomadic tribal groups of rarely more than 50 people. There is a dramatic mismatch between life in a crowded, frenetic, technology-based global civilization and the kind of life our biology and our psychology expects [14].

And we suffer serious negative consequences of this mismatch. A clear example can be seen in the obesity epidemic that has swept through developed nations in recent decades: our bodies evolved to meet energy demands in circumstances where the presence of food was less predictable and periods of abundance more variable. Because of this, we have a preference for calorie-dense food, we have a tendency to eat far more than we need, and our bodies are quick to hoard extra calories in the form of body fat.
This approach works quite well during a Pleistocene ice age, but it is maladaptive in our present food-saturated society—and so we have an obesity epidemic because of the mismatch between the current situation and our evolution-derived behavioral propensities with respect to food. Studies on Australian Aborigines conducted in the 1980s, evaluating the health effects of the transition from traditional hunter-gatherer lifestyle to urban living, found clear evidence of the health advantages associated with a lifestyle consistent with our biological design [15]. More recent research on the increasingly popular Paleo-diet [16] has since confirmed wide-ranging health benefits associated with selecting food from a pre-agriculture menu, including cancer resistance, reduction in the prevalence of autoimmune disease, and improved mental health.

[14] Ornstein, R. & Ehrlich, P. (1989). New World, New Mind. New York: Simon & Schuster.
[15] O’Dea, K., Spargo, R., & Akerman, K. (1980). The effect of transition from traditional to urban life-style on the insulin secretory response in Australian Aborigines. Diabetes Care, 3(1), 31-37; O’Dea, K., White, N., & Sinclair, A. (1988). An investigation of nutrition-related risk factors in an isolated Aboriginal community in northern Australia: advantages of a traditionally-orientated life-style. The Medical Journal of Australia, 148(4), 177-180.
[16] E.g., Frassetto, L. A., Schloetter, M., Mietus-Snyder, M., Morris, R. C., & Sebastian, A. (2009). Metabolic and physiological improvements from consuming a Paleolithic, hunter-gatherer type diet. European Journal of Clinical Nutrition, 63, 947-955.

pp. 71-73

The mechanisms of cultural evolution can be seen in the changing patterns of foraging behavior in response to changes in food availability and changes in population density. Archaeological analyses suggest that there is a predictable pattern of dietary choice that emerges from the interaction among population density, relative abundance of preferred food sources, and factors that relate to the search and handling of various foods. [56] In general, diets become more varied, or broaden, as population increases and the preferred food becomes more difficult to obtain. When a preferred food source is abundant, the calories in the diet may consist largely of that one particular food. But as the food source becomes more difficult to obtain, less preferable foods will be included and the diet will broaden. Such dietary changes imply changes in patterns of behavior within the community—changes of culture.

Behavioral ecologists and anthropologists have partitioned the foraging process into two components with respect to the cost-benefit analysis associated with dietary decisions: search and handling. [57] The search component of the cost-benefit ledger refers to the amount of work per calorie payoff (and other benefits such as the potential for enhanced social standing) associated with a food item’s abundance, distance, terrain, proximity of another group’s territory, water sources, etc. The handling component refers to the work per calorie payoff associated with getting the food into a state (location, form, etc.) in which it can be consumed. Search and handling considerations can be largely independent of each other. The residential permanence involved with the incorporation of agriculture reduces the search consideration greatly, and makes handling the primary consideration. Global industrial food economies change entirely the nature of both search and handling: handling in industrial society—from the perspective of the individual and the individual’s decision processes—is reduced largely to considerations of speed and convenience. The search component has been re-appropriated and refocused by corporate marketing, and reduced to something called shopping.
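
The logic of this search-and-handling accounting can be made concrete with the standard diet-breadth (prey choice) model from behavioral ecology, the kind of model Bird and O’Connell review. The sketch below is a minimal illustration only: the resource names, encounter rates, and calorie figures are hypothetical, chosen simply to show how the optimal diet broadens when the preferred, high-return resource becomes harder to find.

# Minimal diet-breadth (prey choice) sketch. Hypothetical numbers, not data
# from Bird & O'Connell (2006). Items are ranked by profitability (kcal per
# hour of handling); an item enters the diet only if including it raises the
# overall return rate (kcal per hour of combined search and handling).

resources = [
    # (name, encounters per search-hour, kcal per item, handling hours per item)
    ("large game", 0.05, 50000, 4.0),   # preferred but rarely encountered
    ("small game", 0.50, 3000, 0.5),
    ("tubers",     2.00, 800,  0.4),
    ("grass seed", 5.00, 300,  0.6),    # low-ranked; only worth taking in lean times
]

def optimal_diet(resources):
    """Add items in rank order while doing so raises the overall return rate."""
    ranked = sorted(resources, key=lambda r: r[2] / r[3], reverse=True)
    diet, best_rate = [], 0.0
    for item in ranked:
        candidate = diet + [item]
        energy = sum(lam * kcal for _, lam, kcal, _ in candidate)
        handling = sum(lam * hrs for _, lam, _, hrs in candidate)
        rate = energy / (1.0 + handling)  # kcal per total hour (one search-hour plus handling)
        if rate > best_rate:
            diet, best_rate = candidate, rate
        else:
            break
    return [name for name, *_ in diet], round(best_rate)

print(optimal_diet(resources))  # abundant large game -> narrow diet

# Make the preferred resource scarce (fewer encounters per search-hour) and
# the optimal diet broadens to include lower-ranked foods.
scarce = [("large game", 0.01, 50000, 4.0)] + resources[1:]
print(optimal_diet(scarce))

With the preferred item abundant, the model keeps the diet narrow; make it scarce and lower-ranked foods enter the diet even though their per-item payoff has not changed, which is exactly the broadening pattern described above.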

Domestication, hands down the most dramatic and far-reaching example of cultural evolution, emerges originally as a response to scarcity that is tied to a lack of mobility and an increase in population density. Domestication is a way of further broadening the diet when other local sources of food are already being maximally exploited. Initial experimentation with animal domestication “occurred in situations where forager diets were already quite broad and where the principle goal of domestication was the production of milk, an exercise that made otherwise unusable plants or plant parts available for human consumption. . . .” [58] The transition to life-ways based even partially on domestication has some counter-intuitive technological ramifications as well.

This leads to a further point about efficiency. It is often said that the adoption of more expensive subsistence technology marks an improvement in this aspect of food procurement: better tools make the process more efficient. This is true in the sense that such technology often enables its users to extract more nutrients per unit weight of resource processed or area of land harvested. If, on the other hand, the key criterion is the cost/benefit ratio, the rate of nutrient gained relative to the effort needed to acquire it, then the use of more expensive tools will often be associated with declines in subsistence efficiency. Increased investment in handling associated with the use of high-cost projectile weapons, in plant foods that require extensive tech-related processing, and in more intensive agriculture all illustrate this point. [59]

In modern times, thanks to the advent of—and supportive propaganda associated with—factory industrial agriculture, farming is coupled with ideas of plenitude and caloric abundance. However, in the absence of fossil energy and petroleum-based chemical fortification, farming is expensive in terms of the calories produced as a function of the amount of work involved. For example, “farmers grinding corn with hand-held stone tools can earn no more than about 1800 kcal per hour of total effort devoted to farming, and this from the least expensive cultivation technique.” [60] A successful fishing or bison hunting expedition is orders of magnitude more efficient in terms of calories obtained per unit of effort expended.
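
To make the cost/benefit comparison concrete, here is a rough return-rate calculation. The 1800 kcal per hour figure is the one quoted above; the numbers for the hunting expedition are assumptions for illustration only, not figures from the source, so only the structure of the comparison should be taken seriously.

# Return rate = calories gained per hour of total effort (search + handling).
# The farming figure is quoted above; the hunting figures are hypothetical
# placeholders for a successful large-game expedition, used only to show how
# the cost/benefit ratio is computed and compared.

farming_kcal_per_hour = 1800        # hand-ground maize, least expensive technique (quoted above)

hunting_kcal_gained = 400000        # assumed: edible calories from one large animal
hunting_hours_of_effort = 20        # assumed: party-hours of searching, killing, butchering
hunting_kcal_per_hour = hunting_kcal_gained / hunting_hours_of_effort

print(f"farming: {farming_kcal_per_hour} kcal per hour of effort")
print(f"hunting: {hunting_kcal_per_hour:.0f} kcal per hour of effort")
print(f"hunting return rate is {hunting_kcal_per_hour / farming_kcal_per_hour:.1f}x the farming rate")

Under these made-up numbers the hunting return rate comes out roughly an order of magnitude higher; the point is not the specific values but that efficiency here means the rate of calories gained per unit of effort, not the total calories a plot of land can yield.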

[56] Bird & O’Connell [Bird, D. W., & O’Connell, J. F. (2006). Behavioral ecology and archaeology. Journal of Archaeological Research, 14, 143-188]
[57] Ibid.
[58] Ibid., p. 152.
[59] Ibid., p. 153.
[60] Ibid., p. 151, italics in original.

pp. 122-123

The birth of the machine

The domestication frame

The Neolithic marks the beginnings of large-scale domestication, what is typically referred to as the agricultural revolution. It was not really a revolution in that it occurred over an extended period of time (several thousand years) and in a mosaic, piecemeal fashion, both in terms of the adoption of specific agrarian practices and in terms of the specific groups of people who practiced them. Foraging lifestyles continue today, and represented the dominant lifestyle on the planet until relatively recently. The agricultural revolution was a true revolution, however, in terms of its consequences for the humans who adopted domestication-based life-ways, and for the rest of the natural world. The transition from nomadic and seminomadic hunting and gathering to sedentary agriculture is the most significant chapter in the chronicle of the human species. But it is clearly not a story of unmitigated success. Jared Diamond, who acknowledges somewhat the self-negating double-edge of technological “progress,” has called domestication the biggest mistake humans ever made.

That transition from hunting and gathering to agriculture is generally considered a decisive step in our progress, when we at last acquired the stable food supply and leisure time prerequisite to the great accomplishments of modern civilization. In fact, careful examination of that transition suggests another conclusion: for most people the transition brought infectious disease, malnutrition, and a shorter lifespan. For human society in general it worsened the relative lot of women and introduced class-based inequality. More than any other milestone along the path from chimpanzeehood to humanity, agriculture inextricably combines causes of our rise and our fall. [143]

The agricultural revolution had profoundly negative consequences for human physical, psychological, and social well-being, as well as a wide-ranging negative impact on the planet.

For humans, malnutrition and the emergence of infectious disease are the most salient physiological results of an agrarian lifestyle. A large variety of foodstuffs and the inclusion of a substantial amount of meat make malnutrition an unlikely problem for hunter-gatherers, even during times of relative food scarcity. Once the diet is based on a few select mono-cropped grains supplemented by milk and meat from nutritionally-inferior domesticated animals, the stage is set for nutritional deficit. As a result, humans are not as tall or broad in stature today as they were 25,000 years ago; and the mean age of death is lower today as well. [144] In addition, both the sedentism and population density associated with agriculture create the preconditions for degenerative and infectious disease. “Among the human diseases directly attributable to our sedentary lives in villages and cities are heart and vascular disorders, diabetes, stroke, emphysema, hypertension, and cirrhoses [sic] of the liver, which together cause 75 percent of the deaths in the industrial nations.” [145] The diet and activity level of a foraging lifestyle serve as a potent prophylactic against all of these common modern-day afflictions. Nomadic hunter-gatherers are by no means immune to parasitic infection and disease. But the spread of disease is greatly limited by low population density and by a regular change of habitation, which reduces exposure to accumulated wastes. Both hunter-gatherers and agriculturalists are susceptible to zoonotic diseases carried by animals, but domestication reduces an animal’s natural immunity to disease and infection, creates crowded conditions that support the spread of disease among animal populations, and increases the opportunity for transmission to humans. In addition, permanent dwellings provide a niche for a new kind of disease-carrying animal specialized for symbiotic parasitic cohabitation with humans, the rat being among the most infamous. Plagues and epidemic outbreaks were not a problem in the Pleistocene.

There is a significant psychological dimension to the agricultural revolution as well. A foraging hunter-gatherer lifestyle frames natural systems in terms of symbiosis and interrelationship. Understanding subtle connections among plants, animals, geography, and seasonal climate change is an important requisite of survival. Human agents are intimately bound to these natural systems and contemplate themselves in terms of these systems, drawing easy analogy between themselves and the natural communities around them, using animals, plants, and other natural phenomena as metaphor. The manipulative focus of domestication frames natural systems in antagonistic terms of control and resistance. “Agriculture removed the means by which men [sic] could contemplate themselves in any other than terms of themselves (or machines). It reflected back upon nature an image of human conflict and competition . . . .” [146] The domestication frame changed our perceived relationship with the natural world, and lies at the heart of our modern-day environmental woes. According to Paul Shepard, with animal domestication we lost contact with an essential component of our human nature, the “otherness within,” that part of ourselves that grounds us to the rest of nature:

The transformation of animals through domestication was the first step in remaking them into subordinate images of ourselves—altering them to fit human modes and purposes. Our perception of not only ourselves but also of the whole of animal life was subverted, for we mistook the purpose of those few domesticates as the purpose of all. Plants never had for us the same heightened symbolic representation of purpose itself. Once we had turned animals into the means of power among ourselves and over the rest of nature, their uses made possible the economy of husbandry that would, with the addition of the agrarian impulse, produce those motives and designs on the earth contrary to respecting it. Animals would become “The Others.” Purposes of their own were not allowable, not even comprehensible. [147]

Domestication had a profound impact on human psychological development. Development—both physiological and psychological—is organized around a series of stages and punctuated by critical periods, windows of time in which the development and functional integration of specific systems are dependent upon external input of a designated type and quality. If the necessary environmental input for a given system is absent or of a sufficiently reduced quality, the system does not mature appropriately. This can have a snowball effect because the future development of other systems is almost always critically dependent on the successful maturation of previously developed systems. The change in focus toward the natural world along with the emergence of a new kind of social order interfered with epigenetic programs that evolved to anticipate the environmental input associated with a foraging lifestyle. The result was arrested development and a culture-wide immaturity:

Politically, agriculture required a society composed of members with the acumen of children. Empirically, it set about amputating and replacing certain signals and experiences central to early epigenesis. Agriculture not only infantilized animals by domestication, but exploited the infantile human traits of normal individual neoteny. The obedience demanded by the organization necessary for anything larger than the earliest village life, associated with the rise of a military caste, is essentially juvenile and submissive . . . . [148]

[143] Diamond (1992), p. 139. [Diamond, J. (1992). The Third Chimpanzee. New York: HarperCollins.]
[144] Shepard (1998). [Shepard, P. (1998). Coming Home to the Pleistocene. Washington, D.C.: Island Press.]
[145] Ibid., p. 99.
[146] Shepard (1982), p. 114. [Shepard, P. (1982). Nature and Madness. Athens, Georgia: University of Georgia Press.]
[147] Shepard (1998), p. 128.
[148] Shepard (1982), pp. 113-114.

Ancient Atherosclerosis?

In reading about health, mostly about diet and nutrition, I regularly come across studies that are either poorly designed or poorly interpreted. The conclusions don’t always follow from the data or there are so many confounders that other conclusions can’t be discounted. Then the data gets used by dietary ideologues.

There is a major reason I appreciate the dietary debate among proponents of traditional, ancestral, paleo, low-carb, ketogenic, and other related views (anti-inflammatory diets, autoimmune diets, etc., such as the Wahls Protocol for multiple sclerosis and the Bredesen Protocol for Alzheimer’s). This area of alternative debate leans heavily on questioning conventional certainties by digging deep into the available evidence. These diets seem to attract people capable of changing their minds, or maybe it is simply that many people who eventually come to these unconventional views do so after having already tried numerous other diets.

For example, Dr. Terry Wahls is a clinical professor of Internal Medicine, Epidemiology, and Neurology at the University of Iowa while also serving as Associate Chief of Staff at a Veterans Affairs hospital. She was as conventional as doctors come until she developed multiple sclerosis, began researching and experimenting, and eventually became a practitioner of functional medicine. She also went from being a hardcore vegetarian following mainstream dietary advice (avoiding saturated fats, eating whole grains and legumes, etc.) to embracing an essentially nutrient-dense paleo diet; her neurologist at the Cleveland Clinic referred her to Dr. Loren Cordain’s paleo research at Colorado State University. Since then, she has done medical research and, having recently secured funding, is conducting a study to further test her diet.

Her experimental attitude, both personal and scientific, is common among those interested in these kinds of diets and in functional medicine. That attitude is necessary when one steps outside conventional wisdom, something Dr. Wahls felt she had to do to save her own life. A health crisis of that sort is what leads many people to try a paleo, keto, or similar diet after trying everything else (some of these are protocols for serious illnesses, such as the medical use of ketosis to treat epileptic seizures). Because it contradicts the professional opinion of respected authorities (e.g., the American Heart Association), a diet like this tends to be an option of last resort for most people, something they come to after much failure and worsening health. That breeds a certain mentality.

On the other hand, it should be unsurprising that people raised on mainstream views and who hold onto those views long into adulthood (and long into their careers) tend not to be people willing to entertain alternative views, no matter what the evidence indicates. This includes those working in the medical field. Some ask, why are doctors so stupid? As Dr. Michael Eades explains, it’s not that they’re stupid but that many of them are ignorant; to put it more nicely, they’re ill-informed. They simply don’t know because, like so many others, they are repeating what they’ve been told by other authority figures. And the fact of the matter is most doctors never learned much about certain topics in the first place: “A study in the International Journal of Adolescent Medicine and Health assessed the basic nutrition and health knowledge of medical school graduates entering a pediatric residency program and found that, on average, they answered only 52 percent of the eighteen questions correctly. In short, most mainstream doctors would fail nutrition” (Dr. Will Cole, Ketotarian).

The reason people stick to the known, even when it is wrong, is that it is familiar and so feels safe (and, because of liability, healthcare workers and health insurance companies prefer what is perceived as safe). Doctors, like everyone else, depend on heuristics to deal with a complex world. And doctors, more than most people, are too busy to explore the large amounts of data out there, much less analyze it carefully for themselves.

This may relate to why most doctors tend not to make the best researchers, which is not to dismiss those attempting to do quality research. For that reason, you might think scientific researchers who aren’t doctors would be different from doctors. But that obviously isn’t always the case; if it were, Ancel Keys’ low-quality research wouldn’t have dominated professional dietary advice for more than half a century. Keys wasn’t a medical professional or even trained in nutrition; rather, he was educated in a wide variety of other fields (economics, political science, zoology, oceanography, biology, and physiology), with his earliest research done on the physiology of fish.

I came across yet another example of this, less extreme than that of Keys, but different in that at least some of the authors of the paper are medical doctors. The study in question was the work of 19 authors. The paper is “Atherosclerosis across 4000 years of human history: the Horus study of four ancient populations,” peer-reviewed and published (2013) in the highly respected journal The Lancet (Keys’ work, one might note, was also highly respected). This study on atherosclerosis was widely reported in mainstream news outlets and received much attention from those critical of paleo diets, who offered it as a final nail in the coffin, absolute proof that ancient people were as unhealthy as we are.

The 19 authors conclude that, “atherosclerosis was common in four preindustrial populations, including a preagricultural hunter-gatherer population, and across a wide span of human history. It remains prevalent in contemporary human beings. The presence of atherosclerosis in premodern human beings suggests that the disease is an inherent component of human ageing and not characteristic of any specific diet or lifestyle.” There you have it. Heart disease is simply in our genetics — so take your statin meds like your doctor tells you to do, just shut up and quit asking questions, quit looking at all the contrary evidence.

But even ignoring all else, does the evidence from this paper support their conclusion? No. It doesn’t require much research or thought to ascertain the weak case presented. In the paper itself, on multiple occasions including in the second table, they admit that three out of four of the populations were farmers who ate largely an agricultural diet and, of course, lived an agricultural lifestyle. At most, these examples can speak to the conditions of the Neolithic but not the Paleolithic. Of these three, only one was transitioning from an earlier foraging lifestyle, but, as with the other two, it was eating a higher-carb diet from foods it farmed. Also, the best-known example of the bunch, the Egyptians, particularly points to the problems of an agricultural diet, as described by Michael Eades in Obesity in ancient Egypt:

“[S]everal thousand years ago when the future mummies roamed the earth their diet was a nutritionist’s nirvana. At least a nirvana for all the so-called nutritional experts of today who are recommending a diet filled with whole grains, fresh fruits and vegetables, and little meat, especially red meat. Follow such a diet, we’re told, and we will enjoy abundant health.

“Unfortunately, it didn’t work that way for the Egyptians. They followed such a diet simply because that’s all there was. There was no sugar – it wouldn’t be produced for another thousand or more years. The only sweet was honey, which was consumed in limited amounts. The primary staple was a coarse bread made of stone-ground, whole wheat. Animals were used as beasts of burden and were valued much more for the work they could do than for the meat they could provide. The banks of the Nile provided fertile soil for growing all kinds of fruits and vegetables, all of which were a part of the low-fat, high-carbohydrate Egyptian diet. And there were no artificial sweeteners, artificial coloring, artificial flavors, preservatives, or any of the other substances that are part of all the manufactured foods we eat today.

“Were the nutritionists of today right about their ideas of the ideal diet, the ancient Egyptians should have had abundant health. But they didn’t. In fact, they suffered pretty miserable health. Many had heart disease, high blood pressure, diabetes and obesity – all the same disorders that we experience today in the ‘civilized’ Western world. Diseases that Paleolithic man, our really ancient ancestors, appeared to escape.”

With unintentional humor, the authors of the paper note that, “None of the cultures were known to be vegetarian.” No shit. Maybe that is because until late in the history of agriculture there were no vegetarians and for good reason. As Weston Price noted, there is a wide variety of possible healthy diets as seen in traditional communities. Yet for all his searching for a healthy traditional community that was strictly vegan or even vegetarian, he could never find any; the closest examples were those that relied largely on such things as insects and grubs because of a lack of access to larger sources of protein and fat. On the other hand, the most famous vegetarian population, Hindu Indians, have one of the shortest lifespans (to be fair, though, that could be for other reasons such as poverty-related health issues).

Interestingly, there apparently has never been a study done comparing a herbivore diet and a carnivore diet, although one study touched on it while not quite eliminating all plants from the latter. As for fat, there is no evidence that it is problematic (vegetable oils are another issue), if anything the opposite: “In a study published in the Lancet, they found that people eating high quantities of carbohydrates, which are found in breads and rice, had a nearly 30% higher risk of dying during the study than people eating a low-carb diet. And people eating high-fat diets had a 23% lower chance of dying during the study’s seven years of follow-up compared to people who ate less fat” (Alice Park, The Low-Fat vs. Low-Carb Diet Debate Has a New Answer); and “The Mayo Clinic published a study in the Journal of Alzheimer’s Disease in 2012 demonstrating that in individuals favoring a high-carb diet, risk for mild cognitive impairment was increased by 89%, contrasted to those who ate a high-fat diet, whose risk was decreased by 44%” (WebMD interview of Dr. David Perlmutter). Yet the respectable authorities tell us that fat is bad for our health, making it paradoxical that many fat-gluttonous societies have better health. There are so many paradoxes, according to conventional thought, that one begins to wonder if conventional thought is the real paradox.

Now let me discuss the one group, the Unangan, that at first glance stands out from the rest. The authors describe them as “five Unangan people living in the Aleutian Islands of modern day Alaska (ca 1756–1930 CE, one excavation site).” Those mummies are far different from the other populations, which came much earlier in history. Four of the Unangan died around 1900 and one around 1850. Why does that matter? Because their entire world was being turned on its head at that time. The authors claim that, “The Unangan’s diet was predominately marine, including seals, sea lions, sea otters, whale, fish, sea urchins, and other shellfish and birds and their eggs. They were hunter-gatherers living in barabaras, subterranean houses to protect against the cold and fierce winds.” They base this claim on the assumption that these particular mummified Unangan had been eating the same diet as their ancestors for thousands of years, but the evidence points in the opposite direction.

Questioning this assumption, Jeffery Gerber explains that, “During life (before 1756–1930 CE) not more than a few short hundred years ago, the 5 Unangan/Aleut mummies were hardly part of an isolated group. The Fur Seal industry exploded in the 18th century bringing outside influence, often violent, from countries including Russia and Europe. These mummies during life, were probably exposed to foods (including sugar) different from their traditional diet and thus might not be representative of their hunter-gatherer origins” (Mummies, Clogged Arteries and Ancient Junk Food). One might add that, whatever Western foods may have been introduced, we do know of another factor: the Government of Nunavut’s official website states that, “European whalers regularly travelled to the Arctic in the late 17th and 18th century. When they visited, they introduced tobacco to Inuit.” Why is that significant? Tobacco is a known risk factor for atherosclerosis. Gideon Mailer and Nicola Hale, in their book Decolonizing the Diet, elaborate on the colonial history of the region (pp. 162-171):

“On the eve of Western contact, the indigenous population of present-day Alaska numbered around 80,000. They included the Alutiiq and Unangan communities, more commonly defined as Aleuts, Inupiat and Yupiit, Athabaskans, and the Tinglit and Haida groups. Most groups suffered a stark demographic decline from the mid-eighteenth century to the mid-nineteenth century, during the period of extended European — particularly Russian — contact. Oral traditions among indigenous groups in Alaska described whites as having taken hunting grounds from other related communities, warning of a similar fate to their own. The Unangan community, numbering more than 12,000 at contact, declined by around 80 percent by 1860. By as early as the 1820s, as Jacobs has described, “The rhythm of life had changed completely in the Unangan villages now based on the exigencies of the fur trade rather than the subsistence cycle, meaning that often villages were unable to produce enough food to keep them through the winter.” Here, as elsewhere, societal disruption was most profound in the nutritional sphere, helping account for the failure to recover population numbers following disease epidemics.

“In many parts of Alaska, Native American nutritional strategies and ecological niches were suddenly disrupted by the arrival of Spanish and Russian settlers. “Because,” as Saunt has pointed out “it was extraordinarily difficult to extract food from the challenging environment,” in Alaska and other Pacific coastal communities, “any disturbance was likely to place enormous stress on local residents.” One of indigenous Alaska’s most important ecological niches centered on salmon access points. They became steadily more important between the Paleo-Eskimo era around 4,200 years ago and the precontact period, but were increasingly threatened by Russian and American disruptions from the 1780s through the nineteenth century. Dependent on nutrients and omega fatty acids such as DHA from marine resources such as salmon, Aleut and Alutiiq communities also required other animal products, such as intestines, to prepare tools and waterproof clothing to take advantage of fishing seasons. Through the later part of the eighteenth century, however, Russian fur traders and settlers began to force them away from the coast with ruthless efficiency, even destroying their hunting tools and waterproof apparatus. The Russians were clear in their objectives here, with one of their men observing that the Native American fishing boats were “as indispensable as the plow and the horse for the farmer.”

“Here we are provided with another tragic case study, which allows us to consider the likely association between disrupted access to omega-3 fatty acids such as DHA and compromised immunity. We have already noted the link between DHA, reduced inflammation and enhanced immunity in the millennia following the evolution of the small human gut and the comparatively larger human brain. Wild animals, but particularly wild fish, have been shown to contain far higher proportions of omega-3 fatty acids than the food sources that apparently became more abundant in Native American diets after European contact, including in Alaska. Fat-soluble vitamins and DHA are abundantly found in fish eggs and fish fats, which were prized by Native Americans in the Northwest and Great Lakes regions, in the marine life used by California communities, and perhaps more than anywhere else, in the salmon products consumed by indigenous Alaskan communities. […]

“In Alaska, where DHA and vitamin D-rich salmon consumption was central to precontact subsistence strategies, alongside the consumption of nutrient-dense animal products and the regulation of metabolic hormones through periods of fasting or even through the efficient use of fatty acids or ketones for energy, disruptions to those strategies compromised immunity among those who suffered greater incursions from Russian and other European settlers through the first half of the nineteenth century.

“A collapse in sustainable subsistence practices among the Aleuts of Alaska exacerbated population decline during the period of Russian contact. The Russian colonial regime from the 1740s to 1840s destroyed Aleut communities through open warfare and by attacking and curtailing their nutritional resources, such as sea otters, which Russians plundered to supply the Chinese market for animal skins. Aleuts were often forced into labor, and threatened by the regular occurrence of Aleut women being taken as hostages. Curtailed by armed force, Aleuts were often relocated to the Pribilof Islands or to California to collect seals and sea otters. The same process occurred as Aleuts were co-opted into Russian expansion through the Aleutian Islands, Kodiak Island and into the southern coast of Alaska. Suffering murder and other atrocities, Aleuts provided only one use to Russian settlers: their perceived expertise in hunting local marine animals. They were removed from their communities, disrupting demography further and preventing those who remained from accessing vital nutritional resources due to the discontinuation of hunting frameworks. Colonial disruption, warfare, captivity and disease were accompanied by the degradation of nutritional resources. Aleut population numbers declined from 18,000 to 2,000 during the period of Russian occupation in the first half of the nineteenth century. A lag between the first period of contact and the intensification of colonial disruption demonstrates the role of contingent interventions in framing the deleterious effects of epidemics, including the 1837-38 smallpox epidemic in the region. Compounding these problems, communities used to a relatively high-fat and low-fructose diet were introduced to alcohol by the Russians, to the immediate detriment of their health and well-being.”

The traditional hunter-gatherer diet, as Mailer and Hale describe it, was high in the nutrients that protect against inflammation. The loss of these nutrients and the simultaneous decimation of the population was a one-two punch. Without the nutrients, their immune systems were compromised. And with their immune systems compromised, they were prone to all kinds of health conditions, probably including heart disease, which of course is related to inflammation. Weston A. Price, in Nutrition and Physical Degeneration, observed that morbidity and mortality from health conditions such as heart disease rise and fall with the seasons, following precisely the growth and dying away of vegetation throughout the year (which varies by region, as do the morbidity and mortality rates; the regions of comparison were in the United States and Canada). He was able to track this down to the change in fat-soluble vitamins, specifically vitamin D, in dairy. When fresh vegetation was available, cows ate it and so produced more of these nutrients, and presumably more omega-3s at the same time.

Prior to colonization, the Unangan would have had access to even higher levels of these protective nutrients year round. The most nutritious dairy taken from the springtime wouldn’t come close in comparison to the nutrient profile of wild game. I don’t know why anyone would be shocked that, like agricultural populations, hunter-gatherers also experience worsening health after the loss of wild resources. Yet the authors of the mummy study act like they made a radical discovery that throws to the wind every doubt anyone ever had about simplistic mainstream thought. It turns out, they seem to be declaring, that we are all victims of genetic determinism after all, so toss out your romantic fairy tales about healthy primitives from the ancient world. The problem is all the evidence that undermines their conclusion, including the evidence they present in their own paper, at least when it is interpreted in full context.

As if responding to the researchers, Mailer and Hale write (p. 186): “Conditions such as diabetes are thus often associated with heart disease and other syndromes, given their inflammatory component. They now make up a huge proportion of treatment and spending in health services on both sides of the Atlantic. Yet policy makers and researchers in those same health services often respond to these conditions reactively rather than proactively — as if they were solely genetically determined, rather than arising due to external nutritional factors. A similarly problematic pattern of analysis, as we have noted, has led scholars to ignore the central role of nutritional change in Native American population loss after European contact, focusing instead on purportedly immutable genetic differences.”

There is another angle related to the above but somewhat at a tangent. I’ll bring it up because the research paper mentions it in passing as a factor to be considered: “All four populations lived at a time when infections would have been a common aspect of daily life and the major cause of death. Antibiotics had yet to be developed and the environment was non-hygienic. In 20th century hunter-foragers-horticulturalists, about 75% of mortality was attributed to infections, and only 10% from senescence. The high level of chronic infection and inflammation in premodern conditions might have promoted the inflammatory aspects of atherosclerosis.”

This is familiar territory for me, as I’ve been reading much about inflammation and infections. The authors are presenting the old view of the immune system, as opposed to that of functional medicine, which looks at the whole human. An example of the latter is the hygiene hypothesis, which argues that it is exposure to microbes that strengthens the immune system; there has been much evidence in support of it (such as children raised with animals or on farms being healthier as adults). The researchers above are making an opposing argument, one contradicted by populations that remain healthy without modern medicine as long as they maintain a traditional diet and lifestyle in a healthy ecosystem, including living soil that hasn’t been depleted by intensive farming.

This isn’t only about agriculturalists versus hunter-gatherers. The distinction between populations goes deeper, into culture and environment. Weston A. Price discovered this simple truth in finding healthy populations among both agriculturalists and hunter-gatherers, but they were specific populations under specific conditions. Also, at the time he traveled, in the early 20th century, there were still traditional communities living in isolation in Europe. One example is the Loetschental Valley in Switzerland, which he visited on two separate trips in the consecutive years of 1931 and 1932. As he writes of it:

“We were told that the physical conditions that would not permit people to obtain modern foods would prevent us from reaching them without hardship. However, owing to the completion of the Loetschberg Tunnel, eleven miles long, and the building of a railroad that crosses the Loetschental Valley, at a little less than a mile above sea level, a group of about 2,000 people had been made easily accessible for study, shortly prior to 1931. Practically all the human requirements of the people in that valley, except a few items like sea salt, have been produced in the valley for centuries.”

He points out that, “Notwithstanding the fact that tuberculosis is the most serious disease of Switzerland, according to a statement given me by a government official, a recent report of inspection of this valley did not reveal a single case.” In Switzerland and other countries, he found an “association of dental caries and tuberculosis.” The commonality was early-life development, as underdeveloped and maldeveloped bone structure led to diverse issues: crowded teeth, smaller skull size, misaligned features, and what was called tubercular chest. And that was an outward sign of deeper and more systemic developmental issues involving malnutrition, inflammation, and the immune system:

“Associated with a fine physical condition the isolated primitive groups have a high level of immunity to many of our modern degenerative processes, including tuberculosis, arthritis, heart disease, and affections  of the internal organs. When, however, these individuals have lost this high level of physical excellence a definite lowering in their resistance to the modern degenerative processes has taken place. To illustrate, the narrowing of the facial and dental arch forms of the children of the modernized parents, after they had adopted the white man’s food, was accompanied by an increase in susceptibility to pulmonary tuberculosis.”

Any population that lost its traditional way of life became prone to disease. But this could often as easily be reversed by having the diseased individual return to healthy conditions. In discussing Dr. Josef Romig, Price said that, “Growing out of his experience, in which he had seen large numbers of the modernized Eskimos and Indians attacked with tuberculosis, which tended to be progressive and ultimately fatal as long as the patients stayed under modernized living conditions, he now sends them back when possible to primitive conditions and to a primitive diet, under which the death rate is very much lower than under modernized  conditions. Indeed, he reported that a great majority of the afflicted recover under the primitive type of living and nutrition.”

The point made by Mailer and Hale was earlier made by Price. As seen with pre-contact Native Alaskans, the isolated traditional residents of the Loetschental Valley had nutritious diets. Price explained that he “arranged to have samples of food, particularly dairy products, sent to me about twice a month, summer and winter. These products have been tested for their mineral and vitamin contents, particularly the fat-soluble activators. The samples were found to be high in vitamins and much higher than the average samples of commercial dairy products in America and Europe, and in the lower areas of Switzerland.” Whether fat and organ meats from marine animals or dairy from pastured alpine cows, the key is high levels of fat-soluble vitamins and, of course, omega-3 fatty acids procured from a pristine environment (healthy soil and clean water with no toxins, farm chemicals, hormones, etc.). It also helped that both populations ate much of their food raw, which maintains the high nutrient content that is partly destroyed through heat.

Some might find it hard to believe that what you eat can determine whether or not you get a serious disease like tuberculosis. Conventional medicine tells us that the only thing that protects us is either avoiding contact or vaccination. But this view is being seriously challenged, as Mailer and Hale make clear (p. 164): “Several studies have focused on the link between Vitamin D and the health outcomes of individuals infected with tuberculosis, taking care to discount other causal factors and to avoid determining causation merely through association. Given the historical occurrence of the disease among indigenous people after contact, including in Alaska, those studies that have isolated the contingency of immunity on active Vitamin D are particularly pertinent to note. In biochemical experiments, the presence of the active form of vitamin D has been shown to have a crucial role in the destruction of Mycobacterium tuberculosis by macrophages. A recent review has found that tuberculosis patients tend to retain a lower-than-average vitamin D status, and that supplementation of the nutrient improved outcomes in most cases.” As an additional thought, the popular tuberculosis sanitoriums, some in the Swiss Alps, were attractive because “it was believed that the climate and above-average hours of sunshine had something to do with it” (Jo Fahy, A breath of fresh air for an alpine village). What does sunlight help the body to produce? Vitamin D.

As an additional perspective, James C. Scott, in Against the Grain, writes that, “Virtually every infectious disease caused by micro-organisms and specifically adapted to Homo sapiens has arisen in the last ten thousand years, many of them in the last five thousand years as an effect of ‘civilisation’: cholera, smallpox, measles, influenza, chickenpox, and perhaps malaria.” It is not only that agriculture introduces new diseases; it also makes people more susceptible to them. That might be true, as Scott suggests, even of a disease like malaria. The Piraha are more likely to die of malaria than anything else, but that might not have been true in the past. Let me offer a speculation by connecting it to the mummy study.

The Ancestral Puebloans, one of the groups in the mummy study, were at the time farming maize (corn) and squash while foraging pine nuts, seeds, amaranth (grain), and grasses. How does this compare to the more recent Piraha? A 1948 Smithsonian publication, Handbook of South American Indians, edited by Julian H. Steward, reported that, “The Piraha grew maize, sweet manioc (macaxera), a kind of yellow squash (jurumum), watermelon, and cotton” (p. 267). So it turns out that, like the Ancestral Puebloans, the Piraha have been on their way toward a more agricultural lifestyle for a while. I also noted that the same publication added the detail that the Piraha “did not drink rum,” but by the time Daniel Everett met the Piraha in 1978 traders had already introduced them to alcohol and it had become an occasional problem. Not only were they becoming agricultural but also Westernized, two factors that likely contributed to decreased immunity.

Like other modern hunter-gatherers, the Piraha have been affected by the Neolithic Revolution and are in many ways far different from Paleolithic hunter-gatherers. Ancient dietary habits are shown in the analysis of ancient bones. M.P. Richards writes that, “Direct evidence from bone chemistry, such as the measurement of the stable isotopes of carbon and nitrogen, do provide direct evidence of past diet, and limited studies on five Neanderthals from three sites, as well as a number of modern Palaeolithic and Mesolithic humans indicates the importance of animal protein in diets. There is a significant change in the archaeological record associated with the introduction of agriculture worldwide, and an associated general decline in health in some areas. However, there is a rapid increase in population associated with domestication of plants, so although in some regions individual health suffers after the Neolithic revolution, as a species humans have greatly expanded their population worldwide” (A brief review of the archaeological evidence for Palaeolithic and Neolithic subsistence). This is further supported in the analysis of coprolites. “Studies of ancient human coprolites, or fossilized human feces, dating anywhere from three hundred thousand to as recent as fifty thousand years ago, have revealed essentially a complete lack of any plant material in the diets of the subjects studied (Bryant and Williams-Dean 1975),” Nora Gedgaudas tells us in Primal Body, Primal Mind (p. 39).

This diet changed as humans entered our present interglacial period, with its warmer temperatures and greater abundance of vegetation, which was lacking during the Paleolithic Period: “There was far more plant material in the diets of our more recent ancestors than our more ancient hominid ancestors, due to different factors” (Gedgaudas, p. 37). Following the earlier megafauna mass extinction, it wasn’t only agriculturalists but also hunter-gatherers who began to eat more plants and, in many cases, make use of cultivated plants (either plants they cultivated themselves or plants adopted from nearby agriculturalists). To emphasize how drastic this change was, this loss of abundant meat and fat, consider the fact that humans have yet to regain the average height and skull size of Paleolithic humans.

The authors of the mummy study didn’t even attempt to look at data from Paleolithic humans. The populations compared are entirely from the past few millennia. And the only hunter-gatherer group included was post-contact. So why are the authors so confident in their conclusion? I presume they were simply trying to get published and get media attention in a highly competitive market of academic scholarship. These people obviously aren’t stupid, but they had little incentive to fully inform themselves either. All the info I shared in this post I was able to gather in about half an hour of web searches, not exactly difficult academic research. It’s amazing the info that is easily available these days, for those who want to find it.

Let me make one last point. The mummy study isn’t without its merits. The paper mentions other evidence that remains to be explained: “We also considered the reliability and previous work of the authors. Autopsy studies done as long ago as the mid-19th century showed atherosclerosis in ancient Egyptians. Also, in more recent times, Zimmerman undertook autopsies and described atherosclerosis in the mummies of two Unangan men from the same cave as our Unangan mummies and of an Inuit woman who lived around 400 CE. A previous study using CT scanning showed atherosclerotic calcifications in the aorta of the Iceman, who is believed to have lived about 3200 BCE and was discovered in 1991 in a high snowfield on the Italian-Austrian border.”

Let’s break that down. Further examples of Egyptian mummies are irrelevant, as their diet was so strikingly similar to the idealized Western diet recommended by mainstream doctors, dietitians, and nutritionists. That leaves the rest to account for. The older Unangan mummies are far more interesting, and any meaningful paper would have led with that piece of data, but even then it wouldn’t mean what the authors think it means. Atherosclerosis is one small factor and not necessarily as significant as assumed. From a functional medicine perspective, it’s the whole picture that matters in how the body actually functions and in the health that results. If so, atherosclerosis might not indicate the same thing for all populations. In Nourishing Diets, Sally Fallon Morell writes (pp. 124-125):

“Critics have pointed out that Keys omitted from his study many areas of the world where consumption of animal foods is high and deaths from heart attack are low, including France — the so-called French paradox. But there is also a Japanese paradox. In 1989, Japanese scientists returned to the same two districts that Keys had studied. In an article titled “Lessons for Science from the Seven Countries Study,” they noted that per capita consumption of rice had declined, while consumption of fats, oils, meats, poultry, dairy products and fruit had all increased. […]

“During the postwar period of increased animal consumption, the Japanese average height increased three inches and the age-adjusted death rate from all causes declined from 17.6 to 7.4 per 1,000 per year. Although the rates of hypertension increased, stroke mortality declined markedly. Deaths from cancer also went down in spite of the consumption of animal foods.

“The researchers also noted — and here is the paradox — that the rate of myocardial infarction (heart attack) and sudden death did not change during this period, in spite of the fact that the Japanese weighed more, had higher blood pressure and higher cholesterol levels, and ate more fat, beef and dairy foods.”

Right here in the United States, we have our own ‘paradox’ as well. Good Calories, Bad Calories by Gary Taubes makes a compelling argument that, based on the scientific research, there is no strong causal link between atherosclerosis and coronary heart disease. Nina Teicholz has also written extensively about this, such as in her book The Big Fat Surprise; and in an Atlantic piece (How Americans Got Red Meat Wrong) she lays out some of the evidence showing that Americans in the 19th century, as compared to the following century, ate more meat and fat while eating fewer vegetables and fruits. Nonetheless: “During all this time, however, heart disease was almost certainly rare. Reliable data from death certificates is not available, but other sources of information make a persuasive case against the widespread appearance of the disease before the early 1920s.” Whether or not earlier Americans had high rates of atherosclerosis, there is strong evidence indicating they did not have high rates of heart disease, of strokes and heart attacks. The health crisis for these conditions, as Teicholz notes, didn’t take hold until the very moment meat and animal fat consumption took a nosedive. So what gives?

The takeaway is this. We have no reason to assume that atherosclerosis, in the present or in the past, can tell us much of anything about general health. Even ignoring the fact that none of the mummies studied was from a high-protein, high-fat Paleo population, we can make no meaningful interpretation of the presence of atherosclerosis among some of the individuals. Going by modern data, there is no reason to jump to the conclusion that they had high mortality rates because of it. Quite likely, they died from completely unrelated health issues. A case in point is that of the Masai, around whom there is much debate in interpreting the data. George V. Mann and others wrote a paper, Atherosclerosis in the Masai, that demonstrated the complexity:

“The hearts and aortae of 50 Masai men were collected at autopsy. These pastoral people are exceptionally active and fit and they consume diets of milk and meat. The intake of animal fat exceeds that of American men. Measurements of the aorta showed extensive atherosclerosis with lipid infiltration and fibrous changes but very few complicated lesions. The coronary arteries showed intimal thickening by atherosclerosis which equaled that of old U.S. men. The Masai vessels enlarge with age to more than compensate for this disease. It is speculated that the Masai are protected from their atherosclerosis by physical fitness which causes their coronary vessels to be capacious.”

Put this in the context provided in What Causes Heart Disease? by Sally Fallon Morell and Mary Enig: “The factors that initiate a heart attack (or a stroke) are twofold. One is the pathological buildup of abnormal plaque, or atheromas, in the arteries, plaque that gradually hardens through calcification. Blockage most often occurs in the large arteries feeding the heart or the brain. This abnormal plaque or atherosclerosis should not be confused with the fatty streaks and thickening that is found in the arteries of both primitive and industrialized peoples throughout the world. This thickening is a protective mechanism that occurs in areas where the arteries branch or make a turn and therefore incur the greatest levels of pressure from the blood. Without this natural thickening, our arteries would weaken in these areas as we age, leading to aneurysms and ruptures. With normal thickening, the blood vessel usually widens to accommodate the change. But with atherosclerosis the vessel ultimately becomes more narrow so that even small blood clots may cause an obstruction.”

A distinction is being made here that maybe wasn’t being made in the mummy study. What gets measured as atherosclerosis could correlate with diverse health conditions and consequences in various populations across dietary lifestyles, regional environments, and historical and prehistorical periods. Finding atherosclerosis in an individual, especially a mummy, might not tell us any useful info about overall health.

Just for good measure, let’s tackle the last piece of remaining evidence the authors mention: “A previous study using CT scanning showed atherosclerotic calcifications in the aorta of the Iceman, who is believed to have lived about 3200 BCE and was discovered in 1991 in a high snowfield on the Italian-Austrian border.” Calling him Iceman, to most ears, sounds similar to calling an ancient person a caveman, implying that he was a hunter, since it is hard to grow plants on ice. In response, Paul Mabry, in Did Meat Eating Make Ancient Hunter Gatherers Get Heart Disease, shows what was left out of the research paper:

“Sometimes the folks trying to discredit hunter-gather diets bring in Ötzi, “The Iceman” a frozen human found in the Tyrolean Mountains on the border between Austria and Italy that also had plaques in his heart arteries. He was judged to be 5300 years old making his era about 3400 BCE. Most experts feel agriculture had reach Europe almost 700 years before that according to this article. And Ötzi himself suggests they are right. Here’s a quote from the Wikipedia article on Ötzi’s last meal (a sandwich): “Analysis of Ötzi’s intestinal contents showed two meals (the last one consumed about eight hours before his death), one of chamois meat, the other of red deer and herb bread. Both were eaten with grain as well as roots and fruits. The grain from both meals was a highly processed einkorn wheat bran,[14] quite possibly eaten in the form of bread. In the proximity of the body, and thus possibly originating from the Iceman’s provisions, chaff and grains of einkorn and barley, and seeds of flax and poppy were discovered, as well as kernels of sloes (small plumlike fruits of the blackthorn tree) and various seeds of berries growing in the wild.[15] Hair analysis was used to examine his diet from several months before. Pollen in the first meal showed that it had been consumed in a mid-altitude conifer forest, and other pollens indicated the presence of wheat and legumes, which may have been domesticated crops. Pollen grains of hop-hornbeam were also discovered. The pollen was very well preserved, with the cells inside remaining intact, indicating that it had been fresh (a few hours old) at the time of Ötzi’s death, which places the event in the spring. Einkorn wheat is harvested in the late summer, and sloes in the autumn; these must have been stored from the previous year.””

Once again, we are looking at the health issues of someone eating an agricultural diet. It’s amazing that the authors, 19 of them, apparently all agreed that diet has nothing to do with a major component of health. That is patently absurd. To the credit of The Lancet, it published a criticism of this conclusion, Atherosclerosis in ancient populations by Gino Fornaciari and Raffaele Gaeta (though these critics repeat their own preferred conventional wisdom in their view of saturated fat):

“The development of vascular calcification is related not only to atherosclerosis but also to conditions such as disorders of calcium-phosphorus metabolism, diabetes, chronic microinflammation, and chronic renal insufficiency.

“Furthermore, stating that atherosclerosis is not characteristic of any specific diet or lifestyle, but an inherent component of human ageing is not in agreement with recent studies demonstrating the importance of diet and physical activity.5 If atherosclerosis only depended on ageing, it would not have been possible to diagnose it in a young individual, as done in the Horus study.1

“Finally, classification of probable atherosclerosis on the basis of the presence of a calcification in the expected course of an artery seems incorrect, because the anatomy can be strongly altered by post-mortem events. The walls of the vessels might collapse, dehydrate, and have the appearance of a calcific thickening. For this reason, the x-ray CT pattern alone is insufficient and diagnosis should be supported by histological study.”

As far as I know, this didn’t lead to a retraction of the paper. Nor did the criticism receive the attention that the paper itself was given. None of the people who praised the paper bothered to point out the criticism, at least not among what I came across. Anyway, how did such a weakly argued paper based on faulty evidence get published in the first place? And then how did it get spread by so many as if it were proven fact?

This is the uphill battle faced by anyone seeking to offer an alternative perspective, especially on diet. This makes meaningful debate next to impossible. That won’t stop those like me from slowly chipping away at the vast edifice of the dominant paradigm. On a positive note, it helps when the evidence used against an alternative view, after reinterpretation, ends up being strong evidence in favor of it.

Paradoxes of State and Civilization Narratives

Below is a passage from a recent book by James C. Scott, Against the Grain.

The book is about agriculture, sedentism, and early statism. The author questions the standard narrative. In doing so, he looks more closely at what the evidence actually shows us about civilization, specifically in terms of supposed collapses and dark ages (elsewhere in the book, he also discusses how non-state ‘barbarians’ are connected to, influenced by, and defined according to states).

Oddly, Scott never mentions Göbekli Tepe. It is an ancient archaeological site that offers intriguing evidence of civilization preceding and hence not requiring agriculture, sedentism, or statism. As has been said of it, “First came the temple, then the city.” That would seem to fit into the book’s framework.

The other topic not mentioned, less surprisingly, is Julian Jaynes’ theory of bicameralism. Jaynes’ view might complicate Scott’s interpretations. Scott goes into great detail about domestication and slavery, specifically in the archaic civilizations such as first seen with the walled city-states. But Jaynes pointed out that authoritarianism as we know it didn’t seem to exist early on, as the bicameral mind made social conformity possible through non-individualistic identity and collective experience (explained in terms of the hypothesis of archaic authorization).

Scott’s focus is more on external factors. From my perusal of the book, he doesn’t seem to fully take into account social science research, cultural studies, anthropology, philology, etc. The thesis of the book could have been further developed by exploring other areas, although maybe the narrow focus is useful for emphasizing the central point about agriculture. There is a deeper issue, though, that the author does touch upon. What does it mean to be a domesticated human? After all, that is what civilization is about.

He does offer an interesting take on human domestication. Basically, he doesn’t think most humans ever took on the yoke of civilization willingly. There must be systems of force and control in place to make people submit. I might agree, though I’m not sure this is the central issue. It’s less about how people submit in body than how they submit in mind. Whether or not we are sheep, there is no shepherd. Even the rulers of the state are sheep.

The temple comes first. Before civilization proper, before walled city-states, before large-scale settlement, before agriculture, before even pottery, there was a temple. What does the temple represent?

* * *

Against the Grain
by James C. Scott
pp. 22-27

PARADOXES OF STATE AND CIVILIZATION NARRATIVES

A foundational question underlying state formation is how we ( Homo sapiens sapiens ) came to live amid the unprecedented concentrations of domesticated plants, animals, and people that characterize states. From this wide-angle view, the state form is anything but natural or given. Homo sapiens appeared as a subspecies about 200,000 years ago and is found outside of Africa and the Levant no more than 60,000 years ago. The first evidence of cultivated plants and of sedentary communities appears roughly 12,000 years ago. Until then—that is to say for ninety-five percent of the human experience on earth—we lived in small, mobile, dispersed, relatively egalitarian, hunting-and-gathering bands. Still more remarkable, for those interested in the state form, is the fact that the very first small, stratified, tax-collecting, walled states pop up in the Tigris and Euphrates Valley only around 3,100 BCE, more than four millennia after the first crop domestications and sedentism. This massive lag is a problem for those theorists who would naturalize the state form and assume that once crops and sedentism, the technological and demographic requirements, respectively, for state formation were established, states/empires would immediately arise as the logical and most efficient units of political order. 4

These raw facts trouble the version of human prehistory that most of us (I include myself here) have unreflectively inherited. Historical humankind has been mesmerized by the narrative of progress and civilization as codified by the first great agrarian kingdoms. As new and powerful societies, they were determined to distinguish themselves as sharply as possible from the populations from which they sprang and that still beckoned and threatened at their fringes. In its essentials, it was an “ascent of man” story. Agriculture, it held, replaced the savage, wild, primitive, lawless, and violent world of hunter-gatherers and nomads. Fixed-field crops, on the other hand, were the origin and guarantor of the settled life, of formal religion, of society, and of government by laws. Those who refused to take up agriculture did so out of ignorance or a refusal to adapt. In virtually all early agricultural settings the superiority of farming was underwritten by an elaborate mythology recounting how a powerful god or goddess entrusted the sacred grain to a chosen people.

Once the basic assumption of the superiority and attraction of fixed-field farming over all previous forms of subsistence is questioned, it becomes clear that this assumption itself rests on a deeper and more embedded assumption that is virtually never questioned. And that assumption is that sedentary life itself is superior to and more attractive than mobile forms of subsistence. The place of the domus and of fixed residence in the civilizational narrative is so deep as to be invisible; fish don’t talk about water! It is simply assumed that weary Homo sapiens couldn’t wait to finally settle down permanently, could not wait to end hundreds of millennia of mobility and seasonal movement. Yet there is massive evidence of determined resistance by mobile peoples everywhere to permanent settlement, even under relatively favorable circumstances. Pastoralists and hunting-and-gathering populations have fought against permanent settlement, associating it, often correctly, with disease and state control. Many Native American peoples were confined to reservations only on the heels of military defeat. Others seized historic opportunities presented by European contact to increase their mobility, the Sioux and Comanche becoming horseback hunters, traders, and raiders, and the Navajo becoming sheep-based pastoralists. Most peoples practicing mobile forms of subsistence—herding, foraging, hunting, marine collecting, and even shifting cultivation—while adapting to modern trade with alacrity, have bitterly fought permanent settlement. At the very least, we have no warrant at all for supposing that the sedentary “givens” of modern life can be read back into human history as a universal aspiration. 5

The basic narrative of sedentism and agriculture has long survived the mythology that originally supplied its charter. From Thomas Hobbes to John Locke to Giambattista Vico to Lewis Henry Morgan to Friedrich Engels to Herbert Spencer to Oswald Spengler to social Darwinist accounts of social evolution in general, the sequence of progress from hunting and gathering to nomadism to agriculture (and from band to village to town to city) was settled doctrine. Such views nearly mimicked Julius Caesar’s evolutionary scheme from households to kindreds to tribes to peoples to the state (a people living under laws), wherein Rome was the apex, with the Celts and then the Germans ranged behind. Though they vary in details, such accounts record the march of civilization conveyed by most pedagogical routines and imprinted on the brains of schoolgirls and schoolboys throughout the world. The move from one mode of subsistence to the next is seen as sharp and definitive. No one, once shown the techniques of agriculture, would dream of remaining a nomad or forager. Each step is presumed to represent an epoch-making leap in mankind’s well-being: more leisure, better nutrition, longer life expectancy, and, at long last, a settled life that promoted the household arts and the development of civilization. Dislodging this narrative from the world’s imagination is well nigh impossible; the twelve-step recovery program required to accomplish that beggars the imagination. I nevertheless make a small start here.

It turns out that the greater part of what we might call the standard narrative has had to be abandoned once confronted with accumulating archaeological evidence. Contrary to earlier assumptions, hunters and gatherers—even today in the marginal refugia they inhabit—are nothing like the famished, one-day-away-from-starvation desperados of folklore. Hunters and gatherers have, in fact, never looked so good—in terms of their diet, their health, and their leisure. Agriculturalists, on the contrary, have never looked so bad—in terms of their diet, their health, and their leisure. 6 The current fad of “Paleolithic” diets reflects the seepage of this archaeological knowledge into the popular culture. The shift from hunting and foraging to agriculture—a shift that was slow, halting, reversible, and sometimes incomplete—carried at least as many costs as benefits. Thus while the planting of crops has seemed, in the standard narrative, a crucial step toward a utopian present, it cannot have looked that way to those who first experienced it: a fact some scholars see reflected in the biblical story of Adam and Eve’s expulsion from the Garden of Eden.

The wounds the standard narrative has suffered at the hands of recent research are, I believe, life threatening. For example, it has been assumed that fixed residence—sedentism—was a consequence of crop-field agriculture. Crops allowed populations to concentrate and settle, providing a necessary condition for state formation. Inconveniently for the narrative, sedentism is actually quite common in ecologically rich and varied, preagricultural settings—especially wetlands bordering the seasonal migration routes of fish, birds, and larger game. There, in ancient southern Mesopotamia (Greek for “between the rivers”), one encounters sedentary populations, even towns, of up to five thousand inhabitants with little or no agriculture. The opposite anomaly is also encountered: crop planting associated with mobility and dispersal except for a brief harvest period. This last paradox alerts us again to the fact that the implicit assumption of the standard narrative—namely that people couldn’t wait to abandon mobility altogether and “settle down”—may also be mistaken.

Perhaps most troubling of all, the civilizational act at the center of the entire narrative: domestication turns out to be stubbornly elusive. Hominids have, after all, been shaping the plant world—largely with fire—since before Homo sapiens. What counts as the Rubicon of domestication? Is it tending wild plants, weeding them, moving them to a new spot, broadcasting a handful of seeds on rich silt, depositing a seed or two in a depression made with a dibble stick, or ploughing? There appears to be no “aha!” or “Edison light bulb” moment. There are, even today, large stands of wild wheat in Anatolia from which, as Jack Harlan famously showed, one could gather enough grain with a flint sickle in three weeks to feed a family for a year. Long before the deliberate planting of seeds in ploughed fields, foragers had developed all the harvest tools, winnowing baskets, grindstones, and mortars and pestles to process wild grains and pulses. 7 For the layman, dropping seeds in a prepared trench or hole seems decisive. Does discarding the stones of an edible fruit into a patch of waste vegetable compost near one’s camp, knowing that many will sprout and thrive, count?

For archaeo-botanists, evidence of domesticated grains depended on finding grains with nonbrittle rachis (favored intentionally and unintentionally by early planters because the seedheads did not shatter but “waited for the harvester”) and larger seeds. It now turns out that these morphological changes seem to have occurred well after grain crops had been cultivated. What had appeared previously to be unambiguous skeletal evidence of fully domesticated sheep and goats has also been called into question. The result of these ambiguities is twofold. First, it makes the identification of a single domestication event both arbitrary and pointless. Second, it reinforces the case for a very, very long period of what some have called “low-level food production” of plants not entirely wild and yet not fully domesticated either. The best analyses of plant domestication abolish the notion of a singular domestication event and instead argue, on the basis of strong genetic and archaeological evidence, for processes of cultivation lasting up to three millennia in many areas and leading to multiple, scattered domestications of most major crops (wheat, barley, rice, chick peas, lentils). 8

While these archaeological findings leave the standard civilizational narrative in shreds, one can perhaps see this early period as part of a long process, still continuing, in which we humans have intervened to gain more control over the reproductive functions of the plants and animals that interest us. We selectively breed, protect, and exploit them. One might arguably extend this argument to the early agrarian states and their patriarchal control over the reproduction of women, captives, and slaves. Guillermo Algaze puts the matter even more boldly: “Early Near Eastern villages domesticated plants and animals. Uruk urban institutions, in turn, domesticated humans.” 9

Fascism, Corporatism, and Big Ag

For a number of years, I’ve been trying to wrap my mind around fascism and corporatism. The latter is but one part of the former, although sometimes they are used interchangeably. Corporatism was central to fascism, a defining feature.

Corporatism didn’t originate with fascism, though. It has a long history and became well developed under feudalism. In centuries past, corporations were never conflated with private businesses. Instead, corporations were entities of the state and served the interests of the state. Corporatism, as such, was an entire society organized on this basis.

The slave plantation South is an example of a corporatist society. This is the basis of the argument made by Eugene Genovese and Elizabeth Fox-Genovese. They connected corporatism to traditional conservatism, as opposed to the individualistic liberalism of capitalism. Like in fascism, this slave plantation corporatism was a rigid social order with social roles clearly defined. It’s a mostly forgotten strain of American conservatism that once was powerful.

In fascist regimes, corporatism was used to organize society and the economy by way of the government’s role in linking labor and industry—similar to slavery, it was “designed to minimize class antagonisms” (Genovese & Fox-Genovese, The Mind of the Master Class, p. 668). In its initial phases, industrial corporations gained immense power and wealth, as managerial efficiency became the dominant priority and centralized planning was combined with private ownership.

Fascism is a counterrevolutionary expression of reactionary conservatism. As Corey Robin explains, the political right worldview (fascist and otherwise) has a particular talent for borrowing, both from the left and from the past. It just as easily borrows elements from pre-modern corporatism as it does from modern socialism and capitalism. It’s a mishmash, unconcerned with principled consistency and ideological coherence. This makes it highly adaptable and potentially hard to detect.

This relates to how the right wing in the US transformed corporatism into an element of capitalism. The slave plantation South was central to this process, combining elements of pre-capitalism with capitalism. Slave owners like Thomas Jefferson were increasingly moving toward industrialized capitalism. Before the Civil War, many plantations were being industrialized and many slaves were leased out to work in Southern factories.

Looking for fascism or elements of fascism in American society requires careful observation and analysis. It won’t manifest in the way it did in early 20th century Europe. Capitalists have been much more independent in the US, at times leading to their having more power over government than the other way around. In a country like this, it’s less clear which direction power runs, whether toward fascism or toward inverted totalitarianism. Either way, the economic system is centrally important for social control.

Yet capitalist rhetoric in the US so often speaks of a mistrust of government. Some history would be helpful. Consider again the example of the South. In Democracy and Trust, Mark E. Warren writes that,

The Southern herrenvolk democracy thrived on slavery and after the Reconstruction remained “mired in the defense of a totally segregated society” (Black and Black 1987: 75). It shared with the Northern elite a suspicion of majority rule and mass participation. It continued to use collective systems of mutual trust both to provide political solidarity and to divide and discourage participation in the political system. But it differed radically from its Northern conservative counterpart in its lack of hostility to the state and governmental authorities. What the South loathed was, and remains, not big government but centralized, federal government. On the state and city levels, elites see politics as a means of exercising power, not something to be shunned. (pp. 166-7)

I would correct one thing. Southerners were never against centralized, federal government as such. In fact, until the mid-1800s, the Southern elite dominated the federal government. It was their use of the federal government to enforce slave laws onto the rest of the country that led to the growing conflict that turned into a civil war. What Southerners couldn’t abide was a centralized, federal government that had come under the sway of the growing industry and population of the North.

The Southern elite loved big government so much that they constantly looked toward expanding the politics and economics of slavery. It’s why Southerners transported so many slaves Westward (see Bound Away by Fischer & Kelly) and why they had their eyes on Mexico and Cuba.

Slaveholders even went as far as California during the 1849 gold rush, and they brought their slaves with them. California technically became a free state, although slavery persisted. Later on, Civil War conflict arose on the West Coast, but open battle was avoided. Interestingly, the conflict in California also fell along a North-South divide, with southern Californians seeking secession from northern California even before the Civil War.

Southern California saw further waves of Southerners. Besides earlier transplanted Southerners, these included the so-called Okies of the Dust Bowl looking for agricultural work and the post-war laborers looking for employment in the defense industry. A Southern-influenced culture became well established in Southern California. This was a highly religious population that eventually would give rise to the phenomenon of mega-churches, televangelists, and the culture wars. It also helped shape a particular kind of highly profitable big ag with much power and influence. Kathryn Olmsted, in Right Out of California, wrote that,

These growers were not angry at the New Deal because they hated big government. Unlike Eastern conservatives, Western businessmen were not libertarians who opposed most forms of government intervention in the economy. Agribusiness relied on the government to survive and prosper: it needed price supports for stability, government dams and canals for irrigation, and state university research for crop improvements. These business leaders not only acknowledged but demanded a large role for government in the economy.

By focusing on Western agribusiness, we can see that the New Right was no neoliberal revolt against the dead hand of government intervention. Instead, twentieth-century conservatism was a reaction to the changes in the ways that government was intervening in the economy—in short, a shift from helping big business to creating a level playing field for workers. Even Ronald Reagan, despite his mythical image as a cowboy identified with the frontier, was not really a small-government conservative but a corporate conservative. 110 Reagan’s revolution did not end government intervention in the economy: it only made the government more responsive to the Americans with the most wealth and power. (Kindle Locations 4621-4630)

This Californian political force is what shaped a new generation of right-wing Republicans. Richard Nixon was born and raised in the reactionary heart of Southern California. It was where the Southern Strategy that Nixon would push onto the national scene was developed. Nixon set the stage for the likes of Ronald Reagan, who helped extend this new conservatism beyond the confines of big ag, as Reagan had become a corporate spokesperson before getting into politics.

The origins of this California big ag are important and unique. Unlike Midwestern farming, that of California more quickly concentrated land ownership and so concentrated wealth and power. Plus, it was highly dependent on infrastructure funded, built, and maintained by big government. It should be noted that big ag was among the major recipients of New Deal farm subsidies. Its complaint about the New Deal was that it gave farm laborers some basic rights, although the New Deal kept the deck stacked in big ag’s favor. Early 20th century Californian big ag is one of the clearest examples of overt fascism in US history.

The conservative elite in California responded to the New Deal much as the conservative elite in the South responded to Reconstruction. It led to a backlash in which immense power was wielded at the state level. As Olmsted makes clear,

employers could use state and local governments to limit the reach of federal labor reforms. Carey McWilliams and Herbert Klein wrote in The Nation that California had moved from “sporadic vigilante activity to controlled fascism, from the clumsy violence of drunken farmers to the calculated maneuvers of an economic-militaristic machine.” No longer would employers need to rely on hired thugs to smash strikes. Instead, they could trust local prosecutors to brand union leaders as “criminal syndicalists” and then send them to prison. McWilliams and Klein suggested that this antiunion alliance between big business and the courts was similar to the state-business partnership in Hitler’s Germany. 104

But these growers and their supporters were not European-style fascists; they were the forerunners of a new, distinctly American movement. (Kindle Locations 4134-4141)

Still, it was fascism. In The Harvest Gypsies, John Steinbeck wrote that, “Fascistic methods are more numerous, more powerfully applied and more openly practiced in California than any other place in the United States.”

The development of big ag in California was different, at least initially. But everything across the country was moving toward greater concentration. It wasn’t just California. Organizations like the Farm Bureau became central in other parts of the country. As in California, they set farmers against labor, since organized labor’s demands for basic rights came to be perceived as radical. Richard McIntyre, in his essay “Labor Militance and the New Deal” from When Government Helped, writes that, “Groups representing farmers outside the South, such as the Farm Bureau, also supported Taft-Hartley because they saw strikes and secondary boycotts as limiting their ability to get crops to market. The split between labor and various kinds of farmers allowed capitalists to heal their divisions” (p. 133).

It was a division among farmers themselves as well, since there had also been agricultural traditions of left-wing politics and populist reform. “From its beginning in Indiana the Farm Bureau made it clear that the organization was composed of respectable members of the farming community and that it was not a bunch of radicals or troublemakers” (Barbara J. Steinson, Rural Life in Indiana, 1800–1950). By respectable, this meant that the haves got more and the have-nots lost what little they had.

Even though big ag took a different route in regions like the Midwest, the end results were similar: the increasing concentration of land and wealth, which is to say the end of the small family farm. This was happening all over, such as in the South: “These ideals emphasized industrialized, commercial farming by ever-larger farms and excluded many smaller farms from receiving the full benefit of federal farm aid. The resulting programs, by design, contributed significantly to the contraction of the farm population and the concentration of farm assets in the Carolinas” (Elizabeth Kathleen Brake, Uncle Sam on the Family Farm). Those excluded from farm aid were the typical groups, minorities and poor whites.

This country was built on farming. It has some of the best farmland in the world. That means vast wealth. Big ag lobbyists have a lot of pull in the federal government. That is why fascism in this country early on found its footing in this sector of the economy, rather than in industry. Over time, corporatism has come to dominate the entire economy, and the locus of power has shifted to the financial sector. Agriculture, like other markets, has become heavily tied to those who control the flow of money. The middle class, through 401(k)s, has also become tied to financial markets.

Corporatism no longer means what it once did. Earlier European fascism depended on an organizational society, at a time when civic organizations, labor unions, and the like shaped all of life. We no longer live in that kind of world.

Because of this, new forms of authoritarianism don’t require such overt methods of social control. It becomes ever more difficult for the average person to see what is happening and why. More and more people are caught up in a vicious economy, facing poverty and debt, maybe homelessness or incarceration. The large landowner or industrialist won’t likely send out goons to beat you up. There are no Nazi Brownshirts marching in the street. There is no enemy to fight or resist, just a sense of everything getting worse all around you.

Yet some have begun to grasp the significance of decentralization. Unsurprisingly, a larger focus has been on the source of food, such as the locally grown movement. Raising one’s own food is key in seeking economic and political independence. Old forms of the yeoman farmer may be a thing of the past, but poor communities have begun to turn to community gardens and the younger generation has become interested in making small farming viable again. It was technology with the force of the state behind it that allowed centralization. A new wave of ever more advanced and cheaper technology is making greater decentralization possible.

Those with power, though, won’t give it up easily.

* * *

American Fascism and the New Deal: The Associated Farmers of California and the Pro-Industrial Movement
by Nelson A. Pichardo Almanzar and Brian W. Kulik

Right Out of California: The 1930s and the Big Business Roots of Modern Conservatism
by Kathryn S. Olmsted

From Slavery to the Cooperative Commonwealth: Labor and Republican Liberty in the Nineteenth Century
by Alex Gourevitch

Developing the Country: “Scientific Agriculture” and the Roots of the Republican Party
by Ariel Ron

Scientific Agriculture and the Agricultural State: Farmers, Capitalism, and Government in the Late Nineteenth Century
by Ariel Ron

Uncle Sam on the Family Farm: Farm Policy and the Business of Southern Agriculture, 1933-1965
by Elizabeth Kathleen Brake

A Progressive Rancher Opposes the New Deal: Dan Casement, Eugenics, and Republican Virtue
by Daniel T. Gresham

Whose Side Is the American Farm Bureau On?
by Ian T. Shearn

Farm Bureau Works Against Small Family Farm ‘Hostages’
by jcivitas
Letter to FFC from one member of the Farm Bureau who operates a family farm.

The Impact of Globalization on Family Farm Agriculture
by Bill Christison

Plowing the Furrows of the Mind

One of the best books I read this past year is The Invisible History of the Human Race by Christine Kenneally. The book covers the type of data that HBDers (human biodiversity advocates) and other hereditarians tend to ignore. Kenneally shows how powerful environment is in shaping thought, perception, and behavior.

What really intrigued me is how persistent patterns can be once set into place. Old patterns get disrupted by violence such as colonialism and mass trauma such as slavery. In the place of the old, something new takes form. But this process isn’t always violent. In some cases, technological innovation can change an entire society.

This is true for a technology as simple as the plow. Just imagine what impact a more complex technology like computers and the internet will have on society in the coming generations and centuries. Also, over this past century or so, we have seen a greater change to agriculture than maybe has been seen in all of civilization. Agriculture is becoming industrialized and technologized.

What new social system is being created? How long will it take to become established as a new stable order?

We live in a time of change and we can’t see the end of it. We are like the people who lived when the use of plows first began to spread. All that we know, as all that they knew, is that we are amidst change. This inevitably creates fear and anxiety. It is a crisis that has the potential to be more transformative than a world war. It is a force that will be both destructive and creative, but either way it is unpredictable.

* * *

The Invisible History of the Human Race:
How DNA and History Shape Our Identities and Our Futures
by Christine Kenneally
Kindle Locations 2445-2489

Catastrophic events like the plague or slavery are not the only ones that echo down the generations. Widespread and deeply held beliefs can be traced to apparently benign events too, like the invention of technology. In the 1970s the Danish economist Ester Boserup argued that the invention of the plow transformed the way men and women viewed themselves. Boserup’s idea was that because the device changed how farming communities labored, it also changed how people thought about labor itself and about who should be responsible for it.

The main farming technology that existed when the plow was introduced was shifting cultivation. Using a plow takes a lot of upper-body strength and manual power, whereas shifting cultivation relies on handheld tools like hoes and does not require as much strength. As communities took up the plow, it was most effectively used by stronger individuals, and these were most often men. In societies that used shifting cultivation, both men and women used the technology. Of course, the plow was invented not to exclude women but to make cultivation faster and easier in areas where crops like wheat, barley, and teff were grown over large, flat tracts of land in deep soil. Communities living where sorghum and millet grew best—typically in rocky soil—continued to use the hoe. Boserup believed that after the plow forced specialization of labor, with men in the field and women remaining in the home, people formed the belief—after the fact—that this arrangement was how it should be and that women were best suited to home life.

Boserup made a solid historical argument, but no one had tried to measure whether beliefs about innate differences between men and women across the world could really be mapped according to whether their ancestors had used the plow. Nathan Nunn read Boserup’s ideas in graduate school, and ten years later he and some colleagues decided to test them.

Once again Nunn searched for ways to measure the Old World against the new. He and his colleagues divided societies up according to whether they used the plow or shifting cultivation. They gathered current data about male and female lives, including how much women in different societies worked in public versus how much they worked in the home, how often they owned companies, and the degree to which they participated in politics. They also measured public attitudes by comparing responses to statements in the World Value Survey like “When jobs are scarce, men should have more right to a job than a woman.”

Nunn found that if you asked an individual whose ancestors grew wheat about his beliefs regarding women’s place, it was much more likely that his notion of gender equality would be weaker than that of someone whose ancestors had grown sorghum or millet. Where the plow was used there was greater gender inequality and women were less common in the workforce. This was true even in contemporary societies in which most of the subjects would never even have seen a plow, much less used one, and in societies where plows today are fully mechanized to the point that a child of either gender would be capable of operating one.

Similar research in the cultural inheritance of psychology has explored the difference between cultures in the West and the East. Many studies have found evidence for more individualistic, analytic ways of thought in the West and more interdependent and holistic conceptions of the self and cooperation in the East. But in 2014 a team of psychologists investigated these differences in populations within China based on whether the culture in question traditionally grew wheat or rice. Comparing cultures within China rather than between the East and West enabled the researchers to remove many confounding factors, like religion and language.

Participants underwent a series of tests in which they paired two of three pictures. In previous studies the way a dog, a rabbit, and a carrot were paired differed according to whether the subject was from the West or the East. The Eastern subjects tended to pair the rabbit with a carrot, which was thought to be the more holistic, relational solution. The Western subjects paired the dog and the rabbit, which is more analytic because the animals belong in the same category. In another test subjects drew pictures of themselves and their friends. Previous studies had shown that westerners drew themselves larger than their friends. Another test surveyed how likely people were to privilege friends over strangers; typically Eastern cultures score higher on this measure.

In all the tests the researchers found that, independent of a community’s wealth or its exposure to pathogens or to other cultures, the people whose ancestors grew rice were much more relational in their thinking than the people whose ancestors were wheat growers. Other measures pointed at differences between the two groups. For example, people from a wheat-growing culture divorced significantly more often than people from a rice-growing culture, a pattern that echoes the difference in divorce rates between the West and the East. The findings were true for people who live in rice and wheat communities today regardless of their occupation; even when subjects had nothing to do with the production of crops, they still inherited the cultural predispositions of their farming forebears.

The differences between the cultures are attributed to the different demands of the two kinds of agriculture. Rice farming depends on complicated irrigation and the cooperation of farmers around the use of water. It also requires twice the amount of labor that is necessary for wheat, so rice-growing communities often stagger the planting of crops in order that all their members can help with the harvest. Wheat farming, by contrast, doesn’t need complicated irrigation or systems of cooperation among growers.

The implication of these studies is that the way we see the world and act in it—whether the end result is gender inequality or trusting strangers—is significantly shaped by internal beliefs and norms that have been passed down in families and small communities. It seems that these norms are even taken with an individual when he moves to another country. But how might history have such a powerful impact on families, even when they have moved away from the place where that history, whatever it was, took place?

Myth, Religion, and Social Development: Part II

Posted on Apr 8th, 2008 by Marmalade : Gaia Child Marmalade
(This is also posted in the God Pod.)

I want to bring together the evolutionary causes at work behind myth and religion.  I mentioned these earlier: Campbell’s view on the transition from hunter to planter societies, Philippe’s ideas about the paganism incorporated into Christianity, Spiral Dynamics, and the Axial Age.

To this I want to add Paul Shepard’s theory about Pleistocene man.  Shepard believes that the transition between hunters and planters was the most important shift in social development… or disruption rather.  This shift was world-wide and is comparable to the Axial Age.

Add this all up, and it gives us 2 major shifts connecting 3 major eras.  It’s Spiral Dynamics that allows us to map this out.  (It goes without saying that this is all tentative.)

First Era: Prior to the post-Pleistocene shift, we have the vmemes of beige and purple.  It seems that Campbell and Shepard are treating these two inseparably.  As we know very little about the myths of beige, we don’t need to worry about it.  Still, it’s beige that Shepard is somewhat romanticizing.  In addition, I think the individualistic focus of red vmeme gets mixed in because the myths were written down during the development of the blue vmeme, and red came to represent all of the past.  There is the theory that the vmemes switch between a focus on the individual and a focus on the collective, and it makes sense to me.  So, beige and red would be individualistic, which isn’t to say the individual had yet fully developed.

Anyways, to simplify, this first era is the Age of the Shaman… Campbell’s Shamanistic Titan seeking personal power through personal sacrifice.  But the Shaman isn’t a monk… the Shaman is also the Warrior and the Hunter.  Visions have power.

Also, this was the time when the divine man-animal was worshipped, the prototype of all later dying/resurrection gods.  Here is a quote from a review of a book by Paul Shepard (along with Barry Sanders) titled The Sacred Paw:

They give a really good argument for the shifting of the emphasis of the myths from the Bear Mother to the adventures of her sons, who eventually become purely human heroes. The Underworld and Rebirth themes of the Bear Mother are slowly stripped from her until she is nothing but a memory.

Post-Pleistocene (or rather post-hunter/gatherer) shift: The cause of this is explained variously.  Did the Ice Age traumatize the collective psyche of the human species?  Or, according to Shepard, did the shift occur from within… for some unknown reason, man falling out of alignment with his environment?  Or was it some kind of Telos (God?) that propelled social evolution?  And was this shift a good thing (an evolutionary advantage) or a bad thing (Shepard’s collective madness)?  For our purposes, answering these questions isn’t necessary.  All we need to know is that a major shift happened.

As for Spiral Dynamics, my guess is that this shift was red vmeme and also red shifting into blue.  This shift probably occurred over a very long period of time.

Second Era: This is the beginnings of civilization proper: agriculture and city-states, and the great Matriarchies… this is very early blue vmeme which isn’t blue as we know it now.  This era was blue in a more pure form, not adulterated by orange and green as found in the Third Era.

At this time, society became hierarchical and the caste system came about… and with it the division of labor.  Life was extremely organized, including religion… the visions of the shaman became the oracles that served the priesthood, and the myths became complex rituals.  Life revolved around the seasons and the seasonal celebrations.  This was where we got our celebrations of the Equinoxes and Solstices, as Solar symbolism was the focus.

(A shift within the Second Era)  In the later part of this era, the Matriarchies lost power and written history began.  But the Patriarchies were also blue and they retained the hierarchical structure even if a different gender was on top.  The primary difference was that orange was beginning to develop with a reemergence of individualism, meaning the hierarchy was not quite as strict as previously.  The shift between Matriarchy and Patriarchy is significant, but it isn’t my focus for the moment.  The development of Patriarchy grew out of a dissatisfaction with the old ways.  One explanation (that Jeremy Taylor brings up) is that the precession of the equinoxes altered the timing by which the Matriarchies had planted and harvested.  This led to the priesthood no longer being able to predict the seasons, and so social unrest followed.  This dissatisfaction with the prior Goddess worship can be felt in the myth of Gilgamesh.

Axial Age: (Karl Jaspers first wrote about this, and Karen Armstrong wrote a whole book about it.)  This is when all of the world religions that we know today first arose.  Or, in the case of Hinduism and Judaism, when previous religions were revisioned.  The Old Testament was written down for the first time during this period.  Christianity and Islam were later manifestations of this Age.

Blue is still in power, but orange has developed enough to allow some incisive questioning of tradition.  Also, green is first showing itself to any significant degree.  So we have the development of rationality and self-inquiry along with a sense of social equality and justice.  Liberation was the spiritual response and democracy was the political response.

Mythologically, we have the development of the savior stories as we know them today.  Jesus doesn’t change the world by conquering nations.  He changes the world by confronting himself, challenging the human condition.  The prophets of this age tended to turn inward.

The agricultural city-states were being forced to develop new modes of politics.  The Greeks developed democracy and philosophy.  The great myths were being written down and questioned, which meant man was no longer controlled by the gods but could choose his own destiny.  The heroes of this time often challenged the gods.  Man could save himself; man was coming of age.

Third Era: The age of empires… symbolized in the West by the Romans and the later Catholic Church.  Blue is still very much the dominating paradigm, but orange has become established.  However, the green that showed itself in the Axial Age is squelched back out of existence, not to be seen again until the Renaissance.

Religion becomes more ritualized and homogenized than ever before.  Using the Roman Empire as its structure, the Catholic Church destroys and/or incorporates every religion it comes into contact with.  And this is why we have such a strange mix of mythologies in Christianity today.  But this also paved the way for us moderns to see the universal truths behind all myths.  (Buddhism did something similar for the East.)

During this time, Jesus the prophet and savior becomes the Ruler of the World.

To be continued…


Nicole : wakingdreamer

about 1 hour later

Nicole said

i’m really enjoying this series… thanks so much for cross posting!

Marmalade : Gaia Explorer

about 9 hours later

Marmalade said

I’m glad you’re enjoying it, but I do hope some others will respond.  I really don’t know to what degree this all makes sense.  My knowledge of Spiral Dynamics is pretty basic, but it’s not the most important part of my thinking here even though I’m using it as a central context.

I don’t have any perfectly clear ideas at the moment.  I’m just pondering the possibilities of patterns.

Nicole : wakingdreamer

1 day later

Nicole said

excellent! yes, we are in a bit of a lull at the moment on the God Pod… just catching our breath before the next waves of activity i’m sure! 🙂