The Commons of World, Experience, and Identity

The commons, sadly, have become less common. Mark Vernon writes that, “in the Middle Ages, fifty per cent or more of the land was commons, accessible to everybody” (Spiritual Commons). The Charter of the Forest formally established the commons within English law, and it remained in force from 1217 to 1971. That isn’t some ancient tradition; it survived far into modernity, well within living memory. The beginning of the end was the enclosure movement, first seen not long after the Charter was signed into law, though the mass evictions of peasants from their land wouldn’t happen until many centuries later, with sheep herding, coal mining, and industrialization.

It’s hard for us to imagine what the commons was. It wasn’t merely about land and resources, or about the customs and laws governing rights and responsibilities, who had access to what and in what ways. The commons was a total social order, a way of being. The physical commons was secondary to the spiritual commons as community, home, and sense of place (“First came the temple, then the city.”) — “Landscape is memory, and memory in turn compresses to become the rich black seam that underlies our territory” (Alan Moore, “Coal Country”, from Spirits of Place); “…haunted places are the only ones people can live in” (Michel de Certeau, The Practice of Everyday Life). The commons was also a living force, at a time when Christianity permeated every aspect of life and when the felt experience of Paganism continued in local traditions and stories, often incorporated into Church rituals and holy days. Within the commons, there was a shared world where everyone was accountable to everyone else. Even a chicken or a wagon could be brought to court, according to the English common law of deodands (Self, Other, & World).

The parish was an expression of the commons, embodying local community and identity reinforced by the annual beating of the bounds, a practice that goes back to ancient Rome, a faint memory of something once likely akin to the Aboriginal songlines in invoking a spiritual reality. It was within the parish that life revolved and the community was maintained, whether settling disputes or taking care of the sick, crippled, elderly, widowed, and orphaned. We can’t genuinely care about what we feel disconnected from. Community is fellowship, kinship, and neighborliness; it is intimate relationship and familiarity. This relates to why Germanic ‘freedom’ meant to be part of a free people and was etymologically related to ‘friendship’, as opposed to Latin ‘liberty’, which merely indicated one wasn’t enslaved while surrounded by those who were (Liberty, Freedom, and Fairness).

“It is the non-material aspects of life,” Vernon suggests, “that, more often than not, are crucial for finding meaning and purpose, particularly when life involves suffering.” He states that a crucial element is to re-imagine, and that makes me think of the living imagination, or what some call the imaginal, as described by William Blake, Henry Corbin, James Hillman, Patrick Harpur, and many others. And to re-imagine would mean to re-experience in a new light. He goes on to speak of the ancient Greek view of time. John Demos, in Circles and Lines, explains how cyclical time remained central to American experience late into the colonial era and, as the United States wasn’t fully urbanized until the 20th century, surely persisted in rural areas for much longer. Cyclical time was about a sense of recurrence and return, central to the astrological worldview that gave us the word ‘revolution’, that is, to revolve. The American Revolutionaries were hoping for a return, and the sense of the commons was still strong among them, even as it was disappearing quickly.

Instead of time as abundance, the modern world feels like time is always running out and closing in on us. We have no sense of openness to the world, as we’ve become insulated within egoic consciousness and hyper-individualism. As with beating the bounds of the parish, cyclical time contained the world within a familiar landscape, the larger world of weather patterns and seasons, of the sun, moon, and stars — the North Wind is a force and a being, shaping the world around us; the river that floods the valley is the bringer of life. The world was vitally and viscerally alive in a way few moderns have ever experienced. Our urban yards and our rural farms are ecological deserts. City lights and smog hide the heavens from our view. Let us share a longer excerpt from Vernon’s insightful piece:

“Spiritual commons are often manifest in and through the loveliness of the material world, so that matters as well. It’s another area, alongside education, where spiritual commons has practical implications. That was spotted early by John Ruskin.

“Consider his 1884 lecture, The Storm-Cloud of the Nineteenth Century, in which he noted that “one of the last pure sunsets I ever saw” was in 1876, almost a decade previously. The colours back then were “prismatic”, he said, the sun going into “gold and vermillion”. “The brightest pigments we have would look dim beside the truth,” he continued. He had attempted to reflect that glorious manifestation of the spiritual commons in paint.

“He also knew that his experience of its beauty was lost because the atmosphere was becoming polluted. As a keen observer of nature, he noted how dust and smoke muddied and thinned the sky’s brilliance. In short, it would be crucial to clean up the environment if the vivid, natural displays were to return. Of course. But the subtler point Ruskin draws our attention to is the one about motivation: he wanted the vivid, natural displays because he had an awareness of, and desire for, spiritual commons.”

That is reminiscent of an event from 1994. There was a major earthquake on the West Coast and Los Angeles had a blackout. The emergency services were swamped with calls, not from people needing help for injuries but out of panic over the strange lights they were seeing in the sky. It scared people, as if the lights were more threatening than the earthquake itself — actual signs from the heavens. Eventually, the authorities were able to figure out what was going on. Thousands of urbanites were seeing the full starry sky for the first time in their entire lives. That situation has worsened since then, as mass urbanization is pushed to further extremes and, even though smog has lessened, light pollution has not (Urban Weirdness). We are literally disconnected from the immensity of the world around us, forever enclosed within our own human constructions. Even our own humanity has lost its wildness (see Paul Shepard’s The Others: How Animals Made Us Human).

We can speak of the world as living, but to most of us that is an abstract thought or a scientific statement. Sure, the world is full of other species and ecosystems. That doesn’t capture the living reality itself, though, the sense of vibrant and pulsing energy, the sounds and voices of other beings (Radical Human Mind: From Animism to Bicameralism and Beyond) — this is what the neuroanatomist Jill Bolte Taylor, in her “My Stroke of Insight”, described as the “life-force power of the universe” (see Scott Preston’s Immanence of the Transcendent & The Premises of Our Existence), maybe related to what Carl Jung referred to as the “objective psyche”. One time while tripping on magic mushrooms, I saw-felt the world glistening; the fields shimmered in the wind and moonlight, and everything breathed a single breath in unison.

That animistic worldview once was common, as was the use of psychedelics, prior to their being outlawed and increasingly replaced by addictive substances, from nicotine to caffeine (The World that Inhabits Our Mind). And so the addictive mind has built up psychic scar tissue, the thick walls of the mind that safely and comfortably contain us (“Yes, tea banished the fairies.” & Diets and Systems). Instead of beating the bounds of a parish, we beat the bounds of our private egoic territory, our thoughts going round and round like creatures caught in a tidal pool that is drying up in the harsh sunlight — when will the tide come back in?

* * *

Here is some additional historical info. The feudal laws were to some extent carried over into North America. In early America, legally owning land didn’t necessarily mean much. Land was only effectively owned to the degree you used it, and that was originally determined by fencing. So, having a paper saying you owned thousands of acres didn’t necessarily mean anything if the land wasn’t being maintained for some purpose.

It was every citizen’s right to use any land (for fishing, hunting, gathering, camping, etc.) as long as it wasn’t fenced in — that was at a time when fencing was expensive and required constant repair. This law remained in place until after the Civil War. It turned out to be inconvenient to the whites who wanted to remain masters, as blacks could simply go anywhere and live off of the land. That was unacceptable, and so blacks needed to be put back in their place. That was the end of that law.

But there were other similar laws about land usage. Squatting rights go back far into history. Even to this day, if someone shows no evidence of using and maintaining a building, someone who squats there for a period of time can claim legal ownership of it. Some of my ancestors were squatters. My great grandfather was born in a house his family was squatting in. Another law still in place has to do with general land usage. If someone uses your land to graze their horses or as a walking path, some laws allow a legal claim to continued use of that land, unless the owner explicitly sent legal paperwork in advance asserting his ownership.

There was a dark side to this. Canada also inherited this legal tradition from feudalism. In one case, a family owned land that they enjoyed but didn’t explicitly use. It was simply beautiful woods. A company was able to dredge up an old law that allowed them to assert their right to use the land that the family wasn’t using. Their claim was based on minerals that were on the property. They won the case and tore up the woods for mining, despite having no ownership of the land. Those old feudal laws worked well in feudalism but not always so well in capitalism.

I’ll end on a positive note. There was a law that was particularly common in Southern states. It basically stated that an individual’s right to land was irrevocable. Once you legally owned land, no one could ever forcefully take it away from you. Even if you went into debt or didn’t pay your taxes, the land would be yours. The logic was that land meant survival. You could be utterly impoverished and yet access to land meant access to food, water, firewood, building materials, etc. The right to basic survival, sustenance, and subsistence could not be taken away from anyone (well, other than Native Americans, African-Americans, Mexican-Americans, etc; okay, not an entirely positive note to end on).

How American Democracy was Strangled in the Crib

March 1, 1781 – Ratification of the Articles of Confederation
This was the first Constitution of the United States, preceding our current constitution by eight years. Provisions included:
- Unicameral legislature, with only one house of Congress
- No system of national courts or executive branch
- One vote per state, irrespective of the size of the state
- Levying of taxes in the hands of the state governments
- Power to coin and borrow money
- Time limits on holding public office
- No standing army or navy
- No provision for national government interference in commerce and trade; each state could impose tariffs on trade

This last provision, decentralized decision-making on commerce and trade, was the pretext for a gathering to “amend” the Articles. Once gathered, the delegates scrapped the Articles entirely, replacing them with a new proposed constitution that was, in many respects, more top-down and favorable to commercial interests.

February 25, 1791 – Creation of the First Bank of the United States
The federal government issued a 20-year charter (very unusual at the time since most corporate charters, or licenses, were issued by states) to create the first national private bank. The bank was given permission to create money as debt. Its paper money was accepted for taxes. Eighty percent of its shares were privately owned, and 75% of those were foreign owned (mostly by the English and Dutch). Alexander Hamilton championed the first national private bank; Jefferson, Madison and others opposed it.

February 24, 1803 – U.S. Supreme Court establishes supreme authority of the U.S. Supreme Court
Marbury v. Madison (5 U.S. 137) established the concept of “judicial review.” The Supreme Court ruled that they were supreme, and Congress did not contest it. This gave them the power to make law. President Jefferson said: “The Constitution, on this hypothesis, is a mere thing of wax in the hands of the judiciary, which they may twist and shape into any form they please.”

A fine article explaining the problems of judicial review is “The Case Against Judicial Review: Building a strong basis for our legal system.”
https://www.poclad.org/BWA/2007/BWA_2007_FALL.html

From:
REAL Democracy History Calendar: February 24 – March 1

Why Is Average Body Temperature Lowering?

Researchers at Stanford University, analyzing data going back to the 1800s, found that average body temperature has decreased (Myroslava Protsiv et al, Decreasing human body temperature in the United States since the Industrial Revolution). Other data supports the present lower norm (J.S. Hausmann et al, Using Smartphone Crowdsourcing to Redefine Normal and Febrile Temperatures in Adults: Results from the Feverprints Study).

They considered that improved health, and so decreased inflammation, could be the cause, but it’s not clear that inflammation overall has decreased. The modern industrial diet of sugar and seed oils is highly inflammatory. Inflammation has been linked to the epidemic of diseases of civilization: obesity, diabetes, heart disease, arthritis, depression, schizophrenia, and much else. In some ways, inflammation is worse than it has ever been. That is why, as a society, we’ve become obsessed with anti-inflammatories, from aspirin to turmeric.

The authors of the paper, however, offer other data that contradicts their preferred hypothesis: “However, a small study of healthy volunteers from Pakistan—a country with a continued high incidence of tuberculosis and other chronic infections—confirms temperatures more closely approximating the values reported by Wunderlich”. Since these were healthy volunteers, they should not have had higher inflammation from infections, parasites, etc. So, why were their body temperatures higher than is seen among modern Westerners?

It also has been suggested that there are other potential contributing factors. Ambient temperatures are highly controlled and so the body has to do less work in maintaining an even body temperature. Also, people are less physically active than they once were. The more interesting explanation is that the microbiome has been altered, specifically reduced in the number and variety of heat-producing microbes (Nita Jain, A Microbial-Based Explanation for Cooling Human Body Temperatures).

I might see a clue in the Pakistan data. That population is presumably more likely to be following their traditional diet. If so, this would mean they have been less Westernized in their eating habits, which would translate as fewer refined starchy carbs and sugar, along with fewer seed oils high in omega-6 fatty acids. Their diets might in general be more restricted: fewer calories, smaller portions, less snacking, and longer periods between meals. Plus, as this would be an Islamic population, fasting is part of their religious tradition.

This might point to more time spent in and near ketosis. It might be noted that ketosis is also anti-inflammatory. So why the higher body temperature? Well, there is the microbiome issue. A population on a traditional diet, combined with less antibiotic usage, would likely still be supporting a larger microbiome. By the way, ketosis is one of the factors that supports a different kind of microbiome, related to its use as a treatment for epilepsy (Rachael Rettner, How the Keto Diet Helps Prevent Seizures: Gut Bacteria May Be Key). And ketosis raises the basal metabolic rate, which in turn raises temperature. Even though fasting lowers body temperature in the short term, if it were part of an overall ketogenic diet it would help promote higher body temperatures on average.

This is indicated by the research on other animals: “An increased resistance to cold assessed by the rate of fall in body temperature in the animals as well as human beings on a high-fat diet has been reported by LEBLANC (1957) and MITCHELL et al. (1946), respectively. LEBLANC (1957) suggested that the large amount of fat accumulated in animals fed a high-fat diet could not explain, either as a source of energy reserves or as an insulator, the superiority of high-fat diet in a cold environment, postulating some changes induced by a high-fat diet in the organism that permits higher sustained rate of heat production in the cold.” (Akihiro Kuroshima, Effects of Cold Adaptation and High-Fat Diet on Cold Resistance and Metabolic Responses to Acute Exposure in Rats).

And: “Rats on a corn oil diet convert less T4 to active T3 than rats on a lard diet. Rats on a safflower oil diet have a more greatly reduced metabolic response to T3 than rats on a beef fat diet. Rats on a high-PUFA diet have brown fat that’s less responsive to thyroid hormone. Remember, brown fat is the type that generates heat to keep us warm. Rats on a long-term diet high in soybean oil have terrible body temperature regulation, which thyroid function in large part controls” (Mark Sisson, Is Keto Bad For Your Thyroid?). A 1946 study found that a high-fat diet had less of a drop in body temperature in response to cold (H.H. Mitchell, The tolerance of man to cold as affected by dietary modification; carbohydrate versus fat and the effect of the frequency of meals).

Specifically about ketosis: in mice it increases energy expenditure and causes brown fat to produce more heat (Shireesh Srivastava, A Ketogenic Diet Increases Brown Adipose Tissue Mitochondrial Proteins and UCP1 Levels in Mice). Other studies confirm this and some show an increase of brown fat. Brown fat is what keeps us warm. Babies have a lot of it and, in the past, it was thought adults lost it, but it turns out that we maintain brown fat throughout our lives. It’s just that diets have different effects on it.

The insulin researcher Benjamin Bikman points out the relationship between insulin and ketones — when one is high the other is low. Insulin tells the body to slow down metabolism and store energy, that is to say, to produce fat and to shut down the activity of brown fat. Ketones do the opposite, not only activating brown fat but causing white fat to act more like brown fat. This is what causes the metabolic advantage of the keto diet in losing excess body fat and maintaining lower weight, as it increases energy expenditure by an estimated 200-300 calories per day (enough, in principle, to account for roughly 20-30 lbs of body fat a year). By the way, cold exposure and exercise also activate brown fat, which goes back to general lifestyle factors that go hand in hand with diet.
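Daily-calorie claims like this are easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming the commonly cited (and admittedly crude) rule of thumb of roughly 3,500 kcal per pound of body fat:

```python
# Back-of-envelope conversion: a sustained daily calorie-burn difference
# expressed as pounds of body fat per year. The 3,500 kcal/lb figure is a
# common approximation, not an exact physiological constant.
KCAL_PER_LB_FAT = 3500

def lbs_fat_per_year(kcal_per_day: float) -> float:
    """Pounds of body fat equivalent to a sustained daily energy difference."""
    return kcal_per_day * 365 / KCAL_PER_LB_FAT

for kcal in (100, 200, 300):
    print(f"{kcal} kcal/day ~ {lbs_fat_per_year(kcal):.0f} lbs of body fat per year")
```

By this crude arithmetic, a sustained 200-300 kcal/day difference works out to roughly 20-30 lbs of body fat per year, while the often-quoted figure of 10 lbs per year corresponds to a more modest ~100 kcal/day.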

Some people attest to feeling warmer in winter while in ketosis (Ketogenic Forums, Ketosis, IF, brown fat, and being warmer in cool weather), although others claim to not handle cold well, which might simply be an issue of how quickly people become fully fat-adapted. A lifetime of a high-carb diet changes the body. But barring permanently damaged biological functioning, the body should eventually be able to shift into more effective ketosis and hence thermogenesis.

In humans, there is an evolutionary explanation for this. And humans indeed are unique in being able to more easily enter and remain in ketosis. But think about when ketosis most often happened in the past and you’ll understand why it seems inefficient in wasting energy as heat, which is a slight metabolic advantage if you’re trying to lose weight. For most of human existence, carb restriction was forced upon the species during the coldest season, when starchy plants don’t grow. That is key.

It was an advantage not only to be able to survive off of one’s own body fat but to simultaneously create extra heat, especially during enforced fasting when food supplies were low, as fasting would tend to drop body temperature — an argument made by the insulin researcher Benjamin Bikman (see 9/9/17 interview with Mike Mutzel on High Intensity Health at 20:34 mark, Insulin, Brown Fat & Ketones w/ Benjamin Bikman, PhD; & see Human Adaptability and Health). Ketosis is a compensatory function for survival during the harshest time of the year, winter.

Maybe modern Westerners have lower body temperature for the same reason they are plagued with diseases of civilization, specifically those having to do with metabolic syndrome and insulin resistance. If we didn’t take so many drugs and other substances to manage inflammation, maybe our body temperature would be higher. But it’s possible the lack of ketosis by itself might be enough to keep it significantly reduced. And if not ketosis, something else about diet and metabolism is likely involved.

* * *

What is the relevance? Does it matter that average body temperature has changed? As I pointed out above, it could indicate that the entirety of physiological functioning has been altered. A major component has to do with metabolism, which relates to diet, gut health, and the microbiome. About the latter, Nita Jain wrote that,

“A 2010 report observed that 36.7° C may be the ideal temperature to ward off fungal infection whilst maintaining metabolism. In other words, high body temperatures represent optimization in the tradeoff between metabolic expenditure and resistance to infectious diseases. Our reduced exposure to potentially pathogenic fungi in developed countries may therefore be another possible factor driving changes in human physiology” (A Microbial-Based Explanation for Cooling Human Body Temperatures).

That would be significant indeed. And it would be far from limited to fungal infections: “In general, a ketogenic diet is useful for treating bacterial and viral infections, because bacteria and viruses don’t have mitochondria, so a ketogenic diet starves them of their favorite fuel source, glucose” (Paleo Leap, Infections and Chronic Disorders). Ketosis, in being anti-inflammatory, has been used to treat gout and autoimmune disorders, along with mood disorders that often include brain inflammation.

The inflammatory pathway, of course, is closely linked to the immune system. Reducing inflammation is part of complex processes in the body. Opposite of the keto diet, a high-carb diet produces inflammatory markers that suppress the immune system and so compromise prevention of and healing from infections. Indeed, obese and diabetic patients are hospitalized more often for influenza infections and suffer worse symptoms.

But it’s not merely the reduction of inflammation. As an energy source, ketones are preferred over glucose by the immune cells that fight infections, although maybe some bacteria can use ketones. It’s a similar pattern with cancer, in which ketosis can help prevent some cancers from growing in the early stages, but the danger is that, once established, particular kinds of cancers can adapt to using ketones. So, it isn’t as simple as ketosis curing everything, even if it is an overall effective preventative measure in maintaining immunological and general health.

What interests us most here are infections. Let’s look further at the flu. One study gave mice an influenza infection (Emily L. Goldberg et al, Ketogenic diet activates protective γδ T cell responses against influenza virus infection; Abby Olena, Keto Diet Protects Mice from Flu). The mice were on different diets. All of those on standard chow died, but half survived on the keto diet. To determine causes, other mice were put on a diet high in both fat and carbs while others were given exogenous ketones, but these mice also died. It wasn’t only the fat or the ketones in the keto diet. Something about fat metabolism itself seems to have been key, not the fat alone and not the ketones alone but how fat is turned into ketones during ketosis, although some speculate that protein restriction might also have been important.

The researchers were able to pinpoint the mechanisms for fighting off the infection. Turning fat into ketones allows the gamma delta subset of T cells in the lungs to be activated in response to influenza. This was unexpected, as these cells hadn’t been a focus in previous research. These T cells increase mucus production in the epithelial cells of the lungs, creating a protective barrier that traps the virus and allows it to be coughed up. At the same time, the keto diet blocks the production of inflammasomes, multiunit protein complexes activated by the immune system. This reduces the inflammation that can harm the lungs, and it relates back to the T cell stimulation.

From an anecdotal perspective, here is an interesting account: “I have been undergoing a metabolic reset to begin the year. I have been low carb/keto on and off for the last 4.5 years and hop in and out of ketosis for short periods of time when it benefits me or when my body is telling me I need to. Right now, I decided to spend the first 6 weeks of 2018 in ketosis. I check my numbers every morning and have consistently been between 1.2 and 2.2 mmol/L. I contracted a virus two days ago (it was not influenza but I caught something) and my ketone levels shot through the roof. Yesterday morning I was at 5.2 (first morning of being sick) and this morning I was at 5.8 (although now I am in a fasted state as I have decided to fast through this virus.)” (bluesy2, Keto Levels with Virus/Flu).

Maybe that is a normal response for someone in ketosis. The mouse study suggests there is something about the process of producing ketones itself that is involved in the T cell stimulation. The ketones also might have benefits for other reasons, but the process of fat oxidation, or something related to it, might be the actual trigger. In this case, the ketone levels are an indicator of what is going on, that the immune system is fully engaged. The important point, though, is that this only happens in a ketogenic state, and it has much to do with basal metabolic rate and body temperature regulation.

* * *

98.6 Degrees Fahrenheit Isn’t the Average Anymore
by Jo Craven McGinty

Nearly 150 years ago, a German physician analyzed a million temperatures from 25,000 patients and concluded that normal human-body temperature is 98.6 degrees Fahrenheit.

That standard has been published in numerous medical texts and helped generations of parents judge the gravity of a child’s illness.

But at least two dozen modern studies have concluded the number is too high.

The findings have prompted speculation that the pioneering analysis published in 1869 by Carl Reinhold August Wunderlich was flawed.

Or was it?

In a new study, researchers from Stanford University argue that Wunderlich’s number was correct at the time but is no longer accurate because the human body has changed.

Today, they say, the average normal human-body temperature is closer to 97.5 degrees Fahrenheit.

“That would be a huge drop for a population,” said Philip Mackowiak, emeritus professor of medicine at the University of Maryland School of Medicine and editor of the book “Fever: Basic Mechanisms and Management.”

Body temperature is a crude proxy for metabolic rate, and if it has fallen, it could offer a clue about other physiological changes that have occurred over time.

“People are taller, fatter and live longer, and we don’t really understand why all those things have happened,” said Julie Parsonnet, who specializes in infectious diseases at Stanford and is senior author of the paper. “Temperature is linked to all those things. The question is which is driving the others.” […]

Overall, temperatures of the Civil War veterans were higher than measurements taken in the 1970s, and, in turn, those measurements were higher than those collected in the 2000s.

“Two things impressed me,” Dr. Parsonnet said. “The magnitude of the change and that temperature has continued to decline at the same rate.” […]

“Wunderlich did a brilliant job,” Dr. Parsonnet said, “but people who walked into his office had tuberculosis, they had dysentery, they had bone infections that had festered their entire lives, they were exposed to infectious diseases we’ve never seen.”

For his study, he did try to measure the temperatures of healthy people, she said, but even so, life expectancy at the time was 38 years, and chronic infections such as gum disease and syphilis afflicted large portions of the population. Dr. Parsonnet suspects inflammation caused by those and other persistent maladies explains the temperature documented by Wunderlich and that a population-level change in inflammation is the most plausible explanation for a decrease in temperature.

Decreasing human body temperature in the United States since the Industrial Revolution
by Myroslava Protsiv, Catherine Ley, Joanna Lankester, Trevor Hastie, Julie Parsonnet

Annotations

Jean-Francois Toussaint
Feb 15

This substantive and continuing shift in body temperature—a marker for metabolic rate—provides a framework for understanding changes in human health and longevity over 157 years.

Very interesting paper. Well done. However, a hypothesis still remains to be tested. The decline of the infectious burden well corresponds to the decrease of the body temperatures between the XIXth and XXth century cohorts (UAVCW vs NHANES), but it does not explain the further and much more important reduction between the XXth and XXIth century studies (NHANES vs STRIDE); see Figure 1 (distributions gap) and Figure 1 / Supp 1 (curve gap), where the impact seems to be twice as large between 1971 and 2007 than between 1860 and 1971.

Besides regulating the ambient room temperature (through winter heating in the early XXth century and summer air conditioning in the late XXth and early XXIth century), another hypothesis was not discussed here, i.e. the significant decline in daily physical activity, one of the primary drivers of physiological heat production.

Regular physical activity alters core temperature even hours after exercising; 5h of moderate intensity exercise (60% VO2max) also increases the resting heart rate and metabolic rate during the following hours and night, with a sympathetic nervous system activated until the next morning (Mischler, 2003), and higher body temperatures are measured among the most active individuals (Aoyagi, 2018).

As in most developed countries, the North American people – who worked hard in agriculture or industry during the XIXth century – lost their active daily habits. We are now spending hours motionless in front of our screens, and most of our adolescents follow this unsettling trend (Twenge, 2019); such an effect on temperature and energy regulation should also be considered, as it may have an important impact on the potential progress of their life expectancy and life span.

Jean-François Toussaint Université de Paris, Head IRMES

Mischler I, et al. Prolonged Daytime Exercise Repeated Over 4 Days Increases Sleeping Heart Rate and Metabolic Rate. Can J Appl Physiol. Aug 2003; 28 (4): 616-29 DOI: 10.1139/h03-047

Aoyagi Y, et al. Objectively measured habitual physical activity and sleep-related phenomena in 1645 people aged 1–91 years: The Nakanojo Community Study. Prev Med Rep. 2018; 11: 180-6 DOI: 10.1016/j.pmedr.2018.06.013

Twenge JM, et al. Trends in U.S. Adolescents’ media use, 1976–2016: The rise of digital media, the decline of TV, and the (near) demise of print. Psychol Pop Media Cult, 2019; 8(4): 329-45. DOI: 10.1037/ppm0000203

Nita Jain

Although there are many factors that influence resting metabolic rate, a change in population-level inflammation seems the most plausible explanation for the observed decrease in temperature over time.

Reduced body temperature measurements may also be the result of loss of microbial diversity and rampant antibiotic use in the Western world. Indeed, the authors mention that a small study of healthy volunteers from Pakistan reported higher mean body temperatures than those encountered in developed countries where exposure to antimicrobial products is greater.

Rosenberg et al. reported that heat provision is an under-appreciated contribution of microbiota to hosts. Previous reports have estimated bacterial specific rates of heat production at around 168 mW/gram. From these findings, we can extrapolate that an estimated 70% of human body heat production in a resting state is the result of gut bacterial metabolism.

Consistent with this idea are reports that antibiotic treatment of rabbits and rodents lowers body temperature. Germ-free mice and piglets similarly displayed decreased body temperatures compared to conventionally raised animals and did not produce a fever in response to an infectious stimulus.

Although heat production by symbiotic microbes appears to be a general phenomenon observed in both plants and animals, its significance in humans has hardly been studied. Nonetheless, the concomitant loss of diversity and heat contribution of the gut microbiota may have far-reaching implications for host metabolic health.

A Microbial-Based Explanation for Cooling Human Body Temperatures
by Nita Jain

I would like to propose that our reduced body temperature measurements may be the result of loss of microbial diversity and rampant antibiotic use in the Western world. Indeed, a small study of healthy volunteers from Pakistan reported higher mean body temperatures than those encountered in developed countries where exposure to antimicrobial products is greater.

Heat provision is an under-appreciated contribution of microbiota to hosts. Microbes produce heat as a byproduct when breaking down dietary substrates and creating cell materials. Previous reports have estimated bacterial specific rates of heat production at around 168 mW/gram. From these findings, we can extrapolate that an estimated 70% of human body heat production in a resting state is the result of gut bacterial metabolism.

Consistent with this idea are reports that antibiotic treatment of rabbits and rodents lowers body temperature. Germ-free mice and piglets similarly displayed decreased body temperatures compared to conventionally raised animals and did not produce a fever in response to an infectious stimulus. The relationship also appears to be bi-directional, as host tolerance to cold has been shown to drive changes in the gut microbiomes of blue tilapia.

Heat production in goats was found to decrease by about 50% after emptying the rumen to values similar to what would be expected during a fasting state. These observations suggest that during fasting, microbial fermentation is responsible for half of the animal’s heat production while host metabolism accounts for the other half. The warming effect of microbes has also been reported in plants. Yeast populations residing in floral nectar release heat when breaking down sugar, increasing nectar temperature and modifying the internal flower microenvironment.

What is Moderate-Carb in a High-Carb Society?

If we were eating what the government actually funded in agricultural supports, we’d be having a giant corn fritter, deep fried in soybean oil. And it’s like, that’s not exactly what we should be eating.
~ Mark Hyman

A couple of years back (2018), researchers published an analysis of long-term data on intake of carbohydrates, plant foods, and animal foods (Sara B. Seidelmann et al., Dietary carbohydrate intake and mortality: a prospective cohort study and meta-analysis). The data, however, turn out to be more complicated than how they were reported in the mainstream news, and over-simplified in other ways.

This was an epidemiological study of 15,000 people done with notoriously unreliable self-reports, called Food Frequency Questionnaires, based on the subjects’ memory of years of eating habits. The basic conclusion was that a diet moderate in carbs is the healthiest. That reminds me of the “controlled carbs” that used to be advocated to ‘manage’ diabetes and that, in fact, worsened diabetes over time (American Diabetes Association Changes Its Tune) — what was being managed was a slow decline leading to early death. Why are the ruling elite and its defenders, whether talking about diet or politics, always trying to portray extreme positions as ‘moderate’?

Let’s dig into the study. Although the subjects were seen six times over a 25-year period, the questionnaire was given only twice: at the first visit in the late 1980s and at the third visit in the mid-1990s — two brief and inaccurate snapshots, with the apparent assumption that dietary habits didn’t change from the mid-1990s to 2017. As was asked of the subjects: do you recall your exact dietary breakdown for the past year? In my personal observations, many people can’t recall what they ate last week or sometimes even what they had the day before — the human memory is short and faulty (the reason nutritionists will have patients keep daily food diaries).

There was definitely something off about the data. When the claimed total caloric intake is added up, it would’ve meant starvation rations for many of the subjects, which is to say they were severely underestimating parts of their diet, most likely the unhealthiest parts (snacks, fast food, etc.). Shockingly, they didn’t even assess, or rather didn’t include, carbohydrate intake for all those periods; instead, they later extrapolated from the earlier data, with no explanation for this apparent data manipulation.

To further problematize the results, those who developed metabolic health conditions (diabetes, stroke, heart disease) during the study period, likely caused by carbohydrate consumption, were excluded from the study, as were those who died — so it was to be expected, and one might surmise intentionally designed, that no link would be found between dietary carbs and health outcomes. That is to say, the study was next to worthless (John Ioannidis, The Challenge of Reforming Nutritional Epidemiologic Research). Over 80% of the hypotheses of nutritional epidemiology are later proved wrong in clinical trials (S. Stanley Young & Alan Karr, Deming, data and observational studies).

Besides, the researchers defined low-carb as anything below 40% and very high-carb as anything above 70%, though the study itself was mainly looking at percentages in between. This study wasn’t about the keto diet (5% carbs of total energy intake, typically 20-50 grams per day) or even generally low-carb diets (below 25%) and moderate-carb diets (25-33%, or maybe slightly higher). Instead, the researchers compared diets that were varying degrees of high-carb (37-61%, about 144 grams and higher). It’s true that one might argue that, compared to the general population, a ‘moderate’-carb diet could be anything below the average high-carb levels of the standard American diet (50-60%), the high levels the researchers considered ‘moderate’ as in ‘normal’. But by this logic, the higher the average carb intake goes, the higher ‘moderate’ also becomes, which is not a very meaningful definition for health purposes.
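Since carbohydrate supplies roughly 4 kcal per gram, the percent-of-energy figures above convert to gram counts directly. A quick sketch (the daily calorie totals are illustrative assumptions; ~1,550 kcal is chosen only because it roughly reproduces the ~144 g figure mentioned above):

```python
# Convert percent-of-energy carbohydrate targets into grams per day.
KCAL_PER_GRAM_CARB = 4  # standard Atwater factor for carbohydrate

def carb_grams(pct_energy: float, daily_kcal: float) -> float:
    """Grams of carbohydrate supplying pct_energy of daily_kcal."""
    return pct_energy * daily_kcal / KCAL_PER_GRAM_CARB

# 5% of a 2,000 kcal diet: the ketogenic range quoted above (~25 g/day).
print(round(carb_grams(0.05, 2000)))  # 25
# 37% of ~1,550 kcal (near the study's reported average intake): ~143 g/day.
print(round(carb_grams(0.37, 1550)))  # 143
```

This also makes plain why the study’s reported ~1,500 kcal/day average matters: the gram counts behind every percentage hinge on that total.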

Based on bad data and confounded factors for this high-carb population, the researchers speculated that diets below 37% carbs would show even worse health outcomes, but they didn’t actually have any data about low-carb diets. To put this in perspective, traditional hunter-gatherer diets tend to be closer to the ketogenic level of carb intake, with, on average, 20% at the lower range and 40% at the highest extreme, and that becomes particularly ketogenic when combined with a feast-and-fast pattern. Some hunter-gatherers, from the Inuit to the Masai, go long periods with few if any carbs, well within ketosis, and they don’t show signs of atherosclerosis, diabetes, etc.

The study simply looked at correlations without controlling for confounders: “The low carb group at the beginning had more smokers (33% vs 22%), more former smokers (35% vs 29%), more diabetics (415 vs 316), twice the native Americans, fewer habitual exercisers (474 vs 614)” (Richard Morris, Facebook). And alcohol intake, one of the single most important factors for health and lifespan, was not adjusted for at all. Taken together, that is what is referred to as the unhealthy user bias, while the mid-range group in this study was affected by the healthy user bias. Was this a study of diet, or a study of lifestyle and demographic populations?

On top of that, no data was collected on specific eating patterns in terms of portion sizes, caloric intake, regularity of meals, and fasting. Also, the details of the types of foods eaten weren’t entirely determined either, such as whole vs processed, organic vs non-organic, pasture-raised vs factory-farmed — and junk foods like pizza and energy bars weren’t included at all in the questionnaire, while whole categories of foods were conflated, with meat being lumped together with cakes and baked goods, as separate from fruits and vegetables. A grass-finished steak or wild-caught salmon with greens from your garden was treated as nutritionally the same as a fast food hamburger and fries.

Some other things should be clarified. This study wasn’t original research but data mining of older data sets from the research of others. Also, keep in mind that it was published in the Lancet Public Health, not in the Lancet journal itself. The authors and funders paid $5,000 for it to be published there, and it was never peer-reviewed. Another point is that the authors of the paper speak of ‘substitutions’: “…mortality increased when carbohydrates were exchanged for animal-derived fat or protein and mortality decreased when the substitutions were plant-based.” This is simply false. No subjects in this study replaced any foods with others. This is an imagined scenario, a hypothesis that wasn’t tested. By the way, don’t these scientists know that carbohydrates come from plants? I thought that was basic scientific knowledge.

To posit that too few carbs is dangerous, the authors suggest that “Long-term effects of a low carbohydrate diet with typically low plant and increased animal protein and fat consumption have been hypothesised to stimulate inflammatory pathways, biological ageing, and oxidative stress.” This is outright bizarre. We don’t need to speculate. It has already been shown, in much research, that sugar, a carbohydrate, is inflammatory. What happens when sugar and other carbs are reduced far enough? The result is ketosis. And what is the effect of ketosis? It is an anti-inflammatory state, not to mention that it promotes healing through increased autophagy. How do these scientists not know basic science in the field they are supposedly experts in? Or were they purposefully cherry-picking what fit their preconceived conclusion?

Here is the funny part. Robb Wolf points out (see video below) that in the same issue of the same journal, on the same publishing date, there was a second article that gives a very different perspective (Andrew Mente & Salim Yusuf, Evolving evidence about diet and health). The original study concluded that a low-carb diet based on meat and animal fats particularly lowered lifespan, which probably simply demonstrated the unhealthy user effect (these people were heavier, smoked more, etc.), but this other article looked at other data and came to very different conclusions:

“More recently, studies using standardised questionnaires, careful documentation of outcomes with common definitions, and contemporary statistical approaches to minimise confounding have generated a substantial body of evidence that challenges the conventional thinking that fats are harmful. Also, some populations (such as the US population) changed their diets from one relatively high in fats to one with increased carbohydrate intake. This change paralleled the increased incidence of obesity and diabetes. So the focus of nutrition research has recently shifted to the potential harms of carbohydrates. Indeed, higher carbohydrate intake can have more adverse effects on key atherogenic lipoproteins (eg, increase the apolipoprotein B-to-apolipoprotein A1 ratio) than can any natural fats. Additionally, in short-term trials, extreme carbohydrate restriction led to greater short-term weight loss and lower glucose concentrations compared with diets with higher amounts of carbohydrate. Robust data from observational studies support a harmful effect of refined, high glycaemic load carbohydrates on mortality.”

Then, in direct response to the other study, the authors warned that “the findings of the meta-analysis should be interpreted with caution, given that so-called group thinking can lead to biases in what is published from observational studies, and the use of analytical approaches to produce findings that fit in with current thinking.” So which Lancet article should we believe? Why did the media obsess over the one while ignoring the other?

And what about the peer-reviewed PURE study that was published the previous year (2017) in the Lancet journal itself? The PURE study was much larger and better designed. Although also observational and correlative, it was the best study of its kind ever done. The researchers found that carbohydrates were linked to a shorter lifespan and saturated fat to a longer lifespan, and yet it didn’t get the same kind of mainstream media attention. I wonder why.

Given all of these flaws, the study could tell us nothing about low-carb diets, even if any low-carb diets had been included in it (and none were). Yet the mainstream media and health experts heralded it as proof that a low-carb diet was dangerous and a moderate-carb diet was the best. Is this willful ignorance or intentional deception? The flaws in the study were so obvious, but it confirmed the biases of conventional dietary dogma and so was promoted without question.

On the positive side, the more often this kind of bullshit gets put before the public and torn apart as deceptive rhetoric, the more aware the public becomes of what is actually being debated. But sadly, this will give nutrition studies an even worse reputation than they already have. And it could discredit science in the eyes of many and bleed over into a general mistrust of scientific experts, authority figures, and public intellectuals (e.g., helping to promote a cynical attitude of climate change denialism). This is why it’s so important that we get the science right and not use pseudo-science as an ideological platform.

* * *

Will a Low-Carb Diet Shorten Your Life?
by Chris Kresser

I hope you’ll recognize many of the shortcomings of the study, because you’ve seen them before:

  • Using observational data to draw conclusions about causality
  • Relying on inaccurate food frequency questionnaires (FFQs)
  • Failing to adjust for confounding factors
  • Focusing exclusively on diet quantity and ignoring quality
  • Meta-analyzing data from multiple sources

Unfortunately, this study has already been widely misinterpreted by the mainstream media, and that will continue because:

  1. Most media outlets don’t have science journalists on staff anymore
  2. Even so-called “science journalists” today seem to lack basic scientific literacy

In light of the Aug 16th, 2018 Lancet study on carbohydrate intake and mortality, where do you see the food and diet industry heading? (Quora)
Answered by Chris Notal

A study where the conclusion was decided before the data.

They mentioned multiple problems in their analysis, but then ignored this in their introduction and conclusion.

The different cohorts: the cohort with the lowest consumption of carbs also had more smokers, more fat people, more males; they exercised less and were more likely to be diabetic; and each of these categories, independently of the others, is more likely to result in an earlier death. Also, recognize that for the past several decades we have been told that if you want to be healthy, you eat high carb and low fat. So even if that was false, you have people with generally healthier habits, period, who will live longer than those who do their own thing and rebel against the healthy-eating knowledge of the time. For example, suppose low carb was actually found to be healthier than high carb: the difference wouldn’t be sufficient to offset the healthy living habits of those who had been consuming high carb.
Also, look at the age groups. The starting ages were 46–64, and the study covered the next 30 years, which means they were studying how many people live into their 90s. Who’s more likely to live into their 90s, a smoker or non-smoker? Someone who is overweight or not? Males or females? Those who exercise or those who don’t? The problem is that each variable they used in the study, along with high carb, on its own supports living longer than its opposite.

Carbs, Good for You? Fat Chance!
By Nina Teicholz

A widely reported study last month purported to show that carbohydrates are essential to longevity and that low-carb diets are “linked to early death,” as a USA Today headline put it. The study, published in the Lancet Public Health journal, is the nutrition elite’s response to the challenge coming from a fast-growing body of evidence demonstrating the health benefits of low-carb eating…

The Lancet authors, in recommending a “moderate” diet of 50% to 60% carbohydrates, essentially endorse the government’s nutrition guidelines. Because this diet has been promoted by the U.S. government for nearly 40 years, it has been tested rigorously in NIH-funded clinical trials involving more than 50,000 people. The results of those trials show clearly that a diet of “moderate” carbohydrate consumption neither fights disease nor reduces mortality.

Deflating Another Dietary Dogma
By Dan Murphy

Just the linking of “carbohydrate intake” and “mortality” tells you all you need to know about the authors’ conclusions, and Teicholz pulls no punches in challenging their findings, calling them “the nutrition elite’s response to the challenge coming from a fast-growing body of evidence demonstrating the health benefits of low-carb eating.”

By way of background, Teicholz noted that for decades USDA’s Dietary Guidelines for Americans have directed people to increase their consumption of carbohydrates and avoid eating fats. “Despite following this advice for nearly four decades, Americans are sicker and fatter than ever,” she wrote. “Such a record of failure should have discredited the nutrition establishment.”

Amen, sister.

Teicholz went on to explain that even though the study’s authors relied on data from the Atherosclerosis Risk in Communities (ARIC) project, which since 1987 has observed 15,000 middle-aged people in four U.S. communities, their apparently “robust dataset” is something of an illusion.

Why? Because the ARIC relied on suspect food questionnaires. Specifically, the ARIC researchers used a form listing only 66 food items. That might seem like a lot, but such questionnaires typically include as many as 200 items to ensure that respondents’ recalls are accurate.

“Popular foods such as pizza and energy bars were left out [of the ARIC form],” Teicholz wrote, “with undercounting of calories the inevitable result. ARIC calculated that participants ate only 1,500 calories a day — starvation rations for most.”

Low carbs and mortality
by John Schoonbee

An article on carbohydrate intake and mortality appeared in The Lancet Public Health last week. It is titled “Dietary carbohydrate intake and mortality: a prospective cohort study and meta-analysis”. In the summary of the article, the word “association” occurs 6 times. The words “cause”, “causes” or “causal” are not used at all (except as part of “all-cause mortality”).

Yet the headlines in various news outlets are as follows:

BBC: “Low-carb diets could shorten life, study suggests”

The Guardian: “Both low- and high-carb diets can raise risk of early death, study finds”

New Scientist: “Eating a low-carb diet may shorten your life – unless you go vegan too”

All three imply active causality. Time Magazine is more circumspect and perhaps implies more of the association noted in the article: “Eating This Many Carbs Is Linked to a Longer Life”. These headline-grabbing tactics are part of what makes nutritional science so frustratingly hard. A headline could perhaps have read: “An association with mortality has been found with extreme intakes of carbohydrates, but no causality has been shown.”

To better understand what an association means in this context, it is perhaps good to use two examples: one a bit silly but proving the point, the other more nuanced and in fact a very good illustration of the difference between causality and association.

“Hospitals cause people to die.” Imagine someone saying that being in hospital shortens your life span, or increases your mortality. Imagine telling a child going for a tonsillectomy this! Of course, people who are admitted to hospital have a higher mortality risk than (well) people not admitted, because they are generally sicker. This is an association, but it’s not causal. Being in a hospital does not cause death but is associated with increased death (of course, iatrogenic deaths and multidrug-resistant hospital bugs alter this conversation).

A closer example, one which more closely parallels the Lancet Public Health article, is mortality among young smokers, men particularly. Young men who smoke have a higher mortality risk, mostly related to accidental death. Does this mean smoking causes increased deaths in young men? Clearly the answer is no. But smoking is certainly associated with an increased death rate in young men. Why? Because these young men who smoke have far higher risk-taking profiles and personalities, leading to more risk-taking behavior, including higher-risk driving styles. Using a product that has severe health warnings and awful pictures, with impunity, clearly indicates a certain attitude towards risk. They are dying more because of their risk-taking behavior, which is associated with a likelihood of smoking. But it’s not the smoking of cigarettes that is killing them when they are young. (When they are older, the cancer and heart disease are of course caused by the cigarette smoking, but at an earlier age, that is not the case.)
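The smoking example can be made concrete with a toy simulation, in which smoking has no causal effect on early death at all; every probability below is invented purely for illustration. A shared risk-taking trait drives both smoking and accidental death, yet smokers still show the higher death rate:

```python
import random

random.seed(42)  # deterministic toy simulation

def simulate(n: int = 100_000):
    """Return (death rate among smokers, death rate among non-smokers)."""
    deaths_smokers = smokers = deaths_nonsmokers = nonsmokers = 0
    for _ in range(n):
        risk_taker = random.random() < 0.3
        # The hidden trait drives both behaviors; smoking itself is
        # causally inert in this model -- death depends only on the trait.
        smokes = random.random() < (0.7 if risk_taker else 0.1)
        dies = random.random() < (0.02 if risk_taker else 0.005)
        if smokes:
            smokers += 1
            deaths_smokers += dies
        else:
            nonsmokers += 1
            deaths_nonsmokers += dies
    return deaths_smokers / smokers, deaths_nonsmokers / nonsmokers

smoker_rate, nonsmoker_rate = simulate()
print(smoker_rate > nonsmoker_rate)  # True: association without causation
```

An observational analysis of this population would find smoking "linked to early death" even though, by construction, it causes none.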

The guidelines for “healthy” eating since the late 1970s (which were not evidence-based) have stipulated a certain proportion of carbohydrate intake. Guidelines have typically also framed plants as healthier than animal sources of protein and fat. In this context, then, “healthy eating” is understood to mean consuming 50-55% carbohydrates and having fewer animal products and more plants, as general rules. It means those who choose to ignore these guidelines – and hence eat far higher amounts of animal fat and protein – would conceivably be those who are snubbing generally accepted “good health” advice (whether evidence-based or not) and who probably do not care as much about their health. Their lifestyles would not unreasonably, therefore, be expected to be unhealthier in general.

The Lancet Public Health article shows that, in the quintile of study participants with the least carbohydrate intake, the subjects significantly:

  • are more likely to be male
  • smoke more
  • exercise less
  • have higher BMIs, and
  • are more likely to be diabetic.

“Those eating the least carbohydrates smoked more, exercised less, were more overweight, and were more likely to be diabetic”

This seems to confirm an unhealthy user bias. Interestingly, the authors also note that “the animal-based low carbohydrate dietary score was associated with lower average intake of both fruit and vegetables”. Ignoring conventional wisdom about the healthiness of fruit and vegetables reaffirms the data and the conclusion that the low-carb-intake group lacks a certain healthy mindset.

Low, moderate or high carbohydrate?
by Zoe Harcombe

In 1977, Senator McGovern’s committee issued some dietary goals for Americans (Ref 1). The first goal was to “Increase carbohydrate consumption to account for 55 to 60 percent of the energy (caloric) intake.” This recommendation did not come from any evidence related to carbohydrate. It was the inevitable consequence of setting a dietary fat guideline of 30%, with protein being fairly constant at 15%.

Call me suspicious, but when a paper published 40 years later, in August 2018, concluded that the optimal intake of carbohydrate is 50-55%, I smelled a rat. The study, published in The Lancet Public Health (Ref 2), also directly contradicted the PURE study, which was published in The Lancet, in August 2017 (Ref 3). No wonder people are confused. […]

I wondered what kind of person would be consuming a low-carbohydrate diet in the late 1980s/early 1990s (when the two questionnaires in a 25-year study were done). The characteristics table in the paper tells us exactly what kind of person was in the lowest-carbohydrate group. They were far more likely to be male, diabetic, and current smokers, and far less likely to be in the highest exercise category. The ARIC study would adjust for these characteristics, but, as I often say, you can’t adjust for a whole type of person.

The groups have been subjectively chosen – not even the carb ranges are uniform. Most covered a 10% range (e.g. 40-50%), but the range chosen for the ‘optimal’ group (50-55%) was just 5% wide. This placed as many as 6,097 people in one group and as few as 315 in another.

This is the single biggest issue behind the headlines.

The subjective group divisions introduced what I call “the small comparator group issue.” This came up in the recent whole grains review (Ref 6). I’ll repeat the explanation here, and build on it, as it’s crucial to understanding this paper.

If 20 children go skiing – 2 of them with autism – and 2 children die in an avalanche – 1 with autism and 1 without – the death rate for the non-autistic children is 1 in 18 (5.5%) and the death rate for the autistic children is 1 in 2 (50%). Can you see how bad (or good?) you can make things look with a small comparator group?
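The skiing illustration is just denominator arithmetic, sketched here for completeness:

```python
# Small comparator groups turn single chance events into extreme rates.
def death_rate(deaths: int, group_size: int) -> float:
    return deaths / group_size

# 20 children, 2 with autism; one death in each group.
print(round(death_rate(1, 18) * 100, 1))  # 5.6  (non-autistic group of 18)
print(round(death_rate(1, 2) * 100, 1))   # 50.0 (autistic group of 2)
# One chance event makes the 2-person group look roughly nine times riskier.
```

The same mechanism operates on the study’s 315-person carb group versus its 6,097-person group: a handful of deaths in the small group swings its rate enormously.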

From subjective grouping to life expectancy headlines

For the media headlines “Low carb diets could shorten life, study suggests” (Ref 5), the researchers applied a statistical technique (called Kaplan-Meier estimates) to the ARIC data. This is entirely a statistical exercise – we don’t know when people will die. We just know how many have died so far.
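For readers unfamiliar with the technique, here is a minimal sketch of the product-limit (Kaplan-Meier) estimator; it illustrates only the general method, not the Lancet authors’ actual code or data:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times:  follow-up time for each subject
    events: 1 if the subject died at that time, 0 if censored
            (still alive when observation stopped)
    Returns (time, survival probability) pairs at each death time.
    """
    data = sorted(zip(times, events))
    survival, curve = 1.0, []
    for t in sorted({t for t, e in data if e}):        # distinct death times
        at_risk = sum(1 for tt, _ in data if tt >= t)  # subjects still observed
        deaths = sum(1 for tt, e in data if tt == t and e)
        survival *= 1 - deaths / at_risk
        curve.append((t, survival))
    return curve

# Tiny illustration: three subjects, one censored at time 2.
print(kaplan_meier([1, 2, 3], [1, 0, 1]))  # [(1, 0.666...), (3, 0.0)]
```

The key point for the critique above: the curve only summarizes deaths observed so far, and projecting “life expectancy” beyond the observed follow-up is an extrapolation, not a measurement.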

This exercise resulted in the claim “we estimated that a 50-year-old participant with intake of less than 30% of energy from carbohydrate would have a projected life expectancy of 29·1 years, compared with 33·1 years for a participant who consumed 50–55% of energy from carbohydrate…  Similarly, we estimated that a 50-year-old participant with high carbohydrate intake (>65% of energy from carbohydrate) would have a projected life expectancy of 32·0 years, compared with 33·1 years for a participant who consumed 50–55% of energy from carbohydrate.”

Do you see how both of these claims have used the small comparator group extremes (<30% and >65%) to make the reference group look better?

Back to the children skiing… If we were to use the data we have so far (50% of autistic children died and 5.5% of non-autistic children died) and to extrapolate this out to predict survival, life expectancy for the autistic children would look catastrophic. This is exactly what has happened with the small groups – <30% carb and >65% carb – in this study.

The data have been manipulated.

When Bad Science Can Harm You
by Angela Stanton

“Statistical Analysis

We did a time varying sensitivity analysis: between baseline ARIC Visit 1 and Visit 3, carbohydrate intake was calculated on the basis of responses from the baseline FFQ. From Visit 3 onwards, the cumulative average of carbohydrate intake was calculated on the basis of the mean of baseline and Visit 3 FFQ responses…”

WOW, hold on now. They collected carbohydrate information from the first and third visit and then they estimated the rest based on these two visits? Do they mean by this that

  1. The data for years 2,4,5, and 6 didn’t match what they wanted to see?
  2. The data for years 2,4,5, and 6 didn’t exist?

What kind of a trick might this hide? Not the kind of statistics I would like to consider as VALID STATISTICAL ANALYSIS.

“…to reduce potential confounding from changes in diet that could arise from the diagnosis of these diseases… The expected residual years of survival were estimated…”

Oh wow! So those who ate a lot of carbohydrates and developed diabetes, stroke, or heart disease during the study were excluded? This does not reduce confounding but actually increases it. That is because the very thing they are studying is how carbohydrates influence health and longevity, that is, diabetes, strokes, and heart disease. Excluding those who actually ended up with these conditions completely changes the outcome, bending it toward the points the authors are trying to make rather than reflecting reality.

Also, if they presume a change in diet for these participants, why not for the rest? Do you detect any problems here? I do! […]

There are 3 types of studies on nutrition:

  1. Bad
  2. Good
  3. Meaningless – meaning it repeats something that has already been repeated hundreds of times

This study falls into the Bad and Meaningless categories of nutrition studies. It is actually not really science – these researchers simply mined the same database that others already have and manipulated the data to fit their hypothesis.

I have commented all through the quotes from the study on what was shocking to read and see. What is even more amazing is the last two sentences, a quote, in the press release by Jennifer Cockerell, Press Association Health Correspondent:

Dr Ian Johnson, emeritus fellow at the Quadram Institute Bioscience in Norwich, said: ‘The national dietary guidelines for the UK, which are based on the findings of the Scientific Advisory Committee on Nutrition, recommend that carbohydrates should account for 50% of total dietary energy intake. In fact, this figure is close to the average carbohydrate consumption by the UK population observed in dietary surveys. It is gratifying to see from the new study that this level of carbohydrate intake seems to be optimal for longevity.‘”

It is not gratifying but horrible to see that the UK, one of the most diseased countries on the planet today, plagued by type 2 diabetes, obesity, and heart disease, should consider its current general carbohydrate consumption levels to be ideal and finds support in this study for what they are currently doing.

I suppose that if type 2 diabetes, obesity, and other metabolic diseases is what the country wants (and why wouldn’t it want that? Guess who profits from sick people?), then indeed, a 50% carbohydrate diet is ideal.

Latest Low-Carb Study: All Politics, No Science
by Georgia Ede

Where’s the Evidence?

Ludicrous Methods. The most important thing to understand is that this study was an “epidemiological” study, which should not be confused with a scientific experiment. This type of study does not test diets on people; instead, it generates guesses (hypotheses) about nutrition based on surveys called Food Frequency Questionnaires (FFQs). Below is an excerpt from the FFQ that was modified for use in this study. How well do you think you could answer questions like these?

Source: Provided by Lancet Public Health

How is anyone supposed to recall what was eaten as many as 12 months prior? Most people can’t remember what they ate three days ago. Note that “I don’t know” or “I can’t remember” or “I gave up dairy in August” are not options; you are forced to enter a specific value. Some questions even require that you do math to convert the number of servings of fruit you consumed seasonally into an annual average—absurd. These inaccurate guesses become the “data” that form the foundation of the entire study. Foods are not weighed, measured, or recorded in any way.

The entire FFQ contained only 66 questions, yet the typical modern diet contains thousands of individual ingredients. It would be nearly impossible to design a questionnaire capable of capturing that kind of complexity, and even more difficult to mathematically analyze the risks and benefits of each ingredient in any meaningful way. This methodology has been deemed fatally flawed by a number of respected scientists, including Stanford Professor John Ioannidis in a 2018 critique published in JAMA.

Missing Data. Between 1987 and 2017, researchers met with subjects enrolled in the study a total of six times, yet the FFQ was administered only twice: at the first visit in the late 1980s and at the third visit in the mid-1990s. Yes, you read that correctly. Did the researchers assume that everyone in the study continued eating exactly the same way from the mid-1990s to 2017? Popular new products and trends surely affected how some of them ate (Splenda, kale chips, or cupcakes, anyone?) and drank (think Frappuccinos, juice boxes, and smoothies). Why was no effort made to evaluate intake during the final 20-plus years of the study? Even if the FFQ method were a reliable means of gathering data, the suggestion that what individuals reported eating in the mid-1990s would be directly responsible for their deaths more than two decades later is hard to swallow.

There are other serious flaws to cover below, but the two already listed above are reasons enough to discredit this study. People can debate how to interpret the data until the low-carb cows come home, but I would argue that there is no real data in this study to begin with. The two sets of “data” are literally guesses about certain aspects of people’s diets gathered on only two occasions. Do these researchers expect us to believe they accurately represent participants’ eating patterns over the course of 30 years? This is such a preposterous proposition that one could argue not only that the data are inaccurate, but that they are likely wildly so.

Learn why we think you should QUESTION the results of the recent Lancet study which suggests that a low carb diet is bad for your health.
by Tony Hampton

1) Just last year, the Lancet published a more reliable study, with over 120,000 participants, entitled Associations of fats and carbohydrate intake with cardiovascular disease and mortality in 18 countries from five continents (PURE): a prospective cohort study. That study involved participants actually visiting a doctor’s office, where various biomarkers were tracked. Here is the link to the study: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)32252-3/abstract In that study, high carbohydrate intake was associated with higher risk of total mortality, whereas total fat and individual types of fat were related to lower total mortality. This is consistent with Dr. Hope’s and my recommendation to consume a lower-carb, high-fat diet.

2) Unlike the PURE study, the new Lancet study, containing only 15,428 participants and entitled Dietary carbohydrate intake and mortality: a prospective cohort study and meta-analysis, used food frequency questionnaires (FFQs) containing 66 questions asking participants what they had eaten previously. This is not as reliable as a randomized controlled trial, where participants are divided into separate groups to compare interventions and are fed specific diets. Using FFQs is simply not reliable. Can you remember what you ate last week, or over the last year? FFQs are also unreliable because participants tend to downplay their bad eating habits and describe what they think the researchers want to hear. FFQs are simply inherently inaccurate compared to randomized controlled trials, and in this study they allowed participants to self-declare themselves as eating low carb.

3) Of the groups participating in the new Lancet study, the low-carb group’s participants were the least healthy in the study, with higher rates of smoking (over 70% smoked or had previously smoked), diabetes, and excess weight, and with less exercise. This was not true of the other groups’ participants.

4) The so-called low-carb group, at less than 40% carbs, is not really on a low-carb diet. The participants in this group, consuming 35-40% carbs, are eating nearly 200 grams of carbs per day. Many of our patients on a low-carb diet consume less than 50 grams per day. So are the participants in this study really on a low-carb diet? We would suggest they are not.

5) Declaration of interests: When Dr. Hope and I learned to review research studies, the first question we were taught to ask was: who funded the study? If you click on the study link above and go to the bottom of the study, you will see under the declaration of interests section that there were some personal fees from two pharmaceutical companies (Novartis and Zogenix). Pharmaceutical companies provide needed resources to fund much-needed research. The big message here, however, is full disclosure. Just as I discussed at the beginning of this post, Dr. Hope and I are somewhat biased towards a low-carb, high-fat diet. We felt you needed to be aware of this as you read this post. You also need to know who funded the Lancet study we are discussing. You decide how to use that information.

6) The Lancet study is an observational study. Observational studies only show an association, not causation. Association is weak science and should always be questioned.

7) The moderate-carb diet in this study was associated with the lowest mortality. Its participants ate a diet of 50-55% carbs. This mirrors the current USDA diet, which has been recommended for the last 40-plus years. Over that timeline, Americans followed the USDA recommendations and reduced saturated fat while increasing carbs in their diets. This led to the onset of the obesity epidemic. Let us not go back to recommendations which have not worked.

8) Media sensationalism and bias. I know it’s frustrating to keep hearing mixed messages and dramatic headlines, but this is how the media gets your attention, so don’t be convinced by headlines. If you are still reading at this point in the post, you won’t be sidetracked by dramatic press releases.

STUDY: Do Low Carb Diets Increase Mortality?
by Siim Land

Here’s my debunking:

  • The “low carb group” wasn’t actually low carb, with a carb intake of 37% of total calories… it was really a moderate-carb group
  • “Low carb participants” were more likely to be sedentary, current smokers, and diabetic, and they exercised less
  • The study was conducted over the course of 25 years with follow-ups every few years
  • No real indication of what the people actually ate in what amounts and at what macronutrient ratios
  • The same applies to the increased mortality rates at high carb intake – no indication of food quality or carb content
  • Correlation does not equal causation
  • Animal proteins and fats contributed more to mortality than plant-based foods, which again doesn’t take into account food quality and quantities
  • It’s true that too much of anything is bad and you don’t want to eat too many carbs, too much fat, too much meat, or too much protein…

Is Keto Bad For You? Addressing Keto ClickBait
by Chelsea Malone

Where Did the Study Go Wrong?

  1. This was not a controlled study. Other factors that influence lifespan, like physical activity, stress levels, and smoking habits, were recorded but not adjusted for. The “low-carb” group also had the highest proportion of smokers and the lowest total physical activity.
  2. The data collection process left plenty of room for error. To collect the data on total carbohydrate consumption, participants were given a questionnaire (FFQ) on which they indicated how often they had eaten specific foods on a list over the past several years. Most individuals cannot accurately recall total food consumption over such a long period, so the responses were likely filled with errors.
  3. Consuming under 44% of total daily calories from carbohydrates was considered low carb. To put this into perspective, if the average person consumes 2,000 calories a day, that is 220 grams of carbohydrates. This is nowhere near low-carb or keto territory.
  4. This study is purely correlational, and correlation does not equal causation. Think of it like this: if a new study were published showing that individuals who wear purple socks were more likely to get into a car crash than individuals wearing red socks, would you assume that purple socks cause car accidents? You probably wouldn’t, and the same principle applies to this study.
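The percent-of-calories arithmetic in point 3 can be checked with a quick sketch. This assumes the standard approximation of 4 kcal per gram of carbohydrate; the calorie totals are illustrative, not values taken from the study itself:

```python
# Convert a share of daily calories into grams of carbohydrate.
# Assumes the standard ~4 kcal per gram of carbohydrate.
KCAL_PER_GRAM_CARB = 4

def carb_grams(daily_kcal, carb_fraction):
    """Grams of carbohydrate implied by a fraction of total calories."""
    return daily_kcal * carb_fraction / KCAL_PER_GRAM_CARB

# 44% of a 2,000-calorie diet, the study's "low carb" cutoff:
print(carb_grams(2000, 0.44))   # 220.0 grams

# The 35-40% range cited earlier, taking the midpoint:
print(carb_grams(2000, 0.375))  # 187.5 grams, i.e. "nearly 200"

# By contrast, a clinical low-carb target of under 50 g/day is only
# 50 * 4 / 2000 = 10% of a 2,000-calorie diet.
```

Either way the conversion comes out the same: the study’s “low carb” cutoff sits several times above what low-carb clinicians actually prescribe.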

#Fakenews Headlines – Low Carb Diets aren’t Dangerous!
by Belinda Fettke

Not only was the data cherry-picked from a Food Frequency Questionnaire that lumped meat in with the ‘cakes and baked goods’ category while dairy, fruit, and vegetables were all kept as separate entities (implying that meat is a discretionary and unhealthy food?), they also excluded anyone who became metabolically unwell over the 25-year period since the study began (but not from baseline). […]

Dr Aseem Malhotra took it to another level in his interview on BBC World News.

Here are a couple of Key Points he outlined on Facebook:

1. Reviewing ALL the up to date evidence the suggestion that low carb diets shorten lifespan from this fatally flawed association study is COMPLETELY AND TOTALLY FALSE. To say that they do is a MISCARRIAGE OF SCIENCE!

2. The most effective approach for managing type 2 diabetes is cutting sugar and starch. A systematic review of randomised trials … reveals its best for blood glucose and cardiovascular risk factors in short AND long term. […]

The take-away message is: please don’t believe everything that is written about the latest study to come out of the Harvard T.H. Chan School of Public Health. The authors/funders of this paper, Dietary carbohydrate intake and mortality: a prospective cohort study and meta-analysis, paid $5,000 to be published in the Lancet Public Health (not to be confused with the official parent publication, The Lancet). While it went past an editorial committee, it has not yet been peer reviewed.

Low-carb or high carb diet: What I want you to know about the ‘healthiest diet’, as an NHS Doctor
by Dr Aseem Malhotra

1418: Jimmy Moore Rant On Anti-Keto Lancet Study
from The Livin’ La Vida Low-Carb Show

Antipsychotics: Effects and Experience

Many people now know that antidepressants are overprescribed. Studies have shown that most people taking them receive no benefit at all. Besides that, there are many negative side effects, including suicidality. But few are aware of how widely antipsychotics are also prescribed. They aren’t only used for severe cases such as schizophrenia. Often, they are given for conditions that have nothing to do with psychosis, such as depression and personality disorders. Worse still, they are regularly given to children in foster care to make them more manageable.

That was the case with me, in treating my depression. Along with the antidepressant Paxil, I was put on the antipsychotic Risperdal. I don’t recall being given an explanation at the time and I wasn’t in the mindset back then to interrogate the doctors. Antipsychotics are powerful tranquilizers that shut down the mind and increase sleep. Basically, it’s an attempt to solve the problem by making the individual utterly useless to the world, entirely disconnected, calmed into mindlessness and numbness. That is a rather extreme strategy. Rather than seeking healing, it treats the person suffering as the problem to be solved.

Those on them can find themselves sleeping all the time, having a hard time concentrating, and often unable to work. The drugs can make them inert and immobile, often gaining weight in the process. But for those who try to get off of them, there can be serious withdrawal symptoms. The problem is that prescribers rarely tell patients about the side effects or the long-term consequences of antipsychotic use, such as what some experience as permanent impairment of mental ability. This is partly because drug companies have suppressed the information on the negatives and promoted these drugs as miracle cures.

Be highly cautious with any psychiatric medications, including antidepressants but especially antipsychotics. These are potent chemicals to be used only in the most desperate of cases, not as cavalierly as they are now. As with diet, always question a healthcare professional recommending any kind of psychiatric medication for you or a loved one. And most importantly, research these drugs in immense detail before taking them. Know what you’re dealing with and learn of the experiences of others.

Here is an interesting anecdote. Ketogenic diets have been used to medically treat diverse neurocognitive disorders, originally epileptic seizures, but they are also used for weight loss. There was an older lady, maybe in her 70s, who had been diagnosed with schizophrenia as a teenager. The long-term use of antipsychotics had caused her to become overweight.

She went to Dr. Eric Westman, who trained under Dr. Robert Atkins. She was put on the keto diet and did lose weight, but she was surprised to find her schizophrenic symptoms also reduced, to such an extent that she was able to stop taking the antipsychotics. So, how many doctors recommend a ketogenic diet before prescribing dangerous drugs? The answer is next to zero. There simply is no incentive for doctors to do so within our present medical system, and many incentives to continue with the overprescription of drugs.

No doctor ever suggested that I try the keto diet or anything similar, despite the fact that none of the prescribed drugs helped. Yet I too had the odd experience of going on the keto diet to lose weight only to find that I had also lost decades of depression in the process. The depressive funks, irritability, and brooding simply disappeared. That is great news for the patient but a bad business model. Drug companies can’t make any profit from diets. And doctors who step out of line with non-standard practices open themselves up to liability and punishment by medical boards, sometimes losing their licenses.

So, psychiatric medications continue to be handed out like candy. The young generation right now is on more prescribed drugs than ever before. They are guinea pigs for the drug companies. Who is going to be held accountable when this mass experiment on the public inevitably goes horribly wrong when we discover the long-term consequences on the developing brains and bodies of children and young adults?

* * *

Largest Survey of Antipsychotic Experiences Reveals Negative Results
By Ayurdhi Dhar, PhD

While studies have attributed cognitive decline and stunted recovery to antipsychotic use, less attention has been paid to patients’ first-person experiences on these drugs. In one case where a psychiatrist tried the drugs and documented his experience, he wrote:

“I can’t believe I have patients walking around on 800mg of this stuff. There’s no way in good conscience I could dose this BID (sic) unless a patient consented to 20 hours of sleep a day. I’m sure there’s a niche market for this med though. There has to be a patient population that doesn’t want to feel emotions, work, have sex, take care of their homes, read, drive, go do things, and want to drop their IQ by 100 points.”

Other adverse effects of antipsychotics include poor heart health, brain atrophy, and increased mortality. Only recently have researchers started exploring patient experiences on antipsychotic medication. There is some evidence to suggest that some service users believe that antipsychotics undermine recovery. However, these first-person reports do not play a significant part in how these drugs are evaluated. […]

Read and Sacia found that only 14.3% reported that their experience on antipsychotics was purely positive, 27.9% of the participants had mixed experiences, and the majority of participants (57.7%) only reported negative results.

Around 22% of participants reported drug effects as more positive than negative on the Overall Antipsychotic Rating scale, with nearly 6% calling their experience “extremely positive.” Most participants had difficulty articulating what was positive about their experience, but around 14 people noted a reduction in symptoms, and 14 others noted it helped them sleep.

Of those who stated they had adverse effects, 65% reported withdrawal symptoms, and 58% reported suicidality. In total, 316 participants complained about adverse effects from the drugs. These included weight gain, akathisia, emotional numbing, cognitive difficulties, and relationship problems. […]

Similar results were reported in a recent review, which found that while some patients reported a reduction in symptoms on antipsychotics, others stated that they caused sedation, emotional blunting, loss of autonomy, and a sense of resignation. Participants in the current survey also complained of the lingering adverse effects of antipsychotics, long after they had discontinued their use.

Importantly, these negative themes also included negative interactions with prescribers of the medication. Participants reported a lack of information about side-effects and withdrawal effects, lack of support from prescribers, and lack of knowledge around alternatives; some noted that they were misdiagnosed, and the antipsychotics made matters worse.

One participant said: “I was not warned about the permanent/semi-permanent effects of antipsychotics which I got.” Another noted: “Most doctors do not have a clue. They turn their backs on suffering patients, denying the existence of withdrawal damage.”

This is an important finding as previous research has shown that positive relationships with one’s mental health provider are considered essential to recovery by many patients experiencing first-episode psychosis.

Diet and Industrialization, Gender and Class

Below are a couple of articles about the shift in diet since the 19th century. Earlier Americans ate a lot of meat, lard, and butter. It’s how everyone ate — women and men, adults and children — as that was what was available and everyone ate meals together. Then there was a decline in consumption of both red meat and lard in the early 20th century (dairy has also seen a decline). The changes created a divergence in who was eating what.

It’s interesting that, as part of moral panic and identity crisis, diets became gendered to reinforce social roles and the social order. It’s strange that industrialization and gendering happened simultaneously, although maybe it’s not so strange. It was largely industrialization, in altering society so dramatically, that caused the sense of panic and crisis. So, diet also became heavily politicized and used for social engineering, a self-conscious campaign to create a new kind of society of individualism and the nuclear family.

This period also saw the rise of the middle class as an ideal, along with increasing class anxiety and class war. This led to the popularity of cookbooks within bourgeois culture, as the foods one ate came to define not only gender identity but also class identity. As grains and sugar were only becoming widely available in the 19th century with improved agriculture and international trade, the first popular cookbooks focused on dessert recipes (Liz Susman Karp, Eliza Leslie: The Most Influential Cookbook Writer of the 19th Century). Before that, desserts had been limited to the rich.

Capitalism was transforming everything. The emerging industrial diet was self-consciously created to not only sell products but to sell an identity and lifestyle. It was an entire vision of what defined the good life. Diet became an indicator of one’s place in society, what one aspired toward or was expected to conform to.

* * *

How Steak Became Manly and Salads Became Feminine
Food didn’t become gendered until the late 19th century.
by Paul Freedman

Before the Civil War, the whole family ate the same things together. The era’s best-selling household manuals and cookbooks never indicated that husbands had special tastes that women should indulge.

Even though “women’s restaurants” – spaces set apart for ladies to dine unaccompanied by men – were commonplace, they nonetheless served the same dishes as the men’s dining room: offal, calf’s heads, turtles and roast meat.

Beginning in the 1870s, shifting social norms – like the entry of women into the workplace – gave women more opportunities to dine without men and in the company of female friends or co-workers.

As more women spent time outside of the home, however, they were still expected to congregate in gender-specific places.

Chain restaurants geared toward women, such as Schrafft’s, proliferated. They created alcohol-free safe spaces for women to lunch without experiencing the rowdiness of workingmen’s cafés or free-lunch bars, where patrons could get a free midday meal as long as they bought a beer (or two or three).

It was during this period that the notion that some foods were more appropriate for women started to emerge. Magazines and newspaper advice columns identified fish and white meat with minimal sauce, as well as new products like packaged cottage cheese, as “female foods.” And of course, there were desserts and sweets, which women, supposedly, couldn’t resist.

How Crisco toppled lard – and made Americans believers in industrial food
by Helen Zoe Veit

For decades, Crisco had only one ingredient, cottonseed oil. But most consumers never knew that. That ignorance was no accident.

A century ago, Crisco’s marketers pioneered revolutionary advertising techniques that encouraged consumers not to worry about ingredients and instead to put their trust in reliable brands. It was a successful strategy that other companies would eventually copy. […]

It was only after a chemist named David Wesson pioneered industrial bleaching and deodorizing techniques in the late 19th century that cottonseed oil became clear, tasteless and neutral-smelling enough to appeal to consumers. Soon, companies were selling cottonseed oil by itself as a liquid or mixing it with animal fats to make cheap, solid shortenings, sold in pails to resemble lard.

Shortening’s main rival was lard. Earlier generations of Americans had produced lard at home after autumn pig slaughters, but by the late 19th century meat processing companies were making lard on an industrial scale. Lard had a noticeable pork taste, but there’s not much evidence that 19th-century Americans objected to it, even in cakes and pies. Instead, its issue was cost. While lard prices stayed relatively high through the early 20th century, cottonseed oil was abundant and cheap. […]

In just five years, Americans were annually buying more than 60 million cans of Crisco, the equivalent of three cans for every family in the country. Within a generation, lard went from being a major part of American diets to an old-fashioned ingredient. […]

In the decades that followed Crisco’s launch, other companies followed its lead, introducing products like Spam, Cheetos and Froot Loops with little or no reference to their ingredients.

Once ingredient labeling was mandated in the U.S. in the late 1960s, the multisyllabic ingredients in many highly processed foods may have mystified consumers. But for the most part, they kept on eating.

So if you don’t find it strange to eat foods whose ingredients you don’t know or understand, you have Crisco partly to thank.


Red Flag of Twin Studies

Consider this a public service announcement. The moment someone turns to twin studies as reliable and meaningful evidence, it’s a dead giveaway about the kind of person they are. And when someone uses this research in the belief that they are proving genetic causes, it demonstrates a number of things.

First and foremost, it shows they don’t understand what heritability is. It concerns population-level factors and can tell us nothing about individuals, much less disentangle genetics from epigenetics and environment. Heritability does not mean genetic inheritance, although even some scientists who know better sometimes talk as if the two were the same thing. The fact of the matter is, beyond basic shared traits (e.g., two eyes, instead of one or three), there is little research proving direct genetic causation, which is typically only seen in a few rare diseases. All that heritability can do is point to the possibility of genetic causes, and all that allows is the articulation of a hypothesis to be tested by actual genetic research, which is rarely done.
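The population-level point can be made concrete with a minimal sketch. This uses the textbook definition of broad-sense heritability as a ratio of variances; the numbers are purely illustrative assumptions, not data from any study:

```python
# A minimal sketch of why heritability is a population-level ratio,
# not a statement about any individual. Numbers are illustrative only.
def heritability(var_genetic, var_environment):
    """Broad-sense heritability: the share of phenotypic variance
    attributable to genetic variance within one specific population."""
    return var_genetic / (var_genetic + var_environment)

# Identical genetic variance, two different environments:
uniform_env = heritability(var_genetic=2.0, var_environment=0.5)  # 0.8
varied_env = heritability(var_genetic=2.0, var_environment=8.0)   # 0.2

# The same trait can look highly heritable in one population and
# barely heritable in another, with no change in anyone's genes.
print(uniform_env, varied_env)
```

Nothing in that ratio identifies a cause in any one person, which is why treating a heritability estimate as proof of genetic inheritance is a category error.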

And second, it gives away the ideological game being played. Either the person ideologically identifies as a eugenicist, racist, etc., or has unconsciously assimilated eugenicist, racist, etc., ideology without realizing it. In either case, there is next to zero chance that any worthwhile discussion will follow. It doesn’t matter what the individual’s motivations are or whether they are even aware of them. It’s probably best to just walk away. You don’t need to call them out, much less call them a racist or whatever. You know all that you need to know at that point. Just walk away. And if you don’t walk away, go into the situation with your eyes wide open, for you are entering a battlefield of ideological rhetoric.

So, keep this in mind. Twin studies are some of the worst research around, the opposite of how they get portrayed by ideologues as strong evidence. Treat them as you would the low-quality epidemiological research in nutrition studies (such as the disproven Seven Countries Study and China Study). They are evidence, at best, to be considered in a larger context of information, but not to be taken alone as significant and meaningful. Besides, the twin studies are so poorly designed and so sparse in number that not much can be said about them. If anything, all they are evidence for is how to do science badly. That isn’t to say that, theoretically, twin studies couldn’t be designed well, but as far as I know it hasn’t happened yet. It’s not easy research to do, for obvious reasons, as humans are complex creatures in complex conditions.

For someone to even mention twin studies, other than to criticize them, is a red flag. Scrutinize carefully anything such a person says. Or better yet, when possible, simply ignore them. The problem with weak evidence that is repeated as if true is that it never really is about the evidence in the first place. Twin studies are one of those things that, like dog-whistle politics, stand in for something else. They are what I call a symbolic conflation, a distraction tactic to point away from the real issue. Few people talking about twin studies actually care about either twins or science. You aren’t going to convince a believer that their beliefs are false. If anything, they will become even more vehement in their beliefs and you’ll end up frustrated.

* * *

What Genetics Does And Doesn’t Tell Us
Heritability & Inheritance, Genetics & Epigenetics, Etc
Unseen Influences: Race, Gender, and Twins
Weak Evidence, Weak Argument: Race, IQ, Adoption
Identically Different: A Scientist Changes His Mind

Exploding the “Separated-at-Birth” Twin Study Myth
by Jay Joseph, PsyD

“The reader whose knowledge of separated twin studies comes only from the secondary accounts provided by textbooks can have little idea of what, in the eyes of the original investigators, constitutes a pair of ‘separated’ twins” —Evolutionary geneticist Richard Lewontin, neurobiologist Steven Rose, and psychologist Leon Kamin in Not in Our Genes, 1984

“The Myth of the Separated Identical Twins” —Chapter title in sociologist Howard Taylor’s The IQ Game, 1980

Supporters of the nature (genetic) side of the “nature versus nurture” debate often cite studies of “reared-apart” or “separated” MZ twin pairs (identical, monozygotic) in support of their positions. In this article I present evidence that, in fact, most studied pairs of this type do not qualify as reared-apart or separated twins.

Other than several single-case and small multiple-case reports that have appeared since the 1920s, there have been only six published “twins reared apart” (TRA) studies. (The IQ TRA study by British psychologist Cyril Burt was discredited in the late 1970s on suspicions of fraud, and is no longer part of the TRA study literature.) The authors of these six studies assessed twin resemblance and calculated correlations for “intelligence” (IQ), “personality,” and other aspects of human behavior. In the first three studies—by Horatio Newman and colleagues in 1937 (United States, 29 MZ pairs), James Shields in 1962 (Great Britain, 44 MZ pairs), and Niels Juel-Nielsen in 1965 (Denmark, 12 MZ pairs)—the authors provided over 500 pages of detailed case-history information for the combined 75 MZ pairs they studied.

The three subsequent TRA studies were published in the 1980s and 1990s, and included Thomas J. Bouchard, Jr. and colleagues’ widely cited “Minnesota Study of Twins Reared Apart” (MISTRA), and studies performed in Sweden and Finland. In the Swedish study, the researchers defined twin pairs as “reared apart” if they had been “separated by the age of 11.” In the Finnish study, the average age at separation was 4.3 years, and 12 of the 30 “reared-apart” MZ pairs were separated between the ages of 6 and 10. In contrast to the original three studies, the authors of these more recent studies did not provide case-history information for the pairs they investigated. (The MISTRA researchers did publish a few selected case histories, some of which, like the famous “Three Identical Strangers” triplets, had already been publicized in the media.)

The Newman et al. and Shields studies were based on twins who had volunteered to participate after responding to media or researcher appeals to do so in the interest of scientific research. As Leon Kamin and other analysts pointed out long ago, however, TRA studies based on volunteer twins are plagued by similarity biases, in part because twins had to have known of each other’s existence to be able to participate in the study. Like the famous MISTRA “Firefighter Pair,” some twins discovered each other because of their behavioral similarities. The MISTRA researchers arrived at their conclusions in favor of genetics on the basis of a similarity-biased volunteer twin sample. […]

Contrary to the common contemporary claim that twin pairs found in TRA studies were “separated at birth”—which should mean that twins did not know each other or interact with each other between their near-birth separation and the time they were reunited for the study—the information provided by the original researchers shows that few if any MZ pairs fit this description. This is even more obvious in the 1962 Shields study. As seen in the tables below and in the case descriptions:

  • Some pairs were separated well after birth
  • Some pairs grew up nearby to each other and attended school together
  • Most pairs grew up in similar cultural and socioeconomic environments
  • Many pairs were raised by different members of the same family
  • Most pairs had varying degrees of contact while growing up
  • Some pairs had a close relationship as adults
  • Some pairs were reunited and lived together for periods of time

In other words, in addition to sharing a common prenatal environment and many similar postnatal environmental influences (described here), twin pairs found in volunteer-based TRA study samples were not “separated at birth” in the way that most people understand this term. The best way to describe this sample is to say that it consisted of partially reared-apart MZ twin pairs.

The Minnesota researchers have always denied access to independent researchers who wanted to inspect the unpublished MISTRA raw data and case history information, and we can safely assume that the volunteer MISTRA MZ twin pairs were no more “reared apart” than were the MZ pairs […]

The Large and Growing Caste of Permanent Underclass

The United States economy is in bad condition for much of the population, but you wouldn’t necessarily know that by watching the news or listening to the president, especially if you live in the comfortable economic segregation of a college town, a tech hub, a suburb, or a gentrified neighborhood. As the middle class shrinks, many fall into the working class and many others into poverty. The majority of Americans are some combination of unemployed, underemployed, and underpaid (Alt-Facts of Unemployment) — with almost half the population being low wage workers and 40 million below the poverty line. No one knows the full unemployment rate, as the permanently unemployed are excluded from the data along with teens (Teen Unemployment). As for the numbers of homeless, there is no reliable data at all, but we do know that 6,300 Americans are evicted every day.

Most of these people can barely afford to pay their bills, or else they end up on welfare or in debt or, worse still, fall through the cracks entirely. For the worst off, those who don’t end up homeless often find themselves in prison or caught up in the legal system. This is because the desperately poor often turn to illegal means to get money: prostitution, selling drugs, petty theft, and the like, or even minor infractions (remember that Eric Garner was killed by police for illegally selling cigarettes on a sidewalk); whatever it takes to get by in the hope of avoiding the harshest fate. Even for the homeless, sleeping in public, begging, or rummaging through the trash is a crime in many cities and, if not a crime, it can lead to constant harassment by police, as if life weren’t already hard enough for them.

Unsurprisingly, economic data is also not kept about the prison and jail population, as they are removed from society and made invisible (Invisible Problems of Invisible People). In some communities, the majority of men and many of the women are locked up or were at one time. When in prison, as with the permanently unemployed, they can be eliminated from the economic accounting of these communities and the nation. Imprison enough people and the official unemployment rate will go down, especially as imprisonment creates employment through the need for prison guards and for the various companies that serve prisons. But for those out on parole who can’t find work or housing, knowing that at least they are being recorded in the data is little comfort. Still others, sometimes entirely innocent, get tangled up in the legal system through police officers targeting minorities and the poor. Coerced false confessions are a further problem and, unlike in the movies, the poor rarely get adequate legal representation.

Once the poor and homeless are in the legal system, it can be hard to escape, for there are all kinds of fees, fines, and penalties that they can’t afford. In various ways, the criminal system, in punishing the victims of our oppressive society, harms not only individuals but also breaks apart families and cripples entire communities. The war on drugs, in particular, has been a war on minorities and the poor. Rich white people have high rates of drug use, but nowhere near an equivalent rate of being stopped and frisked, arrested and convicted for drug crimes. A punitive society is about maintaining class hierarchy and privilege while keeping the poor in their place. As Mother Jones put it, a child who picked up loose coal on the side of railroad tracks or a hobo who stole shoes would be arrested, but steal a whole railroad and you’ll become a United States Senator or be “heralded as a Napoleon of finance” (Size Matters).

A large part of the American population lives paycheck to paycheck, many of them in debt they will never be able to pay off, often from healthcare crises, since stress and poverty — especially a poverty diet — worsen health even further. But those who are lucky enough to avoid debt are constantly threatened by one misstep or accident. Imagine getting sick or injured when your employment gives you no sick days and you have a family to house and feed. Keep in mind that most people on welfare are also working and only on it temporarily (On Welfare: Poverty, Unemployment, Health, Etc). Meanwhile, the biggest employers of the impoverished (Walmart, Amazon, etc) are the biggest beneficiaries of the welfare that is spent at their stores.

So, poverty is good for business, in maintaining both cheap labor subsidized by the government and consumerism likewise subsidized by the government. To the capitalist class, none of this is a problem but an opportunity for profit. That is, profit on top of the trillions of dollars given each year to individual industries, from the oil industry to the military-industrial complex, that come from direct and indirect subsidies, much of it hidden — that is to say, corporate welfare and socialism for the rich (Trillions Upon Trillions of Dollars, Investing in Violence and Death). These are vast sums of public wealth and public resources that, in a just society and functioning social democracy, would support the public good. That is not only money stolen from the general public, including stolen from the poor, but opportunities lost for social improvement and economic reform.

Worse still, it isn’t only theft from living generations for the greater costs are externalized onto the future that will be inherited by the children growing up now and those not yet born. The United Nations is not an anti-capitalist organization in the slightest, but a recent UN report came to a strong conclusion: “The report found that when you took the externalized costs into effect, essentially NONE of the industries was actually making a profit. The huge profit margins being made by the world’s most profitable industries (oil, meat, tobacco, mining, electronics) is being paid for against the future: we are trading long term sustainability for the benefit of shareholders. Sometimes the environmental costs vastly outweighed revenue, meaning that these industries would be constantly losing money had they actually been paying for the ecological damage and strain they were causing.” Large sectors of the global economy are a net loss to society. Their private profits and social benefit are a mirage. It is theft hidden behind false pretenses of a supposedly free market that in reality is not free, in any sense of the word.

As a brief side note, let’s make clear the immensity of this theft that extends to global proportions. Big biz and big gov are so closely aligned as to essentially be the same entity, and which controls which is not always clear, be it fascism or inverted totalitarianism. When the Western governments destroyed Iraq and Libya, it was done to steal their wealth and resources, oil in the case of one and gold for the other, and it cost the lives of millions of innocent people. When Hillary Clinton as Secretary of State intervened in Haiti to suppress wages to maintain cheap labor for American corporations, that was not only theft but authoritarian oppression in a country that once started a revolution to overthrow the colonial oppressors that had enslaved them.

These are but a few examples of endless acts of theft, not always violent but often so. All combined, we are talking about possibly thousands of trillions of dollars stolen from the world-wide population every year. And it is not a new phenomenon, as it goes back to the 1800s with the violent theft of land from Native Americans and Mexicans. General Smedley Butler wrote scathingly about these imperial wars and “Dollar Diplomacy” on behalf of “Big Business, for Wall Street, and for the Bankers” (Danny Sjursen, Where Have You Gone, Smedley Butler?). Poverty doesn’t happen naturally. It is created and enforced, and the ruling elite responsible are homicidal psychopaths. Such acts are a crime against humanity; more than that, they are pure evil.

Work is supposedly the definition of worth in our society and yet the richer one is the less one works. Meanwhile, the poor are working as much as they are able when they can find work, often working themselves to the point of exhaustion and sickness and early death. Even among the homeless, many are working or were recently employed, a surprising number of them low-paid professionals such as public school teachers, Uber drivers, and gig workers who can’t afford housing in the high-priced urban areas where the jobs are to be found. Somehow merely not being unemployed is supposed to be a great boon, but working in constant fear of not getting by from day to day is not exactly a happy situation. Unlike in generations past, a job isn’t a guarantee of a good life, much less the so-called American Dream. Gone are the days when a single income from an entry level factory job could support a family with several kids, a nice house, a new car, regular vacations, cheap healthcare, and a comfortable nest egg.

As this shows, it’s far from limited to the poorest. These days most college students, the fortunate few to have higher education (less than a quarter of Americans), are struggling to find work at all (Jordan Weissman, 53% of Recent College Grads Are Jobless or Underemployed—How?). Yet those without a college degree are facing far greater hardship. And it’s easy to forget that the United States citizenry remains largely uneducated because few can afford college as tuition keeps going up. Even most college students come out with massive debt, and they are the lucky ones. What once was a privilege has become a burden for many. A college education doesn’t guarantee a good job as it did in the past, but chances for employment are even worse without a degree — so, damned if you do and damned if you don’t.

It’s not only that wages have stagnated for most and, relative to inflation, dropped for many others. Costs of living (housing, food, etc) have simultaneously gone up, not to mention the disappearance of job security and good benefits. Disparities in general have become vaster — disparities in wealth, healthcare, education, opportunities, resources, and political representation. The growing masses at the bottom of society are part of the permanent underclass, which is to say they are the American caste of untouchables or rather unspeakables (Barbara Ehrenreich: Poverty, Homelessness). The mainstream mention them only when they seem a threat, such as fears about them dragging down the economy, or in scapegoating them for the election of Donald Trump as president or whatever other distraction of the moment. Simple human concern for the least among us, however, rarely comes up as a priority.

Anyway, why are we still idealizing a fully employed workforce (Bullshit Jobs) and so demonizing the unemployed (Worthless Non-Workers) at a time when many forms of work are becoming increasingly meaningless and unnecessary, coming close to obsolete? Such demonization doesn’t bode well for the future (Our Bleak Future: Robots and Mass Incarceration). It never really was about work but about social control. If the people are kept busy, tired, and stressed, they won’t have the time and energy for community organizing and labor organizing, democratic participation and political campaigning, protesting and rioting, or even maybe revolution. But it doesn’t have to be this way. If we harnessed even a fraction of the human potential that is wasted and thrown away, if we used public wealth and public resources to invest in the citizenry and promote the public good, we could transform society overnight. What are we waiting for? Why do we tolerate and allow this moral wrongdoing to continue?

* * *

A commenter below shared a documentary (How poor people survive in the USA) from DW, a German public broadcaster. It is strange to watch foreign news reporting on the United States as if it were about a developing country in the middle of a crisis. Maybe that is because the United States is a developing country in the middle of a crisis. Many parts of this country have poverty, disease, and mortality rates as high as or higher than those seen in many countries that were once called third world. And inequality, once considered absolute proof of a banana republic, is now higher here than it ever was in the original banana republics. In fact, inequality — that is to say concentrated wealth and power — has never been this radically extreme in any society in all of history.

It’s not about a few poor people in various places but about the moral failure and democratic failure of an entire dysfunctional and corrupt system. And it’s not limited to the obvious forms of poverty and inequality, for the consequences to the victims are harsh: from parasite load to toxic exposure, causing physical sickness and mental illness, stunting neurocognitive development and lowering IQ, increasing premature puberty and behavioral problems, and generally destroying lives. We’ve known all of this for decades. It even occasionally, if only briefly and superficially, shows up in corporate media reporting. We can’t honestly claim ignorance as a defense of our apathy and indifference, of our collective failure.

Poverty isn’t a lack of character. It’s a lack of cash
by Rutger Bregman

On Conflict and Stupidity
Inequality in the Anthropocene
Parasites Among the Poor and the Plutocrats
Stress and Shittiness
The Desperate Acting Desperately
Stress Is Real, As Are The Symptoms
Social Conditions of an Individual’s Condition
Social Disorder, Mental Disorder
Urban Weirdness
Lead Toxicity is a Hyperobject
Connecting the Dots of Violence
Trauma, Embodied and Extended
An Invisible Debt Made Visible
Public Health, Public Good

Childhood adversity linked to early puberty, premature brain development and mental illness
from Science Daily

Poor kids hit puberty sooner and risk a lifetime of health problems
by Ying Sun

* * *

Low-Wage Jobs are the New American Normal
by Dawn Allen

It’s clear that the existence of the middle class was a historic anomaly. In 2017, MIT economist Peter Temin argued that we’re splitting into a two-class system. There’s a small upper class, about 20% of Americans, predominantly white, degree holders, working largely in the technology and finance sectors, that holds the lion’s share of wealth and political power in the country. Then, there’s a much larger precariat below them, “minority-heavy” but still mostly white, with little power and low-wage, if any, jobs. Escaping lower-class poverty depends upon navigating two flawless, problem-free decades, starting in early childhood, ending with a valuable college degree. The chances of this happening for many are slim, while implementing it stresses kids out and even makes them meaner. Fail, and you risk “shit-life syndrome” and a homeless retirement.

Underemployment Is the New Unemployment
Western countries are celebrating low joblessness, but much of the new work is precarious and part-time.
by Leonid Bershidsky

Some major Western economies are close to full employment, but only in comparison to their official unemployment rate. Relying on that benchmark alone is a mistake: Since the global financial crisis, underemployment has become the new unemployment.

In a recent paper, David Bell and David Blanchflower singled out underemployment as a reason why wages in the U.S. and Europe are growing slower than they did before the global financial crisis, despite unemployment levels that are close to historic lows. In some economies with lax labor market regulation — the U.K. and the Netherlands, for example — more people are on precarious part-time contracts than out of work. That could allow politicians to use just the headline unemployment number without going into details about the quality of the jobs people manage to hold down.

Measuring underemployment is difficult everywhere. To obtain more or less accurate data, those working part-time should probably be asked how many hours they’d like to put in, and those reporting a large number of hours they wish they could add should be recorded as underemployed. But most statistical agencies make do with the number of part-timers who say they’d like a full-time job. The U.S. Bureau of Labor Statistics doesn’t provide an official underemployment number, and existing semi-official measures, according to Bell and Blanchflower, could seriously underestimate the real situation.

The need for governments to show improvement on jobs since the global crisis has led to an absurd situation. Generous standards for measuring unemployment produce numbers that don’t agree with most people’s personal experience and the anecdotal evidence from friends and family. A lot of people are barely working, and wages are going up too slowly to fit a full employment picture. At the same time, underemployment, which, according to Bell and Blanchflower, has “replaced unemployment as the main indicator of labor market slack,” is rarely discussed and unreliably measured.

Governments should provide a clearer picture of how many people are not working as much as they’d like to — and of how many hours they’d like to add. Labor market flexibility is a nice tool in a crisis, but during an economic expansion, the focus should be on improving employment quality, not just reducing the number of people who draw an unemployment check. An increasing number of better jobs, and as a consequence wage growth, becomes the most important measure of policy success.

The War on Work — and How to End It
by Edward L. Glaeser

In 1967, 95 percent of “prime-age” men between the ages of 25 and 54 worked. During the Great Recession, though, the share of jobless prime-age males rose above 20 percent. Even today, long after the recession officially ended, more than 15 percent of such men aren’t working. And in some locations, like Kentucky, the numbers are even higher: Fewer than 70 percent of men lacking any college education go to work every day in that state. […]

From 1945 to 1968, only 5 percent of men between the ages of 25 and 54 — prime-age males — were out of work. But during the 1970s, something changed. The mild recession of 1969–70 produced a drop in the employment rate of this group, from 95 percent to 92.5 percent, and there was no rebound. The 1973–74 downturn dragged the employment rate below 90 percent, and, after the 1979–82 slump, it would stay there throughout most of the 1980s. The recessions at the beginning and end of the 1990s caused further deterioration in the rate. Economic recovery failed to restore the earlier employment ratio in both instances.

The greatest fall, though, occurred in the Great Recession. In 2011, more than one in five prime-age men were out of work, a figure comparable to the Great Depression. But while employment came back after the Depression, it hasn’t today. The unemployment rate may be low, but many people have quit the labor force entirely and don’t show up in that number. As of December 2016, 15.2 percent of prime-age men were jobless — a figure worse than at any point between World War II and the Great Recession, except during the depths of the early 1980s recession.

The trend in the female employment ratio is more complicated because of the postwar rise in the number of women in the formal labor market. In 1955, 37 percent of prime-age women worked. By 2000, that number had increased to 75 percent — a historical high. Since then, the number has come down: It stood at 71.7 percent at the end of 2016. Interpreting these figures is tricky, since more women than men voluntarily leave the labor force, often finding meaningful work in the home. The American Time Use Survey found that non-employed women spend more than six hours a day doing housework and caring for others. Non-employed men spend less than three hours doing such tasks.

Joblessness is disproportionately a condition of the poorly educated. While 72 percent of college graduates over age 25 have jobs, only 41 percent of high-school dropouts are working. The employment-rate gap between the most and least educated groups has widened from about 6 percent in 1977 to almost 15 percent today. The regional variation is also enormous. Kentucky’s 23 percent male jobless rate leads the nation; in Iowa, the rate is under 10 percent. […]

The rise of joblessness among the young has been a particularly pernicious effect of the Great Recession. Job loss was extensive among 25–34-year-old men and 35–44-year-old men between 2007 and 2009. The 25–34-year-olds have substantially gone back to work, but the number of employed 35–44-year-olds, which dropped by 2 million at the start of the Great Recession, hasn’t recovered. The dislocated workers in this group seem to have left the labor force permanently.

Lost in Recession, Toll on Underemployed and Underpaid
by Michael Cooper

These are anxious days for American workers. Many, like Ms. Woods, are underemployed. Others find pay that is simply not keeping up with their expenses: adjusted for inflation, the median hourly wage was lower in 2011 than it was a decade earlier, according to data from a forthcoming book by the Economic Policy Institute, “The State of Working America, 12th Edition.” Good benefits are harder to come by, and people are staying longer in jobs that they want to leave, afraid that they will not be able to find something better. Only 2.1 million people quit their jobs in March, down from the 2.9 million people who quit in December 2007, the first month of the recession.

“Unfortunately, the wage problems brought on by the recession pile on top of a three-decade stagnation of wages for low- and middle-wage workers,” said Lawrence Mishel, the president of the Economic Policy Institute, a research group in Washington that studies the labor market. “In the aftermath of the financial crisis, there has been persistent high unemployment as households reduced debt and scaled back purchases. The consequence for wages has been substantially slower growth across the board, including white-collar and college-educated workers.”

Now, with the economy shaping up as the central issue of the presidential election, both President Obama and Mitt Romney have been relentlessly trying to make the case that their policies would bring prosperity back. The unease of voters is striking: in a New York Times/CBS News poll in April, half of the respondents said they thought the next generation of Americans would be worse off, while only about a quarter said it would have a better future.

And household wealth is dropping. The Federal Reserve reported last week that the economic crisis left the median American family in 2010 with no more wealth than in the early 1990s, wiping away two decades of gains. With stocks too risky for many small investors and savings accounts paying little interest, building up a nest egg is a challenge even for those who can afford to sock away some of their money.

Expenses like putting a child through college — where tuition has been rising faster than inflation or wages — can be a daunting task. […]

Things are much worse for people without college degrees, though. The real entry-level hourly wage for men who recently graduated from high school fell to $11.68 last year, from $15.64 in 1979, according to data from the Economic Policy Institute. And the percentage of those jobs that offer health insurance has plummeted to 22.8 percent, from 63.3 percent in 1979.

Though inflation has stayed relatively low in recent years, it has remained high for some of the most important things: college, health care and even, recently, food. The price of food in the home rose by 4.8 percent last year, one of the biggest jumps in the last two decades.

Meet the low-wage workforce
by Martha Ross and Nicole Bateman

Low-wage workers comprise a substantial share of the workforce. More than 53 million people, or 44% of all workers ages 18 to 64 in the United States, earn low hourly wages. More than half (56%) are in their prime working years of 25-50, and this age group is also the most likely to be raising children (43%). They are concentrated in a relatively small number of occupations, and many face economic hardship and difficult roads to higher-paying jobs. Slightly more than half are the sole earners in their families or make major contributions to family income. Nearly one-third live below 150% of the federal poverty line (about $36,000 for a family of four), and almost half have a high school diploma or less.

Women and Black workers, two groups for whom there is ample evidence of labor market discrimination, are overrepresented among low-wage workers.

To lift the American economy, we need to understand the workers at the bottom of it
by Martha Ross and Nicole Bateman

These low-wage workers are a racially diverse group, and disproportionately female. Fifty-two percent are white, 25% are Latino or Hispanic, 15% are Black, and 5% are Asian American. Females account for 54% of low-wage workers, higher than their total share of the entire workforce (48%).

Fifty-seven percent of low-wage workers work full time year-round, considerably lower than mid/high-wage workers (81%). Among those working less than full time year-round, the data don’t specify if this is voluntary or involuntary, and it is probably a mix.

Two-thirds of low-wage workers are in their prime working years of 25-54, and nearly half of this group (40%) are raising children. Given the links between education and earnings, it is not surprising that low-wage workers have lower levels of education than mid/high-wage workers. Fourteen percent of low-wage workers have a bachelor’s degree, compared to 44% among mid/high-wage workers, and nearly half (49%) have a high school diploma or less, compared to 24% among mid/high-wage workers. […]

The largest cluster consists of prime-age adults with a high school diploma or less

The largest cluster, accounting for 15 million people (28% of low-wage workers) consists of workers ages 25 to 50 with no more than a high school diploma. It is one of two clusters that are majority male (54%) and it is the most racially and ethnically diverse of all groups, with the lowest share of white workers (40%) and highest share of Latino or Hispanic workers (39%). Many in this cluster also experience economic hardship, with high shares living below 150% of the federal poverty line (39%), receiving safety net assistance (35%), and relying solely on their wages to support their families (31%). This cluster is also the most likely to have children (44%).

Low-wage work is more pervasive than you think, and there aren’t enough “good jobs” to go around
by Martha Ross and Nicole Bateman

Even as the U.S. economy hums along at a favorable pace, there is a vast segment of workers today earning wages low enough to leave their livelihood and families extremely vulnerable. That’s one of the main takeaways from our new analysis, in which we found that 53 million Americans between the ages of 18 to 64—accounting for 44% of all workers—qualify as “low-wage.” Their median hourly wages are $10.22, and median annual earnings are about $18,000. (See the methods section of our paper to learn about how we identify low-wage workers.)

The existence of low-wage work is hardly a surprise, but most people—except, perhaps, low-wage workers themselves—underestimate how prevalent it is. Many also misunderstand who these workers are. They are not only students, people at the beginning of their careers, or people who need extra spending money. A majority are adults in their prime working years, and low-wage work is the primary way they support themselves and their families.

Low-wage work is a source of economic vulnerability

There are two central questions when considering the prospects of low-wage workers:

  1. Is the job a springboard or a dead end?
  2. Does the job provide supplemental, “nice to have” income, or is it critical to covering basic living expenses?

We didn’t analyze the first question directly, but other research is not encouraging, finding that while some workers move on from low-wage work to higher-paying jobs, many do not. Women, people of color, and those with low levels of education are the most likely to stay in low-wage jobs. In our analysis, over half of low-wage workers have levels of education suggesting they will stay low-wage workers. This includes 20 million workers ages 25-64 with a high school diploma or less, and another seven million young adults 18-24 who are not in school and do not have a college degree.

As to the second question, a few data points show that for millions of workers, low-wage work is a primary source of financial support—which leaves these families economically vulnerable.

  • Measured by poverty status: 30% of low-wage workers (16 million people) live in families earning below 150% of the poverty line. These workers get by on very low incomes: about $30,000 for a family of three and $36,000 for a family of four.
  • Measured by the presence or absence of other earners: 26% of low-wage workers (14 million people) are the only earners in their families, getting by on median annual earnings of about $20,000. Another 25% (13 million people) live in families in which all workers earn low wages, with median family earnings of about $42,000. These 27 million low-wage workers rely on their earnings to provide for themselves and their families, as they are either the family’s primary earner or a substantial contributor to total earnings. Their earnings are unlikely to represent “nice to have” supplemental income.

The low-wage workforce is part of every regional economy

We analyzed data for nearly 400 metropolitan areas, and the share of workers in a particular place earning low wages ranges from a low of 30% to a high of 62%. The relative size of the low-wage population in a given place relates to broader labor market conditions such as the strength of the regional labor market and industry composition.

Low-wage workers make up the highest share of the workforce in smaller places in the southern and western parts of the United States, including Las Cruces, N.M. and Jacksonville, N.C. (both 62%); Visalia, Calif. (58%); Yuma, Ariz. (57%); and McAllen, Texas (56%). These and other metro areas where low-wage workers account for high shares of the workforce are places with lower employment rates that concentrate in agriculture, real estate, and hospitality.

Post-Work: The Radical Idea of a World Without Jobs
by Andy Beckett

As a source of subsistence, let alone prosperity, work is now insufficient for whole social classes. In the UK, almost two-thirds of those in poverty – around 8 million people – are in working households. In the US, the average wage has stagnated for half a century.

As a source of social mobility and self-worth, work increasingly fails even the most educated people – supposedly the system’s winners. In 2017, half of recent UK graduates were officially classified as “working in a non-graduate role”. In the US, “belief in work is crumbling among people in their 20s and 30s”, says Benjamin Hunnicutt, a leading historian of work. “They are not looking to their job for satisfaction or social advancement.” (You can sense this every time a graduate with a faraway look makes you a latte.)

Work is increasingly precarious: more zero-hours or short-term contracts; more self-employed people with erratic incomes; more corporate “restructurings” for those still with actual jobs. As a source of sustainable consumer booms and mass home-ownership – for much of the 20th century, the main successes of mainstream western economic policy – work is discredited daily by our ongoing debt and housing crises. For many people, not just the very wealthy, work has become less important financially than inheriting money or owning a home.

Whether you look at a screen all day, or sell other underpaid people goods they can’t afford, more and more work feels pointless or even socially damaging – what the American anthropologist David Graeber called “bullshit jobs” in a famous 2013 article. Among others, Graeber condemned “private equity CEOs, lobbyists, PR researchers … telemarketers, bailiffs”, and the “ancillary industries (dog-washers, all-night pizza delivery) that only exist because everyone is spending so much of their time working”.

The argument seemed subjective and crude, but economic data increasingly supports it. The growth of productivity, or the value of what is produced per hour worked, is slowing across the rich world – despite the constant measurement of employee performance and intensification of work routines that makes more and more jobs barely tolerable.

Unsurprisingly, work is increasingly regarded as bad for your health: “Stress … an overwhelming ‘to-do’ list … [and] long hours sitting at a desk,” the Cass Business School professor Peter Fleming notes in his book, The Death of Homo Economicus, are beginning to be seen by medical authorities as akin to smoking.

Work is badly distributed. People have too much, or too little, or both in the same month. And away from our unpredictable, all-consuming workplaces, vital human activities are increasingly neglected. Workers lack the time or energy to raise children attentively, or to look after elderly relations. “The crisis of work is also a crisis of home,” declared the social theorists Helen Hester and Nick Srnicek in a 2017 paper. This neglect will only get worse as the population grows and ages.

And finally, beyond all these dysfunctions, loom the most-discussed, most existential threats to work as we know it: automation, and the state of the environment. Some recent estimates suggest that between a third and a half of all jobs could be taken over by artificial intelligence in the next two decades. Other forecasters doubt whether work can be sustained in its current, toxic form on a warming planet. […]

And yet, as Frayne points out, “in some ways, we’re already in a post-work society. But it’s a dystopic one.” Office employees constantly interrupting their long days with online distractions; gig-economy workers whose labour plays no part in their sense of identity; and all the people in depressed, post-industrial places who have quietly given up trying to earn – the spectre of post-work runs through the hard, shiny culture of modern work like hidden rust.

Last October, research by Sheffield Hallam University revealed that UK unemployment is three times higher than the official count of those claiming the dole, thanks to people who come under the broader definition of unemployment used by the Labour Force Survey, or are receiving incapacity benefits. When Frayne is not talking and writing about post-work, or doing his latest temporary academic job, he sometimes makes a living collecting social data for the Welsh government in former mining towns. “There is lots of worklessness,” he says, “but with no social policies to dignify it.”

Creating a more benign post-work world will be more difficult now than it would have been in the 70s. In today’s lower-wage economy, suggesting people do less work for less pay is a hard sell. As with free-market capitalism in general, the worse work gets, the harder it is to imagine actually escaping it, so enormous are the steps required.

We should all be working a four-day week. Here’s why
by Owen Jones

Many Britons work too much. It’s not just the 37.5 hours a week clocked up on average by full-time workers; it’s the unpaid overtime too. According to the TUC, workers put in 2.1bn unpaid hours last year – that’s an astonishing £33.6bn of free labour.

That overwork causes significant damage. Last year, 12.5m work days were lost because of work-related stress, depression or anxiety. The biggest single cause by a long way – in some 44% of cases – was workload. Stress can heighten the risk of all manner of health problems, from high blood pressure to strokes. Research even suggests that working long hours increases the risk of excessive drinking. And then there’s the economic cost: over £5bn a year, according to the Health and Safety Executive. No wonder the public health expert John Ashton is among those suggesting a four-day week could improve the nation’s health. […]

This is no economy-wrecking suggestion either. German and Dutch employees work less than we do but their economies are stronger than ours. It could boost productivity: the evidence suggests if you work fewer hours, you are more productive, hour for hour – and less stress means less time off work. Indeed, a recent experiment with a six-hour working day at a Swedish nursing home produced promising results: higher productivity and fewer sick days. If those productivity gains are passed on to staff, working fewer hours doesn’t necessarily entail a pay cut.

Do you work more than 39 hours a week? Your job could be killing you
by Peter Fleming

The costs of overwork can no longer be ignored. Long-term stress, anxiety and prolonged inactivity have been exposed as potential killers.

Researchers at Columbia University Medical Center recently used activity trackers to monitor 8,000 workers over the age of 45. The findings were striking. The average period of inactivity during each waking day was 12.3 hours. Employees who were sedentary for more than 13 hours a day were twice as likely to die prematurely as those who were inactive for 11.5 hours. The authors concluded that sitting in an office for long periods has a similar effect to smoking and ought to come with a health warning.

When researchers at University College London looked at 85,000 workers, mainly middle-aged men and women, they found a correlation between overwork and cardiovascular problems, especially an irregular heartbeat, or atrial fibrillation, which increases the chances of a stroke five-fold.

Labour unions are increasingly raising concerns about excessive work, too, especially its impact on relationships and physical and mental health. Take the case of the IG Metall union in Germany. Last week, 15,000 workers (who manufacture car parts for firms such as Porsche) called a strike, demanding a 28-hour work week with unchanged pay and conditions. It’s not about indolence, they say, but self-protection: they don’t want to die before their time. Science is on their side: research from the Australian National University recently found that working anything over 39 hours a week is a risk to wellbeing.

Is there a healthy and acceptable level of work? According to US researcher Alex Soojung-Kim Pang, most modern employees are productive for about four hours a day: the rest is padding and huge amounts of worry. Pang argues that the workday could easily be scaled back without undermining standards of living or prosperity. […]

Other studies back up this observation. The Swedish government, for example, funded an experiment where retirement home nurses worked six-hour days and still received an eight-hour salary. The result? Less sick leave, less stress, and a jump in productivity.

All this is encouraging as far as it goes. But almost all of these studies focus on the problem from a numerical point of view – the amount of time spent working each day, year-in and year-out. We need to go further and begin to look at the conditions of paid employment. If a job is wretched and overly stressful, even a few hours of it can be an existential nightmare. Someone who relishes working on their car at the weekend, for example, might find the same thing intolerable in a large factory, even for a short period. All the freedom, creativity and craft are sucked out of the activity. It becomes an externally imposed chore rather than a moment of release.

Why is this important?

Because there is a danger that merely reducing working hours will not change much, when it comes to health, if jobs are intrinsically disenfranchising. In order to make jobs more conducive to our mental and physiological welfare, much less work is definitely essential. So too are jobs of a better kind, where hierarchies are less authoritarian and tasks are more varied and meaningful.

Capitalism doesn’t have a great track record for creating jobs such as these, unfortunately. More than a third of British workers think their jobs are meaningless, according to a survey by YouGov. And if morale is that low, it doesn’t matter how many gym vouchers, mindfulness programmes and baskets of organic fruit employers throw at them. Even the most committed employee will feel that something is fundamentally missing. A life.

Most Americans Don’t Know Real Reason Japan Was Bombed

The United States’ bombing of Japan in the Second World War was a demonstration of psychopathic brutality. It was unnecessary, as Japan was already defeated, but it was meant to send a message to the Soviets. Before the dust had settled from the savagery, the power-mongers among the Allied leadership were already planning for a Third World War (Cold War Ideology and Self-Fulfilling Prophecies), even though the beleaguered Soviets, having borne the brunt of the destruction and death toll in defeating the Nazis, had no interest in more war.

The United States, in particular, having come out of the war wealthier, thought the Soviets would be an easy target and so sought to kick its former ally while it was still down. The US, in a fit of paranoia and psychosis, schemed to drop hundreds of atomic bombs on Russia, to eliminate the Soviets before they could get the chance to develop their own nuclear weapons. Yet Stalin never planned, much less intended, to attack the West, nor did he think the Soviets had the capacity to do so. The archives opened after the Soviet collapse showed that Stalin simply wanted to develop a trading partnership with the West, as he had stated was his intention. Through the intervention of spies, the Soviets did start their own nuclear program and then demonstrated their capacity. So a second nuclear attack by the United States was narrowly averted, and the Third World War was downgraded to the Cold War (see article and book at the end of the post).

This topic has come up before in this blog, but let’s come at it from a different angle. Consider General Douglas MacArthur. He was no pacifist, or anything close to approximating one. He was a megalomaniac with good PR, a bully and a jerk, an authoritarian and would-be strongman hungering for power and fame. He “publicly lacked introspection. He was also vain, borderline corrupt, ambitious and prone to feuds” (Andrew Fe, Why was General MacArthur called “Dugout Doug?”). He was also guilty of insubordination, always certain he was right; and when events went well under his command, it was often because he had taken credit for other people’s ideas, plans, and actions. His arrogance eventually led to his removal from command, which ended his career.

He was despised by many who worked with him and served under him. “President Harry Truman considered MacArthur a glory-seeking egomaniac, describing him at one point as ‘God’s right hand man’” (Alpha History, Douglas MacArthur). Dwight Eisenhower, who knew him well from years of army service, “disliked MacArthur for his vanity, his penchant for theatrics, and for what Eisenhower perceived as ‘irrational’ behavior” (National Park Service, Most Disliked Contemporaries). MacArthur loved war and had a psychopathic level of disregard for the lives of others, sometimes to the extent of seeking victory at any cost. Two examples demonstrate this, one before the Second World War and the other after it.

Early in his career with Eisenhower and George S. Patton under his command, there was the infamous attack on the Bonus Army camp, consisting of WWI veterans — along with their families — protesting for payment of the money they were owed by the federal government (Mickey Z., The Bonus Army). He was ordered to remove the protesters but to do so non-violently. Instead, as became a pattern with him, he disobeyed those orders by having the protesters gassed and the camp trampled and torched. This led to the death of several people, including an infant. This was one of his rare PR disasters, to say the least. And trying to sue journalists for libel didn’t help.

The later example was in 1950. In opposition to President Harry Truman, “MacArthur favored waging all-out war against China. He wanted to drop 20 to 30 atomic bombs on Manchuria, lay a “radioactive belt of nuclear-contaminated material” to sever North Korea from China, and use Chinese Nationalist and American forces to annihilate the million or so Communist Chinese troops in North Korea” (Max Boot, He Has Returned). Some feared that, if the General had his way, he might start another world war… or rather maybe the fear was about China not being the preferred enemy some of the ruling elite wanted to target for the next world war.

Certainly, he was not a nice guy nor did he have any respect for democracy, human rights, or any other such liberal values. If he had been born in Germany instead, he would have made not merely a good Nazi but a great Nazi. He was a right-wing reactionary and violent imperialist, as he was raised to be by his military father who modeled imperialist aspirations (Rethinking History, Rating General Douglas MacArthur). He felt no sympathy or pity for enemies. Consider how he was willing to treat his fellow citizens, including some veterans in the Bonus Army who served beside him in the previous world war. His only loyalty was to his own sense of greatness and the military-industry that promoted him into power.

But what did General MacArthur, right-wing authoritarian that he was, think about dropping atomic bombs on an already defeated Japan? He thought it an unnecessary and cruel act toward a helpless civilian population consisting mostly of women, children and the elderly; an opinion he shared with many other military leaders at the time. Besides, as Norman Cousins, consultant to General MacArthur during the occupation of Japan, wrote, “MacArthur… saw no military justification for dropping of the bomb. The war might have ended weeks earlier, he said, if the United States had agreed, as it later did anyway, to the retention of the institution of the emperor” (quoted in Cameron Reilly’s The Psychopath Epidemic).

There was no reason, in his mind, to destroy a country when it was already defeated and instead could serve the purposes of the American Empire. For all of his love of war and violence, he showed no interest in vengeance or public humiliation toward the Japanese people. After the war, he was essentially made an imperial administrator and colonial governor of Japan, and he ruled with paternalistic care and fair-minded understanding. War was one thing and ruling another. Even an authoritarian should be able to tell the difference between these two.

The reasons given for incinerating two large cities and their populations made no sense; at that point, the country couldn’t have fought back even if its leadership had wanted to. What MacArthur understood was that the Japanese simply wanted to save face as much as possible while coming to terms with defeat and negotiating their surrender. Further violence was simply psychopathic brutality. There is no way of getting around that ugly truth. So, why have Americans been lied to and indoctrinated to believe otherwise for generations since? Because the real reasons couldn’t be given.

The atomic bombing wasn’t an act to end a war but to start another one, this time against the Soviets. To honestly and openly declare a new war before the last war had even ended would not have gone over well with the American people. And once this action was taken it could never be revealed, not even when all those involved had long been dead. Propaganda narratives, once sustained long enough, take on a life of their own. The tide is slowly turning, though. As each generation passes, fewer and fewer remain who believe it was justified, from 85 percent in 1945 to 56 percent in 2015.

When the last generation raised on WWII propaganda dies, that percentage will finally drop below the 50 percent mark, and maybe we will then have an honest discussion about the devastating results of a moral failure that didn’t end with those atomic bombs but has been repeated in so many ways since. The crimes against humanity in the bombing of Japan were echoed in the travesties of the Vietnam War and the Iraq War. Millions upon millions have died over the decades from various military actions by the Pentagon and covert operations by the CIA, combined with sanctions that are themselves considered declarations of war. Sanctions, by the way, were what incited the Japanese to attack the United States. In enforcing sanctions against a foreign government, the United States entered the war of its own volition, effectively declaring war against Japan, and then acted surprised when the Japanese defended themselves.

All combined, through direct and indirect means, the body count of innocents sacrificed since American imperial aspirations began possibly adds up to hundreds of millions. This easily matches the levels of atrocity seen in the most brutal regimes of the past (Investing in Violence and Death, Endless Outrage, Evil Empire, & State and Non-State Violence Compared). The costs are high. When will there be a moral accounting?

* * *

Hiroshima, Nagasaki, and the Spies Who Kept a Criminal US with a Nuclear Monopoly from Making More of Them
by Dave Lindorff

It was the start of the nuclear age. Both bombs dropped on Japan were war crimes of the first order, particularly because we now know that the Japanese government, which at that time was having all its major cities destroyed by incendiary bombs that turned their mostly wooden structures into towering firestorms, was even before Aug. 6, desperately trying to surrender via entreaties through the Swiss government.

The Big Lie is that the bomb was dropped to save US troops from having to invade Japan. In fact, there was no need to invade. Japan was finished, surrounded, the Russians attacking finally from the north, its air force and navy destroyed, and its cities being systematically torched.

Actually, though, the US didn’t yet want Japan to surrender. Washington and President Harry Truman wanted to test their two new super weapons on real urban targets and, even more importantly, wanted to send a stark message to the Soviet Union, the supposed World War II ally that US war strategists and national security staff actually viewed all through the conflict as America’s next existential enemy.

As authors Michio Kaku and Daniel Axelrod, two theoretical physicists, wrote in their frightening, disturbing and well researched book To Win a Nuclear War: The Pentagon’s Secret War Plans (South End Press, 1987), the US began treacherously planning to use its newly developed super weapon, the atom bomb, against the war-ravaged Soviet Union even before the war had ended in Europe. Indeed, a first plan, to drop 20 to 30 Hiroshima-sized bombs on 20 Russian cities, code-named JIC 329/1, was intended to be launched in December 1945. Fortunately, that never happened, because at that point the US had only two atomic bombs in its “stockpile.”

They describe how, as the production of new bombs sped up (9 nuclear devices by June 1946, 35 by March 1948, and 150 by January 1949), new plans with such creepy names as Operations Pincher, Broiler, Bushwacker, Sizzle, and Dropshot were developed, and the number of Soviet cities to be vaporized grew from 20 to 200.

Professors Kaku and Axelrod write that Pentagon strategists were reluctant to go forward with these early planned attacks not because of any unwillingness to launch an unprovoked war, but out of a fear that the destruction of Soviet targets would be inadequate to prevent the Soviets’ still-powerful and battle-tested Red Army from responding by overrunning war-ravaged Europe, a counterattack the US would not have been able to prevent. These strategists recommended that no attack be made until the US military had at least 300 nukes at its disposal (remember, at this time there were no hydrogen bombs, and the size of a fission bomb was constrained by the small size of the core’s critical mass). It was felt, in fact, that the bombs were so limited in power that it could take two or three to decimate a city like Moscow or Leningrad.

So the plan for wiping out the Soviet Union was gradually deferred to January 1953, by which time it was estimated that there would be 400 larger Nagasaki bombs available, and that even if only 100 of these 25-50 kiloton weapons hit their targets it could “implement the concept of ‘killing a nation.’”

The reason this epic US holocaust never came to pass is now clear: to the astonishment of US planners and even many of the US nuclear scientists who had worked so hard in the Manhattan Project to invent and produce the atomic bomb (two types of atomic bomb, really), on August 29, 1949, the Soviets exploded their own bomb, the “First Lightning”: an almost exact replica of the “Fat Man” plutonium bomb that had destroyed Nagasaki four years earlier.

And the reason the Soviet scientists, brilliant as they were but financially strapped by the massive destruction their country had suffered during the war, were able to create their bomb in roughly the same amount of time as the hugely funded Manhattan Project was primarily the information provided by a pair of scientists working at Los Alamos, who offered detailed plans, secrets about how to work with the very tricky and unpredictable element plutonium, and how to get a plutonium core to explode in a colossal fireball instead of just producing a pathetic “fizzle.”

The Psychopath Epidemic
by Cameron Reilly

Another of my favorite examples of the power of brainwashing by the military-industrial complex is that of the bombings of Hiroshima and Nagasaki by the United States in 1945. Within the first two to four months of the attacks, the acute effects killed 90,000-166,000 people in Hiroshima and 60,000-80,000 in Nagasaki, with roughly half of the deaths in each city occurring on the first day. The vast majority of the casualties were civilians.

In the seventy-three years that have passed since Hiroshima, poll after poll has shown that most Americans think that the bombings were wholly justified. According to a survey in 2015, fifty-six percent of Americans agreed that the attacks were justified, significantly less than the 85 percent who agreed in 1945 but still high considering the facts don’t support the conclusion.

The reasons most Americans cite for the justification of the bombings are that they stopped the war with Japan; that Japan started the war with the attack on Pearl Harbor and deserved punishment; and that the attacks prevented Americans from having to invade Japan, which would have caused more deaths on both sides. These “facts” are so deeply ingrained in most American minds that they believe them to be fundamental truths. Unfortunately, they don’t stand up to history.

The truth is that the United States started the war with Japan when it froze Japanese assets in the United States and embargoed the sale of oil the country needed. Economic sanctions then, as now, are considered acts of war.

As for using the bombings to end the war, the U.S. was well aware by mid-1945 that the Japanese were prepared to surrender and expected it would happen when the USSR entered the war against them in August 1945, as pre-arranged between Truman and Stalin. The primary sticking point for the Japanese was the status of Emperor Hirohito. He was considered a god by his people, and it was impossible for them to hand him over for execution by their enemies. It would be like American Christians handing over Jesus, or Italian Catholics handing over the pope. The Allies refused to clarify what Hirohito’s status would be post-surrender. In the end, they left him in place as emperor anyway.

One American who didn’t think using the atom bomb was necessary was Dwight Eisenhower, future president and, at the time, the supreme allied commander in Europe. He believed:

Japan was already defeated and that dropping the bomb was completely unnecessary, and… the use of a weapon whose employment was, I thought, no longer mandatory as a measure to save American lives. It was my belief that Japan was, at that very moment, seeking some way to surrender with a minimum loss of “face.”…

Admiral William Leahy, chief of staff to Presidents Franklin Roosevelt and Harry Truman, agreed.

It is my opinion that the use of this barbarous weapon at Hiroshima and Nagasaki was of no material assistance in our war against Japan. The Japanese were already defeated and ready to surrender because of the effective sea blockade and the successful bombing with conventional weapons. My own feeling was that in being the first to use it, we had adopted an ethical standard common to the barbarians of the Dark Ages. I was not taught to make war in that fashion, and wars cannot be won by destroying women and children.

Norman Cousins was a consultant to General MacArthur during the American occupation of Japan. Cousins wrote that

MacArthur… saw no military justification for dropping of the bomb. The war might have ended weeks earlier, he said, if the United States had agreed, as it later did anyway, to the retention of the institution of the emperor.

If General Dwight Eisenhower, General Douglas MacArthur, and Admiral William Leahy all believed dropping atom bombs on Japan was unnecessary, why do so many American civilians still today think it was?

Probably because they have been told to think that, repeatedly, in a carefully orchestrated propaganda campaign, enforced by the military-industrial complex (that Eisenhower tried to warn us about), that has run continuously since 1945.

As recently as 1995, the fiftieth anniversary of the bombings of Hiroshima and Nagasaki, the Smithsonian Institution was forced to censor its retrospective on the attacks under fierce pressure from Congress and the media because it contained “text that would have raised questions about the morality of the decision to drop the bomb.”

On August 15, 1945, about a week after the bombing of Nagasaki, Truman tasked the U.S. Strategic Bombing Survey to conduct a study on the effectiveness of the aerial attacks on Japan, both conventional and atomic. Did they affect the Japanese surrender?

The survey team included hundreds of American officers, civilians, and enlisted men, based in Japan. They interviewed 700 Japanese military, government, and industry officials and had access to hundreds of Japanese wartime documents.

Less than a year later, they published their conclusion—that Japan would likely have surrendered in 1945 without the Soviet declaration of war and without an American invasion: “It cannot be said that the atomic bomb convinced the leaders who effected the peace of the necessity of surrender. The decision to surrender, influenced in part by knowledge of the low state of popular morale, had been taken at least as early as 26 June at a meeting of the Supreme War Guidance Council in the presence of the Emperor.”

June 26 was six weeks before the first bomb was dropped on Hiroshima. The emperor wanted to surrender and had been trying to open up discussions with the Soviets, the only country with whom they still had diplomatic relations.

According to many scholars, the final straw would have come on August 15, when the Soviet Union, as agreed months previously with the Truman administration, was planning to declare that it was entering the war against Japan.

But instead of waiting, Truman dropped the first atomic bomb on Japan on August 6.

The proposed American invasion of the home islands wasn’t scheduled until November.

Mass Delusion of Mass Tree Planting

Mass tree planting is another example, as with EAT-Lancet and corporate veganism, of how good intentions can get co-opted by bad interests. Planting trees could be beneficial or not so much. It depends on how it is done. Still, even if done well, it would never be as beneficial as protecting and replenishing the forests that already exist as living ecosystems.

But governments and corporations like the idea of planting trees because it is a way of greenwashing the problem and so continuing on with the status quo, continuing with the exploitation of native lands and the destruction of indigenous populations. Just plant more trees, largely as monocrop tree plantations, and pretend the ongoing ecocide does not matter.

My brother is a naturalist who has worked in several states around the country. When I shared the below article with him, he responded that,

“Yep, that’s been a joke among naturalists for a while! It’s kind of like the north woods of MN and WI. What was once an old growth pine forest is now essentially a tree plantation of nothing but maples and birch grown for paper pulp. Where there are still pines, they are in perfect rows and never more than 30 years old. It’s some of the most depressing “wilderness” I’ve ever seen.”

Holistic, sustainable and regenerative multi-use land management would be far better. That is essentially what hunter-gatherers do with the land they live on. It can also be done with mixed farming such as rotating animals between pastures that might also have trees for production of fruit and nuts while allowing natural habitat for wildlife.

Here is the key question: Does the land have healthy soil that absorbs rainfall and supports a living ecosystem with diverse species? If not, it is not an environmental solution to ecological destruction, collapse, and climate change.

* * *

Planting 1 Trillion Trees Might Not Actually Be A Good Idea
by Justine Calma

But the science behind the campaign, a study that claims 1 trillion trees can significantly reduce greenhouse gases, is disputed. “People are getting caught up in the wrong solution,” says Forrest Fleischman, who teaches natural resources policy at the University of Minnesota and has spent years studying the effects of tree planting in India. “Instead of that guy from Salesforce saying, ‘I’m going to put money into planting a trillion trees,’ I’d like him to go and say, ‘I’m going to put my money into helping indigenous people in the Amazon defend their lands,’” Fleischman says. “That’s going to have a bigger impact.”