Antipsychotics: Effects and Experience

Many people now know that antidepressants are overprescribed. Studies have shown that most of those taking them receive no benefit at all, and the drugs carry many negative side effects, including suicidality. But few are aware of how widely prescribed antipsychotics also are. They aren’t used only for severe conditions such as schizophrenia; they are often given for conditions that have nothing to do with psychosis, such as depression and personality disorders. Worse still, they are regularly given to children in foster care to make them more manageable.

That was the case with me, in treating my depression. Along with the antidepressant Paxil, I was put on the antipsychotic Risperdal. I don’t recall being given an explanation at the time, and I wasn’t in the mindset back then to interrogate the doctors. Antipsychotics are powerful tranquilizers that shut down the mind and increase sleep. Basically, it’s an attempt to solve the problem by making the individual utterly useless to the world, entirely disconnected, calmed into mindlessness and numbness. That is a rather extreme strategy. Rather than seeking healing, it treats the suffering person as the problem to be solved.

Those on antipsychotics can find themselves sleeping all the time, struggling to concentrate, and, in many cases, unable to work. The drugs can make people inert and immobile, often causing weight gain in the process. Yet trying to get off of them can bring serious withdrawal symptoms. The problem is that prescribers rarely tell patients about the side effects or the long-term consequences of antipsychotic use, such as what some experience as permanent impairment of mental ability. This is partly because drug companies have suppressed information about the negatives and promoted antipsychotics as miracle drugs.

Be highly cautious with any psychiatric medications, including antidepressants but especially antipsychotics. These are potent chemicals to be used only in the most desperate of cases, not handed out as cavalierly as they are now. As with diet, always question a healthcare professional recommending any kind of psychiatric medication for you or a loved one. And most important, research these drugs in immense detail before taking them. Know what you’re dealing with and learn from the experiences of others.

Here is an interesting anecdote. Ketogenic diets have been used to medically treat diverse neurocognitive disorders, originally epileptic seizures, but they are also used for weight loss. There was an older lady, maybe in her 70s, who had been diagnosed with schizophrenia as a teenager. The long-term use of antipsychotics had caused her to become overweight.

She went to Dr. Eric Westman, who trained under Dr. Robert Atkins. She was put on the keto diet and did lose weight, but she was surprised to find her schizophrenic symptoms also diminish, to such an extent that she was able to stop taking the antipsychotics. So, how many doctors recommend a ketogenic diet before prescribing dangerous drugs? The answer is next to zero. There simply is no incentive for doctors to do so within our present medical system, and there are many incentives to continue with the overprescription of drugs.

No doctor ever suggested that I try the keto diet or anything similar, despite the fact that none of the prescribed drugs helped. Yet I too had the odd experience of going on the keto diet to lose weight only to find that I had also lost decades of depression in the process. The depressive funks, irritability, and brooding simply disappeared. That is great news for the patient but a bad business model. Drug companies can’t make any profit from diets. And doctors who step out of line with non-standard practices open themselves up to liability and punishment by medical boards, sometimes having their licenses revoked.

So, psychiatric medications continue to be handed out like candy. The current young generation is on more prescription drugs than any before it. They are guinea pigs for the drug companies. Who is going to be held accountable when this mass experiment on the public inevitably goes horribly wrong and we discover the long-term consequences for the developing brains and bodies of children and young adults?

* * *

Largest Survey of Antipsychotic Experiences Reveals Negative Results
By Ayurdhi Dhar, PhD

While studies have attributed cognitive decline and stunted recovery to antipsychotic use, less attention has been paid to patients’ first-person experiences on these drugs. In one case where a psychiatrist tried the drugs and documented his experience, he wrote:

“I can’t believe I have patients walking around on 800mg of this stuff. There’s no way in good conscience I could dose this BID (sic) unless a patient consented to 20 hours of sleep a day. I’m sure there’s a niche market for this med though. There has to be a patient population that doesn’t want to feel emotions, work, have sex, take care of their homes, read, drive, go do things, and want to drop their IQ by 100 points.”

Other adverse effects of antipsychotics include poor heart health, brain atrophy, and increased mortality. Only recently have researchers started exploring patient experiences on antipsychotic medication. There is some evidence to suggest that some service users believe that they undermine recovery. However, these first-person reports do not play a significant part in how these drugs are evaluated. […]

Read and Sacia found that only 14.3% reported that their experience on antipsychotics was purely positive, 27.9% of the participants had mixed experiences, and the majority of participants (57.7%) only reported negative results.

Around 22% of participants reported drug effects as more positive than negative on the Overall Antipsychotic Rating scale, with nearly 6% calling their experience “extremely positive.” Most participants had difficulty articulating what was positive about their experience, but around 14 people noted a reduction in symptoms, and 14 others noted it helped them sleep.

Of those who stated they had adverse effects, 65% reported withdrawal symptoms, and 58% reported suicidality. In total, 316 participants complained about adverse effects from the drugs. These included weight gain, akathisia, emotional numbing, cognitive difficulties, and relationship problems. […]

Similar results were reported in a recent review, which found that while some patients reported a reduction in symptoms on antipsychotics, others stated that they caused sedation, emotional blunting, loss of autonomy, and a sense of resignation. Participants in the current survey also complained of the lingering adverse effects of antipsychotics, long after they had discontinued their use.

Importantly, these negative themes also included negative interactions with prescribers of the medication. Participants reported a lack of information about side-effects and withdrawal effects, lack of support from prescribers, and lack of knowledge around alternatives; some noted that they were misdiagnosed, and the antipsychotics made matters worse.

One participant said: “I was not warned about the permanent/semi-permanent effects of antipsychotics which I got.” Another noted: “Most doctors do not have a clue. They turn their backs on suffering patients, denying the existence of withdrawal damage.”

This is an important finding as previous research has shown that positive relationships with one’s mental health provider are considered essential to recovery by many patients experiencing first-episode psychosis.

Diet and Industrialization, Gender and Class

Below are a couple of articles about the shift in diet since the 19th century. Earlier Americans ate a lot of meat, lard, and butter. It’s how everyone ate — women and men, adults and children — as that was what was available and everyone ate meals together. Then there was a decline in consumption of both red meat and lard in the early 20th century (dairy has also seen a decline). The changes created a divergence in who was eating what.

It’s interesting that, amid moral panic and identity crisis, diets became gendered as a way of reinforcing social roles and the social order. It may seem strange that industrialization and the gendering of food happened simultaneously, although maybe it’s not so strange: it was largely industrialization, in altering society so dramatically, that caused the sense of panic and crisis. So diet also became heavily politicized and used for social engineering, a self-conscious campaign to create a new kind of society built on individualism and the nuclear family.

This period also saw the rise of the middle class as an ideal, along with increasing class anxiety and class war. This led to the popularity of cookbooks within bourgeois culture, as the foods one ate came to define not only gender identity but also class identity. As grains and sugar were only becoming widely available in the 19th century with improved agriculture and international trade, the first popular cookbooks were focused on dessert recipes (Liz Susman Karp, Eliza Leslie: The Most Influential Cookbook Writer of the 19th Century). Before that, desserts had been limited to the rich.

Capitalism was transforming everything. The emerging industrial diet was self-consciously created to not only sell products but to sell an identity and lifestyle. It was an entire vision of what defined the good life. Diet became an indicator of one’s place in society, what one aspired toward or was expected to conform to.

* * *

How Steak Became Manly and Salads Became Feminine
Food didn’t become gendered until the late 19th century.
by Paul Freedman

Before the Civil War, the whole family ate the same things together. The era’s best-selling household manuals and cookbooks never indicated that husbands had special tastes that women should indulge.

Even though “women’s restaurants” – spaces set apart for ladies to dine unaccompanied by men – were commonplace, they nonetheless served the same dishes as the men’s dining room: offal, calf’s heads, turtles and roast meat.

Beginning in the 1870s, shifting social norms – like the entry of women into the workplace – gave women more opportunities to dine without men and in the company of female friends or co-workers.

As more women spent time outside of the home, however, they were still expected to congregate in gender-specific places.

Chain restaurants geared toward women, such as Schrafft’s, proliferated. They created alcohol-free safe spaces for women to lunch without experiencing the rowdiness of workingmen’s cafés or free-lunch bars, where patrons could get a free midday meal as long as they bought a beer (or two or three).

It was during this period that the notion that some foods were more appropriate for women started to emerge. Magazines and newspaper advice columns identified fish and white meat with minimal sauce, as well as new products like packaged cottage cheese, as “female foods.” And of course, there were desserts and sweets, which women, supposedly, couldn’t resist.

How Crisco toppled lard – and made Americans believers in industrial food
by Helen Zoe Veit

For decades, Crisco had only one ingredient, cottonseed oil. But most consumers never knew that. That ignorance was no accident.

A century ago, Crisco’s marketers pioneered revolutionary advertising techniques that encouraged consumers not to worry about ingredients and instead to put their trust in reliable brands. It was a successful strategy that other companies would eventually copy. […]

It was only after a chemist named David Wesson pioneered industrial bleaching and deodorizing techniques in the late 19th century that cottonseed oil became clear, tasteless and neutral-smelling enough to appeal to consumers. Soon, companies were selling cottonseed oil by itself as a liquid or mixing it with animal fats to make cheap, solid shortenings, sold in pails to resemble lard.

Shortening’s main rival was lard. Earlier generations of Americans had produced lard at home after autumn pig slaughters, but by the late 19th century meat processing companies were making lard on an industrial scale. Lard had a noticeable pork taste, but there’s not much evidence that 19th-century Americans objected to it, even in cakes and pies. Instead, its issue was cost. While lard prices stayed relatively high through the early 20th century, cottonseed oil was abundant and cheap. […]

In just five years, Americans were annually buying more than 60 million cans of Crisco, the equivalent of three cans for every family in the country. Within a generation, lard went from being a major part of American diets to an old-fashioned ingredient. […]

In the decades that followed Crisco’s launch, other companies followed its lead, introducing products like Spam, Cheetos and Froot Loops with little or no reference to their ingredients.

Once ingredient labeling was mandated in the U.S. in the late 1960s, the multisyllabic ingredients in many highly processed foods may have mystified consumers. But for the most part, they kept on eating.

So if you don’t find it strange to eat foods whose ingredients you don’t know or understand, you have Crisco partly to thank.

 

Red Flag of Twin Studies

Consider this a public service announcement. The moment someone turns to twin studies as reliable and meaningful evidence, it’s a dead giveaway about the kind of person they are. And when someone uses this research in the belief that they are proving genetic causes, it demonstrates a number of things.

First and foremost, it shows they don’t understand what heritability is. It is a population-level measure and can tell us nothing about individuals, much less disentangle genetics from epigenetics and environment. Heritability does not mean genetic inheritance, although even some scientists who know better sometimes talk as if they were the same thing. The fact of the matter is that, beyond basic shared traits (e.g., two eyes, instead of one or three), there is little research proving direct genetic causation, which has typically been demonstrated only for a few rare diseases. All that heritability can do is point to the possibility of genetic causes, and all that allows is the articulation of a hypothesis to be tested by actual genetic research, which is rarely done.
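To make the population-level point concrete, here is a minimal sketch in Python, using invented numbers rather than real data, of how a heritability-style estimate is just a ratio of variances across a whole group. The same genetic spread yields a different ratio the moment the environment becomes more varied, and nothing in the calculation says anything about what causes any one individual’s trait.

    import random

    random.seed(0)

    def simulate_population(n, genetic_sd, environment_sd):
        # Toy model: each person's trait is the sum of a "genetic" and an
        # "environmental" component. All numbers here are invented.
        genetic = [random.gauss(0, genetic_sd) for _ in range(n)]
        environment = [random.gauss(0, environment_sd) for _ in range(n)]
        phenotype = [g + e for g, e in zip(genetic, environment)]
        return genetic, phenotype

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    # A heritability-style estimate is just Var(genetic) / Var(phenotype),
    # computed over the whole group.
    genetic, phenotype = simulate_population(10000, genetic_sd=1.0, environment_sd=1.0)
    print(round(variance(genetic) / variance(phenotype), 2))  # roughly 0.5

    # Same genetic spread, more varied environment: the ratio drops even
    # though no individual's genome changed. The statistic describes the
    # population and its conditions, not any person.
    genetic, phenotype = simulate_population(10000, genetic_sd=1.0, environment_sd=2.0)
    print(round(variance(genetic) / variance(phenotype), 2))  # roughly 0.2

That is all the word means in this context: a description of variation across a particular population under particular conditions, not a verdict on any individual and not a demonstration of genetic causation.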

And second, this gives away the ideological game being played. Either the person ideologically identifies as a eugenicist, racist, etc., or has unconsciously assimilated eugenicist or racist ideology without realizing it. In either case, there is next to zero chance that any worthwhile discussion will follow. It doesn’t matter what the individual’s motivations are or whether they are even aware of them. It’s probably best to just walk away. You don’t need to call them out, much less call them a racist or whatever. You know all that you need to know at that point. Just walk away. And if you don’t walk away, go into the situation with your eyes wide open, for you are entering a battlefield of ideological rhetoric.

So, keep this in mind. Twin studies are some of the worst research around, the opposite of how they get portrayed by ideologues as strong evidence. Treat them as you would the low-quality epidemiological research in nutrition studies (such as the disproven Seven Countries Study and China Study). They are evidence, at best, to be considered within a larger context of information, not to be taken alone as significant and meaningful. Besides, the twin studies are so poorly designed and so few in number that not much can be said about them. If anything, all they are evidence for is how to do science badly. That isn’t to say that, theoretically, twin studies couldn’t be designed well, but as far as I know it hasn’t happened yet. It’s not easy research to do, for obvious reasons, as humans are complex creatures living in complex conditions.

For someone to even mention twin studies, other than to criticize them, is a red flag. Scrutinize carefully anything such a person says. Or better yet, when possible, simply ignore them. The problem with weak evidence repeated as if true is that it never really is about the evidence in the first place. Twin studies are one of those things that, like dog-whistle politics, stand in for something else. They are what I call a symbolic conflation, a distraction tactic pointing away from the real issue. Few people talking about twin studies actually care about either twins or science. You aren’t going to convince a believer that their beliefs are false. If anything, they will become even more vehement in their beliefs and you’ll end up frustrated.

* * *

What Genetics Does And Doesn’t Tell Us
Heritability & Inheritance, Genetics & Epigenetics, Etc
Unseen Influences: Race, Gender, and Twins
Weak Evidence, Weak Argument: Race, IQ, Adoption
Identically Different: A Scientist Changes His Mind

Exploding the “Separated-at-Birth” Twin Study Myth
by Jay Joseph, PsyD

“The reader whose knowledge of separated twin studies comes only from the secondary accounts provided by textbooks can have little idea of what, in the eyes of the original investigators, constitutes a pair of ‘separated’ twins”—Evolutionary geneticist Richard Lewontin, neurobiologist Steven Rose, and psychologist Leon Kamin in Not in Our Genes, 1984

“The Myth of the Separated Identical Twins”—Chapter title in sociologist Howard Taylor’s The IQ Game, 1980

Supporters of the nature (genetic) side of the “nature versus nurture” debate often cite studies of “reared-apart” or “separated” MZ twin pairs (identical, monozygotic) in support of their positions. In this article I present evidence that, in fact, most studied pairs of this type do not qualify as reared-apart or separated twins.

Other than several single-case and small multiple-case reports that have appeared since the 1920s, there have been only six published “twins reared apart” (TRA) studies. (The IQ TRA study by British psychologist Cyril Burt was discredited in the late 1970s on suspicions of fraud, and is no longer part of the TRA study literature.) The authors of these six studies assessed twin resemblance and calculated correlations for “intelligence” (IQ), “personality,” and other aspects of human behavior. In the first three studies—by Horatio Newman and colleagues in 1937 (United States, 19 MZ pairs), James Shields in 1962 (Great Britain, 44 MZ pairs), and Niels Juel-Nielsen in 1965 (Denmark, 12 MZ pairs)—the authors provided over 500 pages of detailed case-history information for the combined 75 MZ pairs they studied.

The three subsequent TRA studies were published in the 1980s and 1990s, and included Thomas J. Bouchard, Jr. and colleagues’ widely cited “Minnesota Study of Twins Reared Apart” (MISTRA), and studies performed in Sweden and Finland. In the Swedish study, the researchers defined twin pairs as “reared apart” if they had been “separated by the age of 11.” In the Finnish study, the average age at separation was 4.3 years, and 12 of the 30 “reared-apart” MZ pairs were separated between the ages of 6 and 10. In contrast to the original three studies, the authors of these more recent studies did not provide case-history information for the pairs they investigated. (The MISTRA researchers did publish a few selected case histories, some of which, like the famous “Three Identical Strangers” triplets, had already been publicized in the media.)

The Newman et al. and Shields studies were based on twins who had volunteered to participate after responding to media or researcher appeals to do so in the interest of scientific research. As Leon Kamin and other analysts pointed out long ago, however, TRA studies based on volunteer twins are plagued by similarity biases, in part because twins had to have known of each other’s existence to be able to participate in the study. Like the famous MISTRA “Firefighter Pair,” some twins discovered each other because of their behavioral similarities. The MISTRA researchers arrived at their conclusions in favor of genetics on the basis of a similarity-biased volunteer twin sample. […]

Contrary to the common contemporary claim that twin pairs found in TRA studies were “separated at birth”—which should mean that twins did not know each other or interact with each other between their near-birth separation and the time they were reunited for the study—the information provided by the original researchers shows that few if any MZ pairs fit this description. This is even more obvious in the 1962 Shields study. As seen in the tables below and in the case descriptions:

  • Some pairs were separated well after birth
  • Some pairs grew up nearby to each other and attended school together
  • Most pairs grew up in similar cultural and socioeconomic environments
  • Many pairs were raised by different members of the same family
  • Most pairs had varying degrees of contact while growing up
  • Some pairs had a close relationship as adults
  • Some pairs were reunited and lived together for periods of time

In other words, in addition to sharing a common prenatal environment and many similar postnatal environmental influences (described here), twin pairs found in volunteer-based TRA study samples were not “separated at birth” in the way that most people understand this term. The best way to describe this sample is to say that it consisted of partially reared-apart MZ twin pairs.

The Minnesota researchers have always denied access to independent researchers who wanted to inspect the unpublished MISTRA raw data and case history information, and we can safely assume that the volunteer MISTRA MZ twin pairs were no more “reared apart” than were the MZ pairs […]

The Large and Growing Caste of Permanent Underclass

The United States economy is in bad condition for much of the population, but you wouldn’t necessarily know that by watching the news or listening to the president, especially if you live in the comfortable economic segregation of a college town, a tech hub, a suburb, or a gentrified neighborhood. As the middle class shrinks, many fall into the working class and many others into poverty. The majority of Americans are some combination of unemployed, underemployed, and underpaid (Alt-Facts of Unemployment) — with almost half the population being low-wage workers and 40 million below the poverty line. No one knows the full unemployment rate, as the permanently unemployed are excluded from the data along with teens (Teen Unemployment). As for the number of homeless, there is no reliable data at all, but we do know that 6,300 Americans are evicted every day.

Most of these people can barely afford to pay their bills or else end up on welfare or in debt or, worse still, fall entirely through the cracks. For the worst off, those who don’t end up homeless often find themselves in prison or caught up in the legal system. This is because the desperately poor often turn to illegal means to get money: prostitution, selling drugs, petty theft, and other minor crimes (remember that Eric Garner was killed by police for illegally selling cigarettes on a sidewalk); whatever it takes to get by in the hope of avoiding the harshest fate. Even for the homeless to sleep in public or beg or rummage through the trash is a crime in many cities and, if not a crime, it can lead to constant harassment by police, as if life weren’t already hard enough for them.

Unsurprisingly, economic data is also not kept about the prison and jail population, as they are removed from society and made invisible (Invisible Problems of Invisible People). In some communities, the majority of men and many of the women are locked up or were at one time. When in prison, as with the permanently unemployed, they can be eliminated from the economic accounting of these communities and the nation. Imprison enough people and the official unemployment rate will go down, especially as it creates employment in the need to hire prison guards and in the various companies that serve prisons. But for those out on parole who can’t find work or housing, knowing that at least they are being recorded in the data is little comfort. Still others, sometimes entirely innocent, get tangled up in the legal system through police officers targeting minorities and the poor. Confessions coerced under threat are a further problem and, unlike in the movies, the poor rarely get adequate legal representation.

Once the poor and homeless are in the legal system, it can be hard to escape, for there are all kinds of fees, fines, and penalties that they can’t afford. In various ways, the criminal system, in punishing the victims of our oppressive society, harms not only individuals but breaks apart families and cripples entire communities. The war on drugs, in particular, has been a war on minorities and the poor. Rich white people have high rates of drug use, but nowhere near an equivalent rate of being stopped and frisked, arrested, and convicted for drug crimes. A punitive society is about maintaining class hierarchy and privilege while keeping the poor in their place. As Mother Jones put it, a child who picked up loose coal on the side of railroad tracks or a hobo who stole shoes would be arrested, but steal a whole railroad and you’ll become a United States Senator or be “heralded as a Napoleon of finance” (Size Matters).

A large part of the American population lives paycheck to paycheck, many of them in debt they will never be able to pay off, often from healthcare crises, since stress and poverty — especially a poverty diet — worsen health even further. But those who are lucky enough to avoid debt are constantly threatened by one misstep or accident. Imagine getting sick or injured when your employment gives you no sick days and you have a family to house and feed. Keep in mind that most people on welfare are also working and are only on it temporarily (On Welfare: Poverty, Unemployment, Health, Etc). Meanwhile, the biggest employers of the impoverished (Walmart, Amazon, etc.) are the biggest beneficiaries of the welfare money that is spent at their stores.

So, poverty is good for business, in maintaining both cheap labor subsidized by the government and consumerism likewise subsidized by the government. To the capitalist class, none of this is a problem but rather an opportunity for profit. That is, profit on top of the trillions of dollars given each year to individual industries, from the oil industry to the military-industrial complex, in direct and indirect subsidies, much of it hidden — that is to say, corporate welfare and socialism for the rich (Trillions Upon Trillions of Dollars, Investing in Violence and Death). These are vast sums of public wealth and public resources that, in a just society and functioning social democracy, would support the public good. That is not only money stolen from the general public, including from the poor, but opportunities lost for social improvement and economic reform.

Worse still, it isn’t only theft from living generations, for the greater costs are externalized onto the future that will be inherited by the children growing up now and those not yet born. The United Nations is not an anti-capitalist organization in the slightest, but a recent UN report came to a strong conclusion: “The report found that when you took the externalized costs into effect, essentially NONE of the industries was actually making a profit. The huge profit margins being made by the world’s most profitable industries (oil, meat, tobacco, mining, electronics) is being paid for against the future: we are trading long term sustainability for the benefit of shareholders. Sometimes the environmental costs vastly outweighed revenue, meaning that these industries would be constantly losing money had they actually been paying for the ecological damage and strain they were causing.” Large sectors of the global economy are a net loss to society. Their private profits and supposed social benefits are a mirage. It is theft hidden behind the false pretenses of a supposedly free market that in reality is not free, in any sense of the word.

As a brief side note, let’s make clear the immensity of this theft, which extends to global proportions. Big biz and big gov are so closely aligned as to be essentially the same entity, and which controls which is not always clear, be it fascism or inverted totalitarianism. When Western governments destroyed Iraq and Libya, it was done to steal their wealth and resources, oil in the case of one and gold in the other, and it cost the lives of millions of innocent people. When Hillary Clinton as Secretary of State intervened in Haiti to suppress wages and maintain cheap labor for American corporations, that was not only theft but authoritarian oppression in a country that once started a revolution to overthrow the colonial oppressors who had enslaved its people.

These are but a few examples of endless acts of theft, not always violent but often so. All combined, we are talking about possibly trillions of dollars stolen from the worldwide population every year. And it is not a new phenomenon, as it goes back to the 1800s with the violent theft of land from Native Americans and Mexicans. General Smedley Butler wrote scathingly about these imperial wars and “Dollar Diplomacy” on behalf of “Big Business, for Wall Street, and for the Bankers” (Danny Sjursen, Where Have You Gone, Smedley Butler?). Poverty doesn’t happen naturally. It is created and enforced, and the ruling elite responsible are homicidal psychopaths. Such acts are a crime against humanity; more than that, they are pure evil.

Work is supposedly the definition of worth in our society, and yet the richer one is, the less one works. Meanwhile, the poor work as much as they are able when they can find work, often working themselves to the point of exhaustion, sickness, and early death. Even among the homeless, many are working or were recently employed, a surprising number of them low-paid professionals such as public school teachers, Uber drivers, and gig workers who can’t afford housing in the high-priced urban areas where the jobs are to be found. Somehow merely not being unemployed is supposed to be a great boon, but working in constant fear about getting by day to day is not exactly a happy situation. Unlike in generations past, a job isn’t a guarantee of a good life, much less the so-called American Dream. Gone are the days when a single income from an entry-level factory job could support a family with several kids, a nice house, a new car, regular vacations, cheap healthcare, and a comfortable nest egg.

As this shows, the problem is far from limited to the poorest. These days most recent college graduates, the fortunate minority with higher education, are jobless or underemployed (Jordan Weissman, 53% of Recent College Grads Are Jobless or Underemployed—How?). Yet those without a college degree face far greater hardship. And it’s easy to forget that the United States citizenry remains largely uneducated, because with tuition going up few can afford college. Even most of those who do attend come out with massive debt, and they are the lucky ones. What once was a privilege has become a burden for many. A college education doesn’t guarantee a good job as it did in the past, but chances for employment are even worse without a degree — so, damned if you do and damned if you don’t.

It’s not only that wages have stagnated for most and, relative to inflation, dropped for many others. The costs of living (housing, food, etc.) have simultaneously gone up, not to mention the disappearance of job security and good benefits. Disparities in general have become vaster — disparities in wealth, healthcare, education, opportunities, resources, and political representation. The growing masses at the bottom of society are part of the permanent underclass, which is to say they are the American caste of untouchables or rather unspeakables (Barbara Ehrenreich: Poverty, Homelessness). The mainstream mentions them only when they seem a threat, such as in fears about them dragging down the economy or in scapegoating them for the election of Donald Trump as president or whatever other distraction of the moment. Simple human concern for the least among us, however, rarely comes up as a priority.

Anyway, why are we still idealizing a fully employed workforce (Bullshit Jobs) and so demonizing the unemployed (Worthless Non-Workers) at a time when many forms of work are becoming increasingly meaningless and unnecessary, coming close to obsolete? Such demonization doesn’t bode well for the future (Our Bleak Future: Robots and Mass Incarceration). It never really was about work but about social control. If the people are kept busy, tired, and stressed, they won’t have the time and energy for community organizing and labor organizing, democratic participation and political campaigning, protesting and rioting, or maybe even revolution. But it doesn’t have to be this way. If we ever harnessed a fraction of the human potential that is wasted and thrown away, if we used public wealth and public resources to invest in the citizenry and promote the public good, we could transform society overnight. What are we waiting for? Why do we tolerate and allow this moral wrongdoing to continue?

* * *

A commenter below shared a documentary (How poor people survive in the USA) from DW, a German public broadcaster. It is strange to watch foreign news reporting on the United States as if it were about a developing country in the middle of a crisis. Maybe that is because the United States is a developing country in the middle of a crisis. Many parts of this country have poverty, disease, and mortality rates as high as or higher than those seen in many countries that were once called third world. And inequality, once considered absolute proof of a banana republic, is now higher here than it ever was in the original banana republics. In fact, inequality — that is to say, concentrated wealth and power — has rarely, if ever, been this extreme in any society in history.

It’s not about a few poor people in various places but about the moral and democratic failure of an entire dysfunctional and corrupt system. And it’s not limited to the obvious forms of poverty and inequality, for the consequences to the victims are harsh: from parasite load to toxic exposure, conditions that cause physical sickness and mental illness, stunt neurocognitive development and lower IQ, increase rates of premature puberty and behavioral problems, and generally destroy lives. We’ve known all of this for decades. It even occasionally, if only briefly and superficially, shows up in corporate media reporting. We can’t honestly claim ignorance as a defense of our apathy and indifference, of our collective failure.

Poverty isn’t a lack of character. It’s a lack of cash
by Rutger Bregman

On Conflict and Stupidity
Inequality in the Anthropocene
Parasites Among the Poor and the Plutocrats
Stress and Shittiness
The Desperate Acting Desperately
Stress Is Real, As Are The Symptoms
Social Conditions of an Individual’s Condition
Social Disorder, Mental Disorder
Urban Weirdness
Lead Toxicity is a Hyperobject
Connecting the Dots of Violence
Trauma, Embodied and Extended
An Invisible Debt Made Visible
Public Health, Public Good

Childhood adversity linked to early puberty, premature brain development and mental illness
from Science Daily

Poor kids hit puberty sooner and risk a lifetime of health problems
by Ying Sun

* * *

Low-Wage Jobs are the New American Normal
by Dawn Allen

It’s clear that the existence of the middle class was a historic anomaly. In 2017, MIT economist Peter Temin argued that we’re splitting into a two-class system. There’s a small upper class, about 20% of Americans, predominantly white, degree holders, working largely in the technology and finance sectors, that holds the lion’s share of wealth and political power in the country. Then, there’s a much larger precariat below them, “minority-heavy” but still mostly white, with little power and low-wage, if any, jobs. Escaping lower-class poverty depends upon navigating two flawless, problem-free decades, starting in early childhood, ending with a valuable college degree. The chances of this happening for many are slim, while implementing it stresses kids out and even makes them meaner. Fail, and you risk “shit-life syndrome” and a homeless retirement.

Underemployment Is the New Unemployment
Western countries are celebrating low joblessness, but much of the new work is precarious and part-time.
by Leonid Bershidsky

Some major Western economies are close to full employment, but only in comparison to their official unemployment rate. Relying on that benchmark alone is a mistake: Since the global financial crisis, underemployment has become the new unemployment.

In a recent paper, David Bell and David Blanchflower singled out underemployment as a reason why wages in the U.S. and Europe are growing slower than they did before the global financial crisis, despite unemployment levels that are close to historic lows. In some economies with lax labor market regulation — the U.K. and the Netherlands, for example — more people are on precarious part-time contracts than out of work. That could allow politicians to use just the headline unemployment number without going into details about the quality of the jobs people manage to hold down.

Measuring underemployment is difficult everywhere. To obtain more or less accurate data, those working part-time should probably be asked how many hours they’d like to put in, and those reporting a large number of hours they wish they could add should be recorded as underemployed. But most statistical agencies make do with the number of part-timers who say they’d like a full-time job. The U.S. Bureau of Labor Statistics doesn’t provide an official underemployment number, and existing semi-official measures, according to Bell and Blanchflower, could seriously underestimate the real situation.
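To illustrate the measurement point, here is a minimal sketch in Python using invented survey records: counting only part-timers who say they want a full-time job (roughly the make-do measure described above) gives a different picture than weighting workers by the extra hours they say they would like, which is closer to what Bell and Blanchflower argue for. The field names and figures are hypothetical, not any agency’s actual methodology.

    # Hypothetical survey records: hours currently worked and hours the person
    # says they would like to work. All names and values are invented.
    workers = [
        {"hours_worked": 40, "hours_wanted": 40},  # full-time, content
        {"hours_worked": 16, "hours_wanted": 35},  # part-time, wants full-time
        {"hours_worked": 25, "hours_wanted": 30},  # part-time, wants a few more hours
        {"hours_worked": 0,  "hours_wanted": 40},  # out of work entirely
    ]

    employed = [w for w in workers if w["hours_worked"] > 0]

    # Make-do measure: count part-timers who say they want a full-time job.
    wants_full_time = sum(
        1 for w in employed
        if w["hours_worked"] < 35 and w["hours_wanted"] >= 35
    )
    print("Part-timers wanting full-time work:", wants_full_time, "of", len(employed))

    # Hours-based measure: total extra hours wanted, relative to hours supplied.
    extra_hours = sum(max(0, w["hours_wanted"] - w["hours_worked"]) for w in employed)
    supplied_hours = sum(w["hours_worked"] for w in employed)
    print(f"Underemployment (extra hours wanted / hours worked): "
          f"{extra_hours / supplied_hours:.0%}")

On these toy numbers, the headcount measure flags only one worker, while the hours-based measure says the employed group is supplying about 30% fewer hours than it would like to; the second framing is what the paragraph above means by labor market slack going unmeasured.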

The need for governments to show improvement on jobs since the global crisis has led to an absurd situation. Generous standards for measuring unemployment produce numbers that don’t agree with most people’s personal experience and the anecdotal evidence from friends and family. A lot of people are barely working, and wages are going up too slowly to fit a full employment picture. At the same time, underemployment, which, according to Bell and Blanchflower, has “replaced unemployment as the main indicator of labor market slack,” is rarely discussed and unreliably measured.

Governments should provide a clearer picture of how many people are not working as much as they’d like to — and of how many hours they’d like to add. Labor market flexibility is a nice tool in a crisis, but during an economic expansion, the focus should be on improving employment quality, not just reducing the number of people who draw an unemployment check. An increasing number of better jobs, and as a consequence wage growth, becomes the most important measure of policy success.

The War on Work — and How to End It
by Edward L. Glaeser

In 1967, 95 percent of “prime-age” men between the ages of 25 and 54 worked. During the Great Recession, though, the share of jobless prime-age males rose above 20 percent. Even today, long after the recession officially ended, more than 15 percent of such men aren’t working. And in some locations, like Kentucky, the numbers are even higher: Fewer than 70 percent of men lacking any college education go to work every day in that state. […]

From 1945 to 1968, only 5 percent of men between the ages of 25 and 54 — prime-age males — were out of work. But during the 1970s, something changed. The mild recession of 1969–70 produced a drop in the employment rate of this group, from 95 percent to 92.5 percent, and there was no rebound. The 1973–74 downturn dragged the employment rate below 90 percent, and, after the 1979–82 slump, it would stay there throughout most of the 1980s. The recessions at the beginning and end of the 1990s caused further deterioration in the rate. Economic recovery failed to restore the earlier employment ratio in both instances.

The greatest fall, though, occurred in the Great Recession. In 2011, more than one in five prime-age men were out of work, a figure comparable to the Great Depression. But while employment came back after the Depression, it hasn’t today. The unemployment rate may be low, but many people have quit the labor force entirely and don’t show up in that number. As of December 2016, 15.2 percent of prime-age men were jobless — a figure worse than at any point between World War II and the Great Recession, except during the depths of the early 1980s recession.

The trend in the female employment ratio is more complicated because of the postwar rise in the number of women in the formal labor market. In 1955, 37 percent of prime-age women worked. By 2000, that number had increased to 75 percent — a historical high. Since then, the number has come down: It stood at 71.7 percent at the end of 2016. Interpreting these figures is tricky, since more women than men voluntarily leave the labor force, often finding meaningful work in the home. The American Time Use Survey found that non-employed women spend more than six hours a day doing housework and caring for others. Non-employed men spend less than three hours doing such tasks.

Joblessness is disproportionately a condition of the poorly educated. While 72 percent of college graduates over age 25 have jobs, only 41 percent of high-school dropouts are working. The employment-rate gap between the most and least educated groups has widened from about 6 percent in 1977 to almost 15 percent today. The regional variation is also enormous. Kentucky’s 23 percent male jobless rate leads the nation; in Iowa, the rate is under 10 percent. […]

The rise of joblessness among the young has been a particularly pernicious effect of the Great Recession. Job loss was extensive among 25–34-year-old men and 35–44-year-old men between 2007 and 2009. The 25–34-year-olds have substantially gone back to work, but the number of employed 35–44-year-olds, which dropped by 2 million at the start of the Great Recession, hasn’t recovered. The dislocated workers in this group seem to have left the labor force permanently.

Lost in Recession, Toll on Underemployed and Underpaid
by Michael Cooper

These are anxious days for American workers. Many, like Ms. Woods, are underemployed. Others find pay that is simply not keeping up with their expenses: adjusted for inflation, the median hourly wage was lower in 2011 than it was a decade earlier, according to data from a forthcoming book by the Economic Policy Institute, “The State of Working America, 12th Edition.” Good benefits are harder to come by, and people are staying longer in jobs that they want to leave, afraid that they will not be able to find something better. Only 2.1 million people quit their jobs in March, down from the 2.9 million people who quit in December 2007, the first month of the recession.

“Unfortunately, the wage problems brought on by the recession pile on top of a three-decade stagnation of wages for low- and middle-wage workers,” said Lawrence Mishel, the president of the Economic Policy Institute, a research group in Washington that studies the labor market. “In the aftermath of the financial crisis, there has been persistent high unemployment as households reduced debt and scaled back purchases. The consequence for wages has been substantially slower growth across the board, including white-collar and college-educated workers.”

Now, with the economy shaping up as the central issue of the presidential election, both President Obama and Mitt Romney have been relentlessly trying to make the case that their policies would bring prosperity back. The unease of voters is striking: in a New York Times/CBS News poll in April, half of the respondents said they thought the next generation of Americans would be worse off, while only about a quarter said it would have a better future.

And household wealth is dropping. The Federal Reserve reported last week that the economic crisis left the median American family in 2010 with no more wealth than in the early 1990s, wiping away two decades of gains. With stocks too risky for many small investors and savings accounts paying little interest, building up a nest egg is a challenge even for those who can afford to sock away some of their money.

Expenses like putting a child through college — where tuition has been rising faster than inflation or wages — can be a daunting task. […]

Things are much worse for people without college degrees, though. The real entry-level hourly wage for men who recently graduated from high school fell to $11.68 last year, from $15.64 in 1979, according to data from the Economic Policy Institute. And the percentage of those jobs that offer health insurance has plummeted to 22.8 percent, from 63.3 percent in 1979.

Though inflation has stayed relatively low in recent years, it has remained high for some of the most important things: college, health care and even, recently, food. The price of food in the home rose by 4.8 percent last year, one of the biggest jumps in the last two decades.

Meet the low-wage workforce
by Martha Ross and Nicole Bateman

Low-wage workers comprise a substantial share of the workforce. More than 53 million people, or 44% of all workers ages 18 to 64 in the United States, earn low hourly wages. More than half (56%) are in their prime working years of 25-50, and this age group is also the most likely to be raising children (43%). They are concentrated in a relatively small number of occupations, and many face economic hardship and difficult roads to higher-paying jobs. Slightly more than half are the sole earners in their families or make major contributions to family income. Nearly one-third live below 150% of the federal poverty line (about $36,000 for a family of four), and almost half have a high school diploma or less.

Women and Black workers, two groups for whom there is ample evidence of labor market discrimination, are overrepresented among low-wage workers.

To lift the American economy, we need to understand the workers at the bottom of it
by Martha Ross and Nicole Bateman

These low-wage workers are a racially diverse group, and disproportionately female. Fifty-two percent are white, 25% are Latino or Hispanic, 15% are Black, and 5% are Asian American. Females account for 54% of low-wage workers, higher than their total share of the entire workforce (48%).

Fifty-seven percent of low-wage workers work full time year-round, considerably lower than mid/high-wage workers (81%). Among those working less than full time year-round, the data don’t specify if this is voluntary or involuntary, and it is probably a mix.

Two-thirds of low-wage workers are in their prime working years of 25-54, and nearly half of this group (40%) are raising children. Given the links between education and earnings, it is not surprising that low-wage workers have lower levels of education than mid/high-wage workers. Fourteen percent of low-wage workers have a bachelor’s degree, compared to 44% among mid/high-wage workers, and nearly half (49%) have a high school diploma or less, compared to 24% among mid/high-wage workers. […]

The largest cluster consists of prime-age adults with a high school diploma or less

The largest cluster, accounting for 15 million people (28% of low-wage workers) consists of workers ages 25 to 50 with no more than a high school diploma. It is one of two clusters that are majority male (54%) and it is the most racially and ethnically diverse of all groups, with the lowest share of white workers (40%) and highest share of Latino or Hispanic workers (39%). Many in this cluster also experience economic hardship, with high shares living below 150% of the federal poverty line (39%), receiving safety net assistance (35%), and relying solely on their wages to support their families (31%). This cluster is also the most likely to have children (44%).

Low-wage work is more pervasive than you think, and there aren’t enough “good jobs” to go around
by Martha Ross and Nicole Bateman

Even as the U.S. economy hums along at a favorable pace, there is a vast segment of workers today earning wages low enough to leave their livelihood and families extremely vulnerable. That’s one of the main takeaways from our new analysis, in which we found that 53 million Americans between the ages of 18 to 64—accounting for 44% of all workers—qualify as “low-wage.” Their median hourly wages are $10.22, and median annual earnings are about $18,000. (See the methods section of our paper to learn about how we identify low-wage workers.)

The existence of low-wage work is hardly a surprise, but most people—except, perhaps, low-wage workers themselves—underestimate how prevalent it is. Many also misunderstand who these workers are. They are not only students, people at the beginning of their careers, or people who need extra spending money. A majority are adults in their prime working years, and low-wage work is the primary way they support themselves and their families.

Low-wage work is a source of economic vulnerability

There are two central questions when considering the prospects of low-wage workers:

  1. Is the job a springboard or a dead end?
  2. Does the job provide supplemental, “nice to have” income, or is it critical to covering basic living expenses?

We didn’t analyze the first question directly, but other research is not encouraging, finding that while some workers move on from low-wage work to higher-paying jobs, many do not. Women, people of color, and those with low levels of education are the most likely to stay in low-wage jobs. In our analysis, over half of low-wage workers have levels of education suggesting they will stay low-wage workers. This includes 20 million workers ages 25-64 with a high school diploma or less, and another seven million young adults 18-24 who are not in school and do not have a college degree.

As to the second question, a few data points show that for millions of workers, low-wage work is a primary source of financial support—which leaves these families economically vulnerable.

  • Measured by poverty status: 30% of low-wage workers (16 million people) live in families earning below 150% of the poverty line. These workers get by on very low incomes: about $30,000 for a family of three and $36,000 for a family of four.
  • Measured by the presence or absence of other earners: 26% of low-wage workers (14 million people) are the only earners in their families, getting by on median annual earnings of about $20,000. Another 25% (13 million people) live in families in which all workers earn low wages, with median family earnings of about $42,000. These 27 million low-wage workers rely on their earnings to provide for themselves and their families, as they are either the family’s primary earner or a substantial contributor to total earnings. Their earnings are unlikely to represent “nice to have” supplemental income.

The low-wage workforce is part of every regional economy

We analyzed data for nearly 400 metropolitan areas, and the share of workers in a particular place earning low wages ranges from a low of 30% to a high of 62%. The relative size of the low-wage population in a given place relates to broader labor market conditions such as the strength of the regional labor market and industry composition.

Low-wage workers make up the highest share of the workforce in smaller places in the southern and western parts of the United States, including Las Cruces, N.M. and Jacksonville, N.C. (both 62%); Visalia, Calif. (58%); Yuma, Ariz. (57%); and McAllen, Texas (56%). These and other metro areas where low-wage workers account for high shares of the workforce are places with lower employment rates that concentrate in agriculture, real estate, and hospitality.

Post-Work: The Radical Idea of a World Without Jobs
by Andy Beckett

As a source of subsistence, let alone prosperity, work is now insufficient for whole social classes. In the UK, almost two-thirds of those in poverty – around 8 million people – are in working households. In the US, the average wage has stagnated for half a century.

As a source of social mobility and self-worth, work increasingly fails even the most educated people – supposedly the system’s winners. In 2017, half of recent UK graduates were officially classified as “working in a non-graduate role”. In the US, “belief in work is crumbling among people in their 20s and 30s”, says Benjamin Hunnicutt, a leading historian of work. “They are not looking to their job for satisfaction or social advancement.” (You can sense this every time a graduate with a faraway look makes you a latte.)

Work is increasingly precarious: more zero-hours or short-term contracts; more self-employed people with erratic incomes; more corporate “restructurings” for those still with actual jobs. As a source of sustainable consumer booms and mass home-ownership – for much of the 20th century, the main successes of mainstream western economic policy – work is discredited daily by our ongoing debt and housing crises. For many people, not just the very wealthy, work has become less important financially than inheriting money or owning a home.

Whether you look at a screen all day, or sell other underpaid people goods they can’t afford, more and more work feels pointless or even socially damaging – what the American anthropologist David Graeber called “bullshit jobs” in a famous 2013 article. Among others, Graeber condemned “private equity CEOs, lobbyists, PR researchers … telemarketers, bailiffs”, and the “ancillary industries (dog-washers, all-night pizza delivery) that only exist because everyone is spending so much of their time working”.

The argument seemed subjective and crude, but economic data increasingly supports it. The growth of productivity, or the value of what is produced per hour worked, is slowing across the rich world – despite the constant measurement of employee performance and intensification of work routines that makes more and more jobs barely tolerable.

Unsurprisingly, work is increasingly regarded as bad for your health: “Stress … an overwhelming ‘to-do’ list … [and] long hours sitting at a desk,” the Cass Business School professor Peter Fleming notes in his book, The Death of Homo Economicus, are beginning to be seen by medical authorities as akin to smoking.

Work is badly distributed. People have too much, or too little, or both in the same month. And away from our unpredictable, all-consuming workplaces, vital human activities are increasingly neglected. Workers lack the time or energy to raise children attentively, or to look after elderly relations. “The crisis of work is also a crisis of home,” declared the social theorists Helen Hester and Nick Srnicek in a 2017 paper. This neglect will only get worse as the population grows and ages.

And finally, beyond all these dysfunctions, loom the most-discussed, most existential threats to work as we know it: automation, and the state of the environment. Some recent estimates suggest that between a third and a half of all jobs could be taken over by artificial intelligence in the next two decades. Other forecasters doubt whether work can be sustained in its current, toxic form on a warming planet. […]

And yet, as Frayne points out, “in some ways, we’re already in a post-work society. But it’s a dystopic one.” Office employees constantly interrupting their long days with online distractions; gig-economy workers whose labour plays no part in their sense of identity; and all the people in depressed, post-industrial places who have quietly given up trying to earn – the spectre of post-work runs through the hard, shiny culture of modern work like hidden rust.

Last October, research by Sheffield Hallam University revealed that UK unemployment is three times higher than the official count of those claiming the dole, thanks to people who come under the broader definition of unemployment used by the Labour Force Survey, or are receiving incapacity benefits. When Frayne is not talking and writing about post-work, or doing his latest temporary academic job, he sometimes makes a living collecting social data for the Welsh government in former mining towns. “There is lots of worklessness,” he says, “but with no social policies to dignify it.”

Creating a more benign post-work world will be more difficult now than it would have been in the 70s. In today’s lower-wage economy, suggesting people do less work for less pay is a hard sell. As with free-market capitalism in general, the worse work gets, the harder it is to imagine actually escaping it, so enormous are the steps required.

We should all be working a four-day week. Here’s why
by Owen Jones

Many Britons work too much. It’s not just the 37.5 hours a week clocked up on average by full-time workers; it’s the unpaid overtime too. According to the TUC, workers put in 2.1bn unpaid hours last year – that’s an astonishing £33.6bn of free labour.

That overwork causes significant damage. Last year, 12.5m work days were lost because of work-related stress, depression or anxiety. The biggest single cause by a long way – in some 44% of cases – was workload. Stress can heighten the risk of all manner of health problems, from high blood pressure to strokes. Research even suggests that working long hours increases the risk of excessive drinking. And then there’s the economic cost: over £5bn a year, according to the Health and Safety Executive. No wonder the public health expert John Ashton is among those suggesting a four-day week could improve the nation’s health. […]

This is no economy-wrecking suggestion either. German and Dutch employees work less than we do but their economies are stronger than ours. It could boost productivity: the evidence suggests if you work fewer hours, you are more productive, hour for hour – and less stress means less time off work. Indeed, a recent experiment with a six-hour working day at a Swedish nursing home produced promising results: higher productivity and fewer sick days. If those productivity gains are passed on to staff, working fewer hours doesn’t necessarily entail a pay cut.

Do you work more than 39 hours a week? Your job could be killing you
by Peter Fleming

The costs of overwork can no longer be ignored. Long-term stress, anxiety and prolonged inactivity have been exposed as potential killers.

Researchers at Columbia University Medical Center recently used activity trackers to monitor 8,000 workers over the age of 45. The findings were striking. The average period of inactivity during each waking day was 12.3 hours. Employees who were sedentary for more than 13 hours a day were twice as likely to die prematurely as those who were inactive for 11.5 hours. The authors concluded that sitting in an office for long periods has a similar effect to smoking and ought to come with a health warning.

When researchers at University College London looked at 85,000 workers, mainly middle-aged men and women, they found a correlation between overwork and cardiovascular problems, especially an irregular heartbeat or atrial fibrillation, which increases the chances of a stroke five-fold.

Labour unions are increasingly raising concerns about excessive work, too, especially its impact on relationships and physical and mental health. Take the case of the IG Metall union in Germany. Last week, 15,000 workers (who manufacture car parts for firms such as Porsche) called a strike, demanding a 28-hour work week with unchanged pay and conditions. It’s not about indolence, they say, but self-protection: they don’t want to die before their time. Science is on their side: research from the Australian National University recently found that working anything over 39 hours a week is a risk to wellbeing.

Is there a healthy and acceptable level of work? According to US researcher Alex Soojung-Kim Pang, most modern employees are productive for about four hours a day: the rest is padding and huge amounts of worry. Pang argues that the workday could easily be scaled back without undermining standards of living or prosperity. […]

Other studies back up this observation. The Swedish government, for example, funded an experiment where retirement home nurses worked six-hour days and still received an eight-hour salary. The result? Less sick leave, less stress, and a jump in productivity.

All this is encouraging as far as it goes. But almost all of these studies focus on the problem from a numerical point of view – the amount of time spent working each day, year-in and year-out. We need to go further and begin to look at the conditions of paid employment. If a job is wretched and overly stressful, even a few hours of it can be an existential nightmare. Someone who relishes working on their car at the weekend, for example, might find the same thing intolerable in a large factory, even for a short period. All the freedom, creativity and craft are sucked out of the activity. It becomes an externally imposed chore rather than a moment of release.

Why is this important?

Because there is a danger that merely reducing working hours will not change much, when it comes to health, if jobs are intrinsically disenfranchising. In order to make jobs more conducive to our mental and physiological welfare, much less work is definitely essential. So too are jobs of a better kind, where hierarchies are less authoritarian and tasks are more varied and meaningful.

Capitalism doesn’t have a great track record for creating jobs such as these, unfortunately. More than a third of British workers think their jobs are meaningless, according to a survey by YouGov. And if morale is that low, it doesn’t matter how many gym vouchers, mindfulness programmes and baskets of organic fruit employers throw at them. Even the most committed employee will feel that something is fundamentally missing. A life.

Most Americans Don’t Know Real Reason Japan Was Bombed

The United States’ bombing of Japan in the Second World War was a demonstration of psychopathic brutality. It was unnecessary, as Japan was already defeated, but it was meant to send a message to the Soviets. Before the dust had settled from the savagery, the power-mongers among the Allied leadership were already planning for a Third World War (Cold War Ideology and Self-Fulfilling Prophecies), even though the beleaguered Soviets, who had borne the brunt of the destruction and the death count in defeating the Nazis, had no interest in more war.

The United States in particular, having come out of the war wealthier, thought the Soviets would be an easy target to take out, and so it sought to kick its former ally while it was still down. In a fit of paranoia and psychosis, the US schemed to drop hundreds of atomic bombs on Russia, to eliminate the Soviets before they had the chance to develop nuclear weapons of their own. Yet Stalin never intended, much less planned, to attack the West, nor did he think the Soviets had the capacity to do so. The archives opened after the Soviet collapse showed that Stalin simply wanted to develop a trading partnership with the West, as he had stated was his intention. With the help of spies, the Soviets did start their own nuclear program and then demonstrated their capacity. So a second nuclear attack by the United States was narrowly averted, and the Third World War was downgraded to the Cold War (see article and book at the end of the post).

This topic has come up before in this blog, but let’s come at it from a different angle. Consider General Douglas MacArthur. He was no pacifist, nor anything close to one. He was a megalomaniac with good PR, a bully and a jerk, an authoritarian and would-be strongman hungering for power and fame. He “publicly lacked introspection. He was also vain, borderline corrupt, ambitious and prone to feuds” (Andrew Fe, Why was General MacArthur called “Dugout Doug?”). He was also guilty of insubordination, always certain he was right; and when events went well under his command, it was often because he took credit for other people’s ideas, plans, and actions. His arrogance eventually got him removed from his command, and that ended his career.

He was despised by many who worked with him and served under him. “President Harry Truman considered MacArthur a glory-seeking egomaniac, describing him at one point as ‘God’s right hand man’” (Alpha History, Douglas MacArthur). Dwight Eisenhower, who knew him well from years of army service, “disliked MacArthur for his vanity, his penchant for theatrics, and for what Eisenhower perceived as ‘irrational’ behavior” (National Park Service, Most Disliked Contemporaries). MacArthur loved war and had a psychopathic level of disregard for the lives of others, sometimes to the extent of seeking victory at any cost. Two examples demonstrate this, one from before the Second World War and one from after it.

Early in his career, with Eisenhower and George S. Patton under his command, came the infamous attack on the Bonus Army camp, consisting of WWI veterans — along with their families — protesting for payment of the money they were owed by the federal government (Mickey Z., The Bonus Army). He was ordered to remove the protesters but to do so non-violently. Instead, as became a pattern with him, he disobeyed those orders, having the protesters gassed and the camp trampled and torched. This led to the deaths of several people, including an infant. It was one of his rare PR disasters, to say the least. And trying to sue journalists for libel didn’t help.

The later example was in 1950. In opposition to President Harry Truman, “MacArthur favored waging all-out war against China. He wanted to drop 20 to 30 atomic bombs on Manchuria, lay a ‘radioactive belt of nuclear-contaminated material’ to sever North Korea from China, and use Chinese Nationalist and American forces to annihilate the million or so Communist Chinese troops in North Korea” (Max Boot, He Has Returned). Some feared that, if the General had his way, he might start another world war… or perhaps the real fear was that China was not the preferred enemy some of the ruling elite wanted to target for the next world war.

Certainly, he was not a nice guy, nor did he have any respect for democracy, human rights, or other such liberal values. If he had been born in Germany instead, he would have made not merely a good Nazi but a great Nazi. He was a right-wing reactionary and violent imperialist, as he was raised to be by his military father, who modeled imperialist aspirations (Rethinking History, Rating General Douglas MacArthur). He felt no sympathy or pity for enemies. Consider how he was willing to treat his fellow citizens, including some veterans in the Bonus Army who had served beside him in the previous world war. His only loyalty was to his own sense of greatness and to the military industry that promoted him into power.

But what did General MacArthur, right-wing authoritarian that he was, think about dropping atomic bombs on an already defeated Japan? He thought it an unnecessary and cruel act toward a helpless civilian population consisting mostly of women, children and the elderly; an opinion he shared with many other military leaders at the time. Besides, as Norman Cousins, consultant to General MacArthur during the occupation of Japan, wrote, “MacArthur… saw no military justification for dropping of the bomb. The war might have ended weeks earlier, he said, if the United States had agreed, as it later did anyway, to the retention of the institution of the emperor” (quoted in Cameron Reilly’s The Psychopath Epidemic).

There was no reason, in his mind, to destroy a country that was already defeated and could instead serve the purposes of the American Empire. For all his love of war and violence, he showed no interest in vengeance or public humiliation toward the Japanese people. After the war, he was essentially made an imperial administrator and colonial governor of Japan, and he ruled with paternalistic care and fair-minded understanding. War was one thing and ruling another. Even an authoritarian should be able to tell the difference between the two.

The reasons given for incinerating two large cities and their populations made no sense; Japan couldn’t have fought back at that point even if its leadership had wanted to. What MacArthur understood was that the Japanese simply wanted to save face as much as possible while coming to terms with defeat and negotiating their surrender. Further violence was simply psychopathic brutality. There is no way of getting around that ugly truth. So, why have Americans been lied to and indoctrinated to believe otherwise for generations since? Well, because the real reasons couldn’t be given.

The atomic bombing wasn’t an act to end a war but to start another one, this time against the Soviets. To honestly and openly declare a new war before the last war had even ended would not have gone over well with the American people. And once this action was taken it could never be revealed, not even when all those involved had long been dead. Propaganda narratives, once sustained long enough, take on a life of their own. The tide is slowly turning, though. As each generation passes, fewer and fewer remain who believe it was justified, from 85 percent in 1945 to 56 percent in 2015.

When the last generation raised on WWII propaganda dies, that percentage will finally drop below the 50 percent mark, and maybe we will then have an honest discussion about the devastating results of a moral failure that didn’t end with those atomic bombs but has been repeated in so many ways since. The crimes against humanity in the bombing of Japan were echoed in the travesties of the Vietnam War and the Iraq War. Millions upon millions have died over the decades from military actions by the Pentagon and covert operations by the CIA, combined with sanctions that are considered declarations of war. Sanctions, by the way, were what incited the Japanese to attack the United States. In enforcing sanctions against a foreign government, the United States entered the war of its own volition, effectively declaring war against Japan, and then acted surprised when Japan defended itself.

All combined, through direct and indirect means, the body count of innocents sacrificed since American imperial aspirations began possibly runs into the hundreds of millions. This easily matches the levels of atrocity seen in the most brutal regimes of the past (Investing in Violence and Death, Endless Outrage, Evil Empire, & State and Non-State Violence Compared). The costs are high. When will there be a moral accounting?

* * *

Hiroshima, Nagasaki, and the Spies Who Kept a Criminal US with a Nuclear Monopoly from Making More of Them
by Dave Lindorff

It was the start of the nuclear age. Both bombs dropped on Japan were war crimes of the first order, particularly because we now know that the Japanese government, which at that time was having all its major cities destroyed by incendiary bombs that turned their mostly wooden structures into towering firestorms, was, even before Aug. 6, desperately trying to surrender via entreaties through the Swiss government.

The Big Lie is that the bomb was dropped to save US troops from having to invade Japan. In fact, there was no need to invade. Japan was finished, surrounded, the Russians attacking finally from the north, its air force and navy destroyed, and its cities being systematically torched.

Actually, the US didn’t want Japan to surrender yet, though. Washington and President Harry Truman wanted to test their two new super weapons on real urban targets and, even more importantly, wanted to send a stark message to the Soviet Union, the supposed World War II ally that US war strategists and national security staff had actually viewed all through the conflict as America’s next existential enemy.

As authors Michio Kaku and Daniel Axelrod, two theoretical physicists, wrote in their frightening, disturbing and well-researched book To Win a Nuclear War: The Pentagon’s Secret War Plans (South End Press, 1987), the US began treacherously planning to use its newly developed super weapon, the atom bomb, against the war-ravaged Soviet Union even before the war had ended in Europe. Indeed, a first plan to drop 20-30 Hiroshima-sized bombs on 20 Russian cities, code-named JIC 329/1, was intended to be launched in December 1945. Fortunately, that never happened, because at that point the US had only two atomic bombs in its “stockpile.”

They describe how, as the production of new bombs sped up (9 nuclear devices by June 1946, 35 by March 1948, and 150 by January 1949), new plans with such creepy names as Operations Pincher, Broiler, Bushwacker, Sizzle and Dropshot were developed, and the number of Soviet cities to be vaporized grew from 20 to 200.

Professors Kaku and Axelrod write that Pentagon strategists were reluctant to go forward with these early planned attacks not because of any unwillingness to launch an unprovoked war, but out of a fear that the destruction of Soviet targets would be inadequate to prevent the Soviets’ still powerful and battle-tested Red Army from over-running war-ravaged Europe in response to such an attack—a counterattack the US would not have been able to prevent. These strategists recommended that no attack be made until the US military had at least 300 nukes at its disposal (remember, at this time there were no hydrogen bombs, and the size of a fission bomb was constrained by the small size of the core’s critical mass). It was felt, in fact, that the bombs were so limited in power that it could take two or three to decimate a city like Moscow or Leningrad.

So the plan for wiping out the Soviet Union was gradually deferred to January 1953, by which time it was estimated that there would be 400 larger Nagasaki bombs available, and that even if only 100 of these 25-50 kiloton weapons hit their targets it could “implement the concept of ‘killing a nation.’”

The reason this epic US holocaust never came to pass is now clear: to the astonishment of US planners and even many of the US nuclear scientists who had worked so hard in the Manhattan Project to invent and produce the atomic bomb (two types of atomic bomb, really), on August 29, 1949 the Soviets exploded their own bomb, the “First Lightning”: an almost exact replica of the “Fat Man” Plutonium bomb that destroyed Nagasaki four years earlier.

And the reason the Soviet scientists, brilliant as they were but financially strapped by the massive destruction the country had suffered during the war, had been able to create their bomb in roughly the same amount of time that the hugely funded Manhattan Project had done was primarily the information provided by a pair of scientists working at Los Alamos who offered detailed plans, secrets about how to work with the very tricky and unpredictable element Plutonium, and how to get a Plutonium core to explode in a colossal fireball instead of just producing a pathetic “fizzle.”

The Psychopath Epidemic
by Cameron Reilly

Another of my favorite examples of the power of brainwashing by the military-industrial complex is that of the bombings of Hiroshima and Nagasaki by the United States in 1945. Within the first two to four months of the attacks, the acute effects killed 90,000-166,000 people in Hiroshima and 60,000-80,000 in Nagasaki, with roughly half of the deaths in each city occurring on the first day. The vast majority of the casualties were civilians.

In the seventy-three years that have passed since Hiroshima, poll after poll has shown that most Americans think that the bombings were wholly justified. According to a survey in 2015, fifty-six percent of Americans agreed that the attacks were justified, significantly less than the 85 percent who agreed in 1945 but still high considering the facts don’t support the conclusion.

The reasons most Americans cite to justify the bombings are that they stopped the war with Japan; that Japan started the war with the attack on Pearl Harbor and deserved punishment; and that the attacks spared Americans from having to invade Japan, which would have caused more deaths on both sides. These “facts” are so deeply ingrained in most American minds that they believe them to be fundamental truths. Unfortunately, they don’t stand up to history.

The truth is that the United States started the war with Japan when it froze Japanese assets in the United States and embargoed the sale of oil the country needed. Economic sanctions then, as now, are considered acts of war.

As for using the bombings to end the war, the U.S. was well aware by the middle of 1945 that the Japanese were prepared to surrender and expected it would happen when the USSR entered the war against them in August 1945, as pre-arranged between Truman and Stalin. The primary sticking point for the Japanese was the status of Emperor Hirohito. He was considered a god by his people, and it was impossible for them to hand him over for execution by their enemies. It would be like American Christians handing over Jesus, or Italian Catholics handing over the pope. The Allies refused to clarify what Hirohito’s status would be post-surrender. In the end, they left him in place as emperor anyway.

One American who didn’t think using the atom bomb was necessary was Dwight Eisenhower, future president and, at the time, the supreme allied commander in Europe. He believed:

Japan was already defeated and that dropping the bomb was completely unnecessary, and… the use of a weapon whose employment was, I thought, no longer mandatory as a measure to save American lives. It was my belief that Japan was, at that very moment, seeking some way to surrender with a minimum loss of “face.”…

Admiral William Leahy, chief of staff to Presidents Franklin Roosevelt and Harry Truman, agreed.

It is my opinion that the use of this barbarous weapon at Hiroshima and Nagasaki was of no material assistance in our war against Japan. The Japanese were already defeated and ready to surrender because of the effective sea blockade and the successful bombing with conventional weapons. My own feeling was that in being the first to use it, we had adopted an ethical standard common to the barbarians of the Dark Ages. I was not taught to make war in that fashion, and wars cannot be won by destroying women and children.

Norman Cousins was a consultant to General MacArthur during the American occupation of Japan. Cousins wrote that

MacArthur… saw no military justification for dropping of the bomb. The war might have ended weeks earlier, he said, if the United States had agreed, as it later did anyway, to the retention of the institution of the emperor.

If General Dwight Eisenhower, General Douglas MacArthur, and Admiral William Leahy all believed dropping atom bombs on Japan was unnecessary, why do so many American civilians still today think it was?

Probably because they have been told to think that, repeatedly, in a carefully orchestrated propaganda campaign, enforced by the military-industrial complex (that Eisenhower tried to warn us about), that has run continuously since 1945.

As recently as 1995, the fiftieth anniversary of the bombings of Hiroshima and Nagasaki, the Smithsonian Institution was forced to censor its retrospective on the attacks under fierce pressure from Congress and the media because it contained “text that would have raised questions about the morality of the decision to drop the bomb.”

On August 15, 1945, about a week after the bombing of Nagasaki, Truman tasked the U.S. Strategic Bombing Survey to conduct a study on the effectiveness of the aerial attacks on Japan, both conventional and atomic. Did they affect the Japanese surrender?

The survey team included hundreds of American officers, civilians, and enlisted men, based in Japan. They interviewed 700 Japanese military, government, and industry officials and had access to hundreds of Japanese wartime documents.

Less than a year later, they published their conclusion—that Japan would likely have surrendered in 1945 without the Soviet declaration of war and without an American invasion: “It cannot be said that the atomic bomb convinced the leaders who effected the peace of the necessity of surrender. The decision to surrender, influenced in part by knowledge of the low state of popular morale, had been taken at least as early as 26 June at a meeting of the Supreme War Guidance Council in the presence of the Emperor.”

June 26 was six weeks before the first bomb was dropped on Hiroshima. The emperor wanted to surrender and had been trying to open up discussions with the Soviets, the only country with whom they still had diplomatic relations.

According to many scholars, the final straw would have come on August 15, when the Soviet Union, as agreed months previously with the Truman administration, was planning to declare that it was entering the war against Japan.

But instead of waiting, Truman dropped the first atomic bomb on Japan on August 6.

The proposed American invasion of the home islands wasn’t scheduled until November.

Mass Delusion of Mass Tree Planting

Mass tree planting is another example, as with EAT-Lancet and corporate veganism, of how good intentions can get co-opted by bad interests. Planting trees could be beneficial or not so much. It depends on how it is done. Still, even if done well, it would never be as beneficial as protecting and replenishing the forests that already exist as living ecosystems.

But governments and corporations like the idea of planting trees because it is a way of greenwashing the problem while continuing on with the status quo: the continued exploitation of native lands and the destruction of indigenous populations. Just plant more trees, largely as monocrop tree plantations, and pretend the ongoing ecocide does not matter.

My brother is a naturalist who has worked in several states around the country. When I shared the article below with him, he responded:

“Yep, that’s been a joke among naturalists for a while! It’s kind of like the north woods of MN and WI. What was once an old growth pine forest is now essentially a tree plantation of nothing but maples and birch grown for paper pulp. Where there are still pines, they are in perfect rows and never more than 30 years old. It’s some of the most depressing “wilderness” I’ve ever seen.”

Holistic, sustainable and regenerative multi-use land management would be far better. That is essentially what hunter-gatherers do with the land they live on. It can also be done with mixed farming such as rotating animals between pastures that might also have trees for production of fruit and nuts while allowing natural habitat for wildlife.

Here is the key question: Does the land have healthy soil that absorbs rainfall and supports a living ecosystem with diverse species? If not, it is not an environmental solution to ecological destruction, collapse, and climate change.

* * *

Planting 1 Trillion Trees Might Not Actually Be A Good Idea
by Justine Calma

“But the science behind the campaign, a study that claims 1 trillion trees can significantly reduce greenhouse gases, is disputed. “People are getting caught up in the wrong solution,” says Forrest Fleischman, who teaches natural resources policy at the University of Minnesota and has spent years studying the effects of tree planting in India. “Instead of that guy from Salesforce saying, ‘I’m going to put money into planting a trillion trees,’ I’d like him to go and say, ‘I’m going to put my money into helping indigenous people in the Amazon defend their lands,’” Fleischman says. “That’s going to have a bigger impact.””

 

The mouth is missing out too…

“It’s the usual issue, same as for the rest of the body really: fat turns out to be protective in the mouth, all fermentable carbs harmful.”

The Science of Human Potential

It’s the usual issue, same as for the rest of the body really: fat turns out to be protective in the mouth, all fermentable carbs harmful. Poor dental health is an issue for us, especially our kids.
So we’ve gone about raising this issue. This work was led by doctoral candidate Sarah Hancock, with me, Dr Simon Thornley, and Dr Caryn Zinn chiming in.
Well done, Sarah. Here’s the paper, and some media links: TV here, online news here, and a short form of the paper (written by Sarah) below.

Nutrition guidelines for dental care vs. the evidence: Is there a disconnect?

Sarah Hancock

Dental caries is the most common chronic childhood disease in New Zealand.[1] The…


Mid-20th Century American Peasant Communities

Industrial capitalism is radically new, and late-stage capitalism newer still. That is especially true of the United States. Until about a century ago, most Americans still lived in rural communities, worked manual labor on small family farms, subsisted by growing most of their own food, and bought what little else they needed through store tabs and barter. Many Americans were still living this way within living memory. A few such communities persist in places across the United States.

Segmented Worlds and Self
by Yi-Fu Tuan
pp. 17-21

Most peasants are villagers, members of cohesive communities. What is the nature of this cohesion and how is it maintained? A large and varied literature on peasants exists—historical studies of villages in medieval Europe and ethnographic surveys of peasant economy and livelihood in the poorer parts of the world at the turn of the century. Though differing from each other in significant details, peasant worlds nonetheless share certain broad traits that distinguish them from urban and modern societies. First, peasants establish intimate bonds with the land: that one must labor hard to survive is an accepted truth that is transformed into a propitiatory and pious sentiment toward Mother Earth. Deities of the soil and ancestral spirits become fused. Peasants see themselves as belonging to the land, “children of the earth,” a link between past and future, ancestors and progeny. Biological realities and metaphors, so common in the peasant’s world, tend to suppress the idea of the self as a unique end or as a person capable of breaking loose from the repetitive and cyclical processes of nature to initiate something radically new. Although peasants may own the land they work on, they work more often in teams than individually. Many agricultural activities require cooperation; for example, when the fields need to be irrigated and drained, or when a heavy and expensive piece of equipment (such as the mill, winepress, or oven belonging to the landlord) is to be used. Scope for individual initiative is limited except in small garden plots next to the house, and even there customary practices prevail. Individualism and individual success are suspect in peasant communities. Prosperity is so rare that it immediately suggests witchcraft.

In the peasant’s world the fundamental socioeconomic unit is the extended family, members of which—all except the youngest children—are engaged in some type of productive work. They may not, however, see much of each other during the day. Dinnertime may provide the only opportunity for family togetherness, when the webs of affection and lines of authority become evident to all. More distant relatives are drawn into the family net on special occasions, such as weddings and funerals. Besides kinsfolk, villagers can count on the assistance of neighbors when minor needs arise, whether for extra hands during harvest, for tools, or even for money. In southeast China, a neighborhood is clearly defined as five residences to each side of one’s own. Belonging to a neighborhood gives one a sense of security that kinsfolk alone cannot provide. Villagers are able to maintain good neighborly relations with each other because they have the time to socialize. In Europe the men may go to a tavern, where after a few beers they feel relaxed enough to sing together—that most comradely of human activities. In China the men, and sometimes the women as well, may go to a teahouse in a market town, where they can exchange gossip among themselves and with visitors from other villages. More informally, neighbors meet to chat and relax in the village square in the cool of the evening. Peasants desire contentment rather than success, and contentment means essentially the absence of want. When a man achieves a certain level of comfort he is satisfied. He feels no compulsion to use his resource and energy for higher economic rewards. He has the time and sense of leisure to hobnob with his fellows and bathe in their undemanding good will. Besides these casual associations, peasants come together for planned festivals that might involve the entire village. The New Year and the period after harvest are such occasions in many parts of the world. The number of festivals and the days on which they occur vary from place to place, but without exception festivals come to pass when people are relatively free, that is, during the lax phases of the calendar year.

Festivals, of course, strengthen the idea of group self. These are the times when the people as a whole express their joy in the success of a harvest, or the growing strength of the sun. Simultaneously, they reaffirm their piety toward the protective deities of the earth and sky, their sense of oneness with nature. Group cohesiveness is a product of need, a fact that is manifest in the traditional world of villagers at different scales, ranging from that of family and kinsfolk, through those of neighbors and work team, to the entire community as it celebrates the end of a period of toil or the passing of a crisis of nature, or as it is girded in self-defense against natural calamity or human predators. Necessity is not a condition that human beings can contemplate for long without transforming it into an ideal. Thus, the cooperation necessary to survival becomes a good in itself, a desirable way of life. Units of mutual help achieve strong identities that can persist long after the urgencies that called them into existence have passed. In such groups, forged initially out of need but sustained thereafter by a sense of collective superiority, wayward and questioning individuals have no place.

A common image of America is that it is a land of individualists. Even in the colonial period, when towns were small and isolated, intimately knit communal groups like those of Europe did not exist. The people who lived in them, particularly in the Middle Colonies, shared too few common traditions and habits. Moreover, they were continually moving in and out. In New England, where settlers made periodic attempts to establish communities artificially by means of consciously constructed models, the results were mixed in relation to satisfaction and permanence. In the countryside, the Jeffersonian ideal of the yeoman farmer seems to have held sway. Nevertheless, not only individualists but families and clusters of families migrated to the frontier, and in the course of time some of them became deeply rooted agglutinate communities, in which such characteristic American ideals as upward social mobility, individual initiative, and success were alien.

Traditional farming communities, relics from the past, persist in rural America at mid-twentieth century. Consider the sixty-odd families whose roots in the hollows of Tennessee, a few miles south of Nashville, go back to 1756. Over the course of two hundred years, inter-marriage has produced the closest bonds. Natural warmth between kinsfolk and neighbors is reinforced by a deep suspicion of outsiders. The community is strongly egalitarian. Work roles differ by age and sex, but social stratification as it exists in most parts of the country is unknown. “In work terms,” writes John Mogey, “no one is clearly leader: collective responsibility for work assignment is the rule to an extent that to speak of individual or family farming enterprises would be to violate the facts.” In her study of this community, Elmora Matthews notes how warm feelings between farmers can emerge from a combination of blood ties, laboring at common tasks, and informal socializing. One woman described the relation between her four brothers, who have adjoining farms: “They work all day long together, eat their meals together, and then always sit around and visit with each other before they go home.” Ambition and even efficiency, when it is obtrusive, are bad. On the other hand, “no one ever condemns a husband who evades his work. If anything, a man who sits around home a lot blesses a family group.” One of the most respectable activities for a man is to loaf and loiter with other men. The greatest satisfaction lies in the warm exchange of feeling among relatives and close friends at home, church, or store.

People in this Tennessee community almost never organize formally for special ends. There are no communal projects. The community is not a provisional state that might be altered and improved upon, or used for some larger, ulterior purpose. It is the supreme value and sole reality: whatever threatens to disrupt it is bad. Critical self-awareness seems minimal. Thus, although this Tennessee people fervently believe in freedom, anyone who exercises it to develop his talent and becomes a success is harshly judged. Thorough conformists in thinking and behavior, they nevertheless resent the government for its tendency to impose rules and regulations, and they regard communism as unimaginably horrible.

Close-knit communities of this kind can be found in the more isolated countrysides of Western Europe and North America even in the middle of the twentieth century.

Rainbow Pie: A Redneck Memoir
by Joe Bageant
pp. 15-20

When Virginia Iris Gano and Harry Preston Bageant crested that ridge in their buggy and began their life together, they stood an excellent chance of making it. For starters, in that world the maths of life was easier, even if the work was harder. If you could show the bank or the seller of the land that you were healthy and sober, and knew how to farm, you pretty much had the loan (at least when it came to the non-arid eastern American uplands; the American West was a different matter). At 5 percent simple interest, Pap bought a 108-acre farm — house, barn, and all — for $400. (It was a cash-poor county, and still is. As recently as 1950 you could buy a 200-acre farm there for about $1,000.) On those terms, a subsistence farmer could pay off the farm in twenty years, even one with such poor soils as in these Southern uplands. But a subsistence farmer did not farm to sell crops, though he did that, too, when possible. Instead, he balanced an entire life with land and human productivity, family needs, money needs, along with his own and his family’s skills in a labor economy, not a wealth economy. The idea was to require as little cash as possible, because there wasn’t any to be had.

Nor was much needed. The farm was not a business. It was a farm. Pap and millions of farmers like him were never in the “agribusiness”. They never participated in the modern “economy of scale” which comes down to exhausting as many resources as possible to make as much money as possible in the shortest time possible. If you’d talked to him about “producing commodities under contract to strict specifications”, he wouldn’t have recognized that as farming. “Goddamned jibber-jabber” is what he would have called it. And if a realtor had pressed him about the “speculative value” of his farmland as “agronomic leverage”, I suspect the old 12-gauge shotgun might have come down off the rack. Land value was based upon what it could produce, plain and simple. These farms were not large, credit-based “operations” requiring annual loans for machinery, chemicals, and seed.

Sure, farmers along Shanghai Road and the Unger Store community bought things at the junction store on credit, to be paid for in the autumn. Not much, though. The store’s present owners, descendants of the store’s founders, say that an annual bill at the store would run to about ten dollars. One of them, Richard Merica, told me, “People bought things like salt and pepper. Only what they couldn’t make for themselves, like shotgun shells or files.” Once I commented to an old Unger Store native still living there that, “I suspect there wasn’t more than $1,000 in the Unger Store community in the pre-war days.”

“You’re guessing way too high,” he said. “Try maybe $400 or $500. But most of it stayed here, and went round and round.”

So if Pap and the other subsistence farmers there spent eight bucks a year at the local crossroads store, it was eight bucks in a reciprocal exchange that made both their subsistence farming and the Unger Store possible as a business and as a community.

Moneyless as it was, Maw and Pap’s lives were far more stable than one might think today. In fact, the lives of most small farmers outside the nasty cotton sharecropping system of deep-southern America were stable. Dramatic as the roller-coaster economics of the cities and the ups and downs caused by crop commodity speculators in Chicago were, American farm life remained straightforward for the majority. Most were not big Midwestern broad-acre farmers who could be destroyed by a two-cent change in the price of wheat. Wheat in Maw and Pap’s time hovered at around fifty to fifty-five cents a bushel; corn, at forty-five; and oats at about fifty-six. Multiply the acreage by average bushels per acre for your piece of land, and you had a start at figuring out a realistic basis for your family’s future. It was realistic enough that, after making allowances for bad years, plus an assessment of the man seeking the loan, the banks lent Pap the price of a farm. That assessment was not shallow.

Pap was expected to bring to the equation several dozen already-honed skills, such as the repair, sharpening, and use of tools (if you think that is simple, try laying down wheat with a scythe sometime); the ability to husband several types of animal stock; and experience and instinct about soils and terrain, likely weather, and broadcasting seed by hand. Eastern mountain subsistence farms needed little or no planting equipment because plots were too small and steep. What harvesting equipment such as reapers and threshers might be needed was usually owned by one man who made part of his living reaping and threshing for the rest of the community. Other skills included planting in cultivated ridges, managing a woodlot, and estimating hours of available sunlight for both plant growth and working. The subsistence farm wife’s life required as much experience and skill on a different front of family provision.

That said, Pap wasn’t a particularly good farmer. He wasn’t a bad farmer, either. He was just an average farmer among millions of average farmers. The year my grandparents married, about 35 million Americans were successfully engaged in farming, mostly at a subsistence level. It’s doubtful that they were all especially gifted, or dedicated or resourceful. Nevertheless, their kind of human-scale family farming proved successful for twelve generations because it was something more — a collective consciousness rooted in the land that pervaded four-fifths of North American history.

They farmed with the aid of some 14 million draft horses and God only knows how many mules. Pap wasn’t much for mules; all the farming he had to do could easily be done with one horse. Without going into a treatise on horse farming, let me say that, around 1955 at the age of ten, I saw the last of Pap’s work horses in use, a coal-black draft animal named “Nig” (short for nigger, of course). By then, Nig, who was Nig number three, if I remember correctly, was over twenty years old, and put out to pasture — a loose use of the term, given that he spent his time in the shade of the backyard grape arbor waiting to be hand-fed treats. But Nig still pulled a single tree-plow in a four-acre truck garden down in the bottom land — mostly melons, tomatoes, and sweet corn — while I sometimes rode atop barefoot holding onto the wooden hames at the collar. Pap walked behind, guiding the plow. “Gee Nig! Haw Nig! Step right … Turn and baaack. Cluck-cluck.” The rabbit dogs, Nellie and Buck, trotted alongside in the spring sun.

Though Pap owned a tractor by then — a beaten-up old Farmall with huge, cleated steel wheels, a man-killer prone to flipping over backward and grinding the driver bloodily under the cleats — he could still do all his cultivation walking behind Nig in the spring. In summer he’d scratch out the weeds with a horseless garden plow, or “push plow”, and pick off bugs by hand, dropping them into a Maxwell House coffee can half-filled with kerosene. Pap hand-harvested most things, even large cornfields, using a corn cutter fashioned from an old Confederate sword. But it is that old horse and that old man with the long leather lines thrown up over his shoulders, the plow in his iron grip, and cutting such straight lines in the red clay and shale, that I remember most fondly. He made it look easy. Fifty years in the furrows will do that.

pp. 41-53

THE CULTURAL VALUES MAY REMAIN, HANGING over everything political and many things that are not, but there are few if any remaining practitioners of the traditional family or community culture by which Pap and Maw lived — the one with the woman in the home, and the man in the fields, although Maw certainly worked in the fields when push came to shove. This is not to advocate such as the natural order of things. I am neither Amish nor Taliban. But knee-jerk, middle-class, mostly urban feminists might do well to question how it all started and what the result has been — maybe by getting out and seeing how few of their sisters gutting chickens on the Tyson’s production line or telemarketing credit cards on the electronic plantation relish those dehumanizing jobs that they can never quit.

It would do them well to wonder why postwar economists and social planners, from their perches high in the executive and management class, deemed it best for the nation that more mothers become permanent fixtures of America’s work force. This transformation doubled the available labor supply, increased consumer spending, and kept wages lower than they would have otherwise been. National production and increased household income supposedly raised everyone’s quality of life to stratospheric heights, if Formica countertops and “happy motoring” can be called that. I’m sure it did so for the managing and owning classes, and urban people with good union jobs. In fact, it was the pre-war trade unions at full strength, particularly the United Auto Workers, that created the true American middle class, in terms of increased affluence for working people and affordable higher education for their children.

What Maw and Pap and millions of others got out of it, primarily, were a few durable goods, a washing machine, a television, and an indoor toilet where the pantry, with its cured meats, 100-pound sacks of brown sugar, flour, and cases of eggs had been. Non-durable commodities were vastly appreciated, too. One was toilet paper, which ended generations of deep-seated application of the pages of the Sears Roebuck mail-order catalog to the anus (the unspoken limit seemed to be one page to a person at a sitting). The other was canned milk, which had been around a long time, but had been unaffordable. Milk cows are a wonderful thing, but not so good when two wars and town work have drained off your family labor-supply of milkers. […]

The urging of women into the workplace, first propagandized by a war-making state, was much romanticized in the iconic poster image of Rosie the Riveter, with her blue-denim sleeves rolled up and a scarf tied over her hair. You see the image on the refrigerator magnets of fuzzy-minded feminists-lite everywhere. This liberal identity-statement is sold by the millions at Wal-Mart, and given away as a promotional premium by National Public Radio and television.

Being allowed to manufacture the planes that bombed so many terrified European families is now rewritten as a feminist milestone by women who were not born at the time. But I’ve never once heard working-class women of that period rave about how wonderful it was to work long days welding bomb-bay doors onto B-29s.

The machinery of state saw things differently, and so the new reality of women building war machinery was dubbed a social advance for American womankind, both married and single. In Russia, it was ballyhooed as Soviet socialist-worker equality. And one might even believe that equality was the prime motive, when viewed sixty years later by, for instance, a university-educated specimen of the gender writing her doctoral dissertation. But for the children and grandchildren of Rosie the Riveter, those women not writing a dissertation or thesis, there is less enthusiasm. Especially among working mothers. The Pew Research Center reports that only 13 percent of working mothers think that working benefits their children. But nearly 100 percent feel they have no choice. Half of working mothers think their employment is pointless for society. Forty-two percent of Americans, half of them women, say that working mothers have been bad for society on the whole. Nearly all working mothers say they feel guilty as they rush off to work.

Corporations couldn’t have been happier with the situation. Family labor was siphoned off into the industrial labor pool, creating a surplus of workers, which in turn created a cheaper work force. There were still the teeming second-generation immigrant populations available for labor, but there were misgivings about them — those second-generation Russian Jews, Italians, Irish, Polish, and Hungarians, and their like. From the very beginning, they were prone to commie notions such as trade unions and eight-hour workdays. They had a nasty history of tenacity, too.

On the other hand, out there in the country was an endless supply of placid mules, who said, “Yes, Ma’m” and “No, Ma’m”, and accepted whatever you paid them. Best of all, except for churches and the most intimate community groups, these family- and clan-oriented hillbillies were not joiners, especially at some outsiders’ urging. Thus, given the nature of union organizing — urging and convincing folks to join up — local anti-union businessmen and large companies alike had little to fear when it came to pulling in workers from the farms.

Ever since the Depression, some of the placid country mules had been drifting toward the nearest cities anyway. By the 1950s, the flow was again rapidly increasing. Generation after generation couldn’t keep piling up on subsistence farms, lest America come to be one vast Mennonite community, which it wasn’t about to become, attractive as that idea might seem now. Even given America’s historical agrarian resistance to “wage slavery” (and farmers were still calling it that when I was a kid), the promise of a regular paycheck seemed the only choice. We now needed far more money to survive, because we could no longer independently provide for ourselves.

Two back-to-back wars had effectively drained off available manpower to the point where our family farm offered only a fraction of its former sustenance. Even if we tried to raise our own food and make our own clothing out of the patterned multi-colored feed sacks as we had always done, it took more money than ever. […]

By the mid and late 1950s, the escalating monetized economy had rural folks on the ropes. No matter how frugal one was, there was no fighting it. In a county where cash had been scarce from the beginning — though not to disastrous effect — we children would overhear much talk about how this or that aunt or uncle “needs money real bad”. […]

WHEN IT COMES TO MONEY, I AM TOLD THAT BEFORE the war some Unger Store subsistence farmers got by on less than one hundred dollars a year. I cannot imagine that my grandfather ever brought in more than one thousand dollars in any year. Even before the postwar era’s forced commodification of every aspect of American life, at least some money was needed. So some in my family, like many of their neighbors, picked apples seasonally or worked as “hired-on help” for a few weeks in late summer at the many small family-owned apple- and tomato-canning sheds that dotted Morgan County. In the 1930s, 1940s, and 1950s, between farming and sporadic work at the local flour, corn, and feed-grinding outfits, and especially the small canning operations, a family could make it. Pap could grow a few acres of tomatoes for the canneries, and Maw or their kids could work a couple of weeks in them for cash.

This was local and human-scale industry and farming, with the tomatoes being grown on local plots ranging from five to ten acres. Canners depended on nearby farm families for crops and labor, and the farm families depended upon them in turn for cash or its equivalent. […]

Farm-transport vehicles were much scarcer then, especially anything bigger than a quarter-ton pickup truck. So the sight of Jackson Luttrell’s one-ton Chevy truck with its high wooden sideboards was exciting in itself. In those days, farmers did not buy new $45,000 trucks to impress other farmers, or run to the nearest farm supply in one of them to pick up a couple of connector bolts. Every farmer had a farm wagon, whether pulled by horse or tractor, but almost nobody owned a truck. Common sense and thrift prevented them from spending big money on something that would only be used during one month each year at harvest time. Beyond that, farmers would not even think of growing those small acreages of tomatoes that the canneries depended upon if they had to buy a truck to transport them there — any profit made on the tomatoes would be lost on the truck. So, for folks such as Jackson Luttrell, who had one, ownership made more economic sense. He profited through its maximized use in getting everyone else’s crops to the mill or processing plant. One truck served the farm community, at minimum expenditure to the entire group. They didn’t even have to pay Jackson Luttrell any cash for the hauling.

That was because Cotton Unger, who owned the canning operation, was expected to get the tomatoes to his factory himself. As a businessman and entrepreneur, it was Unger’s job to deal with the problems that came with his enterprise. Unger’s job was to run a business; a farmer’s job was to farm. These were two separate things in the days before the rigged game of agri-business put all the cost on the farmers through loading them with debt, and all the profits went to business corporations. Nor did Unger’s duties as a capitalist end with getting the hauling done at his own expense. It was also his job to turn the local crops such as wheat, corn, and tomatoes into money, through milling or canning them for sale to bulk contractors elsewhere.

Cotton owned more than just the family store, which he’d inherited from his father, Peery Unger, and for which the community was named sometime after the Civil War. The store at the junction had gasoline pumps, a grinding mill, and a feed and seed farm-supply adjunct. It was also the official post office for that end of the county; and, just to be safe, Cotton Unger also farmed. The Unger family’s store was a modest, localized example of a vertically integrated, agriculturally based business, mostly out of necessity.

Cotton never saw much cash, and never got rich by any means. Not on the ten-cent and fifteen-cent purchases that farmers made there for over one hundred years. Yet he could pay Jackson Luttrell for the tomato hauling — in credit at the store. That enabled Jackson to buy seed, feed, hardware, fertilizer, tools, and gasoline, and farm until harvest time with very little cash, leaving him with enough to invest in a truck. Unger could run his tomato cannery and transform local produce into cash, because he could barter credit for farm products and services. This was a community economic ecology that blended labor, money, and goods to sustain a modest but satisfactory life for all.

At the same time, like most American businessmen then and today, Cotton Unger was a Republican. He was a man of the Grand Old Party: the party of a liberator named Abraham, who freed millions of black men from the bondage of slavery; and the party of two presidents named George, the second of whom subsequently ushered Americans of all colors back into slavery through national indebtedness. Being of a Republican stripe made Cotton Unger a rare bird in the strongly Democratic Morgan County.

Today he would be even rarer, because he was a Republican with the common wisdom to understand something that no Republican has ever grasped since: he realized that any wealth he might acquire in life was due not only to his own efforts, but also to the efforts of all other men combined — men who built the roads that hauled his merchandise; men who laid rail track, grew crops, drilled wells, and undertook all the other earthly labors that make society possible. Whether they were Democrats or not, he needed the other citizens around him as friends, neighbors, and builders of the community. To that end, he provided transportation to the polls at election time for farmers without cars — and they were many, Pap and Maw among them — full knowing that nearly every last one of them was going to vote against his candidate. In his ancestors’ time they had voted for Andrew Jackson, Martin Van Buren, James Polk, James Buchanan, Woodrow Wilson, Franklin Roosevelt, and Harry Truman — all Democrats.

The old-timers say that Cotton always looked kinda weary around election time. And well he must have been. On election day, Cotton chauffeured around Democratic voters, people who would vote against his interests, vote in favor of higher business taxes or to increase teachers’ pay to the point where the school-marm could almost make a living. But Cotton also understood that his personal interests resided more with his community and neighbors than with his political affiliation. Republican politicians in faraway Charleston took the back seat to his face-to-face daily life with his neighbors. Cotton, like his father Peery, and his grandfather, C.J. Unger, before him, knew that when you depend directly on neighbors for your daily bread, you’d damned-well better have their respect and goodwill. And you’d best maintain it over generations, too, if you plan to pass the family store down to your sons and your sons’ sons. We may never see that level of operative community democracy again.

pp. 61-69

Not that money was unimportant. Money has been important since the first Sumerian decided it was easier to carry a pocket full of barley shekels than hump a four-foot urn of barley down to the marketplace on his back. And it was certainly important 5,000 years later to the West Virginia hill country’s subsistence farmers. But in the big picture, money was secondary to co-operation and the willingness to work hard. A considered ecology of family labor, frugality, and their interrelationship with community was the economy. And the economy was synonymous with their way of life, even though that would have been a pretentious term to Pap and his contemporaries. He always said, “You just do the next thing that needs doing. You keep doing that, and everything gets done that needs to be done.” When I’d ask him what to do next, he’d say, “Just look to see what needs doing, dammit!”

Understanding what needed doing was the glue of subsistence farming’s family-work ecology, which was also ecological in the environmental sense. Knowledge was passed along about which fields best grew what produce, the best practices to maintain fertility, and what the farm could sustainably produce year in and year out. It was a family act.

Those farm families strung out along Shanghai Road could never have imagined our existential problems or the environmental damage we now face. But, after having suffered such things as erosion from their own damaging early-American practices, they came to understand that nature and man do not stand separately. The mindfulness involved in human-scale farming demands such. To paraphrase Wendell Berry, we should understand our environmental problem as a kind of damage that has also been done to humans. In all likelihood, there is no solution for environmental destruction that does not first require a healing of the damage done to the human community. And most of that damage to the human world has been done through work, our jobs, and the world of money. Acknowledging such things about our destructive system requires honesty about what is all around us, and an intellectual conscience. And asking ourselves, “Who are we as a people?”

Meanwhile, as settlers migrated down the Great Valley of Virginia, as they called the Shenandoah Valley, toward the fertile southlands, the poorer among them kept seeping westward into the uncleared Blue Ridge, where land was cheapest and work was hardest. When they settled on Fairfax’s land, they may have become human assets to his holdings. But they were not slaves and they were not employees. The overwhelming portion of the fruits of their labor were directly their own. They could not be fired. They could not incur oppressive financial debt. And if their farms were isolated specks in the blue Appalachian fog with their split-pine log floors, they were nevertheless specks located in a great, shared commons called nature.

In contrast to Fairfax and the planter society’s money-based economy of wealth, these settlers lived by a family-based economy of labor. Not that they had a choice. Any kind of coinage or currency was rare throughout the colonies. Their economy depended on the bartering of labor and sometimes goods between themselves. Dr. Warren Hofstra, an eminent historian of the area, tells me this system was so complex that they kept sharply detailed ledger books of goods and services bartered, even of small favors done for one another. In essence, this was an economy whose currency was the human calorie. Be it a basket of apples or a week’s labor hauling stone for a house, everything produced (which was everything in their subsistence world, there being no money) was accomplished by an expenditure of human energy. Calories burned could only be replaced by an expenditure of calories to plant, grow, and preserve future calories for sustained sustenance. This was a chain of caloric expenditures or barter going all the way back to the forging of the iron hoe or plow that made subsistence possible at all. Keenly aware that both time and their own human energy were finite, they measured, balanced, and assigned value to nearly every effort, large or small. Wasting these resources could spell hunger or failure to subsist.

This attitude lives on today among the descendants of the settlers. When outsiders move into this area, they often comment on what they perceive as the miserliness of the natives. Or the fact that they will not let you do them even a small favor, lest they be obligated in return.

A lady new to the area, a physician who hails from Delaware, told me: “I went shopping with Anna at the mall last week. We went in my car. She tried to give me three dollars for ‘gas money’. I told her that was very kind, but we’d only driven two miles at best and that it wasn’t necessary. She kept pushing the money at me, saying ‘Here, take this,’ getting more and more insistent each time. I kept declining until I noticed that she was becoming honestly and truly angry with me. It was so damned strange, I’ve never seen anything like it. So I took the three dollars.”

I explained that many natives are like that, and told her about the early settlers’ rigid barter-and-favor economy, and how these attitudes have unconsciously come down through our cultural history, remaining as deeply instilled social practices and conventions. It can work the other way around, too. Some people will unexpectedly do something very nice for you, or give you something — maybe an antique or whatever.

“Don’t let the Southern charm fool you, though,” I said. “In the back of their mind they have marked it down as a favor or a social debt owed. And they’ll expect you to recognize when to pay it back. Maybe volunteer to feed their dog or water their lawn when they are away. At the same time, you should feel somewhat honored. It’s a down payment on developing further friendship. If they hadn’t judged you to be a worthy, reliable, and reciprocating person, dependable in a friendship, they wouldn’t even bother to know you at all. In fact, that’s why so many outsiders perceive some natives as snotty and cold.”

“Amazing,” she said. “I’d never guess their behavior had such deep cultural roots.”

“Neither would they,” I replied.

As the hill-country population grew, their isolation lessened. Farmers grew more connected in a community network of seasonal mutual efforts, such as threshing, hunting, hog slaughtering, haymaking, clannish marriages, and birth, burial, and worship. These conventions were still being observed into the 1950s as I was growing up there.

Family and community life in that early, non-wealth-based economy is impossible for us to comprehend. No man can fully grasp a life he has not lived, or for that matter completely grasp the one he is living. But we Blue Ridge folk most surely live subject to the continuing effects of that dead culture which is never really dead.

For example, the old agrarian culture of reserve, frugality, and thought-out productivity translates as political conservatism today, even though few of its practitioners could identify a baling hook if their lives depended on it. At its core stood — and still stand, for the most part — “family values”, which meant (duh!) valuing family. Valuing family above all else, except perhaps God’s word. Grasping the true meaning of this is to understand much of the conservative American character, both its good and its bad qualities. I dare say it also holds some solutions to the dissolution of human community, the destabilizing of world resources, and the loss of the great commons, human and natural, all sacrificed to the monstrous fetish of commodities, their acquisition and their production through an insane scale of work and round-the-clock commerce and busyness.

DNC Nomination Rigging Redux

“…clear evidence that Bloomberg, HuffPo, the New York Times, and the Washington Post are two months into a no-holds-barred, all-out narrative assault on the Sanders candidacy.

“This stuff makes a difference. Sanders is not dominating the other Democratic candidates in narrative-world centrality today as much as he was two months ago.”

~Ben Hunt, Stuck in the Middle With You

There is strong evidence, from analysis of media articles, that most major corporate news outlets, aside from Fox News, made a sudden and simultaneous shift toward negative reporting on presidential candidate Bernie Sanders in recent months. It appeared, one could easily argue, to be coordinated in preparation for the 2020 Democratic caucuses in Iowa.

This stands out because, in recent years, Sanders has been the most popular candidate in either party. Last campaign, he received more small donations than any other candidate in United States history. This campaign, he has received even more and has accumulated more total donations than any other candidate. Most of the polls from last time around showed Sanders as the only candidate with a strong chance of defeating Trump, assuming that defeating Trump was the priority rather than keeping the political left out of power and maintaining the Clinton hold on the DNC — a big assumption, we might add.

That is why Clinton Democrats used so many dirty tricks to steal the nomination from Sanders in the previous election. Hillary Clinton was not only the candidate opposing Sanders; she also effectively controlled the DNC. She used DNC money to influence key figures and denied Sanders’ campaign access to necessary DNC voter information. Using DNC cronies in the corporate media, the Clinton camp controlled the narrative in news reporting, as when the Washington Post spun out continuous negativity toward Sanders right before a debate, almost an attack piece per hour.

Then at CNN, the insider Donna Brazile slipped Clinton questions before the CNN debate. By the way, the middleman who passed those questions directly on to Clinton was John Podesta, who has also been caught red-handed right in the middle of the Ukrainian fiasco. Even though he was the right-hand man of the Clintons, his brother’s Democratic lobbying firm was working with Manafort at a Republican lobbying firm (John Podesta, Clinton Democrats, and Ukraine). The deep state can get messy at times, and the ruling elite behind the scenes don’t care much about partisan politics, as can be seen in Donald Trump’s political cronyism that has crossed partisan lines for decades.

And if that wasn’t bad enough, Clinton bought off the superdelegates with DNC money and promises. In some states where Sanders won, the superdelegates went against the public will and threw their support behind Clinton instead. They didn’t even bother to pretend it was democracy. It was literally a stolen nomination.

The actions of the DNC elite in the previous campaign season were one of the most blatant power grabs I’d seen since Bush stole the 2000 election by fiat of the GOP-controlled Supreme Court; later analysis showed that Bush had actually lost Florida, which meant that in a fair election with a full count he would not have been elected. But everyone, including Republicans, expects the GOP to be corrupt and anti-democratic (gerrymandering, voter purges, closing down polling stations in poor neighborhoods, etc.), because it is their proudly declared position to be against democracy, often going so far as to call it mobocracy or worse (to the extreme reactionary right wing, democracy and communism are identical).

It’s theoretically different with Democrats as they give lip service to democratic ideals and processes — after all, their party is named after democracy. That is why it feels like such a sucker punch, these anti-democratic tactics from the Clinton Democrats. And isn’t the media supposed to be the fourth estate? Or is it the fourth pillar of the deep state that extends beyond official governing bodies?

* * *

The above criticism is an appraisal of the situation from an outsider to the two-party system. This post is not an endorsement of a candidate. We have come to the conclusion that the U.S. lacks a functioning democracy. We are one of those supposedly rare Americans who are undecided and independent. We may or may not vote, depending on third-party options. But for the time being, we’ve entirely given up on the Democratic Party and the two-party system in general.

Even Sanders is not overly impressive in the big scheme of things, though he is the best the Democrats have to offer. We don’t trust Sanders because he hasn’t shown he is willing to fight when the going gets tough, such as when, after being betrayed by the DNC, he threw his support behind Hillary Clinton, who has since stabbed him in the back. We definitely don’t endorse any Clinton Democrat, certainly not a member of the Clinton Dynasty, nor will we endorse anyone who has endorsed such a miserable creature.

In our humble opinion, it’s best to leave Donald Trump in office. Our reasoning is similar to why we thought the same about Obama. Whatever a president does in the first term creates a mess that they should have to deal with in the second term. That way they can never convincingly deny responsibility by scapegoating the party that inherited the mess. We suspect that, for all the delaying tactics such as tariffs and tax breaks, there is going to be an economic crash in the near future, quite possibly within the next few years.

It would be best for all involved if Trump is in power when that happens. Trump has taken all the bigoted rhetoric, neocon posturing, and capitalist realism that the GOP elite has been pushing for decades and thrown it back in their faces. This forces them to take ownership of what they had previously attempted to soft-pedal. Trump is devastating to the country, but he is even more devastating to the RNC, and the conservative ruling elite won’t recover for a long time. Also, being forced out into the political desert will give the Democrats an opportunity for soul-searching and give the political left a chance to take over the party while the Clinton Democrats are in a weakened state.

Even more important, it’s an opportunity for third parties to rise up and play a larger role. Maybe one of them will even be able to take out one of the present two main parties. The only relevance Sanders has had is that he has promoted a new narrative framing of the public debate about policy, and that in turn has shifted the tide back toward the left, something not seen in our entire lifetime. That is a good thing, and we give him credit where it’s due. Though imperfect and falling short of what is needed, his efforts have been honorable. As DC career politicians go, he is far above average.

I actually wish Sanders well. One of my closest friends caucused for him recently. And last election, I too caucused for him. I hope he can make a difference. But I’m personally finished with the Democratic Party. I no longer trust them. What we need now is something far more radical and revolutionary than Sanders or any other Democratic candidate can offer, at least any candidate who could ever get the nomination.

* * *

How the DNC Thwarted Democracy in Iowa Using 5 Easy Steps
by Veronica Persimmon

Step One: Enact a Plan to Subvert the Progressive Frontrunner
Step Two: Manufacture a Surge
Step Three: Develop a Private App to Report the Results of the Iowa Caucuses
Step Four: Use “Quality Control” in Order to Withhold Data
Step Five: Declare Victory with Zero Precincts Reporting

It appears that Buttigieg is the DNC’s Chosen One. The “Stop Bernie” candidate designed to exhaust and discourage progressives from partaking in the electoral process. The question is, will voters be more determined to fight for their rights, lives, and the future of the planet? Or will progressives put their desire for progress on the back burner in order to replace a dangerous, corrupt demagogue with a dangerous and corrupt candidate hand-chosen by the treacherous DNC?

The Curious Case of Candidate Sanders
by Rusty Guinn

There are two takeaways: first, yes, every outlet appears to have generally increased the extent to which they use language with negative affect to cover the Sanders campaign. For the reasons described above, that shouldn’t be taken as a sign of “bias” per se. But the second takeaway is concerning: four of these key outlets – the New York Times, Washington Post, Reuters and Huffington Post – used dramatically more negative language in their news, feature and opinion coverage of the Sanders campaign in the month of January 2020.

We are always skeptical of relying on sentiment scoring alone; accordingly, we also examined which outlets drove the breakdown in the previously cohesive use of language to describe Bernie Sanders, his policies and his campaign in the media. In other words, which outlets have “gone rogue” from the prevailing Sanders narrative? Are they the outlets who chose to stay “neutral” or at least relatively less negative in December and January? Or can we pin this on the ones who have found a new negative streak in their Bernie coverage? Is there even a relationship between the rapid shift in sentiment by some outlets and the breakdown in narrative structure?

Oh yeah. […]

I think they tell us that the Washington Post and, to a lesser extent, the New York Times experienced a shift in the nature of their coverage, the articles and topics which they included in their mix, and the specific language they used in the months of December and January.

I think they tell us that change was unusual in both magnitude and direction (i.e. sentiment) relative to other major outlets. Their coverage diverged from the pack in language and content.

I think that change was big enough to create the general breakdown in the Sanders narrative that observers have intuitively ‘felt’ when they consume news. […]

Why now? Should we be concerned that a publication which used its editorial page to endorse two candidates suddenly experienced a simultaneous change in tenor of its news coverage?

Not a trick question. Obviously, the answer is yes.
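
To make the method more concrete, here is a minimal, purely illustrative sketch in Python of the kind of per-outlet sentiment comparison Guinn describes: score each article, average by outlet and month, then flag outlets whose month-over-month shift diverges from the pack. The tiny word lists, outlet names, and z-score cutoff are invented for demonstration only; they are not Epsilon Theory’s actual NLP methodology, which works over large article sets with far more sophisticated scoring.

# Toy sketch of per-outlet sentiment-shift comparison (illustrative only).
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical mini-lexicons; a real analysis would use a full sentiment model.
NEGATIVE = {"attack", "chaos", "unelectable", "radical", "risky", "losing"}
POSITIVE = {"surge", "popular", "momentum", "winning", "grassroots"}

def article_sentiment(text):
    """Crude score: (positive hits - negative hits) per word."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

def monthly_outlet_scores(articles):
    """articles: iterable of (outlet, month, text); returns mean score per (outlet, month)."""
    buckets = defaultdict(list)
    for outlet, month, text in articles:
        buckets[(outlet, month)].append(article_sentiment(text))
    return {key: mean(scores) for key, scores in buckets.items()}

def outlets_diverging(scores, month_a, month_b, z_cutoff=1.5):
    """Flag outlets whose sentiment shift between two months deviates
    from the average shift across all outlets by more than z_cutoff."""
    shifts = {}
    for (outlet, month) in scores:
        if month == month_a and (outlet, month_b) in scores:
            shifts[outlet] = scores[(outlet, month_b)] - scores[(outlet, month_a)]
    if len(shifts) < 2:
        return []
    avg, sd = mean(shifts.values()), pstdev(shifts.values())
    if sd == 0:
        return []
    return [o for o, s in shifts.items() if abs(s - avg) / sd > z_cutoff]

if __name__ == "__main__":
    # Made-up outlets and headlines; with these toy inputs only OutletA is flagged.
    articles = [
        ("OutletA", "2019-12", "Sanders rides grassroots surge and momentum"),
        ("OutletA", "2020-01", "Sanders campaign in chaos, seen as unelectable and risky"),
        ("OutletB", "2019-12", "Sanders remains popular with grassroots donors"),
        ("OutletB", "2020-01", "Sanders keeps winning small-donor momentum"),
        ("OutletC", "2019-12", "Sanders holds steady in the polls"),
        ("OutletC", "2020-01", "Sanders still popular with young voters"),
    ]
    scores = monthly_outlet_scores(articles)
    print(outlets_diverging(scores, "2019-12", "2020-01", z_cutoff=1.3))

The point of the sketch is only the shape of the argument: it is not any single outlet’s negativity that matters, but whether a few outlets break away from the group’s prior, cohesive tone at the same moment.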

Paradigm Shift: The Importance of the Right Anecdotal Evidence

“As we all know, often it is not what is said but who says it that matters. Nothing is truer than that, as shown by this case. After millions of dollars spent on clinical trials over several years of proving that low carbohydrate diets, especially the ketogenic diet, can reverse T2D, without making any of the success stories into any newsflash, the single anecdotal evidence, that the CEO of the ADA could reverse her T2D using the same way of eating, did make it as a newsflash.”

Clueless Doctors & Scientists

The Power of Anecdotal Evidence by the CEO

Tracey D. Brown, CEO of the ADA

As many of you know, I have been writing about nutrition for several years. Usually the story is disappointing because most of the time it’s about debunking a badly formulated peer reviewed academic publication. Well… here you are in for a bit of a surprise!
