“…because I couldn’t find a food that tasted good to me.”

“I’ve been a contrarian most of my life opposing stupidity, bigotry, racism, gender issues (under whatever banner), and oppression across the board never giving a shit who it was I was speaking against, but always specific and true to the people I sought to speak with not for, people who could not speak up for themselves and those who could.”

That is from a piece, Contrarian That I Am, by S.C. Hickman. I hadn’t given it much thought before, but reading it made me realize I’ve never thought of myself as a contrarian. Yet I have little doubt that there are those who would perceive me that way. I do have strongly voiced opinions motivated by a deep sense of morality. I’m not tolerant of bullshit. Still, I find no happiness in contradicting others or challenging the world out of some sense of oppositional identity.

I understand what Hickman is expressing, though. He gets right to the point:

“Most of all was this deep knowing that I must go my own way, contrary to all that was dear to my people, and against the powers of church, state, and history. Something was driving to understand and know what it is that makes us so fucked up. Maybe that’s been my mission all along, to understand why humanity – this animal of planet earth is the only animal who could not accept its place in the order of things. We’ve always sought more, something else, to transcend our place in the natural order.”

Yes, to understand and know. But even that comes from a deeper sense. I don’t really know what motivates me. I often resonate with the concluding thoughts of “A Hunger Artist” by Franz Kafka. When asked why he fasts, the hunger artist states simply that it’s “because I couldn’t find a food that tasted good to me.” Here is the full context:

“Are you still fasting?” the supervisor asked. “When are you finally going to stop?” “Forgive me everything,” whispered the hunger artist. Only the supervisor, who was pressing his ear up against the cage, understood him. “Certainly,” said the supervisor, tapping his forehead with his finger in order to indicate to the staff the state the hunger artist was in, “we forgive you.” “I always wanted you to admire my fasting,” said the hunger artist. “But we do admire it,” said the supervisor obligingly. “But you shouldn’t admire it,” said the hunger artist. “Well then, we don’t admire it,” said the supervisor, “but why shouldn’t we admire it?” “Because I have to fast. I can’t do anything else,” said the hunger artist. “Just look at you,” said the supervisor, “why can’t you do anything else?” “Because,” said the hunger artist, lifting his head a little and, with his lips pursed as if for a kiss, speaking right into the supervisor’s ear so that he wouldn’t miss anything, “because I couldn’t find a food that tasted good to me. If I had found that, believe me, I would not have made a spectacle of myself and would have eaten to my heart’s content, like you and everyone else.” Those were his last words, but in his failing eyes there was still the firm, if no longer proud, conviction that he was continuing to fast.

Was Fascism Unpredictable?

From 1934, here is an Italian writer claiming that no one predicted fascism. Giuseppe Borgese writes, in “The Intellectual Origins of Fascism”:

“Not a single prophet, during more than a century of prophecies, analyzing the degradation of the romantic culture, or planning the split of the romantic atom, ever imagined anything like fascism. There was, in the lap of the future, communism and syndicalism and whatnot; there was anarchism, and legitimism, and even all-papacy; war, peace, pan-Germanism, pan-Slavism, Yellow Peril, signals to the planet Mars; there was no fascism. It came as a surprise to all, and to themselves, too.”

Is that true? It sounds unlikely, even as I understand how shocking fascism was to the Western world.

There was nothing about fascism that didn’t originate from old strains of European thought, tradition, and practice. Fascism contains elements of imperialism, nationalism, corporatism, authoritarianism, ethnocentrism, xenophobia, folk religiosity, etc. Corporatism, the aligning of business and labor with government, for example, had been developing for many centuries by that point and had been central to colonial imperialism. Racism had been powerfully taking hold for centuries, and eugenics had been gaining force since the late nineteenth century. And it’s not as if there hadn’t been populist demagoguery and cults of personality prior to Mussolini and Hitler.

If communism and syndicalism were predictable, why not fascism? The latter was a reactionary ideology that built on elements from those other ideologies. It seems to me that, if fascism wasn’t predictable, then neither was the New Deal as a response to fascism (and all that followed from it). But the New Deal took part of its inspiration from the Populist movement that began in the last decades of the 19th century. Theodore Roosevelt, prior to fascism, felt a need to counter the proto-fascism of big business corporatism. It wasn’t called fascism at the time, but the threat of what it represented was clear to many people.

What about fascism was new and unique, supposedly unpredictable according to anything that came before? I wouldn’t argue that fascism was inevitable, but something like it was more than probable. In many ways, such ideologies as communism and syndicalism were organizing in anticipation of fascism, as the connection between big government, big business, and big religion had long been obvious. Many of these were issues that had caused conflict during the colonial era and led to revolution. So, what was it that those like Borgese couldn’t see coming even as they were living in the middle of it?

Many have claimed that Donald Trump being elected as president was unpredictable. Yet many others have been predicting for decades the direction we’ve been heading in. Sure, no one ever knows the exact details of what new form of social order will emerge, but the broad outlines typically are apparent long before. The failure and increasing corruption of the US political system have been all too predictable. Whether or not fascism was predictable in its day, the conditions that made it possible and probable were out in the open for anyone to see.

Race as Lineage and Class

There is an intriguing shift in racial thought. It happened over the early modern era, but I’d argue that the earliest racial ideology is still relevant in explaining the world we find ourselves in. Discussing François Bernier (1620-1688), Justin E. H. Smith wrote (Nature, Human Nature, and Human Difference, p. 22):

“This French physician and traveler is often credited with being the key innovator of the modern race concept. While some rigorous scholarship has recently appeared questioning Bernier’s significance, his racial theory is seldom placed in his context as a Gassendian natural philosopher who was, in particular, intent to bring his own brand of modern, materialistic philosophy to bear in his experiences in the Moghul Empire in Persia and northern India. It will be argued that Bernier’s principal innovation was to effectively decouple the concept of race from considerations of lineage, and instead to conceptualize it in biogeographical terms in which the precise origins or causes of the original differences of human physical appearance from region to region remain underdetermined.”

This new conception of race was introduced in the 17th century. But it would take a couple of centuries of imperial conquest, colonialism, and slavery for it to fully take hold.

The earliest conception of race was scientific, in explaining the diversity of species in nature. It technically meant a sub-species (and technically still does, despite non-scientific racial thought having since diverged far from this strict definition). Initially, this idea of scientific races was kept entirely separate from humanity. It was the common assumption, based on traditional views such as monotheistic theology, that all humans had a shared inheritance and that superficial differences of appearance didn’t indicate essentialist differences in human nature. Typical early explanations of human diversity pointed to other causes, from culture to climate. For example, it was believed that dark-skinned people acquired that physical feature from living in hot and sunny environments, with the assumption that if the environmental conditions changed, so would the physical feature. As such, the dark skin of an African wasn’t any more inherited than the blue-pigmented skin of a Celt.

This millennia-old view of human differences was slow to change. Slavery had been around since the ancient world, but it never had anything to do with race or usually even with ethnicity. Mostly, it was about one population being conquered by another; something had to be done with the conquered people, if they weren’t to be genocidally slaughtered. Such wars involved nearby peoples. Ancient Greeks more often fought other Greeks than anyone else, and so it is unsurprising that most Greek slaves were ethnically Greek. Sure, there were some non-Greeks mixed into their slave population, but that was largely irrelevant. If anything, a foreign slave was valued more simply for the rarity. This began to change during the colonial era. With the rise of the British Empire, it was becoming standard for Christians to enslave only non-Christians. This was made possible as the last Pagan nation in Europe ended in the 14th century and the non-Christian populations in Europe dwindled over the centuries. But a complicating factor is that Eastern Europe, the Middle East, and Africa included a mix of Christians and non-Christians. Some of the early Church Fathers were not ethnically European (e.g., Augustine was African). As explained in a PBS article, From Indentured Servitude to Racial Slavery:

“Historically, the English only enslaved non-Christians, and not, in particular, Africans. And the status of slave (Europeans had African slaves prior to the colonization of the Americas) was not one that was life-long. A slave could become free by converting to Christianity. The first Virginia colonists did not even think of themselves as “white” or use that word to describe themselves. They saw themselves as Christians or Englishmen, or in terms of their social class. They were nobility, gentry, artisans, or servants.”

What initially allowed West Africans to be enslaved wasn’t that they were black but that they weren’t Christian, many of them being Muslim. It wasn’t an issue of perceived racial inferiority (nor necessarily cultural and class inferiority). Enslaved Africans primarily came from the most developed parts of Africa — with centralized governments, road infrastructures, official monetary systems, and even universities. West Africa was heavily influenced by Islamic civilization and was an area of major kingdoms, not unlike much of Europe at the time. It wasn’t unusual for well educated and professionally trained people to end up in slavery. Early slaveholders were willing to pay good money for enslaved Africans who were highly skilled (metalworkers, translators, etc), as plantation owners often lacked the requisite skills for running a plantation. It was only after the plantation slave system was fully established that large numbers of unskilled workers were needed, but even many of these were farmers who knew advanced agricultural techniques, such as rice growing (native to West Africa, as it was to China), a difficult crop requiring social organization.

We’ve largely forgotten the earlier views of race and slavery. Even with Europe having become Christianized, Europeans didn’t see themselves as a single race, whether defined as European, Caucasian, or white. The English didn’t see the Irish as being the same race, instead portraying the Irish as primitive and ape-like or comparing them to Africans and Native Americans. This attitude continued into the early 20th century with WWI propaganda, when the English portrayed the Germans as ape-like, on the grounds that they were racially ‘other’ and not fully human. There is an even more interesting aspect. Early racial thought was based on the idea of a common lineage, such that a kin-based clan or tribe could be categorized as a separate race. But this was also used to justify the caste-based order that had been established by feudalism. English aristocrats perceived their own inherited position as the result of good breeding, to such an extent that the English aristocracy was considered a separate race from the English peasantry. It’s hard for us as Americans to look at the rich and poor in England as two distinct races. Yet this strain of thought isn’t foreign to American culture.

Before slavery, there was indentured servitude in the British colonies. And it continued into the early period of the United States. Indentured servitude created the model for later adoption practices, such as seen with the Orphan Trains. Indentured servitude wasn’t race-based. Most of the indentured servants in the British colonies were poor and often Irish. My own ancestor, David Peebles, came to Virginia in 1649 to start a plantation, and those who came with him were probably those who indentured themselves to him in payment for transportation to the New World (see: Scottish Emigrants, Indentured Servants, and Slaves). There was much slavery in the Peebles family over the generations, but the only clear evidence of a slave owned by David Peebles was a Native American given to him as a reward for his having been an Indian Fighter. That Native American was made a slave not because of a clearly defined and ideologically-determined racial order but because he was captured in battle and not a Christian.

More important was who held the power, which in the colonial world meant the aristocrats and plutocrats, slave owners and other business interests. In that world as in ours, power was strongly tied to wealth. To have either indentured servants or slaves required money. Before it was a racial order, it was a class-based society built on a feudal caste system. Most people remained in the class they were born into, with primogeniture originally maintaining the concentration of wealth. Poor whites were a separate population, having been in continuous poverty for longer than anyone could remember, and in many cases they have remained in poverty to this day.

A thought that came to mind is how, even when submerged, old ideas maintain their power. We still live in a class-based society that is built on a legacy from the caste system of slavery and feudalism. Racial segregation has always gone hand in hand with a class segregation that cuts across racial divides. Poor whites in many parts of the country interact with poor non-whites on a daily basis while likely never meeting a rich white at any point in their life. At the same time that paternalistic upper-class whites were suggesting ways of improving poor whites (forced assimilation, public education, English-only laws, Prohibition, the War on Poverty, etc), many of these privileged WASPs were also promoting eugenics directed at poor whites (encouraging abortions, forced sterilizations, removal of children to be adopted out, etc).

Even today, there are those like Charles Murray who suggest that the class divide among whites is a genetic divide. He actually blames poverty, across racial lines, on inferior genetics. This is why he sees no hope of changing these populations. And this is why, out of paternalism, he supports a basic income to take care of these supposedly inferior people. He doesn’t use the earliest racial language, but that is essentially the way he is describing the social order. Those like Murray portray poor whites as if they were a separate race (i.e., a separate genetic sub-species) from upper-class whites. This is a form of racism we’ve forgotten about. It’s always been with us, even as post-war prosperity softened its edges. Now it is being brought back out into the open.

Useful Fictions Becoming Less Useful

Humanity has long been under the shadow of the Axial Age, no less true today than in centuries past. But what has this meant in both our self-understanding and in the kind of societies we have created? Ideas, as memes, can survive and even dominate for millennia. This can happen even when they are wrong, as long as they are useful to the social order.

One such idea involves nativism and essentialism, made possible through highly developed abstract thought. This notion of something inherent went along with the notion of division, from mind-body dualism to brain modules (what is inherent in one area being separate from what is inherent elsewhere). It goes back at least to the ancient Greeks such as with Platonic idealism (each ideal an abstract thing unto itself), although abstract thought required two millennia of development before it gained its most powerful form through modern science. As Elisa J. Sobo noted, “Ironically, prior to the industrial revolution and the rise of the modern university, most thinkers took a very comprehensive view of the human condition. It was only afterward that fragmented, factorial, compartmental thinking began to undermine our ability to understand ourselves and our place in— and connection with— the world.”

Maybe we are finally coming around to more fully questioning these useful fictions because they have become less useful as the social order changes, as the entire world shifts around us with globalization, climate change, mass immigration, etc. We saw emotions in such essentialist terms that we decided to start a war against one of them with the War on Terror, as if this emotion were definitive of our shared reality (and a great example of metonymy, by the way), but obviously fighting wars against a reified abstraction isn’t an optimal strategy for societal progress. Maybe we need new ways of thinking.

The main problem with useful fictions isn’t necessarily that they are false, partial, or misleading. A useful fiction wouldn’t last for millennia if it weren’t, first and foremost, useful (especially true in relation to the views of human nature found in folk psychology). It is true that our seeing these fictions for what they are is a major change, but more importantly what led us to question their validity is that some of them have stopped being as useful as they once were. The nativists, essentialists, and modularists argued that such things as emotional experience, color perception, and language learning were inborn abilities and natural instincts: genetically-determined, biologically-constrained, and neurocognitively-formed. On the basis of that theory, immense amounts of time, energy, and resources were invested in the promises made.

This motivated the entire search to connect everything observable in humans back to a gene, a biological structure, or an evolutionary trait (with the brain getting outsized attention). Yet reality has turned out to be much more complex, with environmental factors such as culture, peer influence, stress, nutrition, and toxins, along with biological factors such as epigenetics, brain plasticity, microbiomes, parasites, etc. The original quest hasn’t been as fruitful as hoped, partly because of problems in conceptual frameworks and the scientific research itself, and this has led some to give up on the search. Consider how, when one part of the brain is missing or damaged, other parts of the brain often compensate and take over the correlated function. There have been examples of people lacking most of their brain matter who are still able to function with what appears to be outwardly normal behavior. The whole is greater than the sum of the parts, such that the whole can maintain its integrity even without all of the parts.

The past view of the human mind and body was simplistic in the extreme. This is because we’ve lacked the capacity to see most of what goes on in making it possible. Our conscious minds, including our rational thought, are far more limited than many assumed. And the unconscious mind, the dark matter of the mind, is so much more amazing in what it accomplishes. In discussing what they call conceptual blending, Gilles Fauconnier and Mark Turner write (The Way We Think, p. 18):

“It might seem strange that the systematicity and intricacy of some of our most basic and common mental abilities could go unrecognized for so long. Perhaps the forming of these important mechanisms early in life makes them invisible to consciousness. Even more interestingly, it may be part of the evolutionary adaptiveness of these mechanisms that they should be invisible to consciousness, just as the backstage labor involved in putting on a play works best if it is unnoticed. Whatever the reason, we ignore these common operations in everyday life and seem reluctant to investigate them even as objects of scientific inquiry. Even after training, the mind seems to have only feeble abilities to represent to itself consciously what the unconscious mind does easily. This limit presents a difficulty to professional cognitive scientists, but it may be a desirable feature in the evolution of the species. One reason for the limit is that the operations we are talking about occur at lightning speed, presumably because they involve distributed spreading activation in the nervous system, and conscious attention would interrupt that flow.”

As they argue, conceptual blending helps us understand why a language module or instinct isn’t necessary. Research has shown that there is no single part of the brain nor any single gene that is solely responsible for much of anything. The constituent functions and abilities that form language likely evolved separately for other reasons that were advantageous to survival and social life. Language isn’t built into the brain as an evolutionary leap; rather, it was an emergent property that couldn’t have been predicted from any prior neurocognitive development, which is to say language was built on abilities that by themselves would not have been linguistic in nature.

Of course, Fauconnier and Turner are far from being the only proponents of such theories, as this perspective has become increasingly attractive. Another example is Mark Changizi’s theory presented in Harnessed, where he argues (p. 11) that “Speech and music culturally evolved over time to be simulacra of nature” (see more about this here and here). Whatever theory one goes with, what is required is to explain the research challenging and undermining earlier models of cognition, affect, linguistics, and related areas.

Another book I’ve been reading is How Emotions Are Made by Lisa Feldman Barrett. She covers similar territory, despite her focus being on something as seemingly simple as emotions. We rarely give emotions much thought, taking them for granted, but we shouldn’t. How we understand our experience and expression of emotion is part and parcel of a deeper view that our society holds about human nature, a view that also goes back millennia. This ancient lineage of inherited thought is what makes it problematic, since it feels intuitively true, being so entrenched within our culture (Kindle Locations 91-93):

“And yet . .  . despite the distinguished intellectual pedigree of the classical view of emotion, and despite its immense influence in our culture and society, there is abundant scientific evidence that this view cannot possibly be true. Even after a century of effort, scientific research has not revealed a consistent, physical fingerprint for even a single emotion.”

“So what are they, really?” Barrett asks about emotions (Kindle Locations 99-104):

“When scientists set aside the classical view and just look at the data, a radically different explanation for emotion comes to light. In short, we find that your emotions are not built-in but made from more basic parts. They are not universal but vary from culture to culture. They are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment. Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real— that is, hardly an illusion, but a product of human agreement.”

This goes along with an area of thought that arose out of philology, classical studies, consciousness studies, Jungian psychology, and anthropology. As always, I’m particularly thinking of the bicameral mind theory of Julian Jaynes. In the most ancient civilizations, there weren’t monetary systems, nor, according to Jaynes, was there consciousness as we know it. He argues that individual self-consciousness was built on an abstract metaphorical space that was internalized and narratized. This privatization of personal space led to the possibility of self-ownership, the later basis of capitalism (and hence capitalist realism). It’s abstractions upon abstractions, until all of modern civilization bootstrapped itself into existence.

The initial potentials within human nature could be and have been used to build diverse cultures, but modern society has genocidally wiped out most of this once existing diversity, leaving behind a near total dominance of WEIRD monoculture. This allows us modern Westerners to mistake our own culture for universal human nature. Our imaginations are constrained by a reality tunnel, which further strengthens the social order (control of the mind is the basis for control of society). Maybe this is why certain abstractions have been so central in conflating our social reality with physical reality, as Barrett explains (Kindle Locations 2999-3002):

“Essentialism is the culprit that has made the classical view supremely difficult to set aside. It encourages people to believe that their senses reveal objective boundaries in nature. Happiness and sadness look and feel different, the argument goes, so they must have different essences in the brain. People are almost always unaware that they essentialize; they fail to see their own hands in motion as they carve dividing lines in the natural world.”

We make the world in our own image. And then we force this social order on everyone, imprinting it not just onto the culture but onto biology itself. With epigenetics, brain plasticity, microbiomes, etc, biology readily accepts this imprinting of the social order (Kindle Locations 5499-5503):

“By virtue of our values and practices, we restrict options and narrow possibilities for some people while widening them for others, and then we say that stereotypes are accurate. They are accurate only in relation to a shared social reality that our collective concepts created in the first place. People aren’t a bunch of billiard balls knocking one another around. We are a bunch of brains regulating each other’s body budgets, building concepts and social reality together, and thereby helping to construct each other’s minds and determine each other’s outcomes.”

There are clear consequences to humans as individuals and communities. But there are other costs as well (Kindle Locations 129-132):

“Not long ago, a training program called SPOT (Screening Passengers by Observation Techniques) taught those TSA agents to detect deception and assess risk based on facial and bodily movements, on the theory that such movements reveal your innermost feelings. It didn’t work, and the program cost taxpayers $900 million. We need to understand emotion scientifically so government agents won’t detain us— or overlook those who actually do pose a threat— based on an incorrect view of emotion.”

This is one of the ways in which our fictions have become less than useful. As long as societies were relatively isolated, they could maintain their separate fictions and treat them as reality. But in a global society, these fictions end up clashing with each other in ways that are not just unhelpful but wasteful and dangerous. If TSA agents were only trying to observe people who shared a common culture of social constructs, the standard set of WEIRD emotional behaviors would apply. The problem is TSA agents have to deal with people from diverse cultures that have different ways of experiencing, processing, perceiving, and expressing what we call emotions. It would be like trying to understand world cuisine, diet, and eating habits by studying the American patrons of fast food restaurants.

Barrett points to the historical record of ancient societies and to studies done on non-WEIRD cultures. What was assumed to be true based on WEIRD scientists studying WEIRD subjects turns out not to be true for the rest of the world. But there is an interesting catch to the research, the reason so much confusion prevailed for so long. It is easy to teach people cultural categories of emotion and how to identify them. Some of the initial research on non-WEIRD populations unintentionally taught the subjects the very WEIRD emotions that the researchers were attempting to study. The structure of the studies themselves had WEIRD biases built into them. It was only with later research that scientists were able to filter out these biases and observe the actual responses of non-WEIRD populations.

Researchers only came to understand this problem quite recently. Noam Chomsky, for example, thought it unnecessary to study actual languages in the field. Based on his own theorizing, he believed that studying a single language such as English would tell us everything we needed to know about the basic workings of all languages in the world. This belief proved massively wrong, as field research demonstrated. There was also an idealism in the early Cold War era that led to false optimism, as Americans felt on top of the world. Chris Knight made this point in Decoding Chomsky (from the Preface):

“Pentagon’s scientists at this time were in an almost euphoric state, fresh from victory in the recent war, conscious of the potential of nuclear weaponry and imagining that they held ultimate power in their hands. Among the most heady of their dreams was the vision of a universal language to which they held the key. […] Unbelievable as it may nowadays sound, American computer scientists in the late 1950s really were seized by the dream of restoring to humanity its lost common tongue. They would do this by designing and constructing a machine equipped with the underlying code of all the world’s languages, instantly and automatically translating from one to the other. The Pentagon pumped vast sums into the proposed ‘New Tower’.”

Chomsky’s modular theory dominated linguistics for more than half a century. It is still held in high esteem, even as the evidence increasingly stacks up against it. This wasn’t just a waste of an immense amount of funding. It derailed an entire field of research and stunted the development of a more accurate understanding. Generations of linguists went chasing after a mirage. No brain module of language has been found, nor is there any hope of ever finding one. Many researchers wasted their entire careers on a theory that proved false, and many of them continue to defend it, maybe in the hope that another half century of research will finally prove it to be true after all.

There is no doubt that Chomsky has a brilliant mind. He is highly skilled in debate and persuasion. He won the battle of ideas, at least for a time. Through the sheer power of his intellect, he was able to overwhelm his academic adversaries. His ideas came to dominate the field of linguistics, in what came to be known as the cognitive revolution. But Daniel Everett has stated that “it was not a revolution in any sense, however popular that narrative has become” (Dark Matter of the Mind, Kindle Location 306). If anything, Chomsky’s version of essentialism caused the temporary suppression of a revolution that was initiated by linguistic relativists and social constructionists, among others. That revolution was strangled in the crib, partly because it was fighting against an entrenched ideological framework that was millennia old. The initial attempts at research struggled to offer a competing ideological framework, and they lost that struggle. Then they were quickly forgotten, as if the evidence they brought forth were irrelevant.

Barrett explains the tragedy of this situation. She is speaking of essentialism in terms of emotions, but it applies to the entire scientific project of essentialism. It has been a failed project that refuses to accept its failure, a paradigm that refuses to die in order to make way for something else. She laments all of the waste and lost opportunities (Kindle Locations 3245-3293):

“Now that the final nails are being driven into the classical view’s coffin in this era of neuroscience, I would like to believe that this time, we’ll actually push aside essentialism and begin to understand the mind and brain without ideology. That’s a nice thought, but history is against it. The last time that construction had the upper hand, it lost the battle anyway and its practitioners vanished into obscurity. To paraphrase a favorite sci-fi TV show, Battlestar Galactica, “All this has happened before and could happen again.” And since the last occurrence, the cost to society has been billions of dollars, countless person-hours of wasted effort, and real lives lost. […]

“The official history of emotion research, from Darwin to James to behaviorism to salvation, is a byproduct of the classical view. In reality, the alleged dark ages included an outpouring of research demonstrating that emotion essences don’t exist. Yes, the same kind of counterevidence that we saw in chapter 1 was discovered seventy years earlier . .  . and then forgotten. As a result, massive amounts of time and money are being wasted today in a redundant search for fingerprints of emotion. […]

“It’s hard to give up the classical view when it represents deeply held beliefs about what it means to be human. Nevertheless, the facts remain that no one has found even a single reliable, broadly replicable, objectively measurable essence of emotion. When mountains of contrary data don’t force people to give up their ideas, then they are no longer following the scientific method. They are following an ideology. And as an ideology, the classical view has wasted billions of research dollars and misdirected the course of scientific inquiry for over a hundred years. If people had followed evidence instead of ideology seventy years ago, when the Lost Chorus pretty solidly did away with emotion essences, who knows where we’d be today regarding treatments for mental illness or best practices for rearing our children.”

 

Social Construction & Ideological Abstraction

The following passages from two books help to explain what social construction is. As society has headed in a particular direction of development, abstract thought has become increasingly dominant.

But we modern people, who take abstractions for granted, often don’t even recognize abstractions for what they are. Many abstractions simply become reality as we know it. They are ‘looped’ into existence, as with race realism, capitalist realism, etc.

Ideological abstractions become so pervasive and systemic that we lose the capacity to think outside of them. They form our reality tunnel.

This wasn’t always so. Humans used to conceive of and hence perceive the world far differently. And this shaped their sense of identity, which is hard for us to imagine.

* * *

Dynamics of Human Biocultural Diversity:
A Unified Approach

by Elisa J. Sobo
(Kindle Locations 94-104)

Until now, many biocultural anthropologists have focused mainly on the ‘bio’ half of the equation, using ‘biocultural’ generically, like biology, to refer to genetic, anatomical, physiological, and related features of the human body that vary across cultural groups. The number of scholars with a more sophisticated approach is on the upswing, but they often write only for super-educated expert audiences. Accordingly, although introductory biocultural anthropology texts make some attempt to acknowledge the role of culture, most still treat culture as an external variable— as an add-on to an essentially biological system. Most fail to present a model of biocultural diversity that gives adequate weight to the cultural side of things.

Note that I said most, not all: happily, things are changing. A movement is afoot to take anthropology’s claim of holism more seriously by doing more to connect— or reconnect— perspectives from both sides of the fence. Ironically, prior to the industrial revolution and the rise of the modern university, most thinkers took a very comprehensive view of the human condition. It was only afterward that fragmented, factorial, compartmental thinking began to undermine our ability to understand ourselves and our place in— and connection with— the world. Today, the leading edge of science recognizes the links and interdependencies that such thinking keeps falsely hidden.

Nature, Human Nature, and Human Difference:
Race in Early Modern Philosophy
by Justin E. H. Smith

pp. 9-10

The connection to the problem of race should be obvious: kinds of people are to no small extent administered into being, brought into existence through record keeping, census taking, and, indeed, bills of sale. A census form asks whether a citizen is “white,” and the possibility of answering this question affirmatively helps to bring into being a subkind of the human species that is by no means simply there and given, ready to be picked out, prior to the emergence of social practices such as the census. Censuses, in part, bring white people into existence, but once they are in existence they easily come to appear as if they had been there all along. This is in part what Hacking means by “looping”: human kinds, in contrast with properly natural kinds such as helium or water, come to be what they are in large part as a result of the human act of identifying them as this or that. Two millennia ago no one thought of themselves as neurotic, or straight, or white, and nothing has changed in human biology in the meantime that could explain how these categories came into being on their own. This is not to say that no one is melancholic, neurotic, straight, white, and so on, but only that how that person got to be that way cannot be accounted for in the same way as, say, how birds evolved the ability to fly, or how iron oxidizes.

In some cases, such as the diagnosis of mental illness, kinds of people are looped into existence out of a desire, successful or not, to help them. Racial categories seem to have been looped into existence, by contrast, for the facilitation of the systematic exploitation of certain groups of people by others. Again, the categories facilitate the exploitation in large part because of the way moral status flows from legal status. Why can the one man be enslaved, and the other not? Because the one belongs to the natural-seeming kind of people that is suitable for enslavement. This reasoning is tautological from the outside, yet self-evident from within. Edward Long, as we have seen, provides a vivid illustration of it in his defense of plantation labor in Jamaica. But again, categories cannot be made to stick on the slightest whim of their would-be coiner. They must build upon habits of thinking that are already somewhat in place. And this is where the history of natural science becomes crucial for understanding the history of modern racial thinking, for the latter built directly upon innovations in the former. Modern racial thinking could not have taken the form it did if it had not been able to piggyback, so to speak, on conceptual innovations in the way science was beginning to approach the diversity of the natural world, and in particular of the living world.

This much ought to be obvious: racial thinking could not have been biologized if there were no emerging science of biology. It may be worthwhile to dwell on this obvious point, however, and to see what more unexpected insights might be drawn out of it. What might not be so obvious, or what seems to be ever in need of renewed pointing out, is a point that ought to be of importance for our understanding of the differing, yet ideally parallel, scope and aims of the natural and social sciences: the emergence of racial categories, of categories of kinds of humans, may in large part be understood as an overextension of the project of biological classification that was proving so successful in the same period. We might go further, and suggest that all of the subsequent kinds of people that would emerge over the course of the nineteenth and twentieth centuries, the kinds of central interest to Foucault and Hacking, amount to a further reaching still, an unprecedented, peculiarly modern ambition to make sense of the slightest variations within the human species as if these were themselves species differentia. Thus for example Foucault’s well-known argument that until the nineteenth century there was no such thing as “the homosexual,” but only people whose desires could impel them to do various things at various times. But the last two centuries have witnessed a proliferation of purportedly natural kinds of humans, a typology of “extroverts,” “depressives,” and so on, whose objects are generally spoken of as if on an ontological par with elephants and slime molds. Things were not always this way. In fact, as we will see, they were not yet this way throughout much of the early part of the period we call “modern.”

Symbolic Dissociation of Nature/Nurture Debate

“One of the most striking features of the nature-nurture debate is the frequency with which it leads to two apparently contradictory results: the claim that the debate has finally been resolved (i.e., we now know that the answer is neither nature nor nurture, but both), and the debate’s refusal to die. As with the Lernian Hydra, each beheading seems merely to spur the growth of new heads.”

That is from the introduction to Evelyn Fox Keller’s The Mirage of a Space between Nature and Nurture (p. 1). I personally experienced this recently. There is a guy I’ve been discussing these kinds of issues with in recent years. We have been commenting on each other’s blogs for a long while, in an ongoing dialogue that has centered on childhood influences: peers, parenting, spanking, abuse, trauma, etc.

It seemed that we had finally come to an agreement on the terms of the debate, his having come around to my view that the entire nature-nurture debate is pointless or confused. But then recently, he once again tried to force this nature-nurture frame onto our discussion (see my last post). It’s one of these zombie ideas that isn’t easily killed, a memetic mind virus that infects the brain with no known cure. Keller throws some light on the issue (pp. 1-2):

“Part of the difficulty comes into view with the first question we must ask: what is the nature-nurture debate about? There is no single answer to this question, for a number of different questions take refuge under its umbrella. Some of the questions express legitimate and meaningful concerns that can in fact be addressed scientifically; others may be legitimate and meaningful, but perhaps not answerable; and still others simply make no sense. I will argue that a major reason we are unable to resolve the nature-nurture debate is that all these different questions are tangled together into an indissoluble knot, making it all but impossible for us to stay clearly focused on a single, well-defined and meaningful question. Furthermore, I will argue that they are so knitted together by chronic ambiguity, uncertainty, and slippage in the very language we use to talk about these issues. And finally, I will suggest that at least some of that ambiguity and uncertainty comes from the language of genetics itself.”

What occurred to me is that maybe this is intentional. It seems to be part of the design, a feature and not a flaw. That is how the debate maintains itself, by being nearly impossible to disentangle and so not allowing itself to be seen for what it is. It’s not a real debate, for what appears to be an issue is really a distraction. There is much incentive not to look at it too closely, not to pick at the knot. Underneath, there is a raw nerve of Cartesian anxiety.

This goes back to my theory of symbolic conflation. The real issue (or set of issues) is hidden behind a symbolic issue. Maybe this usually or possibly always takes the form of a debate being framed in a particular way. The false dichotomy of dualistic thinking isn’t just a frame, for it tells a narrative of conflict where, as long as you accept the frame, you are forced to pick a side.

I often use abortion as an example because symbolic conflation operates most often and most clearly on visceral and emotional issues involving the body, especially sex and death (abortion involving both). This is framed as pro-life vs pro-choice, but the reality of public opinion is that most Americans are BOTH pro-life AND pro-choice. That is to say most Americans want to maintain a woman’s right to choose while simultaneously putting some minimal limitations on abortions. Besides, as research has shown, liberal and leftist policies (full sex education, easily available contraceptives, Planned Parenthood centers, high-quality public healthcare available to all, etc) allow greater freedom to individuals while creating the conditions that decrease the actual rate of abortions because they decrease unwanted pregnancies.

One thing that occurs to me is that such frames tend to favor one side. It stands out to me that those promoting the nature vs nurture frame are those who tend to be arguing for biological determinism (or something along those lines), just like those creating the forced choice of pro-life or pro-choice usually are those against the political left worldview. That is another way in which it isn’t a real debate. The frame both tries to obscure the real issue(s) and to shut down debate before it happens. It’s all about social control by way of thought control. To control how an issue is portrayed and how a debate is framed is to control the sociopolitical narrative, the story being told and the conclusion it leads to. Meanwhile, the real concern of the social order is being manipulated behind the scenes. It’s a sleight-of-hand trick.

Symbolic conflation is a time-tested strategy of obfuscation. It’s also an indirect way of talking about what can’t or rather won’t otherwise be acknowledged, in the symbolic issue being used as a proxy. To understand what it all means, you have to look at the subtext. The framing aspect brings another layer to this process. A false dichotomy could be thought of as a symbolic dissociation, where what is inseparable in reality gets separated in the framing of symbolic ideology.

The fact of the matter is that nature and nurture are simply two ways of referring to the same thing. If the nature/nurture debate is a symbolic dissociation built on top of a symbolic conflation, is this acting as a proxy for something else? And if so, what is the real debate that is being hidden and obscured, in either being talked around or talked about indirectly?

False Dichotomy and Bad Science

Someone shared with me a link to a genetics study. The paper is “Behavioural individuality in clonal fish arises despite near-identical rearing conditions” by David Bierbach, Kate L. Laskowski, and Max Wolf. From the abstract:

“Behavioural individuality is thought to be caused by differences in genes and/or environmental conditions. Therefore, if these sources of variation are removed, individuals are predicted to develop similar phenotypes lacking repeatable individual variation. Moreover, even among genetically identical individuals, direct social interactions are predicted to be a powerful factor shaping the development of individuality. We use tightly controlled ontogenetic experiments with clonal fish, the Amazon molly (Poecilia formosa), to test whether near-identical rearing conditions and lack of social contact dampen individuality. In sharp contrast to our predictions, we find that (i) substantial individual variation in behaviour emerges among genetically identical individuals isolated directly after birth into highly standardized environments and (ii) increasing levels of social experience during ontogeny do not affect levels of individual behavioural variation. In contrast to the current research paradigm, which focuses on genes and/or environmental drivers, our findings suggest that individuality might be an inevitable and potentially unpredictable outcome of development.”

Here is what this seems to imply. We don’t yet understand (much less are able to identify, isolate, and control) all of the genetic, epigenetic, environmental, and other factors that causally affect and contribute to individual development. Not only that, but we don’t understand the complex interaction of those factors, known and unknown. To put it simply, our ignorance is much more vast than our knowledge. We don’t even have enough knowledge to know what we don’t know. But we are beginning to realize that we need to rethink what we thought we knew.

It reminds me of the mouse research where genetically identical mice in environmentally identical conditions led to diverse behavioral results. I’ve mentioned it many times before here in my blog, including a post specifically about it: Of Mice and Men and Environments (also see Heritability & Inheritance, Genetics & Epigenetics, Etc). In the mice post, along with quoting an article, I pointed to a fascinating passage from David Shenk’s book, The Genius in All of Us. Although I was previously aware of the influence of environmental conditions, the research discussed there makes it starkly clear. I was reminded of this because of another discussion of mouse research, from Richard Harris’s Rigor Mortis, with the subtitle “How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions” (pp. 79-81):

“Garner said that mice have great potential for biological studies, but at the moment, he believes, researchers are going about it all wrong. For the past several decades, they have pursued a common strategy in animal studies: eliminate as many variables as you can, so you can more clearly see an effect when it’s real. It sounds quite sensible, but Garner believes it has backfired in mouse research. To illustrate this point, he pointed to two cages of genetically identical mice. One cage was at the top of the rack near the ceiling, the other near the floor. Garner said cage position is enough of a difference to affect the outcome of an experiment. Mice are leery of bright lights and open spaces, but here they live in those conditions all the time. “As you move from the bottom of the rack to the top of the rack, the animals are more anxious, more stressed-out, and more immune suppressed,” he said.

“Garner was part of an experiment involving six different mouse labs in Europe to see whether behavioral tests with genetically identical mice would vary depending on the location. The mice were all exactly the same age and all female. Even so, these “identical” tests produced widely different results, depending on whether they were conducted in Giessen, Muenster, Zurich, Mannheim, Munich, or Utrecht. The scientists tried to catalog all possible differences: mouse handlers in Zurich didn’t wear gloves, for example, and the lab in Utrecht had the radio on in the background. Bedding, food, and lighting also varied. Scientists have only recently come to realize that the sex of the person who handles the mice can also make a dramatic difference. “Mice are so afraid of males that it actually induces analgesia,” a pain-numbing reaction that screws up all sorts of studies, Garner said. Even a man’s sweaty T-shirt in the same room can trigger this response.

“Behavioral tests are used extensively in research with mice (after all, rodents can’t tell handlers how an experimental drug is affecting them), so it was sobering to realize how much those results vary from lab to lab. But here’s the hopeful twist in this experiment: when the researchers relaxed some of their strict requirements and tested a more heterogeneous group of mice, they paradoxically got more consistent results. Garner is trying to convince his colleagues that it’s much better to embrace variation than to tie yourself in knots trying to eliminate it.

““Imagine that I was testing a new drug to help control nausea in pregnancy, and I suggested to the [Food and Drug Administration (FDA)] that I tested it purely in thirty-five-year-old white women all in one small town in Wisconsin with identical husbands, identical homes, identical diets which I formulate, identical thermostats that I’ve set, and identical IQs. And incidentally they all have the same grandfather.” That would instantly be recognized as a terrible experiment, “but that’s exactly how we do mouse work. And fundamentally that’s why I think we have this enormous failure rate.”

“Garner goes even further in his thinking, arguing that studies should consider mice not simply as physiological machines but as organisms with social interactions and responses to their environment that can significantly affect their health and strongly affect the experiment results. Scientists have lost sight of that. “I fundamentally believe that animals are good models of human disease,” Garner said. “I just don’t think the way we’re doing the research right now is.”

“Malcolm Macleod has offered a suggestion that would address some of the issues Garner raises: when a drug looks promising in mice, scale up the mouse experiments before trying it in people. “I simply don’t understand the logic that says I can take a drug to clinical trial on the basis of information from 500 animals, but I’m going to need 5,000 human animals to tell me whether it will work or not. That simply doesn’t compute.” Researchers have occasionally run large mouse experiments at multiple research centers, just as many human clinical trials are conducted at several medical centers. The challenge is funding. Someone else can propose the same study involving a lot fewer animals, and that looks like a bargain. “Actually, the guy promising to do it for a third of the price isn’t going to do it properly, but it’s hard to get that across,” Macleod said.”
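Garner’s point about heterogenization in the passage above can be made concrete with a toy simulation. This is only a minimal sketch with invented numbers, not a model of the actual six-lab study: it assumes a hypothetical drug effect that shrinks as ambient stress (say, rack position) rises, and it compares labs that each standardize on their own fixed housing condition against labs that deliberately sample mice across the full range of conditions.

```python
# Minimal sketch (invented numbers, not from the studies quoted above):
# standardized labs each fix a different local condition and so disagree,
# while heterogenized labs sample across conditions and converge.

import random
import statistics

random.seed(42)

CONDITIONS = [0.0, 0.5, 1.0, 1.5, 2.0]  # hypothetical ambient stress levels (e.g., rack height)
TRUE_BASE_EFFECT = 1.0                   # assumed drug effect under zero stress
STRESS_PENALTY = 0.6                     # assumed shrinkage of the effect per unit of stress
N_MICE = 40                              # mice per lab

def measure_effect(stress):
    """One mouse's measured response: true effect minus stress penalty, plus noise."""
    return TRUE_BASE_EFFECT - STRESS_PENALTY * stress + random.gauss(0, 0.3)

def run_lab(heterogeneous, fixed_condition=None):
    """Return one lab's mean estimate under standardized or heterogenized housing."""
    results = []
    for _ in range(N_MICE):
        stress = random.choice(CONDITIONS) if heterogeneous else fixed_condition
        results.append(measure_effect(stress))
    return statistics.mean(results)

# Six labs, each standardizing on a different (arbitrary) local condition.
standardized = [run_lab(False, c) for c in [0.0, 0.5, 1.0, 1.0, 1.5, 2.0]]
# The same six labs, each instead sampling across the whole range of conditions.
heterogenized = [run_lab(True) for _ in range(6)]

print("standardized labs :", [round(x, 2) for x in standardized])
print("heterogenized labs:", [round(x, 2) for x in heterogenized])
print("between-lab spread:", round(statistics.stdev(standardized), 2),
      "vs", round(statistics.stdev(heterogenized), 2))
```

By construction, the standardized labs disagree widely (each one is estimating the effect under its own fixed condition), while the heterogenized labs all cluster around the population-average effect. That is the paradox Garner describes: relaxing strict standardization yields results that are both more consistent across labs and more generalizable.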

This is the problem with framing the debate as nature vs nurture (or similar framings such as biology vs culture and organism vs environment). Even when people are aware of the limitations of this frame, the powerful sway it holds over people’s minds causes them to continually fall back on it. Even when I have no interest in such dualistic thinking, some people feel it necessary to categorize the sides of a debate accordingly, where apparently I’m supposed to play the role of ‘nurturist’ in opposition to their ‘biology’ advocacy: “feel your life-force, Benjamin. Come with me to the biology side!” Well, I have no desire to take sides in a false dichotomy. Oddly, this guy trying to win me over to the “biology side” in a debate (about human violence and war) is the same person who shared the clonal fish study demonstrating that genetics couldn’t explain the differences observed. So, I’m not entirely sure what he thinks ‘biology’ means, or what ideological commitments it represents in his personal worldview.

(As he has mentioned in our various discussions, his studies of all this are tied up with his experience as a father who has struggled with parenting and a husband who is recently separated, partly over parenting concerns. The sense of conflict and blame he is struggling with sounds quite serious, and I’m sympathetic. But I suspect he is looking for some kind of life meaning that maybe can’t be found where he is looking for it. Obviously, it is a highly personal issue for him, not a disinterested debate of abstract philosophy or scientific hypotheses. I’m starting to think that we aren’t even involved in the same discussion, just talking past one another. It’s doubtful that I can meet him on the level where he finds himself, and so I don’t see how I can join him in the debate that seems to matter so much to him. I won’t even try. I’m not in that headspace. We’ve commented on each other’s blogs for quite a while now, but for whatever reason we simply can’t quite fully connect. Apparently, we are unable to agree enough about what the debate is to even meaningfully disagree about it. Although he is a nice guy and we are on friendly terms, I don’t see further dialogue going anywhere. *shrug*)

When we are speaking of so-called ‘nature’, this doesn’t refer only to the human biology of genetics and developmental physiology but also includes supposed junk DNA and epigenetics, brain plasticity and the gut-brain connection, viruses and bacteria, parasites and parasite load, allergies and inflammation, microbiome and cultured foods, diet and nutrition, undernourishment and malnutrition, hunger and starvation, food deserts and scarcity, addiction and alcoholism, pharmaceuticals and medicines, farm chemicals and food additives, hormone mimics and heavy metal toxicity, environmental stress and physical trauma, abuse and violence, diseases of affluence and nature-deficit disorder, in utero conditions and maternal bond, etc. All of these alter the expression of genetics, both within the single lifetime of an individual and across the generations of entire populations.

There are numerous varieties of confounding factors. I could also point to sociocultural, structural, and institutional aspects of humanity: linguistic relativity and WEIRD research subjects, culture of trust and culture of honor, lifeways and mazeways, habitus and neighborhood effect, parenting and peers, inequality and segregation, placebos and nocebos, Pygmalion effect and Hawthorne effect, and on and on. As humans are social creatures, one could write a lengthy book simply listing all the larger influences of society.

Many of these problems have become most apparent in the social sciences, but they are far from limited to that area of knowledge. Very similar problems are found in the biological and medical sciences, and the hard sciences clearly overlap with the soft sciences, considering that social constructions get fed back into scientific research. With mostly WEIRD scientists studying mostly WEIRD subjects, it’s the same WEIRD culture that has dominated nearly all of science, and so it is WEIRD biases that have been the greatest stumbling blocks. Plus, given what research on linguistic relativity has demonstrated, we would expect that how we talk about science will shape the research done, the results gained, the conclusions made, and the theories proposed. It’s all of one piece.

The point is that there are no easy answers or certain conclusions. In many ways, science is still in its infancy. We have barely scratched the surface of what potentially could be known. And much of what we think we know is being challenged, which is leading to a paradigm change that we can barely imagine. There is a lot at stake. It goes far beyond abstract theory, hypothetical debate, and idle speculation.

Most importantly, we must never forget that no theory is value-neutral or consequence-free. The ideological worldview we commit to doesn’t merely frame debate and narrow our search for knowledge. There is a real-world impact on public policy and human lives, such as when medical research and practice become racialized (with a dark past connecting race realism and genetic determinism, racial hygiene and eugenics, medical testing on minorities and the continuing impact on healthcare). All of this raises questions about whether germs are to be treated as invading enemies, whether war is an evolutionary trait, whether addiction is biological, whether intelligence is genetic, whether language is a module in the brain, and whether the ideology of individualism is human nature.

We have come to look to the body for answers to everything. And so we have come to project almost every issue onto the body. It’s too easy to shape scientific theory in such a way that confirms what we already believe and what is self-serving or simply what conforms to the social order. There is a long history of the intentional abuse and unintentional misuse of science. It’s impossible to separate biology from biopolitics.

Worse still, our imaginations are hobbled, making it all the more difficult to face the problems before us. And cultural biases have limited the search for greater knowledge. More than anything, we need to seriously develop our capacity to radically imagine new possibilities. That would require entirely shifting the context and approach of our thinking, maybe to the extent of altering our consciousness and our perception of the world. A paradigm change that mattered at all would be one that went far beyond abstract theory and was able to touch the core of our being. Our failure on this level may explain why so much scientific research has fallen into a rut.

* * *

I’ve been thinking about this for a long time. My thoughts here aren’t exactly new, but I wanted to share some new finds. It’s a topic worth returning to on occasion, as further research rolls in and the experts continue to debate. I’ll conclude with some more from Richard Harris’ Rigor Mortis. Below that are several earlier posts, a few relevant articles, and a bunch of interesting books (just because I love making long lists of books).

Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions
by Richard Harris
pp. 13-16

There has been no systematic attempt to measure the quality of biomedical science as a whole, but Leonard Freedman, who started a nonprofit called the Global Biological Standards Institute, teamed up with two economists to put a dollar figure on the problem in the United States. Extrapolating results from the few small studies that have attempted to quantify it, they estimated that 20 percent of studies have untrustworthy designs; about 25 percent use dubious ingredients, such as contaminated cells or antibodies that aren’t nearly as selective and accurate as scientists assume them to be; 8 percent involve poor lab technique; and 18 percent of the time, scientists mishandle their data analysis. In sum, Freedman figured that about half of all preclinical research isn’t trustworthy. He went on to calculate that untrustworthy papers are produced at the cost of $28 billion a year. This eye-popping estimate has raised more than a few skeptical eyebrows—and Freedman is the first to admit that the figure is soft, representing “a reasonable starting point for further debate.”

“To be clear, this does not imply that there was no return on that investment,” Freedman and his colleagues wrote. A lot of what they define as “not reproducible” really means that scientists who pick up a scientific paper won’t find enough information in it to run the experiment themselves. That’s a problem, to be sure, but hardly a disaster. The bigger problem is that the errors and missteps that Freedman highlights are, as Begley found, exceptionally common. And while scientists readily acknowledge that failure is part of the fabric of science, they are less likely to recognize just how often preventable errors taint studies.

“I don’t think anyone gets up in the morning and goes to work with the intention to do bad science or sloppy science,” said Malcolm Macleod at the University of Edinburgh. He has been writing and thinking about this problem for more than a decade. He started off wondering why almost no treatment for stroke has succeeded (with the exception of the drug tPA, which dissolves blood clots but doesn’t act on damaged nerve cells), despite many seemingly promising leads from animal studies. As he dug into this question, he came to a sobering conclusion. Unconscious bias among scientists arises every step of the way: in selecting the correct number of animals for a study, in deciding which results to include and which to simply toss aside, and in analyzing the final results. Each step of that process introduces considerable uncertainty. Macleod said that when you compound those sources of bias and error, only around 15 percent of published studies may be correct. In many cases, the reported effect may be real but considerably weaker than the study concludes.

Mostly these estimated failure rates are educated guesses. Only a few studies have tried to measure the magnitude of this problem directly. Scientists at the MD Anderson Cancer Center asked their colleagues whether they’d ever had trouble reproducing a study. Two-thirds of the senior investigators answered yes. Asked whether the differences were ever resolved, only about a third said they had been. “This finding is very alarming as scientific knowledge and advancement are based upon peer-reviewed publications, the cornerstone of access to ‘presumed’ knowledge,” the authors wrote when they published the survey findings.

The American Society for Cell Biology (ASCB) surveyed its members in 2014 and found that 71 percent of those who responded had at some point been unable to replicate a published result. Again, 40 percent of the time, the conflict was never resolved. Two-thirds of the time, the scientists suspected that the original finding had been a false positive or had been tainted by “a lack of expertise or rigor.” ASCB adds an important caveat: of the 8,000 members it surveyed, it heard back from 11 percent, so its numbers aren’t convincing. That said, Nature surveyed more than 1,500 scientists in the spring of 2016 and saw very similar results: more than 70 percent of those scientists had tried and failed to reproduce an experiment, and about half of those who responded agreed that there’s a “significant crisis” of reproducibility.
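(A brief aside on the arithmetic behind those figures: Freedman’s “about half” and Macleod’s “only around 15 percent” are both compounding estimates, in which several per-step error rates stack up. Below is a minimal sketch of that kind of calculation, using Freedman’s four category rates and assuming, purely as a simplification of my own, that the categories are independent; the published figure was not necessarily derived this way.)

# Back-of-the-envelope compounding of error rates.
# The four category rates are Freedman's published estimates; treating them
# as independent is my simplifying assumption, not his method.

def compound_failure(rates):
    """Probability that at least one error source taints a study."""
    p_clean = 1.0
    for r in rates:
        p_clean *= (1.0 - r)
    return 1.0 - p_clean

freedman_rates = [0.20, 0.25, 0.08, 0.18]  # design, ingredients, technique, analysis
print(f"Studies with at least one problem: {compound_failure(freedman_rates):.0%}")
# Prints roughly 55%, i.e. "about half". Macleod's ~15% works the same way in
# reverse: compound the chance of getting every step right.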

pp. 126-129

The batch effect is a stark reminder that, as biomedicine becomes more heavily reliant on massive data analysis, there are ever more ways to go astray. Analytical errors alone account for almost one in four irreproducible results in biomedicine, according to Leonard Freedman’s estimate. A large part of the problem is that biomedical researchers are often not well trained in statistics. Worse, researchers often follow the traditional practices of their fields, even when those practices are deeply problematic. For example, biomedical research has embraced a dubious method of determining whether results are likely to be true by relying far too heavily on a gauge of significance called the p-value (more about that soon). Potential help is often not far away: major universities have biostatisticians on staff who are usually aware of the common pitfalls in experiment design and subsequent analysis, but they are not enlisted as often as they could be. […]

A few years ago, he placed an informal wager of sorts with a few of his colleagues at other universities. He challenged them to come up with the most egregious examples of the batch effect. The “winning” examples would be published in a journal article. It was a first stab at determining how widespread this error is in the world of biomedicine. The batch effect turns out to be common.

Baggerly had a head start in this contest because he’d already exposed the problems with the OvaCheck test. But colleagues at Johns Hopkins were not to be outdone. Their entry involved a research paper that appeared to get at the very heart of a controversial issue: one purporting to show genetic differences between Asians and Caucasians. There’s a long, painful, failure-plagued history of people using biology to support prejudice, so modern studies of race and genetics meet with suspicion. The paper in question had been coauthored by a white man and an Asian woman (a married couple, as it happens), lowering the index of suspicion. Still, the evidence would need to be substantial. […]

The University of Washington team tracked down the details about the microarrays used in the experiment at Penn. They discovered that the data taken from the Caucasians had mostly been produced in 2003 and 2004, while the microarrays studying Asians had been produced in 2005 and 2006. That’s a red flag because microarrays vary from one manufacturing lot to the next, so results can differ from one day to the next, let alone from year to year. They then asked a basic question of all the genes on the chips (not just the ones that differed between Asians and Caucasians): Were they behaving the same in 2003–2004 as they were in 2005–2006? The answer was an emphatic no. In fact, the difference between years overwhelmed the apparent difference between races. The researchers wrote up a short analysis and sent it to Nature Genetics, concluding that the original findings were another instance of the batch effect.

These case studies became central examples in the research paper that Baggerly, Leek, and colleagues published in 2010, pointing out the perils of the batch effect. In that Nature Reviews Genetics paper, they conclude that these problems “are widespread and critical to address.”

“Every single assay we looked at, we could find examples where this problem was not only large but it could lead to clinically incorrect findings,” Baggerly told me. That means in many instances a patient’s health could be on the line if scientists rely on findings of this sort. “And these are not avoidable problems.” If you start out with data from different batches you can’t correct for that in the analysis. In biology today, researchers are inevitably trying to tease out a faint message from the cacophony of data, so the tests themselves must be tuned to pick up tiny changes. That also leaves them exquisitely sensitive to small perturbations—like the small differences between microarray chips or the air temperature and humidity when a mass spectrometer is running. Baggerly now routinely checks the dates when data are collected—and if cases and controls have been processed at different times, his suspicions quickly rise. It’s a simple and surprisingly powerful method for rooting out spurious results.
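(To make the batch effect concrete, here is a toy simulation of my own, not an example from Harris’s book: two groups with no real biological difference, measured in different processing batches, where a small batch-to-batch drift shows up as an apparent “group” difference — the confound Baggerly screens for by checking collection dates.)

import random
import statistics

random.seed(0)

def measure(n, true_mean, batch_shift):
    # measurement = biological signal + batch-specific drift + noise
    return [true_mean + batch_shift + random.gauss(0, 1) for _ in range(n)]

# Same underlying biology in both groups, but each group is run in its own batch
# (think microarrays manufactured in 2003-2004 vs. 2005-2006).
group_a = measure(50, true_mean=10.0, batch_shift=0.0)
group_b = measure(50, true_mean=10.0, batch_shift=0.8)

diff = statistics.mean(group_b) - statistics.mean(group_a)
print(f"Apparent 'group' difference: {diff:.2f}")  # close to 0.8, all of it batch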

p. 132

Over the years breathless headlines have celebrated scientists claiming to have found a gene linked to schizophrenia, obesity, depression, heart disease—you name it. These represent thousands of small-scale efforts in which labs went hunting for genes and thought they’d caught the big one. Most were dead wrong. John Ioannidis at Stanford set out in 2011 to review the vast sea of genomics papers. He and his colleagues looked at reported genetic links for obesity, depression, osteoporosis, coronary artery disease, high blood pressure, asthma, and other common conditions. He analyzed the flood of papers from the early days of genomics. “We’re talking tens of thousands of papers, and almost nothing survived” closer inspection. He says only 1.2 percent of the studies actually stood the test of time as truly positive results. The rest are what’s known in the business as false positives.

The field has come a long way since then. Ioannidis was among the scientists who pushed for more rigorous analytical approaches to genomics research. The formula for success was to insist on big studies, to make careful measurements, to use stringent statistics, and to have scientists in various labs collaborate with one another—“you know, doing things right, the way they should be done,” Ioannidis said. Under the best of these circumstances, several scientists go after exactly the same question in different labs. If they get the same results, that provides high confidence that they’re not chasing statistical ghosts. These improved standards for genomics research have largely taken hold, Ioannidis told me. “We went from an unreliable field to a highly reliable field.” He counts this as one of the great success stories in improving the reproducibility of biomedical science. Mostly. “There’s still tons of research being done the old fashioned way,” he lamented. He’s found that 70 percent of this substandard genomics work is taking place in China. The studies are being published in English-language journals, he said, “and almost all of them are wrong.”

pp. 182-183

Published retractions tend to be bland statements that some particular experiment was not reliable, but those notices often obscure the underlying reason. Arturo Casadevall at Johns Hopkins University and colleague Ferric Fang at the University of Washington dug into retractions and discovered a more disturbing truth: 70 percent of the retractions they studied resulted from bad behavior, not simply error. They also concluded that retractions are more common in high-profile journals—where scientists are most eager to publish in order to advance their careers. “We’re dealing with a real deep problem in the culture,” Casadevall said, “which is leading to significant degradation of the literature.” And even though retractions are on the rise, they are still rarities—only 0.02 percent of papers are retracted, Oransky estimates.

David Allison at the University of Alabama, Birmingham, and colleagues discovered just how hard it can be to get journals to set the record straight. Some scientists outright refuse to retract obviously wrong information, and journals may not insist. Allison and his colleagues sent letters to journals pointing out mistakes and asking for corrections. They were flabbergasted to find that some journals demanded payment—up to $2,100—just to publish their letter pointing out someone else’s error.

pp. 186-188

“Most people who work in science are working as hard as they can. They are working as long as they can in terms of the hours they are putting in,” said social scientist Brian Martinson. “They are often going beyond their own physical limits. And they are working as smart as they can. And so if you are doing all those things, what else can you do to get an edge, to get ahead, to be the person who crosses the finish line first? All you can do is cut corners. That’s the only option left you.” Martinson works at HealthPartners Institute, a nonprofit research agency in Minnesota. He has documented some of this behavior in anonymous surveys. Scientists rarely admit to outright misbehavior, but nearly a third of those he has surveyed admit to questionable practices such as dropping data that weakens a result, based on a “gut feeling,” or changing the design, methodology, or results of a study in response to pressures from a funding source. (Daniele Fanelli, now at Stanford University, came to a similar conclusion in a separate study.)

One of Martinson’s surveys found that 14 percent of scientists have observed serious misconduct such as fabrication or falsification, and 72 percent of scientists who responded said they were aware of less egregious behavior that falls into a category that universities label “questionable” and Martinson calls “detrimental.” In fact, almost half of the scientists acknowledged that they personally had used one or more of these practices in the past three years. And though he didn’t call these practices “questionable” or “detrimental” in his surveys, “I think people understand that they are admitting to something that they probably shouldn’t have done.” Martinson can’t directly link those reports to poor reproducibility in biomedicine. Nobody has funded a study exactly on that point. “But at the same time I think there’s plenty of social science theory, particularly coming out of social psychology, that tells us that if you set up a structure this way… it’s going to lead to bad behavior.”

Part of the problem boils down to an element of human nature that we develop as children and never let go of. Our notion of what’s “right” and “fair” doesn’t form in a vacuum. People look around and see how other people are behaving as a cue to their own behavior. If you perceive you have a fair shot, you’re less likely to bend the rules. “But if you feel the principles of distributive justice have been violated, you’ll say, ‘Screw it. Everybody cheats; I’m going to cheat too,’” Martinson said. If scientists perceive they are being treated unfairly, “they themselves are more likely to engage in less-than-ideal behavior. It’s that simple.” Scientists are smart, but that doesn’t exempt them from the rules that govern human behavior.

And once scientists start cutting corners, that practice has a natural tendency to spread throughout science. Martinson pointed to a paper arguing that sloppy labs actually outcompete good labs and gain an advantage. Paul Smaldino at the University of California, Merced, and Richard McElreath at the Max Planck Institute for Evolutionary Anthropology ran a model showing that labs that use quick-and-dirty practices will propagate more quickly than careful labs. The pressures of natural selection and evolution actually favor these labs because the volume of articles is rewarded over the quality of what gets published. Scientists who adopt these rapid-fire practices are more likely to succeed and to start new “progeny” labs that adopt the same dubious practices. “We term this process the natural selection of bad science to indicate that it requires no conscious strategizing nor cheating on the part of researchers,” Smaldino and McElreath wrote. This isn’t evolution in the strict biological sense, but they argue the same general principles apply as the culture of science evolves.
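(The Smaldino and McElreath result is easy to reproduce in miniature. The sketch below is my own drastically simplified toy, not their model: labs vary in how carefully they work, careless labs publish more papers per generation, and new labs copy the practices of the most-published labs, so average rigor drifts downward without anyone deciding to cheat.)

import random

random.seed(0)

def simulate(generations=20, n_labs=100):
    # Each lab has a "rigor" level in [0, 1]; lower rigor means more papers per generation.
    labs = [random.random() for _ in range(n_labs)]
    for _ in range(generations):
        papers = [2.0 - rigor for rigor in labs]  # careless labs publish more
        # The next generation of labs copies practices from parents chosen in
        # proportion to publication volume, with a little copying noise.
        labs = [
            min(1.0, max(0.0, parent + random.gauss(0, 0.02)))
            for parent in random.choices(labs, weights=papers, k=n_labs)
        ]
    return sum(labs) / len(labs)

print(f"Average rigor after selection: {simulate():.2f}")  # drifts well below the initial ~0.5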

* * *

What do we inherit? And from whom?
Identically Different: A Scientist Changes His Mind
Race Realism, Social Constructs, and Genetics
Race Realism and Racialized Medicine
The Bouncing Basketball of Race Realism
To Control or Be Controlled
Flawed Scientific Research
Human Nature: Categories & Biases
Bias About Bias
Urban Weirdness
“Beyond that, there is only awe.”

Animal studies paint misleading picture by Janelle Weaver
Misleading mouse studies waste medical resources by Erika Check Hayden
A mouse’s house may ruin experiments by Sara Reardon
Curious mice need room to run by Laura Nelson
Male researchers stress out rodents by Alla Katsnelson
Bacteria bonanza found in remote Amazon village by Boer Deng
Case Closed: Apes Got Culture by Corey Binns
Study: Cat Parasite Affects Human Culture by Ker Than
Mind Control by Parasites by Bill Christensen

Human Biodiversity by Jonathan Marks
The Alternative Introduction to Biological Anthropology by Jonathan Marks
What it Means to be 98% Chimpanzee by Jonathan Marks
Tales of the Ex-Apes by Jonathan Marks
Why I Am Not a Scientist by Jonathan Marks
Is Science Racist? by Jonathan Marks
Biology Under the Influence by Lewontin & Levins
Biology as Ideology by Richard C. Lewontin
The Triple Helix by Richard Lewontin
Not In Our Genes by Lewontin & Rose
The Biopolitics of Race by Sokthan Yeng
The Brain’s Body by Victoria Pitts-Taylor
Misbehaving Science by Aaron Panofsky
The Flexible Phenotype by Piersma & Gils
Herding Hemingway’s Cats by Kat Arney
The Genome Factor by Conley & Fletcher
The Deeper Genome by John Parrington
Postgenomics by Richardson & Stevens
The Developing Genome by David S. Moore
The Epigenetics Revolution by Nessa Carey
Epigenetics by Richard C. Francis
Not In Your Genes by Oliver James
No Two Alike by Judith Rich Harris
Identically Different by Tim Spector
The Cultural Nature of Human Development by Barbara Rogoff
The Hidden Half of Nature by Montgomery & Biklé
10% Human by Alanna Collen
I Contain Multitudes by Ed Yong
The Mind-Gut Connection by Emeran Mayer
Bugs, Bowels, and Behavior by Arranga, Viadro, & Underwood
This Is Your Brain on Parasites by Kathleen McAuliffe
Infectious Behavior by Paul H. Patterson
Infectious Madness by Harriet A. Washington
Strange Contagion by Lee Daniel Kravetz
Childhood Interrupted by Beth Alison Maloney
Only One Chance by Philippe Grandjean
Why Zebras Don’t Get Ulcers by Robert M. Sapolsky
Resisting Reality by Sally Haslanger
Nature, Human Nature, and Human Difference by Justin E. H. Smith
Race, Monogamy, and Other Lies They Told You by Agustín Fuentes
The Invisible History of the Human Race by Christine Kenneally
Genetics and the Unsettled Past by Wailoo, Nelson, & Lee
The Mismeasure of Man by Stephen Jay Gould
Identity Politics and the New Genetics by Schramm, Skinner, & Rottenburg
The Material Gene by Kelly E. Happe
Fatal Invention by Dorothy Roberts
Inclusion by Steven Epstein
Black and Blue by John Hoberman
Race Decoded by Catherine Bliss
Breathing Race into the Machine by Lundy Braun
Race and the Genetic Revolution by Krimsky & Sloan
Race? by Tattersall & DeSalle
The Social Life of DNA by Alondra Nelson
Native American DNA by Kim TallBear
Making the Mexican Diabetic by Michael Montoya
Race in a Bottle by Jonathan Kahn
Uncertain Suffering by Carolyn Rouse
Sex Itself by Sarah S. Richardson
Building a Better Race by Wendy Kline
Choice and Coercion by Johanna Schoen
Sterilized by the State by Hansen & King
American Eugenics by Nancy Ordover
Eugenic Nation by Alexandra Minna Stern
A Century of Eugenics in America by Paul A. Lombardo
In the Name of Eugenics by Daniel J. Kevles
War Against the Weak by Edwin Black
Illiberal Reformers by Thomas C. Leonard
Defectives in the Land by Douglas C. Baynton
Framing the moron by Gerald V O’Brien
Imbeciles by Adam Cohen
Three Generations, No Imbeciles by Paul A. Lombardo
Defending the Master Race by Jonathan Peter Spiro
Hitler’s American Model by James Q. Whitman
Beyond Human Nature by Jesse J. Prinz
Beyond Nature and Culture by Philippe Descola
The Mirage of a Space between Nature and Nurture by Evelyn Fox Keller
Biocultural Creatures by Samantha Frost
Dynamics of Human Biocultural Diversity by Elisa J Sobo
Monoculture by F.S. Michaels
A Body Worth Defending by Ed Cohen
The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes
A Psychohistory of Metaphors by Brian J. McVeigh
The Master and His Emissary by Iain McGilchrist
From Bacteria to Bach and Back by Daniel C. Dennett
Consciousness by Susan Blackmore
The Meme Machine by Susan Blackmore
Chasing the Scream by Johann Hari
Don’t Sleep, There Are Snakes by Daniel L. Everett
Dark Matter of the Mind by Daniel L. Everett
Language by Daniel L. Everett
Linguistic Relativity by Caleb Everett
Numbers and the Making of Us by Caleb Everett
Linguistic Relativities by John Leavitt
The Language Myth by Vyvyan Evans
The Language Parallax by Paul Friedrich
Louder Than Words by Benjamin K. Bergen
Out of Our Heads by Alva Noë
Strange Tools by Alva Noë
The Embodied Mind by Varela, Thompson, & Rosch
Immaterial Bodies by Lisa Blackman
Radical Embodied Cognitive Science by Anthony Chemero
How Things Shape the Mind by Lambros Malafouris
Vibrant Matter by Jane Bennett
Entangled by Ian Hodder
How Forests Think by Eduardo Kohn
The New Science of the Mind by Mark Rowlands
Supersizing the Mind by Andy Clark
Living Systems by Jane Cull
The Systems View of Life by Capra & Luisi
Evolution in Four Dimensions by Jablonka & Lamb
Hyperobjects by Timothy Morton
Sync by Steven H. Strogatz
How Nature Works by Per Bak
Warless Societies and the Origin of War by Raymond C. Kelly
War, Peace, and Human Nature by Douglas P. Fry
Darwinism, War and History by Paul Crook

The End of an Empire

Let me share some thoughts about imperialism, something hard to grasp in the contemporary world. My thoughts are inspired by a comment I wrote in response to a comparison of countries (US, UK, Canada, Australia, and New Zealand). We live in a large geopolitical order that can no longer be explained in national terms. The Anglo-American Empire is a project involving dozens of countries in the Western world. Even as it looks different from the old empires, it perhaps operates more similarly than not.

There are many issues involved: who pays the most and who benefits the most from this geopolitical order, where control of the social order is maintained most strictly and oppressively, where the center and periphery of the imperial project lie, how alliances are formed and maintained, where the moral authority and political legitimacy come from, how complicity and plausible deniability play a key role among participating populations, what role the propaganda model of (increasingly international) media plays in managing public opinion and perception across multiple countries, what the meeting points and battlegrounds of vying geopolitical forces are, etc.

I was wondering how a ruling elite maintains a vast geopolitical order like the Anglo-American Empire. It requires keeping all of the diverse and far-flung populations of imperial subjects and allies submissive, which means authoritarian control at the heart of the empire and looser control at the peripheries, at least in the early stages of the imperial project. Every imperial project is perhaps, in the end, a Ponzi scheme. Eventually, the bills come due and someone has to pay them. Wealth and resources can only flow in from foreign lands for so long before they begin drying up. This is problematic, as maintaining an empire is costly, and ever more so as it expands. The ruling elite has little choice, for it must either continually expand or collapse, although expanding inevitably leads to overreach, and so in the end collapse can only be delayed (even if some empires can keep the charade going for centuries). Many people are happy to be imperial subjects receiving the benefits of imperialism, until they have to admit to being imperial subjects and accept responsibility. Allowing plausible deniability of complicity goes a long way in gaining participation from various populations and retaining the alliances of their governments.

Present conflicts could be interpreted as signs that this geopolitical order is fraying at the edges. The formerly contented and submissive populations within the Western world order are becoming restless. Austerity politics are shifting the costs back home, and the good times are coming to an end. The costs of imperialism are coming to seem greater than the benefits, but that is because the costs always come after the benefits. The American colonists learned that lesson, after generations of receiving the benefits of empire and then later being asked to pay for maintaining the empire that ensured those benefits. Worse still, it rubbed American colonists the wrong way to be forced to admit their role as willing participants in an oppressive sociopolitical order. It didn’t fit their sense of identity as freedom-loving Americans.

My thought is that Europeans (along with Canadians and other allied national populations) are starting to similarly question their role within the Anglo-American Empire, now that the costs can no longer be ignored. The problem is that someone has to pay those costs, as the entire international trade system is built on this costly geopolitical order. It requires an immense military and intelligence apparatus to maintain a secure political order, guarantee the trade agreements that allow wealth to flow around, and keep open the trade routes and access to foreign resources.

So far, the US government has played this role, and US citizens have sacrificed funding for public education, public healthcare, etc. in order to fund the militarized imperial system. If other countries are asked to help pay for what they have benefited from, like the American colonists they might decline to do so. Still, these other countries have paid through other means, by offering their alliance to the US government, which means lending moral authority and political legitimacy to the Anglo-American Empire. When the US goes to war, all of its allies also go to war. This is because the US government is less a nation-state and more the capital of a geopolitical order. These allied nations are no longer autonomous citizenries, because such things as the UN, NATO, NAFTA, etc. have created a larger international system of governance.

These allied non-American subjects of the Anglo-American Empire have bought their benefits from the system through their participation in it and compliance with it. This is beginning to make Europeans, Canadians, and others feel uncomfortable. US citizens are also suspecting they’ve gotten a raw deal, for why are they paying for an international order that serves international interests and profits international corporations? What is the point of being an imperial subject if you aren’t getting a fair cut of the take from imperial pillaging and looting? Why remain subservient to a system that funnels nearly all of the wealth and resources to the top? Such economic issues then lead to moral questioning of the system itself and soul-searching about one’s place within it.

This is how empires end.

* * *

Anyway, below is the aforementioned comment about trying to compare the US, UK, Canada, Australia, and New Zealand — the various components of the former British Empire that are now the major participants in the present Anglo-American Empire. Here it is:

There is difficulty in comparing them, as they are all part of the same basic set of ideological traditions and cultural influences. All of their economies and governments have been closely intertwined for centuries. Even the US economy quickly re-established trade with Britain after the revolution. It was always as much a civil war as it was a revolution.

The Western neoliberalism we see now is largely a byproduct of pre-revolutionary British imperialism (and other varieties of trade-based imperialism, such as was seen even earlier in the influential Spanish Empire). The American Empire is simply an extension of the British Empire. There is no way to separate the two.

All those countries that are supposedly less war-like depend on the military of the American Empire to maintain international trade agreements and trade routes. The American Empire inherited this role from the British Empire, and ever since the two have been close allies in maintaining the Anglo-American geopolitical order.

So much of US taxpayers’ money doesn’t go to healthcare and such because it has to pay for this international military regime. That is what is hard for Americans to understand. We get cheap products because of imperialism, but there is a high price paid for living in the belly of the beast.

There are in many ways greater advantages to living more on the edge of the empire. It’s why early American colonists in the pre-revolutionary era had more freedom and wealth than British subjects living in England. That is the advantage of living in Canada or whatever, getting many of the benefits of the Anglo-American imperial order without having to pay as much of the direct costs for maintaining it. Of course, those in developing countries pay the worst costs of all, both in blood and resources.

If not for the complicity of the governments and citizens of dozens of countries, the Anglo-American empire and Western geopolitical order wouldn’t be possible. It was a set of alliances that were cemented in place because of two world wars and a cold war. It is hard to find too many completely innocent people within such an evil system of authoritarian power.

It is a strange phenomenon that those at the center of empire are both heavily oppressed and among the most accepting of oppression. I think it’s because, when you’re so deep within such an authoritarian structure, oppression becomes normalized. It doesn’t occur to you that all your money going to maintain the empire could have been used to fund public education, public healthcare, etc.

Thomas Paine ran into this problem. When he came to the colonies, he became riled up and found it was easy through writing to rile up others. Being on the edge of the empire offers some psychological distance that allows greater critical clarity. But when Paine returned home to England, he couldn’t get the even more oppressed and impoverished English peasantry to join in revolution, even though they would have gained the most from it.

In fact, the reform that was forced by the threat of revolution did end up benefiting those English lower classes. But that reform had to be inspired by fear of an external threat. It was the ruling elite that embraced reform, rather than it being forced upon them by the lower classes in England. The British monarchy and aristocracy were talented at suppressing populism while allowing just enough reform to keep the threat of foreign revolution at bay. But if not for that revolutionary fervor kicking at their back door, such internal reform may never have happened.

Interestingly, what led to the American Revolution was the British ruling elite’s decision to shift the costs of the empire to the colonies. The colonists were fine with empire so long as they benefited more than they had to pay. The situation is similar right now for the equivalent of colonists in the present Anglo-American imperial order. If, similarly, the costs of this empire were shifted to the allied nations, I bet you’d suddenly see revolutionary fervor against empire.

That probably would be a good thing.

What is the Moderate Center of a Banana Republic?

The Corruption Of Money
by Kevin Zeese and Margaret Flowers

Robert Weissman, the director of Public Citizen, points out that there is broad popular support for transforming the economy and government. He writes: “. . . more Americans believe in witches and ghosts than support Citizens United . . . There is three-to-one support for a constitutional amendment to overturn Citizens United.” Some other key areas of national consensus:

  • 83% agree that “the rules of the economy matter and the top 1 percent have used their influence to shape the rules of the economy to their advantage”;
  • Over 90% agree that it is important to regulate financial services and products to make sure they are fair for consumers;
  • Four-fifths say Wall Street financial companies should be held accountable with tougher rules and enforcement for the practices that caused the financial crisis;
  • By a three-to-one margin, the public supports closing tax loopholes that allow speculators and people who make money from short-term trades to pay less taxes on profits than full time workers pay on their income or wages.
  • About two-thirds oppose corporate trade deals like the Trans-Pacific Partnership and 75% believe such deals destroy more jobs than they create.

These are just a few examples that show near unanimity on issues where the government – answering to the oligarchs – does the opposite of what the public wants and needs.

90 Percent Of Public Lacks Trust In US Political System
by Staff

Seventy percent of Americans say they feel frustrated about this year’s presidential election, including roughly equal proportions of Democrats and Republicans, according to a recent national poll conducted by The Associated Press-NORC Center for Public Affairs Research. More than half feel helpless and a similar percent are angry.

Nine in 10 Americans lack confidence in the country’s political system, and among a normally polarized electorate, there are few partisan differences in the public’s lack of faith in the political parties, the nominating process, and the branches of government.

Americans do not see either the Republicans or the Democrats as particularly receptive to new ideas or the views of the rank-and-file membership. However, the candidacy of Bernie Sanders for the Democratic nomination is more likely to be viewed as good for his party than Donald Trump’s bid for the Republican Party.

The nationwide poll of 1,060 adults used the AmeriSpeak® Omnibus, a monthly multi-client survey using NORC at the University of Chicago’s probability-based panel. Interviews were conducted between May 12 and 15, 2016, online and using landlines and cellphones.

Some of the poll’s key findings are:

  • Just 10 percent of Americans have a great deal of confidence in the country’s overall political system while 51 percent have only some confidence and 38 percent have hardly any confidence.
  • Similarly, only 13 percent say the two-party system for presidential elections works, while 38 percent consider it seriously broken. About half (49 percent) say that although the two-party system has real problems, it could still work well with some improvements.
  • Most Americans report feeling discouraged about this year’s election for president. Seventy percent say they experience frustration and 55 percent report they feel helpless.
  • Few Americans are feeling pride or excitement about the 2016 presidential campaign, but it is grabbing the public’s attention. Two-thirds (65 percent) of the public say they are interested in the election for president this year; only 31 percent say they are bored. However, only 37 percent are feeling hopeful about the campaign, 23 percent are excited, and just 13 percent say the presidential election makes them feel proud.
  • The public has little confidence in the three branches of government. A quarter (24 percent) say they have a great deal of confidence in the Supreme Court and only 15 percent of Americans say the same of the executive branch. Merely 4 percent of Americans have much faith in Congress. However, more than half (56 percent) of Americans have a great deal of confidence in the military.
  • Only 29 percent of Democrats and just 16 percent of Republicans have a great deal of confidence in their party. Similarly, 31 percent of Democrats and 17 percent of Republicans have a lot of faith in the fairness of their party’s nominating process.
  • Neither party is seen as particularly receptive to fresh ideas. Only 17 percent of the public say the Democratic Party is open to new ideas about dealing with the country’s problems; 10 percent say that about the Republican Party.
  • The views of ordinary voters are not considered by either party, according to most Americans. Fourteen percent say the Democratic Party is responsive to the views of the rank-and-file; 8 percent report that about the Republican Party.
  • Donald Trump, the presumptive Republican nominee, has never held elected office or worked for the government, but most Americans do not regard the Republican Party as especially receptive to candidates from outside the usual influence of Washington and party politics. Only 9 percent consider the Republican Party open to outsiders.
  • Most Republicans (57 percent) say Trump’s candidacy has been good for the Republican Party, although only 15 percent of Democrats and 24 percent of independents agree.
  • The Democratic Party is not viewed as friendly to outsiders either. Only 10 percent say the Democratic Party is open to candidates that are independent of the established order.
  • However, in contrast to Trump, the entry of Bernie Sanders into the race for the Democratic nomination is not seen as a negative for the party. Nearly two-thirds (64 percent) of Democrats say Sanders’ bid for the nomination has been good for the Democratic Party, along with 43 percent of Republicans and 22 percent of independents (54 percent of independents report it is neither good nor bad). Although Sanders has served in Congress as a House member and Senator for more than 25 years, he was an independent and did not register as a Democrat until recently. […]

While Americans have doubts about the overall political system and its fairness, nearly 3 in 4 say they have at least some confidence that their vote will be counted accurately. Just 1 in 4 report they have hardly any confidence that their vote will be counted.

Still, many Americans express qualms about how well the two-party system works for presidential elections. Nearly 4 in 10 regard the two-party system as seriously broken. About half say this system for electing a president has major problems, but could still work with some improvement. Just 13 percent of the public says the two-party system works fairly well.

Americans also question the fairness of the political parties’ presidential nominating processes. About 4 in 10 have little confidence in the equity of the parties’ nominating process for president. Four in 10 have some faith that the Republican Party’s means of selecting its standard bearer is fair, but only about 1 in 10 have a great deal of confidence in the process. Similarly, 38 percent have some confidence in the Democratic Party’s procedures, but only 17 percent have a great deal of confidence.

Again, while partisans are more confident in their own party, the levels are low. Thirty-one percent of Democrats express confidence in the Democratic Party’s nominating process, compared with 9 percent of Republicans and 6 percent of independents. Republicans have even less faith in their party’s system: 17 percent have confidence in the Republican Party’s nominating process. Only 11 percent of Democrats and 5 percent of independents agree.

Many Americans want changes to the process. Seven in 10 would prefer to see primaries and caucuses be open to all voters, regardless of the party registration. Only 3 in 10 favor a system of closed nominating contests, where only voters registered in a party can participate in that party’s primary or caucus. A majority of each party say they favor open primaries and caucuses, though Democrats are more likely than Republicans to support them (73 percent vs. 62 percent).

Most states hold primaries rather than caucuses, and most voters prefer primaries. Eight in 10 Americans say primaries are a more fair method of nominating a candidate. Less than 1 in 5 view caucuses as a more fair method.

US Is Not A Democracy
by Eric Zuesse

A study, to appear in the Fall 2014 issue of the academic journal Perspectives on Politics, finds that the U.S. is no democracy, but instead an oligarchy, meaning profoundly corrupt, so that the answer to the study’s opening question, “Who governs? Who really rules?” in this country, is:

“Despite the seemingly strong empirical support in previous studies for theories of majoritarian democracy, our analyses suggest that majorities of the American public actually have little influence over the policies our government adopts. Americans do enjoy many features central to democratic governance, such as regular elections, freedom of speech and association, and a widespread (if still contested) franchise. But, …” and then they go on to say, it’s not true, and that, “America’s claims to being a democratic society are seriously threatened” by the findings in this, the first-ever comprehensive scientific study of the subject, which shows that there is instead “the nearly total failure of ‘median voter’ and other Majoritarian Electoral Democracy theories [of America]. When the preferences of economic elites and the stands of organized interest groups are controlled for, the preferences of the average American appear to have only a minuscule, near-zero, statistically non-significant impact upon public policy.”

To put it short: The United States is no democracy, but actually an oligarchy.

The authors of this historically important study are Martin Gilens and Benjamin I. Page, and their article is titled “Testing Theories of American Politics.” The authors clarify that the data available are probably under-representing the actual extent of control of the U.S. by the super-rich:

“Economic Elite Domination theories do rather well in our analysis, even though our findings probably understate the political influence of elites. Our measure of the preferences of wealthy or elite Americans – though useful, and the best we could generate for a large set of policy cases – is probably less consistent with the relevant preferences than are our measures of the views of ordinary citizens or the alignments of engaged interest groups. Yet we found substantial estimated effects even when using this imperfect measure. The real-world impact of elites upon public policy may be still greater.”

Nonetheless, this is the first-ever scientific study of the question of whether the U.S. is a democracy. “Until recently it has not been possible to test these contrasting theoretical predictions [that U.S. policymaking operates as a democracy, versus as an oligarchy, versus as some mixture of the two] against each other within a single statistical model. This paper reports on an effort to do so, using a unique data set that includes measures of the key variables for 1,779 policy issues.” That’s an enormous number of policy-issues studied.

What the authors are able to find, despite the deficiencies of the data, is important: the first-ever scientific analysis of whether the U.S. is a democracy, or is instead an oligarchy, or some combination of the two. The clear finding is that the U.S. is an oligarchy, no democratic country, at all. American democracy is a sham, no matter how much it’s pumped by the oligarchs who run the country (and who control the nation’s “news” media). The U.S., in other words, is basically similar to Russia or most other dubious “electoral” “democratic” countries. We weren’t formerly, but we clearly are now. Today, after this exhaustive analysis of the data, “the preferences of the average American appear to have only a minuscule, near-zero, statistically non-significant impact upon public policy.” That’s it, in a nutshell.
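(For readers who wonder what “controlled for” means in that finding, here is a toy regression of my own devising, not the Gilens and Page model itself: when elite and citizen preferences are highly correlated and policy tracks the elites, citizens look influential on their own, but their estimated effect collapses toward zero once elite preferences enter the model.)

import numpy as np

rng = np.random.default_rng(1)
n = 1779  # the number of policy issues in the study, used here only for flavor

elite = rng.normal(size=n)                          # elite support for each policy
citizen = 0.8 * elite + 0.6 * rng.normal(size=n)    # citizen support, correlated with elite
policy = elite + rng.normal(size=n)                 # in this toy world, policy follows elites

# Citizens alone appear influential, because they co-move with the elites.
citizen_alone = np.polyfit(citizen, policy, 1)[0]

# With elite preferences controlled for, the citizen coefficient shrinks toward zero.
X = np.column_stack([np.ones(n), elite, citizen])
coef, *_ = np.linalg.lstsq(X, policy, rcond=None)

print(f"citizen slope, bivariate:  {citizen_alone:.2f}")
print(f"citizen slope, controlled: {coef[2]:.2f}   (elite: {coef[1]:.2f})")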

Fighting For A Legitimate Democracy, By And For The People
by Kevin Zeese & Margaret Flowers

Two weeks ago in reaction to the McCutcheon decision we touched on an issue that will become central to our movement: Has the democratic legitimacy of the US government been lost?

We raised this issue by quoting a Supreme Court Justice, former US president and a sitting US Senator:

“The legitimacy of the US government is now in question. By illegitimate we mean it is ruled by the 1%, not a democracy ‘of, by and for the people.’ The US has become a carefully designed plutocracy that creates laws to favor the few. As Stephen Breyer wrote in his dissenting opinion, American law is now ‘incapable of dealing with the grave problems of democratic legitimacy.’ Or, as former president, Jimmy Carter said on July 16, 2013 “America does not at the moment have a functioning democracy.”

“Even members of Congress admit there is a problem. Long before the McCutcheon decision Senator Dick Durbin (D-IL) described the impact of the big banks on the government saying: ‘They own the place.’ We have moved into an era of a predatory form of capitalism rooted in big finance where profits are more important than people’s needs or protection of the planet.”

The legitimacy of the US government derives from rule by the people. If the US government has lost its democratic legitimacy, what does that mean? What is the impact? And, what is our responsibility in these circumstances?

We can go back to the founding document of this nation, the Declaration of Independence for guidance. This revolutionary document begins by noting all humans are born with “inalienable rights” and explains “That to secure these rights, Governments are instituted” and that government derives its “powers from the consent of the governed.” Further, when the government “becomes destructive of these ends, it is the Right of the People to alter or abolish it, and to institute new government….”

After we wrote about the lost democratic legitimacy of the United States, this new academic study, which will be published in Perspectives on Politics, revealed that a review of a unique data set of 1,779 policy issues found:

“In the United States, our findings indicate, the majority does not rule — at least not in the causal sense of actually determining policy outcomes. When a majority of citizens disagrees with economic elites and/or with organized interests, they generally lose. Moreover, because of the strong status quo bias built into the U.S. political system, even when fairly large majorities of Americans favor policy change, they generally do not get it.”

And, this was not the only study to reach this conclusion this week. Another study published in the Political Research Quarterly found that only the rich get represented in the US senate. The researchers studied the voting records of senators in five Congresses and found the Senators were consistently aligned with their wealthiest constituents and lower-class constituents never appeared to influence the Senators’ voting behavior. This oligarchic tendency was even truer when the senate was controlled by Democrats.

Large Majorities of Americans Do Not Rule

Let the enormity of the finding sink in – “the majority does not rule” and “even when fairly large majorities of Americans favor policy change, they generally do not get it.”

Now, for many of us this is not news, but to have an academic study document it by looking at 1,779 policy issues, empirically proving the lack of democratic legitimacy, is a major step forward in people’s understanding of what is really happening in the United States and what we must do.

Before the occupy movement began we published an article, We Stand With the Majority, that showed super majorities of the American people consistently support the following agenda:

  • Tax the rich and corporations
  • End the wars, bring the troops home, cut military spending
  • Protect the social safety net, strengthen Social Security and provide improved Medicare to everyone in the United States
  • End corporate welfare for oil companies and other big business interests
  • Transition to a clean energy economy, reverse environmental degradation
  • Protect worker rights including collective bargaining, create jobs and raise wages
  • Get money out of politics

While there was over 60% support for each item on this agenda, the supposed ‘representatives’ of the people were taking the opposite approach on each issue. On September 18, the day after OWS began, we followed up with a second article dealing with additional issues that showed the American people would rule better than the political and economic elites.

While many Americans think that the government representing wealthy interests is new, in fact it goes back to the founding of the country. Historian Charles Beard wrote in the early 1900s that the chief aim of the authors of the U.S. Constitution was to protect private property, favoring the economic interests of wealthy merchants and plantation owners rather than the interests of the majority of Americans who were small farmers, laborers, and craft workers.

The person who is credited with being the primary author of the Constitution, James Madison, believed that the primary goal of government is “to protect the minority of the opulent against the majority.” He recognized that “if elections were open to all classes of people, the property of landed proprietors would be insecure.” As a result of these oligarchic views, only 6% of the US population was originally given the right to vote. And, the first chief justice of the US Supreme Court, John Jay believed that “those who own the country ought to govern it.”

This resulted in the wealth of the nation being concentrated among a small percentage of the population, with their wealth being created by slaves and other low-paid workers who had no political participation in government. The many creating wealth for the few has continued throughout US history through sweatshops, child labor, and now poverty workers, like those at the nation’s largest employer, Walmart. By putting property ahead of human rights, the Constitution put in place a predatory economic system of wealth creation.

In fact, Sheldon Wolin describes the Constitutional Convention as blocking the colonists’ desire for democracy, as economic elites “organize[d] a counter-revolution aimed at institutionalizing a counterforce to challenge the prevailing decentralized system of thirteen sovereign states in which some state legislatures were controlled by ‘popular’ forces.” The Constitution was written “to minimize the direct expression of a popular will” and block the “American demos.” For more see our article, Lifting the Veil of Mirage Democracy in the United States.

In many respects, since the founding, the people of the United States have been working to democratize the United States. Gradually, the right to vote expanded to include all adults, and direct election of US Senators was added as a constitutional amendment, but these changes do not mean we have a real democracy. The work is not done. The legitimacy of people ruling has not been achieved.

While we have the right to vote, our carefully managed elections consistently give Americans a choice of candidates approved by the wealthiest; and through campaign financing, media coverage, ballot access, managing who participates in debates and other means, the ruling elite ensure an outcome that will not challenge the power of the wealthiest Americans and the country’s biggest businesses.

This week, Nomi Prins, a former managing partner at Goldman Sachs wrote about the long history of how the nation’s biggest bankers have controlled presidents throughout the last century. She writes: “With so much power in the hands of an elite few, America operates more as a plutocracy on behalf of the upper caste than a democracy or a republic. Voters are caught in the crossfire of two political parties vying to run Washington in a manner that benefits the banking caste, regardless of whether a Democrat or Republican is sitting in the Oval.”

In many respects, our task is to complete the American Revolution and create a real democracy where the people rule through fair elections of representatives and there is increased direct and participatory democracy.

Living In The Illusion Of Democracy
by Kevin Zeese and Margaret Flowers

The Democrats and Republicans Have Created Fraudulent Debates

The hubris and manipulation of the two establishment parties is evident in the presidential debates. The two Wall Street-funded parties decide who is allowed to participate in the debates. The so-called debate ‘commission’ is a disguised apparatus of the Democratic and Republican parties. It is a commission in name only; in reality, it is a corporation created by the two parties and controlled by the two parties. When the disguise is removed, it becomes obvious that the Democrats and Republicans are choosing to only debate Democrats and Republicans, preventing any competition.

In 1988, the Republican co-founder, Frank Fahrenkopf, who remains a co-chair, indicated at the news conference announcing the ‘commission’ that they were “not likely to look with favor on including third-party candidates in the debates.” The New York Times quoted the Democratic co-founder, Paul Kirk, saying: “As a party chairman, it’s my responsibility to strengthen the two-party system.” As a result, there has not been a third party candidate in the debates for 24 years, even though there have been third party candidates on enough ballots to win a majority of electoral college votes in every election. Closed debates create the illusion that there are only two candidates running for president.

When the ‘commission’ was founded, the League of Women Voters warned that the parties taking over the debates would “perpetrate a fraud on the American voter.” They resigned their historic role as the non-partisan sponsors of the debates because they refused to be “an accessory to the hoodwinking of the American public.” They foretold the truth, and now we must all work to undo the hoax.

This year, 76% of voters want four candidates in the debates. A majority of people in the US believe neither party represents them. The two parties are shrinking and each now makes up less than 30% of voters, with a record 50% of voters considering themselves independents. The two establishment parties have nominated the two most unpopular candidates in history, with six in ten voters disliking Clinton and Trump. An Associated Press/GfK poll found that four out of five voters fear at least one of the two nominees, and 25% fear both, a number confirmed by Gallup. Three-quarters of those planning to vote will do so based on whom they dislike rather than whom they support.

This is why three-quarters of voters want Jill Stein of the Green Party and Gary Johnson of the Libertarian Party included in the debates: people want more choices. Both will be on almost every ballot, but voters will not get to hear from them and learn what they stand for. The dislike of the two parties and their candidates is also why the fake ‘commission’ must do all it can to prevent voters from knowing that they have more choices for president.

And they have an ally in the media, which expects to receive $6 billion in political advertising in 2016. The media wants that advertising more than it wants a real democracy. As the CEO of CBS said, “Super PACs may be bad for America, but they’re very good for CBS.” As a result, you will see no criticism of the fake debate commission. Jill Stein was able to briefly sneak an article onto The Hill website about her experience during the first debate last week: being excluded from the debate, being escorted off campus while doing media interviews, holding a people’s debate outside the debate area, and 22 people being arrested for protesting the closed debates, as well as how her campaign used social media to break through. The article was up briefly, but quickly disappeared from the front page.

In almost every election, a large majority of US voters want more candidates in the debates, but the phony commission serves as a blockade, preventing real democracy. If we want a democracy that is of, by and for the people, it is critical that we end the debate commission’s fraud on US voters. Rather than creating barriers to participation, the rule should be simple and objective: if a candidate is on enough ballots to win 270 electoral college votes, they should be included in the debates, since very few candidates overcome the ballot-access hurdles placed before independent parties.

The United States is in a Democracy Crisis

The fraudulent debates are one example among many of how US democracy is manipulated and managed to ensure that only candidates who represent the wealthy can be elected. The Associated Press-NORC Center for Public Affairs reported this year on the extent of the democracy crisis. They found the legitimacy of US government has disappeared:

“Nine in 10 Americans lack confidence in the country’s political system, and among a normally polarized electorate, there are few partisan differences in the public’s lack of faith in the political parties, the nominating process, and the branches of government.”

There is close to unanimous consensus that the elections fail voters and do not create a legitimate government. The poll, taken as the primary season came to a close, found that “only 13% say the two-party system for presidential elections works.” The elections have left most Americans feeling discouraged, with 70% saying they experience frustration and 55% reporting they feel helpless. Only 13% feel proud of the presidential election.

The excluded parties are taking unusual steps to reach voters. Jill Stein accomplished a historic breakthrough during the first presidential debate by using cutting-edge social media tools to insert her live voice into the debate in real time. The Stein-Baraka campaign used Facebook, Twitter and Periscope to reach approximately 15 million voters within 24 hours of the first debate, “Jill Stein” trended at #1 on Facebook on debate day, and Google searches spiked, with one of the top search phrases being “How do I vote for Jill Stein?” No third-party candidate has reached such a large audience since Ross Perot was included in the debates 24 years ago. But this cannot compete with the two-party debates, which appeared on every network with an audience of more than 80 million and were surrounded by constant media discussion before and after.

During the upcoming vice presidential debate on Tuesday, candidate Ajamu Baraka will use the same social media tools as Stein, as well as being inserted live into the debate by Democracy Now. Baraka will answer every question as if he were included, with the debate paused and then resumed after he answers. This three-candidate debate can be viewed on Jill Stein’s Facebook page and website, as well as on Ajamu Baraka’s Facebook page and on Democracy Now.

Presidential debates are not only about getting someone elected; they are also about setting the political agenda for the country. With only the Democratic and Republican nominees included, many key political issues are not being discussed. The debates spend a lot of time on nonsense while ignoring many important issues that affect the lives of the people of the United States, as well as the prospects for a livable planet.

In the first debate, time was spent on whether President Obama was born in the United States and whether Donald Trump’s criticism of a former Miss Universe was inappropriate. But there was no discussion of the tens of millions of people living in poverty, what the country can do to confront climate change, how to erase student debt, or whether the United States should be an empire.

In fact, the word “empire” has never been uttered in a presidential debate, as the political elites do not want to discuss the reality of US global domination. They do not want people considering that an empire economy is the reason for many of our economic problems. These are a few issues among many that will not be discussed this election season.

And if an issue like healthcare is discussed, there will be no one on stage who represents the views of the 60% of voters who support single-payer, improved Medicare for All, because neither of the establishment party nominees does. There will also be no one on stage to talk about key movement issues like the systemic racism exposed by Black Lives Matter, the wealth inequality demonstrated by Occupy, and the protests against pipelines by Indigenous Peoples and communities across the country. On these and many other issues there will be no discussion, or only discussion from the point of view of the two Wall Street-dominated parties. The political agenda will be warped and will ignore the people’s concerns.

Mark Fisher’s Suicide

Mark Fisher died earlier this year. I didn’t even know about it. He wasn’t much older than me, and similarly he suffered from severe depression, having struggled with it for a long time. That is what finally got him, by way of suicide. He was an interesting writer, and no doubt his depression gave an edge to his way of thinking and communicating.

His book on capitalist realism was insightful and brought difficult ideas down to ground level. He had a talent for explanation, connecting the unfamiliar to the familiar. His descriptions of capitalism in some ways fit in with Corey Robin’s theory of the reactionary mind, but with his own twist. Here is Fisher from Capitalist Realism:

“When it actually arrives, capitalism brings with it a massive desacralization of culture. It is a system which is no longer governed by any transcendent Law; on the contrary, it dismantles all such codes, only to re-install them on an ad hoc basis. The limits of capitalism are not fixed by fiat, but defined (and re-defined) pragmatically and improvisationally. This makes capitalism very much like the Thing in John Carpenter’s film of the same name: a monstrous, infinitely plastic entity, capable of metabolizing and absorbing anything with which it comes into contact.”

I always appreciate writers who can connect intellectual ideas to pop culture examples. It’s one thing to call something reactionary but it’s a whole other thing to offer a clear image of what that means. That which is reactionary is also dynamically creative in that it can take in anything — not just co-opt but absorb, assimilate, and transform anything and everything it touches (or that touches it). Portraying capitalism as the Thing makes it more real within the imagination.

I just bought his latest book, which also came out this year in the US. I’ll have to prioritize reading it before all else.

In Memoriam: Mark Fisher
by Dan Hassler-Forest, Ellie Mae O’Hagan, Mark Bould, Roger Luckhurst, Carl Freedman, Jeremy Gilbert

Mark Fisher’s K-punk blogs were required reading for a generation
by Simon Reynolds

Remembering Mark Fisher
by David Stubbs