Paul Adkin on Decadence & Stagnation

“Decadence: when things are just too good and easy that no one bothers to push forward anymore, bringing about stagnation …

But there is also another kind of stagnation: one which comes about because there just isn’t enough time to go forward; when all time is taken up with something that is essentially futile when considered from the point of view of the bigger picture. Like making money. Even the seemingly dynamic world of business, if it is dedicated only to business and not to authentically meaningful human progress (things associated with knowledge and discovery), it is essentially stagnating. Any society that is a simulacra society, hell-bent on reproducing copies rather than on developing its creativity, is a decadent, stagnating society. We are stagnant not because of what we are doing, our anthill society is always busy, but because what we are driven by, in all this anthill activity, is not creative. When production is synonymous with reproduction, then we know we have fallen into the stagnant pool of decadence.

“Nietzsche talked about the residual nature of decadence[1]. That decadence is a cumulative thing. Certainly, it is nurtured both by dogma and nihilism. Only a sceptical meaningfulness can push forward in a creative way.

“Sceptical meaningfulness? How can such a thing be? Surely it is a contradiction in terms.

“To understand how this oxymoron combination can work, we need to see meaningfulness as a forward pushing phenomenon. Once it stops pushing forward, meaningfulness slips into dogma. Meaning is fuelled by truth, but it does not swim in truth as if truth were a lake. Truth, in order to be lasting, has to be a river.”

from Decadence & Stagnation by Paul Adkin

The Way of Radical Imagination

Someone questioned me about what radical imagination is. I wasn’t sure if they were merely being disingenuous, playing devil’s advocate as an intellectual pose. An intellectual debate about the issue wouldn’t have brought either of us closer to understanding.

Anyone who has ever had their mind shaken loose by seeing in a new way knows the power of radical imagination, whether or not they could explain it. Radical means that which goes to the root. As such, radical imagination is what has the capacity to shake us to our foundation or send us tumbling down unexplored caverns.

The intellectual who was interrogating me seems more attracted to the dark imagination than to the radical imagination, not that the two are mutually exclusive. He considers himself a radical and yet he apparently has a hard time imagining what exists outside of the iron prison. I get the sense that he has come to romanticize dystopia and apocalypse, which he rationalizes as his seeking to understand. The danger is that it can lead to a mirror image of the dogmatic utopian, exchanging one absolutist fantasy for another.

I’m not dismissing this motivation to bleakly stare down ugly truths. Some of my favorite writers leaned heavily in this direction. There is a dark bent to Ursula K. Le Guin, Philip K. Dick, Octavia Butler, and others; but their speculations didn’t end in mere gloomy cynicism. They were always looking beyond. Even a perverse and pessimistic visionary like William S. Burroughs sought to creatively portray alternative societies and other ways of being.

In my own sense of radical imagination, what drives my thinking is a profound epistemological dissatisfaction and ideological disloyalty, not just toward the status quo but also toward much of what opposes it. I’ve grown tired of predictable conflicts that endlessly repeat, like some cosmic tragicomedy. Each side reinforces the other, making victory for either side impossible. Radical imagination, however, seeks to escape this trap.

No amount of studying the hegemonic order will necessarily help one to see the hidden aporias and lacunae, the gaps in the structure. Negative capability is only useful to the degree that it opens the mind to negative space as creative void and a passageway through. The darkness can paralyze us in blind immobility or it can shift our perception into other senses.

The stakes are high. And the consequences all too personal. It goes far beyond any social order. This touches upon our humanity, the psychological reality of our being.

We stand in a hallway of doors, not knowing what is behind them. The entire social reality we live within is that hallway. We stand there in that tight place, the crowd shuffling back and forth. Groups form, taking up different positions along the hallway, and sometimes fight with other groups. A few curious souls notice the doors themselves, but the doors remain unopened. That hallway is warm and safe. We are surrounded by the familiar and we have no fear of loneliness.

But what if some of the doors were cracked open, allowing one to barely glimpse something else? What then? Radical imagination is that inability to ignore the light coming through the crack, the temptation to press against the door, the curiosity about what is on the other side.

 

Delirium of Hyper-Individualism

Individualism is a strange thing. For anyone who has spent much time meditating, it’s obvious that there is no there there. The self slips through one’s grasp like the aether that ancient philosophers tried to study. The individual self is the modernization of the soul. Like the ghost in the machine and the god of the gaps, it is a theological belief defined by its absence in the world. It’s a social construct, a statement that is easily misunderstood.

In modern society, individualism has been raised up to an entire ideological worldview. It is all-encompassing, having infiltrated nearly every aspect of our social lives and become internalized as a cognitive frame. Traditional societies didn’t have this obsession with an idealized self as isolated and autonomous. Go back far enough and the records seem to show societies that didn’t even have a concept, much less an experience, of individuality.

Yet for all its dominance, the ideology of individualism is superficial. It doesn’t explain much of our social order and personal behavior. We don’t act as if we actually believe in it. It’s a convenient fiction that we so easily disregard when inconvenient, as if it isn’t all that important after all. In our most direct experience, individuality simply makes no sense. We are social creatures through and through. We don’t know how to be anything else, no matter what stories we tell ourselves.

The ultimate value of this individualistic ideology is, ironically, as social control and social justification.

The wealthy, the powerful and privileged, even the mere middle class to a lesser degree — they get to be individuals when everything goes right. They get all the credit and all the benefits. All of society serves them because they deserve it. But when anything goes wrong, they hire lawyers who threaten anyone who challenges them or they settle out of court, they use their crony connections and regulatory capture to avoid consequences, they declare bankruptcy when one of their business ventures fails, and they endlessly scapegoat those far below them in the social hierarchy.

The profits and benefits are privatized while the costs are externalized. This is socialism for the rich and capitalism for the poor, with the middle class getting some combination of the two. This is why democratic rhetoric justifies plutocracy while authoritarianism keeps the masses in line. This stark reality is hidden behind the utopian ideal of individualism with its claims of meritocracy and a just world.

The fact of the matter is that no individual ever became successful alone. Let’s do an experiment. Take an individual baby, let’s say the little white male baby of wealthy parents with their superior genetics. Now leave that baby in the woods to raise himself into adulthood and bootstrap himself into a self-made man. I wonder how well that would work for his survival and future prospects. If privilege and power, if opportunity and resources, if social capital and collective inheritance, if public goods and the commons have no major role to play such that the individual is solely responsible for himself, we should expect great things from this self-raised wild baby.

But if it turns out that hyper-individualism is total bullshit, we should instead expect that baby to die of exposure and starvation or become the prey of a predator feeding its own baby without any concern for individuality. Even simply leaving a baby untouched and neglected in an orphanage will cause failure to thrive and death. Without social support, our very will to live disappears. Social science research has demonstrated the immense social and environmental influences on humans. For a long time now there has been no real debate about this social reality of our shared humanity.

So why does this false belief and false idol persist? What horrible result do we fear if we were ever to be honest with ourselves? I get that the ruling elite are ruled by their own egotistic pride and narcissism. I get that the comfortable classes are attached to their comforting lies. But why do the rest of us go along with their self-serving delusions? It is the strangest thing in the world for a society to deny it is a society.

To Be Perceived As Low Class Or Not

In my mother’s family, hers was the first generation to attend college. She went to and graduated from Purdue University, a state college. Before that, her own mother, my grandmother, had been the first in her family to get a high school diploma.

I never thought of my grandmother as an overly smart person, not that I ever knew her IQ. She never seemed like an intellectually stimulating person, but apparently she was a good student. She always liked to read. I doubt she read too many classics that weren’t Reader’s Digest abridged books. Still, she read a lot and had a large vocabulary. She regularly did crossword puzzles and never used a dictionary to look up a word. For a woman of her age, graduating high school was a major accomplishment. Most people she grew up with probably didn’t graduate, including the man she married. She became a secretary, and such office work required a fair amount of intellectual ability. Specifically, my grandmother was a secretary at Purdue, when my mother was in high school and later attending Purdue. My grandfather was jealous of his wife spending so much time with professors, as he had an inferiority complex and was highly class conscious, a typical working class guy of the time.

A major reason my grandmother didn’t come across as intellectual was simply the way she spoke. She had a Hoosier accent, such as pronouncing fish as feesh, cushion as cooshion, and sink as zink (the latter known as the Hoosier apex); along with adding an extra ‘s’ to words, as in “How’s come?”. It was the accent of poor whites, indicating that your family likely came from the South at some point. Like the Ozark accent, it seems to be a variant of the Appalachian accent of the region many Hoosiers came from. But there may also be an old German influence mixed in, because so much of my Upper Southern ancestry were early German immigrants. Even in Indiana, having a Hoosier accent marks you as ‘Southern’ and, for many Northerners, it sounds Southern. When my family moved to a Chicago suburb, my mother was often asked if she was Southern. At Purdue, her speech pathology professors would correct her because of her slurring the ‘s’ sound (partly because of an overbite) and because of her saying bofe instead of both (common among Hoosiers, Southerners, and some black populations).

The point is that speaking with such an accent is not correct, according to Standard English. It is stereotyped as unsophisticated or even unintelligent. My grandmother sounded like this to a strong degree. But she knew proper English. Part of her job as a secretary at Purdue was to rewrite and revise official documents, including research papers and dissertations. It was my not-so-smart-sounding grandmother whose job it was to correct and polish the writing of professors and others who sought her out. She helped make them sound smart, on paper. And she helped two of her children graduate college. Apparently, she ended up writing many of my uncle’s papers for his classes.

One of my grandmother’s bosses was Earl L. Butz. He was the head of the agricultural economics department. After a stint under President Eisenhower, Butz returned to Purdue and became the dean of the college of agriculture. He later returned to politics under the Nixon and Ford administrations. After his Hoosier-style racism destroyed his career, he headed back to Purdue again — it might be noted that Butz’s hometown, Albion (1, 2), and the location of Purdue, West Lafayette (3), had histories of racism; and the FBI in recent years has listed Purdue as having one of the highest hate crime rates among colleges and universities in the US (4). This downturn didn’t stop his legacy of government-subsidized big ag that destroyed the small family farm and created a glut of corn products found in almost everything Americans eat.

Butz died in West Lafayette, where my mother was born and grew up. Like my maternal family, Butz came from poor Hoosier stock. If my grandmother had been a man, instead of a woman, or if she had been born later, she surely could have gotten a college education. Butz apparently was ambitious, but I don’t know that his career indicates he was smarter than average. Maybe my grandmother was far smarter than she appeared, even if the world she lived in didn’t give her much opportunity. She would have spent years reading highly academic writing and likely at one point could have had an intelligent discussion about agricultural economics. Being a poor Hoosier woman, she didn’t have many choices other than marrying young. She did what was expected of her. Most people do what is expected of them. It’s just that some people have greater expectations placed upon them, along with greater privileges and resources made available to them. In the past, a poor woman, like members of minority groups, had very little chance of rising from working poverty to become a major political figure determining national agricultural policy.

It’s so easy to judge people by how they appear or how they sound. My mother, unlike my grandmother, came of age at a time when women were finally given a better chance in life. Still, my mother was directed into what was considered women’s work, a low-paying career as a speech pathologist in public schools. Yet this did give my mother the opportunity to escape her working class upbringing and to eventually lose her Hoosier accent. My mother, who is no smarter than my grandmother, can now speak in a Standard American non-accent accent that sounds intelligent according to mainstream society; but that wouldn’t have been the case back when my mother had an overbite because of lack of dental work and spoke like a poor white. What changed was society, the conditions under which human potential is either developed or suppressed.

The Head of Capital and the Body Politic

What is capitalism? The term is etymologically related to cattle (and chattel). The basic notion of capitalism is fungible wealth. That is, property that can be moved around, like cattle (or else what can be moved by cattle, such as goods put in a wagon pulled by cattle or some other beast of burden). It relates to a head of cattle. The term capitalism is derived from capital or rather capitale, a late Latin word based on caput, meaning of the head.

A capital is the head of a society and the capitol is where the head of power resides — Capital vs. Capitol:

Both capital and capitol are derived from the Latin root caput, meaning “head.” Capital evolved from the words capitālis, “of the head,” and capitāle, “wealth.” Capitol comes from Capitōlium, the name of a temple (dedicated to Jupiter, the Roman equivalent of the Greek god Zeus) that once sat on the smallest of Rome’s seven hills, Capitoline Hill.

But there is also the body politic or the body of Christ. The head has become the symbolic representation of the body, but the head is just one part of the body. It is the body that is the organic whole, with the people as demos: national citizenry, community members, church congregants, etc. This is the corporeal existence of the social order. And it is the traditional basis of a corporation, specifically as representing some kind of personhood. At one time, objects and organizations were treated as having actual, not just legal, personhood. The body of Christ was perceived as a living reality, not just a convenient way for the powerful to wield their power.

If you go back far enough, the head of a society was apparently quite literal. In the ancient world, when a leader died, they often lopped off his head because that was the source of the voice of authority. Supposedly, bicameral societies involved an experience where people continued hearing the voices of dead kings and godmen, which is presumably why they kept the skulls around. The earliest known permanent structures were temples of death cults with headless imagery, and these temples were built prior to humans settling down — prior to agriculture, pottery, and domesticating cattle. They built houses for their gods before they built houses for themselves. The capitals of these societies were temples, and those were the convenient locations for storing their holy skulls.

Gobekli Tepe, like many other similar sites, was located on a hill. That has long been symbolic of power. After bicameral societies developed, they built artificial hills, whether by mounding up dirt or by stacking large stones into pyramids. The head is at the top of the body and it is from that vantage point that all of the world can be seen. It was natural to associate the panoramic view of a hill or mountain with power and authority, to associate vision with visionary experience. Therefore, it made sense to locate a god’s house in such a high place. Temples and churches, until recent history, were typically the tallest structures in any town or city. In this age of capitalism, it is unsurprising that buildings of business now serve that symbolic role, with what is held in highest esteem housed in the tallest buildings. The CEO is the head of our society, quite literally at the moment with a businessman as president, a new plutocratic aristocracy forming.

What we’ve forgotten is that the head is part of a body. As a mere part of the body, the head should serve the body, in that the part should serve the whole and not the other way around. In tribal societies, there is the big man who represents the tribe. He is the head of the community, but his ability to command submission is severely limited. In Native American tribes, it was common for clans to make their own decisions, whether to follow the tribal leader or not. The real power was in the community, in the social order. The Amazonian Piraha go so far as to have no permanent leadership roles at all.

Even in the more complex Western social order before capitalism took hold, feudal lords were constricted by social responsibilities and obligations to their communities. These feudal lords originated from a tradition where kings were beholden to and originally chosen by the community. Their power wasn’t separate from the community, although feudalism slowly developed in that direction, which made possible the takeover by privatized capitalism. But even in early capitalism, plantation owners were still acting as the big men of their communities, handling trade with the external world and taking care of problems within the local population. Store owners began taking over this role. Joe Bageant described the West Virginian town he grew up in during the early 20th century: it still operated according to a barter economy in which all outside trade flowed through the store owner, with no monetary system required within the community.

A major difference in these early societies is how important social order was. It was taken as reality, in the way we today take individuality as reality. For most of human existence, most humans would never have been able to comprehend our modern notion of individuality. Primary value was not placed on the individual, not even on the individual leader who represented something greater than himself. Even the Roosevelts as presidents still carried a notion of noblesse oblige, which signified that there was something more important than their own individuality, one of the most ancient ideas and one that has almost entirely disappeared.

Interestingly, pre-modern people, as with tribal people, in some ways had greater freedom in their identity for the very reason that their identity was social, rather than individual. The Piraha can change their name and become a new person, as far as other Piraha are concerned. Under feudalism, carnival allowed people to regularly lose their sense of identity and become something else. We modern people are so attached to our individuality that losing our self seems like madness. Our modern social order is built on the rhetoric of individuality, and this puts immense weight on individuals, possibly explaining the high rates of mental illness and suicide in modern society. Madness and death are our only escapes from the ego.

Capitalism, as globalized neoliberalism, is a high pressure system. Instead of the head of society serving the body politic, we worship the detached head as if a new death cult has taken hold. A corporation is the zombie body without a soul, the preeminent form of our corporatist society with the transnational CEO as the god king standing upon his temple hill. We worship individuality to such a degree that only a few at the top are allowed to be genuine individuals, a cult of death by way of a cult of personality, power detached from the demos like a plant uprooted. The ruling elite are the privileged individuals who tell the dirty masses what to do, the voices we hear on the all-pervasive media. The poor are just bodies to be sacrificed on the altar of desperation, homelessness, prison, and war. As Margaret Thatcher stated in no uncertain terms, there is no such thing as society. That is to say there is no body politic, just a mass of bodies as food for the gods.

The head of power, like a cancerous tumor, has grown larger than the body politic. The fungible wealth of capitalism can be moved, but where is it to move? The head can’t move without the body. Wealth can’t be separated from the world that creates it. Do the plutocrats plan on herding their wealth across the starry heavens in the hope of escaping the gravity of the corporeal earth? If we took the plutocrat’s hallowed skull and trapped his divine being in a temple hill, what would the voice tell us?

At the end of the Bronze Age, a major factor in the mass collapse of civilizations was the horse-drawn chariot. Horses were an early domesticated animal, a major form of fungible wealth. Horses and chariots made new forms of warfare possible, involving large standing armies that could be quickly moved across vast distances with supply chains to keep them fed and armed. Along with other factors, this was a game-changer, and the once stable bicameral societies fell one after another. Bicameral societies were non-capitalistic, but the following Axial Age would set the foundations for what would eventually become modern capitalism. Bicameral civilization had remained stable for millennia. The civilization formed from the Axial Age has maintained itself and we are the inheritors of its traditions. The danger is that, like the bicameral societies, we might become the victims of our own success in growing so large. Our situation is precarious. A single unforeseen factor could send it all tumbling down. Maybe globalized neoliberalism is our horse-drawn chariot.

A head detached from its body is the symbol of modernity, grotesquely demonstrated by the guillotine of the French Revolution, the horror of horrors to the defenders of the ancien regime. Abstract ideas have taken on a life of their own, with ideological systems reaching far beyond what supports them. It’s like a tree clinging to a crumbling cliffside, as if it were hoping to spread its limbs like wings to take flight out across the chasm below. In forgetting the ground of our being, what has been lost and what even greater loss threatens? Before revolution had begun but with revolution in the air, Jean-Jacques Rousseau wrote in 1750 (Discourse on the Arts and Sciences), “What will become of virtue if riches are to be acquired at any cost? The politicians of the ancient world spoke constantly of morals and virtue; ours speak of nothing but commerce and money.” That question is now being answered.

* * *

There was one detail I forgot to work into this piece. Feudalism was on my mind. The end of feudalism was the final nail in the coffin for the societal transformation that began during the Axial Age. What finally forced the feudal order, upon which the ancien regime was dependent, to fall apart or rather be dismantled was sheep, another domesticated animal.

Feudalism was dependent on labor-intensive agriculture that required a large local peasant population. With sheep herding, fewer people were required. The feudal commons were privatized, the peasants kicked off the land, entire villages were razed to the ground, and probably millions of people over several centuries were made destitute, homeless, and starving.

Vast wealth was transferred into private hands. This created a new plutocratic class within a new capitalist order. There is an interesting relationship between domesticated animals and social change. Another example of this is how free-ranging pigs in the American colonies wreaked havoc on Native American villages and gardens, making their way of life impossible.

This process of destruction is how civilization as we know it was built. Some call this creative destruction. For others, it has been plain destruction.

Useful Fictions Becoming Less Useful

Humanity has long been under the shadow of the Axial Age, no less true today than in centuries past. But what has this meant in both our self-understanding and in the kind of societies we have created? Ideas, as memes, can survive and even dominate for millennia. This can happen even when they are wrong, as long as they are useful to the social order.

One such idea involves nativism and essentialism, made possible through highly developed abstract thought. This notion of something inherent went along with the notion of division, from mind-body dualism to brain modules (what is inherent in one area being separate from what is inherent elsewhere). It goes back at least to the ancient Greeks such as with Platonic idealism (each ideal an abstract thing unto itself), although abstract thought required two millennia of development before it gained its most powerful form through modern science. As Elisa J. Sobo noted, “Ironically, prior to the industrial revolution and the rise of the modern university, most thinkers took a very comprehensive view of the human condition. It was only afterward that fragmented, factorial, compartmental thinking began to undermine our ability to understand ourselves and our place in— and connection with— the world.”

Maybe we are finally coming around to more fully questioning these useful fictions because they have become less useful as the social order changes, as the entire world shifts around us with globalization, climate change, mass immigration, etc. We took emotions to be so essential that we declared war against one of them with the War on Terror, as if this emotion were definitive of our shared reality (and a great example of metonymy, by the way); but obviously fighting wars against a reified abstraction isn’t an optimal strategy for societal progress. Maybe we need new ways of thinking.

The main problem with useful fictions isn’t necessarily that they are false, partial, or misleading. A useful fiction wouldn’t last for millennia if it weren’t, first and foremost, useful (especially true in relation to the views of human nature found in folk psychology). It is true that our seeing these fictions for what they are is a major change, but more importantly, what led us to question their validity is that some of them have stopped being as useful as they once were. The nativists, essentialists, and modularists argued that such things as emotional experience, color perception, and language learning were inborn abilities and natural instincts: genetically-determined, biologically-constrained, and neurocognitively-formed. Based on theory, immense amounts of time, energy, and resources were invested in the promises made.

This motivated the entire search to connect everything observable in humans back to a gene, a biological structure, or an evolutionary trait (with the brain getting outsized attention). Yet reality has turned out to be much more complex, with environmental factors, epigenetics, brain plasticity, etc. The original quest hasn’t been as fruitful as hoped for, partly because of problems in conceptual frameworks and the scientific research itself, and this has led some to give up on the search. Consider how when one part of the brain is missing or damaged, other parts of the brain often compensate and take over the correlated function. There have been examples of people lacking most of their brain matter who were still able to function with what appears to be outwardly normal behavior. The whole is greater than the sum of the parts, such that the whole can maintain its integrity even without all of the parts.

The past view of the human mind and body has been too simplistic to an extreme. This is because we’ve lacked the capacity to see most of what goes on in making them possible. Our conscious minds, including our rational thought, are far more limited than many assumed. And the unconscious mind, the dark matter of the mind, is so much more amazing in what it accomplishes. In discussing what they call conceptual blending, Gilles Fauconnier and Mark Turner write (The Way We Think, p. 18):

“It might seem strange that the systematicity and intricacy of some of our most basic and common mental abilities could go unrecognized for so long. Perhaps the forming of these important mechanisms early in life makes them invisible to consciousness. Even more interestingly, it may be part of the evolutionary adaptiveness of these mechanisms that they should be invisible to consciousness, just as the backstage labor involved in putting on a play works best if it is unnoticed. Whatever the reason, we ignore these common operations in everyday life and seem reluctant to investigate them even as objects of scientific inquiry. Even after training, the mind seems to have only feeble abilities to represent to itself consciously what the unconscious mind does easily. This limit presents a difficulty to professional cognitive scientists, but it may be a desirable feature in the evolution of the species. One reason for the limit is that the operations we are talking about occur at lightning speed, presumably because they involve distributed spreading activation in the nervous system, and conscious attention would interrupt that flow.”

As they argue, conceptual blending helps us understand why a language module or instinct isn’t necessary. Research has shown that there is no single part of the brain nor any single gene that is solely responsible for much of anything. The constituent functions and abilities that form language likely evolved separately for other reasons that were advantageous to survival and social life. Language isn’t built into the brain as an evolutionary leap; rather, it was an emergent property that couldn’t have been predicted from any prior neurocognitive development, which is to say language was built on abilities that by themselves would not have been linguistic in nature.

Of course, Fauconnier and Turner are far from being the only proponents of such theories, as this perspective has become increasingly attractive. Another example is Mark Changizi’s theory presented in Harnessed where he argues that (p. 11), “Speech and music culturally evolved over time to be simulacra of nature” (see more about this here and here). Whatever theory one goes with, what is required is to explain the research challenging and undermining earlier models of cognition, affect, linguistics, and related areas.

Another book I was reading is How Emotions Are Made by Lisa Feldman Barrett. She is covering similar territory, despite her focus being on something as seemingly simple as emotions. We rarely give emotions much thought, taking them for granted, but we shouldn’t. How we understand our experience and expression of emotion is part and parcel of a deeper view that our society holds about human nature, a view that also goes back millennia. This ancient lineage of inherited thought is what makes it problematic, since, being so entrenched within our culture, it feels intuitively true (Kindle Locations 91-93):

“And yet . . . despite the distinguished intellectual pedigree of the classical view of emotion, and despite its immense influence in our culture and society, there is abundant scientific evidence that this view cannot possibly be true. Even after a century of effort, scientific research has not revealed a consistent, physical fingerprint for even a single emotion.”

“So what are they, really?” Barrett asks about emotions (Kindle Locations 99-104):

“When scientists set aside the classical view and just look at the data, a radically different explanation for emotion comes to light. In short, we find that your emotions are not built-in but made from more basic parts. They are not universal but vary from culture to culture. They are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment. Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real— that is, hardly an illusion, but a product of human agreement.”

This goes along with an area of thought that arose out of philology, classical studies, consciousness studies, Jungian psychology, and anthropology. As always, I’m particularly thinking of the bicameral mind theory of Julian Jaynes. In the most ancient civilizations, there were no monetary systems nor, according to Jaynes, was there consciousness as we know it. He argues that individual self-consciousness was built on an abstract metaphorical space that was internalized and narratized. This privatization of personal space led to the possibility of self-ownership, the later basis of capitalism (and hence capitalist realism). It’s abstractions upon abstractions, until all of modern civilization bootstrapped itself into existence.

The initial potentials within human nature could be and have been used to build diverse cultures, but modern society has genocidally wiped out most of this once-existing diversity, leaving behind a near total dominance of WEIRD monoculture. This allows us modern Westerners to mistake our own culture for universal human nature. Our imaginations are constrained by a reality tunnel, which further strengthens the social order (control of the mind is the basis for control of society). Maybe this is why certain abstractions have been so central in conflating our social reality with physical reality, as Barrett explains (Kindle Locations 2999-3002):

“Essentialism is the culprit that has made the classical view supremely difficult to set aside. It encourages people to believe that their senses reveal objective boundaries in nature. Happiness and sadness look and feel different, the argument goes, so they must have different essences in the brain. People are almost always unaware that they essentialize; they fail to see their own hands in motion as they carve dividing lines in the natural world.”

We make the world in our own image. And then we force this social order on everyone, imprinting it not just onto the culture but onto biology itself. With epigenetics, brain plasticity, microbiomes, etc., biology readily accepts this imprinting of the social order (Kindle Locations 5499-5503):

“By virtue of our values and practices, we restrict options and narrow possibilities for some people while widening them for others, and then we say that stereotypes are accurate. They are accurate only in relation to a shared social reality that our collective concepts created in the first place. People aren’t a bunch of billiard balls knocking one another around. We are a bunch of brains regulating each other’s body budgets, building concepts and social reality together, and thereby helping to construct each other’s minds and determine each other’s outcomes.”

There are clear consequences to humans as individuals and communities. But there are other costs as well (Kindle Locations 129-132):

“Not long ago, a training program called SPOT (Screening Passengers by Observation Techniques) taught those TSA agents to detect deception and assess risk based on facial and bodily movements, on the theory that such movements reveal your innermost feelings. It didn’t work, and the program cost taxpayers $900 million. We need to understand emotion scientifically so government agents won’t detain us— or overlook those who actually do pose a threat— based on an incorrect view of emotion.”

This is one of the ways in which our fictions have become less than useful. As long as societies were relatively isolated, they could maintain their separate fictions and treat them as reality. But in a global society, these fictions end up clashing with each other in ways that are not just unhelpful but wasteful and dangerous. If TSA agents were only trying to observe people who shared a common culture of social constructs, the standard set of WEIRD emotional behaviors would apply. The problem is that TSA agents have to deal with people from diverse cultures that have different ways of experiencing, processing, perceiving, and expressing what we call emotions. It would be like trying to understand world cuisine, diet, and eating habits by studying the American patrons of fast food restaurants.

Barrett points to the historical record of ancient societies and to studies done on non-WEIRD cultures. What was assumed to be true based on WEIRD scientists studying WEIRD subjects turns out not to be true for the rest of the world. But there is an interesting catch to the research, the reason so much confusion prevailed for so long. It is easy to teach people cultural categories of emotion and how to identify them. Some of the initial research on non-WEIRD populations unintentionally taught the subjects the very WEIRD emotions that the researchers were attempting to study. The structure of the studies themselves had WEIRD biases built into them. It was only with later research that they were able to filter out these biases and observe the actual responses of non-WEIRD populations.

Researchers only came to understand this problem quite recently. Noam Chomsky, for example, thought it unnecessary to study actual languages in the field. Based on his own theorizing, he believed that studying a single language such as English would tell us everything we needed to know about the basic workings of all languages in the world. This belief proved massively wrong, as field research demonstrated. There was also an idealism in the early Cold War era that led to false optimism, as Americans felt on top of the world. Chris Knight made this point in Decoding Chomsky (from the Preface):

“Pentagon’s scientists at this time were in an almost euphoric state, fresh from victory in the recent war, conscious of the potential of nuclear weaponry and imagining that they held ultimate power in their hands. Among the most heady of their dreams was the vision of a universal language to which they held the key. […] Unbelievable as it may nowadays sound, American computer scientists in the late 1950s really were seized by the dream of restoring to humanity its lost common tongue. They would do this by designing and constructing a machine equipped with the underlying code of all the world’s languages, instantly and automatically translating from one to the other. The Pentagon pumped vast sums into the proposed ‘New Tower’.”

Chomsky’s modular theory dominated linguistics for more than a half century. It still is held in high esteem, even as the evidence increasingly stacks up against it. This wasn’t just a waste of an immense amount of funding. It derailed an entire field of research and stunted the development of a more accurate understanding. Generations of linguists went chasing after a mirage. No brain module of language has been found nor is there any hope of ever finding one. Many researchers wasted their entire careers on a theory that proved false, and many of these researchers continue to defend it, maybe in the hope that another half century of research will finally prove it to be true after all.

There is no doubt that Chomsky has a brilliant mind. He is highly skilled in debate and persuasion. He won the battle of ideas, at least for a time. Through the sheer power of his intellect, he was able to overwhelm his academic adversaries. His ideas came to dominate the field of linguistics, in what came to be known as the cognitive revolution. But Daniel Everett has stated that “it was not a revolution in any sense, however popular that narrative has become” (Dark Matter of the Mind, Kindle Location 306). If anything, Chomsky’s version of essentialism caused the temporary suppression of a revolution that was initiated by linguistic relativists and social constructionists, among others. The revolution was strangled in the crib, partly because it was fighting against an entrenched ideological framework that was millennia old. The initial attempts at research struggled to offer a competing ideological framework, and they lost that struggle. Then they were quickly forgotten, as if the evidence they brought forth were irrelevant.

Barrett explains the tragedy of this situation. She is speaking of essentialism in terms of emotions, but it applies to the entire scientific project of essentialism. It has been a failed project that refuses to accept its failure, a paradigm that refuses to die in order to make way for something else. She laments all of the waste and lost opportunities (Kindle Locations 3245-3293):

“Now that the final nails are being driven into the classical view’s coffin in this era of neuroscience, I would like to believe that this time, we’ll actually push aside essentialism and begin to understand the mind and brain without ideology. That’s a nice thought, but history is against it. The last time that construction had the upper hand, it lost the battle anyway and its practitioners vanished into obscurity. To paraphrase a favorite sci-fi TV show, Battlestar Galactica, “All this has happened before and could happen again.” And since the last occurrence, the cost to society has been billions of dollars, countless person-hours of wasted effort, and real lives lost. […]

“The official history of emotion research, from Darwin to James to behaviorism to salvation, is a byproduct of the classical view. In reality, the alleged dark ages included an outpouring of research demonstrating that emotion essences don’t exist. Yes, the same kind of counterevidence that we saw in chapter 1 was discovered seventy years earlier . . . and then forgotten. As a result, massive amounts of time and money are being wasted today in a redundant search for fingerprints of emotion. […]

“It’s hard to give up the classical view when it represents deeply held beliefs about what it means to be human. Nevertheless, the facts remain that no one has found even a single reliable, broadly replicable, objectively measurable essence of emotion. When mountains of contrary data don’t force people to give up their ideas, then they are no longer following the scientific method. They are following an ideology. And as an ideology, the classical view has wasted billions of research dollars and misdirected the course of scientific inquiry for over a hundred years. If people had followed evidence instead of ideology seventy years ago, when the Lost Chorus pretty solidly did away with emotion essences, who knows where we’d be today regarding treatments for mental illness or best practices for rearing our children.”

 

Social Construction & Ideological Abstraction

The following passages from two books help to explain what social construction is. As society has headed in a particular direction of development, abstract thought has become increasingly dominant.

But we modern people take abstractions for granted, and we often don’t even recognize abstractions for what they are. Many abstractions simply become reality as we know it. They are ‘looped’ into existence, as with race realism, capitalist realism, etc.

Ideological abstractions become so pervasive and systemic that we lose the capacity to think outside of them. They form our reality tunnel.

This wasn’t always so. Humans used to conceive of and hence perceive the world far differently. And this shaped their sense of identity, which is hard for us to imagine.

* * *

Dynamics of Human Biocultural Diversity:
A Unified Approach

by Elisa J. Sobo
(Kindle Locations 94-104)

Until now, many biocultural anthropologists have focused mainly on the ‘bio’ half of the equation, using ‘biocultural’ generically, like biology, to refer to genetic, anatomical, physiological, and related features of the human body that vary across cultural groups. The number of scholars with a more sophisticated approach is on the upswing, but they often write only for super-educated expert audiences. Accordingly, although introductory biocultural anthropology texts make some attempt to acknowledge the role of culture, most still treat culture as an external variable— as an add-on to an essentially biological system. Most fail to present a model of biocultural diversity that gives adequate weight to the cultural side of things.

Note that I said most, not all: happily, things are changing. A movement is afoot to take anthropology’s claim of holism more seriously by doing more to connect— or reconnect— perspectives from both sides of the fence. Ironically, prior to the industrial revolution and the rise of the modern university, most thinkers took a very comprehensive view of the human condition. It was only afterward that fragmented, factorial, compartmental thinking began to undermine our ability to understand ourselves and our place in— and connection with— the world. Today, the leading edge of science recognizes the links and interdependencies that such thinking keeps falsely hidden.

Nature, Human Nature, and Human Difference:
Race in Early Modern Philosophy
by Justin E. H. Smith

pp. 9-10

The connection to the problem of race should be obvious: kinds of people are to no small extent administered into being, brought into existence through record keeping, census taking, and, indeed, bills of sale. A census form asks whether a citizen is “white,” and the possibility of answering this question affirmatively helps to bring into being a subkind of the human species that is by no means simply there and given, ready to be picked out, prior to the emergence of social practices such as the census. Censuses, in part, bring white people into existence, but once they are in existence they easily come to appear as if they had been there all along. This is in part what Hacking means by “looping”: human kinds, in contrast with properly natural kinds such as helium or water, come to be what they are in large part as a result of the human act of identifying them as this or that. Two millennia ago no one thought of themselves as neurotic, or straight, or white, and nothing has changed in human biology in the meantime that could explain how these categories came into being on their own. This is not to say that no one is melancholic, neurotic, straight, white, and so on, but only that how that person got to be that way cannot be accounted for in the same way as, say, how birds evolved the ability to fly, or how iron oxidizes.

In some cases, such as the diagnosis of mental illness, kinds of people are looped into existence out of a desire, successful or not, to help them. Racial categories seem to have been looped into existence, by contrast, for the facilitation of the systematic exploitation of certain groups of people by others. Again, the categories facilitate the exploitation in large part because of the way moral status flows from legal status. Why can the one man be enslaved, and the other not? Because the one belongs to the natural-seeming kind of people that is suitable for enslavement. This reasoning is tautological from the outside, yet self-evident from within. Edward Long, as we have seen, provides a vivid illustration of it in his defense of plantation labor in Jamaica. But again, categories cannot be made to stick on the slightest whim of their would-be coiner. They must build upon habits of thinking that are already somewhat in place. And this is where the history of natural science becomes crucial for understanding the history of modern racial thinking, for the latter built directly upon innovations in the former. Modern racial thinking could not have taken the form it did if it had not been able to piggyback, so to speak, on conceptual innovations in the way science was beginning to approach the diversity of the natural world, and in particular of the living world.

This much ought to be obvious: racial thinking could not have been biologized if there were no emerging science of biology. It may be worthwhile to dwell on this obvious point, however, and to see what more unexpected insights might be drawn out of it. What might not be so obvious, or what seems to be ever in need of renewed pointing out, is a point that ought to be of importance for our understanding of the differing, yet ideally parallel, scope and aims of the natural and social sciences: the emergence of racial categories, of categories of kinds of humans, may in large part be understood as an overextension of the project of biological classification that was proving so successful in the same period. We might go further, and suggest that all of the subsequent kinds of people that would emerge over the course of the nineteenth and twentieth centuries, the kinds of central interest to Foucault and Hacking, amount to a further reaching still, an unprecedented, peculiarly modern ambition to make sense of the slightest variations within the human species as if these were themselves species differentia. Thus for example Foucault’s well-known argument that until the nineteenth century there was no such thing as “the homosexual,” but only people whose desires could impel them to do various things at various times. But the last two centuries have witnessed a proliferation of purportedly natural kinds of humans, a typology of “extroverts,” “depressives,” and so on, whose objects are generally spoken of as if on an ontological par with elephants and slime molds. Things were not always this way. In fact, as we will see, they were not yet this way throughout much of the early part of the period we call “modern.”

Symbolic Dissociation of Nature/Nurture Debate

“One of the most striking features of the nature-nurture debate is the frequency with which it leads to two apparently contradictory results: the claim that the debate has finally been resolved (i.e., we now know that the answer is neither nature nor nurture, but both), and the debate’s refusal to die. As with the Lernian Hydra, each beheading seems merely to spur the growth of new heads.”

That is from the introduction to Evelyn Fox Keller’s The Mirage of a Space between Nature and Nurture (p. 1). I personally experienced this recently. There is a guy I’ve been discussing these kinds of issues with in recent years. We have been commenting on each other’s blogs for a long while, in an ongoing dialogue that has centered on childhood influences: peers, parenting, spanking, abuse, trauma, etc.

It seemed that we had finally come to an agreement on the terms of the debate, his having come around to my view that the entire nature-nurture debate is pointless or confused. But then recently, he once again tried to force this nature-nurture frame onto our discussion (see my last post). It’s one of these zombie ideas that isn’t easily killed, a memetic mind virus that infects the brain with no known cure. Keller throws some light on the issue (pp. 1-2):

“Part of the difficulty comes into view with the first question we must ask: what is the nature-nurture debate about? There is no single answer to this question, for a number of different questions take refuge under its umbrella. Some of the questions express legitimate and meaningful concerns that can in fact be addressed scientifically; others may be legitimate and meaningful, but perhaps not answerable; and still others simply make no sense. I will argue that a major reason we are unable to resolve the nature-nurture debate is that all these different questions are tangled together into an indissoluble knot, making it all but impossible for us to stay clearly focused on a single, well-defined and meaningful question. Furthermore, I will argue that they are so knitted together by chronic ambiguity, uncertainty, and slippage in the very language we use to talk about these issues. And finally, I will suggest that at least some of that ambiguity and uncertainty comes from the language of genetics itself.”

What occurred to me is that maybe this is intentional. It seems to be part of the design, a feature and not a flaw. That is how the debate maintains itself, by being nearly impossible to disentangle and so not allowing itself to be seen for what it is. It’s not a real debate, for what appears to be the issue is really a distraction. There is much incentive not to look at it too closely, not to pick at the knot. Underneath, there is a raw nerve of Cartesian anxiety.

This goes back to my theory of symbolic conflation. The real issue (or set of issues) is hidden behind a symbolic issue. Maybe this usually or possibly always takes the form of a debate being framed in a particular way. The false dichotomy of dualistic thinking isn’t just a frame, for it tells a narrative of conflict where, as long as you accept the frame, you are forced to pick a side.

I often use abortion as an example because symbolic conflation operates most often and most clearly on visceral and emotional issues involving the body, especially sex and death (abortion involving both). This is framed as pro-life vs pro-choice, but the reality of public opinion is that most Americans are BOTH pro-life AND pro-choice. That is to say most Americans want to maintain a woman’s right to choose while simultaneously putting some minimal limitations on abortions. Besides, as research has shown, liberal and leftist policies (full sex education, easily available contraceptives, planned parenthood centers, high quality public healthcare available to all, etc) allow greater freedom to individuals while creating the conditions that decrease the actual rate of abortions because they decrease unwanted pregnancies.

One thing that occurs to me is that such frames tend to favor one side. It stands out to me that those promoting the nature vs nurture frame tend to be those arguing for biological determinism (or something along those lines), just as those creating the forced choice of pro-life or pro-choice are usually those against the political left’s worldview. That is another way in which it isn’t a real debate. The frame both tries to obscure the real issue(s) and to shut down debate before it happens. It’s all about social control by way of thought control. To control how an issue is portrayed and how a debate is framed is to control the sociopolitical narrative, the story being told and the conclusion it leads to. Meanwhile, the real concern of the social order is being manipulated behind the scenes. It’s a sleight-of-hand trick.

Symbolic conflation is a time-tested strategy of obfuscation. It’s also an indirect way of talking about what can’t or rather won’t otherwise be acknowledged, in the symbolic issue being used as a proxy. To understand what it all means, you have to look at the subtext. The framing aspect brings another layer to this process. A false dichotomy could be thought of as a symbolic dissociation, where what is inseparable in reality gets separated in the framing of symbolic ideology.

The fact of the matter is that nature and nurture are simply two ways of referring to the same thing. If the nature/nurture debate is a symbolic dissociation built on top of a symbolic conflation, is it acting as a proxy for something else? And if so, what is the real debate being hidden and obscured, whether talked around entirely or talked about only indirectly?

False Dichotomy and Bad Science

Someone shared with me a link to a genetics study. The paper is “Behavioural individuality in clonal fish arises despite near-identical rearing conditions” by David Bierbach, Kate L. Laskowski, and Max Wolf. From the abstract:

“Behavioural individuality is thought to be caused by differences in genes and/or environmental conditions. Therefore, if these sources of variation are removed, individuals are predicted to develop similar phenotypes lacking repeatable individual variation. Moreover, even among genetically identical individuals, direct social interactions are predicted to be a powerful factor shaping the development of individuality. We use tightly controlled ontogenetic experiments with clonal fish, the Amazon molly (Poecilia formosa), to test whether near-identical rearing conditions and lack of social contact dampen individuality. In sharp contrast to our predictions, we find that (i) substantial individual variation in behaviour emerges among genetically identical individuals isolated directly after birth into highly standardized environments and (ii) increasing levels of social experience during ontogeny do not affect levels of individual behavioural variation. In contrast to the current research paradigm, which focuses on genes and/or environmental drivers, our findings suggest that individuality might be an inevitable and potentially unpredictable outcome of development.”

Here is what this seems to imply. We don’t yet understand (much less know how to identify, isolate, and control) all of the genetic, epigenetic, environmental, and other factors that causally affect and contribute to individual development. Nor do we understand the complex interactions of those factors, known and unknown. To put it simply, our ignorance is far vaster than our knowledge. We don’t even have enough knowledge to know what we don’t know. But we are beginning to realize that we need to rethink what we thought we knew.

It reminds me of the mouse research where genetically identical mice in environmentally identical conditions nonetheless produced diverse behavioral results. I’ve mentioned it many times before here on my blog, including a post specifically about it: Of Mice and Men and Environments (also see Heritability & Inheritance, Genetics & Epigenetics, Etc). In the mice post, along with quoting an article, I pointed to a fascinating passage from David Shenk’s book, The Genius in All of Us. Although I was already aware of the influence of environmental conditions, the research discussed there makes it starkly clear. I was reminded of this by another discussion of mouse research, in Richard Harris’ Rigor Mortis, subtitled “How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions” (pp. 79-81):

“Garner said that mice have great potential for biological studies, but at the moment, he believes, researchers are going about it all wrong. For the past several decades, they have pursued a common strategy in animal studies: eliminate as many variables as you can, so you can more clearly see an effect when it’s real. It sounds quite sensible, but Garner believes it has backfired in mouse research. To illustrate this point, he pointed to two cages of genetically identical mice. One cage was at the top of the rack near the ceiling, the other near the floor. Garner said cage position is enough of a difference to affect the outcome of an experiment. Mice are leery of bright lights and open spaces, but here they live in those conditions all the time. “As you move from the bottom of the rack to the top of the rack, the animals are more anxious, more stressed-out, and more immune suppressed,” he said.

“Garner was part of an experiment involving six different mouse labs in Europe to see whether behavioral tests with genetically identical mice would vary depending on the location. The mice were all exactly the same age and all female. Even so, these “identical” tests produced widely different results, depending on whether they were conducted in Giessen, Muenster, Zurich, Mannheim, Munich, or Utrecht. The scientists tried to catalog all possible differences: mouse handlers in Zurich didn’t wear gloves, for example, and the lab in Utrecht had the radio on in the background. Bedding, food, and lighting also varied. Scientists have only recently come to realize that the sex of the person who handles the mice can also make a dramatic difference. “Mice are so afraid of males that it actually induces analgesia,” a pain-numbing reaction that screws up all sorts of studies, Garner said. Even a man’s sweaty T-shirt in the same room can trigger this response.

“Behavioral tests are used extensively in research with mice (after all, rodents can’t tell handlers how an experimental drug is affecting them), so it was sobering to realize how much those results vary from lab to lab. But here’s the hopeful twist in this experiment: when the researchers relaxed some of their strict requirements and tested a more heterogeneous group of mice, they paradoxically got more consistent results. Garner is trying to convince his colleagues that it’s much better to embrace variation than to tie yourself in knots trying to eliminate it.

““Imagine that I was testing a new drug to help control nausea in pregnancy, and I suggested to the [Food and Drug Administration (FDA)] that I tested it purely in thirty-five-year-old white women all in one small town in Wisconsin with identical husbands, identical homes, identical diets which I formulate, identical thermostats that I’ve set, and identical IQs. And incidentally they all have the same grandfather.” That would instantly be recognized as a terrible experiment, “but that’s exactly how we do mouse work. And fundamentally that’s why I think we have this enormous failure rate.”

“Garner goes even further in his thinking, arguing that studies should consider mice not simply as physiological machines but as organisms with social interactions and responses to their environment that can significantly affect their health and strongly affect the experiment results. Scientists have lost sight of that. “I fundamentally believe that animals are good models of human disease,” Garner said. “I just don’t think the way we’re doing the research right now is.”

“Malcolm Macleod has offered a suggestion that would address some of the issues Garner raises: when a drug looks promising in mice, scale up the mouse experiments before trying it in people. “I simply don’t understand the logic that says I can take a drug to clinical trial on the basis of information from 500 animals, but I’m going to need 5,000 human animals to tell me whether it will work or not. That simply doesn’t compute.” Researchers have occasionally run large mouse experiments at multiple research centers, just as many human clinical trials are conducted at several medical centers. The challenge is funding. Someone else can propose the same study involving a lot fewer animals, and that looks like a bargain. “Actually, the guy promising to do it for a third of the price isn’t going to do it properly, but it’s hard to get that across,” Macleod said.”

This is the problem with framing the debate as nature vs nurture (or similar framings such as biology vs culture and organism vs environment). Even when people are aware of the limitations of this frame, the powerful sway it holds over their minds causes them to continually fall back on it. Even when I have no interest in such dualistic thinking, some people feel it necessary to categorize the sides of a debate accordingly, where apparently I’m supposed to play the role of ‘nurturist’ in opposition to their ‘biology’ advocacy: “feel your life-force, Benjamin. Come with me to the biology side!” Well, I have no desire to take sides in a false dichotomy. Oddly, the guy trying to win me over to the ‘biology side’ in a debate (about human violence and war) is the same person who shared the clonal fish study demonstrating that genetics couldn’t explain the differences observed. So, I’m not entirely sure what he thinks ‘biology’ means, or what ideological commitments it represents in his personal worldview.

(As he has mentioned in our various discussions, his studies of all this are tied up with his experience as a father who has struggled with parenting and as a husband who has recently separated, partly over parenting concerns. The sense of conflict and blame he is struggling with sounds quite serious, and I’m sympathetic. But I suspect he is looking for some kind of life meaning that maybe can’t be found where he is looking. Obviously, it is a highly personal issue for him, not a disinterested debate of abstract philosophy or scientific hypotheses. I’m starting to think that we aren’t even involved in the same discussion, just talking past one another. It’s doubtful that I can meet him on the level where he finds himself, and so I don’t see how I can join him in the debate that seems to matter so much to him. I won’t even try. I’m not in that headspace. We’ve commented on each other’s blogs for quite a while now, but for whatever reason we can’t quite fully connect. Apparently, we are unable to agree enough about what the debate is to even meaningfully disagree about it. Although he is a nice guy and we are on friendly terms, I don’t see further dialogue going anywhere. *shrug*)

When we speak of so-called ‘nature’, this refers not only to the human biology of genetics and developmental physiology but also to supposed junk DNA and epigenetics, brain plasticity and the gut-brain connection, viruses and bacteria, parasites and parasite load, allergies and inflammation, microbiome and cultured foods, diet and nutrition, undernourishment and malnutrition, hunger and starvation, food deserts and scarcity, addiction and alcoholism, pharmaceuticals and medicines, farm chemicals and food additives, hormone mimics and heavy metal toxicity, environmental stress and physical trauma, abuse and violence, diseases of affluence and nature-deficit disorder, in utero conditions and the maternal bond, etc. All of these alter the expression of genes, both within the single lifetime of an individual and across the generations of entire populations.

There are numerous varieties of confounding factors. I could also point to sociocultural, structural, and institutional aspects of humanity: linguistic relativity and WEIRD research subjects, culture of trust and culture of honor, lifeways and mazeways, habitus and neighborhood effect, parenting and peers, inequality and segregation, placebos and nocebos, Pygmalion effect and Hawthorne effect, and on and on. As humans are social creatures, one could write a lengthy book simply listing all the larger influences of society.

Many of these problems have become most apparent in the social sciences, but they are far from limited to that area of knowledge. Very similar problems are found in the biological and medical sciences, and the hard sciences clearly overlap with the soft sciences, considering that social constructions get fed back into scientific research. With mostly WEIRD scientists studying mostly WEIRD subjects, it is the same WEIRD culture that has dominated nearly all of science, and so it is WEIRD biases that have been the greatest stumbling blocks. Plus, given what research on linguistic relativity has demonstrated, we would expect how we talk about science to shape the research done, the results gained, the conclusions drawn, and the theories proposed. It’s all of one piece.

The point is that there are no easy answers or certain conclusions. In many ways, science is still in its infancy. We have barely scratched the surface of what could potentially be known. And much of what we think we know is being challenged, leading to a paradigm change we can barely imagine. There is a lot at stake. It goes far beyond abstract theory, hypothetical debate, and idle speculation.

Most importantly, we must never forget that no theory is value-neutral or consequence-free. The ideological worldview we commit to doesn’t merely frame debate and narrow our search for knowledge. There is a real-world impact on public policy and human lives, such as when medical research and practice become racialized (with a dark past connecting race realism and genetic determinism, racial hygiene and eugenics, medical testing on minorities and the continuing impact on healthcare). All of this raises questions about whether germs are to be treated as invading enemies, whether war is an evolutionary trait, whether addiction is biological, whether intelligence is genetic, whether language is a module in the brain, and whether the ideology of individualism is human nature.

We have come to look to the body for answers to everything. And so we have come to project almost every issue onto the body. It’s too easy to shape scientific theory in such a way that confirms what we already believe and what is self-serving or simply what conforms to the social order. There is a long history of the intentional abuse and unintentional misuse of science. It’s impossible to separate biology from biopolitics.

Worse still, our imaginations are hobbled, making it all the more difficult to face the problems before us. And cultural biases have limited the search for greater knowledge. More than anything, we need to seriously develop our capacity to radically imagine new possibilities. That would require entirely shifting the context and approach of our thinking, maybe to the extent of altering our consciousness and our perception of the world. A paradigm change that mattered at all would be one that went far beyond abstract theory and touched the core of our being. Our failure on this level may explain why so much scientific research has fallen into a rut.

* * *

I’ve been thinking about this for a long time. My thoughts here aren’t exactly new, but I wanted to share some new finds. It’s a topic worth returning to on occasion, as further research rolls in and the experts continue to debate. I’ll conclude with some more from Richard Harris’ Rigor Mortis. Below that are several earlier posts, a few relevant articles, and a bunch of interesting books (just because I love making long lists of books).

Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions
by Richard Harris
pp. 13-16

There has been no systematic attempt to measure the quality of biomedical science as a whole, but Leonard Freedman, who started a nonprofit called the Global Biological Standards Institute, teamed up with two economists to put a dollar figure on the problem in the United States. Extrapolating results from the few small studies that have attempted to quantify it, they estimated that 20 percent of studies have untrustworthy designs; about 25 percent use dubious ingredients, such as contaminated cells or antibodies that aren’t nearly as selective and accurate as scientists assume them to be; 8 percent involve poor lab technique; and 18 percent of the time, scientists mishandle their data analysis. In sum, Freedman figured that about half of all preclinical research isn’t trustworthy. He went on to calculate that untrustworthy papers are produced at the cost of $28 billion a year. This eye-popping estimate has raised more than a few skeptical eyebrows—and Freedman is the first to admit that the figure is soft, representing “a reasonable starting point for further debate.”

“To be clear, this does not imply that there was no return on that investment,” Freedman and his colleagues wrote. A lot of what they define as “not reproducible” really means that scientists who pick up a scientific paper won’t find enough information in it to run the experiment themselves. That’s a problem, to be sure, but hardly a disaster. The bigger problem is that the errors and missteps that Freedman highlights are, as Begley found, exceptionally common. And while scientists readily acknowledge that failure is part of the fabric of science, they are less likely to recognize just how often preventable errors taint studies.

“I don’t think anyone gets up in the morning and goes to work with the intention to do bad science or sloppy science,” said Malcolm Macleod at the University of Edinburgh. He has been writing and thinking about this problem for more than a decade. He started off wondering why almost no treatment for stroke has succeeded (with the exception of the drug tPA, which dissolves blood clots but doesn’t act on damaged nerve cells), despite many seemingly promising leads from animal studies. As he dug into this question, he came to a sobering conclusion. Unconscious bias among scientists arises every step of the way: in selecting the correct number of animals for a study, in deciding which results to include and which to simply toss aside, and in analyzing the final results. Each step of that process introduces considerable uncertainty. Macleod said that when you compound those sources of bias and error, only around 15 percent of published studies may be correct. In many cases, the reported effect may be real but considerably weaker than the study concludes.

Mostly these estimated failure rates are educated guesses. Only a few studies have tried to measure the magnitude of this problem directly. Scientists at the MD Anderson Cancer Center asked their colleagues whether they’d ever had trouble reproducing a study. Two-thirds of the senior investigators answered yes. Asked whether the differences were ever resolved, only about a third said they had been. “This finding is very alarming as scientific knowledge and advancement are based upon peer-reviewed publications, the cornerstone of access to ‘presumed’ knowledge,” the authors wrote when they published the survey findings.

The American Society for Cell Biology (ASCB) surveyed its members in 2014 and found that 71 percent of those who responded had at some point been unable to replicate a published result. Again, 40 percent of the time, the conflict was never resolved. Two-thirds of the time, the scientists suspected that the original finding had been a false positive or had been tainted by “a lack of expertise or rigor.” ASCB adds an important caveat: of the 8,000 members it surveyed, it heard back from 11 percent, so its numbers aren’t convincing. That said, Nature surveyed more than 1,500 scientists in the spring of 2016 and saw very similar results: more than 70 percent of those scientists had tried and failed to reproduce an experiment, and about half of those who responded agreed that there’s a “significant crisis” of reproducibility.
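A quick aside on Freedman’s arithmetic. His category-by-category estimates (20 percent, 25 percent, 8 percent, 18 percent) add up to “about half” only if you treat them as more or less independent failure modes. Here is a minimal sketch of that compounding, my own back-of-the-envelope illustration rather than Freedman’s or Harris’s actual calculation:

```python
# Rough compounding of Freedman's estimated failure rates, assuming
# (simplistically, for illustration only) that the four categories are
# independent failure modes a given study either avoids or falls into.
failure_rates = {
    "untrustworthy design": 0.20,
    "dubious ingredients": 0.25,
    "poor lab technique": 0.08,
    "mishandled data analysis": 0.18,
}

p_clean = 1.0
for source, rate in failure_rates.items():
    p_clean *= 1 - rate  # probability of avoiding this particular pitfall

print(f"avoids all four pitfalls: {p_clean:.2f}")       # ~0.45
print(f"hits at least one pitfall: {1 - p_clean:.2f}")  # ~0.55, i.e. "about half"
```

The same multiplication explains Macleod’s even bleaker figure: if each step of a study (choosing animal numbers, selecting data, analyzing results) carries its own modest bias, the fraction of studies that get every step right shrinks quickly toward something like his 15 percent.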

pp. 126-129

The batch effect is a stark reminder that, as biomedicine becomes more heavily reliant on massive data analysis, there are ever more ways to go astray. Analytical errors alone account for almost one in four irreproducible results in biomedicine, according to Leonard Freedman’s estimate. A large part of the problem is that biomedical researchers are often not well trained in statistics. Worse, researchers often follow the traditional practices of their fields, even when those practices are deeply problematic. For example, biomedical research has embraced a dubious method of determining whether results are likely to be true by relying far too heavily on a gauge of significance called the p-value (more about that soon). Potential help is often not far away: major universities have biostatisticians on staff who are usually aware of the common pitfalls in experiment design and subsequent analysis, but they are not enlisted as often as they could be. […]

A few years ago, he placed an informal wager of sorts with a few of his colleagues at other universities. He challenged them to come up with the most egregious examples of the batch effect. The “winning” examples would be published in a journal article. It was a first stab at determining how widespread this error is in the world of biomedicine. The batch effect turns out to be common.

Baggerly had a head start in this contest because he’d already exposed the problems with the OvaCheck test. But colleagues at Johns Hopkins were not to be outdone. Their entry involved a research paper that appeared to get at the very heart of a controversial issue: one purporting to show genetic differences between Asians and Caucasians. There’s a long, painful, failure-plagued history of people using biology to support prejudice, so modern studies of race and genetics meet with suspicion. The paper in question had been coauthored by a white man and an Asian woman (a married couple, as it happens), lowering the index of suspicion. Still, the evidence would need to be substantial. […]

The University of Washington team tracked down the details about the microarrays used in the experiment at Penn. They discovered that the data taken from the Caucasians had mostly been produced in 2003 and 2004, while the microarrays studying Asians had been produced in 2005 and 2006. That’s a red flag because microarrays vary from one manufacturing lot to the next, so results can differ from one day to the next, let alone from year to year. They then asked a basic question of all the genes on the chips (not just the ones that differed between Asians and Caucasians): Were they behaving the same in 2003–2004 as they were in 2005–2006? The answer was an emphatic no. In fact, the difference between years overwhelmed the apparent difference between races. The researchers wrote up a short analysis and sent it to Nature Genetics, concluding that the original findings were another instance of the batch effect.

These case studies became central examples in the research paper that Baggerly, Leek, and colleagues published in 2010, pointing out the perils of the batch effect. In that Nature Reviews Genetics paper, they conclude that these problems “are widespread and critical to address.”

“Every single assay we looked at, we could find examples where this problem was not only large but it could lead to clinically incorrect findings,” Baggerly told me. That means in many instances a patient’s health could be on the line if scientists rely on findings of this sort. “And these are not avoidable problems.” If you start out with data from different batches you can’t correct for that in the analysis. In biology today, researchers are inevitably trying to tease out a faint message from the cacophony of data, so the tests themselves must be tuned to pick up tiny changes. That also leaves them exquisitely sensitive to small perturbations—like the small differences between microarray chips or the air temperature and humidity when a mass spectrometer is running. Baggerly now routinely checks the dates when data are collected—and if cases and controls have been processed at different times, his suspicions quickly rise. It’s a simple and surprisingly powerful method for rooting out spurious results.
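It is worth seeing how little it takes for a batch effect to manufacture a finding. The toy simulation below is my own illustration, not Baggerly’s analysis: every simulated “gene” has exactly the same true expression in both groups, but one group’s samples are processed in a batch with a small systematic offset, the way the Caucasian and Asian microarrays were run years apart:

```python
import random

random.seed(42)

N_GENES = 1000
N_PER_GROUP = 20
BATCH_OFFSET = 0.5  # small systematic shift between processing batches

def t_statistic(xs, ys):
    """Plain two-sample t statistic (pooled-variance form)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    pooled_sd = (((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)) ** 0.5
    return (mx - my) / (pooled_sd * (1 / nx + 1 / ny) ** 0.5)

false_positives = 0
for _ in range(N_GENES):
    # Both groups share the SAME true expression level...
    group_a = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    # ...but group B is measured in a later batch with a systematic offset.
    group_b = [random.gauss(0, 1) + BATCH_OFFSET for _ in range(N_PER_GROUP)]
    if abs(t_statistic(group_a, group_b)) > 2.02:  # roughly p < 0.05 at 38 df
        false_positives += 1

print(f"'significant' genes out of {N_GENES}: {false_positives}")
```

With no biological difference whatsoever, a sizable fraction of the genes come out “significant.” The offset, not the biology, drives the result, which is why Baggerly’s habit of checking processing dates is such a powerful screen.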

p. 132

Over the years breathless headlines have celebrated scientists claiming to have found a gene linked to schizophrenia, obesity, depression, heart disease—you name it. These represent thousands of small-scale efforts in which labs went hunting for genes and thought they’d caught the big one. Most were dead wrong. John Ioannidis at Stanford set out in 2011 to review the vast sea of genomics papers. He and his colleagues looked at reported genetic links for obesity, depression, osteoporosis, coronary artery disease, high blood pressure, asthma, and other common conditions. He analyzed the flood of papers from the early days of genomics. “We’re talking tens of thousands of papers, and almost nothing survived” closer inspection. He says only 1.2 percent of the studies actually stood the test of time as truly positive results. The rest are what’s known in the business as false positives.

The field has come a long way since then. Ioannidis was among the scientists who pushed for more rigorous analytical approaches to genomics research. The formula for success was to insist on big studies, to make careful measurements, to use stringent statistics, and to have scientists in various labs collaborate with one another—“you know, doing things right, the way they should be done,” Ioannidis said. Under the best of these circumstances, several scientists go after exactly the same question in different labs. If they get the same results, that provides high confidence that they’re not chasing statistical ghosts. These improved standards for genomics research have largely taken hold, Ioannidis told me. “We went from an unreliable field to a highly reliable field.” He counts this as one of the great success stories in improving the reproducibility of biomedical science. Mostly. “There’s still tons of research being done the old fashioned way,” he lamented. He’s found that 70 percent of this substandard genomics work is taking place in China. The studies are being published in English-language journals, he said, “and almost all of them are wrong.”
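The washout of those early candidate-gene studies is just what simple screening logic would predict, along the lines of Ioannidis’s well-known 2005 argument that when true effects are rare and studies are underpowered, most “positive” findings are false. A sketch with illustrative numbers of my own choosing, not figures from Harris’s book:

```python
# Post-study probability that a "significant" finding is real, in the
# spirit of Ioannidis (2005). The inputs below are illustrative guesses.

def positive_predictive_value(prior, power, alpha):
    """P(association is real | the study reports significance)."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

prior = 1 / 1000  # say 1 in 1,000 candidate genes is truly associated
power = 0.5       # a small study's chance of detecting a true effect
alpha = 0.05      # the conventional significance threshold

print(f"share of 'positive' findings that are real: "
      f"{positive_predictive_value(prior, power, alpha):.1%}")
# -> about 1%, the same order of magnitude as the 1.2 percent that
#    survived closer inspection
```

Change the inputs to match the big-collaboration approach Ioannidis describes (higher prior odds from replication, higher power from large samples, stricter thresholds) and the predictive value climbs dramatically, which is the arithmetic behind “doing things right.”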

pp. 182-183

Published retractions tend to be bland statements that some particular experiment was not reliable, but those notices often obscure the underlying reason. Arturo Casadevall at Johns Hopkins University and colleague Ferric Fang at the University of Washington dug into retractions and discovered a more disturbing truth: 70 percent of the retractions they studied resulted from bad behavior, not simply error. They also concluded that retractions are more common in high-profile journals, where scientists are most eager to publish in order to advance their careers. “We’re dealing with a real deep problem in the culture,” Casadevall said, “which is leading to significant degradation of the literature.” And even though retractions are on the rise, they are still rarities; only 0.02 percent of papers are retracted, Oransky estimates.

David Allison at the University of Alabama, Birmingham, and colleagues discovered just how hard it can be to get journals to set the record straight. Some scientists outright refuse to retract obviously wrong information, and journals may not insist. Allison and his colleagues sent letters to journals pointing out mistakes and asking for corrections. They were flabbergasted to find that some journals demanded payment—up to $2,100—just to publish their letter pointing out someone else’s error.

pp. 186-188

“Most people who work in science are working as hard as they can. They are working as long as they can in terms of the hours they are putting in,” said social scientist Brian Martinson. “They are often going beyond their own physical limits. And they are working as smart as they can. And so if you are doing all those things, what else can you do to get an edge, to get ahead, to be the person who crosses the finish line first? All you can do is cut corners. That’s the only option left you.” Martinson works at HealthPartners Institute, a nonprofit research agency in Minnesota. He has documented some of this behavior in anonymous surveys. Scientists rarely admit to outright misbehavior, but nearly a third of those he has surveyed admit to questionable practices such as dropping data that weakens a result, based on a “gut feeling,” or changing the design, methodology, or results of a study in response to pressures from a funding source. (Daniele Fanelli, now at Stanford University, came to a similar conclusion in a separate study.)

One of Martinson’s surveys found that 14 percent of scientists have observed serious misconduct such as fabrication or falsification, and 72 percent of scientists who responded said they were aware of less egregious behavior that falls into a category that universities label “questionable” and Martinson calls “detrimental.” In fact, almost half of the scientists acknowledged that they personally had used one or more of these practices in the past three years. And though he didn’t call these practices “questionable” or “detrimental” in his surveys, “I think people understand that they are admitting to something that they probably shouldn’t have done.” Martinson can’t directly link those reports to poor reproducibility in biomedicine. Nobody has funded a study exactly on that point. “But at the same time I think there’s plenty of social science theory, particularly coming out of social psychology, that tells us that if you set up a structure this way… it’s going to lead to bad behavior.”

Part of the problem boils down to an element of human nature that we develop as children and never let go of. Our notion of what’s “right” and “fair” doesn’t form in a vacuum. People look around and see how other people are behaving as a cue to their own behavior. If you perceive you have a fair shot, you’re less likely to bend the rules. “But if you feel the principles of distributive justice have been violated, you’ll say, ‘Screw it. Everybody cheats; I’m going to cheat too,’” Martinson said. If scientists perceive they are being treated unfairly, “they themselves are more likely to engage in less-than-ideal behavior. It’s that simple.” Scientists are smart, but that doesn’t exempt them from the rules that govern human behavior.

And once scientists start cutting corners, that practice has a natural tendency to spread throughout science. Martinson pointed to a paper arguing that sloppy labs actually outcompete good labs and gain an advantage. Paul Smaldino at the University of California, Merced, and Richard McElreath at the Max Planck Institute for Evolutionary Anthropology ran a model showing that labs that use quick-and-dirty practices will propagate more quickly than careful labs. The pressures of natural selection and evolution actually favor these labs because the volume of articles is rewarded over the quality of what gets published. Scientists who adopt these rapid-fire practices are more likely to succeed and to start new “progeny” labs that adopt the same dubious practices. “We term this process the natural selection of bad science to indicate that it requires no conscious strategizing nor cheating on the part of researchers,” Smaldino and McElreath wrote. This isn’t evolution in the strict biological sense, but they argue the same general principles apply as the culture of science evolves.
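The Smaldino and McElreath model is richer than this, but a stripped-down toy version of my own makes the selection dynamic visible: if false positives count as publishable findings and the most-published lab spawns imitators, rigor gets selected against without anyone choosing to cheat:

```python
import random

random.seed(1)

# Toy version of the "natural selection of bad science" dynamic, inspired
# by Smaldino & McElreath (2016); my simplification, not their model.

POP = 100          # number of labs
GENERATIONS = 200  # replacement events
BASE_RATE = 0.1    # fraction of tested hypotheses that are actually true

# Each lab has an "effort" level in (0, 1]; more effort, fewer false positives.
labs = [random.uniform(0.2, 1.0) for _ in range(POP)]

def publications(effort, n_hypotheses=100):
    """Expected positive results a lab publishes: true hits plus false hits.
    Lower effort means a higher false-positive rate, hence more 'findings'."""
    power = 0.8
    false_pos_rate = 0.05 + 0.5 * (1 - effort)
    true_hits = n_hypotheses * BASE_RATE * power
    false_hits = n_hypotheses * (1 - BASE_RATE) * false_pos_rate
    return true_hits + false_hits

for _ in range(GENERATIONS):
    # The most-published lab "reproduces": a new lab copies its effort level
    # (with a little mutation) and replaces a randomly chosen existing lab.
    winner = max(labs, key=publications)
    child = min(1.0, max(0.05, winner + random.gauss(0, 0.05)))
    labs[random.randrange(POP)] = child

print(f"mean lab effort after selection: {sum(labs) / POP:.2f}")
# Effort collapses toward the floor: publication volume, not rigor,
# is what gets rewarded and copied.
```

No individual lab in this toy world ever decides to do bad science; the incentive structure does the selecting, which is exactly the point of calling it natural selection.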

* * *

What do we inherit? And from whom?
Identically Different: A Scientist Changes His Mind
Race Realism, Social Constructs, and Genetics
Race Realism and Racialized Medicine
The Bouncing Basketball of Race Realism
To Control or Be Controlled
Flawed Scientific Research
Human Nature: Categories & Biases
Bias About Bias
Urban Weirdness
“Beyond that, there is only awe.”

Animal studies paint misleading picture by Janelle Weaver
Misleading mouse studies waste medical resources by Erika Check Hayden
A mouse’s house may ruin experiments by Sara Reardon
Curious mice need room to run by Laura Nelson
Male researchers stress out rodents by Alla Katsnelson
Bacteria bonanza found in remote Amazon village by Boer Deng
Case Closed: Apes Got Culture by Corey Binns
Study: Cat Parasite Affects Human Culture by Ker Than
Mind Control by Parasites by Bill Christensen

Human Biodiversity by Jonathan Marks
The Alternative Introduction to Biological Anthropology by Jonathan Marks
What it Means to be 98% Chimpanzee by Jonathan Marks
Tales of the Ex-Apes by Jonathan Marks
Why I Am Not a Scientist by Jonathan Marks
Is Science Racist? by Jonathan Marks
Biology Under the Influence by Lewontin & Levins
Biology as Ideology by Richard C. Lewontin
The Triple Helix by Richard Lewontin
Not In Our Genes by Lewontin & Rose
The Biopolitics of Race by Sokthan Yeng
The Brain’s Body by Victoria Pitts-Taylor
Misbehaving Science by Aaron Panofsky
The Flexible Phenotype by Piersma & Gils
Herding Hemingway’s Cats by Kat Arney
The Genome Factor by Conley & Fletcher
The Deeper Genome by John Parrington
Postgenomics by Richardson & Stevens
The Developing Genome by David S. Moore
The Epigenetics Revolution by Nessa Carey
Epigenetics by Richard C. Francis
Not In Your Genes by Oliver James
No Two Alike by Judith Rich Harris
Identically Different by Tim Spector
The Cultural Nature of Human Development by Barbara Rogoff
The Hidden Half of Nature by Montgomery & Biklé
10% Human by Alanna Collen
I Contain Multitudes by Ed Yong
The Mind-Gut Connection by Emeran Mayer
Bugs, Bowels, and Behavior by Arranga, Viadro, & Underwood
This Is Your Brain on Parasites by Kathleen McAuliffe
Infectious Behavior by Paul H. Patterson
Infectious Madness by Harriet A. Washington
Strange Contagion by Lee Daniel Kravetz
Childhood Interrupted by Beth Alison Maloney
Only One Chance by Philippe Grandjean
Why Zebras Don’t Get Ulcers by Robert M. Sapolsky
Resisting Reality by Sally Haslanger
Nature, Human Nature, and Human Difference by Justin E. H. Smith
Race, Monogamy, and Other Lies They Told You by Agustín Fuentes
The Invisible History of the Human Race by Christine Kenneally
Genetics and the Unsettled Past by Wailoo, Nelson, & Lee
The Mismeasure of Man by Stephen Jay Gould
Identity Politics and the New Genetics by Schramm, Skinner, & Rottenburg
The Material Gene by Kelly E. Happe
Fatal Invention by Dorothy Roberts
Inclusion by Steven Epstein
Black and Blue by John Hoberman
Race Decoded by Catherine Bliss
Breathing Race into the Machine by Lundy Braun
Race and the Genetic Revolution by Krimsky & Sloan
Race? by Tattersall & DeSalle
The Social Life of DNA by Alondra Nelson
Native American DNA by Kim TallBear
Making the Mexican Diabetic by Michael Montoya
Race in a Bottle by Jonathan Kahn
Uncertain Suffering by Carolyn Rouse
Sex Itself by Sarah S. Richardson
Building a Better Race by Wendy Kline
Choice and Coercion by Johanna Schoen
Sterilized by the State by Hansen & King
American Eugenics by Nancy Ordover
Eugenic Nation by Alexandra Minna Stern
A Century of Eugenics in America by Paul A. Lombardo
In the Name of Eugenics by Daniel J. Kevles
War Against the Weak by Edwin Black
Illiberal Reformers by Thomas C. Leonard
Defectives in the Land by Douglas C. Baynton
Framing the moron by Gerald V O’Brien
Imbeciles by Adam Cohen
Three Generations, No Imbeciles by Paul A. Lombardo
Defending the Master Race by Jonathan Peter Spiro
Hitler’s American Model by James Q. Whitman
Beyond Human Nature by Jesse J. Prinz
Beyond Nature and Culture by Philippe Descola
The Mirage of a Space between Nature and Nurture by Evelyn Fox Keller
Biocultural Creatures by Samantha Frost
Dynamics of Human Biocultural Diversity by Elisa J Sobo
Monoculture by F.S. Michaels
A Body Worth Defending by Ed Cohen
The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes
A Psychohistory of Metaphors by Brian J. McVeigh
The Master and His Emissary by Iain McGilchrist
From Bacteria to Bach and Back by Daniel C. Dennett
Consciousness by Susan Blackmore
The Meme Machine by Susan Blackmore
Chasing the Scream by Johann Hari
Don’t Sleep, There Are Snakes by Daniel L. Everett
Dark Matter of the Mind by Daniel L. Everett
Language by Daniel L. Everett
Linguistic Relativity by Caleb Everett
Numbers and the Making of Us by Caleb Everett
Linguistic Relativities by John Leavitt
The Language Myth by Vyvyan Evans
The Language Parallax by Paul Friedrich
Louder Than Words by Benjamin K. Bergen
Out of Our Heads by Alva Noë
Strange Tools by Alva Noë
The Embodied Mind by Varela, Thompson, & Rosch
Immaterial Bodies by Lisa Blackman
Radical Embodied Cognitive Science by Anthony Chemero
How Things Shape the Mind by Lambros Malafouris
Vibrant Matter by Jane Bennett
Entangled by Ian Hodder
How Forests Think by Eduardo Kohn
The New Science of the Mind by Mark Rowlands
Supersizing the Mind by Andy Clark
Living Systems by Jane Cull
The Systems View of Life by Capra & Luisi
Evolution in Four Dimensions by Jablonka & Lamb
Hyperobjects by Timothy Morton
Sync by Steven H. Strogatz
How Nature Works by Per Bak
Warless Societies and the Origin of War by Raymond C. Kelly
War, Peace, and Human Nature by Douglas P. Fry
Darwinism, War and History by Paul Crook

Time and Trauma

And I think of that “Groundhog Day” movie with Bill Murray in which he repeats the same day, again and again, with only minor changes. If you’ve seen the movie, Murray finally breaks out of what appears to be an infinite loop only when he changes his ways, his approach to life, his mentality. He becomes a better person and even gets the girl.

When is the USA going to break out of its infinite loop of war? Only when we change our culture, our mentality.

A “war on terror” is a forever war, an infinite loop, in which the same place names and similar actions crop up again and again. Names like Mosul and Helmand province. Actions like reprisals and war crimes and the deaths of innocents, because that is the face of war.

~W.J. Astore, Happy 4th of July! And a Global War on Something

* * *

The impression we form is that it is not that linear time perception or experience that has been corrupted by trauma; it is that time “itself” has been traumatized — so that we come to comprehend “history” not as a random sequence of events, but as a series of traumatic clusters. This broken time, this sense of history as a malign repetition, is “experienced” as seizure and breakdown; I have placed “experienced” in inverted commas here because the kind of voiding interruption of subjectivity seems to obliterate the very conditions that allow experience to happen.

It is as if the combination of adolescent erotic energy with an inorganic artefact … produces a trigger for a repeating of the ancient legend. It is not clear that “repeating” is the right word here, though. It might be better to say that the myth has been re-instantiated, with the myth being understood as a kind of structure that can be implemented whenever the conditions are right. But the myth doesn’t repeat so much as it abducts individuals out of linear time and into its “own” time, in which each iteration of the myth is in some sense always the first time.

…the mythic is part of the virtual infrastructure which makes human life as such possible. It is not the case that first of all there are human beings, and the mythic arrives afterwards, as a kind of cultural carapace added to a biological core. Humans are from the start — or from before the start, before the birth of the individual — enmeshed in mythic structures.

~Mark Fisher, Eerie Thanatos, The Weird and the Eerie (pp. 96-97)