What is inheritance?

The original meaning of a gene was simply a heritable unit, a concept that long predates the discovery of DNA. The theory was based on phenotype, i.e., observable characteristics. What early geneticists didn’t know, and what still doesn’t often get acknowledged, is that much besides genes gets inherited from parents, especially from the mother. This includes everything from epigenetics to the microbiome, the former determining which genes express and how they express, while the latter accounts for the majority of the genetic material in the human body. The fetus will also inherit health effects from the mother’s condition, such as malnutrition and stress, viruses and parasites — all of those surely having epigenetic effects that could get passed on for generations.

Even more interestingly, DNA itself gets passed on in diverse ways. Viruses will snip out sections of DNA and then insert them into the DNA of new hosts. Mothers, including surrogate mothers, can gain DNA from the fetuses they carry. Those mothers can then pass that DNA on to any fetus they carry afterward, which could leave a fetus with DNA from two fathers. Fetuses can also absorb DNA from fraternal twins, or even absorb the other fetus entirely, forming what is called a chimera. Bone marrow transplantees become chimeras as well, because they inherit the donor’s blood-forming stem cells, along with the donor’s epigenetics. These transplantees could pass this on during a pregnancy.

We hardly know what all of that might mean. There is no single heritable unit that by itself does anything; it is not the direct source of causation. A gene only acts as part of DNA within a specific cell, and all of that within an entire biological system existing under specific environmental conditions. The important causal factors are many and varied. What is in DNA only matters to the degree it is expressed, and what determines whether it is expressed will also determine how it expresses. Evelyn Fox Keller writes that, “the causal interactions between DNA, proteins, and trait development are so entangled, so dynamic, and so dependent on context that the very question of what genes do no longer makes much sense. Indeed, biologists are no longer confident that it is possible to provide an unambiguous answer to the question of what a gene is. The particulate gene is a concept that has become increasingly ambiguous and unstable, and some scientists have begun to argue that the concept has outlived its productive prime” (The Mirage of a Space between Nature and Nurture, p. 50). Gene expression as seen in phenotype is determined by a complex system of overlapping factors. Talk of genes doesn’t help us much, if at all. And heritability rates tell us absolutely nothing about the details, such as distinguishing what exactly a gene is as a heritable unit and causal factor, much less differentiating that from everything else. As Keller further explains:

“It is true that many authors continue to refer to genes, but I suspect that this is largely due to the lack of a better terminology. In any case, continuing reference to “genes” does not obscure the fact that the early notion of clearly identifiable, particulate units of inheritance— which not only can be associated with particular traits, but also serve as agents whose actions produce those traits— has become hopelessly confounded by what we have learned about the intricacies of genetic processes. Furthermore, recent experimental focus has shifted away from the structural composition of DNA to the variety of sequences on DNA that can be made available for (or blocked from) transcription— in other words, the focus is now on gene expression. Finally, and relatedly, it has become evident that nucleotide sequences are used not only to provide transcripts for protein synthesis, but also for multilevel systems of regulation at the level of transcription, translation, and posttranslational dynamics. None of this need impede our ability to correlate differences in sequence with phenotypic differences, but it does give us a picture of such an immensely complex causal dynamic between DNA, RNA, and protein molecules as to definitely put to rest all hopes of a simple parsing of causal factors. Because of this, today’s biologists are far less likely than their predecessors were to attribute causal agency either to genes or to DNA itself— recognizing that, however crucial the role of DNA in development and evolution, by itself, DNA doesn’t do anything. It does not make a trait; it does not even encode a program for development. Rather, it is more accurate to think of DNA as a standing resource on which a cell can draw for survival and reproduction, a resource it can deploy in many different ways, a resource so rich as to enable the cell to respond to its changing environment with immense subtlety and variety. As a resource, DNA is indispensable; it can even be said to be a primary resource. But a cell’s DNA is always and necessarily embedded in an immensely complex and entangled system of interacting resources that are, collectively, what give rise to the development of traits. Not surprisingly, the causal dynamics of the process by which development unfolds are also complex and entangled, involving causal influences that extend upward, downward, and sideways.” (pp. 50-52)

Even something seemingly as simple as gender is far from simple. Claire Ainsworth has a fascinating piece, Sex redefined (nature.com), where she describes the new understanding that has developed. She writes that, “Sex can be much more complicated than it at first seems. According to the simple scenario, the presence or absence of a Y chromosome is what counts: with it, you are male, and without it, you are female. But doctors have long known that some people straddle the boundary — their sex chromosomes say one thing, but their gonads (ovaries or testes) or sexual anatomy say another. Parents of children with these kinds of conditions — known as intersex conditions, or differences or disorders of sex development (DSDs) — often face difficult decisions about whether to bring up their child as a boy or a girl.”

This isn’t all that rare considering that, “Some researchers now say that as many as 1 person in 100 has some form of DSD.” And, “What’s more, new technologies in DNA sequencing and cell biology are revealing that almost everyone is, to varying degrees, a patchwork of genetically distinct cells, some with a sex that might not match that of the rest of their body. Some studies even suggest that the sex of each cell drives its behaviour, through a complicated network of molecular interactions.” Gender should be one of the most obvious areas in which to prove genetic determinism, if it could be proven. But clearly there is more going on here. The inheritance and expression of traits is a messy process. And we are barely scratching the surface. I haven’t seen any research that explores how epigenetics, the microbiome, etc. could influence gender or similar developmental results.


Too Much Success

It’s amazing the abilities some species have. But that brings up a question. If they are such an advantage, why doesn’t every species have equally amazing abilities? This particularly comes to mind with perceptual abilities.

Human senses are fairly mediocre. We can’t sense much of the world that many other species can. We make up for it with opposable thumbs and cognitive development. Just imagine how much more bad ass humans would be if we could see like a hawk, hear like an owl, and smell like a wolf.

Maybe there is no evolutionary advantage to having the best possible abilities in all ways. It might actually be a disadvantage, both for the species and for the ecosystem or even biosphere. Any given species being too successful might throw off the balance between species. Evolution isn’t only seeking the survival of species but also the survival of complex relationships between species.

Consider one of the earliest microbes, cyanobacteria. They were so successful that they caused what is called the Great Oxygenation Event. Most other microbes at the time were anaerobic, and oxygen was toxic to them. The result was Earth’s first mass extinction. Even the cyanobacteria didn’t benefit, as their numbers also precipitously dropped.

Too much success can be a dangerous thing, for all involved. This is a lesson of evolution. It’s the success of the entire system of species that matters, not the success of a single species. The survival of the fittest species is secondary to the survival of the fittest ecosystem and biosphere. As Phil Plait put it (Poisoned Planet):

“It’s an interesting tale, don’t you think? The dominant form of life on Earth, spread to the far reaches of the globe, blissfully and blithely pumping out vast amounts of pollution, changing the environment on a planetary scale, sealing their fate. They wouldn’t have been able to stop even if they knew what they were doing, even if they had been warned far, far in advance of the effects they were creating.

“If this is a cautionary tale, if there is some moral you can take away from this, you are free to extract it for yourself. If you do, perhaps you can act on it. One can hope that in this climate, change is always possible.”

Snow Crash vs Star Trek

“[C]yberpunk sci-fi of the 1980s and early 1990s accurately predicted a lot about our current world. Our modern society is totally wired and connected, but also totally unequal,” writes Noah Smith (What we didn’t get, Noahpinion). “We are, roughly, living in the world the cyberpunks envisioned.”

I don’t find that surprising. Cyberpunk writers were looking at ongoing trends and extrapolating about the near future. We are living in that near future.

Considering that inequality in the US began growing several decades ago, around the time cyberpunk became a genre, it wasn’t hard to imagine that such inequality would continue to grow and play out within technology itself. And the foundations of present technology were developed in the decades before cyberpunk. The broad outlines of the world we now live in could be seen earlier last century.

That isn’t to downplay the predictions made and envisioned. But it puts them into context.

Smith then asks, “What happened? Why did mid-20th-century sci fi whiff so badly? Why didn’t we get the Star Trek future, or the Jetsons future, or the Asimov future?” His answer is that, “Two things happened. First, we ran out of theoretical physics. Second, we ran out of energy.”

That question and its answer are premature. We haven’t yet fully entered the Star Trek future. One of the first major events of its future history is the Bell Riots, which happen seven years from now this month, but conditions are supposed to worsen over the years preceding them (i.e., the present). Like the cyberpunk writers, Star Trek predicted an age of growing inequality, poverty, and homelessness. And that is to be followed by international conflict, global nuclear war, and massive decimation of civilization.

World War III will end in 2053. The death toll will be 600 million. Scientific research continues, but it will take decades for civilization to recover. It’s not until the 22nd century that serious space exploration begins. And it’s not until later in that century that the Federation is formed. The Star Trek visionaries weren’t starry-eyed optimists offering much hope to living generations. They made clear that the immediate future was going to be as dark or darker than most cyberpunk fiction.

The utopian world that I watched in the 1990s was from The Next Generation and Deep Space Nine. Those two shows portray the world 250 years from now. That is why I would argue it’s premature to say that no further major advancements in science will be made over that time period.

Scientific discoveries and technological developments tend to happen in spurts. We can be guaranteed that, assuming we survive, future science will seem like magic to us — based as it would be on knowledge we don’t yet comprehend. At the beginning of the 20th century, there were those who predicted that nothing significant was left for humans to learn and discover. I laugh at anyone who makes the same naive prediction here at the beginning of the 21st century.

To be fair, Smith doesn’t end there. He asks, “These haven’t happened yet, but it’s only been a couple of decades since this sort of futurism became popular. Will we eventually get these things?” And he adds that, “we also don’t really have any idea how to start making these things.”

Well, no one could answer what the world will be like a century from now any more than anyone a century ago was able to predict the world we now live in. Nothing happens yet, until it happens. And no one really has any idea how to start making anything, until someone figures out how to do so. History is an endless parade of the supposedly impossible becoming possible, the unforeseen becoming commonplace.

Smith goes on to conjecture that, “maybe it’s the authors at the very beginning of a tech boom, before progress in a particular area really kicks into high gear, who are able to see more clearly where the boom will take us.” Sure. But no one can be certain whether they are at the beginning of a tech boom. That can only be seen clearly in retrospect.

If the Star Trek future is more or less correct, the coming half century will be the beginning of a new tech boom that leads to the development of warp drive in 2063 (or something akin to it). And following that will be an era of distant space travel and colonization. It would be the equivalent of my grandparents’ generation growing up with the first commercially sold cars and, by adulthood a half century later, experiencing the first manned space flight — there being no way to predict the latter from the former.

As a concluding thought, Smith states that, “We’ll never know.” I’m sure many in my grandparents’ generation said the same thing. Yet they did come to know, as the future came faster than most expected. When that next stage of technological development is in full force, according to Star Trek’s future historians, those born right now will be hitting middle age and those reaching young adulthood now will be in their sixties. Plenty of people in the present living generations will be around to know what the future holds.

Maybe the world of Snow Crash we seem to be entering into will be the trigger that sends us hurtling toward Star Trek’s World War III and all that comes after. Maybe what seems like an endpoint is just another beginning.

Research on Jaynes’ Bicameral Theory

The onset of data-driven mental archeology
by Sidarta Ribeiro

For many years this shrewd hypothesis seemed untestable. Corollaries such as the right lateralization of auditory hallucinations were dismissed as too simplistic—although schizophrenic patients present less language lateralization (Sommer et al., 2001). Yet, the investigation by Diuk et al. (2012) represents a pioneering successful attempt to test Jaynes’ theory in a quantitative manner. The authors assessed dozens of Judeo-Christian and Greco-Roman texts from up to the second century CE, as well contemporary Google n-grams, to calculate semantic distances between the reference word “introspection” and all the words in these texts. Cleverly, “introspection” is actually absent from these ancient texts, serving as an “invisible” probe. Semantic distances were evaluated by Latent Semantic Analysis, a high-dimensional model in which the semantic similitude between words is proportional to their co-occurrence in texts with coherent topics (Deerwester et al., 1990; Landauer and Dumais, 1997). The approach goes well beyond the mere counting of word occurrence in a corpus, actually measuring how much the concept of introspection is represented in each text in a “distributed semantic sense,” in accordance with the semantic holism (Frege, 1884, 1980; Quine, 1951; Wittgenstein, 1953, 1967; Davidson, 1967) that became mainstream in artificial intelligence (AI) and machine learning (Cancho and Sole, 2001; Sigman and Cecchi, 2002).

The results were remarkable. In Judeo-Christian texts, similitude to introspection increased monotonically over time, with a big change in slope from the Old to the New Testaments. In Greco-Roman texts, comprising 53 authors from Homer to Julius Cesar, a more complex dynamics appeared, with increases in similitude to introspection through periods of cultural development, and decreases during periods of cultural decadence. Contemporary texts showed overall increase, with periods of decline prior to and during the two World Wars. As Jaynes would have predicted, the rise and fall of entire societies seems to be paralleled by increases and decreases in introspection, respectively.

Diuk et al. show that the evolution of mental life can be quantified from the cultural record, opening a whole new avenue of hypothesis testing for Jaynes’ theory. While it is impossible to prove that pre-Axial people “heard” the voices of the gods, the findings suggest new ways of studying historical and contemporary texts. In particular, the probing of ancient texts with words like “dream,” “god” and “hallucination” has great potential to test Jaynesian concepts.

The featured study lends support to the notion that consciousness is a social construct in constant flux. Quoting senior author Guillermo Cecchi, “it is not just the ‘trending topics,’ but the entire cognitive make-up that changes over time, indicating that culture co-evolves with available cognitive states, and what is socially considered dysfunction can be tested in a more quantitative way.”
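To make the method described above a bit more concrete: latent semantic analysis reduces a word-document matrix to a low-dimensional space in which similarity is measured by cosine distance. Below is a minimal sketch of that general technique in Python, not the actual pipeline of Diuk et al.; the toy corpus and probe text are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for the historical texts; the real study used dozens of
# Judeo-Christian and Greco-Roman works plus Google n-grams.
documents = [
    "the king heard the voice of the god and obeyed without question",
    "he pondered his own feelings and examined his conscience in silence",
    "they marched to the city walls and burned the gates with fire",
]
probe = "introspection reflecting on one's own thoughts feelings and inner mind"

# Build a term-document matrix and reduce it with truncated SVD (the LSA step).
vectorizer = TfidfVectorizer()
term_doc = vectorizer.fit_transform(documents + [probe])
lsa = TruncatedSVD(n_components=2, random_state=0)
vectors = lsa.fit_transform(term_doc)

# Semantic similitude between the probe concept and each text is the cosine
# similarity of their low-dimensional vectors, even if the probe word itself
# never appears in the text.
doc_vectors, probe_vector = vectors[:-1], vectors[-1:]
for text, score in zip(documents, cosine_similarity(doc_vectors, probe_vector).ravel()):
    print(f"{score:+.2f}  {text[:45]}...")
```

On a corpus this tiny the numbers mean little; the point is only the shape of the computation: no counting of the word “introspection” itself, just distances in a reduced semantic space.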

Parasites Among the Poor and the Plutocrats

Hookworm rates in parts of the United States have reached the levels seen in developing countries.

This was a major problem in the past, specifically in the rural South. It was thought to have been largely eliminated, although that might not have been true. The most harmed populations just so happen to be the very populations most ignored — these are mostly poor rural populations with little healthcare and hence limited availability of public health data. The problem was maybe more hidden than solved. Until a study was recently done, it apparently wasn’t an issue of concern beyond the local level and so there was no motivation to research it.

As hookworm is a parasite, with it comes the problem of parasite load. Parasitism and parasite load affect not just general health but also energy levels, neurocognitive development, intelligence, and personality traits; for example, toxoplasmosis is correlated with higher rates of neuroticism, and parasite load is correlated with lower rates of openness. Populations with a heavy parasite load will behave in ways that are stereotyped as being poor, such as acting lethargic and unmotivated.

Research indicates that poverty rates are an indicator of diverse other factors, many of them environmental. People dealing with such things as stress, malnutrition, and parasites literally have less energy and cognitive ability available to them. Under these oppressively draining conditions, the body and mind simply go into survival mode and short-term preparedness. This is seen on the physiological level, with stressful conditions causing early sexual maturity and an increase in fat reserves.

This relates to the worsening poverty in many parts of the country, exacerbated by growing inequality across the country. But in many cases these are problems that aren’t necessarily worsening, as they have simply been ignored up to this point. Put this also into the context of problems that are clearly worsening, specifically among lower class whites: unemployment, homelessness, stress-related diseases, mental health conditions, alcoholism, drug addiction, and suicides. It’s not just poor minorities that have been shoved out of the way in the march of progress. Even the middle class is feeling the pressure, many of them falling down the economic ladder.

This is why most Americans at present trust neither big government nor big business. And this is why economic populism has taken hold. Since the DNC silenced Sanders in order to maintain the status quo, we got Trump as president instead. If we ignore these basic problems any longer, we are looking toward the possibility of an authoritarian takeover of our government, and that would mean something far worse than Trump. That is what happens when a large part of the citizenry loses faith in the system and, unless a democratic revolution happens, is willing to look to a strongman who promises to do what needs to be done.

Simply put, we are long past the point of tolerating this inequality. This inequality is not just of income and wealth but also of political representation and public voice, of life opportunities and basic health. We shouldn’t tolerate this because the oppressed will only tolerate it for so long. Once we get beyond the point of collective failure, there is no turning back. The upper classes might prefer to continue ignoring it, but that isn’t a choice that is available. If push comes to shove, the upper classes might not like the choice that the oppressed will eventually demand by force. That is precisely why FDR created the New Deal. It was either that or something far worse: fascist coup, communist revolution, or societal collapse.

It would be nice if we Americans proactively solved our problems for once, instead of waiting for them to become an emergency and then haphazardly reacting. We probably won’t be so lucky as to get another Roosevelt-like leader with a sense of noblesse oblige, a belief in the duty to defend and uphold the public good. With that in mind, a useful beginning toward preventing catastrophe would be taking care of the basic public health issues of rampant parasitism, lead toxicity, etc. That is the very least we can do, assuming we hope to avoid the worst. If we need an existential crisis to motivate ourselves and gain the political will to take action, we appear to be at that point or close to it.

Yet before we can deal with the parasites in poor areas, we might have to purge the body politic of the more dangerous parasites breeding within the plutocracy. That might require strong medicine.

* * *

Hookworm, a disease of extreme poverty, is thriving in the US south. Why?
by Ed Pilkington, The Guardian

These are the findings of a new study into endemic tropical diseases, not in places usually associated with them in the developing world of sub-Saharan Africa and Asia, but in a corner of the richest nation on earth: Alabama.

Scientists in Houston, Texas, have lifted the lid on one of America’s darkest and deepest secrets: that hidden beneath fabulous wealth, the US tolerates poverty-related illness at levels comparable to the world’s poorest countries. More than one in three people sampled in a poor area of Alabama tested positive for traces of hookworm, a gastrointestinal parasite that was thought to have been eradicated from the US decades ago.

The long-awaited findings, revealed by the Guardian for the first time, are a wake-up call for the world’s only superpower as it grapples with growing inequality. Donald Trump has promised to “Make America Great Again” and tackle the nation’s crumbling infrastructure, but he has said very little about enduring chronic poverty, particularly in the southern states. […]

The parasite, better known as hookworm, enters the body through the skin, usually through the soles of bare feet, and travels around the body until it attaches itself to the small intestine where it proceeds to suck the blood of its host. Over months or years it causes iron deficiency and anemia, weight loss, tiredness and impaired mental function, especially in children, helping to trap them into the poverty in which the disease flourishes.

Hookworm was rampant in the deep south of the US in the earlier 20th century, sapping the energy and educational achievements of both white and black kids and helping to create the stereotype of the lazy and lethargic southern redneck. As public health improved, most experts assumed it had disappeared altogether by the 1980s.

But the new study reveals that hookworm not only survives in communities of Americans lacking even basic sanitation, but does so on a breathtaking scale. None of the people included in the research had travelled outside the US, yet parasite exposure was found to be prevalent, as was shockingly inadequate waste treatment.

The peer-reviewed research paper, published in the American Journal of Tropical Medicine and Hygiene, focuses on Lowndes County, Alabama – the home state of the US attorney general, Jeff Sessions, and a landmark region in the history of the nation’s civil rights movement. “Bloody Lowndes”, the area was called in reference to the violent reaction of white residents towards attempts to undo racial segregation in the 1950s.

It was through this county that Martin Luther King led marchers from Selma to Montgomery in 1965 in search of voting rights for black citizens. More than half a century later, King’s dream of what he called the “dignity of equality” remains elusive for many of the 11,000 residents of Lowndes County, 74% of whom are African American.

The average income is just $18,046 (£13,850) a year, and almost a third of the population live below the official US poverty line. The most elementary waste disposal infrastructure is often non-existent.

Some 73% of residents included in the Baylor survey reported that they had been exposed to raw sewage washing back into their homes as a result of faulty septic tanks or waste pipes becoming overwhelmed in torrential rains.

The Baylor study was inspired by Catherine Flowers, ACRE’s founder, who encouraged the Houston scientists to carry out the review after she became concerned about the health consequences of having so many open sewers in her home county. “Hookworm is a 19th-century disease that should by now have been addressed, yet we are still struggling with it in the United States in the 21st century,” she said.

“Our billionaire philanthropists like Bill Gates fund water treatment around the world, but they don’t fund it here in the US because no one acknowledges that this level of poverty exists in the richest nation in the world.” […]

He added that people were afraid to report the problems, given the spate of criminal prosecutions that were launched by Alabama state between 2002 and 2008 against residents who were open-piping sewage from their homes, unable to afford proper treatment systems. One grandmother was jailed over a weekend for failing to buy a septic tank that cost more than her entire annual income. […]

The challenge to places like Lowndes County is not to restore existing public infrastructure, as Trump has promised, because there is no public infrastructure here to begin with. Flowers estimates that 80% of the county is uncovered by any municipal sewerage system, and in its absence people are expected – and in some cases legally forced – to provide their own.

Even where individuals can afford up to $15,000 to install a septic tank – and very few can – the terrain is against them. Lowndes County is located within the “Black Belt”, the southern sweep of loamy soil that is well suited to growing cotton and as a result spawned a multitude of plantations, each worked by a large enslaved population.

The same thing that made the land so good for cotton – its water-retaining properties – also makes it a hazard to the thousands of African Americans who still live on it today. When the rains come, the soil becomes saturated, overwhelming inadequate waste systems and providing a perfect breeding ground for hookworm. […]

“We now need to find how widespread hookworm is across the US,” said Dr Peter Hotez, dean of the National School of Tropical Medicine, who led the research team along with Rojelio Mejia. Hotez, who has estimated that as many as 12 million Americans could be suffering from neglected tropical diseases in poor parts of the south and midwest, told the Guardian the results were a wake-up call for the nation.

“This is the inconvenient truth that nobody in America wants to talk about,” he said. “These people live in the southern United States, and nobody seems to care; they are poor, and nobody seems to care; and more often than not they are people of color, and nobody seems to care.”

Tortured Data

“Beware of testing too many hypotheses; the more you torture the data, the more likely they are to confess, but confession obtained under duress may not be admissible in the court of scientific opinion.”
—Stephen M. Stigler, “Testing Hypotheses or Fitting Models?” (1987)

That is useful advice for everyone, but even more so a warning to those seeking to massage cherry-picked data to tell just-so stories. In particular, a few HBDers (human biodiversity advocates) can be quite brilliant in their ability to speculate and gather data to support their speculations, while ignoring data that contradicts them. This is seen in the defense of race realism, a popular ideology among HBDers.

Some HBDers and other race realists are so talented at speculating that they come to treat their ideologically driven interpretations as factual statements of truth, even when they deny this is the case, just as they deny the consequences of such ideologies having been enforced for centuries through social control, political oppression, and economic inequality. A result can be misinterpreted as a cause, an easy error to make when evidence for the direction of causation is lacking. It leaves the field open to self-serving bias.

When one starts with a hypothesis that one assumes is true, it’s easy to look for evidence to support what one already wants to believe. There are few people in the world who couldn’t offer what they consider evidence in support of their beliefs, no matter how weak and grasping it might appear to others. This is even easier to accomplish when looking for correlations, as anything can be correlated with many other things without ever having to prove a causal connection, and it’s easy to ignore the fact that most correlations are spurious.
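The point about correlations is easy to demonstrate with a toy simulation. The sketch below (in Python, with purely random data invented for illustration) tests a thousand noise variables against a noise outcome; a predictable fraction clear the conventional significance bar by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# One outcome and 1,000 candidate predictors, all pure noise:
# by construction there is no real relationship anywhere in this data.
n_samples, n_hypotheses = 100, 1000
outcome = rng.normal(size=n_samples)
candidates = rng.normal(size=(n_samples, n_hypotheses))

# Correlate every candidate with the outcome and count how many clear the
# conventional threshold (|r| > ~0.197 corresponds to p < 0.05 at n = 100).
correlations = np.array([
    np.corrcoef(candidates[:, i], outcome)[0, 1] for i in range(n_hypotheses)
])
false_positives = int(np.sum(np.abs(correlations) > 0.197))

print(f"'Significant' correlations found in pure noise: {false_positives} of {n_hypotheses}")
# Expect roughly 50, i.e., about 5%: test enough variables and some will "confess".
```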

None of that matters to the true believer, though. Torturing the data until it confesses is the whole point. As in real world incidents of torture, the validity of the confession is irrelevant.

Useful Fictions Becoming Less Useful

Humanity has long been under the shadow of the Axial Age, no less true today than in centuries past. But what has this meant in both our self-understanding and in the kind of societies we have created? Ideas, as memes, can survive and even dominate for millennia. This can happen even when they are wrong, as long as they are useful to the social order.

One such idea involves nativism and essentialism, made possible through highly developed abstract thought. This notion of something inherent went along with the notion of division, from mind-body dualism to brain modules (what is inherent in one area being separate from what is inherent elsewhere). It goes back at least to the ancient Greeks such as with Platonic idealism (each ideal an abstract thing unto itself), although abstract thought required two millennia of development before it gained its most powerful form through modern science. As Elisa J. Sobo noted, “Ironically, prior to the industrial revolution and the rise of the modern university, most thinkers took a very comprehensive view of the human condition. It was only afterward that fragmented, factorial, compartmental thinking began to undermine our ability to understand ourselves and our place in— and connection with— the world.”

Maybe we are finally coming around to more fully questioning these useful fictions because they have become less useful as the social order changes, as the entire world shifts around us with globalization, climate change, mass immigration, etc. We came to see emotions as so essential that we decided to start a war against one of them with the War on Terror, as if this emotion were definitive of our shared reality (and a great example of metonymy, by the way), but obviously fighting wars against a reified abstraction isn’t an optimal strategy for societal progress. Maybe we need new ways of thinking.

The main problem with useful fictions isn’t necessarily that they are false, partial, or misleading. A useful fiction wouldn’t last for millennia if it weren’t, first and foremost, useful (especially true in relation to the views of human nature found in folk psychology). It is true that our seeing these fictions for what they are is a major change, but more importantly what led us to question their validity is that some of them have stopped being as useful as they once were. The nativists, essentialists, and modularists argued that such things as emotional experience, color perception, and language learning were inborn abilities and natural instincts: genetically-determined, biologically-constrained, and neurocognitively-formed. Based on theory, immense amounts of time, energy, and resources were invested into the promises made.

This motivated the entire search to connect everything observable in humans back to a gene, a biological structure, or an evolutionary trait (with the brain getting outsized attention). Yet reality has turned out to be much more complex with environmental factors such as culture, peer influence, stress, nutrition and toxins, along with biological factors such as epigenetics, brain plasticity, microbiomes, parasites, etc. The original quest hasn’t been as fruitful as hoped for, partly because of problems in conceptual frameworks and the scientific research itself, and this has led some to give up on the search. Consider how when one part of the brain is missing or damaged, other parts of the brain often compensate and take over the correlated function. There have been examples of people lacking most of their brain matter and still able to function in what appears to be outwardly normal behavior. The whole is greater than the sum of the parts, such that the whole can maintain its integrity even without all of the parts.

The past view of the human mind and body has been simplistic in the extreme. This is because we’ve lacked the capacity to see most of what goes on in making them possible. Our conscious minds, including our rational thought, are far more limited than many assumed. And the unconscious mind, the dark matter of the mind, is so much more amazing in what it accomplishes. In discussing what they call conceptual blending, Gilles Fauconnier and Mark Turner write (The Way We Think, p. 18):

“It might seem strange that the systematicity and intricacy of some of our most basic and common mental abilities could go unrecognized for so long. Perhaps the forming of these important mechanisms early in life makes them invisible to consciousness. Even more interestingly, it may be part of the evolutionary adaptiveness of these mechanisms that they should be invisible to consciousness, just as the backstage labor involved in putting on a play works best if it is unnoticed. Whatever the reason, we ignore these common operations in everyday life and seem reluctant to investigate them even as objects of scientific inquiry. Even after training, the mind seems to have only feeble abilities to represent to itself consciously what the unconscious mind does easily. This limit presents a difficulty to professional cognitive scientists, but it may be a desirable feature in the evolution of the species. One reason for the limit is that the operations we are talking about occur at lightning speed, presumably because they involve distributed spreading activation in the nervous system, and conscious attention would interrupt that flow.”

As they argue, conceptual blending helps us understand why a language module or instinct isn’t necessary. Research has shown that there is no single part of the brain nor any single gene that is solely responsible for much of anything. The constituent functions and abilities that form language likely evolved separately for other reasons that were advantageous to survival and social life. Language isn’t built into the brain as an evolutionary leap; rather, it was an emergent property that couldn’t have been predicted from any prior neurocognitive development, which is to say language was built on abilities that by themselves would not have been linguistic in nature.

Of course, Fauconnier and Turner are far from being the only proponents of such theories, as this perspective has become increasingly attractive. Another example is Mark Changizi’s theory presented in Harnessed where he argues that (p. 11), “Speech and music culturally evolved over time to be simulacra of nature” (see more about this here and here). Whatever theory one goes with, what is required is to explain the research challenging and undermining earlier models of cognition, affect, linguistics, and related areas.

Another book I was reading is How Emotions Are Made by Lisa Feldman Barrett. She covers similar territory, despite her focus being on something so seemingly simple as emotions. We rarely give emotions much thought, taking them for granted, but we shouldn’t. How we understand our experience and expression of emotion is part and parcel of a deeper view that our society holds about human nature, a view that also goes back millennia. This ancient lineage of inherited thought is what makes it problematic, since it feels intuitively true, being so entrenched within our culture (Kindle Locations 91-93):

“And yet . .  . despite the distinguished intellectual pedigree of the classical view of emotion, and despite its immense influence in our culture and society, there is abundant scientific evidence that this view cannot possibly be true. Even after a century of effort, scientific research has not revealed a consistent, physical fingerprint for even a single emotion.”

“So what are they, really?” Barrett asks about emotions (Kindle Locations 99-104):

“When scientists set aside the classical view and just look at the data, a radically different explanation for emotion comes to light. In short, we find that your emotions are not built-in but made from more basic parts. They are not universal but vary from culture to culture. They are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment. Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real— that is, hardly an illusion, but a product of human agreement.”

This goes along with an area of thought that arose out of philology, classical studies, consciousness studies, Jungian psychology, and anthropology. As always, I’m particularly thinking of the bicameral mind theory of Julian Jaynes. In the most ancient civilizations, there weren’t monetary systems, nor, according to Jaynes, was there consciousness as we know it. He argues that individual self-consciousness was built on an abstract metaphorical space that was internalized and narratized. This privatization of personal space led to the possibility of self-ownership, the later basis of capitalism (and hence capitalist realism). It’s abstractions upon abstractions, until all of modern civilization bootstrapped itself into existence.

The initial potentials within human nature could be and have been used to build diverse cultures, but modern society has genocidally wiped out most of this once-existing diversity, leaving behind a near total dominance of WEIRD monoculture. This allows us modern Westerners to mistake our own culture for universal human nature. Our imaginations are constrained by a reality tunnel, which further strengthens the social order (control of the mind is the basis for control of society). Maybe this is why certain abstractions have been so central in conflating our social reality with physical reality, as Barrett explains (Kindle Locations 2999-3002):

“Essentialism is the culprit that has made the classical view supremely difficult to set aside. It encourages people to believe that their senses reveal objective boundaries in nature. Happiness and sadness look and feel different, the argument goes, so they must have different essences in the brain. People are almost always unaware that they essentialize; they fail to see their own hands in motion as they carve dividing lines in the natural world.”

We make the world in our own image. And then we force this social order on everyone, imprinting it not just onto the culture but onto biology itself. With epigenetics, brain plasticity, microbiomes, etc., biology readily accepts this imprinting of the social order (Kindle Locations 5499-5503):

“By virtue of our values and practices, we restrict options and narrow possibilities for some people while widening them for others, and then we say that stereotypes are accurate. They are accurate only in relation to a shared social reality that our collective concepts created in the first place. People aren’t a bunch of billiard balls knocking one another around. We are a bunch of brains regulating each other’s body budgets, building concepts and social reality together, and thereby helping to construct each other’s minds and determine each other’s outcomes.”

There are clear consequences to humans as individuals and communities. But there are other costs as well (Kindle Locations 129-132):

“Not long ago, a training program called SPOT (Screening Passengers by Observation Techniques) taught those TSA agents to detect deception and assess risk based on facial and bodily movements, on the theory that such movements reveal your innermost feelings. It didn’t work, and the program cost taxpayers $900 million. We need to understand emotion scientifically so government agents won’t detain us— or overlook those who actually do pose a threat— based on an incorrect view of emotion.”

This is one of the ways in which our fictions have become less than useful. As long as societies were relatively isolated, they could maintain their separate fictions and treat them as reality. But in a global society, these fictions end up clashing with each other in not just unuseful ways but in wasteful and dangerous ways. If TSA agents were only trying to observe people who shared a common culture of social constructs, the standard set of WEIRD emotional behaviors would apply. The problem is TSA agents have to deal with people from diverse cultures that have different ways of experiencing, processing, perceiving, and expressing what we call emotions. It would be like trying to understand world cuisine, diet, and eating habits by studying the American patrons of fast food restaurants.

Barrett points to the historical record of ancient societies and to studies done on non-WEIRD cultures. What was assumed to be true based on WEIRD scientists studying WEIRD subjects turns out not to be true for the rest of the world. But there is an interesting catch to the research, the reason so much confusion prevailed for so long. It is easy to teach people cultural categories of emotion and how to identify them. Some of the initial research on non-WEIRD populations unintentionally taught the subjects the very WEIRD emotions that they were attempting to study. The structure of the studies themselves had WEIRD biases built into them. It was only with later research that they were able to filter out these biases and observe the actual non-WEIRD responses of non-WEIRD populations.

Researchers only came to understand this problem quite recently. Noam Chomsky, for example, thought it unnecessary to study actual languages in the field. Based on his own theorizing, he believed that studying a single language such as English would tell us everything we needed to know about the basic workings of all languages in the world. This belief proved massively wrong, as field research demonstrated. There was also an idealism in the early Cold War era that led to false optimism, as Americans felt on top of the world. Chris Knight made this point in Decoding Chomsky (from the Preface):

“Pentagon’s scientists at this time were in an almost euphoric state, fresh from victory in the recent war, conscious of the potential of nuclear weaponry and imagining that they held ultimate power in their hands. Among the most heady of their dreams was the vision of a universal language to which they held the key. […] Unbelievable as it may nowadays sound, American computer scientists in the late 1950s really were seized by the dream of restoring to humanity its lost common tongue. They would do this by designing and constructing a machine equipped with the underlying code of all the world’s languages, instantly and automatically translating from one to the other. The Pentagon pumped vast sums into the proposed ‘New Tower’.”

Chomsky’s modular theory dominated linguistics for more than half a century. It is still held in high esteem, even as the evidence increasingly stacks up against it. This wasn’t just a waste of an immense amount of funding. It derailed an entire field of research and stunted the development of a more accurate understanding. Generations of linguists went chasing after a mirage. No brain module of language has been found, nor is there any hope of ever finding one. Many researchers wasted their entire careers on a theory that proved false, and many of them continue to defend it, maybe in the hope that another half century of research will finally prove it to be true after all.

There is no doubt that Chomsky has a brilliant mind. He is highly skilled in debate and persuasion. He won the battle of ideas, at least for a time. Through sheer power of his intellect, he was able to overwhelm his academic adversaries. His ideas came to dominate the field of linguistics, in what came to be known as the cognitive revolution. But Daniel Everett has stated that “it was not a revolution in any sense, however popular that narrative has become” (Dark Matter of the Mind, Kindle Location 306). If anything, Chomsky’s version of essentialism caused the temporary suppression of a revolution that was initiated by linguistic relativists and social constructionists, among others. The revolution was strangled in the crib, partly because it was fighting against an entrenched ideological framework that was millennia old. The initial attempts at research struggled to offer a competing ideological framework and they lost that struggle. Then they were quickly forgotten about, as if the evidence they brought forth was irrelevant.

Barrett explains the tragedy of this situation. She is speaking of essentialism in terms of emotions, but it applies to the entire scientific project of essentialism. It has been a failed project that refuses to accept its failure, a paradigm that refuses to die in order to make way for something else. She laments all of the waste and lost opportunities (Kindle Locations 3245-3293):

“Now that the final nails are being driven into the classical view’s coffin in this era of neuroscience, I would like to believe that this time, we’ll actually push aside essentialism and begin to understand the mind and brain without ideology. That’s a nice thought, but history is against it. The last time that construction had the upper hand, it lost the battle anyway and its practitioners vanished into obscurity. To paraphrase a favorite sci-fi TV show, Battlestar Galactica, “All this has happened before and could happen again.” And since the last occurrence, the cost to society has been billions of dollars, countless person-hours of wasted effort, and real lives lost. […]

“The official history of emotion research, from Darwin to James to behaviorism to salvation, is a byproduct of the classical view. In reality, the alleged dark ages included an outpouring of research demonstrating that emotion essences don’t exist. Yes, the same kind of counterevidence that we saw in chapter 1 was discovered seventy years earlier . .  . and then forgotten. As a result, massive amounts of time and money are being wasted today in a redundant search for fingerprints of emotion. […]

“It’s hard to give up the classical view when it represents deeply held beliefs about what it means to be human. Nevertheless, the facts remain that no one has found even a single reliable, broadly replicable, objectively measurable essence of emotion. When mountains of contrary data don’t force people to give up their ideas, then they are no longer following the scientific method. They are following an ideology. And as an ideology, the classical view has wasted billions of research dollars and misdirected the course of scientific inquiry for over a hundred years. If people had followed evidence instead of ideology seventy years ago, when the Lost Chorus pretty solidly did away with emotion essences, who knows where we’d be today regarding treatments for mental illness or best practices for rearing our children.”

 

Social Construction & Ideological Abstraction

The following passages from two books help to explain what social construction is. As society has headed in a particular direction of development, abstract thought has become increasingly dominant.

But we modern people, who take abstractions for granted, often don’t even recognize abstractions for what they are. Many abstractions simply become reality as we know it. They are ‘looped’ into existence, as with race realism, capitalist realism, etc.

Ideological abstractions become so pervasive and systemic that we lose the capacity to think outside of them. They form our reality tunnel.

This wasn’t always so. Humans used to conceive of and hence perceive the world far differently. And this shaped their sense of identity, which is hard for us to imagine.

* * *

Dynamics of Human Biocultural Diversity:
A Unified Approach

by Elisa J. Sobo
(Kindle Locations 94-104)

Until now, many biocultural anthropologists have focused mainly on the ‘bio’ half of the equation, using ‘biocultural’ generically, like biology, to refer to genetic, anatomical, physiological, and related features of the human body that vary across cultural groups. The number of scholars with a more sophisticated approach is on the upswing, but they often write only for super-educated expert audiences. Accordingly, although introductory biocultural anthropology texts make some attempt to acknowledge the role of culture, most still treat culture as an external variable— as an add-on to an essentially biological system. Most fail to present a model of biocultural diversity that gives adequate weight to the cultural side of things.

Note that I said most, not all: happily, things are changing. A movement is afoot to take anthropology’s claim of holism more seriously by doing more to connect— or reconnect— perspectives from both sides of the fence. Ironically, prior to the industrial revolution and the rise of the modern university, most thinkers took a very comprehensive view of the human condition. It was only afterward that fragmented, factorial, compartmental thinking began to undermine our ability to understand ourselves and our place in— and connection with— the world. Today, the leading edge of science recognizes the links and interdependencies that such thinking keeps falsely hidden.

Nature, Human Nature, and Human Difference:
Race in Early Modern Philosophy
by Justin E. H. Smith

pp. 9-10

The connection to the problem of race should be obvious: kinds of people are to no small extent administered into being, brought into existence through record keeping, census taking, and, indeed, bills of sale. A census form asks whether a citizen is “white,” and the possibility of answering this question affirmatively helps to bring into being a subkind of the human species that is by no means simply there and given, ready to be picked out, prior to the emergence of social practices such as the census. Censuses, in part, bring white people into existence, but once they are in existence they easily come to appear as if they had been there all along. This is in part what Hacking means by “looping”: human kinds, in contrast with properly natural kinds such as helium or water, come to be what they are in large part as a result of the human act of identifying them as this or that. Two millennia ago no one thought of themselves as neurotic, or straight, or white, and nothing has changed in human biology in the meantime that could explain how these categories came into being on their own. This is not to say that no one is melancholic, neurotic, straight, white, and so on, but only that how that person got to be that way cannot be accounted for in the same way as, say, how birds evolved the ability to fly, or how iron oxidizes.

In some cases, such as the diagnosis of mental illness, kinds of people are looped into existence out of a desire, successful or not, to help them. Racial categories seem to have been looped into existence, by contrast, for the facilitation of the systematic exploitation of certain groups of people by others. Again, the categories facilitate the exploitation in large part because of the way moral status flows from legal status. Why can the one man be enslaved, and the other not? Because the one belongs to the natural-seeming kind of people that is suitable for enslavement. This reasoning is tautological from the outside, yet self-evident from within. Edward Long, as we have seen, provides a vivid illustration of it in his defense of plantation labor in Jamaica. But again, categories cannot be made to stick on the slightest whim of their would-be coiner. They must build upon habits of thinking that are already somewhat in place. And this is where the history of natural science becomes crucial for understanding the history of modern racial thinking, for the latter built directly upon innovations in the former. Modern racial thinking could not have taken the form it did if it had not been able to piggyback, so to speak, on conceptual innovations in the way science was beginning to approach the diversity of the natural world, and in particular of the living world.

This much ought to be obvious: racial thinking could not have been biologized if there were no emerging science of biology. It may be worthwhile to dwell on this obvious point, however, and to see what more unexpected insights might be drawn out of it. What might not be so obvious, or what seems to be ever in need of renewed pointing out, is a point that ought to be of importance for our understanding of the differing, yet ideally parallel, scope and aims of the natural and social sciences: the emergence of racial categories, of categories of kinds of humans, may in large part be understood as an overextension of the project of biological classification that was proving so successful in the same period. We might go further, and suggest that all of the subsequent kinds of people that would emerge over the course of the nineteenth and twentieth centuries, the kinds of central interest to Foucault and Hacking, amount to a further reaching still, an unprecedented, peculiarly modern ambition to make sense of the slightest variations within the human species as if these were themselves species differentia. Thus for example Foucault’s well-known argument that until the nineteenth century there was no such thing as “the homosexual,” but only people whose desires could impel them to do various things at various times. But the last two centuries have witnessed a proliferation of purportedly natural kinds of humans, a typology of “extroverts,” “depressives,” and so on, whose objects are generally spoken of as if on an ontological par with elephants and slime molds. Things were not always this way. In fact, as we will see, they were not yet this way throughout much of the early part of the period we call “modern.”

Symbolic Dissociation of Nature/Nurture Debate

“One of the most striking features of the nature-nurture debate is the frequency with which it leads to two apparently contradictory results: the claim that the debate has finally been resolved (i.e., we now know that the answer is neither nature nor nurture, but both), and the debate’s refusal to die. As with the Lernian Hydra, each beheading seems merely to spur the growth of new heads.”

That is from the introduction to Evelyn Fox Keller’s The Mirage of a Space between Nature and Nurture (p. 1). I personally experienced this recently. There is a guy I’ve been discussing these kinds of issues with in recent years. We have been commenting on each other’s blogs for a long while, in an ongoing dialogue that has centered on childhood influences: peers, parenting, spanking, abuse, trauma, etc.

It seemed that we had finally come to an agreement on the terms of the debate, with him having come around to my view that the entire nature-nurture debate is pointless or confused. But then recently he once again tried to force the nature-nurture frame onto our discussion (see my last post). It’s one of those zombie ideas that isn’t easily killed, a memetic mind virus that infects the brain with no known cure. Keller throws some light on the issue (pp. 1-2):

“Part of the difficulty comes into view with the first question we must ask: what is the nature-nurture debate about? There is no single answer to this question, for a number of different questions take refuge under its umbrella. Some of the questions express legitimate and meaningful concerns that can in fact be addressed scientifically; others may be legitimate and meaningful, but perhaps not answerable; and still others simply make no sense. I will argue that a major reason we are unable to resolve the nature-nurture debate is that all these different questions are tangled together into an indissoluble knot, making it all but impossible for us to stay clearly focused on a single, well-defined and meaningful question. Furthermore, I will argue that they are so knitted together by chronic ambiguity, uncertainty, and slippage in the very language we use to talk about these issues. And finally, I will suggest that at least some of that ambiguity and uncertainty comes from the language of genetics itself.”

What occurred to me is that maybe this is intentional. It seems to be part of the design, a feature and not a flaw. That is how the debate maintains itself, by being nearly impossible to disentangle and so never allowing itself to be seen for what it is. It’s not a real debate, for what appears to be the issue is really a distraction. There is much incentive not to look at it too closely, not to pick at the knot. Underneath, there is a raw nerve of Cartesian anxiety.

This goes back to my theory of symbolic conflation. The real issue (or set of issues) is hidden behind a symbolic issue. Maybe this usually, or possibly always, takes the form of a debate being framed in a particular way. The false dichotomy of dualistic thinking isn’t just a frame, for it tells a narrative of conflict where, as long as you accept the frame, you are forced to pick a side.

I often use abortion as an example because symbolic conflation operates most often and most clearly on visceral and emotional issues involving the body, especially sex and death (abortion involving both). The debate is framed as pro-life vs pro-choice, but the reality of public opinion is that most Americans are BOTH pro-life AND pro-choice. That is to say, most Americans want to maintain a woman’s right to choose while simultaneously putting some minimal limitations on abortion. Besides, as research has shown, liberal and leftist policies (full sex education, easily available contraceptives, Planned Parenthood centers, high-quality public healthcare available to all, etc.) allow greater freedom to individuals while creating the conditions that decrease the actual rate of abortion, because they decrease unwanted pregnancies.

One thing that occurs to me is that such frames tend to favor one side. It stands out to me that those promoting the nature vs nurture frame tend to be those arguing for biological determinism (or something along those lines), just as those creating the forced choice of pro-life or pro-choice are usually those opposed to the worldview of the political left. That is another way in which it isn’t a real debate. The frame tries both to obscure the real issue(s) and to shut down debate before it happens. It’s all about social control by way of thought control. To control how an issue is portrayed and how a debate is framed is to control the sociopolitical narrative, the story being told and the conclusion it leads to. Meanwhile, the real concern of the social order is being manipulated behind the scenes. It’s a sleight-of-hand trick.

Symbolic conflation is a time-tested strategy of obfuscation. It’s also an indirect way of talking about what can’t, or rather won’t, otherwise be acknowledged, with the symbolic issue being used as a proxy. To understand what it all means, you have to look at the subtext. The framing aspect brings another layer to this process. A false dichotomy could be thought of as a symbolic dissociation, where what is inseparable in reality gets separated in the framing of symbolic ideology.

The fact of the matter is that nature and nurture are simply two ways of referring to the same thing. If the nature/nurture debate is a symbolic dissociation built on top of a symbolic conflation, is it acting as a proxy for something else? And if so, what is the real debate being hidden and obscured, whether by being talked around or only talked about indirectly?

False Dichotomy and Bad Science

Someone shared with me a link to a genetics study. The paper is “Behavioural individuality in clonal fish arises despite near-identical rearing conditions” by David Bierbach, Kate L. Laskowski, and Max Wolf. From the abstract:

“Behavioural individuality is thought to be caused by differences in genes and/or environmental conditions. Therefore, if these sources of variation are removed, individuals are predicted to develop similar phenotypes lacking repeatable individual variation. Moreover, even among genetically identical individuals, direct social interactions are predicted to be a powerful factor shaping the development of individuality. We use tightly controlled ontogenetic experiments with clonal fish, the Amazon molly (Poecilia formosa), to test whether near-identical rearing conditions and lack of social contact dampen individuality. In sharp contrast to our predictions, we find that (i) substantial individual variation in behaviour emerges among genetically identical individuals isolated directly after birth into highly standardized environments and (ii) increasing levels of social experience during ontogeny do not affect levels of individual behavioural variation. In contrast to the current research paradigm, which focuses on genes and/or environmental drivers, our findings suggest that individuality might be an inevitable and potentially unpredictable outcome of development.”

Here is what this seems to imply. We don’t yet understand (much less are we able to identify, isolate, and control) all of the genetic, epigenetic, environmental, and other factors that causally affect and contribute to individual development. Nor do we understand the complex interactions among those factors, known and unknown. To put it simply, our ignorance is much more vast than our knowledge. We don’t even have enough knowledge to know what we don’t know. But we are beginning to realize that we need to rethink what we thought we knew.
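
To make the logic of that result more concrete, here is a minimal toy simulation in Python (my own illustrative sketch, not the study’s actual analysis, and every number in it is made up). Even with the “genetic” and “environmental” terms pinned at zero, stable individual differences appear as soon as some variation gets locked in during development.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fish, n_trials = 30, 20

# Every individual gets the same genetic value and the same rearing
# environment; neither contributes anything to the variation.
genetic_effect = 0.0
environment_effect = 0.0

# Hypothetical developmental noise: each individual settles on a stable
# behavioral set point early in life, purely by chance.
set_point = rng.normal(0.0, 1.0, size=n_fish)

# Ordinary trial-to-trial noise is layered on top of that set point.
activity = (genetic_effect + environment_effect
            + set_point[:, None]
            + rng.normal(0.0, 1.0, size=(n_fish, n_trials)))

# Repeatability (intraclass correlation): the share of total variation
# that comes from consistent differences between individuals.
within = activity.var(axis=1, ddof=1).mean()
between = activity.mean(axis=1).var(ddof=1) - within / n_trials
print(f"repeatability ~ {between / (between + within):.2f}")  # roughly 0.5 here
```

The point isn’t the particular numbers, only that “neither genes nor environment” does not have to mean “no consistent individuality.”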

It reminds me of the mouse research where genetically identical mice kept in environmentally identical conditions still produced diverse behavioral results. I’ve mentioned it many times before on this blog, including in a post specifically about it: Of Mice and Men and Environments (also see Heritability & Inheritance, Genetics & Epigenetics, Etc). In the mice post, along with quoting an article, I pointed to a fascinating passage from David Shenk’s book, The Genius in All of Us. Although I was previously aware of the influence of environmental conditions, the research discussed there makes it starkly clear. I was reminded of this by another discussion of mouse research, from Richard Harris’ Rigor Mortis, subtitled “How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions” (pp. 79-81):

“Garner said that mice have great potential for biological studies, but at the moment, he believes, researchers are going about it all wrong. For the past several decades, they have pursued a common strategy in animal studies: eliminate as many variables as you can, so you can more clearly see an effect when it’s real. It sounds quite sensible, but Garner believes it has backfired in mouse research. To illustrate this point, he pointed to two cages of genetically identical mice. One cage was at the top of the rack near the ceiling, the other near the floor. Garner said cage position is enough of a difference to affect the outcome of an experiment. Mice are leery of bright lights and open spaces, but here they live in those conditions all the time. “As you move from the bottom of the rack to the top of the rack, the animals are more anxious, more stressed-out, and more immune suppressed,” he said.

“Garner was part of an experiment involving six different mouse labs in Europe to see whether behavioral tests with genetically identical mice would vary depending on the location. The mice were all exactly the same age and all female. Even so, these “identical” tests produced widely different results, depending on whether they were conducted in Giessen, Muenster, Zurich, Mannheim, Munich, or Utrecht. The scientists tried to catalog all possible differences: mouse handlers in Zurich didn’t wear gloves, for example, and the lab in Utrecht had the radio on in the background. Bedding, food, and lighting also varied. Scientists have only recently come to realize that the sex of the person who handles the mice can also make a dramatic difference. “Mice are so afraid of males that it actually induces analgesia,” a pain-numbing reaction that screws up all sorts of studies, Garner said. Even a man’s sweaty T-shirt in the same room can trigger this response.

“Behavioral tests are used extensively in research with mice (after all, rodents can’t tell handlers how an experimental drug is affecting them), so it was sobering to realize how much those results vary from lab to lab. But here’s the hopeful twist in this experiment: when the researchers relaxed some of their strict requirements and tested a more heterogeneous group of mice, they paradoxically got more consistent results. Garner is trying to convince his colleagues that it’s much better to embrace variation than to tie yourself in knots trying to eliminate it.

““Imagine that I was testing a new drug to help control nausea in pregnancy, and I suggested to the [Food and Drug Administration (FDA)] that I tested it purely in thirty-five-year-old white women all in one small town in Wisconsin with identical husbands, identical homes, identical diets which I formulate, identical thermostats that I’ve set, and identical IQs. And incidentally they all have the same grandfather.” That would instantly be recognized as a terrible experiment, “but that’s exactly how we do mouse work. And fundamentally that’s why I think we have this enormous failure rate.”

“Garner goes even further in his thinking, arguing that studies should consider mice not simply as physiological machines but as organisms with social interactions and responses to their environment that can significantly affect their health and strongly affect the experiment results. Scientists have lost sight of that. “I fundamentally believe that animals are good models of human disease,” Garner said. “I just don’t think the way we’re doing the research right now is.”

“Malcolm Macleod has offered a suggestion that would address some of the issues Garner raises: when a drug looks promising in mice, scale up the mouse experiments before trying it in people. “I simply don’t understand the logic that says I can take a drug to clinical trial on the basis of information from 500 animals, but I’m going to need 5,000 human animals to tell me whether it will work or not. That simply doesn’t compute.” Researchers have occasionally run large mouse experiments at multiple research centers, just as many human clinical trials are conducted at several medical centers. The challenge is funding. Someone else can propose the same study involving a lot fewer animals, and that looks like a bargain. “Actually, the guy promising to do it for a third of the price isn’t going to do it properly, but it’s hard to get that across,” Macleod said.”

This is the problem with framing the debate as nature vs nurture (or similar framings such as biology vs culture and organism vs environment). Even when people are aware of the limitations of this frame, the powerful sway it holds over their minds causes them to continually fall back on it. Even when I have no interest in such dualistic thinking, some people feel it necessary to categorize the sides of a debate accordingly, where apparently I’m supposed to play the role of ‘nurturist’ in opposition to their ‘biology’ advocacy: “feel your life-force, Benjamin. Come with me to the biology side!” Well, I have no desire to take sides in a false dichotomy. Oddly, the guy trying to win me over to the “biology side” of a debate (about human violence and war) is the same person who shared the clonal fish study demonstrating that genetics couldn’t explain the differences observed. So I’m not entirely sure what he thinks ‘biology’ means, or what ideological commitments it represents in his personal worldview.

(As he has mentioned in our various discussions, his studies of all of this are tied up with his experience as a father who has struggled with parenting and a husband who is recently separated, partly over parenting concerns. The sense of conflict and blame he is struggling with sounds quite serious, and I’m sympathetic. But I suspect he is looking for some kind of life meaning that maybe can’t be found where he is looking for it. Obviously, it is a highly personal issue for him, not a disinterested debate of abstract philosophy or scientific hypotheses. I’m starting to think that we aren’t even involved in the same discussion, just talking past one another. It’s doubtful that I can meet him on the level where he finds himself, and so I don’t see how I can join him in the debate that seems to matter so much to him. I won’t even try. I’m not in that headspace. We’ve commented on each other’s blogs for quite a while now, but for whatever reason we simply can’t quite fully connect. Apparently, we are unable to agree enough about what the debate even is to meaningfully disagree about it. Although he is a nice guy and we are on friendly terms, I don’t see further dialogue going anywhere. *shrug*)

When we speak of so-called ‘nature’, this doesn’t refer only to the human biology of genetics and developmental physiology but also includes supposed junk DNA and epigenetics, brain plasticity and the gut-brain connection, viruses and bacteria, parasites and parasite load, allergies and inflammation, microbiome and cultured foods, diet and nutrition, undernourishment and malnutrition, hunger and starvation, food deserts and scarcity, addiction and alcoholism, pharmaceuticals and medicines, farm chemicals and food additives, hormone mimics and heavy metal toxicity, environmental stress and physical trauma, abuse and violence, diseases of affluence and nature-deficit disorder, in utero conditions and the maternal bond, etc. All of these alter the expression of genes, both within the single lifetime of an individual and across the generations of entire populations.

There are numerous varieties of confounding factors. I could also point to sociocultural, structural, and institutional aspects of humanity: linguistic relativity and WEIRD research subjects, culture of trust and culture of honor, lifeways and mazeways, habitus and neighborhood effect, parenting and peers, inequality and segregation, placebos and nocebos, Pygmalion effect and Hawthorne effect, and on and on. As humans are social creatures, one could write a lengthy book simply listing all the larger influences of society.

Many of these problems have become most apparent in the social sciences, but they are far from limited to that area of knowledge. Very similar problems are found in the biological and medical sciences, and the hard sciences clearly overlap with the soft sciences, considering that social constructions get fed back into scientific research. With mostly WEIRD scientists studying mostly WEIRD subjects, it is the same WEIRD culture that has dominated nearly all of science, and so it is WEIRD biases that have been the greatest stumbling blocks. Plus, given what research on linguistic relativity has demonstrated, we would expect that how we talk about science will shape the research done, the results gained, the conclusions made, and the theories proposed. It’s all of one piece.

The point is that there are no easy answers or certain conclusions. In many ways, science is still in its infancy. We have barely scratched the surface of what could potentially be known. And much of what we think we know is being challenged, which is leading to a paradigm change we can barely imagine. There is a lot at stake. It goes far beyond abstract theory, hypothetical debate, and idle speculation.

Most importantly, we must never forget that no theory is value-neutral or consequence-free. The ideological worldview we commit to doesn’t merely frame debate and narrow our search for knowledge. There is a real-world impact on public policy and human lives, such as when medical research and practice become racialized (with a dark past connecting race realism and genetic determinism, racial hygiene and eugenics, medical testing on minorities and the continuing impact on healthcare). All of this raises questions about whether germs are to be treated as invading enemies, whether war is an evolutionary trait, whether addiction is biological, whether intelligence is genetic, whether language is a module in the brain, and whether the ideology of individualism is human nature.

We have come to look to the body for answers to everything. And so we have come to project almost every issue onto the body. It’s too easy to shape scientific theory in such a way that confirms what we already believe and what is self-serving or simply what conforms to the social order. There is a long history of the intentional abuse and unintentional misuse of science. It’s impossible to separate biology from biopolitics.

Worse still, our imaginations are hobbled, making it all the more difficult to face the problems before us. And cultural biases have limited the search for greater knowledge. More than anything, we need to seriously develop our capacity to radically imagine new possibilities. That would require entirely shifting the context and approach of our thinking, maybe to the extent of altering our consciousness and our perception of the world. A paradigm change that mattered at all would be one that went far beyond abstract theory and was able to touch the core of our being. Our failure on this level may explain why so much scientific research has fallen into a rut.

* * *

I’ve been thinking about this for a long time. My thoughts here aren’t exactly new, but I wanted to share some new finds. It’s a topic worth returning to on occasion, as further research rolls in and the experts continue to debate. I’ll conclude with some more from Richard Harris’ Rigor Mortis. Below that are several earlier posts, a few relevant articles, and a bunch of interesting books (just because I love making long lists of books).

Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions

by Richard Harris
pp. 13-16

There has been no systematic attempt to measure the quality of biomedical science as a whole, but Leonard Freedman, who started a nonprofit called the Global Biological Standards Institute, teamed up with two economists to put a dollar figure on the problem in the United States. Extrapolating results from the few small studies that have attempted to quantify it, they estimated that 20 percent of studies have untrustworthy designs; about 25 percent use dubious ingredients, such as contaminated cells or antibodies that aren’t nearly as selective and accurate as scientists assume them to be; 8 percent involve poor lab technique; and 18 percent of the time, scientists mishandle their data analysis. In sum, Freedman figured that about half of all preclinical research isn’t trustworthy. He went on to calculate that untrustworthy papers are produced at the cost of $28 billion a year. This eye-popping estimate has raised more than a few skeptical eyebrows—and Freedman is the first to admit that the figure is soft, representing “a reasonable starting point for further debate.”

“To be clear, this does not imply that there was no return on that investment,” Freedman and his colleagues wrote. A lot of what they define as “not reproducible” really means that scientists who pick up a scientific paper won’t find enough information in it to run the experiment themselves. That’s a problem, to be sure, but hardly a disaster. The bigger problem is that the errors and missteps that Freedman highlights are, as Begley found, exceptionally common. And while scientists readily acknowledge that failure is part of the fabric of science, they are less likely to recognize just how often preventable errors taint studies.

“I don’t think anyone gets up in the morning and goes to work with the intention to do bad science or sloppy science,” said Malcolm Macleod at the University of Edinburgh. He has been writing and thinking about this problem for more than a decade. He started off wondering why almost no treatment for stroke has succeeded (with the exception of the drug tPA, which dissolves blood clots but doesn’t act on damaged nerve cells), despite many seemingly promising leads from animal studies. As he dug into this question, he came to a sobering conclusion. Unconscious bias among scientists arises every step of the way: in selecting the correct number of animals for a study, in deciding which results to include and which to simply toss aside, and in analyzing the final results. Each step of that process introduces considerable uncertainty. Macleod said that when you compound those sources of bias and error, only around 15 percent of published studies may be correct. In many cases, the reported effect may be real but considerably weaker than the study concludes.

Mostly these estimated failure rates are educated guesses. Only a few studies have tried to measure the magnitude of this problem directly. Scientists at the MD Anderson Cancer Center asked their colleagues whether they’d ever had trouble reproducing a study. Two-thirds of the senior investigators answered yes. Asked whether the differences were ever resolved, only about a third said they had been. “This finding is very alarming as scientific knowledge and advancement are based upon peer-reviewed publications, the cornerstone of access to ‘presumed’ knowledge,” the authors wrote when they published the survey findings.

The American Society for Cell Biology (ASCB) surveyed its members in 2014 and found that 71 percent of those who responded had at some point been unable to replicate a published result. Again, 40 percent of the time, the conflict was never resolved. Two-thirds of the time, the scientists suspected that the original finding had been a false positive or had been tainted by “a lack of expertise or rigor.” ASCB adds an important caveat: of the 8,000 members it surveyed, it heard back from 11 percent, so its numbers aren’t convincing. That said, Nature surveyed more than 1,500 scientists in the spring of 2016 and saw very similar results: more than 70 percent of those scientists had tried and failed to reproduce an experiment, and about half of those who responded agreed that there’s a “significant crisis” of reproducibility.

pp. 126-129

The batch effect is a stark reminder that, as biomedicine becomes more heavily reliant on massive data analysis, there are ever more ways to go astray. Analytical errors alone account for almost one in four irreproducible results in biomedicine, according to Leonard Freedman’s estimate. A large part of the problem is that biomedical researchers are often not well trained in statistics. Worse, researchers often follow the traditional practices of their fields, even when those practices are deeply problematic. For example, biomedical research has embraced a dubious method of determining whether results are likely to be true by relying far too heavily on a gauge of significance called the p-value (more about that soon). Potential help is often not far away: major universities have biostatisticians on staff who are usually aware of the common pitfalls in experiment design and subsequent analysis, but they are not enlisted as often as they could be. […]

A few years ago, he placed an informal wager of sorts with a few of his colleagues at other universities. He challenged them to come up with the most egregious examples of the batch effect. The “winning” examples would be published in a journal article. It was a first stab at determining how widespread this error is in the world of biomedicine. The batch effect turns out to be common.

Baggerly had a head start in this contest because he’d already exposed the problems with the OvaCheck test. But colleagues at Johns Hopkins were not to be outdone. Their entry involved a research paper that appeared to get at the very heart of a controversial issue: one purporting to show genetic differences between Asians and Caucasians. There’s a long, painful, failure-plagued history of people using biology to support prejudice, so modern studies of race and genetics meet with suspicion. The paper in question had been coauthored by a white man and an Asian woman (a married couple, as it happens), lowering the index of suspicion. Still, the evidence would need to be substantial. […]

The University of Washington team tracked down the details about the microarrays used in the experiment at Penn. They discovered that the data taken from the Caucasians had mostly been produced in 2003 and 2004, while the microarrays studying Asians had been produced in 2005 and 2006. That’s a red flag because microarrays vary from one manufacturing lot to the next, so results can differ from one day to the next, let alone from year to year. They then asked a basic question of all the genes on the chips (not just the ones that differed between Asians and Caucasians): Were they behaving the same in 2003–2004 as they were in 2005–2006? The answer was an emphatic no. In fact, the difference between years overwhelmed the apparent difference between races. The researchers wrote up a short analysis and sent it to Nature Genetics, concluding that the original findings were another instance of the batch effect.

These case studies became central examples in the research paper that Baggerly, Leek, and colleagues published in 2010, pointing out the perils of the batch effect. In that Nature Reviews Genetics paper, they conclude that these problems “are widespread and critical to address.”

“Every single assay we looked at, we could find examples where this problem was not only large but it could lead to clinically incorrect findings,” Baggerly told me. That means in many instances a patient’s health could be on the line if scientists rely on findings of this sort. “And these are not avoidable problems.” If you start out with data from different batches you can’t correct for that in the analysis. In biology today, researchers are inevitably trying to tease out a faint message from the cacophony of data, so the tests themselves must be tuned to pick up tiny changes. That also leaves them exquisitely sensitive to small perturbations—like the small differences between microarray chips or the air temperature and humidity when a mass spectrometer is running. Baggerly now routinely checks the dates when data are collected—and if cases and controls have been processed at different times, his suspicions quickly rise. It’s a simple and surprisingly powerful method for rooting out spurious results.
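
As an aside in my own words: the date check described above is simple enough to sketch. Below is a hedged illustration with a hypothetical sample sheet (all column names and dates are invented) showing how a confound between group and processing year jumps out the moment you tabulate it.

```python
import pandas as pd

# Hypothetical sample sheet: one row per microarray chip.
samples = pd.DataFrame({
    "group": ["case"] * 4 + ["control"] * 4,
    "run_date": pd.to_datetime([
        "2003-06-01", "2003-07-15", "2004-02-10", "2004-03-22",
        "2005-05-05", "2005-09-30", "2006-01-12", "2006-04-18",
    ]),
})

# If cases and controls were processed in different years, any apparent
# "group" difference is confounded with batch and deserves suspicion.
print(samples.groupby("group")["run_date"].agg(["min", "max"]))
print(pd.crosstab(samples["group"], samples["run_date"].dt.year))
```

If the crosstab comes out block-diagonal, as it does with these invented dates, the comparison is confounded before any biology enters the picture.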

p. 132

Over the years breathless headlines have celebrated scientists claiming to have found a gene linked to schizophrenia, obesity, depression, heart disease—you name it. These represent thousands of small-scale efforts in which labs went hunting for genes and thought they’d caught the big one. Most were dead wrong. John Ioannidis at Stanford set out in 2011 to review the vast sea of genomics papers. He and his colleagues looked at reported genetic links for obesity, depression, osteoporosis, coronary artery disease, high blood pressure, asthma, and other common conditions. He analyzed the flood of papers from the early days of genomics. “We’re talking tens of thousands of papers, and almost nothing survived” closer inspection. He says only 1.2 percent of the studies actually stood the test of time as truly positive results. The rest are what’s known in the business as false positives.

The field has come a long way since then. Ioannidis was among the scientists who pushed for more rigorous analytical approaches to genomics research. The formula for success was to insist on big studies, to make careful measurements, to use stringent statistics, and to have scientists in various labs collaborate with one another—“you know, doing things right, the way they should be done,” Ioannidis said. Under the best of these circumstances, several scientists go after exactly the same question in different labs. If they get the same results, that provides high confidence that they’re not chasing statistical ghosts. These improved standards for genomics research have largely taken hold, Ioannidis told me. “We went from an unreliable field to a highly reliable field.” He counts this as one of the great success stories in improving the reproducibility of biomedical science. Mostly. “There’s still tons of research being done the old fashioned way,” he lamented. He’s found that 70 percent of this substandard genomics work is taking place in China. The studies are being published in English-language journals, he said, “and almost all of them are wrong.”

pp. 182-183

Published retractions tend to be bland statements that some particular experiment was not reliable, but those notices often obscure the underlying reason. Arturo Casadevall at Johns Hopkins University and colleague Ferric Fang at the University of Washington dug into retractions and discovered a more disturbing truth: 70 percent of the retractions they studied resulted from bad behavior, not simply error. They also concluded that retractions are more common in high-profile journals—where scientists are most eager to publish in order to advance their careers. “We’re dealing with a real deep problem in the culture,” Casadevall said, “which is leading to significant degradation of the literature.” And even though retractions are on the rise, they are still rarities—only 0.02 percent of papers are retracted, Oransky estimates.

David Allison at the University of Alabama, Birmingham, and colleagues discovered just how hard it can be to get journals to set the record straight. Some scientists outright refuse to retract obviously wrong information, and journals may not insist. Allison and his colleagues sent letters to journals pointing out mistakes and asking for corrections. They were flabbergasted to find that some journals demanded payment—up to $2,100—just to publish their letter pointing out someone else’s error.

pp. 186-188

“Most people who work in science are working as hard as they can. They are working as long as they can in terms of the hours they are putting in,” said social scientist Brian Martinson. “They are often going beyond their own physical limits. And they are working as smart as they can. And so if you are doing all those things, what else can you do to get an edge, to get ahead, to be the person who crosses the finish line first? All you can do is cut corners. That’s the only option left you.” Martinson works at HealthPartners Institute, a nonprofit research agency in Minnesota. He has documented some of this behavior in anonymous surveys. Scientists rarely admit to outright misbehavior, but nearly a third of those he has surveyed admit to questionable practices such as dropping data that weakens a result, based on a “gut feeling,” or changing the design, methodology, or results of a study in response to pressures from a funding source. (Daniele Fanelli, now at Stanford University, came to a similar conclusion in a separate study.)

One of Martinson’s surveys found that 14 percent of scientists have observed serious misconduct such as fabrication or falsification, and 72 percent of scientists who responded said they were aware of less egregious behavior that falls into a category that universities label “questionable” and Martinson calls “detrimental.” In fact, almost half of the scientists acknowledged that they personally had used one or more of these practices in the past three years. And though he didn’t call these practices “questionable” or “detrimental” in his surveys, “I think people understand that they are admitting to something that they probably shouldn’t have done.” Martinson can’t directly link those reports to poor reproducibility in biomedicine. Nobody has funded a study exactly on that point. “But at the same time I think there’s plenty of social science theory, particularly coming out of social psychology, that tells us that if you set up a structure this way… it’s going to lead to bad behavior.”

Part of the problem boils down to an element of human nature that we develop as children and never let go of. Our notion of what’s “right” and “fair” doesn’t form in a vacuum. People look around and see how other people are behaving as a cue to their own behavior. If you perceive you have a fair shot, you’re less likely to bend the rules. “But if you feel the principles of distributive justice have been violated, you’ll say, ‘Screw it. Everybody cheats; I’m going to cheat too,’” Martinson said. If scientists perceive they are being treated unfairly, “they themselves are more likely to engage in less-than-ideal behavior. It’s that simple.” Scientists are smart, but that doesn’t exempt them from the rules that govern human behavior.

And once scientists start cutting corners, that practice has a natural tendency to spread throughout science. Martinson pointed to a paper arguing that sloppy labs actually outcompete good labs and gain an advantage. Paul Smaldino at the University of California, Merced, and Richard McElreath at the Max Planck Institute for Evolutionary Anthropology ran a model showing that labs that use quick-and-dirty practices will propagate more quickly than careful labs. The pressures of natural selection and evolution actually favor these labs because the volume of articles is rewarded over the quality of what gets published. Scientists who adopt these rapid-fire practices are more likely to succeed and to start new “progeny” labs that adopt the same dubious practices. “We term this process the natural selection of bad science to indicate that it requires no conscious strategizing nor cheating on the part of researchers,” Smaldino and McElreath wrote. This isn’t evolution in the strict biological sense, but they argue the same general principles apply as the culture of science evolves.

* * *

What do we inherit? And from whom?
Identically Different: A Scientist Changes His Mind
Race Realism, Social Constructs, and Genetics
Race Realism and Racialized Medicine
The Bouncing Basketball of Race Realism
To Control or Be Controlled
Flawed Scientific Research
Human Nature: Categories & Biases
Bias About Bias
Urban Weirdness
“Beyond that, there is only awe.”

Animal studies paint misleading picture by Janelle Weaver
Misleading mouse studies waste medical resources by Erika Check Hayden
A mouse’s house may ruin experiments by Sara Reardon
Curious mice need room to run by Laura Nelson
Male researchers stress out rodents by Alla Katsnelson
Bacteria bonanza found in remote Amazon village by Boer Deng
Case Closed: Apes Got Culture by Corey Binns
Study: Cat Parasite Affects Human Culture by Ker Than
Mind Control by Parasites by Bill Christensen

Human Biodiversity by Jonathan Marks
The Alternative Introduction to Biological Anthropology by Jonathan Marks
What it Means to be 98% Chimpanzee by Jonathan Marks
Tales of the Ex-Apes by Jonathan Marks
Why I Am Not a Scientist by Jonathan Marks
Is Science Racist? by Jonathan Marks
Biology Under the Influence by Lewontin & Levins
Biology as Ideology by Richard C. Lewontin
The Triple Helix by Richard Lewontin
Not In Our Genes by Lewontin & Rose
The Biopolitics of Race by Sokthan Yeng
The Brain’s Body by Victoria Pitts-Taylor
Misbehaving Science by Aaron Panofsky
The Flexible Phenotype by Piersma & Gils
Herding Hemingway’s Cats by Kat Arney
The Genome Factor by Conley & Fletcher
The Deeper Genome by John Parrington
Postgenomics by Richardson & Stevens
The Developing Genome by David S. Moore
The Epigenetics Revolution by Nessa Carey
Epigenetics by Richard C. Francis
Not In Your Genes by Oliver James
No Two Alike by Judith Rich Harris
Identically Different by Tim Spector
The Cultural Nature of Human Development by Barbara Rogoff
The Hidden Half of Nature by Montgomery & Biklé
10% Human by Alanna Collen
I Contain Multitudes by Ed Yong
The Mind-Gut Connection by Emeran Mayer
Bugs, Bowels, and Behavior by Arranga, Viadro, & Underwood
This Is Your Brain on Parasites by Kathleen McAuliffe
Infectious Behavior by Paul H. Patterson
Infectious Madness by Harriet A. Washington
Strange Contagion by Lee Daniel Kravetz
Childhood Interrupted by Beth Alison Maloney
Only One Chance by Philippe Grandjean
Why Zebras Don’t Get Ulcers by Robert M. Sapolsky
Resisting Reality by Sally Haslanger
Nature, Human Nature, and Human Difference by Justin E. H. Smith
Race, Monogamy, and Other Lies They Told You by Agustín Fuentes
The Invisible History of the Human Race by Christine Kenneally
Genetics and the Unsettled Past by Wailoo, Nelson, & Lee
The Mismeasure of Man by Stephen Jay Gould
Identity Politics and the New Genetics by Schramm, Skinner, & Rottenburg
The Material Gene by Kelly E. Happe
Fatal Invention by Dorothy Roberts
Inclusion by Steven Epstein
Black and Blue by John Hoberman
Race Decoded by Catherine Bliss
Breathing Race into the Machine by Lundy Braun
Race and the Genetic Revolution by Krimsky & Sloan
Race? by Tattersall & DeSalle
The Social Life of DNA by Alondra Nelson
Native American DNA by Kim TallBear
Making the Mexican Diabetic by Michael Montoya
Race in a Bottle by Jonathan Kahn
Uncertain Suffering by Carolyn Rouse
Sex Itself by Sarah S. Richardson
Building a Better Race by Wendy Kline
Choice and Coercion by Johanna Schoen
Sterilized by the State by Hansen & King
American Eugenics by Nancy Ordover
Eugenic Nation by Alexandra Minna Stern
A Century of Eugenics in America by Paul A. Lombardo
In the Name of Eugenics by Daniel J. Kevles
War Against the Weak by Edwin Black
Illiberal Reformers by Thomas C. Leonard
Defectives in the Land by Douglas C. Baynton
Framing the Moron by Gerald V. O’Brien
Imbeciles by Adam Cohen
Three Generations, No Imbeciles by Paul A. Lombardo
Defending the Master Race by Jonathan Peter Spiro
Hitler’s American Model by James Q. Whitman
Beyond Human Nature by Jesse J. Prinz
Beyond Nature and Culture by Philippe Descola
The Mirage of a Space between Nature and Nurture by Evelyn Fox Keller
Biocultural Creatures by Samantha Frost
Dynamics of Human Biocultural Diversity by Elisa J Sobo
Monoculture by F.S. Michaels
A Body Worth Defending by Ed Cohen
The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes
A Psychohistory of Metaphors by Brian J. McVeigh
The Master and His Emissary by Iain McGilchrist
From Bacteria to Bach and Back by Daniel C. Dennett
Consciousness by Susan Blackmore
The Meme Machine by Susan Blackmore
Chasing the Scream by Johann Hari
Don’t Sleep, There Are Snakes by Daniel L. Everett
Dark Matter of the Mind by Daniel L. Everett
Language by Daniel L. Everett
Linguistic Relativity by Caleb Everett
Numbers and the Making of Us by Caleb Everett
Linguistic Relativities by John Leavitt
The Language Myth by Vyvyan Evans
The Language Parallax by Paul Friedrich
Louder Than Words by Benjamin K. Bergen
Out of Our Heads by Alva Noë
Strange Tools by Alva Noë
The Embodied Mind by Varela, Thompson, & Rosch
Immaterial Bodies by Lisa Blackman
Radical Embodied Cognitive Science by Anthony Chemero
How Things Shape the Mind by Lambros Malafouris
Vibrant Matter by Jane Bennett
Entangled by Ian Hodder
How Forests Think by Eduardo Kohn
The New Science of the Mind by Mark Rowlands
Supersizing the Mind by Andy Clark
Living Systems by Jane Cull
The Systems View of Life by Capra & Luisi
Evolution in Four Dimensions by Jablonka & Lamb
Hyperobjects by Timothy Morton
Sync by Steven H. Strogatz
How Nature Works by Per Bak
Warless Societies and the Origin of War by Raymond C. Kelly
War, Peace, and Human Nature by Douglas P. Fry
Darwinism, War and History by Paul Crook