What is a gene?

Now: The Rest of the Genome
by Carl Zimmer

In this jungle of invading viruses, undead pseudogenes, shuffled exons and epigenetic marks, can the classical concept of the gene survive? It is an open question, one that Dr. Prohaska hopes to address at a meeting she is organizing at the Santa Fe Institute in New Mexico next March.

In the current issue of American Scientist, Dr. Gerstein and his former graduate student Michael Seringhaus argue that in order to define a gene, scientists must start with the RNA transcript and trace it back to the DNA. Whatever exons are used to make that transcript would constitute a gene. Dr. Prohaska argues that a gene should be the smallest unit underlying inherited traits. It may include not just a collection of exons, but the epigenetic marks on them that are inherited as well.

These new concepts are moving the gene away from a physical snippet of DNA and back to a more abstract definition. “It’s almost a recapture of what the term was originally meant to convey,” Dr. Gingeras said.

A hundred years after it was born, the gene is coming home.

Genome 2.0: Mountains Of New Data Are Challenging Old Views
by Patrick Barry

This complex interweaving of genes, transcripts, and regulation makes the net effect of a single mutation on an organism much more difficult to predict, Gingeras says.

More fundamentally, it muddies scientists’ conception of just what constitutes a gene. In the established definition, a gene is a discrete region of DNA that produces a single, identifiable protein in a cell. But the functioning of a protein often depends on a host of RNAs that control its activity. If a stretch of DNA known to be a protein-coding gene also produces regulatory RNAs essential for several other genes, is it somehow a part of all those other genes as well?

To make things even messier, the genetic code for a protein can be scattered far and wide around the genome. The ENCODE project revealed that about 90 percent of protein-coding genes possessed previously unknown coding fragments that were located far from the main gene, sometimes on other chromosomes. Many scientists now argue that this overlapping and dispersal of genes, along with the swelling ranks of functional RNAs, renders the standard gene concept of the central dogma obsolete.

Long Live The Gene

Offering a radical new conception of the genome, Gingeras proposes shifting the focus away from protein-coding genes. Instead, he suggests that the fundamental units of the genome could be defined as functional RNA transcripts.

Since some of these transcripts ferry code for proteins as dutiful mRNAs, this new perspective would encompass traditional genes. But it would also accommodate new classes of functional RNAs as they’re discovered, while avoiding the confusion caused by several overlapping genes laying claim to a single stretch of DNA. The emerging picture of the genome “definitely shifts the emphasis from genes to transcripts,” agrees Mark B. Gerstein, a bioinformaticist at Yale University.

Scientists’ definition of a gene has evolved several times since Gregor Mendel first deduced the idea in the 1860s from his work with pea plants. Now, about 50 years after its last major revision, the gene concept is once again being called into question.

Theory Suggests That All Genes Affect Every Complex Trait
by Veronique Greenwood

Over the years, however, what scientists might consider “a lot” in this context has quietly inflated. Last June, Pritchard and his Stanford colleagues Evan Boyle and Yang Li (now at the University of Chicago) published a paper about this in Cell that immediately sparked controversy, although it also had many people nodding in cautious agreement. The authors described what they called the “omnigenic” model of complex traits. Drawing on GWAS analyses of three diseases, they concluded that in the cell types that are relevant to a disease, it appears that not 15, not 100, but essentially all genes contribute to the condition. The authors suggested that for some traits, “multiple” loci could mean more than 100,000. […]

For most complex conditions and diseases, however, she thinks that the idea of a tiny coterie of identifiable core genes is a red herring because the effects might truly stem from disturbances at innumerable loci — and from the environment — working in concert. In a new paper out in Cell this week, Wray and her colleagues argue that the core gene idea amounts to an unwarranted assumption, and that researchers should simply let the experimental data about particular traits or conditions lead their thinking. (In their paper proposing omnigenics, Pritchard and his co-authors also asked whether the distinction between core and peripheral genes was useful and acknowledged that some diseases might not have them.)

Two Views of Present Christianity

First, everyone can be skeptical of science, including of course scientists themselves — after all, scientists are skeptics by profession. But skepticism pushed toward extreme denialism is mostly limited to the political right, with some scientific issues standing out more than others (e.g., climate change). And a general distrust of science is broadly and consistently found only among religious conservatives.

This is a point that was made by Chris Mooney in his research showing that there is no equivalent on the political left — as far as I know, not even among the religious left. For example, the smart idiot effect is primarily found on the political right, such that knowledge really does matter to those on the political left (research shows that liberals, unlike conservatives, are more likely to change their minds when they learn new information).

The role religion plays is in magnifying this difference between ideological tendencies.

Not All Skepticism Is Equal: Exploring the Ideological Antecedents of Science Acceptance and Rejection
by Bastiaan T. Rutjens, Robbie M. Sutton, & Romy van der Lee

To sum up the current findings, in four studies, both political conservatism and religiosity independently predict science skepticism and rejection. Climate skepticism was consistently predicted by political conservatism, vaccine skepticism was consistently predicted by religiosity, and GM food skepticism was consistently predicted by low faith in science and knowledge of science. General low faith in science and unwillingness to support science in turn were primarily associated with religiosity, in particular religious conservatism. Thus, different forms of science acceptance and rejection have different ideological roots, although the case could be made that these are generally grounded in conservatism.

Study: Conservatives’ Trust In Science At Record Low
by Eyder Peralta

While trust in science has remained flat for most Americans, a new study finds that for those who identify as conservatives trust in science has plummeted to its lowest level since 1974.

Gordon Gauchat, a sociology professor at the University of North Carolina at Chapel Hill, studied data from the General Social Survey and found that changes in confidence in science are not uniform across all groups.

“Moreover, conservatives clearly experienced group-specific declines in trust in science over the period,” Gauchat reports. “These declines appear to be long-term rather than abrupt.”

Just 35 percent of conservatives said they had a “great deal of trust in science” in 2010. That number was 48 percent in 1974. […]

Speaking to Gauchat, he said that what surprised him most about his study is that he ran statistical analysis on a host of different groups of people. He only saw significant change in conservatives and people who frequently attend church.

Gauchat said that even conservatives with bachelor’s degrees expressed distrust in science.

I asked him what could explain this and he offered two theories: First that science is now responsible for providing answers to questions that religion used to answer and secondly that conservatives seem to believe that science is now responsible for policy decisions. […]

Another bit of surprising news from the study, said Gauchat, is that trust in science for moderates has remained the same.

Here is the second point, which is more positive.

Religious conservatives are a shrinking and aging demographic, as liberal and left-wing views and labels continually take hold. So, as their numbers decrease and their influence lessens, we Americans might finally be able to have a rational public debate about science that leads to pragmatic implementation of scientific knowledge.

The old guard of reactionaries is losing its grip on power, even within the once strong bastions of right-wing religiosity. But like an injured and dying wild animal, it will make a lot of noise and can still be dangerous. The reactionaries will become more reactionary, as we have recently seen. This moment of conflict shall pass, as it always does. Like it or not, change will happen and indeed it already is happening.

There is one possible explanation for this change. Science denialism is a hard attitude to maintain over time, even with the backfire effect. It turns out that even conservatives do change their opinions based on expert knowledge, even if it takes longer. So, despite the evidence showing no short-term change with policies, we should expect that a political shift will continue to happen across the generations.

Knowledge does matter. But it requires immense repetition and patience. Also, keep in mind that, as knowledge matters even more for the political left, the power of knowledge will increase as the general population moves further left. This might be related to the fact that the average American is increasingly better educated — admittedly, Americans aren’t all that well educated in comparison to some countries, but in comparison to the state of education in the past there has been a dramatic improvement.

However you wish to explain it, the religious and non-religious alike are becoming more liberal and progressive, even more open to social democracy and democratic socialism. There is no evidence that this shift has stopped or reversed. Conservatism will remain a movement in the future, but it will probably look more like the present Democratic Party than the present Republican Party. As the political parties have gone far right, the American public has moved so far left as to be outside of the mainstream spectrum of partisan politics.

We are beginning to see the results.

Pro-Life, Pro-Left
by Molly Worthen
(see Evangelicals Turn Left)

70 percent of evangelicals now tell pollsters they don’t identify with the religious right, and younger evangelicals often have more enthusiasm for social justice than for the culture wars

Trump Is Bringing Progressive Protestants Back to Church
by Emma Green

In the wake of Donald Trump’s election, some conservative Christians have been reckoning with feelings of alienation from their peers, who generally voted for Trump in strong numbers. But at least some progressive Protestant churches are experiencing the opposite effect: People have been returning to the pews.

“The Sunday after the election was the size of an average Palm Sunday,” wrote Eric Folkerth, the senior pastor at Dallas’s Northaven United Methodist Church, in an email. More than 30 first-time visitors signed in that day, “which is more than double the average [across] three weeks of a typical year,” he added. “I sincerely don’t recall another time when it feels like there has been a sustained desire on people’s part to be together with other progressive Christians.”

Anecdotal evidence suggests other liberal churches from a variety of denominations have been experiencing a similar spike over the past month, with their higher-than-usual levels of attendance staying relatively constant for several weeks. It’s not at all clear that the Trump bump, as the writer Diana Butler Bass termed it in a conversation with me, will be sustained beyond the first few months of the new administration. But it suggests that some progressives are searching for a moral vocabulary in grappling with the president-elect—including ways of thinking about community that don’t have to do with electoral politics. […]

Even if Trump doesn’t bring about a membership revolution in the American mainline, which has been steadily shrinking for years, some of the conversations these Protestant pastors reported were fascinating—and suggest that this political environment might be theologically, morally, and intellectually generative for progressive religious traditions.

Southern Baptists Call Off the Culture War
by Jonathan Merritt

Indeed, disentangling the SBC from the GOP is central to the denomination’s makeover. For example, a motion to defund the ERLC in response to the agency’s full-throated opposition to Donald Trump failed miserably.

In years past, Republican politicians have spoken to messengers at the annual meeting. In 1991, President George H.W. Bush addressed the group, Vice President Dan Quayle spoke in 1992, and President George W. Bush did so in 2001 and 2002 (when my father, James Merritt, was SBC president). Neither President Bill Clinton nor President Barack Obama were invited to speak to Southern Baptists during their terms. Though Southern Baptists claim not to be affiliated with either major party, it’s not difficult to discern the pattern at play.

Vice President Mike Pence addressed the convention this year, which may seem like the same old song to outsiders. But there was widespread resistance to Pence’s participation. A motion to disinvite the vice president was proposed and debated, but was ultimately voted down. During his address, which hit some notes more typical of a campaign speech, a few Southern Baptists left the room out of protest. Others criticized the move to reporters or spoke out on Twitter. The newly elected Greear tweeted that the invitation “sent a terribly mixed signal” and reminded his fellow Baptists that “commissioned missionaries, not political platforms, are what we do.”

Though most Southern Baptists remain politically conservative, it seems that some are now less willing to have their denomination serve as a handmaiden to the GOP, especially in the current political moment. They appear to recognize that tethering themselves to Donald Trump—a thrice-married man who has bragged about committing adultery, lies with impunity, allegedly paid hush money to a porn star with whom he had an affair, and says he has never asked God for forgiveness—places the moral credibility of the Southern Baptist Convention at risk.

By elevating women and distancing themselves from partisan engagement, the members of the SBC appear to be signaling their determination to head in a different direction, out of a mix of pragmatism and principle.

For more than a decade, the denomination has been experiencing precipitous decline by almost every metric. Baptisms are at a 70-year low, and Sunday attendance is at a 20-year low. Southern Baptist churches lost almost 80,000 members from 2016 to 2017 and they have hemorrhaged a whopping one million members since 2003. For years, Southern Baptists have criticized more liberal denominations for their declines, but their own trends are now running parallel. The next crop of leaders knows something must be done.

“Southern Baptists thought that if they became more conservative, their growth would continue unabated. But they couldn’t outrun the demographics and hold the decline at bay,” said Leonard. “Classic fundamentalist old-guard churches are either dead or dying, and the younger generation is realizing that the old way of articulating the gospel is turning away more people than it is attracting.”

Regardless of their motivations, this shift away from a more culturally strident and politically partisan stance is significant.

As the late pastor Adrian Rogers said at the 2002 SBC annual meeting in St. Louis, “As the West goes, so goes the world. As America goes, so goes the West. As Christianity goes, so goes America. As evangelicals go, so goes Christianity. As Southern Baptists go, so go evangelicals.”

Rogers may have had an inflated sense of the denomination’s importance, but the fact remains that what happens in the SBC often ripples across culture. In Trump’s America, where the religious right wields outsized influence, the shifts among Southern Baptists could be a harbinger of broader change among evangelicals.

The divide between the religious and the rest of the population is smaller than it seems. That is because the media likes to play up conflict. To demonstrate the actual views of the religious in the United States, consider a hot-button issue like abortion:

  • “As an example of the complexity, data shows that there isn’t even an anti-abortion consensus among Christians, only one Christian demographic showing a strong majority [White Evangelical Protestants].” (Claims of US Becoming Pro-Life)
  • “[A]long with most doctors, most church-going Catholics support public option and so are in agreement with most Americans in general. Even more interesting is the fact that the church-going Catholics even support a national plan that includes funding for abortion.” (Health Reform & Public Option (polls & other info))
  • “[M]ost Americans identify as Christian and have done so for generations. Yet most Americans are pro-choice, supporting abortion in most or all situations, even as most Americans also support there being strong and clear regulations for where abortions shouldn’t be allowed. It’s complicated, specifically among Christians. The vast majority (70%) seeking abortions considered themselves Christians, including over 50% who attend church regularly having kept their abortions secret from their church community and 40% feeling that churches are not equipped to help them make decisions about unwanted pregnancies.” (American Christianity: History, Politics, & Social Issues)

Whatever ideological and political conflicts we might have in the future, they won’t be a continuation of the culture wars we have known up to this point. Nor will they likely conform to the battle of ideologies seen during the Cold War. The entire frame of debate will be different and, barring unforeseen events, most likely far to the left.

* * *

As an additional point, there is another shift that is happening. There is a reason why there seems to be a growing antagonism, even though it’s not ideological per se.

The fact of the matter is that “religious nones” (atheists, agnostics, the religiously non-identifying, the religiously indifferent, etc.) are growing faster than any religious group. Mainline Christians have been losing membership for decades and now so are Evangelicals. It is getting to the point where young Americans are evenly split between the religious and the non-religious. That means the religious majority will quickly disappear.

This isn’t motivated by overt ideology, or at least it doesn’t seem to be, since the same shift is happening in many other countries as well. But it puts pressure on ideology and can get expressed or manipulated through ideological rhetoric. So, we might see increasing conflict between ideologies, maybe in new forms that could create a new left vs. right.

Younger people are less religious than older ones in many countries, especially in the U.S. and Europe
by Stephanie Kramer & Dalia Fahmy

In the U.S., the age gap is considerable: 43% of people under age 40 say religion is very important to them, compared with 60% of adults ages 40 and over.

If nothing else, this contributes to a generational conflict. There is a reason much of right-wing media has an audience that is on average older. This is why many older Americans are still fighting the culture wars, if only in their own minds.

But Americans in general, including most young Evangelicals, have lost interest in politicized religion. Christianity simply won’t play the same kind of central role in coming decades. Religion will remain an issue, but even Republicans will have to deal with the fact that even the young on the political right are less religious and less socially conservative.

Are Wrens Smarter Than Racists?

Race realists and racial supremacists have many odd notions. For one, they believe humans are separate species, despite all the evidence to the contrary (e.g., unusually low genetic diversity compared to similar species; two random humans are more likely to be genetically similar than two random chimpanzees).

But an even stranger belief is that humans, despite being such a highly social species, are incapable of cooperating with other humans who are perceived as different based on modern social constructions of ‘race’. Yet, even ignoring the fact that all humans are of the same species, numerous other species cooperate all the time across large genetic divides. This includes the development of close relationships between individuals of separate species.

So, why do racists believe that ‘white’ Americans and ‘black’ Americans must be treated as separate species and be inevitably segregated into different communities and countries? That makes particularly little sense considering that most so-called African-Americans are significantly of European ancestry, not to mention that a surprising number of supposed European-Americans in the South have non-European genetics (African, Native American, etc.).

Wrens don’t let racism get in the way of promoting their own survival through befriending other species who share their territory. Do human racists think they have less cognitive capacity than wrens? If that is their honest assessment of their own abilities, that is fine. But why do they assume everyone else is as deficient as they are?

* * *

Birds from different species recognize each other and cooperate
by Matt Wood, University of Chicago

 

Cooperation among different species of birds is common. Some birds build their nests near those of larger, more aggressive species to deter predators, and flocks of mixed species forage for food and defend territories together in alliances that can last for years. In most cases, though, these partnerships are not between specific individuals of the other species—any bird from the other species will do.

But in a new study published in the journal Behavioral Ecology, scientists from the University of Chicago and University of Nebraska show how two different species of Australian fairy-wrens not only recognize individual birds from other species, but also form long-term partnerships that help them forage and defend their shared space as a group.

“Finding that these two species associate was not surprising, as mixed species flocks of birds are observed all over the world,” said Allison Johnson, PhD, a postdoctoral scholar at the University of Nebraska who conducted the study as part of her dissertation research at UChicago. “But when we realized they were sharing territories with specific individuals and responding aggressively only to unknown individuals, we knew this was really unique. It completely changed our research and we knew we had to investigate it.”

Variegated fairy-wrens and splendid fairy-wrens are two small songbirds that live in Australia. The males of each species have striking, bright blue feathers that make them popular with bird watchers. Their behavior also makes them an appealing subject for biologists. Both species feed on insects, live in large family groups, and breed during the same time of year. They are also non-migratory, meaning they live in one area for their entire lives, occupying the same eucalyptus scrublands that provide plenty of bushes and trees for cover.

When these territories overlap, the two species interact with each other. They forage together, travel together, and seem to be aware of what the other species is doing. They also help each other defend their territory from rivals. Variegated fairy-wrens will defend their shared territory from both variegated and splendid outsiders; splendid fairy-wrens will do the same, while fending off unfamiliar birds from both species.

“Splendid and variegated fairy-wrens are so similar in their habitat preferences and behavior, we would expect them to act as competitors. Instead, we’ve found stable, positive relationships between individuals of the two species,” said Christina Masco, PhD, a graduate student at UChicago and a co-author on the new paper.

Epigenetic Memory and the Mind

Epigenetics is fascinating, even bizarre by conventional thought. Some worry that it’s another variety of determinism, just not located in the genes. I have other worries, if not that particular one.

How epigenetics works is that a gene gets switched on or off. The key point is that it’s not permanently set: some later incident, condition, or behavior can switch it back the other way again. Genes in your body are switched on and off throughout your lifetime. But presumably, if no significant changes occur in one’s life, some epigenetic expressions remain set for one’s entire life.

Where it gets fascinating is that epigenetic changes have been shown to be passed on across multiple generations, and no one is certain how many generations. In mice, the effect can extend upwards of seven generations or so, as I recall. Humans, of course, haven’t been studied for that many generations, but present evidence indicates it operates similarly in humans.

Potentially, all of the major tragedies in modern history (the violence of colonialism all around the world, major famines in places like Ireland and China, genocides in places like the United States and Rwanda, international conflicts like the world wars, etc.) fall within the range of epigenetics. It’s been shown, for example, that famine switches genes for a few generations in a way that causes increased fat retention, which in the modern world means higher obesity rates.

I’m not sure what the precise mechanism is that causes genes to switch on and off (e.g., precisely how starvation gets imprinted on biology and stays set that way for multiple generations). All I know is that it has to do with the proteins that encase the DNA. The main interest is that, once we do understand the mechanism, we will be able to control the process. This might be a way of preventing or managing numerous physical and psychiatric health conditions. So, it really will mean the opposite of determinism.
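To make the switching-and-inheritance picture above concrete, here is a minimal toy sketch in Python. It is my own illustration, not drawn from any of the research discussed, and every probability in it is a made-up assumption: a stressor such as famine flips a silencing mark on, offspring inherit the mark, and in each generation there is some chance the mark gets reset.

import random

# Toy model of an epigenetic switch: purely illustrative, with assumed numbers.
# A "mark" silences a gene; a famine generation can set the mark; each offspring
# inherits the mark but may have it reset (erased) with some probability.

FAMINE_SETS_MARK = 0.9  # chance a famine generation acquires the mark (assumption)
RESET_PER_GEN = 0.3     # chance the mark is erased in each offspring (assumption)

def next_generation(marked: bool, famine: bool) -> bool:
    """Return whether the offspring carries the silencing mark."""
    if famine and random.random() < FAMINE_SETS_MARK:
        return True   # stressor flips the switch on
    if marked and random.random() > RESET_PER_GEN:
        return True   # mark inherited without being reset
    return False      # switch is off (or has fallen back off)

def simulate(generations: int = 7, famine_generation: int = 0, trials: int = 10_000):
    """Fraction of lineages still carrying the mark in each generation."""
    carrying = [0] * generations
    for _ in range(trials):
        marked = False
        for g in range(generations):
            marked = next_generation(marked, famine=(g == famine_generation))
            carrying[g] += marked
    return [count / trials for count in carrying]

if __name__ == "__main__":
    for gen, frac in enumerate(simulate()):
        print(f"generation {gen}: {frac:.2f} of lineages carry the mark")

Under these assumed rates, the fraction of lineages carrying the mark decays over a handful of generations unless the stressor recurs, which matches the rough multi-generational fade described above; none of this models the actual chemistry of the proteins encasing DNA.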

This research reminds me of other scientific and anecdotal evidence. Consider the recipients of organ transplants, blood and bone marrow transfusions, and microbiome transference. This involves the exchange of cells from one body to another. The results have shown changes in mood, behavior, biological functioning, etc.

For example, introducing a new microbiome can make a skinny rodent fat or a fat rodent skinny. But shifts in fairly specific memories have also been observed, such as an organ transplant recipient craving something the organ donor craved. Furthermore, research has shown that genetic material can jump from the introduced cells to the cells already present, which is how a baby can potentially end up with the cells of two fathers if a previous pregnancy was by a different father; in fact, it’s rather common for people to carry DNA from multiple sources in their bodies.

It intuitively makes sense that epigenetics would be behind memory. It’s easy to argue that no other function in the body has this kind and degree of capacity. And that possibility would blow up our ideas of the human mind. In that case, some element of memory would get passed on across multiple generations, explaining certain similarities seen in families and in larger populations with shared epigenetic backgrounds.

This gives new meaning to the theories of both the embodied mind and the extended mind. There might also be some interesting implications for the bundle theory of mind. I wonder too about something like enactivism, which concerns the human mind’s relation to the world. Of course, there are obvious connections between this specific research and neurological plasticity, and between epigenetics more generally and intergenerational trauma.

So, it wouldn’t only be the symptoms of trauma or the benefits of privilege (or whatever other conditions shape individuals, generational cohorts, and sub-populations) that get inherited but some of the memory itself. This puts bodily memory in a much larger context, maybe even something along the lines of Jungian thought, in terms of collective memory and archetypes (depending on how long-lasting some epigenetic effects might be). Also, much of what people think of as cultural, ethnic, and racial differences might simply be epigenetics. This would puncture an even larger hole in genetic determinism and race realism. Unlike genetics, epigenetics can be changed.

Our understanding of so much is going to be completely altered. What once seemed crazy or unthinkable will become the new dominant paradigm. This is both promising and scary. Imagine what authoritarian governments could do with this scientific knowledge. The Nazis could only dream of creating a superman. But between genetic engineering and epigenetic manipulations, the possibilities are wide open. And right now, we have no clue what we are doing. The early experimentation, specifically research done covertly, is going to be of the mad scientist variety.

These interesting times are going to get way more interesting.

* * *

Could Memory Traces Exist in Cell Bodies?
by Susan Cosier

The finding is surprising because it suggests that a nerve cell body “knows” how many synapses it is supposed to form, meaning it is encoding a crucial part of memory. The researchers also ran a similar experiment on live sea slugs, in which they found that a long-term memory could be totally erased (as gauged by its synapses being destroyed) and then re-formed with only a small reminder stimulus—again suggesting that some information was being stored in a neuron’s body.

Synapses may be like a concert pianist’s fingers, explains principal investigator David Glanzman, a neurologist at U.C.L.A. Even if Chopin did not have his fingers, he would still know how to play his sonatas. “This is a radical idea, and I don’t deny it: memory really isn’t stored in synapses,” Glanzman says.

Other memory experts are intrigued by the findings but cautious about interpreting the results. Even if neurons retain information about how many synapses to form, it is unclear how the cells could know where to put the synapses or how strong they should be—which are crucial components of memory storage. Yet the work indeed suggests that synapses might not be set in stone as they encode memory: they may wither and re-form as a memory waxes and wanes. “The results are really just kind of surprising,” says Todd Sacktor, a neurologist at SUNY Downstate Medical Center. “It has always been this assumption that it’s the same synapses that are storing the memory,” he says. “And the essence of what [Glanzman] is saying is that it’s far more dynamic.”

Memory Transferred Between Snails, Challenging Standard Theory of How the Brain Remembers
by Usha Lee McFarling

Glanzman’s experiments—funded by the National Institutes of Health and the National Science Foundation—involved giving mild electrical shocks to the marine snail Aplysia californica. Shocked snails learn to withdraw their delicate siphons and gills for nearly a minute as a defense when they subsequently receive a weak touch; snails that have not been shocked withdraw only briefly.

The researchers extracted RNA from the nervous systems of snails that had been shocked and injected the material into unshocked snails. RNA’s primary role is to serve as a messenger inside cells, carrying protein-making instructions from its cousin DNA. But when this RNA was injected, these naive snails withdrew their siphons for extended periods of time after a soft touch. Control snails that received injections of RNA from snails that had not received shocks did not withdraw their siphons for as long.

“It’s as if we transferred a memory,” Glanzman said.

Glanzman’s group went further, showing that Aplysia sensory neurons in Petri dishes were more excitable, as they tend to be after being shocked, if they were exposed to RNA from shocked snails. Exposure to RNA from snails that had never been shocked did not cause the cells to become more excitable.

The results, said Glanzman, suggest that memories may be stored within the nucleus of neurons, where RNA is synthesized and can act on DNA to turn genes on and off. He said he thought memory storage involved these epigenetic changes—changes in the activity of genes and not in the DNA sequences that make up those genes—that are mediated by RNA.

This view challenges the widely held notion that memories are stored by enhancing synaptic connections between neurons. Rather, Glanzman sees synaptic changes that occur during memory formation as flowing from the information that the RNA is carrying.

Stress Is Real, As Are The Symptoms

I was reading a book, Strange Contagion by Lee Daniel Kravetz, in which he dismisses complaints about wind turbines (e.g., low-frequency sounds). It’s actually a great read, even as I disagree with elements of it, such as his entirely overlooking inequality as a cause of strange contagions (public hysteria, suicide clusters, etc.) — an issue explored in depth by Keith Payne in The Broken Ladder and briefly touched upon by Kurt Andersen in Fantasyland.

By the way, one might note that where wind farms are located, as with where toxic dumps are located, has everything to do with economic, social, and political disparities — specifically as exacerbated by poverty, economic segregation, residential isolation, failing local economies, dying small towns, inadequate healthcare, underfunded or non-existent public services, limited coverage in the corporate media, underrepresentation in positions of power and authority, etc. (many of the things that get dismissed in defense of the establishment and status quo). One might also note that the dismissiveness toward inequality problems strongly resembles the dismissiveness toward wind turbine syndrome or wind farm syndrome.

About wind turbines, Kravetz details the claims against them, writing that “People closest to the four-hundred-foot-tall turrets receive more than just electricity. The turbines interrupt their sleep patterns. They also generate faint ringing in their ears. Emissions cause pounding migraine headaches. The motion of the vanes also creates a shadow flicker that triggers disorientation, vertigo, and nausea” (Kindle Locations 959-961). But he goes on to assert that the explanation of cause is entirely without scientific substantiation, even as the symptoms are real:

“Grievances against wind farms are not exclusive to DeKalb County, with a perplexing illness dogging many a wind turbine project. Similar complaints have surfaced in Canada, the UK, Italy, and various US cities like Falmouth, Massachusetts. In 2009 the Connecticut pediatrician Nina Pierpont offered an explanation. Wind turbines, she argued, produce low-frequency noises that induce disruptions in the inner ear and lead to an illness she calls wind turbine syndrome. Her evidence, now largely discredited for sample size errors, a lack of a control group, and no peer review, seemed to point to infrasound coming off of the wind farms. Since then more than a dozen scientific reviews have firmly established that wind turbines pose no unique health risks and are fundamentally safe. It doesn’t seem to matter to the residents of DeKalb County, whose symptoms are quite real.” (Kindle Locations 961-968)

He concludes that it is “wind farm hysteria”. It is one example he uses in exploring the larger issue of what he calls strange contagions, partly related to Richard Dawkins’s theory of memes, although he considers it more broadly to include the spread of not just thoughts and ideas but emotions and behaviors. Indeed, he makes a strong overall case in his book, and I’m largely persuaded; or rather, it fits the evidence I’ve previously seen elsewhere. But sometimes his focus is too narrow and conventional. There are valid reasons to consider wind turbines as potentially problematic for human health, despite our not having precisely ascertained and absolutely proven the path of causation.

Stranger Dimensions put out an article by Rob Schwarz, Infrasound: The Fear Frequency, that is directly relevant to the issue. He writes that, “Infrasound is sound below 20 Hz, lower than humans can perceive. But just because we don’t consciously hear it, that doesn’t mean we don’t respond to it; in certain individuals, low-frequency sound can induce feelings of fear or dread or even depression. […] In humans, infrasound can cause a number of strange, seemingly inexplicable effects: headaches, nausea, night terrors and sleep disorders.”

Keep in mind that wind turbines do emit infrasound. The debate has been over whether infrasound can cause ‘disease’ or mere irritation and annoyance. That framing is based on a simplistic and uninformed understanding of stress. A wide array of research has already proven beyond any doubt that continuous stress is a major contributing factor to numerous physiological and psychological health conditions, and of course this relates to the high levels of stress in high-inequality societies. In fact, as research shows, ongoing background stress can be more traumatizing over the long term than brief traumatizing events. Trauma is simply unresolved stress and, when there are multiple stressors in one’s environment, there is no way to either confront it or escape it. Only some of the population suffers from severe stress, because of either a single stressor or multiple stressors, but stress in general has vastly increased — as Kravetz states in a straightforward manner: “Americans, meanwhile, continue to experience more stress than ever, with one study I read citing an increase of more than 1,000 percent in the past three decades” (Kindle Locations 2194-2195).

The question isn’t whether stress is problematic but how stressful continuous low-frequency sound is, specifically when combined with other stressors, as is the case for many disadvantaged populations near wind farms — plus, besides infrasound, wind turbines are obtrusive, with blinking lights, shadow flicker, and rhythmic pressure pulses on buildings. No research so far has studied the direct influence of long-term, even if low-level, exposure to multiple and often simultaneous stressors, and so there is no way for anyone to honestly conclude that wind turbines aren’t significantly contributing to health concerns, at least for those already sensitized or otherwise in a state of distress (which would describe many rural residents near wind farms, considering that their communities are dying and the young are leaving, contributing to a loss of the social support that otherwise would lessen the impact of stress). Even the doubters admit it has been shown that wind turbines cause annoyance and stress, the debate being over how much and with what impact. Still, that isn’t to argue against wind power and for old energy industries like coal; rather, wind energy technology could perhaps be improved, which would ease our transition to alternative energy.

It does make one wonder what we don’t yet understand about how factors that are not easily observed can have significant influence over us. Human senses are severely limited, and so we are largely unaware of the world around us, even when it is causing us harm. The human senses can’t detect tiny parasites, toxins, climate change, etc. And the human tendency is to deny the unknown, even when it is obvious something is going on. It is particularly easy for those not impacted to dismiss those impacted, as when middle-to-upper-class citizens, corporate media, government agencies, and politicians ignore the severe lead toxicity rates for mostly poor minorities in old industrial areas. Considering that, maybe scientists who do research and politicians who pass laws should be required to live for several years surrounded by lead toxicity and wind turbines. Then maybe the symptoms would seem more real and we might finally find a way to help those harmed, if only to reduce some of the risk factors, including stress.

The article by Schwarz went beyond this, and in doing so it went in an interesting direction. He explains that, “If infrasound hits at just the right strength and frequency, it can resonate with human eyes, causing them to vibrate. This can lead to distorted vision and the possibility of “ghost” sightings. Or, at least, what some would call ghost sightings. Infrasound may also cause a person to “feel” that there’s an entity in the room with him or her, accompanied by that aforementioned sense of dread.” He describes an incident in a laboratory that came to have a reputation for feeling haunted, the oppressive atmosphere having disappeared when a particular fan was turned off. It turned out the fan had been vibrating at just the right frequency to produce a particular low-frequency sound. Now, that is fascinating.

This reminds me of Fortean observations. It’s been noted by a number of paranormal and UFO researchers, such as John Keel, that various odd experiences tend to happen in the same places. UFOs are often repeatedly sighted by different people in the same locations, and often at those same locations there will be bigfoot sightings and accounts of other unusual happenings. Jacques Vallee also noted that certain Fortean incidents tend to follow the same pattern, such as numerous descriptions of UFO abductions matching the folktales about fairy abductions and the anthropological literature on shamanistic initiations.

Or consider what are sometimes called fairy lights. No one knows what causes them, but even scientists have observed them. There are many sites that are specifically known for their fairy lights. My oldest brother went to one of those places, and indeed he saw the same thing that thousands of others had seen. The weird thing about these balls of light is that it is hard to discern exactly how far away they are, as they go from seeming close to seeming far. It’s possible that there is nothing actually there and instead it is some frequency affecting the brain.

Maybe there is a diversity of human experiences that have common mechanisms or involve overlapping factors. In that case, we simply haven’t figured them out yet. But improved research methods might allow us to look more closely at typically ignored and previously unknown causes. Not only might this lead to better lives for many but also to a greater understanding of the human condition.

Fantasyland, An American Tradition

“The American experiment, the original embodiment of the great Enlightenment idea of intellectual freedom, every individual free to believe anything she wishes, has metastasized out of control. From the start, our ultra-individualism was attached to epic dreams, sometimes epic fantasies—every American one of God’s chosen people building a custom-made utopia, each of us free to reinvent himself by imagination and will. In America those more exciting parts of the Enlightenment idea have swamped the sober, rational, empirical parts.”
~ Kurt Andersen, Fantasyland

It’s hard to have public debate in the United States for a number of reasons. The most basic reason is that Americans are severely uninformed and disinformed. We also tend to lack a larger context for knowledge. Historical amnesia is rampant and scientific literacy is limited, exacerbated by centuries-old strains of anti-intellectualism and dogmatic idealism, hyper-individualism and sectarian groupthink, public distrust and authoritarian demagoguery.

This doesn’t seem as common in other countries. Part of this is that Americans are less aware of and informed about other countries than the citizens of other countries are about the United States. Living anywhere else in the world, it is nearly impossible not to know in great detail about the United States and other Western powers, as the entire world cannot escape influences that cast a long shadow of colonial imperialism, neoliberal globalization, transnational corporations, mass media, monocultural dominance, soft power, international propaganda campaigns during the Cold War, military interventionism, etc. The rest of the world can’t afford the luxury of ignorance that Americans enjoy.

Earlier last century, when the United States was a rising global superpower competing against other rising global superpowers, the US was known for having one of the better education systems in the world. International competition motivated us to invest in education. Now we are famous for how pathetically recent generations of students compare to those in many other developed countries. But even the brief moment of seeming American greatness following World War II might have had more to do with the wide-scale decimation of Europe, a temporary lowering of other developed countries rather than a vast improvement in the United States.

There has also been a failure of big-biz mass media to inform the public, and the continuing oligopolistic consolidation of corporate media into a few hands has not allowed a competitive free market to force corporations to offer something better. On top of that, Americans are one of the most propagandized and indoctrinated populations on the planet, with only a few comparable countries such as China and Russia exceeding us in this area.

See how the near unanimity of the American mass media was able, by way of beating the war drum, to change majority public opinion from being against the Iraq War to being in support of it. It just so happens that the parent companies of most of the corporate media, with ties to the main political parties and the military-industrial complex, profit immensely from the endless wars of the war state.

Corporate media is in the business of making money which means selling a product. In late stage capitalism, all of media is entertainment and news media is infotainment. Even the viewers are sold as a product to advertisers. There is no profit in offering a public service to inform the citizenry and create the conditions for informed public debate. As part of consumerist society, we consume as we are consumed by endless fantasies, just-so stories, comforting lies, simplistic narratives, and political spectacle.

This is a dark truth that should concern and scare Americans. But that would require them to be informed first. There is the rub.

Every public debate in the United States begins with mainstream framing. It requires hours of interacting with a typical American even to maybe get them to acknowledge their lack of knowledge, assuming they have the intellectual humility that makes that likely. Americans are so uninformed and misinformed that they don’t realize they are ignorant, so indoctrinated that they don’t realize how much their minds are manipulated and saturated in bullshit (I speak from the expertise of being an American who has been woefully ignorant for most of my life). To simply get to the level of knowledge where debate is even within the realm of possibility is itself almost an impossible task. To say it is frustrating is an extreme understatement.

Consider how most Americans know that tough-on-crime laws, stop-and-frisk, broken-windows policies, heavy policing, and mass incarceration were the cause of decreased crime. How do they know? Because decades of political rhetoric and media narratives have told them so. Just as various authority figures in government and media told them, implied, or remained silent while others pushed the lies that the 9/11 terrorist attack was somehow connected to Iraq, which supposedly had weapons of mass destruction, even though US intelligence agencies and foreign governments at the time knew these were lies.

Sure, you can look to alternative media for regular reporting of different information that undermines and disproves these beliefs. But few Americans get much if any of their news from alternative media. There have been at least hundreds of high-quality scientific studies, careful analyses, and scholarly books that have come out since the violent crime decline began. This information, however, is almost entirely unknown to the average American citizen and, one suspects, largely unknown to the average American mainstream news reporter, media personality, talking head, pundit, think tank hack, and politician.

That isn’t to say there isn’t ignorance found in other populations as well. Having been in the online world since the early naughts, I’ve met and talked with many people from other countries and admittedly some of them are less than perfectly informed. Still, the level of ignorance in the United States is unique, at least in the Western world.

That much can’t be doubted. Other serious thinkers might have differing explanations for why the US has diverged so greatly from much of the rest of the world, from its level of education to its rate of violence. But one way or another, it needs to be explained in the hope of finding a remedy. Sadly, even if we could agree on a solution, those in power benefit too greatly from the ongoing state of an easily manipulated citizenry that lacks knowledge and critical thinking skills.

This isn’t merely an attack on low-information voters and right-wing nut jobs. Even in dealing with highly educated Americans among the liberal class, I rarely come across someone who is deeply and widely informed across various major topics of public concern.

American society is highly insular. We Americans are not only disconnected from the rest of the world but disconnected from each other. And so we have little sense of what is going on outside of the narrow constraints of our neighborhoods, communities, workplaces, social networks, and echo chambers. The United States is psychologically and geographically segregated into separate reality tunnel enclaves defined by region and residency, education and class, race and religion, politics and media.

It’s because we so rarely step outside of our respective worlds that we so rarely realize how little we know and how much of what we think we know is not true. Most of us live in neighborhoods, go to churches and stores, attend or send our kids to schools, work and socialize with people who are exactly like ourselves. They share our beliefs and values, our talking points and political persuasion, our biases and prejudices, our social and class position. We are hermetically sealed within our safe walled-in social identities. Nothing can reach us, threaten us, or change us.

That is until something happens like Donald Trump being elected. Then there is a panic about what has become of America in this post-fact age. The sad reality, however, is America has always been this way. It’s just finally getting to a point where it’s harder to ignore and that potential for public awakening offers some hope.

* * *

Fantasyland
by Kurt Andersen
pp. 10-14

Why are we like this?

. . . The short answer is because we’re Americans, because being American means we can believe any damn thing we want, that our beliefs are equal or superior to anyone else’s, experts be damned. Once people commit to that approach, the world turns inside out, and no cause-and-effect connection is fixed. The credible becomes incredible and the incredible credible.

The word mainstream has recently become a pejorative, shorthand for bias, lies, oppression by the elites. Yet that hated Establishment, the institutions and forces that once kept us from overdoing the flagrantly untrue or absurd—media, academia, politics, government, corporate America, professional associations, respectable opinion in the aggregate—has enabled and encouraged every species of fantasy over the last few decades.

A senior physician at one of America’s most prestigious university hospitals promotes miracle cures on his daily TV show. Major cable channels air documentaries treating mermaids, monsters, ghosts, and angels as real. A CNN anchor speculated on the air that the disappearance of a Malaysian airliner was a supernatural event. State legislatures and one of our two big political parties pass resolutions to resist the imaginary impositions of a New World Order and Islamic law. When a political scientist attacks the idea that “there is some ‘public’ that shares a notion of reality, a concept of reason, and a set of criteria by which claims to reason and rationality are judged,” colleagues just nod and grant tenure. A white woman felt black, pretended to be, and under those fantasy auspices became an NAACP official—and then, busted, said, “It’s not a costume…not something that I can put on and take off anymore. I wouldn’t say I’m African American, but I would say I’m black.” Bill Gates’s foundation has funded an institute devoted to creationist pseudoscience. Despite his nonstop lies and obvious fantasies—rather, because of them—Donald Trump was elected president. The old fringes have been folded into the new center. The irrational has become respectable and often unstoppable. As particular fantasies get traction and become contagious, other fantasists are encouraged by a cascade of out-of-control tolerance. It’s a kind of twisted Golden Rule unconsciously followed: If those people believe that, then certainly we can believe this.

Our whole social environment and each of its overlapping parts—cultural, religious, political, intellectual, psychological—have become conducive to spectacular fallacy and make-believe. There are many slippery slopes, leading in various directions to other exciting nonsense. During the last several decades, those naturally slippery slopes have been turned into a colossal and permanent complex of interconnected, crisscrossing bobsled tracks with no easy exit. Voilà: Fantasyland. . . .

When John Adams said in the 1700s that “facts are stubborn things,” the overriding American principle of personal freedom was not yet enshrined in the Declaration or the Constitution, and the United States of America was itself still a dream. Two and a half centuries later the nation Adams cofounded has become a majority-rule de facto refutation of his truism: “our wishes, our inclinations” and “the dictates of our passions” now apparently do “alter the state of facts and evidence,” because extreme cognitive liberty and the pursuit of happiness rule.

This is not unique to America, people treating real life as fantasy and vice versa, and taking preposterous ideas seriously. We’re just uniquely immersed. In the developed world, our predilection is extreme, distinctly different in the breadth and depth of our embrace of fantasies of many different kinds. Sure, the physician whose fraudulent research launched the antivaccine movement was a Brit, and young Japanese otaku invented cosplay, dressing up as fantasy characters. And while there are believers in flamboyant supernaturalism and prophecy and religious pseudoscience in other developed countries, nowhere else in the rich world are such beliefs central to the self-identities of so many people. We are Fantasyland’s global crucible and epicenter.

This is American exceptionalism in the twenty-first century. America has always been a one-of-a-kind place. Our singularity is different now. We’re still rich and free, still more influential and powerful than any nation, practically a synonym for developed country. But at the same time, our drift toward credulity, doing our own thing, and having an altogether uncertain grip on reality has overwhelmed our other exceptional national traits and turned us into a less-developed country as well.

People tend to regard the Trump moment—this post-truth, alternative facts moment—as some inexplicable and crazy new American phenomenon. In fact, what’s happening is just the ultimate extrapolation and expression of attitudes and instincts that have made America exceptional for its entire history—and really, from its prehistory. . . .

America was created by true believers and passionate dreamers, by hucksters and their suckers—which over the course of four centuries has made us susceptible to fantasy, as epitomized by everything from Salem hunting witches to Joseph Smith creating Mormonism, from P. T. Barnum to Henry David Thoreau to speaking in tongues, from Hollywood to Scientology to conspiracy theories, from Walt Disney to Billy Graham to Ronald Reagan to Oprah Winfrey to Donald Trump. In other words: mix epic individualism with extreme religion; mix show business with everything else; let all that steep and simmer for a few centuries; run it through the anything-goes 1960s and the Internet age; the result is the America we inhabit today, where reality and fantasy are weirdly and dangerously blurred and commingled.

I hope we’re only on a long temporary detour, that we’ll manage somehow to get back on track. If we’re on a bender, suffering the effects of guzzling too much fantasy cocktail for too long, if that’s why we’re stumbling, manic and hysterical, mightn’t we somehow sober up and recover? You would think. But first you need to understand how deeply this tendency has been encoded in our national DNA.

Fake News: It’s as American as George Washington’s Cherry Tree
by Hanna Rosin

Fake news. Post-truth. Alternative facts. For Andersen, these are not momentary perversions but habits baked into our DNA, the ultimate expressions of attitudes “that have made America exceptional for its entire history.” The country’s initial devotion to religious and intellectual freedom, Andersen argues, has over the centuries morphed into a fierce entitlement to custom-made reality. So your right to believe in angels and your neighbor’s right to believe in U.F.O.s and Rachel Dolezal’s right to believe she is black lead naturally to our president’s right to insist that his crowds were bigger.

Andersen’s history begins at the beginning, with the first comforting lie we tell ourselves. Each year we teach our children about Pilgrims, those gentle robed creatures who landed at Plymouth Rock. But our real progenitors were the Puritans, who passed the weeks on the trans-Atlantic voyage preaching about the end times and who, when they arrived, vowed to hang any Quaker or Catholic who landed on their shores. They were zealots and also well-educated British gentlemen, which set the tone for what Andersen identifies as a distinctly American endeavor: propping up magical thinking with elaborate scientific proof.

While Newton and Locke were ushering in an Age of Reason in Europe, over in America unreason was taking new seductive forms. A series of mystic visionaries were planting the seeds of extreme entitlement, teaching Americans that they didn’t have to study any book or old English theologian to know what to think, that whatever they felt to be true was true. In Andersen’s telling, you can easily trace the line from the self-appointed 17th-century prophet Anne Hutchinson to Kanye West: She was, he writes, uniquely American “because she was so confident in herself, in her intuitions and idiosyncratic, subjective understanding of reality,” a total stranger to self-doubt.

What happens next in American history, according to Andersen, happens without malevolence, or even intention. Our national character gels into one that’s distinctly comfortable fogging up the boundary between fantasy and reality in nearly every realm. As soon as George Washington dies fake news is born — the story about the cherry tree, or his kneeling in prayer at Valley Forge. Enterprising businessmen quickly figure out ways to make money off the Americans who gleefully embrace untruths.

Cultural Body-Mind

Daniel Everett is best known as an expert on the Pirahã, although he has studied other cultures as well. It’s unsurprising, then, to find him using the same example in different books. One particular example (seen below) concerns bodily form. I bring it up because it contradicts much of the right-wing and reactionary ideology found in genetic determinism, race realism, evolutionary psychology, and present-day human biodiversity (as opposed to the earlier HBD theory originated by Jonathan Marks).

The excerpt from the second book below is part of a larger section in which Everett responds to the evolutionary psychologist John Tooby, who argues that there is no such thing as ‘culture’ and hence that everything is genetic or otherwise biological. Everett’s notion of the dark matter of the mind is his way of getting at a more deeply complex view. This dark matter is of the mind, but also of the body.

* * *

How Language Began:
The Story of Humanity’s Greatest Invention

by Daniel L. Everett
pp. 220-221

Culture, patterns of being – such as eating, sleeping, thinking and posture – have been cultivated. A Dutch individual will be unlike the Belgian, the British, the Japanese, or the Navajo, because of the way that their minds have been cultivated – because of the roles they play in a particular set of values and because of how they define, live out and prioritise these values, the roles of individuals in a society and the knowledge they have acquired.

It would be worth exploring further just how understanding language and culture together can enable us better to understand each. Such an understanding would also help to clarify how new languages or dialects or any other variants of speech come about. I think that this principle ‘you talk like who you talk with’ represents all human behaviour. We also eat like who we eat with, think like those we think with, etc. We take on a wide range of shared attributes – our associations shape how we live and behave and appear – our phenotype. Culture affects our gestures and our talk. It can even affect our bodies. Early American anthropologist Franz Boas studied in detail the relationship between environment, culture and bodily form. Boas made a solid case that human body types are highly plastic and change to adapt to local environmental forces, both ecological and cultural.

Less industrialised cultures show biology-culture connections. Among the Pirahã, facial features range impressionistically from slightly Negroid to East Asian, to Native American. Differences between villages or families may have a biological basis, originating in different tribes merging over the last 200 years. One sizeable group of Pirahãs (perhaps thirty to forty) – usually found occupying a single village – are descendants of the Torá, a Chapakuran-speaking group that emigrated to the Maici-Marmelos rivers as long as two centuries ago. Even today Brazilians refer to this group as Torá, though the Pirahãs refer to them as Pirahãs. They are culturally and linguistically fully integrated into the Pirahãs. Their facial features are somewhat different – broader noses, some with epicanthic folds, large foreheads – giving an overall impression of similarity to East Asian features. ‡ Yet body dimensions across all Pirahãs are constant. Men’s waists are, or were when I worked with them, uniformly 27 inches (68 cm), their average height 5 feet 2 inches (157.5 cm) and their average weight 55 kilos (121 pounds). The Pirahã phenotypes are similar not because all Pirahãs necessarily share a single genotype, but because they share a culture, including values, knowledge of what to eat and values about how much to eat, when to eat and the like.

These examples show that even the body does not escape our earlier observation that studies of culture and human social behaviour can be summed up in the slogan that ‘you talk like who you talk with’ or ‘grow like who you grow with’. And the same would have held for all our ancestors, even erectus.

Dark Matter of the Mind:
The Culturally Articulated Unconscious

by Daniel L. Everett
Kindle Locations 1499-1576

Thus while Tooby may be absolutely right that to have meaning, “culture” must be implemented in individual minds, this is no indictment of the concept. In fact, this requirement has long been insisted on by careful students of culture, such as Sapir. Yet unlike, say, Sapir, Tooby has no account of how individual minds— like ants in a colony or neurons in a brain or cells in a body— can form a larger entity emerging from multi-individual sets of knowledge, values, and roles. His own nativist views offer little insight into the unique “unconscious patterning of society” (to paraphrase Sapir) that establishes the “social set” to which individuals belong.

The idea of culture, after all, is just that certain patterns of being— eating, sleeping, thinking, posture, and so forth— have been cultivated and that minds arising from one such “field” will not be like minds cultivated in another “field.” The Dutch individual will be unlike the Belgian, the British, the Japanese, or the Navajo, because of the way that his or her mind has been cultivated— because of the roles he or she plays in a particular value grouping, because of the ranking of values that he or she has come to share, and so on.

We must be clear, of course, that the idea of “cultivation” we are speaking of here is not merely of minds, but of entire individuals— their minds a way of talking about their bodies. From the earliest work on ethnography in the US, for example, Boas showed how cultures affect even body shape. And body shape is a good indication that it is not merely cognition that is effected and affected by culture. The uses, experiences, emotions, senses, and social engagements of our bodies forge the patterns of thought we call mind. […]

Exploring this idea that understanding language can help us understand culture, consider how linguists account for the rise of languages, dialects, and all other local variants of speech. Part of their account is captured in linguistic truism that “you talk like who you talk with.” And, I argue, this principle actually impinges upon all human behavior. We not only talk like who we talk with, but we also eat like who we eat with, think like those we think with, and so on. We take on a wide range of shared attributes; our associations shape how we live and behave and appear— our phenotype. Culture can affect our gestures and many other aspects of our talk. Boas (1912a, 1912b) takes up the issue of environment, culture, and bodily form. He provides extensive evidence that human body phenotypes are highly plastic and subject to nongenetic local environmental forces (whether dietary, climatological, or social). Had Boas lived later, he might have studied a very clear and dramatic case; namely, the body height of Dutch citizens before and after World War II. This example is worth a close look because it shows that bodies— like behaviors and beliefs— are cultural products and shapers simultaneously.

The curious case of the Netherlanders fascinates me. The Dutch went from among the shortest peoples of Europe to the tallest in the world in just over one century. One account simplistically links the growth in Dutch height with the change in political system (Olson 2014): “The Dutch growth spurt of the mid-19th century coincided with the establishment of the first liberal democracy. Before this time, the Netherlands had grown rich off its colonies but the wealth had stayed in the hands of the elite. After this time, the wealth began to trickle down to all levels of society, the average income went up and so did the height.” Tempting as this single account may be, there were undoubtedly other factors involved, including gene flow and sexual selection between Dutch and other (mainly European) populations, that contribute to explain European body shape relative to the Dutch. But democracy, a new political change from strengthened and enforced cultural values, is a crucial component of the change in the average height of the Dutch, even though the Dutch genotype has not changed significantly in the past two hundred years. For example, consider figures 2.1 and 2.2. In 1825, US male median height was roughly ten centimeters (roughly four inches) taller than the average Dutch. In the 1850s, the median heights of most males in Europe and the USA were lowered. But then around 1900, they begin to rise again. Dutch male median height lagged behind that of most of the world until the late ’50s and early ’60s, when it began to rise at a faster rate than all other nations represented in the chart. By 1975 the Dutch were taller than Americans. Today, the median Dutch male height (183 cm, or roughly just above six feet) is approximately three inches more than the median American male height (177 cm, or roughly five ten). Thus an apparent biological change turns out to be largely a cultural phenomenon.

To see this culture-body connection even more clearly, consider figure 2.2. In this chart, the correlation between wealth and height emerges clearly (not forgetting that the primary determiner of height is the genome). As wealth grew, so did men (and women). This wasn’t matched in the US, however, even though wealth also grew in the US (precise figures are unnecessary). What emerges from this is that Dutch genes are implicated in the Dutch height transformation, from below average to the tallest people in the world. And yet the genes had to await the right cultural conditions before they could be so dramatically expressed. Other cultural differences that contribute to height increases are: (i) economic (e.g., “white collar”) background; (ii) size of family (more children, shorter children); (iii) literacy of the child’s mother (literate mothers provide better diets); (iv) place of residence (residents of agricultural areas tend to be taller than those in industrial environments— better and more plentiful food); and so on (Khazan 2014). Obviously, these factors all have to do with food access. But looked at from a broader angle, food access is clearly a function of values, knowledge, and social roles— that is, culture.

Just as with the Dutch, less-industrialized cultures show culture-body connections. For example, Pirahã phenotype is also subject to change. Facial features among the Pirahãs range impressionistically from slightly Negroid to East Asian to American Indian (to use terms from physical anthropology). Phenotypical differences between villages or families seem to have a biological basis (though no genetic tests have been conducted). This would be due in part to the fact Pirahã women have trysts with various non-Pirahã visitors (mainly river traders and their crews, but also government workers and contract employees on health assistance assignments, demarcating the Pirahã reservation, etc.). The genetic differences are also partly historical. One sizeable group of Pirahãs (perhaps thirty to forty)— usually found occupying a single village— are descendants of the Torá, a Chapakuran-speaking group that emigrated to the Maici-Marmelos rivers as long as two hundred years ago. Even today Brazilians refer to this group as Torá, though the Pirahãs refer to them as Pirahãs. They are culturally and linguistically fully integrated into the Pirahãs. Their facial features are somewhat different— broader noses; some with epicanthic folds; large foreheads— giving an overall impression of similarity to Cambodian features. This and other evidence show us that the Pirahã gene pool is not closed. 4 Yet body dimensions across all Pirahãs are constant. Men’s waists are or were uniformly 83 centimeters (about 32.5 inches), their average height 157.5 centimeters (five two), and their average weight 55 kilos (about 121 pounds).

I learned about the uniformity in these measurements over the past several decades as I have taken Pirahã men, women, and children to stores in nearby towns to purchase Western clothes, when they came out of their villages for medical help. (The Pirahãs always asked that I purchase Brazilian clothes for them so that they would not attract unnecessary stares and comments.) Thus I learned that the measurements for men were nearly identical. Biology alone cannot account for this homogeneity of body form; culture is implicated as well. For example, Pirahãs raised since infancy outside the village are somewhat taller and much heavier than Pirahãs raised in their culture and communities. Even the body does not escape our earlier observation that studies of culture and human social behavior can be summed up in the slogan that “you talk like who you talk with” or “grow like who you grow with.”

 

Connecting the Dots of Violence

When I’m talking to people or reading articles, alternative viewpoints and interpretations often pop up in my mind. It’s easy for me to see multiple perspectives simultaneously, to hold multiple ideas at once. I have a creative mind, but I’m hardly a genius. So why does this ability seem so rare?

The lack of this ability is not simply a lack of knowledge. I spent the first half of my life in an overwhelming state of ignorance because of inferior public education, exacerbated by a learning disability and depression. But I always had a capacity for divergent thinking. It’s just that divergent thinking doesn’t get you far without greater knowledge to work with. I’ve since remedied my state of ignorance with an extensive program of self-education.

I still don’t know exactly what this ability to see what others don’t see consists of. There is an odd disconnect I regularly come across, even among the well educated. I encountered a perfect example of this in Yes! Magazine, in an article by Mike Males, Gun Violence Has Dropped Dramatically in 3 States With Very Different Gun Laws.

In reading that article, I immediately noticed the lack of any mention of lead toxicity. Then I went to the comments section and saw that other people had noticed this as well. The divergent thinking it takes to make this connection doesn’t require all that much education and brainpower. I’m not particularly special in seeing what the author didn’t see. What is strange is precisely that the author didn’t see it, and that the same would be true for so many like him. It is strange because the author isn’t some random person opining on the internet.

This became even stranger when I looked into Mike Males’ previous writing elsewhere. In the past, he himself had made this connection between violent crime and lead toxicity. Yet somehow the connection slipped from his mind in writing this article. This more recent article was in response to an event, the Parkland school shooting in Florida. And the author seems to have gotten caught up in the short-term memory of the news cycle, not only unable to connect the shooting to other data but failing to connect it to his own previous writing on that data. Maybe it shows the power of context-dependent memory. The school shooting was immediately put into the context of gun violence, and so the framing elicited certain ways of thinking while excluding others. Like so many others, the author got pulled into the media hype of the moment, entirely forgetting what he otherwise would have considered.

This is how people can simultaneously know and not know all kinds of things. The human mind is built on vast disconnections, maybe because there has been little evolutionary advantage to constantly perceiving larger patterns of causation beyond immediate situations. I’m not entirely sure what to make of this. It’s all too common. The thing is, when such a disconnect happens, the person is unaware of it — we don’t know what we don’t know and, as bizarre as it sounds, sometimes we don’t even know what we do know. So, even if I’m better than average at divergent thinking, there is no doubt that in other areas I too demonstrate this same cognitive limitation. It’s hard to see what doesn’t fit into our preconceptions, our worldview.

For whatever reason, lead toxicity has struggled to find a place within public debate and political framing. It doesn’t fit into familiar narratives and the dominant paradigm, specifically the paradigm of a hyper-individualistic society. Even mental health tends to get framed at the individual level, as in the question of whether the signs of mental illness could have been detected early enough for an intervention to stop an individual from committing mass murder. It’s easier to talk about someone being crazy and doing crazy things than to question what caused them to become that way, be it toxicity or something else.

As such, Males’ article focuses narrowly and never entertains fundamental causes; his overlooking of lead toxicity is only the most glaring example. This is odd. We already know so much about what causes violence. The author himself has written multiple times on the topic, specifically in his professional capacity as a Senior Research Fellow at the Center on Juvenile and Criminal Justice (CJCJ). It’s his job to look for explanations and to communicate them, having written several hundred articles for CJCJ alone.

The human mind tends to go straight to the obvious, that is, to what is perceived as obvious within conventional thought. If the problem is gun violence, then the solution is gun control. Like most Americans (and increasingly so), I support more effective gun control. Still, that merely deals with the symptoms and doesn’t explain why someone wants to kill others. The views of the American public, though, don’t stop there. What the majority blames mass gun violence on is mental illness, a rather nebulous explanation. Mental illness, too, is a symptom.

That is what stands out about the omission I’m discussing here. Lead toxicity is one of the most strongly proven causes of neurocognitive problems: stunted brain development, lowered IQ, learning disabilities, autism and Asperger’s, ADHD, depression, impulsivity, nervousness, irritability, anger, aggression, etc. All the heavy metals mess people up in the head, along with causing physical ailments such as hearing impairment, asthma, obesity, kidney failure, and much else. And that is only one toxin among many; mercury is another widespread pollutant, and there are many beyond that. This is directly relevant to the issue of violent behavior and crime, such as the high levels of toxins found in mass murderers:

“Three studies in the California prison system found those in prison for violent activity had significantly higher levels of hair manganese than controls, and studies of an area in Australia with much higher levels of violence as well as autopsies of several mass-murderers also found high levels of manganese to be a common factor. Such violent behavior has long been known in those with high manganese exposure. Other studies in the California prison and juvenile justice systems found that those with 5 or more essential mineral imbalances were 90% more likely to be violent and 50% more likely to be violent with two or more mineral imbalances. A study analyzing hair of 28 mass-murderers found that all had high metals and abnormal essential mineral levels.”

(See also: Lead was scourge before and after Beethoven by Kristina R. Anderson; Violent Crime, Hyperactivity and Metal Imbalance by Neil Ward; The Seeds that Give Birth to Terrorism by Kimberly Key; and An Updated Lead-Crime Roundup for 2018 by Kevin Drum)

Besides toxins, other factors have also been seriously studied. For example, high inequality is strongly correlated with increased rates of mental illness, along with aggressive, risky, and other harmful behaviors (as written about in Keith Payne’s The Broken Ladder; an excerpt can be found at the end of this post). And indeed, even as lead toxicity has decreased overall (while remaining a severe problem among the poor), inequality has worsened.

There are multiple omissions going on here. And they are related. Where there are large disparities of wealth, there are also large disparities of health. Because of environmental classism and racism, toxic dumps are more likely to be located in poor and minority communities, along with the problem of old housing with lead paint found where poverty is concentrated, all of it related to a long history of economic and racial segregation. And I would point out that the evidence supports that, along with inequality, segregation creates a culture of distrust — as Eric Uslaner concluded: “It wasn’t diversity but segregation that led to less trust” (Segregation and Mistrust). In post-colonial countries like the United States, inequality and segregation go hand in hand, built on a socioeconomic system of ethnic/racial castes and a permanent underclass that has developed over several centuries. The fact that this is the normal condition of our country makes it all the harder for someone born here to fully sense its enormity. It’s simply the world we Americans have always known — it is our shared reality, rarely perceived for what it is and even more rarely interrogated.

These are far from being problems limited to those on the bottom of society. Lead toxicity ends up impacting a large part of the population. In reference to serious health concerns, Mark Hyman wrote “that nearly 40 percent of all Americans are estimated to have blood levels of lead high enough to cause these problems” (Why Lead Poisoning May Be Causing Your Health Problems). The same goes for high inequality, which creates dysfunction all across society, increasing social and health problems even among the upper classes, not to mention breeding an atmosphere of conflict and divisiveness (see James Gilligan’s Preventing Violence; an excerpt can be found at the end of this post). Everyone is worse off amid the unhappiness and dysfunction of a highly unequal society — not only more homicides but also more suicides, along with addiction and stress-related diseases.

Let’s look at the facts. Besides lead toxicity remaining a major problem in poor communities and old industrial inner cities, the United States has one of the highest rates of inequality in the world and the highest in the Western world, and this problem has been worsening for decades, with present levels not seen since the Wall Street crash that led to the Great Depression. To go into the details, Florida has the fifth highest inequality in the United States, according to Mark Price and Estelle Sommeiller, and in Florida “all income growth between 2009 and 2011 accrued to the top 1 percent” (Economic Policy Institute). And Parkland, where the school shooting happened, specifically has high inequality: “The income inequality of Parkland, FL (measured using the Gini index) is 0.529 which is higher than the national average” (DATA USA).

In a sense, it is true that guns don’t kill people, that people kill people. But then again, it could be argued that people don’t kill people either; rather, entire systemic problems trigger the violence that kills people, not even to mention the immensity of slow violence that kills people in even higher numbers. Lead toxicity is a great example of slow violence because of the 20-year lag time needed to fully measure its effects, which disallows the direct observation and visceral experience of causality and consequence. The topic of violence is important taken on its own terms (e.g., eliminating gun sales and permits to those with a history of violence would decrease gun violence), but my concern is exploring why it is so difficult to talk about violence in a larger and more meaningful way.

Lead toxicity is a great example for many reasons. It has been hard for advocates to get people to pay attention and take it seriously. Lead toxicity momentarily fell under the media spotlight with the Flint, Michigan case, but that was just one of thousands of places with such problems, many of them with far worse rates. As always, once the media’s short attention span turned to some new shiny object, the lead toxicity crisis was forgotten again, even as the poisoning continues. You can’t see it happening because it is always happening, an ever-present tragedy that even when known remains abstract data. It is in the background and so has become part of our normal experience, operating at a level outside of our awareness.

School shootings are better able to capture the public imagination and so make for compelling dramatic narratives that the media can easily spin. Unlike lead toxicity, school shootings and their victims aren’t invisible. Lead toxins are hidden in the soil of playgrounds and the bodies of children (and prisoners who, as disproportionate victims of lead toxicity, are literally hidden away), whereas a bullet leaves dead bodies, splattered blood, terrified parents, and crying students. Neither can inequality compete with such emotional imagery. People can understand poverty because you can see poor people and poor communities, but you can’t see the societal pattern of dysfunction that exists between the dynamics of extreme poverty and extreme wealth. It can’t be seen, can’t be touched or felt, can’t be concretely known in personal experience.

Whether lead toxicity or high inequality, it is yet more abstract data that never quite gets a toehold within the public mind and the moral imagination. Even for those who should know better, it’s difficult for them to put the pieces together.

* * *

Here is the comment I left at Mike Males’ article:

I was earlier noting that Mike Males doesn’t mention lead exposure/toxicity/poisoning. I’m used to this being ignored in articles like this. Still, it’s disappointing.

It is the single most well supported explanation that has been carefully studied for decades. And the same conclusions have been found in other countries. But for whatever reason, public debate has yet to fully embrace this evidence.

Out of curiosity, I decided to do a web search. Mike Males works for Center on Juvenile and Criminal Justice. He writes articles there. I was able to find two articles where he directly and thoroughly discusses this topic:
http://www.cjcj.org/news/5548
http://www.cjcj.org/news/5552

He also mentions lead toxicity in passing in another article:
http://www.cjcj.org/news/9734

And Mike Males’ work gets referenced in a piece by Kevin Drum:
https://www.motherjones.com/kevin-drum/2016/03/kids-are-becoming-less-violent-adults-not-so-much/

This makes it odd that he doesn’t even mention it in passing here in this article. It’s not because he doesn’t know about the evidence, as he has already written about it. So, what is the reason for not offering the one scientific theory that is most relevant to the data he shares?

This seems straightforward to me. Consider the details from the article.

“Over the last 25 years—though other time periods show similar results—New York, California, and Texas show massive declines in gun homicides, ones that far exceed those of any other state. These three states also show the country’s largest decreases in gun suicide and gun accident death rates.”

The specific states in question were among the most polluting and hence most polluted states. This means they had high rates of lead toxicity. And that means they had the most room for improvement. It goes without saying that national regulations and local programs will have the greatest impact where the problems are worst (similar to the reason why, as studies show, it is easier to increase the IQ of the poor than of the wealthy by improving basic conditions).

“These major states containing seven in 10 of the country’s largest cities once had gun homicide rates far above the national average; now, their rates are well below those elsewhere in the country.”

That is as obvious as obvious can be. Yeah, the largest cities are also the places with the largest concentrations of pollution. Hence, one would expect to find there the highest rates of lead toxicity and the largest improvements, and lead toxicity has been proven to directly correlate with violent crime rates (with causality proven through a dose-response curve, the same methodology used to prove the efficacy of pharmaceuticals).

“The declines are most pronounced in urban young people.”

Once again, this is the complete opposite of surprising. It is exactly what we would expect. Urban areas have the heaviest and most concentrated vehicular traffic, along with the pollution that goes with it. And urban areas are often old industrial centers with a century of accumulated toxins in the soil, water, and elsewhere in the environment. These old urban areas are also where the older housing that is affordable for the poor is found, housing that unfortunately is more likely to have old lead paint chipping away and turning into dust.

So, problem solved. The great mystery is no more. You’re welcome.

https://www.epa.gov/air-pollution-transportation/accomplishments-and-success-air-pollution-transportation

“Congress passed the landmark Clean Air Act in 1970 and gave the newly-formed EPA the legal authority to regulate pollution from cars and other forms of transportation. EPA and the State of California have led the national effort to reduce vehicle pollution by adopting increasingly stringent standards.”

https://www1.nyc.gov/assets/doh/downloads/pdf/lead/lead-2012report.pdf

The progress has been dramatic. For both children and adults, the number and severity of poisonings has declined. At the same time, blood lead testing rates have increased, especially in populations at high risk for lead poisoning. This public health success is due to a combination of factors, most notably commitment to lead poisoning prevention at the federal, state and city levels. New York City and New York State have implemented comprehensive policies and programs that support lead poisoning prevention. […]

“New York City’s progress in reducing childhood lead poisoning has been striking. Not only has the number of children with lead poisoning declined —a 68% drop from 2005 to 2012 — but the severity of poisonings has also declined. In 2005, there were 14 children newly identified with blood lead levels of 45 µg/dL and above, and in 2012 there were 5 children. At these levels, children require immediate medical intervention and may require hospitalization for chelation, a treatment that removes lead from the body.

“Forty years ago, tackling childhood lead poisoning seemed a daunting task. In 1970, when New York City established the Health Department’s Lead Poisoning Prevention Program, there were over 2,600 children identified with blood lead levels of 60 µg/dL or greater — levels today considered medical emergencies. Compared with other parts of the nation, New York City’s children were at higher risk for lead poisoning primarily due to the age of New York City’s housing stock, the prevalence of poverty and the associated deteriorated housing conditions. Older homes and apartments, especially those built before 1950, are most likely to contain lead-based paint. In New York City, more than 60% of the housing stock — around 2 million units — was built before 1950, compared with about 22% of housing nationwide.

“New York City banned the use of lead-based paint in residential buildings in 1960, but homes built before the ban may still have lead in older layers of paint. Lead dust hazards are created when housing is poorly maintained, with deteriorated and peeling lead paint, or when repair work in old housing is done unsafely. Young children living in such housing are especially at risk for lead poisoning. They are more likely to ingest lead dust because they crawl on the floor and put their hands and toys in their mouths.

“While lead paint hazards remain the primary source of lead poisoning in New York City children, the number and rate of newly identified cases and the associated blood lead levels have greatly declined.

“Strong Policies Aimed at Reducing Childhood Lead Exposure

“Declines in blood lead levels can be attributed largely to government regulations instituted in the 1960s, 1970s and 1980s that banned or limited the use of lead in gasoline, house paint, water pipes, solder for food cans and other consumer products. Abatement and remediation of lead-based paint hazards in housing, and increased consumer awareness of lead hazards have also contributed to lower blood lead levels.

“New York City developed strong policies to support lead poisoning prevention. Laws and regulations were adopted to prevent lead exposure before children are poisoned and to protect those with elevated blood lead levels from further exposure.”

https://www.motherjones.com/environment/2016/02/lead-exposure-gasoline-crime-increase-children-health/

“But if all of this solves one mystery, it shines a high-powered klieg light on another: Why has the lead/crime connection been almost completely ignored in the criminology community? In the two big books I mentioned earlier, one has no mention of lead at all and the other has a grand total of two passing references. Nevin calls it “exasperating” that crime researchers haven’t seriously engaged with lead, and Reyes told me that although the public health community was interested in her paper, criminologists have largely been AWOL. When I asked Sammy Zahran about the reaction to his paper with Howard Mielke on correlations between lead and crime at the city level, he just sighed. “I don’t think criminologists have even read it,” he said. All of this jibes with my own reporting. Before he died last year, James Q. Wilson—father of the broken-windows theory, and the dean of the criminology community—had begun to accept that lead probably played a meaningful role in the crime drop of the ’90s. But he was apparently an outlier. None of the criminology experts I contacted showed any interest in the lead hypothesis at all.

“Why not? Mark Kleiman, a public policy professor at the University of California-Los Angeles who has studied promising methods of controlling crime, suggests that because criminologists are basically sociologists, they look for sociological explanations, not medical ones. My own sense is that interest groups probably play a crucial role: Political conservatives want to blame the social upheaval of the ’60s for the rise in crime that followed. Police unions have reasons for crediting its decline to an increase in the number of cops. Prison guards like the idea that increased incarceration is the answer. Drug warriors want the story to be about drug policy. If the actual answer turns out to be lead poisoning, they all lose a big pillar of support for their pet issue. And while lead abatement could be big business for contractors and builders, for some reason their trade groups have never taken it seriously.

“More generally, we all have a deep stake in affirming the power of deliberate human action. When Reyes once presented her results to a conference of police chiefs, it was, unsurprisingly, a tough sell. “They want to think that what they do on a daily basis matters,” she says. “And it does.” But it may not matter as much as they think.”

* * *

The Broken Ladder:
How Inequality Affects the Way We Think, Live, and Die

by Keith Payne
pp. 69-80

How extensive are the effects of the fast-slow trade-off among humans? Psychology experiments suggest that they are much more prevalent than anyone previously suspected, influencing people’s behaviors and decisions in ways that have nothing to do with reproduction. Some of the most important now versus later trade-offs involve money. Financial advisers tell us that if we skip our daily latte and instead save that three dollars a day, we could increase our savings by more than a thousand dollars a year. But that means facing a daily choice: How much do I want a thousand dollars in the bank at the end of the year? And how great would a latte taste right now?

The same evaluations lurk behind larger life decisions. Do I invest time and money in going to college, hoping for a higher salary in the long run, or do I take a job that guarantees an income now? Do I work at a regular job and play by the rules, even if I will probably struggle financially all my life, or do I sell drugs? If I choose drugs, I might lose everything in the long run and end up broke, in jail, or dead. But I might make a lot of money today.

Even short-term feelings of affluence or poverty can make people more or less shortsighted. Recall from the earlier chapters that subjective sensations of poverty and plenty have powerful effects, and those are usually based on how we measure ourselves against other people. Psychologist Mitch Callan and colleagues combined these two principles and predicted that when people are made to feel poor, they will become myopic, taking whatever they can get immediately and ignoring the future. When they are made to feel rich, they would take the long view.

Their study began by asking research participants a long series of probing questions about their finances, their spending habits, and even their personality traits and personal tastes. They told participants that they needed all this detailed information because their computer program was going to calculate a personalized “Comparative Discretionary Income Index.” They were informed that the computer would give them a score that indicated how much money they had compared with other people who were similar to them in age, education level, personality traits, and so on. In reality, the computer program did none of that, but merely displayed a little flashing progress bar and the words “Calculating. Please wait . . .” Then it provided random feedback to participants, telling half that they had more money than most people like them, and the other half that they had less money than other people like them.

Next, participants were asked to make some financial decisions, and were offered a series of choices that would give them either smaller rewards received sooner or larger rewards received later. For example, they might be asked, “Would you rather have $100 today or $120 next week? How about $100 today or $150 next week?” After they answered many such questions, the researchers could calculate how much value participants placed on immediate rewards, and how much they were willing to wait for a better long-term payoff.

The study found that, when people felt poor, they tilted to the fast end of the fast-slow trade-off, preferring immediate gratification. But when they felt relatively rich, they took the long view. To underscore the point that this was not simply some abstract decision without consequences in the real world, the researchers performed the study again with a second group of participants. This time, instead of hypothetical choices, the participants were given twenty dollars and offered the chance to gamble with it. They could decline, pocket the money, and go home, or they could play a card game against the computer and take their chances, in which case they either would lose everything or might make much more money. When participants were made to feel relatively rich, 60 percent chose to gamble. When they were made to feel poor, the number rose to 88 percent. Feeling poor made people more willing to roll the dice.

The astonishing thing about these experiments was that it did not take an entire childhood spent in poverty or affluence to change people’s level of shortsightedness. Even the mere subjective feeling of being less well-off than others was sufficient to trigger the live fast, die young approach to life.

Nothing to Lose

Most of the drug-dealing gang members that Sudhir Venkatesh followed were earning the equivalent of minimum wage and living with their mothers. If they weren’t getting rich and the job was so dangerous, then why did they choose to do it? Because there were a few top gang members who were making several hundred thousand dollars a year. They made their wealth conspicuous by driving luxury cars and wearing expensive clothes and flashy jewelry. They traveled with entourages. The rank-and-file gang members did not look at one another’s lives and conclude that this was a terrible job. They looked instead at the top and imagined what they could be. Despite the fact that their odds of success were impossibly low, even the slim chance of making it big drove them to take outrageous risks.

The live fast, die young theory explains why people would focus on the here and now and neglect the future when conditions make them feel poor. But it does not tell the whole story. The research described in Chapter 2 revealed that rates of many health and social problems were higher, even among members of the middle class, in societies where there was more inequality. One of the puzzling aspects of the rapid rise of inequality over the past three decades is that almost all of the change in fortune has taken place at the top. The incomes of the poor and the middle class are not too different from where they were in 1980, once the numbers are adjusted for inflation. But the income and wealth of the top 1 percent have soared, and those of the top one tenth of a percent dwarfed even their increases. How are the gains of the superrich having harmful effects on the health and well-being of the rest of us? […]

As Cartar suspected, when the bees received bonus nectar, they played it safe and fed in the seablush fields. But when their nectar was removed, they headed straight for the dwarf huckleberry fields.

Calculating the best option in an uncertain environment is a complicated matter; even humans have a hard time with it. According to traditional economic theories, rational decision making means maximizing your payoffs. You can calculate your “expected utility” by multiplying the size of the reward by the likelihood of getting it. So, an option that gives you a 90 percent chance of winning $500 has a greater expected utility than an option that gives you a 40 percent chance of winning $1,000 ($500 × .90 = $450 as compared with $1,000 × .40 = $400). But the kind of decision making demonstrated by the bumblebees doesn’t necessarily line up well with the expected utility model. Neither, it turns out, do the risky decisions made by the many other species that also show the same tendency to take big risks when they are needy.

Humans are one of those species. Imagine what you would do if you owed a thousand dollars in rent that was due today or you would lose your home. In a gamble, would you take the 90 percent chance of winning $500, or the 40 percent chance of winning $1,000? Most people would opt for the smaller chance of getting the $1,000, because if they won, their need would be met. Although it is irrational from the expected utility perspective, it is rational in another sense, because meeting basic needs is sometimes more important than the mathematically best deal. The fact that we see the same pattern across animal species suggests that evolution has found need-based decision making to be adaptive, too. From the humble bumblebee, with its tiny brain, to people trying to make ends meet, we do not always seek to maximize our profits. Call it Mick Jagger logic: If we can’t always get what we want, we try to get what we need. Sometimes that means taking huge risks.
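The arithmetic Payne describes is easy to make concrete. Below is a minimal sketch of my own (not from the book) that compares the two gambles quoted above, first by expected value and then by the chance of covering a hypothetical $1,000 need, which is the need-based logic the passage is describing.

```python
# Toy comparison of expected value vs. need-based choice,
# using the figures quoted in the excerpt (illustrative only).

def expected_value(prob: float, payoff: float) -> float:
    """Expected monetary value of a simple one-shot gamble."""
    return prob * payoff

def prob_meets_need(prob: float, payoff: float, need: float) -> float:
    """Probability that the gamble's payoff covers the stated need."""
    return prob if payoff >= need else 0.0

gambles = {
    "A: 90% chance of $500": (0.90, 500.0),
    "B: 40% chance of $1,000": (0.40, 1000.0),
}
need = 1000.0  # e.g., the rent that is due today

for name, (p, payoff) in gambles.items():
    print(f"{name}: expected value ${expected_value(p, payoff):.0f}, "
          f"chance of covering the need {prob_meets_need(p, payoff, need):.0%}")

# Expected value favors gamble A ($450 vs. $400), but only gamble B gives
# any chance (40%) of meeting the $1,000 need -- the "Mick Jagger logic"
# described in the passage.
```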

We saw in Chapter 2 that people judge what they need by making comparisons to others, and the impact of comparing to those at the top is much larger than comparing to those at the bottom. If rising inequality makes people feel that they need more, and higher levels of need lead to risky choices, it implies a fundamentally new relationship between inequality and risk: Regardless of whether you are poor or middle class, inequality itself might cause you to engage in riskier behavior. […]

People googling terms like “lottery tickets” and “payday loans,” for example, are probably already involved in some risky spending. To measure sexual riskiness, we counted searches for the morning-after pill and for STD testing. And to measure drug- and alcohol-related risks, we counted searches for how to get rid of a hangover and how to pass a drug test. Of course, a person might search for any of these terms for reasons unrelated to engaging in risky behaviors. But, on average, if there are more people involved in sex, drugs, and money risks, you would expect to find more of these searches.

Armed with billions of such data points from Google, we asked whether the states where people searched most often for those terms were also the states with higher levels of income inequality. To help reduce the impact of idiosyncrasies related to each search term, we averaged the six terms together into a general risk-taking index. Then we plotted that index against the degree of inequality in each state. The states with higher inequality had much higher risk taking, as estimated from their Google searches. This relationship remained strong after statistically adjusting for the average income in each state.

If the index of risky googling tracks real-life risky behavior, then we would expect it to be associated with poor life outcomes. So we took our Google index and tested whether it could explain the link, reported in Chapter 2, between inequality and Richard Wilkinson and Kate Pickett’s index of ten major health and social problems. Indeed, the risky googling index was strongly correlated with the index of life problems. Using sophisticated statistical analyses, we found that inequality was a strong predictor of risk taking, which in turn was a strong predictor of health and social problems. These findings suggest that risky behavior is a pathway that helps explain the link between inequality and bad outcomes in everyday life. The evidence becomes much stronger still when we consider these correlations together with the evidence of cause and effect provided by the laboratory experiments.
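For readers who want to picture the method rather than the findings, here is a rough sketch of the kind of state-level analysis the passage describes: standardize several search-term rates, average them into a single risk index, and then relate that index to inequality while adjusting for average income. This is my own hedged reconstruction, not Payne’s actual code or data; the file name and column names are placeholders.

```python
# Hypothetical reconstruction of the "risky googling" index analysis.
# The CSV path and column names are placeholders, not real data.
import numpy as np
import pandas as pd

SEARCH_TERMS = ["lottery_tickets", "payday_loans", "morning_after_pill",
                "std_testing", "hangover_cure", "pass_drug_test"]

# One row per state: search rates, Gini coefficient, median income.
df = pd.read_csv("state_search_rates.csv")

# Standardize each term across states so no single term dominates,
# then average the z-scores into one general risk-taking index.
z = (df[SEARCH_TERMS] - df[SEARCH_TERMS].mean()) / df[SEARCH_TERMS].std()
df["risk_index"] = z.mean(axis=1)

# Simple correlation between state inequality and the risk index.
print("r(Gini, risk index) =", round(df["gini"].corr(df["risk_index"]), 2))

# Adjust for average income: ordinary least squares of the index on
# Gini and income together, then inspect the Gini coefficient.
X = np.column_stack([np.ones(len(df)), df["gini"], df["median_income"]])
beta, *_ = np.linalg.lstsq(X, df["risk_index"].to_numpy(), rcond=None)
print("Gini coefficient, controlling for income:", round(beta[1], 3))
```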

Experiments like the ones described in this chapter are essential for understanding the effects of inequality, because only experiments can separate the effects of the environment from individual differences in character traits. Surely there were some brilliant luminaries and some dullards in each experimental group. Surely there were some hearty souls endowed with great self-control, and some irresponsible slackers, too. Because they were assigned to the experimental groups at random, it is exceedingly unlikely that the groups differed consistently in their personalities or abilities. Instead, we can be confident that the differences we see are caused by the experimental factor, in this case making decisions in a context of high or low inequality. […]

Experiments are gentle reminders that, in the words of John Bradford, “There but for the grace of God go I.” If we deeply understand behavioral experiments, they make us humble. They challenge our assumption that we are always in control of our own successes and failures. They remind us that, like John Bradford, we are not simply the products of our thoughts, our plans, or our bootstraps.

These experiments suggest that any average person, thrust into these different situations, will start behaving differently. Imagine that you are an evil scientist with a giant research budget and no ethical review board. You decide to take ten thousand newborn babies and randomly assign them to be raised by families in a variety of places. You place some with affluent, well-educated parents in the suburbs of Atlanta. You place others with single mothers in inner-city Milwaukee, and so on. The studies we’ve looked at suggest that the environments you assign them to will have major effects on their futures. The children you assign to highly unequal places, like Texas, will have poorer outcomes than those you assign to more equal places, like Iowa, even though Texas and Iowa have about the same average income.

In part, this will occur because bad things are more likely to happen to them in unequal places. And in part, it will occur because the children raised in unequal places will behave differently. All of this can transpire even though the babies you are randomly assigning begin life with the same potential abilities and values.

pp. 116-121

If you look carefully at Figure 5.1, you’ll notice that the curve comparing different countries is bent. The relatively small income advantage that India has over Mozambique, for example, translates into much longer lives in India. Once countries reach the level of development of Chile or Costa Rica, something interesting happens: The curve flattens out. Very rich countries like the United States cease to have any life expectancy advantage over moderately rich countries like Bahrain or even Cuba. At a certain level of economic development, increases in average income stop mattering much.

But within a rich country, there is no bend; the relationship between money and longevity remains linear. If the relationship was driven by high mortality rates among the very poor, you would expect to see a bend. That is, you would expect dramatically shorter lives among the very poor, and then, once above the poverty line, additional income would have little effect. This curious absence of the bend in the line suggests that the link between money and health is not actually a reflection of poverty per se, at least not among economically developed countries. If it was extreme poverty driving the effect, then there would be a big spike in mortality among the very poorest and little difference between the middle- and highest-status groups.

The linear pattern in the British Civil Service study is also striking, because the subjects in this study all have decent government jobs and the salaries, health insurance, pensions, and other benefits that are associated with them. If you thought that elevated mortality rates were only a function of the desperately poor being unable to meet their basic needs, this study would disprove that, because it did not include any desperately poor subjects and still found elevated mortality among those with lower status.

Psychologist Nancy Adler and colleagues have found that where people place themselves on the Status Ladder is a better predictor of health than their actual income or education. In fact, in collaboration with Marmot, Adler’s team revisited the study of British civil servants and asked the research subjects to rate themselves on the ladder. Their subjective assessments of where they stood compared with others proved to be a better predictor of their health than their occupational status. Adler’s analyses suggest that occupational status shapes subjective status, and this subjective feeling of one’s standing, in turn, affects health.

If health and longevity in developed countries are more closely linked to relative comparisons than to income, then you would expect that societies with greater inequality would have poorer health. And, in fact, they do. Across the developed nations surveyed by Wilkinson and Pickett, those with greater income equality had longer life expectancies (see Figure 5.3). Likewise, in the United States, people who lived in states with greater income equality lived longer (see Figure 5.4). Both of these relationships remain once we statistically control for average income, which means that inequality in incomes, not just income itself, is responsible.

But how can something as abstract as inequality or social comparisons cause something as physical as health? Our emergency rooms are not filled with people dropping dead from acute cases of inequality. No, the pathways linking inequality to health can be traced through specific maladies, especially heart disease, cancer, diabetes, and health problems stemming from obesity. Abstract ideas that start as macroeconomic policies and social relationships somehow get expressed in the functioning of our cells.

To understand how that expression happens, we have to first realize that people from different walks of life die different kinds of deaths, in part because they live different kinds of lives. We saw in Chapter 2 that people in more unequal states and countries have poor outcomes on many health measures, including violence, infant mortality, obesity and diabetes, mental illness, and more. In Chapter 3 we learned that inequality leads people to take greater risks, and uncertain futures lead people to take an impulsive, live fast, die young approach to life. There are clear connections between the temptation to enjoy immediate pleasures versus denying oneself for the benefit of long-term health. We saw, for example, that inequality was linked to risky behaviors. In places with extreme inequality, people are more likely to abuse drugs and alcohol, more likely to have unsafe sex, and so on. Other research suggests that living in a high-inequality state increases people’s likelihood of smoking, eating too much, and exercising too little.

Taken together, this evidence implies that inequality leads to illness and shorter lives in part because it gives rise to unhealthy behaviors. That conclusion has been very controversial, especially on the political left. Some argue that it blames the victim because it implies that the poor and those who live in high-inequality areas are partly responsible for their fates by making bad choices. But I don’t think it’s assigning blame to point out the obvious fact that health is affected by smoking, drinking too much, poor diet and exercise, and so on. It becomes a matter of blaming the victim only if you assume that these behaviors are exclusively the result of the weak characters of the less fortunate. On the contrary, we have seen plenty of evidence that poverty and inequality have effects on the thinking and decision making of people living in those conditions. If you or I were thrust into such situations, we might well start behaving in more unhealthy ways, too.

The link between inequality and unhealthy behaviors helps shed light on a surprising trend discovered in a 2015 paper by economists Anne Case and Angus Deaton. Death rates have been steadily declining in the United States and throughout the economically developed world for decades, but these authors noticed a glaring exception: Since the 1990s, the death rate for middle-aged white Americans has been rising. The increase is concentrated among men and whites without a college degree. The death rate for black Americans of the same age remains higher, but is trending slowly downward, like that of all other minority groups.

The wounds in this group seem to be largely self-inflicted. They are not dying from higher rates of heart disease or cancer. They are dying of cirrhosis of the liver, suicide, and a cycle of chronic pain and overdoses of opiates and painkillers.

The trend itself is striking because it speaks to the power of subjective social comparisons. This demographic group is dying of violated expectations. Although high school–educated whites make more money on average than similarly educated blacks, the whites expect more because of their history of privilege. Widening income inequality and stagnant social mobility, Case and Deaton suggest, mean that this generation is likely to be the first in American history that is not more affluent than its parents.

Unhealthy behaviors among those who feel left behind can explain part of the link between inequality and health, but only part. The best estimates have found that such behavior accounts for about one third of the association between inequality and health. Much of the rest is a function of how the body itself responds to crises. Just as our decisions and actions prioritize short-term gains over longer-term interests when in a crisis, the body has a sophisticated mechanism that adopts the same strategy. This crisis management system is specifically designed to save you now, even if it has to shorten your life to do so.

* * *

Preventing Violence
by James Gilligan
Kindle Locations 552-706

The Social Cause of Violence

In order to understand the spread of contagious disease so that one can prevent epidemics, it is just as important to know the vector by which the pathogenic organism that causes the disease is spread throughout the population as it is to identify the pathogen itself. In the nineteenth century, for example, the water supply and the sewer system were discovered to be vectors through which some diseases became epidemic. What is the vector by which shame, the pathogen that causes violence, is spread to its hosts, the people who succumb to the illness of violence? There is a great deal of evidence, which I will summarize here, that shame is spread via the social and economic system. This happens in two ways. The first is through what we might call the “vertical” division of the population into a hierarchical ranking of upper and lower status groups, chiefly classes, castes, and age groups, but also other means by which people are divided into in-groups and out-groups, the accepted and the rejected, the powerful and the weak, the rich and the poor, the honored and the dishonored. For people are shamed on a systematic, wholesale basis, and their vulnerability to feelings of humiliation is increased when they are assigned an inferior social or economic status; and the more inferior and humble it is, the more frequent and intense the feelings of shame, and the more frequent and intense the acts of violence. The second way is by what we could call the “horizontal” asymmetry of social roles, or gender roles, to which the two sexes are assigned in patriarchal cultures, one consequence of which is that men are shamed or honored for different and in some respects opposite behavior from that which brings shame or honor to women. That is, men are shamed for not being violent enough (called cowards or even shot as deserters), and are more honored the more violent they are (with medals, promotions, titles, and estates)—violence for men is successful as a strategy. Women, however, are shamed for being too active and aggressive (called bitches or unfeminine) and honored for being passive and submissive—violence is much less likely to protect them against shame.

Relative Poverty and Unemployment

The most powerful predictor of the homicide rate in comparisons of the different nations of the world, the different states in the United States, different counties, and different cities and census tracts, is the size of the disparities in income and wealth between the rich and the poor. Some three dozen studies, at least, have found statistically significant correlations between the degree of absolute as well as relative poverty and the incidence of homicide. Hsieh and Pugh in 1993 did a meta-analysis of thirty-four such studies and found strong statistical support for these findings, as have several other reviews of this literature: two on homicide by Smith and Zahn in 1999; Chasin in 1998; Short in 1997; James in 1995; and individual studies, such as Braithwaite in 1979 and Messner in 1980.

On a worldwide basis, the nations with the highest inequities in wealth and income, such as many Third World countries in Latin America, Africa, and Asia, have the highest homicide rates (and also the most collective or political violence). Among the developed nations, the United States has the highest inequities in wealth and income, and also has by far the highest homicide rates, five to ten times larger than the other First World nations, all of which have the lowest levels of inequity and relative poverty in the world, and the lowest homicide rates. Sweden and Japan, for example, have had the lowest degree of inequity in the world in recent years, according to the World Bank’s measures; but in fact, all the other countries of western Europe, including Ireland and the United Kingdom, as well as Canada, Australia, and New Zealand, have a much more equal sharing of their collective wealth and income than either the United States or virtually any of the Second or Third World countries, as well as the lowest murder rates.

Those are cross-sectional studies, which analyze the populations being studied at one point in time. Longitudinal studies find the same result: violence rates climb and fall over time as the disparity in income rises and falls, both in the less violent and the more violent nations. For example, in England and Wales, as Figures 1 and 2 show, there was an almost perfect fit between the rise in several different measures of the size of the gap between the rich and the poor and the number of serious crimes recorded by the police between 1950 and 1990. Figure 1 shows two measures of the gradual widening of income differences, which accelerated dramatically from 1984 and 1985. Figure 2 shows the increasing percentage of households and families living in relative poverty, an increase that has been particularly rapid since the late 1970s, and also the number of notifiable offences recorded by the police during the same years. As you can see, the increase in crime rates follows the increase in rates of relative poverty almost perfectly. As both inequality and crime accelerated their growth rates simultaneously, the annual increases in crime from one year to the next became larger than the total crime rate had been in the early 1950s. If we examine the rates for murder alone during the same period, as reported by the Home Office, we find the same pattern, namely a progression from a murder rate that averaged 0.6 per 100,000 between 1946 and 1970, to 0.9 from 1971–78, and to an average of 1.1 between 1979 and 1997 (with a range of 1.0 to 1.3). To put it another way, the five highest levels since the end of World War II, 1.2 and 1.3, were recorded in 1987, 1991, 1994, 1995 and 1997, all twice as high as the 1946–70 average.

The same correlation between violence and relative poverty has been found in the United States. The economist James Galbraith in Created Unequal (1997) has used inequity in wages as one measure of the size and history of income inequity between the rich and the poor from 1920 to 1992. If we correlate this with fluctuations in the American homicide rate during the same period, we find that both wage inequity and the homicide rate increased sharply in the slump of 1920–21, and remained at those historically high levels until the Great Crash of 1929, when they both jumped again, literally doubling together and suddenly, to the highest levels ever observed up to that time. These record levels of economic inequality (which increase, as Galbraith shows, when unemployment increases) were accompanied by epidemic violence; both murder rates and wage inequity remained twice as high as they had previously been, until the economic leveling effects of Roosevelt’s New Deal, beginning in 1933, and the Second World War a few years later, combined to bring both violence and wage inequity down by the end of the war to the same low levels as at the end of the First World War, and they both remained at those low levels for the next quarter of a century, from roughly 1944 to 1968.

That was the modern turning point. In 1968 the median wage began falling, after having risen steadily for the previous three decades, and “beginning in 1969 inequality started to rise, and continued to increase sharply for fifteen years” (J. K. Galbraith). The homicide rate soon reached levels twice as high as they had been during the previous quarter of a century (1942–66). Both wage inequality and homicide rates remained at those relatively high levels for the next quarter of a century, from 1973 to 1997. That is, the murder rate averaged 5 per 100,000 population from 1942 to 1966, and 10 per 100,000 from 1970 to 1997. Finally, by 1998 unemployment dropped to the lowest level since 1970; both the minimum wage and the median wage began increasing again in real terms for the first time in thirty years; and the poverty rate began dropping. Not surprisingly, the homicide rate also fell, for the first time in nearly thirty years, below the range in which it had been fluctuating since 1970–71 (though both rates, of murder and of economic inequality, are still higher than they were from the early 1940s to the mid-1960s).

As mentioned before, unemployment rates are also relevant to rates of violence. M. H. Brenner found that every one per cent rise in the unemployment rate is followed within a year by a 6 per cent rise in the homicide rate, together with similar increases in the rates of suicide, imprisonment, mental hospitalization, infant mortality, and deaths from natural causes such as heart attacks and strokes (Mental Illness and the Economy, 1973, and “Personal Stability and Economic Security,” 1977). Theodore Chiricos reviewed sixty-three American studies and concluded that while the relationship between unemployment and crime may have been inconsistent during the 1960s (some studies found a relationship, some did not), it became overwhelmingly positive in the 1970s, as unemployment changed from a brief interval between jobs to enduring worklessness (“Rates of Crime and Unemployment,” 1987). David Dickinson found an exceptionally close relationship between rates of burglary and unemployment for men under twenty-five in the U.K. in the 1980s and 1990s (“Crime and Unemployment,” 1993). Bernstein and Houston have also found statistically significant correlations between unemployment and crime rates, and negative correlations between wages and crime rates, in the U.S. between 1989 and 1998 (Crime and Work, 2000).

If we compare Galbraith’s data with U.S. homicide statistics, we find that the U.S. unemployment rate has moved in the same direction as the homicide rate from 1920 to 1992: increasing sharply in 1920–21, then jumping to even higher levels from the Crash of 1929 until Roosevelt’s reforms began in 1933, at which point the rates of both unemployment and homicide also began to fall, a trend that accelerated further with the advent of the war. Both rates then remained low (with brief fluctuations) until 1968, when they began a steady rise which kept them both at levels higher than they had been in any postwar period, until the last half of 1997, when unemployment fell below that range and has continued to decline ever since, followed closely by the murder rate.

Why do economic inequality and unemployment both stimulate violence? Ultimately, because both increase feelings of shame (Gilligan, Violence). For example, we speak of the poor as the lower classes, who have lower social and economic status, and the rich as the upper classes who have higher status. But the Latin for lower is inferior, and the word for the lower classes in Roman law was the humiliores. Even in English, the poor are sometimes referred to as the humbler classes. Our language itself tells us that to be poor is to be humiliated and inferior, which makes it more difficult not to feel inferior. The word for upper or higher was superior, which is related to the word for pride, superbia (the opposite of shame), also the root of our word superb (another antonym of inferior). And a word for the upper classes, in Roman law, was the honestiores (related to the word honor, also the opposite of shame and dishonor).

Inferiority and superiority are relative concepts, which is why it is relative poverty, not absolute poverty, that exposes people to feelings of inferiority. When everyone is on the same level, there is no shame in being poor, for in those circumstances the very concept of poverty loses its meaning. Shame is also a function of the gap between one’s level of aspiration and one’s level of achievement. In a society with extremely rigid caste or class hierarchies, it may not feel so shameful to be poor, since it is a matter of bad luck rather than of any personal failing. Under those conditions, lower social status may be more likely to motivate apathy, fatalism, and passivity (or “passive aggressiveness”), and to inhibit ambition and the need for achievement, as Gunnar Myrdal noted in many of the caste-ridden peasant cultures that he studied in Asian Drama (1968). Caste-ridden cultures, however, may have the potential to erupt into violence on a revolutionary or even genocidal scale, once they reject the notion that the caste or class one is born into is immutable, and replace it with the notion that one has only oneself to blame if one remains poor while others are rich. This we have seen repeatedly in the political and revolutionary violence that has characterized the history of Indonesia, Kampuchea, India, Ceylon, China, Vietnam, the Philippines, and many other areas throughout Asia during the past half-century.

All of which is another way of saying that one of the costs people pay for the benefits associated with belief in the “American Dream,” the myth of equal opportunity, is an increased potential for violence. In fact, the social and economic system of the United States combines almost every characteristic that maximizes shame and hence violence. First, there is the “Horatio Alger” myth that everyone can get rich if they are smart and work hard (which means that if they are not rich they must be stupid or lazy, or both). Second, we are not only told that we can get rich, we are also stimulated to want to get rich. For the whole economic system of mass production depends on whetting people’s appetites to consume the flood of goods that are being produced (hence the flood of advertisements). Third, the social and economic reality is the opposite of the Horatio Alger myth, since social mobility is actually less likely in the U.S. than in the supposedly more rigid social structures of Europe and the U.K. As Mishel, Bernstein and Schmitt have noted:

Contrary to widely held perceptions, the U.S. offers less economic mobility than other rich countries. In one study, for example, low-wage workers in the U.S. were more likely to remain in the low-wage labor market five years later than workers in Germany, France, Italy, the United Kingdom, Denmark, Finland, and Sweden (all the other countries studied in this analysis). In another study, poor households in the U.S. were less likely to leave poverty from one year to the next than were poor households in Canada, Germany, the Netherlands, Sweden, and the United Kingdom (all the countries included in this second analysis).
(The State of Working America 2000–2001, 2001)

Fourth, as they also mention, “the U.S. has the most unequal income distribution and the highest poverty rates among all the advanced economies in the world. The U.S. tax and benefit system is also one of the least effective in reducing poverty.” The net effect of all these features of U.S. society is to maximize the gap between aspiration and attainment, which maximizes the frequency and intensity of feelings of shame, which maximizes the rates of violent crimes.

It is difficult not to feel inferior if one is poor when others are rich, especially in a society that equates self-worth with net worth; and it is difficult not to feel rejected and worthless if one cannot get or hold a job while others continue to be employed. Of course, most people who lose jobs or income do not commit murders as a result; but there are always some men who are just barely maintaining their self-esteem at minimally tolerable levels even when they do have jobs and incomes. And when large numbers of them lose those sources of self-esteem, the number who explode into homicidal rage increases as measurably, regularly, and predictably as any epidemic does when the balance between pathogenic forces and the immune system is altered.

And those are not just statistics. I have seen many individual men who have responded in exactly that way under exactly these circumstances. For example, one African-American man was sent to the prison mental hospital I directed in order to have a psychiatric evaluation before his murder trial. A few months before that, he had had a good job. Then he was laid off at work, but he was so ashamed of this that he concealed the fact from his wife (who was a schoolteacher) and their children, going off as if to work every morning and returning at the usual time every night. Finally, after two or three months of this, his wife noticed that he was not bringing in any money. He had to admit the truth, and then his wife fatally said, “What kind of man are you? What kind of man would behave this way?” To prove that he was a man, and to undo the feeling of emasculation, he took out his gun and shot his wife and children. (Keeping a gun is, of course, also a way that some people reassure themselves that they are really men.) What I was struck by, in addition to the tragedy of the whole story, was the intensity of the shame he felt over being unemployed, which led him to go to such lengths to conceal what had happened to him.

Caste Stratification

Caste stratification also stimulates violence, for the same reasons. The United States, perhaps even more than the other Western democracies, has a caste system that is just as real as that of India, except that it is based on skin color and ethnicity more than on hereditary occupation. The fact that it is a caste system similar to India’s is registered by the fact that in my home city, Boston, members of the highest caste are called “Boston Brahmins” (a.k.a. “WASPs,” or White Anglo-Saxon Protestants). The lowest rung on the caste ladder, corresponding to the “untouchables,” or Harijan, of India, is occupied by African-Americans, Native Americans, and some Hispanic-Americans. To be lower caste is to be rejected, socially and vocationally, by the upper castes, and regarded and treated as inferior. For example, whites often move out of neighborhoods when blacks move in; blacks are “the last to be hired and the first to be fired,” so that their unemployment rate has remained twice as high as the white rate ever since it began being measured; black citizens are arrested and publicly humiliated under circumstances in which no white citizen would be; respectable white authors continue to write books and articles claiming that blacks are intellectually inferior to whites; and so on and on, ad infinitum. It is not surprising that the constant shaming and attributions of inferiority to which the lower caste groups are subjected would cause members of those groups to feel shamed, insulted, disrespected, disdained, and treated as inferior—because they have been, and because many of their greatest writers and leaders have told us that this is how they feel they have been treated by whites. Nor is it surprising that this in turn would give rise to feelings of resentment if not rage, nor that the most vulnerable, those who lacked any non-violent means of restoring their sense of personal dignity, such as educational achievements, success, and social status, might well see violence as the only way of expressing those feelings. And since one of the major disadvantages of lower-caste status is lack of equal access to educational and vocational opportunities, it is not surprising that the rates of homicide and other violent crimes among all the lower-caste groups mentioned are many times higher, year after year, than those of the upper-caste groups.

Kindle Locations 1218-1256

Single-Parent Families

Another factor that correlates with rates of violence in the United States is the rate of single-parent families: children raised in them are more likely to be abused, and are more likely to become delinquent and criminal as they grow older, than are children who are raised by two parents. For example, over the past three decades those two variables—the rates of violent crime and of one-parent families—have increased in tandem with each other; the correlation is very close. For some theorists, this has suggested that the enormous increase in the rate of youth violence in the U.S. over the past few decades has been caused by the proportionately similar increase in the rate of single-parent families.

As a parent myself, I would be the first to agree that child-rearing is such a complex and demanding task that parents need all the help they can get, and certainly having two caring and responsible parents available has many advantages over having only one. In addition, children, especially boys, can be shown to benefit in many ways, including diminished risk of delinquency and violent criminality, from having a positive male role-model in the household. The adult who is most often missing in single-parent families is the father. Some criminologists have noticed that Japan, for example, has practically no single-parent families, and its murder rate is only about one-tenth as high as that of the United States.

Sweden’s rate of one-parent families, however, has grown almost to equal that in the United States, and over the same period (the past few decades), yet Sweden’s homicide rate has also been on average only about one-tenth as high as that of the U.S., during that same time. To understand these differences, we should consider another variable, namely, the size of the gap between the rich and the poor. As stated earlier, Sweden and Japan both have among the lowest degrees of economic inequity in the world, whereas the U.S. has the highest polarization of both wealth and income of any industrialized nation. And these differences exist even when comparing different family structures. For example, as Timothy M. Smeeding has shown, the rate of relative poverty is very much lower among single-parent families in Sweden than it is among those in the U.S. Even more astonishing, however, is the fact that the rate of relative poverty among single-parent families in Sweden is much lower than it is among two-parent families in the United States (“Financial Poverty in Developed Countries,” 1997). Thus, it would seem that however much family structure may influence the rate of violence in a society, the overall social and economic structure of the society—the degree to which it is or is not stratified into highly polarized upper and lower social classes and castes—is a much more powerful determinant of the level of violence.

There are other differences between the cultures of Sweden and the U.S. that may also contribute to the differences in the correlation between single-parenthood and violent crime. The United States, with its strongly Puritanical and Calvinist cultural heritage, is much more intolerant of both economic dependency and out-of-wedlock sex than Sweden. Thus, the main form of welfare support for single-parent families in the U.S. (until it was ended a year ago), A.F.D.C., or Aid to Families with Dependent Children, was specifically denied to families in which the father (or any other man) was living with the mother; indeed, government agents have been known to raid the homes of single mothers with no warning in the middle of the night in order to “catch” them in bed with a man, so that they could then deprive them (and their children) of their welfare benefits. This practice, promulgated by politicians who claimed that they were supporting what they called “family values,” of course had the effect of destroying whatever family life did exist. Fortunately for single mothers in Sweden, the whole society is much more tolerant of people’s right to organize their sexual life as they wish, and as a result many more single mothers are in fact able to raise their children with the help of a man.

Another difference between Sweden and the U.S. is that fewer single mothers in Sweden are actually dependent on welfare than is true in the U.S. The main reason for this is that mothers in Sweden receive much more help from the government in getting an education, including vocational training; more help in finding a job; and access to high-quality free childcare, so that mothers can work without leaving their children uncared for. The U.S. system, which claims to be based on opposition to dependency, thus fosters more welfare dependency among single mothers than Sweden’s does, largely because it is so much more miserly and punitive with the “welfare” it does provide. Even more tragically, however, it also fosters much more violence. It is not single motherhood as such that causes the extremely high levels of violence in the United States, then; it is the intense degree of shaming to which single mothers and their children are exposed by the punitive, miserly, Puritanical elements that still constitute a powerful strain in the culture of the United States.

Kindle Locations 1310-1338

Social and Political Democracy

Since the end of the Second World War, the homicide rates of the nations of western Europe, and Japan, for example, have been only about a tenth as high as those of the United States, which is another way of saying that they have been preventing 90 per cent of the violence that the U.S. still experiences. Their rates of homicide were not lower than those in the U.S. before. On the contrary, Europe and Asia were scenes of the largest numbers of homicides ever recorded in the history of the world, both in terms of absolute numbers killed and in the death rates per 100,000 population, in the “thirty years’ war” that lasted from 1914 to 1945. Wars, and governments, have always caused far more homicides than all the individual murderers put together (Richardson, Statistics of Deadly Quarrels, 1960; Keeley, War Before Civilization, 1996). After that war ended, however, they all took two steps which have been empirically demonstrated throughout the world to prevent violence. They instituted social democracy (or “welfare states,” as they are sometimes called), and achieved an unprecedented decrease in the inequities in wealth and income between the richest and poorest groups in the population, one effect of which is to reduce the frequency of interpersonal or “criminal” violence. And Germany, Japan and Italy adopted political democracy as well, the effect of which is to reduce the frequency of international violence, or warfare (including “war crimes”).

While the United States adopted political democracy at its inception, it is the only developed nation on earth that has never adopted social democracy (a “welfare state”). The United States alone among the developed nations does not provide universal health insurance for all its citizens; it has the highest rate of relative poverty among both children and adults, and the largest gap between the rich and the poor, of any of the major economies; vastly less adequate levels of unemployment insurance and other components of shared responsibility for human welfare; and so on. Thus, it is not surprising that it also has murder rates that have been five to ten times as high as those of any other developed nation, year after year. It is also consistent with that analysis that the murder rate finally fell below the epidemic range in which it had fluctuated without exception for the previous thirty years (namely, 8 to 11 homicides per 100,000 population per year), only in 1998, after the unemployment rate reached its lowest level in thirty years and the rate of poverty among the demographic groups most vulnerable to violence began to diminish—slightly—for the first time in thirty years.

Some American politicians, such as President Eisenhower, have suggested that the nations of western Europe have merely substituted a high suicide rate for the high homicide rate that the U.S. has. In fact, the suicide rates in most of the other developed nations are also substantially lower than those of the United States, or at worst not substantially higher. The suicide rates throughout the British Isles, the Netherlands, and the southern European nations are around one-third lower than those of the U.S.; the rates in Canada, Australia, and New Zealand, as well as Norway and Luxembourg, are about the same. Only the remaining northern and central European countries and Japan have suicide rates that are higher, ranging from 30 per cent higher to roughly twice as high as the suicide rate of the U.S. By comparison, the U.S. homicide rate is roughly ten times as high as those of western Europe (including the U.K., Scandinavia, France, Germany, Switzerland, Austria), southern Europe, and Japan; and five times as high as those of Canada, Australia and New Zealand. No other developed nation has a homicide rate that is even close to that of the U.S.

Verbal Behavior

There is a somewhat interesting discussion of the friendship between B.F. Skinner and W.V.O. Quine. The piece explores their shared interests and possible influences on one another. It’s not exactly an area of personal interest, but it got me thinking about Julian Jaynes.

Skinner is famous for his behaviorist research. When behaviorism is mentioned, what immediately comes to mind for most people is Pavlov’s dog. But behaviorism wasn’t limited to animals and simple responses to stimuli. Skinner extended his theory to verbal behavior as well. As Michael Karson explains,

“Skinner called his behaviorism “radical,” (i.e., thorough or complete) because he rejected then-behaviorism’s lack of interest in private events. Just as Galileo insisted that the laws of physics would apply in the sky just as much as on the ground, Skinner insisted that the laws of psychology would apply just as much to the psychologist’s inner life as to the rat’s observable life.

“Consciousness has nothing to do with the so-called and now-solved philosophical problem of mind-body duality, or in current terms, how the physical brain can give rise to immaterial thought. The answer to this pseudo-problem is that even though thought seems to be immaterial, it is not. Thought is no more immaterial than sound, light, or odor. Even educated people used to believe, a long time ago, that these things were immaterial, but now we know that sound requires a material medium to transmit waves, light is made up of photons, and odor consists of molecules. Thus, hearing, seeing, and smelling are not immaterial activities, and there is nothing in so-called consciousness besides hearing, seeing, and smelling (and tasting and feeling). Once you learn how to see and hear things that are there, you can also see and hear things that are not there, just as you can kick a ball that is not there once you have learned to kick a ball that is there. Engaging in the behavior of seeing and hearing things that are not there is called imagination. Its survival value is obvious, since it allows trial and error learning in the safe space of imagination. There is nothing in so-called consciousness that is not some version of the five senses operating on their own. Once you have learned to hear words spoken in a way that makes sense, you can have thoughts; thinking is hearing yourself make language; it is verbal behavior and nothing more. It’s not private speech, as once was believed; thinking is private hearing.”

It’s amazing how much this resonates with Jaynes’ bicameral theory. This maybe shouldn’t be surprising. After all, Jaynes was trained in behaviorism and early on did animal research. He was mentored by the behaviorist Frank A. Beach and was friends with Edward Boring, who wrote a book about consciousness in relation to behaviorism. Reading about Skinner’s ideas about verbal behavior, I was reminded of Jaynes’ view of authorization as it relates to linguistic commands and how they become internalized to form an interiorized mind-space (i.e., Jaynesian consciousness).

I’m not the only person to think along these lines. On Reddit, someone wrote: “It is possible that before there were verbal communities that reinforced the basic verbal operants in full, people didn’t have complete “thinking” and really ran on operant auto-pilot since they didn’t have a full covert verbal repertoire and internal reinforcement/shaping process for verbal responses covert or overt, but this would be aeons before 2-3 kya. Wonder if Jaynes ever encountered Skinner’s “Verbal Behavior”…” Jaynes only references Skinner once in his book on bicameralism and consciousness. But he discusses behaviorism in general to some extent.

In the introduction, he describes behaviorism in this way: “From the outside, this revolt against consciousness seemed to storm the ancient citadels of human thought and set its arrogant banners up in one university after another. But having once been a part of its major school, I confess it was not really what it seemed. Off the printed page, behaviorism was only a refusal to talk about consciousness. Nobody really believed he was not conscious. And there was a very real hypocrisy abroad, as those interested in its problems were forcibly excluded from academic psychology, as text after text tried to smother the unwanted problem from student view. In essence, behaviorism was a method, not the theory that it tried to be. And as a method, it exorcised old ghosts. It gave psychology a thorough house cleaning. And now the closets have been swept out and the cupboards washed and aired, and we are ready to examine the problem again.” As dissatisfying as animal research was for Jaynes, it nonetheless set the stage for deeper questioning by way of a broader approach. It made possible new understanding.

Like Skinner, he wanted to take the next step, shifting from behavior to experience. Even their strategies to accomplish this appear to have been similar. Sensory experience itself becomes internalized, according to both of their theories. For Jaynes, perception of external space becomes the metaphorical model for a sense of internal space. When Karson says of Skinner’s view that “thinking is hearing yourself make language,” that seems close to Jaynes’ discussion of hearing voices as it develops into an ‘I’ and a ‘me’, the sense of identity split into subject and object, which he asserted was required for one to hear one’s own thoughts.

I don’t know Skinner’s thinking in detail or how it changed over time. He too pushed beyond the bounds of behavioral research. It’s not clear that Jaynes ever acknowledged this commonality. In his 1990 afterword to his book, Jaynes makes his one mention of Skinner without pointing out Skinner’s work on verbal behavior:

“This conclusion is incorrect. Self-awareness usually means the consciousness of our own persona over time, a sense of who we are, our hopes and fears, as we daydream about ourselves in relation to others. We do not see our conscious selves in mirrors, even though that image may become the emblem of the self in many cases. The chimpanzees in this experiment and the two-year old child learned a point-to-point relation between a mirror image and the body, wonderful as that is. Rubbing a spot noticed in the mirror is not essentially different from rubbing a spot noticed on the body without a mirror. The animal is not shown to be imagining himself anywhere else, or thinking of his life over time, or introspecting in any sense — all signs of a conscious life.

“This less interesting, more primitive interpretation was made even clearer by an ingenious experiment done in Skinner’s laboratory (Epstein, 1981). Essentially the same paradigm was followed with pigeons, except that it required a series of specific trainings with the mirror, whereas the chimpanzee or child in the earlier experiments was, of course, self-trained. But after about fifteen hours of such training when the contingencies were carefully controlled, it was found that a pigeon also could use a mirror to locate a blue spot on its body which it could not see directly, though it had never been explicitly trained to do so. I do not think that a pigeon because it can be so trained has a self-concept.”

Jaynes was making the simple, if oft overlooked, point that perception of body is not the same thing as consciousness of mind. A behavioral response to one’s own body isn’t fundamentally different than a behavioral response to anything else. Behavioral responses are found in every species. This isn’t helpful in exploring consciousness itself. Skinner too wanted to get beyond this level of basic behavioral research, so it seems. Interestingly, without any mention of Skinner, Jaynes does use the exact phrasing of Skinner in speaking about the unconscious learning of ‘verbal behavior’ (Book One, Chapter 1):

“Another simple experiment can demonstrate this. Ask someone to sit opposite you and to say words, as many words as he can think of, pausing two or three seconds after each of them for you to write them down. If after every plural noun (or adjective, or abstract word, or whatever you choose) you say “good” or “right” as you write it down, or simply “mmm-hmm” or smile, or repeat the plural word pleasantly, the frequency of plural nouns (or whatever) will increase significantly as he goes on saying words. The important thing here is that the subject is not aware that he is learning anything at all. [13] He is not conscious that he is trying to find a way to make you increase your encouraging remarks, or even of his solution to that problem. Every day, in all our conversations, we are constantly training and being trained by each other in this manner, and yet we are never conscious of it.”

This is just a passing comment, one example among many, and he states that “Such unconscious learning is not confined to verbal behavior.” He doesn’t further explore language in this immediate section or repeat the phrase ‘verbal behavior’ in any other section, although the notion of verbal behavior is central to the entire book. But a decade after the original publication date of his book, Jaynes wrote a paper where he does talk about Skinner’s ideas about language:

“One needs language for consciousness. We think consciousness is learned by children between two and a half and five or six years in what we can call the verbal surround, or the verbal community as B.F Skinner calls it. It is an aspect of learning to speak. Mental words are out there as part of the culture and part of the family. A child fits himself into these words and uses them even before he knows the meaning of them. A mother is constantly instilling the seeds of consciousness in a two- and three-year-old, telling the child to stop and think, asking him “What shall we do today?” or “Do you remember when we did such and such or were somewhere?” And all this while metaphor and analogy are hard at work. There are many different ways that different children come to this, but indeed I would say that children without some kind of language are not conscious.”
(Jaynes, J. 1986. “Consciousness and the Voices of the Mind.” Canadian Psychology, 27, 128–148.)

I don’t have access to that paper. That quote comes from an article by John E. Limber: “Language and consciousness: Jaynes’s “Preposterous idea” reconsidered.” It is found in Reflections on the Dawn of Consciousness edited by Marcel Kuijsten (pp. 169-202).

Anyway, the point Jaynes makes is that language is required for consciousness as an inner sense of self because language is required to hear ourselves think. So verbal behavior is a necessary, if not sufficient, condition for the emergence of consciousness as we know it. As long as verbal behavior remains an external event, conscious experience won’t follow. Humans have to learn to hear themselves as they hear others, to split themselves into a speaker and a listener.

This relates to what makes possible the differentiation of hearing a voice being spoken by someone in the external world and hearing a voice as a memory of someone in one’s internal mind-space. Without this distinction, imagination isn’t possible, for anything imagined would become a hallucination where internal and external hearing are conflated, or rather never separated. Jaynes proposes this is why ancient texts regularly describe people as hearing voices of deities and deified kings, spirits and ancestors. The bicameral person, according to the theory, hears their own voice without being conscious that it is their own thought.

All of that emerges from those early studies of animal behavior. Behaviorism plays a key role simply in placing the emphasis on behavior. From there, one can come to the insight that consciousness is a neurocognitive behavior modeled on physical and verbal behavior. The self is a metaphor built on embodied experience in the world. This relates to many similar views, such as that humans learn a theory of mind within themselves by first developing a theory of mind in perceiving others. This goes along with attention schema and the attribution of consciousness. And some have pointed out what is called the double subject fallacy, a hidden form of dualism that infects neuroscience. However described, it gets at the same issue.

It all comes down to our being both social animals and inhabitants of the world. Human development begins with a focus outward, culture and language determining what kind of identity forms. How we learn to behave is who we become.

Fluidity of Perceived Speciation

There is a Princeton article that discusses a study on speciation. Some researchers observed a single finch that became isolated from its own species. The island it ended up on, though, had several other species of finch. So, it crossed the species divide to mate with one of the other populations.

That alone calls the very meaning of species into question. It was neither genetics nor behavior that kept these breeding populations separate. It was simply geographic distance. Eliminate that geographic factor and hybridization quickly follows. The researchers argue that this hybridization represents a new species. But their observations are over a short period of time. There is no reason to assume that further hybridization won’t occur, causing this population to slowly assimilate back into the original local population, the genetic variance lessening over time (as with populations of Homo sapiens that hybridized with other hominids such as Neanderthals).

All this proves is that our present definition of ‘species’ isn’t always particularly scientific, in the sense of being useful for careful understanding. Of course, it’s not hard to create a separate breeding population. But if separate breeding populations don’t have much genetic difference and can easily interbreed, then how is calling them separate species meaningful in any possible sense of that word? Well, it isn’t meaningful.

This study showed that sub-populations can become isolated for periods of time. What it doesn’t show is that this isolation will be long-lasting, as it isn’t entirely known what caused the separation of the breeding populations in the first place. For example, we don’t know to what extent the altered bird songs are related to genetics versus epigenetics, microbiome, environmental shifts, learned behavior, etc. The original lost and isolated finch carried with it much more than the genetics of its species. It would be unscientific to conclude much from such limited information and observations.

The original cause(s) might change again. In that case, the temporary sub-population would lose the traits, in this case birdsong, that have separated it. That probably happens all the time, temporary changes within populations and occasional hybridized populations appearing only to disappear again. But it’s probably rare that these changes become permanent so as to develop into genuinely separate species, in the meaningful sense of being genetically and behaviorally distinct to a large enough degree.

Also, the researchers didn’t eliminate the possible explanation of what in humans would be called culture. Consider mountain lions. Different mountain lion populations will only hunt certain prey species. This isn’t genetically determined behavior. Rather, specific hunting techniques are taught from mother to cub. But this could create separate breeding populations, for in some cases they might hunt in different areas where the various prey are concentrated. Even so, this hasn’t separated the mountain lion populations into different species. They remain genetically the same.

Sure, give it enough time combined with environmental changes, and then speciation might follow. But speciation can’t be determined by behavior alone, even when combined with minor genetic differences. Otherwise, that would mean every human culture on the planet is a separate species. The Irish would be a separate species from the English. The Germans would be a separate species from the French. The Chinese would be a separate species from the Japanese. Et cetera. This is ludicrous, even though some right-wingers might love this idea, and in fact this was an early pre-scientific definition of races as species or sub-species. But as we know, humans have some of the lowest levels of genetic diversity compared with similar species.

Our notion of species is too simplistic. We have this simplistic view because, as our lives are short and science is young, we only have a snapshot of nature. Supposed species are probably a lot more fluid than the present paradigm allows for. The perceived or imposed boundaries of ‘species’ could constantly be changing with various sub-populations constantly emerging and merging, with environmental niches constantly shifting and coalescing. The idea of static species generally seems unhelpful, except maybe in rare cases where a species becomes isolated over long periods of time (e.g., the ice age snails surviving in a few ice caves in Iowa and Illinois) or else in species that are so perfectly adapted that evolutionary conditions have had little apparent impact (e.g., crocodiles).

We easily forget that modern science hasn’t been studying nature for very long. As I often repeat, our ignorance is vast beyond comprehension, much greater than our present knowledge.

As an amusing related case, some species will have sex with entirely different species. Hybridization isn’t even possible in such situations. It’s not clear why this happens. An example of this is a particular population of monkeys sexually mounting deer and, as they sometimes get grooming and food out of the deal, a fair number of the deer tolerate the behavior. There is no reason to assume these deer-mounting monkeys have evolved into a new species, as compared to nearby populations of monkeys who don’t sexually molest hoofed animals. Wild animals don’t seem to care all that much what modern humans think of them. Abstract categories of species don’t stop them from acting however they so desire. And it hasn’t always stopped humans either, whether between the supposed races within the human species or across the supposed divide of species.

From the lascivious monkey article (linked directly above):

“Finally, the researchers say, this might be a kind of cultural practice. Japanese macaques display different behaviors in different locations — some wash their food, or take hot-spring baths, or play with snowballs.

“Adolescent females grinding on the backs of deer might similarly be a cultural phenomenon. But it has only been observed at Minoo within the past few years.

“The monkey-deer sexual interactions reported in our paper may reflect the early stage development of a new behavioural tradition at Minoo,” Gunst-Leca told The Guardian.

“Alternatively, the paper notes, it could be a “short-lived fad.” Time will tell.”