When talking to people or reading articles, alternative viewpoints and interpretations often pop up in my mind. It’s easy for me to see multiple perspectives simultaneously, to hold multiple ideas. I have a creative mind, but I’m hardly a genius. So, why does this ability seem so rare?
The lack of this ability is not simply a lack of knowledge. I spent the first half of my life in an overwhelming state of ignorance because of inferior public education, exacerbated by a learning disability and depression. But I always had the capacity for divergent thinking. It’s just hard to do much with divergent thinking without greater knowledge to work with. I’ve since remedied my state of ignorance with an extensive program of self-education.
I still don’t know exactly what it is, this ability to see what others don’t see. There is an odd disconnect I regularly come across, even among the well educated. I encountered a perfect example of this from Yes! Magazine. It’s an article by Mike Males, “Gun Violence Has Dropped Dramatically in 3 States With Very Different Gun Laws.”
In reading that article, I immediately noticed the lack of any mention of lead toxicity. Then I went to the comments section and saw other people noticed this as well. The divergent thinking it takes to make this connection doesn’t require all that much education and brain power. I’m not particularly special in seeing what the author didn’t see. What is strange is precisely that the author didn’t see it, that the same would be true for so many like him. It is strange because the author isn’t some random person opining on the internet.
This became even stranger when I looked into Mike Males’ previous writing elsewhere. In the past, he himself had made this connection between violent crime and lead toxicity. Yet somehow the connection slipped from his mind in writing this article. This more recent article was in response to an event, the Parkland school shooting in Florida. And the author seems to have gotten caught up in the short-term memory of the news cycle, not only unable to connect it to other data but also failing to connect it to his own previous writing on that data. Maybe it shows the power of context-dependent memory. The school shooting was immediately put into the context of gun violence, and so the framing elicited certain ways of thinking while excluding others. Like so many others, the author got pulled into the media hype of the moment, entirely forgetting what he otherwise would have considered.
This is how people can simultaneously know and not know all kinds of things. The human mind is built on vast disconnections, maybe because there has been little evolutionary advantage in constantly perceiving larger patterns of causation beyond immediate situations. I’m not entirely sure what to make of this. It’s all too common. The thing is, when such a disconnect happens, the person is unaware of it — we don’t know what we don’t know and, as bizarre as it sounds, sometimes we don’t even know what we do know. So, even if I’m better than average at divergent thinking, there is no doubt that in other areas I too demonstrate this same cognitive limitation. It’s hard to see what doesn’t fit into our preconceptions, our worldview.
For whatever reason, lead toxicity has struggled to gain a foothold in public debate and political framing. It doesn’t fit into familiar narratives and the dominant paradigm, specifically in terms of a hyper-individualistic society. Even mental health tends to fall into this attitude of emphasizing the individual level, such as how the signs of mental illness could have been detected so that intervention could have stopped an individual from committing mass murder. It’s easier to talk about someone being crazy and doing crazy things than to question what caused them to become that way, be it toxicity or something else.
As such, Males’ article focuses narrowly, never entertaining fundamental causes; his overlooking of lead toxicity is only the most glaring omission. This is odd. We already know so much about what causes violence. The author himself has written multiple times on the topic, specifically in his professional capacity as a Senior Research Fellow at the Center on Juvenile and Criminal Justice (CJCJ). It’s his job to look for explanations and to communicate them, having written several hundred articles for CJCJ alone.
The human mind tends to go straight to the obvious, that is to say what is perceived as obvious within conventional thought. If the problem is gun violence, then the solution is gun control. Like most Americans (and increasingly so), I support more effective gun control. Still, that is merely dealing with the symptoms and doesn’t explain why someone wants to kill others. The views of the American public, though, don’t stop there. What the majority blames mass gun violence on is mental illness, a rather nebulous explanation. Mental illness also is a symptom.
That is what stands out about the omission I’m discussing here. Lead toxicity is one of the most strongly proven causes of neurocognitive problems: stunted brain development, lowered IQ, learning disabilities, autism and Asperger’s, ADHD, depression, impulsivity, nervousness, irritability, anger, aggression, etc. All the heavy metals mess people up in the head, along with causing physical ailments such as hearing impairment, asthma, obesity, kidney failure, and much else. And that is talking about only one toxin among many; mercury is another widespread pollutant, and there are many beyond that. This is directly relevant to the issue of violent behavior and crime, such as the high levels of toxins found in mass murderers:
“Three studies in the California prison system found those in prison for violent activity had significantly higher levels of hair manganese than controls, and studies of an area in Australia with much higher levels of violence as well as autopsies of several mass-murderers also found high levels of manganese to be a common factor. Such violent behavior has long been known in those with high manganese exposure. Other studies in the California prison and juvenile justice systems found that those with 5 or more essential mineral imbalances were 90% more likely to be violent and 50% more likely to be violent with two or more mineral imbalances. A study analyzing hair of 28 mass-murderers found that all had high metals and abnormal essential mineral levels.”
(See also: Lead was scourge before and after Beethoven by Kristina R. Anderson; Violent Crime, Hyperactivity and Metal Imbalance by Neil Ward; The Seeds that Give Birth to Terrorism by Kimberly Key; and An Updated Lead-Crime Roundup for 2018 by Kevin Drum)
Besides toxins, other factors have also been seriously studied. For example, high inequality is strongly correlated to increased mental illness rates along with aggressive, risky and other harmful behaviors (as written about in Keith Payne’s The Broken Ladder; an excerpt can be found at the end of this post). And indeed, even as lead toxicity has decreased overall (while remaining a severe problem among the poor), inequality has worsened.
There are multiple omissions going on here. And they are related. Where there are large disparities of wealth, there are also large disparities of health. Because of environmental classism and racism, toxic dumps are more likely to be located in poor and minority communities, along with the problem of old housing with lead paint found where poverty is concentrated, all of it being related to a long history of economic and racial segregation. And I would point out that the evidence supports that, along with inequality, segregation creates a culture of distrust — as Eric Uslaner concluded: “It wasn’t diversity but segregation that led to less trust” (Segregation and Mistrust). In post-colonial countries like the United States, inequality and segregation go hand in hand, built on a socioeconomic system of ethnic/racial castes and a permanent underclass that has developed over several centuries. The fact that this is the normal condition of our country makes it all the harder for someone born here to fully sense its enormity. It’s simply the world we Americans have always known — it is our shared reality, rarely perceived for what it is and even more rarely interrogated.
These are far from being problems limited to those on the bottom of society. Lead toxicity ends up impacting a large part of the population. In reference to serious health concerns, Mark Hyman wrote, “that nearly 40 percent of all Americans are estimated to have blood levels of lead high enough to cause these problems” (Why Lead Poisoning May Be Causing Your Health Problems). The same thing goes for high inequality that creates dysfunction all across society, increasing social and health problems even among the upper classes, not to mention breeding an atmosphere of conflict and divisiveness (see James Gilligan’s Preventing Violence; an excerpt can be found at the end of this post). Everyone is worse off amidst the unhappiness and dysfunction of a highly unequal society, which manifests not only in homicides but also in suicides, addiction, and stress-related diseases.
Let’s look at the facts. Besides lead toxicity remaining a major problem in poor communities and old industrial inner cities, the United States has one of the highest rates of inequality in the world and the highest in the Western world, and this problem has been worsening for decades, with present levels not seen since the Wall Street crash that led to the Great Depression. To go into the details, Florida has the fifth highest inequality in the United States, according to Mark Price and Estelle Sommeiller, who found that in Florida “all income growth between 2009 and 2011 accrued to the top 1 percent” (Economic Policy Institute). And Parkland, where the school shooting happened, specifically has high inequality: “The income inequality of Parkland, FL (measured using the Gini index) is 0.529 which is higher than the national average” (DATA USA).
In a sense, it is true that guns don’t kill people, that people kill people. But then again, it could be argued that people don’t kill people either, that systemic problems trigger the violence that kills people, to say nothing of the immensity of slow violence that slowly kills people in even higher numbers. Lead toxicity is a great example of slow violence because of the twenty-year lag time needed to fully measure its effects, disallowing the direct observation and visceral experience of causality and consequence. The topic of violence is important taken on its own terms (e.g., eliminating gun sales and permits to those with a history of violence would decrease gun violence), but my concern is exploring why it is so difficult to talk about violence in a larger and more meaningful way.
Lead toxicity is a great example for many reasons. It has been hard for advocates to get people to pay attention and take this seriously. Lead toxicity momentarily fell under the media spotlight with the Flint, Michigan case, but that was just one of thousands of places with such problems, many of them with far worse rates. As always, once the media’s short attention span turned to some new shiny object, the lead toxicity crisis was forgotten again, even as the poisoning continues. You can’t see it happening because it is always happening, an ever-present tragedy that even when known remains abstract data. It is in the background and so has become part of our normal experience, operating at a level outside of our awareness.
School shootings are better able to capture the public imagination and so make for compelling dramatic narratives that the media can easily spin. Unlike lead toxicity, school shootings and their victims aren’t invisible. Lead toxins are hidden in the soil of playgrounds and the bodies of children (and prisoners who, as disproportionate victims of lead toxicity, are literally hidden away), whereas a bullet leaves dead bodies, splattered blood, terrified parents, and crying students. Neither can inequality compete with such emotional imagery. People can understand poverty because you can see poor people and poor communities, but you can’t see the societal pattern of dysfunction that exists between the dynamics of extreme poverty and extreme wealth. It can’t be seen, can’t be touched or felt, can’t be concretely known in personal experience.
Whether lead toxicity or high inequality, it is yet more abstract data that never quite gets a toehold within the public mind and the moral imagination. Even for those who should know better, it’s difficult for them to put the pieces together.
* * *
Here is the comment I left at Mike Males’ article:
I was earlier noting that Mike Males doesn’t mention lead exposure/toxicity/poisoning. I’m used to this being ignored in articles like this. Still, it’s disappointing.
It is the single most well supported explanation that has been carefully studied for decades. And the same conclusions have been found in other countries. But for whatever reason, public debate has yet to fully embrace this evidence.
Out of curiosity, I decided to do a web search. Mike Males works for the Center on Juvenile and Criminal Justice. He writes articles there. I was able to find two articles where he directly and thoroughly discusses this topic:
He also mentions lead toxicity in passing in another article:
And Mike Males’ work gets referenced in a piece by Kevin Drum:
This makes it odd that he doesn’t even mention it in passing here in this article. It’s not because he doesn’t know about the evidence, as he has already written about it. So, what is the reason for not offering the one scientific theory that is most relevant to the data he shares?
This seems straightforward to me. Consider the details from the article.
“Over the last 25 years—though other time periods show similar results—New York, California, and Texas show massive declines in gun homicides, ones that far exceed those of any other state. These three states also show the country’s largest decreases in gun suicide and gun accident death rates.”
The specific states in question were among the most polluting and hence polluted states. This means they had high rates of lead toxicity. And that means they had the most room for improvement. It goes without saying that national regulations and local programs will have the greatest impact where there are the worst problems (similar to the reason, as studies show, it is easier to increase the IQ of the poor than the wealthy by improving basic conditions).
“These major states containing seven in 10 of the country’s largest cities once had gun homicide rates far above the national average; now, their rates are well below those elsewhere in the country.”
That is as obvious as obvious can be. Yeah, the largest cities are also the places of the largest concentrations of pollution. Hence, one would expect to find the highest rates and largest improvements in lead toxicity, which has been proven to directly correlate to violent crime rates (with causality established through the dose-response curve, the same methodology used to prove the efficacy of pharmaceuticals).
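To make the dose-response idea concrete, here is a minimal sketch using entirely made-up cohort numbers (none of these figures come from the research discussed above): if outcomes climb step by step with exposure, the correlation is far more suggestive of causality than a single association would be.

```python
# Illustrative sketch only: hypothetical cohort data, not real measurements.
# A dose-response relationship means outcomes rise step by step with exposure,
# which is stronger evidence of causality than a lone correlation.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical cohorts: average childhood blood lead (ug/dL) and the
# violent-crime rate per 100k two decades later (invented numbers).
blood_lead = [2, 5, 10, 15, 25]
crime_rate = [310, 390, 520, 640, 870]

r = pearson_r(blood_lead, crime_rate)
monotonic = all(a < b for a, b in zip(crime_rate, crime_rate[1:]))
print(f"correlation r = {r:.2f}, dose-response monotonic: {monotonic}")
```

The point of the sketch is the shape of the argument, not the numbers: each increase in dose is matched by an increase in outcome, the same pattern pharmaceutical trials look for.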
“The declines are most pronounced in urban young people.”
Once again, this is the complete opposite of surprising. It is exactly what we would expect. Urban areas have the heaviest and most concentrated vehicular traffic along with the pollution that goes with it. And urban areas are often old industrial centers with a century of accumulated toxins in the soil, water, and elsewhere in the environment. These specific old urban areas are also where old houses are found which are affordable for the poor, but unfortunately are more likely to have old lead paint that is chipping away and turning into dust.
So, problem solved. The great mystery is no more. You’re welcome.
“Congress passed the landmark Clean Air Act in 1970 and gave the newly-formed EPA the legal authority to regulate pollution from cars and other forms of transportation. EPA and the State of California have led the national effort to reduce vehicle pollution by adopting increasingly stringent standards.”
“The progress has been dramatic. For both children and adults, the number and severity of poisonings has declined. At the same time, blood lead testing rates have increased, especially in populations at high risk for lead poisoning. This public health success is due to a combination of factors, most notably commitment to lead poisoning prevention at the federal, state and city levels. New York City and New York State have implemented comprehensive policies and programs that support lead poisoning prevention. […]
“New York City’s progress in reducing childhood lead poisoning has been striking. Not only has the number of children with lead poisoning declined — a 68% drop from 2005 to 2012 — but the severity of poisonings has also declined. In 2005, there were 14 children newly identified with blood lead levels of 45 µg/dL and above, and in 2012 there were 5 children. At these levels, children require immediate medical intervention and may require hospitalization for chelation, a treatment that removes lead from the body.
“Forty years ago, tackling childhood lead poisoning seemed a daunting task. In 1970, when New York City established the Health Department’s Lead Poisoning Prevention Program, there were over 2,600 children identified with blood lead levels of 60 µg/dL or greater — levels today considered medical emergencies. Compared with other parts of the nation, New York City’s children were at higher risk for lead poisoning primarily due to the age of New York City’s housing stock, the prevalence of poverty and the associated deteriorated housing conditions. Older homes and apartments, especially those built before 1950, are most likely to contain lead-based paint. In New York City, more than 60% of the housing stock — around 2 million units — was built before 1950, compared with about 22% of housing nationwide.
“New York City banned the use of lead-based paint in residential buildings in 1960, but homes built before the ban may still have lead in older layers of paint. Lead dust hazards are created when housing is poorly maintained, with deteriorated and peeling lead paint, or when repair work in old housing is done unsafely. Young children living in such housing are especially at risk for lead poisoning. They are more likely to ingest lead dust because they crawl on the floor and put their hands and toys in their mouths.
“While lead paint hazards remain the primary source of lead poisoning in New York City children, the number and rate of newly identified cases and the associated blood lead levels have greatly declined.
“Strong Policies Aimed at Reducing Childhood Lead Exposure
“Declines in blood lead levels can be attributed largely to government regulations instituted in the 1960s, 1970s and 1980s that banned or limited the use of lead in gasoline, house paint, water pipes, solder for food cans and other consumer products. Abatement and remediation of lead-based paint hazards in housing, and increased consumer awareness of lead hazards have also contributed to lower blood lead levels.
“New York City developed strong policies to support lead poisoning prevention. Laws and regulations were adopted to prevent lead exposure before children are poisoned and to protect those with elevated blood lead levels from further exposure.”
“But if all of this solves one mystery, it shines a high-powered klieg light on another: Why has the lead/crime connection been almost completely ignored in the criminology community? In the two big books I mentioned earlier, one has no mention of lead at all and the other has a grand total of two passing references. Nevin calls it “exasperating” that crime researchers haven’t seriously engaged with lead, and Reyes told me that although the public health community was interested in her paper, criminologists have largely been AWOL. When I asked Sammy Zahran about the reaction to his paper with Howard Mielke on correlations between lead and crime at the city level, he just sighed. “I don’t think criminologists have even read it,” he said. All of this jibes with my own reporting. Before he died last year, James Q. Wilson—father of the broken-windows theory, and the dean of the criminology community—had begun to accept that lead probably played a meaningful role in the crime drop of the ’90s. But he was apparently an outlier. None of the criminology experts I contacted showed any interest in the lead hypothesis at all.
“Why not? Mark Kleiman, a public policy professor at the University of California-Los Angeles who has studied promising methods of controlling crime, suggests that because criminologists are basically sociologists, they look for sociological explanations, not medical ones. My own sense is that interest groups probably play a crucial role: Political conservatives want to blame the social upheaval of the ’60s for the rise in crime that followed. Police unions have reasons for crediting its decline to an increase in the number of cops. Prison guards like the idea that increased incarceration is the answer. Drug warriors want the story to be about drug policy. If the actual answer turns out to be lead poisoning, they all lose a big pillar of support for their pet issue. And while lead abatement could be big business for contractors and builders, for some reason their trade groups have never taken it seriously.
“More generally, we all have a deep stake in affirming the power of deliberate human action. When Reyes once presented her results to a conference of police chiefs, it was, unsurprisingly, a tough sell. “They want to think that what they do on a daily basis matters,” she says. “And it does.” But it may not matter as much as they think.”
* * *
The Broken Ladder:
How Inequality Affects the Way We Think, Live, and Die
by Keith Payne
How extensive are the effects of the fast-slow trade-off among humans? Psychology experiments suggest that they are much more prevalent than anyone previously suspected, influencing people’s behaviors and decisions in ways that have nothing to do with reproduction. Some of the most important now versus later trade-offs involve money. Financial advisers tell us that if we skip our daily latte and instead save that three dollars a day, we could increase our savings by more than a thousand dollars a year. But that means facing a daily choice: How much do I want a thousand dollars in the bank at the end of the year? And how great would a latte taste right now?
The same evaluations lurk behind larger life decisions. Do I invest time and money in going to college, hoping for a higher salary in the long run, or do I take a job that guarantees an income now? Do I work at a regular job and play by the rules, even if I will probably struggle financially all my life, or do I sell drugs? If I choose drugs, I might lose everything in the long run and end up broke, in jail, or dead. But I might make a lot of money today.
Even short-term feelings of affluence or poverty can make people more or less shortsighted. Recall from the earlier chapters that subjective sensations of poverty and plenty have powerful effects, and those are usually based on how we measure ourselves against other people. Psychologist Mitch Callan and colleagues combined these two principles and predicted that when people are made to feel poor, they will become myopic, taking whatever they can get immediately and ignoring the future. When they are made to feel rich, they would take the long view.
Their study began by asking research participants a long series of probing questions about their finances, their spending habits, and even their personality traits and personal tastes. They told participants that they needed all this detailed information because their computer program was going to calculate a personalized “Comparative Discretionary Income Index.” They were informed that the computer would give them a score that indicated how much money they had compared with other people who were similar to them in age, education level, personality traits, and so on. In reality, the computer program did none of that, but merely displayed a little flashing progress bar and the words “Calculating. Please wait . . .” Then it provided random feedback to participants, telling half that they had more money than most people like them, and the other half that they had less money than other people like them.
Next, participants were asked to make some financial decisions, and were offered a series of choices that would give them either smaller rewards received sooner or larger rewards received later. For example, they might be asked, “Would you rather have $100 today or $120 next week? How about $100 today or $150 next week?” After they answered many such questions, the researchers could calculate how much value participants placed on immediate rewards, and how much they were willing to wait for a better long-term payoff.
The study found that, when people felt poor, they tilted to the fast end of the fast-slow trade-off, preferring immediate gratification. But when they felt relatively rich, they took the long view. To underscore the point that this was not simply some abstract decision without consequences in the real world, the researchers performed the study again with a second group of participants. This time, instead of hypothetical choices, the participants were given twenty dollars and offered the chance to gamble with it. They could decline, pocket the money, and go home, or they could play a card game against the computer and take their chances, in which case they either would lose everything or might make much more money. When participants were made to feel relatively rich, 60 percent chose to gamble. When they were made to feel poor, the number rose to 88 percent. Feeling poor made people more willing to roll the dice.
The astonishing thing about these experiments was that it did not take an entire childhood spent in poverty or affluence to change people’s level of shortsightedness. Even the mere subjective feeling of being less well-off than others was sufficient to trigger the live fast, die young approach to life.
Nothing to Lose
Most of the drug-dealing gang members that Sudhir Venkatesh followed were earning the equivalent of minimum wage and living with their mothers. If they weren’t getting rich and the job was so dangerous, then why did they choose to do it? Because there were a few top gang members who were making several hundred thousand dollars a year. They made their wealth conspicuous by driving luxury cars and wearing expensive clothes and flashy jewelry. They traveled with entourages. The rank-and-file gang members did not look at one another’s lives and conclude that this was a terrible job. They looked instead at the top and imagined what they could be. Despite the fact that their odds of success were impossibly low, even the slim chance of making it big drove them to take outrageous risks.
The live fast, die young theory explains why people would focus on the here and now and neglect the future when conditions make them feel poor. But it does not tell the whole story. The research described in Chapter 2 revealed that rates of many health and social problems were higher, even among members of the middle class, in societies where there was more inequality. One of the puzzling aspects of the rapid rise of inequality over the past three decades is that almost all of the change in fortune has taken place at the top. The incomes of the poor and the middle class are not too different from where they were in 1980, once the numbers are adjusted for inflation. But the income and wealth of the top 1 percent have soared, and those of the top one tenth of a percent dwarfed even their increases. How are the gains of the superrich having harmful effects on the health and well-being of the rest of us? […]
As Cartar suspected, when the bees received bonus nectar, they played it safe and fed in the seablush fields. But when their nectar was removed, they headed straight for the dwarf huckleberry fields.
Calculating the best option in an uncertain environment is a complicated matter; even humans have a hard time with it. According to traditional economic theories, rational decision making means maximizing your payoffs. You can calculate your “expected utility” by multiplying the size of the reward by the likelihood of getting it. So, an option that gives you a 90 percent chance of winning $500 has a greater expected utility than an option that gives you a 40 percent chance of winning $1,000 ($500 × .90 = $450 as compared with $1,000 × .40 = $400). But the kind of decision making demonstrated by the bumblebees doesn’t necessarily line up well with the expected utility model. Neither, it turns out, do the risky decisions made by the many other species that also show the same tendency to take big risks when they are needy.
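The expected-utility arithmetic above can be sketched in a few lines (an illustrative aside using the textbook formula, not part of Payne’s excerpt):

```python
# Expected utility of a simple gamble: probability of winning times payoff.
def expected_utility(prob_win, payoff):
    return prob_win * payoff

safe_bet = expected_utility(0.90, 500)    # 90% chance of $500
risky_bet = expected_utility(0.40, 1000)  # 40% chance of $1,000

print(safe_bet, risky_bet)  # prints 450.0 400.0: the "safe" bet wins on paper
```

On paper the safe bet dominates, yet as the excerpt goes on to argue, someone who owes $1,000 in rent today can only ever cover it with the risky option, which is why need-based choices so often defy the expected-utility model.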
Humans are one of those species. Imagine what you would do if you owed a thousand dollars in rent that was due today or you would lose your home. In a gamble, would you take the 90 percent chance of winning $500, or the 40 percent chance of winning $1,000? Most people would opt for the smaller chance of getting the $1,000, because if they won, their need would be met. Although it is irrational from the expected utility perspective, it is rational in another sense, because meeting basic needs is sometimes more important than the mathematically best deal. The fact that we see the same pattern across animal species suggests that evolution has found need-based decision making to be adaptive, too. From the humble bumblebee, with its tiny brain, to people trying to make ends meet, we do not always seek to maximize our profits. Call it Mick Jagger logic: If we can’t always get what we want, we try to get what we need. Sometimes that means taking huge risks.
We saw in Chapter 2 that people judge what they need by making comparisons to others, and the impact of comparing to those at the top is much larger than comparing to those at the bottom. If rising inequality makes people feel that they need more, and higher levels of need lead to risky choices, it implies a fundamentally new relationship between inequality and risk: Regardless of whether you are poor or middle class, inequality itself might cause you to engage in riskier behavior. […]
People googling terms like “lottery tickets” and “payday loans,” for example, are probably already involved in some risky spending. To measure sexual riskiness, we counted searches for the morning-after pill and for STD testing. And to measure drug- and alcohol-related risks, we counted searches for how to get rid of a hangover and how to pass a drug test. Of course, a person might search for any of these terms for reasons unrelated to engaging in risky behaviors. But, on average, if there are more people involved in sex, drugs, and money risks, you would expect to find more of these searches.
Armed with billions of such data points from Google, we asked whether the states where people searched most often for those terms were also the states with higher levels of income inequality. To help reduce the impact of idiosyncrasies related to each search term, we averaged the six terms together into a general risk-taking index. Then we plotted that index against the degree of inequality in each state. The states with higher inequality had much higher risk taking, as estimated from their Google searches. This relationship remained strong after statistically adjusting for the average income in each state.
If the index of risky googling tracks real-life risky behavior, then we would expect it to be associated with poor life outcomes. So we took our Google index and tested whether it could explain the link, reported in Chapter 2, between inequality and Richard Wilkinson and Kate Pickett’s index of ten major health and social problems. Indeed, the risky googling index was strongly correlated with the index of life problems. Using sophisticated statistical analyses, we found that inequality was a strong predictor of risk taking, which in turn was a strong predictor of health and social problems. These findings suggest that risky behavior is a pathway that helps explain the link between inequality and bad outcomes in everyday life. The evidence becomes much stronger still when we consider these correlations together with the evidence of cause and effect provided by the laboratory experiments.
Experiments like the ones described in this chapter are essential for understanding the effects of inequality, because only experiments can separate the effects of the environment from individual differences in character traits. Surely there were some brilliant luminaries and some dullards in each experimental group. Surely there were some hearty souls endowed with great self-control, and some irresponsible slackers, too. Because they were assigned to the experimental groups at random, it is exceedingly unlikely that the groups differed consistently in their personalities or abilities. Instead, we can be confident that the differences we see are caused by the experimental factor, in this case making decisions in a context of high or low inequality. […]
Experiments are gentle reminders that, in the words of John Bradford, “There but for the grace of God go I.” If we deeply understand behavioral experiments, they make us humble. They challenge our assumption that we are always in control of our own successes and failures. They remind us that, like John Bradford, we are not simply the products of our thoughts, our plans, or our bootstraps.
These experiments suggest that any average person, thrust into these different situations, will start behaving differently. Imagine that you are an evil scientist with a giant research budget and no ethical review board. You decide to take ten thousand newborn babies and randomly assign them to be raised by families in a variety of places. You place some with affluent, well-educated parents in the suburbs of Atlanta. You place others with single mothers in inner-city Milwaukee, and so on. The studies we’ve looked at suggest that the environments you assign them to will have major effects on their futures. The children you assign to highly unequal places, like Texas, will have poorer outcomes than those you assign to more equal places, like Iowa, even though Texas and Iowa have about the same average income.
In part, this will occur because bad things are more likely to happen to them in unequal places. And in part, it will occur because the children raised in unequal places will behave differently. All of this can transpire even though the babies you are randomly assigning begin life with the same potential abilities and values.
If you look carefully at Figure 5.1, you’ll notice that the curve comparing different countries is bent. The relatively small income advantage that India has over Mozambique, for example, translates into much longer lives in India. Once countries reach the level of development of Chile or Costa Rica, something interesting happens: The curve flattens out. Very rich countries like the United States cease to have any life expectancy advantage over moderately rich countries like Bahrain or even Cuba. At a certain level of economic development, increases in average income stop mattering much.
But within a rich country, there is no bend; the relationship between money and longevity remains linear. If the relationship were driven by high mortality rates among the very poor, you would expect to see a bend. That is, you would expect dramatically shorter lives among the very poor, and then, once above the poverty line, additional income would have little effect. This curious absence of the bend in the line suggests that the link between money and health is not actually a reflection of poverty per se, at least not among economically developed countries. If it were extreme poverty driving the effect, then there would be a big spike in mortality among the very poorest and little difference between the middle- and highest-status groups.
The linear pattern in the British Civil Service study is also striking, because the subjects in this study all have decent government jobs and the salaries, health insurance, pensions, and other benefits that are associated with them. If you thought that elevated mortality rates were only a function of the desperately poor being unable to meet their basic needs, this study would disprove that, because it did not include any desperately poor subjects and still found elevated mortality among those with lower status.
Psychologist Nancy Adler and colleagues have found that where people place themselves on the Status Ladder is a better predictor of health than their actual income or education. In fact, in collaboration with Marmot, Adler’s team revisited the study of British civil servants and asked the research subjects to rate themselves on the ladder. Their subjective assessments of where they stood compared with others proved to be a better predictor of their health than their occupational status. Adler’s analyses suggest that occupational status shapes subjective status, and this subjective feeling of one’s standing, in turn, affects health.
If health and longevity in developed countries are more closely linked to relative comparisons than to income, then you would expect that societies with greater inequality would have poorer health. And, in fact, they do. Across the developed nations surveyed by Wilkinson and Pickett, those with greater income equality had longer life expectancies (see Figure 5.3). Likewise, in the United States, people who lived in states with greater income equality lived longer (see Figure 5.4). Both of these relationships remain once we statistically control for average income, which means that inequality in incomes, not just income itself, is responsible.
But how can something as abstract as inequality or social comparisons cause something as physical as health? Our emergency rooms are not filled with people dropping dead from acute cases of inequality. No, the pathways linking inequality to health can be traced through specific maladies, especially heart disease, cancer, diabetes, and health problems stemming from obesity. Abstract ideas that start as macroeconomic policies and social relationships somehow get expressed in the functioning of our cells.
To understand how that expression happens, we have to first realize that people from different walks of life die different kinds of deaths, in part because they live different kinds of lives. We saw in Chapter 2 that people in more unequal states and countries have poor outcomes on many health measures, including violence, infant mortality, obesity and diabetes, mental illness, and more. In Chapter 3 we learned that inequality leads people to take greater risks, and uncertain futures lead people to take an impulsive, live fast, die young approach to life. There is a clear tension between the temptation to enjoy immediate pleasures and denying oneself for the benefit of long-term health. We saw, for example, that inequality was linked to risky behaviors. In places with extreme inequality, people are more likely to abuse drugs and alcohol, more likely to have unsafe sex, and so on. Other research suggests that living in a high-inequality state increases people's likelihood of smoking, eating too much, and exercising too little.
Taken together, this evidence implies that inequality leads to illness and shorter lives in part because it gives rise to unhealthy behaviors. That conclusion has been very controversial, especially on the political left. Some argue that it blames the victim because it implies that the poor and those who live in high-inequality areas are partly responsible for their fates by making bad choices. But I don’t think it’s assigning blame to point out the obvious fact that health is affected by smoking, drinking too much, poor diet and exercise, and so on. It becomes a matter of blaming the victim only if you assume that these behaviors are exclusively the result of the weak characters of the less fortunate. On the contrary, we have seen plenty of evidence that poverty and inequality have effects on the thinking and decision making of people living in those conditions. If you or I were thrust into such situations, we might well start behaving in more unhealthy ways, too.
The link between inequality and unhealthy behaviors helps shed light on a surprising trend discovered in a 2015 paper by economists Anne Case and Angus Deaton. Death rates have been steadily declining in the United States and throughout the economically developed world for decades, but these authors noticed a glaring exception: Since the 1990s, the death rate for middle-aged white Americans has been rising. The increase is concentrated among men and whites without a college degree. The death rate for black Americans of the same age remains higher, but is trending slowly downward, like that of all other minority groups.
The wounds in this group seem to be largely self-inflicted. They are not dying from higher rates of heart disease or cancer. They are dying of cirrhosis of the liver, suicide, and a cycle of chronic pain and overdoses of opiates and painkillers.
The trend itself is striking because it speaks to the power of subjective social comparisons. This demographic group is dying of violated expectations. Although high school–educated whites make more money on average than similarly educated blacks, the whites expect more because of their history of privilege. Widening income inequality and stagnant social mobility, Case and Deaton suggest, mean that this generation is likely to be the first in American history that is not more affluent than its parents.
Unhealthy behaviors among those who feel left behind can explain part of the link between inequality and health, but only part. The best estimates have found that such behavior accounts for about one third of the association between inequality and health. Much of the rest is a function of how the body itself responds to crises. Just as our decisions and actions prioritize short-term gains over longer-term interests when in a crisis, the body has a sophisticated mechanism that adopts the same strategy. This crisis management system is specifically designed to save you now, even if it has to shorten your life to do so.
* * *
by James Gilligan
Kindle Locations 552-706
The Social Cause of Violence
In order to understand the spread of contagious disease so that one can prevent epidemics, it is just as important to know the vector by which the pathogenic organism that causes the disease is spread throughout the population as it is to identify the pathogen itself. In the nineteenth century, for example, the water supply and the sewer system were discovered to be vectors through which some diseases became epidemic. What is the vector by which shame, the pathogen that causes violence, is spread to its hosts, the people who succumb to the illness of violence? There is a great deal of evidence, which I will summarize here, that shame is spread via the social and economic system. This happens in two ways. The first is through what we might call the “vertical” division of the population into a hierarchical ranking of upper and lower status groups, chiefly classes, castes, and age groups, but also other means by which people are divided into in-groups and out-groups, the accepted and the rejected, the powerful and the weak, the rich and the poor, the honored and the dishonored. For people are shamed on a systematic, wholesale basis, and their vulnerability to feelings of humiliation is increased when they are assigned an inferior social or economic status; and the more inferior and humble it is, the more frequent and intense the feelings of shame, and the more frequent and intense the acts of violence. The second way is by what we could call the “horizontal” asymmetry of social roles, or gender roles, to which the two sexes are assigned in patriarchal cultures, one consequence of which is that men are shamed or honored for different and in some respects opposite behavior from that which brings shame or honor to women. That is, men are shamed for not being violent enough (called cowards or even shot as deserters), and are more honored the more violent they are (with medals, promotions, titles, and estates)—violence for men is successful as a strategy. 
Women, however, are shamed for being too active and aggressive (called bitches or unfeminine) and honored for being passive and submissive—violence is much less likely to protect them against shame.
Relative Poverty and Unemployment
The most powerful predictor of the homicide rate in comparisons of the different nations of the world, the different states in the United States, different counties, and different cities and census tracts, is the size of the disparities in income and wealth between the rich and the poor. Some three dozen studies, at least, have found statistically significant correlations between the degree of absolute as well as relative poverty and the incidence of homicide. Hsieh and Pugh in 1993 did a meta-analysis of thirty-four such studies and found strong statistical support for these findings, as have several other reviews of this literature: two on homicide, by Smith and Zahn in 1999; Chasin in 1998; Short in 1997; James in 1995; and individual studies, such as Braithwaite in 1979 and Messner in 1980.
On a worldwide basis, the nations with the highest inequities in wealth and income, such as many Third World countries in Latin America, Africa, and Asia, have the highest homicide rates (and also the most collective or political violence). Among the developed nations, the United States has the highest inequities in wealth and income, and also has by far the highest homicide rates, five to ten times larger than the other First World nations, all of which have the lowest levels of inequity and relative poverty in the world, and the lowest homicide rates. Sweden and Japan, for example, have had the lowest degree of inequity in the world in recent years, according to the World Bank’s measures; but in fact, all the other countries of western Europe, including Ireland and the United Kingdom, as well as Canada, Australia, and New Zealand, have a much more equal sharing of their collective wealth and income than either the United States or virtually any of the Second or Third World countries, as well as the lowest murder rates.
Those are cross-sectional studies, which analyze the populations being studied at one point in time. Longitudinal studies find the same result: violence rates climb and fall over time as the disparity in income rises and decreases, both in the less violent and the more violent nations. For example, in England and Wales, as Figures 1 and 2 show, there was an almost perfect fit between the rise in several different measures of the size of the gap between the rich and the poor, and the number of serious crimes recorded by the police between 1950 and 1990. Figure 1 shows two measures of the gradual widening of income differences, which accelerated dramatically from 1984 and 1985. Figure 2 shows the increasing percentage of households and families living in relative poverty, a rate that has been particularly rapid since the late 1970s, and also the number of notifiable offences recorded by the police during the same years. As you can see, the increase in crime rates follows the increase in rates of relative poverty almost perfectly. As both inequality and crime accelerated their growth rates simultaneously, the annual increases in crime from one year to the next became larger than the total crime rate had been in the early 1950s. If we examine the rates for murder alone during the same period, as reported by the Home Office, we find the same pattern: a murder rate that averaged 0.6 per 100,000 between 1946 and 1970 increased to 0.9 from 1971–78, and increased yet again to an average of 1.1 between 1979 and 1997 (with a range of 1.0 to 1.3). To put it another way, the five highest levels since the end of World War II, 1.2 and 1.3, were recorded in 1987, 1991, 1994, 1995, and 1997, all twice as high as the 1946–70 average.
The same correlation between violence and relative poverty has been found in the United States. The economist James Galbraith in Created Unequal (1997) has used inequity in wages as one measure of the size and history of income inequity between the rich and the poor from 1920 to 1992. If we correlate this with fluctuations in the American homicide rate during the same period, we find that both wage inequity and the homicide rate increased sharply in the slump of 1920–21, and remained at those historically high levels until the Great Crash of 1929, when they both jumped again, literally doubling together and suddenly, to the highest levels ever observed up to that time. These record levels of economic inequality (which increase, as Galbraith shows, when unemployment increases) were accompanied by epidemic violence; both murder rates and wage inequity remained twice as high as they had previously been, until the economic leveling effects of Roosevelt’s New Deal, beginning in 1933, and the Second World War a few years later, combined to bring both violence and wage inequity down by the end of the war to the same low levels as at the end of the First World War, and they both remained at those low levels for the next quarter of a century, from roughly 1944 to 1968.
That was the modern turning point. In 1968 the median wage began falling, after having risen steadily for the previous three decades, and “beginning in 1969 inequality started to rise, and continued to increase sharply for fifteen years,” (J. K. Galbraith). The homicide rate soon reached levels twice as high as they had been during the previous quarter of a century (1942–66). Both wage inequality and homicide rates remained at those relatively high levels for the next quarter of a century, from 1973 to 1997. That is, the murder rate averaged 5 per 100,000 population from 1942 to 1966, and 10 per 100,000 from 1970 to 1997. Finally, by 1998 unemployment dropped to the lowest level since 1970; both the minimum wage and the median wage began increasing again in real terms for the first time in thirty years; and the poverty rate began dropping. Not surprisingly, the homicide rate also fell, for the first time in nearly thirty years, below the range in which it had been fluctuating since 1970–71 (though both rates, of murder and of economic inequality, are still higher than they were from the early 1940s to the mid-1960s).
As mentioned before, unemployment rates are also relevant to rates of violence. M. H. Brenner found that every one per cent rise in the unemployment rate is followed within a year by a 6 per cent rise in the homicide rate, together with similar increases in the rates of suicide, imprisonment, mental hospitalization, infant mortality, and deaths from natural causes such as heart attacks and strokes (Mental Illness and the Economy, 1973, and “Personal Stability and Economic Security,” 1977). Theodore Chiricos reviewed sixty-three American studies and concluded that while the relationship between unemployment and crime may have been inconsistent during the 1960s (some studies found a relationship, some did not), it became overwhelmingly positive in the 1970s, as unemployment changed from a brief interval between jobs to enduring worklessness (“Rates of Crime and Unemployment,” 1987). David Dickinson found an exceptionally close relationship between rates of burglary and unemployment for men under twenty-five in the U.K. in the 1980s and 1990s (“Crime and Unemployment,” 1993). Bernstein and Houston have also found statistically significant correlations between unemployment and crime rates, and negative correlations between wages and crime rates, in the U.S. between 1989 and 1998 (Crime and Work, 2000).
If we compare Galbraith’s data with U.S. homicide statistics, we find that the U.S. unemployment rate has moved in the same direction as the homicide rate from 1920 to 1992: increasing sharply in 1920–21, then jumping to even higher levels from the Crash of 1929 until Roosevelt’s reforms began in 1933, at which point the rates of both unemployment and homicide also began to fall, a trend that accelerated further with the advent of the war. Both rates then remained low (with brief fluctuations) until 1968, when they began a steady rise which kept them both at levels higher than they had been in any postwar period, until the last half of 1997, when unemployment fell below that range and has continued to decline ever since, followed closely by the murder rate.
Why do economic inequality and unemployment both stimulate violence? Ultimately, because both increase feelings of shame (Gilligan, Violence). For example, we speak of the poor as the lower classes, who have lower social and economic status, and the rich as the upper classes who have higher status. But the Latin for lower is inferior, and the word for the lower classes in Roman law was the humiliores. Even in English, the poor are sometimes referred to as the humbler classes. Our language itself tells us that to be poor is to be humiliated and inferior, which makes it more difficult not to feel inferior. The word for upper or higher was superior, which is related to the word for pride, superbia (the opposite of shame), also the root of our word superb (another antonym of inferior). And a word for the upper classes, in Roman law, was the honestiores (related to the word honor, also the opposite of shame and dishonor).
Inferiority and superiority are relative concepts, which is why it is relative poverty, not absolute poverty, that exposes people to feelings of inferiority. When everyone is on the same level, there is no shame in being poor, for in those circumstances the very concept of poverty loses its meaning. Shame is also a function of the gap between one’s level of aspiration and one’s level of achievement. In a society with extremely rigid caste or class hierarchies, it may not feel so shameful to be poor, since it is a matter of bad luck rather than of any personal failing. Under those conditions, lower social status may be more likely to motivate apathy, fatalism, and passivity (or “passive aggressiveness”), and to inhibit ambition and the need for achievement, as Gunnar Myrdal noted in many of the caste-ridden peasant cultures that he studied in Asian Drama (1968). Caste-ridden cultures, however, may have the potential to erupt into violence on a revolutionary or even genocidal scale, once they reject the notion that the caste or class one is born into is immutable, and replace it with the notion that one has only oneself to blame if one remains poor while others are rich. This we have seen repeatedly in the political and revolutionary violence that has characterized the history of Indonesia, Kampuchea, India, Ceylon, China, Vietnam, the Philippines, and many other areas throughout Asia during the past half-century.
All of which is another way of saying that one of the costs people pay for the benefits associated with belief in the “American Dream,” the myth of equal opportunity, is an increased potential for violence. In fact, the social and economic system of the United States combines almost every characteristic that maximizes shame and hence violence. First, there is the “Horatio Alger” myth that everyone can get rich if they are smart and work hard (which means that if they are not rich they must be stupid or lazy, or both). Second, we are not only told that we can get rich, we are also stimulated to want to get rich. For the whole economic system of mass production depends on whetting people’s appetites to consume the flood of goods that are being produced (hence the flood of advertisements). Third, the social and economic reality is the opposite of the Horatio Alger myth, since social mobility is actually less likely in the U.S. than in the supposedly more rigid social structures of Europe and the U.K. As Mishel, Bernstein and Schmitt have noted:
Contrary to widely held perceptions, the U.S. offers less economic mobility than other rich countries. In one study, for example, low-wage workers in the U.S. were more likely to remain in the low-wage labor market five years later than workers in Germany, France, Italy, the United Kingdom, Denmark, Finland, and Sweden (all the other countries studied in this analysis). In another study, poor households in the U.S. were less likely to leave poverty from one year to the next than were poor households in Canada, Germany, the Netherlands, Sweden, and the United Kingdom (all the countries included in this second analysis).
(The State of Working America 2000–2001, 2001)
Fourth, as they also mention, “the U.S. has the most unequal income distribution and the highest poverty rates among all the advanced economies in the world. The U.S. tax and benefit system is also one of the least effective in reducing poverty.” The net effect of all these features of U.S. society is to maximize the gap between aspiration and attainment, which maximizes the frequency and intensity of feelings of shame, which maximizes the rates of violent crimes.
It is difficult not to feel inferior if one is poor when others are rich, especially in a society that equates self-worth with net worth; and it is difficult not to feel rejected and worthless if one cannot get or hold a job while others continue to be employed. Of course, most people who lose jobs or income do not commit murders as a result; but there are always some men who are just barely maintaining their self-esteem at minimally tolerable levels even when they do have jobs and incomes. And when large numbers of them lose those sources of self-esteem, the number who explode into homicidal rage increases as measurably, regularly, and predictably as any epidemic does when the balance between pathogenic forces and the immune system is altered.
And those are not just statistics. I have seen many individual men who have responded in exactly that way under exactly these circumstances. For example, one African-American man was sent to the prison mental hospital I directed in order to have a psychiatric evaluation before his murder trial. A few months before that, he had had a good job. Then he was laid off at work, but he was so ashamed of this that he concealed the fact from his wife (who was a schoolteacher) and their children, going off as if to work every morning and returning at the usual time every night. Finally, after two or three months of this, his wife noticed that he was not bringing in any money. He had to admit the truth, and then his wife fatally said, “What kind of man are you? What kind of man would behave this way?” To prove that he was a man, and to undo the feeling of emasculation, he took out his gun and shot his wife and children. (Keeping a gun is, of course, also a way that some people reassure themselves that they are really men.) What I was struck by, in addition to the tragedy of the whole story, was the intensity of the shame he felt over being unemployed, which led him to go to such lengths to conceal what had happened to him.
Caste stratification also stimulates violence, for the same reasons. The United States, perhaps even more than the other Western democracies, has a caste system that is just as real as that of India, except that it is based on skin color and ethnicity more than on hereditary occupation. The fact that it is a caste system similar to India’s is registered by the fact that in my home city, Boston, members of the highest caste are called “Boston Brahmins” (a.k.a. “WASPs,” or White Anglo-Saxon Protestants). The lowest rung on the caste ladder, corresponding to the “untouchables,” or Harijan, of India, is occupied by African-Americans, Native Americans, and some Hispanic-Americans. To be lower caste is to be rejected, socially and vocationally, by the upper castes, and regarded and treated as inferior. For example, whites often move out of neighborhoods when blacks move in; blacks are “the last to be hired and the first to be fired,” so that their unemployment rate has remained twice as high as the white rate ever since it began being measured; black citizens are arrested and publicly humiliated under circumstances in which no white citizen would be; respectable white authors continue to write books and articles claiming that blacks are intellectually inferior to whites; and so on and on, ad infinitum. It is not surprising that the constant shaming and attributions of inferiority to which the lower caste groups are subjected would cause members of those groups to feel shamed, insulted, disrespected, disdained, and treated as inferior—because they have been, and because many of their greatest writers and leaders have told us that this is how they feel they have been treated by whites.
Nor is it surprising that this in turn would give rise to feelings of resentment if not rage, nor that the most vulnerable, those who lacked any non-violent means of restoring their sense of personal dignity, such as educational achievements, success, and social status, might well see violence as the only way of expressing those feelings. And since one of the major disadvantages of lower-caste status is lack of equal access to educational and vocational opportunities, it is not surprising that the rates of homicide and other violent crimes among all the lower-caste groups mentioned are many times higher, year after year, than those of the upper-caste groups.
Kindle Locations 1218-1256
Single-Parent Families
Another factor that correlates with rates of violence in the United States is the rate of single-parent families: children raised in them are more likely to be abused, and are more likely to become delinquent and criminal as they grow older, than are children who are raised by two parents. For example, over the past three decades those two variables—the rates of violent crime and of one-parent families—have increased in tandem with each other; the correlation is very close. For some theorists, this has suggested that the enormous increase in the rate of youth violence in the U.S. over the past few decades has been caused by the proportionately similar increase in the rate of single-parent families.
As a parent myself, I would be the first to agree that child-rearing is such a complex and demanding task that parents need all the help they can get, and certainly having two caring and responsible parents available has many advantages over having only one. In addition, children, especially boys, can be shown to benefit in many ways, including diminished risk of delinquency and violent criminality, from having a positive male role-model in the household. The adult who is most often missing in single-parent families is the father. Some criminologists have noticed that Japan, for example, has practically no single-parent families, and its murder rate is only about one-tenth as high as that of the United States.
Sweden’s rate of one-parent families, however, has grown almost to equal that in the United States, and over the same period (the past few decades), yet Sweden’s homicide rate has also been on average only about one-tenth as high as that of the U.S., during that same time. To understand these differences, we should consider another variable, namely, the size of the gap between the rich and the poor. As stated earlier, Sweden and Japan both have among the lowest degrees of economic inequity in the world, whereas the U.S. has the highest polarization of both wealth and income of any industrialized nation. And these differences exist even when comparing different family structures. For example, as Timothy M. Smeeding has shown, the rate of relative poverty is very much lower among single-parent families in Sweden than it is among those in the U.S. Even more astonishing, however, is the fact that the rate of relative poverty among single-parent families in Sweden is much lower than it is among two-parent families in the United States (“Financial Poverty in Developed Countries,” 1997). Thus, it would seem that however much family structure may influence the rate of violence in a society, the overall social and economic structure of the society—the degree to which it is or is not stratified into highly polarized upper and lower social classes and castes—is a much more powerful determinant of the level of violence.
There are other differences between the cultures of Sweden and the U.S. that may also contribute to the differences in the correlation between single-parenthood and violent crime. The United States, with its strongly Puritanical and Calvinist cultural heritage, is much more intolerant of both economic dependency and out-of-wedlock sex than Sweden. Thus, the main form of welfare support for single-parent families in the U.S. (until it was ended a year ago), A.F.D.C. (Aid to Families with Dependent Children), was specifically denied to families in which the father (or any other man) was living with the mother; indeed, government agents have been known to raid the homes of single mothers with no warning in the middle of the night in order to “catch” them in bed with a man, so that they could then deprive them (and their children) of their welfare benefits. This practice, promulgated by politicians who claimed that they were supporting what they called “family values,” of course had the effect of destroying whatever family life did exist. Fortunately for single mothers in Sweden, the whole society is much more tolerant of people’s right to organize their sexual life as they wish, and as a result many more single mothers are in fact able to raise their children with the help of a man.
Another difference between Sweden and the U.S. is that fewer single mothers in Sweden are actually dependent on welfare than is true in the U.S. The main reason for this is that mothers in Sweden receive much more help from the government in getting an education, including vocational training; more help in finding a job; and access to high-quality free childcare, so that mothers can work without leaving their children uncared for. The U.S. system, which claims to be based on opposition to dependency, thus fosters more welfare dependency among single mothers than Sweden’s does, largely because it is so much more miserly and punitive with the “welfare” it does provide. Even more tragically, however, it also fosters much more violence. It is not single motherhood as such that causes the extremely high levels of violence in the United States, then; it is the intense degree of shaming to which single mothers and their children are exposed by the punitive, miserly, Puritanical elements that still constitute a powerful strain in the culture of the United States.
Kindle Locations 1310-1338
Social and Political Democracy

Since the end of the Second World War, the homicide rates of the nations of western Europe, and Japan, for example, have been only about a tenth as high as those of the United States, which is another way of saying that they have been preventing 90 per cent of the violence that the U.S. still experiences. Their rates of homicide were not lower than those in the U.S. before. On the contrary, Europe and Asia were scenes of the largest numbers of homicides ever recorded in the history of the world, both in terms of absolute numbers killed and in the death rates per 100,000 population, in the “thirty years’ war” that lasted from 1914 to 1945. Wars, and governments, have always caused far more homicides than all the individual murderers put together (Richardson, Statistics of Deadly Quarrels, 1960; Keeley, War Before Civilization, 1996). After that war ended, however, they all took two steps which have been empirically demonstrated throughout the world to prevent violence. They instituted social democracy (or “welfare states,” as they are sometimes called), and achieved an unprecedented decrease in the inequities in wealth and income between the richest and poorest groups in the population, one effect of which is to reduce the frequency of interpersonal or “criminal” violence. And Germany, Japan and Italy adopted political democracy as well, the effect of which is to reduce the frequency of international violence, or warfare (including “war crimes”).
While the United States adopted political democracy at its inception, it is the only developed nation on earth that has never adopted social democracy (a “welfare state”). The United States alone among the developed nations does not provide universal health insurance for all its citizens; it has the highest rate of relative poverty among both children and adults, and the largest gap between the rich and the poor, of any of the major economies; vastly less adequate levels of unemployment insurance and other components of shared responsibility for human welfare; and so on. Thus, it is not surprising that it also has murder rates that have been five to ten times as high as those of any other developed nation, year after year. It is also consistent with that analysis that the murder rate finally fell below the epidemic range in which it had fluctuated without exception for the previous thirty years (namely, 8 to 11 homicides per 100,000 population per year), only in 1998, after the unemployment rate reached its lowest level in thirty years and the rate of poverty among the demographic groups most vulnerable to violence began to diminish—slightly—for the first time in thirty years.
Some American politicians, such as President Eisenhower, have suggested that the nations of western Europe have merely substituted a high suicide rate for the high homicide rate that the U.S. has. In fact, the suicide rates in most of the other developed nations are also substantially lower than those of the United States, or at worst not substantially higher. The suicide rates throughout the British Isles, the Netherlands, and the southern European nations are around one-third lower than those of the U.S.; the rates in Canada, Australia, and New Zealand, as well as Norway and Luxembourg, are about the same. Only the remaining northern and central European countries and Japan have suicide rates that are higher, ranging from 30 per cent higher to roughly twice as high as the suicide rate of the U.S. By comparison, the U.S. homicide rate is roughly ten times as high as those of western Europe (including the U.K., Scandinavia, France, Germany, Switzerland, Austria), southern Europe, and Japan; and five times as high as those of Canada, Australia and New Zealand. No other developed nation has a homicide rate that is even close to that of the U.S.