Who were the Phoenicians?

In modern society, we are obsessed with identity, specifically with categorizing and labeling it. This leads to a tendency to essentialize identity, a tendency the evidence doesn’t support. The only thing we are born as is members of a particular species, Homo sapiens.

What stands out is that other societies have entirely different experiences of collective identity. The most common distinctions, contrary to ethnic and racial ideologies, are those we perceive in the people most similar to us — the (too often violent) narcissism of small differences.

We not only project our own cultural assumptions onto other societies; we also read anachronisms into the past as a way of rationalizing the present. But if we study closely what we know from history and archaeology, there isn’t any clear evidence for ethnic and racial ideology.

The ancient world is more complex than our simple notions allow. A good example of this is the people(s) who have been called Phoenicians.

* * *

In Search of the Phoenicians
by Josephine Quinn
pp. 13-17

However, my intention here is not simply to rescue the Phoenicians from their undeserved obscurity. Quite the opposite, in fact: I’m going to start by making the case that they did not in fact exist as a self-conscious collective or “people.” The term “Phoenician” itself is a Greek invention, and there is no good evidence in our surviving ancient sources that these Phoenicians saw themselves, or acted, in collective terms above the level of the city or in many cases simply the family. The first and so far the only person known to have called himself a Phoenician in the ancient world was the Greek novelist Heliodorus of Emesa (modern Homs in Syria) in the third or fourth century CE, a claim made well outside the traditional chronological and geographical boundaries of Phoenician history, and one that I will in any case call into question later in this book.

Instead, then, this book explores the communities and identities that were important to the ancient people we have learned to call Phoenicians, and asks why the idea of being Phoenician has been so enthusiastically adopted by other people and peoples—from ancient Greece and Rome, to the emerging nations of early modern Europe, to contemporary Mediterranean nation-states. It is these afterlives, I will argue, that provide the key to the modern conception of the Phoenicians as a “people.” As Ernest Gellner put it, “Nationalism is not the awakening of nations to self-consciousness: it invents nations where they do not exist.” 7 In the case of the Phoenicians, I will suggest, modern nationalism invented and then sustained an ancient nation.

Identities have attracted a great deal of scholarly attention in recent years, serving as the academic marginalia to a series of crucially important political battles for equality and freedom. 8 We have learned from these investigations that identities are not simple and essential truths into which we are born, but that they are constructed by the social and cultural contexts in which we live, by other people, and by ourselves—which is not to say that they are necessarily freely chosen, or that they are not genuinely and often fiercely felt: to describe something as imagined is not to dismiss it as imaginary. 9 Our identities are also multiple: we identify and are identified by gender, class, age, religion, and many other things, and we can be more than one of any of those things at once, whether those identities are compatible or contradictory. 10 Furthermore, identities are variable across both time and space: we play—and we are assigned—different roles with different people and in different contexts, and they have differing levels of importance to us in different situations. 11

In particular, the common assumption that we all define ourselves as a member of a specific people or “ethnic group,” a collective linked by shared origins, ancestry, and often ancestral territory, rather than simply by contemporary political, social, or cultural ties, remains just that—an assumption. 12 It is also a notion that has been linked to distinctive nineteenth-century European perspectives on nationalism and identity, 13 and one that sits uncomfortably with counterexamples from other times and places. 14

The now-discredited categorization and labeling of African “tribes” by colonial administrators, missionaries, and anthropologists of the nineteenth and twentieth centuries provides many well-known examples, illustrating the way in which the “ethnic assumption” can distort interpretations of other people’s affiliations and self-understanding. 15 The Banande of Zaire, for instance, used to refer to themselves simply as bayira (“cultivators” or “workers”), and it was not until the creation of a border between the British Protectorate of Uganda and the Belgian Congo in 1885 that they came to be clearly delineated from another group of bayira now called Bakonzo. 16 Even more strikingly, the Tonga of Zambia, as they were named by outsiders, did not regard themselves as a unified group differentiated from their neighbors, with the consequence that they tended to disperse and reassimilate among other groups. 17 Where such groups do have self-declared ethnic identities, they were often first imposed from without, by more powerful regional actors. The subsequent local adoption of those labels, and of the very concepts of ethnicity and tribe in some African contexts, illustrates the effects that external identifications can have on internal affiliations and self-understandings. 18 Such external labeling is not of course a phenomenon limited to Africa or to Western colonialism: other examples include the ethnic categorization of the Miao and the Yao in Han China, and similar processes carried out by the state in the Soviet Union. 19

Such processes can be dangerous. When Belgian colonial authorities encountered the central African kingdom of Rwanda, they redeployed labels used locally at the time to identify two closely related groups occupying different positions in the social and political hierarchy to categorize the population instead into two distinct “races” of Hutus (identified as the indigenous farmers) and Tutsis (thought to be a more civilized immigrant population). 20 This was not easy to do, and in 1930 a Belgian census attempting to establish which classification should be recorded on the identity cards of their subjects resorted in some cases to counting cows: possession of ten or more made you a Tutsi. 21 Between April and July 1994, more than half a million Tutsis were killed by Hutus, sometimes using their identity cards to verify the “race” of their victims.

The ethnic assumption also raises methodological problems for historians. The fundamental difficulty with labels like “Phoenician” is that they offer answers to questions about historical explanation before they have even been asked. They assume an underlying commonality between the people they designate that cannot easily be demonstrated; they produce new identities where they did not to our knowledge exist; and they freeze in time particular identities that were in fact in a constant process of construction, from inside and out. As Paul Gilroy has argued, “ethnic absolutism” can homogenize what are in reality significant differences. 22 These labels also encourage historical explanation on a very large and abstract scale, focusing attention on the role of the putative generic identity at the expense of more concrete, conscious, and interesting communities and their stories, obscuring in this case the importance of the family, the city, and the region, not to mention the marking of other social identities such as gender, class, and status. In sum, they provide too easy a way out of actually reading the historical evidence.

As a result, recent scholarship tends to see ethnicity not as a timeless fact about a region or group, but as an ideology that emerges at certain times, in particular social and historical circumstances, and especially at moments of change or crisis: at the origins of a state, for instance, or after conquest, or in the context of migration, and not always even then. 23 In some cases, we can even trace this development over time: James C. Scott cites the example of the Cossacks on Russia’s frontiers, people used as cavalry by the tsars, Ottomans, and Poles, who “were, at the outset, nothing more and nothing less than runaway serfs from all over European Russia, who accumulated at the frontier. They became, depending on their locations, different Cossack ‘hosts’: the Don (for the Don River basin) Cossacks, the Azov (Sea) Cossacks, and so on.” 24

Ancient historians and archaeologists have been at the forefront of these new ethnicity studies, emphasizing the historicity, flexibility, and varying importance of ethnic identity in the ancient Mediterranean. 25 They have described, for instance, the emergence of new ethnic groups such as the Moabites and Israelites in the Near East in the aftermath of the collapse of the Bronze Age empires and the “crystallisation of commonalities” among Greeks in the Archaic period. 26 They have also traced subsequent changes in the ethnic content and formulation of these identifications: in relation to “Hellenicity,” for example, scholars have delineated a shift in the fifth century BCE from an “aggregative” conception of Greek identity founded largely on shared history and traditions to a somewhat more oppositional approach based on distinction from non-Greeks, especially Persians, and then another in the fourth century BCE, when Greek intellectuals themselves debated whether Greekness should be based on a shared past or on shared culture and values in the contemporary world. 27 By the Hellenistic period, at least in Egypt, the term “Hellene” (Greek) was in official documents simply an indication of a privileged tax status, and those so labeled could be Jews, Thracians—or, indeed, Egyptians. 28

Despite all this fascinating work, there is a danger that the considerable recent interest in the production, mechanisms, and even decline of ancient ethnicity has obscured its relative rarity. Striking examples of the construction of ethnic groups in the ancient world do not of course mean that such phenomena became the norm. 29 There are good reasons to suppose in principle that without modern levels of literacy, education, communication, mobility, and exchange, ancient communal identities would have tended to form on much smaller scales than those at stake in most modern discussions of ethnicity, and that without written histories and genealogies people might have placed less emphasis on the concepts of ancestry and blood-ties that at some level underlie most identifications of ethnic groups. 30 And in practice, the evidence suggests that collective identities throughout the ancient Mediterranean were indeed largely articulated at the level of city-states and that notions of common descent or historical association were rarely the relevant criterion for constructing “groupness” in these communities: in Greek cities, for instance, mutual identification tended to be based on political, legal, and, to a limited extent, cultural criteria, 31 while the Romans famously emphasized their mixed origins in their foundation legends and regularly manumitted their foreign slaves, whose descendants then became full Roman citizens. 32

This means that some of the best-known “peoples” of antiquity may not actually have been peoples at all. Recent studies have shown that such familiar groups as the Celts of ancient Britain and Ireland and the Minoans of ancient Crete were essentially invented in the modern period by the archaeologists who first studied or “discovered” them, 33 and even the collective identity of the Greeks can be called into question. As S. Rebecca Martin has recently pointed out, “there is no clear recipe for the archetypal Hellene,” and despite our evidence for elite intellectual discussion of the nature of Greekness, it is questionable how much “being Greek” meant to most Greeks: less, no doubt, than to modern scholars. 34 The Phoenicians, I will suggest in what follows, fall somewhere in the middle—unlike the Minoans or the Atlantic Celts, there is ancient evidence for a conception of them as a group, but unlike the Greeks, this evidence is entirely external—and they provide another good case study of the extent to which an assumption of a collective identity in the ancient Mediterranean can mislead. 35

pp. 227-230

In all the exciting work that has been done on “identity” in the past few decades, there has been too little attention paid to the concept of identity itself. We tend to ask how identities are made, vary, and change, not whether they exist at all. But Rogers Brubaker and Frederick Cooper have pinned down a central difficulty with recent approaches: “it is not clear why what is routinely characterized as multiple, fragmented, and fluid should be conceptualized as ‘identity’ at all.” 1 Even personal identity, a strong sense of one’s self as a distinct individual, can be seen as a relatively recent development, perhaps related to a peculiarly Western individualism. 2 Collective identities, furthermore, are fundamentally arbitrary: the artificial ways we choose to organize the world, ourselves, and each other. However strong the attachments they provoke, they are not universal or natural facts. Roger Rouse has pointed out that in medieval Europe, the idea that people fall into abstract social groupings by virtue of common possession of a certain attribute, and occupy autonomous and theoretically equal positions within them, would have seemed nonsensical: instead, people were assigned their different places in the interdependent relationships of a concrete hierarchy. 3

The truth is that although historians are constantly apprehending the dead and checking their pockets for identity, we do not know how people really thought of themselves in the past, or in how many different ways, or indeed how much. I have argued here that the case of the Phoenicians highlights the extent to which the traditional scholarly perception of a basic sense of collective identity at the level of a “people,” “culture,” or “nation” in the cosmopolitan, entangled world of the ancient Mediterranean has been distorted by the traditional scholarly focus on a small number of rather unusual, and unusually literate, societies.

My starting point was that we have no good evidence for the ancient people that we call Phoenician identifying themselves as a single people or acting as a stable collective. I do not conclude from this absence of evidence that the Phoenicians did not exist, nor that nobody ever called her- or himself a Phoenician under any circumstances: Phoenician-speakers undoubtedly had a larger repertoire of self-classifications than survives in our fragmentary evidence, and it would be surprising if, for instance, they never described themselves as Phoenicians to the Greeks who invented that term; indeed, I have drawn attention to several cases where something very close to that is going on. Instead, my argument is that we should not assume that our “Phoenicians” thought of themselves as a group simply by analogy with models of contemporary identity formation among their neighbors—especially since those neighbors do not themselves portray the Phoenicians as a self-conscious or strongly differentiated collective. We should accept the gaps in our knowledge and fill the space instead with the stories that we can tell.

The stories I have looked at in this book include the ways that the people of the northern Levant did in fact identify themselves—in terms of their cities, but even more of their families and occupations—as well as the formation of complex social, cultural, and economic networks based on particular cities, empires, and ideas. These could be relatively small and closed, like the circle of the tophet, or on the other hand, they could, like the network of Melqart, create shared religious and political connections throughout the Mediterranean—with other Levantine settlements, with other settlers, and with local populations. Identification with a variety of social and cultural traditions is one recurrent characteristic of the people and cities we call Phoenician, and this continued into the Hellenistic and Roman periods, when “being Phoenician” was deployed as a political and cultural tool, although it was still not claimed as an ethnic identity.

Another story could go further, to read a lack of collective identity, culture, and political organization among Phoenician-speakers as a positive choice, a form of resistance against larger regional powers. James C. Scott has recently argued in The Art of Not Being Governed (2009) that self-governing peoples living on the peripheries and borders of expansionary states tend to adopt strategies to avoid incorporation and to minimize taxation, conscription, and forced labor. Scott’s focus is on the highlands of Southeast Asia, an area now sometimes known as Zomia, and its relationship with the great plains states of the region such as China and Burma. He describes a series of tactics used by the hill people to avoid state power, including “their physical dispersion in rugged terrain, their mobility, their cropping practices, their kinship structure, their pliable ethnic identities . . . their flexible social structure, their religious heterodoxy, their egalitarianism and even the nonliterate, oral cultures.” The constant reconstruction of identity is a core theme in his work: “ethnic identities in the hills are politically crafted and designed to position a group vis-à-vis others in competition for power and resources.” 4 Political integration in Zomia, when it has happened at all, has usually consisted of small confederations: such alliances, he points out, are common but short-lived, and are often preserved in local place names such as “Twelve Tai Lords” (Sipsong Chutai) or “Nine Towns” (Ko Myo)—information that throws new light on the federal meetings recorded in fourth-century BCE Tripolis (“Three Cities”). 5

In fact, many aspects of Scott’s analysis feel familiar in the world of the ancient Mediterranean, on the periphery of the great agricultural empires of Mesopotamia and Iran, and despite all its differences from Zomia, another potential candidate for the label of “shatterzone.” The validity of Scott’s model for upland Southeast Asia itself—a matter of considerable debate since the book’s publication—is largely irrelevant for our purposes; 6 what is interesting here is how useful it might be for thinking about the mountainous region of the northern Levant, and the places of refuge in and around the Mediterranean.

In addition to outright rebellion, we could argue that the inhabitants of the Levant employed a variety of strategies to evade the heaviest excesses of imperial power. 7 One was to organize themselves in small city-states with flimsy political links and weak hierarchies, requiring larger powers to engage in multiple negotiations and arrangements, and providing the communities involved with multiple small and therefore obscure opportunities for the evasion of taxation and other responsibilities—“divide that ye be not ruled,” as Scott puts it. 8 A cosmopolitan approach to culture and language in those cities would complement such an approach, committing to no particular way of doing or being or even looking, keeping loyalties vague and options open. One of the more controversial aspects of Scott’s model could even explain why there is no evidence for Phoenician literature despite earlier Near Eastern traditions of myth and epic. He argues that the populations he studies are in some cases not so much nonliterate as postliterate: “Given the considerable advantages in plasticity of oral over written histories and genealogies, it is at least conceivable to see the loss of literacy and of written texts as a more or less deliberate adaptation to statelessness.” 9

Another available option was to take to the sea, a familiar but forbidding terrain where the experience and knowledge of Levantine sailors could make them and their activities invisible and unaccountable to their overlords further east. The sea also offered an escape route from more local sources of power, and the stories we hear of the informal origins of western settlements such as Carthage and Lepcis, whether or not they are true, suggest an appreciation of this point. A distaste even for self-government could also explain a phenomenon I have drawn attention to throughout the book: our “Phoenicians” not only fail to visibly identify as Phoenician, they often omit to identify at all.

It is striking in this light that the first surviving visible expression of an explicitly “Phoenician” identity was imposed by the Carthaginians on their subjects as they extended state power to a degree unprecedented among Phoenician-speakers, that it was then adopted by Tyre as a symbol of colonial success, and that it was subsequently exploited by Roman rulers in support of their imperial activities. This illustrates another uncomfortable aspect of identity formation: it is often a cultural bullying tactic, and one that tends to benefit those already in power more than those seeking self-empowerment. Modern European examples range from the linguistic and cultural education strategies that turned “peasants into Frenchmen” in the late nineteenth century, 10 to the eugenic Lebensborn program initiated by the Nazis in mid-twentieth-century central Europe to create more Aryan children through procreation between German SS officers and “racially pure” foreign women. 11 Such examples also underline the difficulty of distinguishing between internal and external conceptions of identity when apparently internal identities are encouraged from above, or even from outside, just as the developing modern identity as Phoenician involved the gradual solidification of the identity of the ancient Phoenicians.

It seems to me that attempts to establish a clear distinction between “emic” and “etic” identity are part of a wider tendency to treat identities as ends rather than means, and to focus more on how they are constructed than on why. Identity claims are always, however, a means to another end, and being “Phoenician” is in all the instances I have surveyed here a political rather than a personal statement. It is sometimes used to resist states and empires, from Roman Africa to Hugh O’Donnell’s Ireland, but more often to consolidate them, lending ancient prestige and authority to later regimes, a strategy we can see in Carthage’s Phoenician coinage, the emperor Elagabalus’s installation of a Phoenician sun god at Rome, British appeals to Phoenician maritime power, and Hannibal Qadhafi’s cruise ship.

In the end, it is modern nationalism that has created the Phoenicians, along with much else of our modern idea of the ancient Mediterranean. Phoenicianism has served nationalist purposes since the early modern period: the fully developed notion of Phoenician ethnicity may be a nineteenth-century invention, a product of ideologies that sought to establish ancient peoples or “nations” at the heart of new nation-states, but its roots, like those of nationalism itself, are deeper. As origin myth or cultural comparison, aggregative or oppositional, imperialist and anti-imperialist, Phoenicianism supported the expansion of the early modern nation of Britain, as well as the position of the nation of Ireland as separate and respected within that empire; it helped to consolidate the nation of Lebanon under French imperial mandate, premised on a regional Phoenician identity agreed on between local and French intellectuals, but it also helped to construct the nation of Tunisia in opposition to European colonialism.

Outpost of Humanity

There have been certain thoughts on my mind. I’ve been focused on who I want to be, in terms of what I do with my time and how I relate to others. To phrase it in the negative, I don’t want to waste time or promote frustration, for myself or for others.

I’ve come to the conclusion that we humans tend to consciously focus on that which matters the least. We are easily drawn in and distracted. Those in power understand this and use it to create political conflicts and charades to manipulate us. Sadly, the distance between Hollywood and the District of Columbia is nearly non-existent within the public mind. Americans worry about the division of church and state or business and state when what they should be worried most about is the division of entertainment and state, the nexus of spectacle and propaganda. I’m looking at you, mainstream media.

A notion I’ve had is that maybe politics, as with economics, is more of a result than a cause (until recent times, few would have ever seriously considered politics and economics as the primary cause of much of anything; even as late as the 19th century, public debate about such things was often thought of as unseemly). We focus on what is easy to see, which is to say the paradigm that defines our society and so dominates our minds. Politics and economics are ways of simplistically framing what in reality is complex. We don’t know how to deal with the complex reality, confusing and discomforting as it is, and so we mostly ignore it. Besides, politics and economics make for a more entertaining narrative that plays well on mass media.

It’s like the joke about the man looking for car keys under a streetlamp. When asked if he lost his car keys by the streetlamp, he explains he lost them elsewhere but the lighting is better there. Still, people will go on looking under that streetlamp, no matter what anybody else says. There is no point in arguing about it. Just wish them well on their fool’s errand. I guess we all have to keep ourselves preoccupied somehow.

Here is an even more basic point. It appears that rationality and facts have almost nothing to do with much of anything that has any significance, outside of the precise constraints of particular activities such as scientific research or philosophical analysis. I’m specifically thinking of the abovementioned frames of politics and economics. Rationality must operate within a frame, but it can’t precede the act of framing. That is as true for the political left as for the political right, as true for me as for the rest of humanity. Critical thinking is not what centrally motivates people, and not what, on those rare occasions when it happens, allows for genuine change. Our ability to think well based on valid info is important in society and useful as a tool, but it isn’t what drives human behavior.

By the time an issue gets framed as politics or economics, it is already beyond the point of much influence and improvement. Arguing about such things won’t change anything. Even activism by itself won’t change anything. They are results and not causes. Or at best, they are tools and not the hand that wields the tool nor the mind that determines its use. I’m no longer in the mood to bash my head against the brick wall of public debate. It’s not about feeling superior. Rather, it’s about focusing on what matters.

I barely know what motivates me, and I’m not likely to figure out what makes other people tick. It’s not a lack of curiosity on my part, nor a lack of effort in trying to understand. This isn’t to say I plan on ending my obsessive focus on human nature and society. But I realize that focusing on politics, economics, etc. doesn’t make me or anyone else happy, much less make the world a better place. It seems like the wrong way to look at things, distracting us from the possibilities of genuine insight and understanding, the point of leverage where the world might be moved. These dominant frames can’t give us the inspiration and vision necessary for profound change, the only game that interests me in these times when profound change is desperately needed.

There is another avenue of thought I’ve been following. To find what intrigues and interests you is one of the most important things in the world. Without it, even the best life can feel devoid of meaning or purpose. And with it, even the worst can be tolerable. It’s having something of value to focus upon, to look toward with hope and excitement, to give life direction.

I doubt politics or economics plays this role for anyone. What we care about is always beyond that superficial level. The inspiring pamphleteers of the American Revolution weren’t offering mere political change and economic ideas but an entirely new vision of humanity and society. Some of the American founders even admitted that their own official activities bored them. They’d rather have pursued other interests—to have read edifying books, done scientific research, invented something of value, contributed to their communities, spent more time with their families, or whatever. Something like politics (or economics) was a means, not an end. But too often it gets portrayed as an end, a purpose it is ill-suited to serve.

We spend too little time getting clear in our hearts and minds what it is we want. We use words and throw out ideals while rarely wrestling with what they mean. To shift our focus would require a soul-searching far beyond any election campaigning, political activism, career development, financial investment strategy, or whatever. That isn’t to argue for apathy and disinterest, much less cynicism and fatalism. Let me point to some real world examples. You can hear that kind of deeper engagement in the words of someone like Martin Luther King Jr. or, upon King’s death, the speech given by Robert F. Kennedy. Sometime, really listen to speeches like that and feel the resonance of emotion beyond words.

When politics matters the most is when it stops being about politics, when our shared humanity peeks through. In brief moments of stark human reality, as with the tank man at Tiananmen Square, our minds are brought up short and a space opens up for something new. Then the emptiness of ideological rhetoric and campaign slogans becomes painfully apparent. And we ache for something more.

Yet I realize that what I present here is not what you’ll see on the mainstream media, not what you’ll hear from any politician or pundit, not what your career guidance counselor or financial adviser is going to offer. I suspect most people would understand what I’m saying, at least on some level, but it’s not what we normally talk about in our society. It touches a raw nerve. In writing these words, I might not be telling most people what they want to hear. I’m offering no comforting rationalizations, no easy narrative, no plausible deniability. Instead, I’m suggesting that people think for themselves and do so as honestly as possible.

I’ve only come to this view myself after a lifetime of struggle. It comes easily to no one, to question and wonder this deeply. But once one has come to such a view, what does one do with it? All I know to do is to give voice to it, as best I can, however limited my audience. I have no desire to try to force anyone to understand. This is my view and my voice. Others will understand it, maybe even embrace it and find common bond in it, or they won’t. My only purpose is to open up a quiet space amidst the rattling noise and flashing lights. All who can meet me as equals in this understanding are welcome. As for those who see it differently, they are free to go elsewhere on the free market of opinions.

I know that I’m a freak, according to mainstream society. I know there are those who don’t understand my views and don’t agree. That is fine. I’ll leave them alone, if they leave me alone. But here in my space, I will let my freak flag fly. It might even turn out that there are more freaks than some have assumed, which is to say maybe people like me are more normal than those in power would like to let on. One day the silenced majority might find its collective voice. We all might be surprised when we finally hear what they have to say.

Until then, I’ll go on doing my own thing in my own way, here at this outpost of humanity.

What kind of trust? And to what end?

A common argument against emulating the success of certain societies is that it wouldn’t be possible in the United States. What makes them work well, it is claimed, is their lack of diversity. Sometimes it will be added that they are small countries, which is to imply they are ‘tribalistic.’ Compared to actual tribes, these countries are rather diverse and large. But I get the point being made, and I’m not one to dismiss it out of hand.

Still, not all the data agrees with this conclusion. One example is seen in comparisons of education systems. In the successful social democracies, even the schools with higher rates of diversity and immigrant students tend to have higher test scores than schools in a country like the US. And there is one book that seriously challenges the tribal argument: Segregation and Mistrust by Eric M. Uslaner. Looking at the data, he determined that “it wasn’t diversity but segregation that led to less trust” (Kindle Locations 72-73).

Segregation tends to go along with various forms of inequality: social position, economic class and mobility, political power and representation, access to resources, quality of education, systemic and institutional racism, environmental racism, ghettoization, etc. And around inequality, there is unsurprisingly a constellation of other social and health problems that negatively impact the segregated most of all but also the entire society in general—such as an increase of: food deserts, obesity, stunted neurocognitive development (including brain damage from neurotoxins), mental illnesses, violent crime, teen pregnancies, STDs, high school drop outs, child and spousal abuse, bullying, and the list goes on.

Obviously, none of that creates the conditions for a culture of trust. Segregation and inequality undermine everything that allows for a healthy society. Therefore, lessen inequality and, in proportion, a healthy society will follow. That is even true with high levels of diversity.

Related to this, I recall a study that showed that children raised in diverse communities tended to grow up to be socially liberal adults, which included greater tolerance and acceptance, fundamental traits of social trust.

On the opposite end, a small tribe has high trust within its own community but little if any trust of anyone outside it. Is such a small community really more trusting in the larger sense? I don’t know if that has ever been researched.

Such people in tight-knit communities may be willing to do anything for those within their tribe, but a stranger might be killed for no reason other than being an outsider. Take the Puritans as an example. Theirs was a high-trust society. From early on, they had collectivist tendencies, being community-oriented with a strong shared vision. Yet anyone who didn’t quite fit in would be banished, tortured, or killed.

Maybe there are many kinds of trust, as there are many kinds of social capital, social cohesion, and social order. There are probably few if any societies that excel in all forms of trust. Some forms of trust might even be diametrically opposed to other forms of trust. Besides, trust in some cases, such as in an authoritarian regime, isn’t necessarily a good thing. Low-diversity societies such as Russia, Germany, Japan, and China have their own kinds of potential problems, ones that can endanger the lives of people far outside their own societies.

Trust is complex. What kind of trust? And to what end?

* * *

Does Diversity Erode Social Cohesion?
Social Capital and Race in British Neighbourhoods
by Natalia Letki

The debate on causes and consequences of social capital has been recently complemented with an investigation into factors that erode it. Various scholars concluded that diversity, and racial heterogeneity in particular, is damaging for the sense of community, interpersonal trust and formal and informal interactions. However, most of this research does not adequately account for the negative effect of a community’s low socio-economic status on neighbourhood interactions and attitudes. This paper is the first to date empirical examination of the impact of racial context on various dimensions of social capital in British neighbourhoods. Findings show that the low neighbourhood status is the key element undermining all dimensions of social capital, while eroding effect of racial diversity is limited.

Racism learned
James H. Burnett III

children exposed to racism tend to accept and embrace it as young as age 3, and in just a matter of days.

Can Racism Be Stopped in the Third Grade?
by Lisa Miller

At no developmental age are children less racist than in elementary school. But that’s not innocence, exactly, since preschoolers are obsessed with race. At ages 3 and 4, children are mapping their world, putting things and people into categories: size, shape, color. Up, down; day, night; in, out; over, under. They see race as a useful sorting measure and ask their parents to give them words for the differences they see, generally rejecting the adult terms “black” and “white,” and preferring finer (and more accurate) distinctions: “tan,” “brown,” “chocolate,” “pinkish.” They make no independent value judgments about racial difference, obviously, but by 4 they are already absorbing the lessons of a racist culture. All of them know reflexively which race it is preferable to be. Even today, almost three-quarters of a century since the Doll Test, made famous in Brown v. Board of Education, experiments by CNN and Margaret Beale Spencer have found that black and white children still show a bias toward people with lighter skin.

But by the time they have entered elementary school, they are in a golden age. At 7 or 8, children become very concerned with fairness and responsive to lessons about prejudice. This is why the third, fourth, and fifth grades are good moments to teach about slavery and the Civil War, suffrage and the civil-rights movement. Kids at that age tend to be eager to wrestle with questions of inequality, and while they are just beginning to form a sense of racial identity (this happens around 7 for most children, though for some white kids it takes until middle school), it hasn’t yet acquired much tribal force. It’s the closest humans come to a racially uncomplicated self. The psychologist Stephen Quintana studies Mexican-American kids. At 6 to 9 years old, they describe their own racial realities in literal terms and without value judgments. When he asks what makes them Mexican-American, they talk about grandparents, language, food, skin color. When he asks them why they imagine a person might dislike Mexican-Americans, they are baffled. Some can’t think of a single answer. This is one reason cross-racial friendships can flourish in elementary school — childhood friendships that researchers cite as the single best defense against racist attitudes in adulthood. The paradise is short-lived, though. Early in elementary school, kids prefer to connect in twos and threes over shared interests — music, sports, Minecraft. Beginning in middle school, they define themselves through membership in groups, or cliques, learning and performing the fraught social codes that govern adult interactions around race. As early as 10, psychologists at Tufts have shown, white children are so uncomfortable discussing race that, when playing a game to identify people depicted in photos, they preferred to undermine their own performance by staying silent rather than speak racial terms aloud.

Being Politically Correct Can Actually Boost Creativity
by Marissa Fessenden

The researchers assessed the ideas each group generated after 10 minutes of brainstorming. In same-sex groups, they found, political correctness priming produced less creative ideas. In the mixed groups however, creativity got a boost. “They generated more ideas, and those ideas were more novel,” Duguid told NPR. “Whether it was two men and one woman or two women and one man, the results were consistent.” The creativity of each group’s ideas was assessed by independent, blind raters.

Is Diversity the Source of America’s Genius?
by Gregory Rodriguez

Despite the fact that diversity is so central to the American condition, scholars who’ve studied the cognitive effects of diversity have long made the mistake of treating homogeneity as the norm. Only this year did a group of researchers from MIT, Columbia University, and Northwestern University publish a paper questioning the conventional wisdom that homogeneity represents some kind of objective baseline for comparison or “neutral indicator of the ideal response in a group setting.”

To bolster their argument, the researchers cite a previous study that found that members of homogenous groups tasked with solving a mystery tend to be more confident in their problem-solving skills than their performance actually merits. By contrast, the confidence level of individuals in diverse groups corresponds better with how well their group actually performs. The authors concluded that homogenous groups “were actually further than diverse groups from an objective index of accuracy.”

The researchers also refer to a 2006 experiment showing that homogenous juries made “more factually inaccurate statements and considered a narrower range of information” than racially diverse juries. What these and other findings suggest, wrote the researchers, is that people in diverse groups “are more likely to step outside their own perspective and less likely to instinctively impute their own knowledge onto others” than people in homogenous groups.

Multicultural Experience Enhances Creativity
by Leung, Maddux, Galinsky, & Chiu

Many practices aimed at cultivating multicultural competence in educational and organizational settings (e.g., exchange programs, diversity education in college, diversity management at work) assume that multicultural experience fosters creativity. In line with this assumption, the research reported in this article is the first to empirically demonstrate that exposure to multiple cultures in and of itself can enhance creativity. Overall, the authors found that extensiveness of multicultural experiences was positively related to both creative performance (insight learning, remote association, and idea generation) and creativity-supporting cognitive processes (retrieval of unconventional knowledge, recruitment of ideas from unfamiliar cultures for creative idea expansion). Furthermore, their studies showed that the serendipitous creative benefits resulting from multicultural experiences may depend on the extent to which individuals open themselves to foreign cultures, and that creativity is facilitated in contexts that deemphasize the need for firm answers or existential concerns. The authors discuss the implications of their findings for promoting creativity in increasingly global learning and work environments.

The Evidence That White Children Benefit From Integrated Schools
by Anya Kamenetz

For example, there’s evidence that corporations with better gender and racial representation make more money and are more innovative. And many higher education groups have collected large amounts of evidence on the educational benefits of diversity in support of affirmative action policies.

In one set of studies, Phillips gave small groups of three people a murder mystery to solve. Some of the groups were all white and others had a nonwhite member. The diverse groups were significantly more likely to find the right answer.

Sundown Towns: A Hidden Dimension Of American Racism
by James W. Loewen
pp. 360-2

In addition to discouraging new people, hypersegregation may also discourage new ideas. Urban theorist Jane Jacobs has long held that the mix of peoples and cultures found in successful cities prompts creativity. An interesting study by sociologist William Whyte shows that sundown suburbs may discourage out-of-the-box thinking. By the 1970s, some executives had grown weary of the long commutes with which they had saddled themselves so they could raise their families in elite sundown suburbs. Rather than move their families back to the city, they moved their corporate headquarters out to the suburbs. Whyte studied 38 companies that left New York City in the 1970s and ’80s, allegedly “to better [the] quality-of-life needs of their employees.” Actually, they moved close to the homes of their CEOs, cutting their average commute to eight miles; 31 moved to the Greenwich-Stamford, Connecticut, area. These are not sundown towns, but adjacent Darien was, and Greenwich and Stamford have extensive formerly sundown neighborhoods that are also highly segregated on the basis of social class. Whyte then compared those 38 companies to 36 randomly chosen comparable companies that stayed in New York City. Judged by stock price, the standard way to measure how well a company is doing, the suburbanized companies showed less than half the stock appreciation of the companies that chose to remain in the city.7 […]

Research suggests that gay men are also important members of what Richard Florida calls “the creative class”—those who come up with or welcome new ideas and help drive an area economically.11 Metropolitan areas with the most sundown suburbs also show the lowest tolerance for homosexuality and have the lowest concentrations of “out” gays and lesbians, according to Gary Gates of the Urban Institute. He lists Buffalo, Cleveland, Detroit, Milwaukee, and Pittsburgh as examples. Recently, some cities—including Detroit—have recognized the important role that gay residents can play in helping to revive problematic inner-city neighborhoods, and now welcome them.12 The distancing from African Americans embodied by all-white suburbs intensifies another urban problem: sprawl, the tendency for cities to become more spread out and less dense. Sprawl can decrease creativity and quality of life throughout the metropolitan area by making it harder for people to get together for all the human activities—from think tanks to complex commercial transactions to opera—that cities make possible in the first place. Asked in 2000, “What is the most important problem facing the community where you live?” 18% of Americans replied sprawl and traffic, tied for first with crime and violence. Moreover, unlike crime, sprawl is increasing. Some hypersegregated metropolitan areas like Detroit and Cleveland are growing larger geographically while actually losing population.13

How Diversity Makes Us Smarter
by Katherine W. Phillips

Research on large, innovative organizations has shown repeatedly that this is the case. For example, business professors Cristian Deszö of the University of Maryland and David Ross of Columbia University studied the effect of gender diversity on the top firms in Standard & Poor’s Composite 1500 list, a group designed to reflect the overall U.S. equity market. First, they examined the size and gender composition of firms’ top management teams from 1992 through 2006. Then they looked at the financial performance of the firms. In their words, they found that, on average, “female representation in top management leads to an increase of $42 million in firm value.” They also measured the firms’ “innovation intensity” through the ratio of research and development expenses to assets. They found that companies that prioritized innovation saw greater financial gains when women were part of the top leadership ranks.

Racial diversity can deliver the same kinds of benefits. In a study conducted in 2003, Orlando Richard, a professor of management at the University of Texas at Dallas, and his colleagues surveyed executives at 177 national banks in the U.S., then put together a database comparing financial performance, racial diversity and the emphasis the bank presidents put on innovation. For innovation-focused banks, increases in racial diversity were clearly related to enhanced financial performance.

Evidence for the benefits of diversity can be found well beyond the U.S. In August 2012 a team of researchers at the Credit Suisse Research Institute issued a report in which they examined 2,360 companies globally from 2005 to 2011, looking for a relationship between gender diversity on corporate management boards and financial performance. Sure enough, the researchers found that companies with one or more women on the board delivered higher average returns on equity, lower gearing (that is, net debt to equity) and better average growth. […]

In 2006 Margaret Neale of Stanford University, Gregory Northcraft of the University of Illinois at Urbana-Champaign and I set out to examine the impact of racial diversity on small decision-making groups in an experiment where sharing information was a requirement for success. Our subjects were undergraduate students taking business courses at the University of Illinois. We put together three-person groups—some consisting of all white members, others with two whites and one nonwhite member—and had them perform a murder mystery exercise. We made sure that all group members shared a common set of information, but we also gave each member important clues that only he or she knew. To find out who committed the murder, the group members would have to share all the information they collectively possessed during discussion. The groups with racial diversity significantly outperformed the groups with no racial diversity. Being with similar others leads us to think we all hold the same information and share the same perspective. This perspective, which stopped the all-white groups from effectively processing the information, is what hinders creativity and innovation.

Other researchers have found similar results. In 2004 Anthony Lising Antonio, a professor at the Stanford Graduate School of Education, collaborated with five colleagues from the University of California, Los Angeles, and other institutions to examine the influence of racial and opinion composition in small group discussions. More than 350 students from three universities participated in the study. Group members were asked to discuss a prevailing social issue (either child labor practices or the death penalty) for 15 minutes. The researchers wrote dissenting opinions and had both black and white members deliver them to their groups. When a black person presented a dissenting perspective to a group of whites, the perspective was perceived as more novel and led to broader thinking and consideration of alternatives than when a white person introduced that same dissenting perspective. The lesson: when we hear dissent from someone who is different from us, it provokes more thought than when it comes from someone who looks like us.

This effect is not limited to race. For example, last year professors of management Denise Lewin Loyd of the University of Illinois, Cynthia Wang of Oklahoma State University, Robert B. Lount, Jr., of Ohio State University and I asked 186 people whether they identified as a Democrat or a Republican, then had them read a murder mystery and decide who they thought committed the crime. Next, we asked the subjects to prepare for a meeting with another group member by writing an essay communicating their perspective. More important, in all cases, we told the participants that their partner disagreed with their opinion but that they would need to come to an agreement with the other person. Everyone was told to prepare to convince their meeting partner to come around to their side; half of the subjects, however, were told to prepare to make their case to a member of the opposing political party, and half were told to make their case to a member of their own party.

The result: Democrats who were told that a fellow Democrat disagreed with them prepared less well for the discussion than Democrats who were told that a Republican disagreed with them. Republicans showed the same pattern. When disagreement comes from a socially different person, we are prompted to work harder. Diversity jolts us into cognitive action in ways that homogeneity simply does not.

For this reason, diversity appears to lead to higher-quality scientific research. This year Richard Freeman, an economics professor at Harvard University and director of the Science and Engineering Workforce Project at the National Bureau of Economic Research, along with Wei Huang, a Harvard economics Ph.D. candidate, examined the ethnic identity of the authors of 1.5 million scientific papers written between 1985 and 2008 using Thomson Reuters’s Web of Science, a comprehensive database of published research. They found that papers written by diverse groups receive more citations and have higher impact factors than papers written by people from the same ethnic group. Moreover, they found that stronger papers were associated with a greater number of author addresses; geographical diversity, and a larger number of references, is a reflection of more intellectual diversity. […]

In a 2006 study of jury decision making, social psychologist Samuel Sommers of Tufts University found that racially diverse groups exchanged a wider range of information during deliberation about a sexual assault case than all-white groups did. In collaboration with judges and jury administrators in a Michigan courtroom, Sommers conducted mock jury trials with a group of real selected jurors. Although the participants knew the mock jury was a court-sponsored experiment, they did not know that the true purpose of the research was to study the impact of racial diversity on jury decision making.

Sommers composed the six-person juries with either all white jurors or four white and two black jurors. As you might expect, the diverse juries were better at considering case facts, made fewer errors recalling relevant information and displayed a greater openness to discussing the role of race in the case. These improvements did not necessarily happen because the black jurors brought new information to the group—they happened because white jurors changed their behavior in the presence of the black jurors. In the presence of diversity, they were more diligent and open-minded.

Social Disorder, Mental Disorder

“It is no measure of health to be well adjusted to a profoundly sick society.”
~ Jiddu Krishnamurti

“The opposite of addiction is not sobriety. The opposite of addiction is connection.”
~ Johann Hari

On Staying Sane in a Suicidal Culture
by Dahr Jamail

Our situation so often feels hopeless. So much has spun out of control, and pathology surrounds us. At least one in five Americans are taking psychiatric medications, and the number of children taking adult psychiatric drugs is soaring.

From the perspective of Macy’s teachings, it seems hard to argue that this isn’t, at least in part, active denial of what is happening to the world and how challenging it is for both adults and children to deal with it emotionally, spiritually and psychologically.

These disturbing trends, which are increasing, are something she is very mindful of. As she wrote in World as Lover, World as Self, “The loss of certainty that there will be a future is, I believe, the pivotal psychological reality of our time.”

What does depression feel like? Trust me – you really don’t want to know
by Tim Lott

Admittedly, severely depressed people can connect only tenuously with reality, but repeated studies have shown that mild to moderate depressives have a more realistic take on life than most “normal” people, a phenomenon known as “depressive realism”. As Neel Burton, author of The Meaning of Madness, put it, this is “the healthy suspicion that modern life has no meaning and that modern society is absurd and alienating”. In a goal-driven, work-oriented culture, this is deeply threatening.

This viewpoint can have a paralysing grip on depressives, sometimes to a psychotic extent – but perhaps it haunts everyone. And therefore the bulk of the unafflicted population may never really understand depression. Not only because they (understandably) lack the imagination, and (unforgivably) fail to trust in the experience of the sufferer – but because, when push comes to shove, they don’t want to understand. It’s just too … well, depressing.

The Mental Disease of Late-Stage Capitalism
by Joe Brewer

A great irony of this deeply corrupt system of wealth hoarding is that the “weapon of choice” is how we feel about ourselves as we interact with our friends. The elites don’t have to silence us. We do that ourselves by refusing to talk about what is happening to us. Fake it until you make it. That’s the advice we are given by the already successful who have pigeon-holed themselves into the tiny number of real opportunities society had to offer. Hold yourself accountable for the crushing political system that was designed to divide us against ourselves.

This great lie that we whisper to ourselves is how they control us: our fear that other impoverished people (which is most of us now) will look down on us for being impoverished too. This is how we give them the power to keep humiliating us.

I say no more of this emotional racket. If I am going to be responsible for my fate in life, let it be because I chose to stand up and fight — that I helped dismantle the global architecture of wealth extraction that created this systemic corruption of our economic and political systems.

Now more than ever, we need spiritual healing. As this capitalist system destroys itself, we can step aside and find healing by living honestly and without fear. They don’t get to tell us how to live. We can share our pain with family and friends. We can post it on social media. Shout it from the rooftops if we feel like it. The pain we feel is capitalism dying. It hurts us because we are still in it.

Neoliberalism – the ideology at the root of all our problems
by George Monbiot

So pervasive has neoliberalism become that we seldom even recognise it as an ideology. We appear to accept the proposition that this utopian, millenarian faith describes a neutral force; a kind of biological law, like Darwin’s theory of evolution. But the philosophy arose as a conscious attempt to reshape human life and shift the locus of power.

Neoliberalism sees competition as the defining characteristic of human relations. It redefines citizens as consumers, whose democratic choices are best exercised by buying and selling, a process that rewards merit and punishes inefficiency. It maintains that “the market” delivers benefits that could never be achieved by planning.

Attempts to limit competition are treated as inimical to liberty. Tax and regulation should be minimised, public services should be privatised. The organisation of labour and collective bargaining by trade unions are portrayed as market distortions that impede the formation of a natural hierarchy of winners and losers. Inequality is recast as virtuous: a reward for utility and a generator of wealth, which trickles down to enrich everyone. Efforts to create a more equal society are both counterproductive and morally corrosive. The market ensures that everyone gets what they deserve.

We internalise and reproduce its creeds. The rich persuade themselves that they acquired their wealth through merit, ignoring the advantages – such as education, inheritance and class – that may have helped to secure it. The poor begin to blame themselves for their failures, even when they can do little to change their circumstances.

Never mind structural unemployment: if you don’t have a job it’s because you are unenterprising. Never mind the impossible costs of housing: if your credit card is maxed out, you’re feckless and improvident. Never mind that your children no longer have a school playing field: if they get fat, it’s your fault. In a world governed by competition, those who fall behind become defined and self-defined as losers.

Among the results, as Paul Verhaeghe documents in his book What About Me?, are epidemics of self-harm, eating disorders, depression, loneliness, performance anxiety and social phobia. Perhaps it’s unsurprising that Britain, in which neoliberal ideology has been most rigorously applied, is the loneliness capital of Europe. We are all neoliberals now.

Neoliberalism has brought out the worst in us
by Paul Verhaeghe

We tend to perceive our identities as stable and largely separate from outside forces. But over decades of research and therapeutic practice, I have become convinced that economic change is having a profound effect not only on our values but also on our personalities. Thirty years of neoliberalism, free-market forces and privatisation have taken their toll, as relentless pressure to achieve has become normative. If you’re reading this sceptically, I put this simple statement to you: meritocratic neoliberalism favours certain personality traits and penalises others.

There are certain ideal characteristics needed to make a career today. The first is articulateness, the aim being to win over as many people as possible. Contact can be superficial, but since this applies to most human interaction nowadays, this won’t really be noticed.

It’s important to be able to talk up your own capacities as much as you can – you know a lot of people, you’ve got plenty of experience under your belt and you recently completed a major project. Later, people will find out that this was mostly hot air, but the fact that they were initially fooled is down to another personality trait: you can lie convincingly and feel little guilt. That’s why you never take responsibility for your own behaviour.

On top of all this, you are flexible and impulsive, always on the lookout for new stimuli and challenges. In practice, this leads to risky behaviour, but never mind, it won’t be you who has to pick up the pieces. The source of inspiration for this list? The psychopathy checklist by Robert Hare, the best-known specialist on psychopathy today.

What About Me?: The Struggle for Identity in a Market-Based Society
by Paul Verhaeghe
Kindle Locations 2357-2428

Hypotheses such as these, however plausible, are not scientific. If we want to demonstrate the link between a neo-liberal society and, say, mental disorders, we need two things. First, we need a yardstick that indicates the extent to which a society is neo-liberal. Second, we need to develop criteria to measure the increase or decrease of psychosocial wellbeing in society. Combine these two, and you would indeed be able to see whether such a connection existed. And by that I don’t mean a causal connection, but a striking pattern; a rise in one being reflected in the other, or vice versa.
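[A minimal sketch of the approach Verhaeghe describes here, for the statistically inclined: take one yardstick of how neo-liberal a society is, take one measure of psychosocial wellbeing, and check for a pattern rather than a causal connection. The Python snippet below does this with a rank correlation via SciPy’s spearmanr; the country figures are invented placeholders, not data from any of the studies discussed.]

from scipy.stats import spearmanr

# Hypothetical illustration only -- invented numbers, not real data.
# Yardstick of neo-liberalism: income inequality (Gini coefficient).
gini = [0.25, 0.28, 0.31, 0.34, 0.38, 0.41]

# Measure of declining psychosocial wellbeing: percentage of the
# population reporting a mental disorder (also invented).
prevalence = [8.0, 9.5, 11.0, 14.5, 17.0, 21.5]

# Spearman's rho captures "a rise in one being reflected in the other"
# without asserting any causal mechanism.
rho, p_value = spearmanr(gini, prevalence)
print(f"Spearman rho = {rho:+.2f} (p = {p_value:.3f})")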

This was exactly the approach used by Richard Wilkinson, a British social epidemiologist, in two pioneering studies (the second carried out with Kate Pickett). The gauge they used was eminently quantifiable: the extent of income inequality within individual countries. This is indeed a good yardstick, as neo-liberal policy is known to cause a spectacular rise in such inequality. Their findings were unequivocal: an increase of this kind has far-reaching consequences for nearly all health criteria. Its impact on mental health (and consequently also mental disorders) is by no means an isolated phenomenon. This finding is just as significant as the discovery that mental disorders are increasing.

As social epidemiologists, Wilkinson and Pickett studied the connection between society and health in the broad sense of the word. Stress proves to be a key factor here. Research has revealed its impact, both on our immune systems and our cardiovascular systems. Tracing the causes of stress is difficult, though, especially given that we live in the prosperous and peaceful West. If we take a somewhat broader view, most academics agree on the five factors that determine our health: early childhood; the fears and cares we experience; the quality of our social relationships; the extent to which we have control over our lives; and, finally, our social status. The worse you score in these areas, the worse your health and the shorter your life expectancy are likely to be.

In his first book, The Impact of Inequality: how to make sick societies healthier, Wilkinson scrutinises the various factors involved, rapidly coming to what would be the central theme of his second book — that is, income inequality. A very striking conclusion is that in a country, or even a city, with high income inequality, the quality of social relationships is noticeably diminished: there is more aggression, less trust, more fear, and less participation in the life of the community. As a psychoanalyst, I was particularly interested in his quest for the factors that play a role at individual level. Low social status proves to have a determining effect on health. Lack of control over one’s work is a prominent stress factor. A low sense of control is associated with poor relationships with colleagues and greater anger and hostility — a phenomenon that Richard Sennett had already described (the infantilisation of adult workers). Wilkinson discovered that this all has a clear impact on health, and even on life expectancy. Which in turn ties in with a classic finding of clinical psychology: powerlessness and helplessness are among the most toxic emotions.

Too much inequality is bad for your health

A number of conclusions are forced upon us. In a prosperous part of the world like Western Europe, it isn’t the quality of health care (the number of doctors and hospitals) that determines the health of the population, but the nature of social and economic life. The better social relationships are, the better the level of health. Excessive inequality is more injurious to health than any other factor, though this is not simply a question of differences between social classes. If anything, it seems to be more of a problem within groups that are presumed to be equal (for example, civil servants and academics). This finding conflicts with the general assumption that income inequality only hurts the underclass — the losers — while those higher up the social ladder invariably benefit. That’s not the case: its negative effects are statistically visible in all sectors of the population, hence the subtitle of Wilkinson’s second work: why more equal societies almost always do better.

In that book, Wilkinson and Pickett adopt a fairly simple approach. Using official statistics, they analyse the connection between income inequality and a host of other criteria. The conclusions are astounding, almost leaping off the page in table after table: the greater the level of inequality in a country or even region, the more mental disorders, teenage pregnancies, child mortality, domestic and street violence, crime, drug abuse, and medication. And the greater the inequality is, the worse physical health and educational performance are, the more social mobility declines, along with feelings of security, and the unhappier people are.

Both books, especially the latter, provoked quite a response in the Anglo-Saxon world. Many saw in them proof of what they already suspected. Many others were more negative, questioning everything from the collation of data to the statistical methods used to reach conclusions. Both authors refuted the bulk of the criticism — which, given the quality of their work, was not a very difficult task. Much of it targeted what was not in the books: the authors were not urging a return to some kind of ‘all animals are equal’ Eastern-bloc state. What critics tended to forget was that their analysis was of relative differences in income, with negative effects becoming most manifest in the case of extreme inequality. Moreover, it is not income inequality itself that produces these effects, but the stress factors associated with it.

Roughly the same inferences can be drawn from Sennett’s study, though it is more theoretical and less underpinned with figures. His conclusion is fairly simple, and can be summed up in the title of what I regard as his best book: Respect in a World of Inequality. Too much inequality leads to a loss of respect, including self-respect — and, in psychosocial terms, this is about the worst thing that can happen to anyone.

This emerges very powerfully from a single study of the social determinants of health, which is still in progress. Nineteen eighty-six saw the start of the second ‘Whitehall Study’ that systematically monitored over 10,000 British civil servants, to establish whether there was a link between their health and their work situations. At first sight, this would seem to be a relatively homogenous group, and one that definitely did not fall in the lowest social class. The study’s most striking finding is that the lower the rank and status of someone within that group, the lower their life expectancy, even when taking account of such factors as smoking, diet, and physical exercise. The most obvious explanation is that the lowest-ranked people experienced the most stress. Medical studies confirm this: individuals in this category have higher cortisol levels (increased stress) and more coagulation-factor deficiencies (and thus are at greater risk of heart attacks).

My initial question was, ‘Is there a demonstrable connection between today’s society and the huge rise in mental disorders?’ As all these studies show, the answer is yes. Even more important is the finding that this link goes beyond mental health. The same studies show highly negative effects on other health parameters. As so often is the case, a parallel can be found in fiction — in this instance, in Alan Lightman’s novel The Diagnosis. During an interview, the author posed the following rhetorical question: ‘Who, experiencing for years the daily toll of intense corporate pressure, could truly escape severe anxiety?’* (I think it may justifiably be called rhetorical, when you think how many have had to find out its answer for themselves.)

A study by a research group at Heidelberg University very recently came to similar conclusions, finding that people’s brains respond differently to stress according to whether they have had an urban or rural upbringing. 3 What’s more, people in the former category prove more susceptible to phobias and even schizophrenia. So our brains are differently shaped by the environment in which we grow up, making us potentially more susceptible to mental disorders. Another interesting finding emerged from the way the researchers elicited stress. While the subjects of the experiment were wrestling with the complex calculations they had been asked to solve, some of them were told (falsely) that their scores were lagging behind those of the others, and asked to hurry up because the experiments were expensive. All the neo-liberal factors were in place: emphasis on productivity, evaluation, competition, and cost reduction.

Capitalist Realism: Is there no alternative?
by Mark Fisher
pp. 19-22

Mental health, in fact, is a paradigm case of how capitalist realism operates. Capitalist realism insists on treating mental health as if it were a natural fact, like weather (but, then again, weather is no longer a natural fact so much as a political-economic effect). In the 1960s and 1970s, radical theory and politics (Laing, Foucault, Deleuze and Guattari, etc.) coalesced around extreme mental conditions such as schizophrenia, arguing, for instance, that madness was not a natural, but a political, category. But what is needed now is a politicization of much more common disorders. Indeed, it is their very commonness which is the issue: in Britain, depression is now the condition that is most treated by the NHS. In his book The Selfish Capitalist, Oliver James has convincingly posited a correlation between rising rates of mental distress and the neoliberal mode of capitalism practiced in countries like Britain, the USA and Australia. In line with James’s claims, I want to argue that it is necessary to reframe the growing problem of stress (and distress) in capitalist societies. Instead of treating it as incumbent on individuals to resolve their own psychological distress, instead, that is, of accepting the vast privatization of stress that has taken place over the last thirty years, we need to ask: how has it become acceptable that so many people, and especially so many young people, are ill? The ‘mental health plague’ in capitalist societies would suggest that, instead of being the only social system that works, capitalism is inherently dysfunctional, and that the cost of it appearing to work is very high. […]

By contrast with their forebears in the 1960s and 1970s, British students today appear to be politically disengaged. While French students can still be found on the streets protesting against neoliberalism, British students, whose situation is incomparably worse, seem resigned to their fate. But this, I want to argue, is a matter not of apathy, nor of cynicism, but of reflexive impotence. They know things are bad, but more than that, they know they can’t do anything about it. But that ‘knowledge’, that reflexivity, is not a passive observation of an already existing state of affairs. It is a self-fulfilling prophecy.

Reflexive impotence amounts to an unstated worldview amongst the British young, and it has its correlate in widespread pathologies. Many of the teenagers I worked with had mental health problems or learning difficulties. Depression is endemic. It is the condition most dealt with by the National Health Service, and is afflicting people at increasingly younger ages. The number of students who have some variant of dyslexia is astonishing. It is not an exaggeration to say that being a teenager in late capitalist Britain is now close to being reclassified as a sickness. This pathologization already forecloses any possibility of politicization. By privatizing these problems – treating them as if they were caused only by chemical imbalances in the individual’s neurology and/or by their family background – any question of social systemic causation is ruled out.

Many of the teenage students I encountered seemed to be in a state of what I would call depressive hedonia. Depression is usually characterized as a state of anhedonia, but the condition I’m referring to is constituted not by an inability to get pleasure so much as by an inability to do anything else except pursue pleasure. There is a sense that ‘something is missing’ – but no appreciation that this mysterious, missing enjoyment can only be accessed beyond the pleasure principle. In large part this is a consequence of students’ ambiguous structural position, stranded between their old role as subjects of disciplinary institutions and their new status as consumers of services. In his crucial essay ‘Postscript on Societies of Control’, Deleuze distinguishes between the disciplinary societies described by Foucault, which were organized around the enclosed spaces of the factory, the school and the prison, and the new control societies, in which all institutions are embedded in a dispersed corporation.

pp. 32-38

The ethos espoused by McCauley is the one which Richard Sennett examines in The Corrosion of Character: The Personal Consequences of Work in the New Capitalism, a landmark study of the affective changes that the post-Fordist reorganization of work has brought about. The slogan which sums up the new conditions is ‘no long term’. Where formerly workers could acquire a single set of skills and expect to progress upwards through a rigid organizational hierarchy, now they are required to periodically re-skill as they move from institution to institution, from role to role. As the organization of work is decentralized, with lateral networks replacing pyramidal hierarchies, a premium is put on ‘flexibility’. Echoing McCauley’s mockery of Hanna in Heat (‘How do you expect to keep a marriage?’), Sennett emphasizes the intolerable stresses that these conditions of permanent instability put on family life. The values that family life depends upon – obligation, trustworthiness, commitment – are precisely those which are held to be obsolete in the new capitalism. Yet, with the public sphere under attack and the safety nets that a ‘Nanny State’ used to provide being dismantled, the family becomes an increasingly important place of respite from the pressures of a world in which instability is a constant. The situation of the family in post-Fordist capitalism is contradictory, in precisely the way that traditional Marxism expected: capitalism requires the family (as an essential means of reproducing and caring for labor power; as a salve for the psychic wounds inflicted by anarchic social-economic conditions), even as it undermines it (denying parents time with children, putting intolerable stress on couples as they become the exclusive source of affective consolation for each other). […]

The psychological conflict raging within individuals cannot but have casualties. Marazzi is researching the link between the increase in bi-polar disorder and post-Fordism and, if, as Deleuze and Guattari argue, schizophrenia is the condition that marks the outer edges of capitalism, then bi-polar disorder is the mental illness proper to the ‘interior’ of capitalism. With its ceaseless boom and bust cycles, capitalism is itself fundamentally and irreducibly bi-polar, periodically lurching between hyped-up mania (the irrational exuberance of ‘bubble thinking’) and depressive come-down. (The term ‘economic depression’ is no accident, of course). To a degree unprecedented in any other social system, capitalism both feeds on and reproduces the moods of populations. Without delirium and confidence, capital could not function.

It seems that with post-Fordism, the ‘invisible plague’ of psychiatric and affective disorders that has spread, silently and stealthily, since around 1750 (i.e. the very onset of industrial capitalism) has reached a new level of acuteness. Here, Oliver James’s work is important. In The Selfish Capitalist, James points to significant rises in the rates of ‘mental distress’ over the last 25 years. ‘By most criteria’, James reports,

rates of distress almost doubled between people born in 1946 (aged thirty-six in 1982) and 1970 (aged thirty in 2000). For example, 16 per cent of thirty-six-year-old women in 1982 reported having ‘trouble with nerves, feeling low, depressed or sad’, whereas 29 per cent of thirty-year-olds reported this in 2000 (for men it was 8 per cent in 1982, 13 per cent in 2000).

Another British study James cites compared levels of psychiatric morbidity (which includes neurotic symptoms, phobias and depression) in samples of people in 1977 and 1985. ‘Whereas 22 per cent of the 1977 sample reported psychiatric morbidity, this had risen to almost a third of the population (31 per cent) by 1986’. Since these rates are much higher in countries that have implemented what James calls ‘selfish’ capitalism than in other capitalist nations, James hypothesizes that it is selfish (i.e. neoliberalized) capitalist policies and culture that are to blame. […]

James’s conjectures about aspirations, expectations and fantasy fit with my own observations of what I have called ‘hedonic depression’ in British youth.

It is telling, in this context of rising rates of mental illness, that New Labour committed itself, early in its third term in government, to removing people from Incapacity Benefit, implying that many, if not most, claimants are malingerers. In contrast with this assumption, it doesn’t seem unreasonable to infer that most of the people claiming Incapacity Benefit – and there are well in excess of two million of them – are casualties of Capital. A significant proportion of claimants, for instance, are people psychologically damaged as a consequence of the capitalist realist insistence that industries such as mining are no longer economically viable. (Even considered in brute economic terms, though, the arguments about ‘viability’ seem rather less than convincing, especially once you factor in the cost to taxpayers of incapacity and other benefits.) Many have simply buckled under the terrifyingly unstable conditions of post-Fordism.

The current ruling ontology denies any possibility of a social causation of mental illness. The chemico-biologization of mental illness is of course strictly commensurate with its de-politicization. Considering mental illness an individual chemico-biological problem has enormous benefits for capitalism. First, it reinforces Capital’s drive towards atomistic individualization (you are sick because of your brain chemistry). Second, it provides an enormously lucrative market in which multinational pharmaceutical companies can peddle their pharmaceuticals (we can cure you with our SSRIs). It goes without saying that all mental illnesses are neurologically instantiated, but this says nothing about their causation. If it is true, for instance, that depression is constituted by low serotonin levels, what still needs to be explained is why particular individuals have low levels of serotonin. This requires a social and political explanation; and the task of repoliticizing mental illness is an urgent one if the left wants to challenge capitalist realism.

It does not seem fanciful to see parallels between the rising incidence of mental distress and new patterns of assessing workers’ performance. We will now take a closer look at this ‘new bureaucracy’.

The Opposite of Addiction is Connection
by Robert Weiss LCSW, CSAT-S

Not for Alexander. He was bothered by the fact that the cages in which the rats were isolated were small, with no potential for stimulation beyond the heroin. Alexander thought: Of course they all got high. What else were they supposed to do? In response to this perceived shortcoming, Alexander created what we now call “the rat park,” a cage approximately 200 times larger than the typical isolation cage, with hamster wheels and multi-colored balls to play with, plenty of tasty food to eat, and spaces for mating and raising litters.[ii] And he put not one rat, but 20 rats (of both genders) into the cage. Then, and only then, did he mirror the old experiments, offering one bottle of pure water and one bottle of heroin water. And guess what? The rats ignored the heroin. They were much more interested in typical communal rat activities such as playing, fighting, eating, and mating. Essentially, with a little bit of social stimulation and connection, addiction disappeared. Heck, even rats who’d previously been isolated and were sucking on the heroin water left it alone once they were introduced to the rat park.

The Human Rat Park

One of the reasons that rats are routinely used in psychological experiments is that they are social creatures in many of the same ways that humans are social creatures. They need stimulation, company, play, drama, sex, and interaction to stay happy. Humans, however, add an extra layer to this equation. We need to be able to trust and to emotionally attach.

This human need for trust and attachment was initially studied and developed as a psychological construct in the 1950s, when John Bowlby tracked the reactions of small children when they were separated from their parents.[iii] In a nutshell, he found that infants, toddlers, and young children have an extensive need for safe and reliable caregivers. If children have that, they tend to be happy in childhood and well-adjusted (emotionally healthy) later in life. If children don’t have that, it’s a very different story. In other words, it is clear from Bowlby’s work and the work of later researchers that the level and caliber of trust and connection experienced in early childhood carries forth into adulthood. Those who experience secure attachment as infants, toddlers, and small children nearly always carry that with them into adulthood, and they are naturally able to trust and connect in healthy ways. Meanwhile, those who don’t experience secure early-life attachment tend to struggle with trust and connection later in life. In other words, securely attached individuals tend to feel comfortable in and to enjoy the human rat park, while insecurely attached people typically struggle to fit in and connect.

The Opposite Of Addiction is Connection
By Jonathan Davis

If connection is the opposite of addiction, then an examination of the neuroscience of human connection is in order. Published in 2000, A General Theory Of Love is a collaboration between three professors of psychiatry at the University of California, San Francisco. A General Theory Of Love reveals that humans require social connection for optimal brain development, and that babies cared for in a loving environment are psychologically and neurologically ‘immunised’ by love. When things get difficult in adult life, the neural wiring developed from a love-filled childhood provides greater emotional resilience. Conversely, those who grow up in an environment where loving care is unstable or absent are less likely to be resilient in the face of emotional distress.

How does this relate to addiction? Gabor Maté observes an extremely high rate of childhood trauma in the addicts he works with, and trauma is the extreme opposite of growing up in a consistently safe and loving environment. He asserts that it is extremely common for people with addictions to have a reduced capacity for dealing with emotional distress, hence an increased risk of drug-dependence.

How Our Ability To Connect Is Impaired By Trauma

Trauma is well known to disrupt healthy neural wiring, in both the developing and the mature brain. A deeper issue here is that people who have suffered trauma, particularly children, can be left with an underlying sense that the world is no longer safe, or that people can no longer be trusted. This erosion (or complete destruction) of the sense of trust that our family, community and society will keep us safe results in isolation – leading to the very lack of connection Johann Hari suggests is the opposite of addiction. People who use drugs compulsively do so to avoid the pain of past trauma and to replace the absence of connection in their life.

Social Solutions To Addiction

The solution to the problem of addiction on a societal level is both simple and fairly easy to implement. If a person is born into a life that is lacking in love and support at the family level, or if due to some other trauma they have become isolated and suffer from addiction, there must be a cultural response to make sure that person knows they are valued by their society (even if they don’t feel valued by their family). Portugal has demonstrated this with a 50% drop in addiction, thanks to programs specifically designed to re-create connection between the addict and their community.

The real cause of addiction has been discovered – and it’s not what you think
by Johann Hari

This has huge implications for the hundred-year-old war on drugs. This massive war – which, as I saw, kills people from the malls of Mexico to the streets of Liverpool – is based on the claim that we need to physically eradicate a whole array of chemicals because they hijack people’s brains and cause addiction. But if drugs aren’t the driver of addiction – if, in fact, it is disconnection that drives addiction – then this makes no sense.

Ironically, the war on drugs actually increases all those larger drivers of addiction: for example, I went to a prison in Arizona – ‘Tent City’ – where inmates are detained in tiny stone isolation cages (“The Hole”) for weeks and weeks on end, to punish them for drug use. It is as close to a human recreation of the cages that guaranteed deadly addiction in rats as I can imagine. And when those prisoners get out, they will be unemployable because of their criminal record – guaranteeing they will be cut off ever more. I watched this playing out in the human stories I met across the world.

There is an alternative. You can build a system that is designed to help drug addicts to reconnect with the world – and so leave behind their addictions.

This isn’t theoretical. It is happening. I have seen it. Nearly fifteen years ago, Portugal had one of the worst drug problems in Europe, with 1 percent of the population addicted to heroin. They had tried a drug war, and the problem just kept getting worse. So they decided to do something radically different. They resolved to decriminalize all drugs, and transfer all the money they used to spend on arresting and jailing drug addicts, and spend it instead on reconnecting them – to their own feelings, and to the wider society. The most crucial step is to get them secure housing, and subsidized jobs – so they have a purpose in life, and something to get out of bed for. I watched as they are helped, in warm and welcoming clinics, to learn how to reconnect with their feelings, after years of trauma and stunning them into silence with drugs.

One example I learned about was a group of addicts who were given a loan to set up a removals firm. Suddenly, they were a group, all bonded to each other, and to the society, and responsible for each other’s care.

The results of all this are now in. An independent study published in the British Journal of Criminology found that since total decriminalization, addiction has fallen, and injecting drug use is down by 50 percent. I’ll repeat that: injecting drug use is down by 50 percent. Decriminalization has been such a manifest success that very few people in Portugal want to go back to the old system. The main campaigner against the decriminalization back in 2000 was Joao Figueira – the country’s top drug cop. He offered all the dire warnings that we would expect from the Daily Mail or Fox News. But when we sat together in Lisbon, he told me that everything he predicted had not come to pass – and he now hopes the whole world will follow Portugal’s example.

This isn’t only relevant to addicts. It is relevant to all of us, because it forces us to think differently about ourselves. Human beings are bonding animals. We need to connect and love. The wisest sentence of the twentieth century was E.M. Forster’s: “only connect.” But we have created an environment and a culture that cut us off from connection, or offer only the parody of it offered by the Internet. The rise of addiction is a symptom of a deeper sickness in the way we live – constantly directing our gaze towards the next shiny object we should buy, rather than the human beings all around us.

The writer George Monbiot has called this “the age of loneliness.” We have created human societies where it is easier for people to become cut off from all human connections than ever before. Bruce Alexander, the creator of Rat Park, told me that for too long, we have talked exclusively about individual recovery from addiction. We need now to talk about social recovery—how we all recover, together, from the sickness of isolation that is sinking on us like a thick fog.

But this new evidence isn’t just a challenge to us politically. It doesn’t just force us to change our minds. It forces us to change our hearts.

* * *

Social Conditions of an Individual’s Condition

Society and Dysfunction

It’s All Your Fault, You Fat Loser!

Liberal-mindedness, Empathetic Imagination, and Capitalist Realism

Ideological Realism & Scarcity of Imagination

The Unimagined: Capitalism and Crappiness

To Put the Rat Back in the Rat Park

Rationalizing the Rat Race, Imagining the Rat Park

The Desperate Acting Desperately

To Grow Up Fast

Morality-Punishment Link

An Invisible Debt Made Visible

Trends in Depression and Suicide Rates

From Bad to Worse: Trends Across Generations

Republicans: Party of Despair

Rate And Duration of Despair

“We have met the enemy and he is us.”

We blame society, but we are society.

That is such a simple truth and for that reason it is easy to ignore or not fully grasp. It slips past us, as if it were just a nice saying. Yet it is the literal and most basic truth of our entire existence. We are social creatures, at the very core of our being.

Living in a dysfunctional society gives us plenty of opportunities to think about what this means. I realize most people would rather not think about it, because then they’d feel a sense of moral responsibility to do something about it. That is all the more reason for the rest of us, unable to ignore it, to force this issue into public attention. Again and again and again.

People say we have no choice but to choose what society offers us. This is regularly seen during the campaign season. Just hold your nose, eat the plate of shit given you, and try to keep it down.

It’s the saddest thing in the world to see the abused voter returning to the two-party system that abuses them, as if they deserve the abuse. You try to argue with them, but the victim predictably defends the abuser: he’s not so bad, he really loves me, I couldn’t live without him, etc. Even though the victim is physically free to leave, they can’t imagine a life that is different or rather can’t imagine that they deserve anything else.

All of society is about relationships. These relationships don’t exist outside of us. We are our relationships in a fundamental sense. It is what defines us. As such, we should choose our relationships carefully and when necessary choose new relationships.

We don’t live in an overtly violent and oppressive militarized police state. If we speak our minds or act independently, we aren’t likely to be arbitrarily imprisoned or executed. Despite our society being a banana republic, we still do have basic freedoms, even with the elections being rigged. Besides, democracy isn’t an election. Nor is it the government. No, to find democracy look in a mirror or, better yet, look into the face of your neighbor. We are democracy.

If we don’t like the choices within our democracy, we need to act differently. No one is going to give us democracy. No one can give us permission to be free and to act freely. Voting for the right candidate is not the issue, much less the solution.

We will have a functioning democracy if and only when we act as functioning democratic citizens. We’ve allowed ourselves to be fooled. Yet all that it would take for us to see clearly is to remove the blindfold and open our eyes. And all that it would take for us to act freely is to loosen the shackles, once we realized they were never locked.

Some on the political left would like to entirely blame the rich for our failed democracy. Others on the political right would blame the poor. But both sides are wrong. The rich are too small in number to stop what the public demands, if the public ever were to demand actual democracy. And the poor vote at too low a rate, for various reasons.

We have a welfare state because that maintains the social order, not because anyone wants a welfare state. It’s just the other side of the corporatocracy. The welfare state just keeps the masses comfortable enough that they will neither vote for reform nor start a revolution. As I’ve said many times before, it is the bread part of the bread and circus.

No one, rich or poor, is necessarily happy with our society. Yet we lack the collective ability to envision anything better. We’re trapped by our own demoralized apathy and crippled imagination. We are dominated by fear, but we forget that we are what we fear. The dysfunction we see is the expression of our own behavior, the results of our own choices. It is fear that holds our society together.

So, the only way to reclaim our society and our democracy is by claiming that fear. That is the source of the power we’ve given away.

Social Conditions of an Individual’s Condition

A paradigm change has been happening. The shift began long ago, but it’s starting to gain traction in the mainstream. Here is one recent example, an article from Psychology Today—Anxiety and Depression Are Symptoms, Not Diseases by Gregg Henriques Ph.D.:

“Depression is a way the emotional system signals that things are not working and that one is not getting one’s relational needs met. If you are low on relational value in the key domains of family, friends, lovers, group and self, feeling depressed in this context is EXACTLY like feeling pain from a broken arm, feeling cold being outside in the cold, and feeling hungry after going 24 hours without food.

“It is worth noting that, given the current structure of society, depression often serves not to help reboot the system and enlist social support, but instead contributes to the further isolation of the individual, which creates a nasty, vicious spiral of shutting down, doing less, feeling more isolated, turning against the self, and thus getting even more depressed. As such, depressive symptoms often do contribute to the problem, and folks do suffer from Negative Affect Syndromes, where extreme negative moods are definitely part of the problem.

“BUT, everyone should be clear, first and foremost, that anxiety and depression are symptoms of psychosocial needs and threats. They should NOT be, first and foremost, considered alien feelings that need to be eliminated or fixed, any more than we would treat pain from a broken arm, coldness and hunger primarily with pills that take away the feelings, as opposed to fixing the arm, getting warmer or feeding the hungry individual.”

It’s a pretty good article. The focus on symptoms seems like the right way to frame it, and it touches upon larger issues. I’d widen the scope even further. Once we consider the symptoms, it opens up a whole slew of possibilities.

There is the book Chasing the Scream by Johann Hari. The author discusses the rat park research, showing that addiction isn’t an individual disease but a social problem. Change the conditions and the results change. Basically, people are healthier, happier, and more well-adjusted in environments that are conducive to satisfying basic needs.

Then there is James Gilligan’s Why Some Politicians Are More Dangerous Than Others, an even more hard-hitting book. It shows (among other things) suicide rates go up when Republicans are elected. As I recall, other data shows that suicide rates go up in other societies as well, when conservatives are elected.

There are other factors that are directly correlated to depression rates and other mental health issues.

Some are purely physical. Toxoplasmosis is an example, with its parasitic load that stunts brain development. Many examples could be added, from malnutrition to lack of healthcare.

Plus, there are problems that involve both the physical environment and the social environment. Lead toxicity causes mental health problems, including depression. Rates of lead toxicity depend on how strong and effective regulations are, which in turn depends on the type of government and who is in power.

A wide variety of research and data points to a basic conclusion. Environmental conditions (physical, social, political, and economic) are of paramount importance. So, why do we treat as sick individuals those who suffer the consequences of the externalized costs of society?

Here is the sticking point. Systemic and collective problems in some ways are the easiest to deal with. The problems, once understood, are essentially simple, and their solutions tend to be straightforward. Even so, the very largeness of these problems makes them hard for us to confront. We want someone to blame. But who do we blame when the entire society is dysfunctional?

If we recognize the problems as symptoms, we are forced to acknowledge our collective agency and shared fate. Those who understand this are up against countervailing forces that maintain the status quo. Even if a psychiatrist realizes that their patient is experiencing the symptoms of larger social issues, how is that psychiatrist supposed to help the patient? Who is going to diagnose the entire society and demand it seek rehabilitation?

No One Knows

Here is a thought experiment. What if almost everything you think you know is wrong? It isn’t just a thought experiment. In all likelihood, it is true.

Almost everything people thought they knew in the past has turned out to be wrong, partly or entirely. There is no reason to think the same isn’t still the case. We are constantly learning new things that add to or alter prior fields of knowledge.

We live in a scientific age. Even so, there are more things we don’t know than we do know. Our scientific knowledge remains narrow and shallow. The universe is vast. Even the earth is vast. Heck, human nature is vast, in its myriad expressions and potentials.

In some ways, science gives a false sense of how much we know. We end up taking many things as scientific that aren’t actually so. Take the examples of consciousness and free will, both areas about which we have little scientific knowledge.

We have no more reason to believe consciousness is limited to the brain than to believe that consciousness is inherent to matter. We have no more reason to believe that free will exists than to believe it doesn’t. These are non-falsifiable hypotheses, which is to say we don’t know how to test them in order to prove them one way or another.

Yet we go about our lives as if these are decided facts, that we are conscious free agents in a mostly non-conscious world. This is what we believe based on our cultural biases. Past societies had different beliefs about consciousness and agency. Future societies likely will have different beliefs than our own and they will look at us as oddly as we look at ancient people. Our present hyper-individualism may one day seem as bizarre as the ancient bicameral mind.

We forget how primitive our society still is. In many ways, not much has changed over the past centuries or even across the recent millennia. Humans still live their lives in basically the same way. For as long as civilization has existed, people have lived in houses and ridden on wheeled vehicles. When we have health conditions, invasively cutting into people is still often the standard procedure, just as it has been for a long, long time. Political and military power hasn’t really changed either, except in scale. The most fundamental aspects of our lives are remarkably unchanged.

At the same time, we are on the edge of vast changes. Just in my life, technology has leapt far beyond the imaginings of most people in the generations before mine. Our knowledge of genetics, climate change, and even biblical studies has been irrevocably altered, overturning much of the earlier consensus.

We can’t comprehend what any of it means or where it is heading. All that we can be certain of is that paradigms are going to be shattered over the next century. What will replace them, no one knows.

Origins of Ritual Behavior

Here is something from Scientific American. It’s an article by Laura Kehoe, Mysterious Chimpanzee Behavior May Be Evidence of “Sacred” Rituals:

“Even more intriguing than this, maybe we found the first evidence of chimpanzees creating a kind of shrine that could indicate sacred trees. Indigenous West African people have stone collections at “sacred” trees and such man-made stone collections are commonly observed across the world and look eerily similar to what we have discovered here.”

Apparently, this has never before been observed and documented. It is an amazing discovery. Along with tool use, it points toward a central building block of primate society.

I immediately thought of the first evidence of settled civilization. Before humans built homes for themselves in settlements, they built homes for their gods. These first temples likely began quite simply, maybe even as simple as a pile of rocks.

Human society, as we know it, developed around ritual sites. This may have begun much earlier with the common ancestor of both humans and chimpanzees.

Human Condition

Human nature and the human condition
by The Philosopher’s Beard Blog

“The distinction between human nature and the human condition has implications that go beyond whether some academic sub-fields are built on fundamental error and thus a waste of time (hardly news). The foundational mistake of assuming that certain features prominent among contemporary human beings are true of H. sapiens and therefore true of all of us has implications for how we think about ourselves now. There is a lack of adequate critical reflection – of a true scientific spirit of inquiry – in much of the naturalising project. It fits all too easily with our natural desire for a convenient truth: that the way the world seems is the way it has to be.

“For example, many people believe that to be human is to be religious – or at least to have a ‘hunger for religion’ – and argue as a result that religion should be accorded special prominence and autonomy in our societies – in our education, civil, and political institutions. American ‘secularism’ for example might be said to be built on this principle: hence all religions are engaged in a similar project of searching for the divine and deserve equal respect. The pernicious implication is that the non-religious (who are not the same as atheists, by the way) are somehow lacking in an essential human capability, and should be pitied or perhaps given help to overcome the gaping hole in their lives.

“Anatomically modern humans have been around in our current form for around 200,000 years but while our physiological capacities have scarcely changed we are cognitively very different. Human beings operate in a human world of our own creation, as well as in the natural, biological world that we are given. In the human world people create new inventions – like religion or war or slavery – that do something for them. Those inventions succeed and spread in so far as they are amenable to our human nature and our other inventions, and by their success they condition us to accept the world they create until it seems like it could not have been otherwise.

“Recognising the fact that the human condition is human-made offers us the possibility to scrutinise it, to reflect, and perhaps even to adopt better inventions. Slavery was once so dominant in our human world that even Aristotle felt obliged to give an account of its naturalness (some people are just naturally slavish). But we discovered a better invention – market economies – that has made inefficient slavery obsolete and now almost extinct (which is not to say that this invention is perfect either). The human condition concerns humans as we are, but not as we have to be.”

The Final Rhapsody of Charles Bowden
A visit with the famed journalist just before his death.
by Scott Carrier, Mother Jones Magazine

“Postscript from Bowden’s Blood Orchid, 1995: Imagine the problem is not physical. Imagine the problem has never been physical, that it is not biodiversity, it is not the ozone layer, it is not the greenhouse effect, the whales, the old-growth forest, the loss of jobs, the crack in the ghetto, the abortions, the tongue in the mouth, the diseases stalking everywhere as love goes on unconcerned. Imagine the problem is not some syndrome of our society that can be solved by commissions or laws or a redistribution of what we call wealth. Imagine that it goes deeper, right to the core of what we call our civilization and that no one outside of ourselves can effect real change, that our civilization, our governments are sick and that we are mentally ill and spiritually dead and that all our issues and crises are symptoms of this deeper sickness … then what are we to do?”

Society: Precarious or Persistent?

I sometimes think of society as precarious. It can seem easier to destroy something than to create a new thing or to re-create what was lost. It’s natural to take things for granted, until they are gone. Wisdom is learning to appreciate what you have while you have it.

There is value to this perspective, as it expresses the precautionary principle. This includes a wariness about messing with that which we don’t understand… and there is very little in this world we understand as well as maybe we should. We ought to appreciate what we inherit from the generations before us. We don’t know what went into making what we have possible.

Still, I’m not sure this is always the best way to think about it.

Many aspects of society can be as tough to kill as weeds. Use enough harsh chemicals and weeds can be killed, but even then they have a way of popping back up. Cultures are like weeds. They persist against amazing odds. We are all living evidence of this, descendants of survivors upon survivors, the products of many millennia of social advance.

In nature, a bare patch of earth rarely remains bare for long, even if doused with weed-killer. You can kill one thing and then something else will take its place. The best way to keep a weed from growing there is to plant other things that make it less hospitable. It’s as much about what a person wants to grow as about what a person doesn’t want to grow.

This is an apt metaphor for the project of imperialism and colonialism. Westerners perceived Africa and the Americas as wilderness that needed to be tamed, and taming meant farming. The native plants typically were seen as weeds. Europeans couldn’t even recognize some of the agrarian practices of the indigenous peoples, because they didn’t fit their idea of farms. They just saw weeds. So, they destroyed what they couldn’t appreciate. As far as they were concerned, it was unused land to be taken and cultivated, which is to say made civilized.

Most of them weren’t going around wantonly destroying everything in sight. They were trying to create something in what was to them a new land and, given the impact of disease in the Americas, often a seemingly uninhabited one. Much of the destruction of other societies was incidental from their perspective, although there was plenty of systematic destruction as well. However, my point is that all of this happened in the context of what was seen as “creative destruction”. It was part of a paternalistic project of ‘civilizing’ the world.

In this project, not all was destroyed. Plenty of indigenous people remain in existence and have retained, to varying degrees, their traditional cultures. Still, those who weren’t destroyed had their entire worlds turned upside down.

An example I was thinking about comes from Christine Kenneally’s recent book, The Invisible History of the Human Race.

The areas of Africa where many slaves were taken were originally high-functioning societies. They had developed economies and established governments, which meant they had at least a basic culture of trust to make all this possible. It was probably the developed social system and infrastructure that made the slave trade attractive in those places. These Africans were desirable slaves for the very reason that they came from highly developed societies. They had knowledge and skills that the European enslavers lacked.

This is where my original thought comes in. From one perspective, it was simply the destruction of a once stable society built on a culture of trust. From another perspective, a new social order was created to take the place of the old.

The slave trade obviously created an atmosphere of fear, conflict, and desperation. It eroded trust, turning village against village, neighbor against neighbor, and even families against their own kin. Yet the slave trade was also the foundation of something new: imperialism and colonialism. The agents of this new order didn’t annihilate all of African society. What they did was conquer these societies, and then the empires divvied up the spoils. In this process, new societies were built on top of the old, and so the countries we know today took form.

If these ancient African cultures were genuinely precarious societies, then we would have expected different results. It was the rock-solid substratum that made this transition to colonial rule possible. Even the development of cultures of distrust was a sign of a functioning society in defensive mode. These societies weren’t destroyed. They were defending themselves from destruction under difficult conditions. These societies persisted amidst change by adapting to change.

It is impossible to make a value judgment of this persistence. A culture of distrust may be less than optimal, but it makes perfect sense in these situations. These people have had to fight for their survival. They aren’t going to be taken for fools. Considering the world is still ruled by their former colonizers, they have every right to move forward with trepidation. They would be crazy to do otherwise.

In comparison, I was thinking of societies known for their strong cultures of trust. Those that come to mind are Scandinavia, Germany, and Japan. These societies are also known for their xenophobia. They may have strong trust for insiders, but this is paired with strong distrust of outsiders. So, there is some nuance to what we mean when we speak of cultures of trust. Anyway, it is true that cultures of trust tend to lead to high economic development and wealth. But, as with the examples of Germany and Japan, the xenophobic side of the equation can also lead to mass destruction and violent oppression that impacts people far outside of their national borders.

As for cultures of distrust, they tend to keep their distrust contained primarily within their own boundaries. Few of the former colonies have become empires colonizing other societies. The United States is one of the few exceptions, probably because the native population was so severely decimated and made a minority in their own land. It also should be noted that the U.S. measures fairly high as a culture of trust. I suspect it requires a strong culture of trust to make for an effective empire, and so it oddly may require a culture of trust among the occupiers in order to create cultures of distrust in the occupied and formerly occupied societies. That is sad to think about.

Cultures tend to persist, even when some people would rather they not. Claiming societies to be precarious, in many cases, could be considered wishful thinking. Social orders must serve one purpose before all others: self-perpetuation.

The core of my message here is that we should be as concerned about what we are creating as about what we are destroying. Africa is one example. A similar example is what happened to the Ottoman Empire. In both cases, they were divided up by the conquering nations, and artificial boundaries were created that inevitably led to conflict. This formed the basis for all the problems that have continued in the Middle East and the Arab world, extending into North Africa.

That world of conflict didn’t just happen. It was intentionally created. The powers that be wanted the local people to be divided against themselves. It made it easier to rule over them or to otherwise take advantage of them, such as by procuring (or stealing) their natural resources.

We Americans inherited that colonial mess, as we are part of it. America has never known any other world, for we were born out of the same oppression as the African and Middle Eastern countries. Now, the U.S. has taken the role of the British Empire, the former ruler now made a partner to and subsidiary of American global power. In this role, we assassinate democratically elected leaders, foment coups d’état, arm rebel groups, invade and occupy countries, bomb entire regions into oblivion, etc.

The U.S. military can topple a leader like Saddam Hussein and destroy the social order he built, which provided secular stability, but the U.S. can’t rebuild what it destroyed. Anyway, that isn’t the point. The U.S. never cared about rebuilding anything. It was always about creating something entirely new. Yet the Iraqi people and their society persist, even in a state of turmoil.

The old persists as it is transformed.

What exactly persists in these times of change? Which threads can be traced into the past and which threads will continue to unwind into the future? What is being woven from these threads? What will be inherited by the following generations?