The Political Right that the Political Left Needs

Some of Jordan Peterson’s views are too ideologically simplistic and constrained for my taste, occasionally even shading into the ideologically dogmatic, which he exemplifies by his very denial of any ideology within himself. And it can’t be doubted that, unfortunately, he gives far too much ammunition to the political right. But he is also saying some things of real value.

In a sense, he is ultimately more important to the political right than to the political left. And I praise his attempt to save young men from the dire fate of the reactionary right-wing and alt-right. So, he shouldn’t necessarily be criticized for talking with what some might consider right-wing loons, such as Stefan Molyneux, an anarcho-capitalist guru, aspiring cult leader, and Donald Trump supporter. But that is all the more reason we on the political left should hold Peterson accountable, which means taking him seriously.

He calls himself a classical liberal and is what used to be called a conservative in the United States, prior to the radical right taking over the label. He is playing a much-needed role in bringing some sanity back to the political right, perhaps made possible by the Canadian attitude he brings to the table. And if that requires him to reach out to radicalized reactionaries with offers of sympathetic understanding, then so be it. Someone has to do it. And I hope he succeeds.

The other end of the political spectrum shouldn’t ignore or dismiss him. If anything, the political left should engage him precisely to make clear that he is a worthy opponent. He represents the political right that the political left needs in order to move sane public debate forward, just as those like Noam Chomsky, Ralph Nader, Jill Stein, and Bernie Sanders represent the political left that is needed to pull our society back from the precipice (while the Democrats become the new conservative party). These are the people we need to reframe the ideological spectrum in mainstream politics to better match the actual ideological spectrum of the general population.

Iain McGilchrist and Russell Brand, ideologically and psychologically to the left of Jordan Peterson, are able to draw out of him what is actually useful in his worldview. They demonstrate the optimal approach. There is a necessary meeting point of genuine dialogue and open inquiry. It’s all about fruitful disagreement that allows for the discovery of common ground.

“To improve America so it can be a shining light of what the human spirit can accomplish, our nation needs a thoughtful conservative movement, one that argues for holding on to the tried and true, not just holding on to what is, as some conservatives have always done (see slavery, arguments for its economic necessity).

“America needs constructive and serious conservative thinkers because their work will promote public policy debates rooted in facts and reason, as those sons of the Enlightenment, the Founders, intended.

“And this, in turn, will foster better-reasoned arguments and more effective policy solutions from those whose vision is of what America can in time become rather than what it is. That is because they will have to address serious critiques, not bull.”

David Cay Johnston

Progress and Reaction in a Liberal Age

I have some thoughts rumbling around in my head. Let me try to lay them out and put order to them. What I’m pondering is liberalism and conservatism, progressive reform and the reactionary mind, oppression and backlash.

One conclusion I’ve come to is that, ever since the Enlightenment, we have lived in a liberal age dominated by a liberal paradigm. So, in a sense, we are all liberals. Even reactionaries are defined by the liberalism they are reacting to. This relates to Corey Robin’s observation of how reactionaries are constantly co-opting ideas, rhetoric, and tactics from the political left. Reaction, in and of itself, has no substance other than what it takes from elsewhere. This is why conservatives, the main variety of reactionaries, often get called classical liberals. A conservative is simply what a liberal used to be, and conservatism as such merely rides along on the coattails of liberalism.

This isn’t necessarily a compliment to liberalism. The liberal paradigm ultimately gets not just all the credit but also all the blame. What we call liberals and conservatives are simply the progressive and regressive manifestations of this paradigm. The progressive-oriented have tended to be called ‘liberals’ precisely because they are the people identified with the social order, with the post-Enlightenment progress that has built the entire world we know. But this easily turns those on the political left toward another variety of reaction. Liberals, as they age, find themselves relatively further and further to the right as the population over the generations keeps moving left. This is how aging liberals can sometimes start thinking of themselves as conservatives. It’s not that the liberal changed but that the world changed around them.

Just as reactionaries have no ideological loyalty, liberals can lack a certain kind of discernment. Liberals have a tendency toward psychological openness and curiosity, along with a tolerance for cognitive dissonance (simultaneously holding two different thoughts or seeing two different perspectives). This can lead liberals to be accepting of or even sympathetic toward reactionaries, even when doing so is contradictory and harmful to liberalism. Furthermore, when experiencing cognitive overload, liberals easily take on reactionary traits and, if stress and anxiety continue long enough, the liberal can be permanently transformed into a reactionary (as a beautiful elf is tortured until it becomes an orc).

We are living under conditions that are the opposite of optimal for, and conducive to, healthy liberal-mindedness. That isn’t to say the liberal paradigm is going to disappear any time soon. What it does mean is that the political left will get wonky for quite a while. American society, in particular, has become so oppressive and dysfunctional that there is no hope for a genuinely progressive liberalism. Right now, the progressive worldview is on the defensive, and that causes liberals to attack the political left as harshly as, or more harshly than, they attack the political right. As they increasingly take on reactionary traits, mainstream liberals trying to hold onto power will defend what is left of the status quo by any means necessary.

Yet there is still that urge for progress, even as it gets demented through frustration and outrage. It was inevitable that the #MeToo movement would go too far. The same pattern is always seen following a period of oppression that leads to a populist lashing out, or at least that is how some will perceive it. It is what is seen in any revolutionary era: many at the time saw the American and French revolutions as going too far, and indeed both led to large numbers of deaths and refugees, but that is what happens under oppressive regimes when the struggle and suffering of the masses become intolerable. The judgment of going too far was also made against the labor movement and the civil rights movement. Those stuck in the reactionary mind will see any challenge to their agenda of rigid hierarchy as being too much and so deserving of being crushed. And as a reactionary worldview takes hold of society, almost everyone starts taking on the traits of the reactionary mind, reaction leading to ever more reaction until, hopefully, a new stability is achieved.

All of this has more to do with psychological tendencies than political ideologies. We all carry the potential for reaction as we carry the potential for progressivism. That struggle within human nature is what it means to live in a liberal age.

Who were the Phoenicians?

In modern society, we are obsessed with identity, specifically in terms of categorizing and labeling. This leads to a tendency to essentialize identity, but essentialism isn’t supported by the evidence. The only thing we are born as is members of a particular species, Homo sapiens.

What stands out is that other societies have entirely different experiences of collective identity. The most common distinctions, contrary to ethnic and racial ideologies, are those we perceive in the people most similar to us — the (too often violent) narcissism of small differences.

We not only project our own cultural assumptions onto other societies but also read anachronisms into the past as a way of rationalizing the present. But if we closely study what we know from history and archaeology, there isn’t any clear evidence for ethnic and racial ideology in the ancient world.

The ancient world is more complex than our simple notions allow. A good example of this is the people(s) that have been called Phoenicians.

* * *

In Search of the Phoenicians
by Josephine Quinn
pp. 13-17

However, my intention here is not simply to rescue the Phoenicians from their undeserved obscurity. Quite the opposite, in fact: I’m going to start by making the case that they did not in fact exist as a self-conscious collective or “people.” The term “Phoenician” itself is a Greek invention, and there is no good evidence in our surviving ancient sources that these Phoenicians saw themselves, or acted, in collective terms above the level of the city or in many cases simply the family. The first and so far the only person known to have called himself a Phoenician in the ancient world was the Greek novelist Heliodorus of Emesa (modern Homs in Syria) in the third or fourth century CE, a claim made well outside the traditional chronological and geographical boundaries of Phoenician history, and one that I will in any case call into question later in this book.

Instead, then, this book explores the communities and identities that were important to the ancient people we have learned to call Phoenicians, and asks why the idea of being Phoenician has been so enthusiastically adopted by other people and peoples—from ancient Greece and Rome, to the emerging nations of early modern Europe, to contemporary Mediterranean nation-states. It is these afterlives, I will argue, that provide the key to the modern conception of the Phoenicians as a “people.” As Ernest Gellner put it, “Nationalism is not the awakening of nations to self-consciousness: it invents nations where they do not exist.” 7 In the case of the Phoenicians, I will suggest, modern nationalism invented and then sustained an ancient nation.

Identities have attracted a great deal of scholarly attention in recent years, serving as the academic marginalia to a series of crucially important political battles for equality and freedom. 8 We have learned from these investigations that identities are not simple and essential truths into which we are born, but that they are constructed by the social and cultural contexts in which we live, by other people, and by ourselves—which is not to say that they are necessarily freely chosen, or that they are not genuinely and often fiercely felt: to describe something as imagined is not to dismiss it as imaginary. 9 Our identities are also multiple: we identify and are identified by gender, class, age, religion, and many other things, and we can be more than one of any of those things at once, whether those identities are compatible or contradictory. 10 Furthermore, identities are variable across both time and space: we play—and we are assigned—different roles with different people and in different contexts, and they have differing levels of importance to us in different situations. 11

In particular, the common assumption that we all define ourselves as a member of a specific people or “ethnic group,” a collective linked by shared origins, ancestry, and often ancestral territory, rather than simply by contemporary political, social, or cultural ties, remains just that—an assumption. 12 It is also a notion that has been linked to distinctive nineteenth-century European perspectives on nationalism and identity, 13 and one that sits uncomfortably with counterexamples from other times and places. 14

The now-discredited categorization and labeling of African “tribes” by colonial administrators, missionaries, and anthropologists of the nineteenth and twentieth centuries provides many well-known examples, illustrating the way in which the “ethnic assumption” can distort interpretations of other people’s affiliations and self-understanding. 15 The Banande of Zaire, for instance, used to refer to themselves simply as bayira (“cultivators” or “workers”), and it was not until the creation of a border between the British Protectorate of Uganda and the Belgian Congo in 1885 that they came to be clearly delineated from another group of bayira now called Bakonzo. 16 Even more strikingly, the Tonga of Zambia, as they were named by outsiders, did not regard themselves as a unified group differentiated from their neighbors, with the consequence that they tended to disperse and reassimilate among other groups. 17 Where such groups do have self-declared ethnic identities, they were often first imposed from without, by more powerful regional actors. The subsequent local adoption of those labels, and of the very concepts of ethnicity and tribe in some African contexts, illustrates the effects that external identifications can have on internal affiliations and self-understandings. 18 Such external labeling is not of course a phenomenon limited to Africa or to Western colonialism: other examples include the ethnic categorization of the Miao and the Yao in Han China, and similar processes carried out by the state in the Soviet Union. 19

Such processes can be dangerous. When Belgian colonial authorities encountered the central African kingdom of Rwanda, they redeployed labels used locally at the time to identify two closely related groups occupying different positions in the social and political hierarchy to categorize the population instead into two distinct “races” of Hutus (identified as the indigenous farmers) and Tutsis (thought to be a more civilized immigrant population). 20 This was not easy to do, and in 1930 a Belgian census attempting to establish which classification should be recorded on the identity cards of their subjects resorted in some cases to counting cows: possession of ten or more made you a Tutsi. 21 Between April and July 1994, more than half a million Tutsis were killed by Hutus, sometimes using their identity cards to verify the “race” of their victims.

The ethnic assumption also raises methodological problems for historians. The fundamental difficulty with labels like “Phoenician” is that they offer answers to questions about historical explanation before they have even been asked. They assume an underlying commonality between the people they designate that cannot easily be demonstrated; they produce new identities where they did not to our knowledge exist; and they freeze in time particular identities that were in fact in a constant process of construction, from inside and out. As Paul Gilroy has argued, “ethnic absolutism” can homogenize what are in reality significant differences. 22 These labels also encourage historical explanation on a very large and abstract scale, focusing attention on the role of the putative generic identity at the expense of more concrete, conscious, and interesting communities and their stories, obscuring in this case the importance of the family, the city, and the region, not to mention the marking of other social identities such as gender, class, and status. In sum, they provide too easy a way out of actually reading the historical evidence.

As a result, recent scholarship tends to see ethnicity not as a timeless fact about a region or group, but as an ideology that emerges at certain times, in particular social and historical circumstances, and especially at moments of change or crisis: at the origins of a state, for instance, or after conquest, or in the context of migration, and not always even then. 23 In some cases, we can even trace this development over time: James C. Scott cites the example of the Cossacks on Russia’s frontiers, people used as cavalry by the tsars, Ottomans, and Poles, who “were, at the outset, nothing more and nothing less than runaway serfs from all over European Russia, who accumulated at the frontier. They became, depending on their locations, different Cossack ‘hosts’: the Don (for the Don River basin) Cossacks, the Azov (Sea) Cossacks, and so on.” 24

Ancient historians and archaeologists have been at the forefront of these new ethnicity studies, emphasizing the historicity, flexibility, and varying importance of ethnic identity in the ancient Mediterranean. 25 They have described, for instance, the emergence of new ethnic groups such as the Moabites and Israelites in the Near East in the aftermath of the collapse of the Bronze Age empires and the “crystallisation of commonalities” among Greeks in the Archaic period. 26 They have also traced subsequent changes in the ethnic content and formulation of these identifications: in relation to “Hellenicity,” for example, scholars have delineated a shift in the fifth century BCE from an “aggregative” conception of Greek identity founded largely on shared history and traditions to a somewhat more oppositional approach based on distinction from non-Greeks, especially Persians, and then another in the fourth century BCE, when Greek intellectuals themselves debated whether Greekness should be based on a shared past or on shared culture and values in the contemporary world. 27 By the Hellenistic period, at least in Egypt, the term “Hellene” (Greek) was in official documents simply an indication of a privileged tax status, and those so labeled could be Jews, Thracians—or, indeed, Egyptians. 28

Despite all this fascinating work, there is a danger that the considerable recent interest in the production, mechanisms, and even decline of ancient ethnicity has obscured its relative rarity. Striking examples of the construction of ethnic groups in the ancient world do not of course mean that such phenomena became the norm. 29 There are good reasons to suppose in principle that without modern levels of literacy, education, communication, mobility, and exchange, ancient communal identities would have tended to form on much smaller scales than those at stake in most modern discussions of ethnicity, and that without written histories and genealogies people might have placed less emphasis on the concepts of ancestry and blood-ties that at some level underlie most identifications of ethnic groups. 30 And in practice, the evidence suggests that collective identities throughout the ancient Mediterranean were indeed largely articulated at the level of city-states and that notions of common descent or historical association were rarely the relevant criterion for constructing “groupness” in these communities: in Greek cities, for instance, mutual identification tended to be based on political, legal, and, to a limited extent, cultural criteria, 31 while the Romans famously emphasized their mixed origins in their foundation legends and regularly manumitted their foreign slaves, whose descendants then became full Roman citizens. 32

This means that some of the best-known “peoples” of antiquity may not actually have been peoples at all. Recent studies have shown that such familiar groups as the Celts of ancient Britain and Ireland and the Minoans of ancient Crete were essentially invented in the modern period by the archaeologists who first studied or “discovered” them, 33 and even the collective identity of the Greeks can be called into question. As S. Rebecca Martin has recently pointed out, “there is no clear recipe for the archetypal Hellene,” and despite our evidence for elite intellectual discussion of the nature of Greekness, it is questionable how much “being Greek” meant to most Greeks: less, no doubt, than to modern scholars. 34 The Phoenicians, I will suggest in what follows, fall somewhere in the middle—unlike the Minoans or the Atlantic Celts, there is ancient evidence for a conception of them as a group, but unlike the Greeks, this evidence is entirely external—and they provide another good case study of the extent to which an assumption of a collective identity in the ancient Mediterranean can mislead. 35

pp. 227-230

In all the exciting work that has been done on “identity” in the past few decades, there has been too little attention paid to the concept of identity itself. We tend to ask how identities are made, vary, and change, not whether they exist at all. But Rogers Brubaker and Frederick Cooper have pinned down a central difficulty with recent approaches: “it is not clear why what is routinely characterized as multiple, fragmented, and fluid should be conceptualized as ‘identity’ at all.” 1 Even personal identity, a strong sense of one’s self as a distinct individual, can be seen as a relatively recent development, perhaps related to a peculiarly Western individualism. 2 Collective identities, furthermore, are fundamentally arbitrary: the artificial ways we choose to organize the world, ourselves, and each other. However strong the attachments they provoke, they are not universal or natural facts. Roger Rouse has pointed out that in medieval Europe, the idea that people fall into abstract social groupings by virtue of common possession of a certain attribute, and occupy autonomous and theoretically equal positions within them, would have seemed nonsensical: instead, people were assigned their different places in the interdependent relationships of a concrete hierarchy. 3

The truth is that although historians are constantly apprehending the dead and checking their pockets for identity, we do not know how people really thought of themselves in the past, or in how many different ways, or indeed how much. I have argued here that the case of the Phoenicians highlights the extent to which the traditional scholarly perception of a basic sense of collective identity at the level of a “people,” “culture,” or “nation” in the cosmopolitan, entangled world of the ancient Mediterranean has been distorted by the traditional scholarly focus on a small number of rather unusual, and unusually literate, societies.

My starting point was that we have no good evidence for the ancient people that we call Phoenician identifying themselves as a single people or acting as a stable collective. I do not conclude from this absence of evidence that the Phoenicians did not exist, nor that nobody ever called her- or himself a Phoenician under any circumstances: Phoenician-speakers undoubtedly had a larger repertoire of self-classifications than survives in our fragmentary evidence, and it would be surprising if, for instance, they never described themselves as Phoenicians to the Greeks who invented that term; indeed, I have drawn attention to several cases where something very close to that is going on. Instead, my argument is that we should not assume that our “Phoenicians” thought of themselves as a group simply by analogy with models of contemporary identity formation among their neighbors—especially since those neighbors do not themselves portray the Phoenicians as a self-conscious or strongly differentiated collective. Instead, we should accept the gaps in our knowledge and fill the space instead with the stories that we can tell.

The stories I have looked at in this book include the ways that the people of the northern Levant did in fact identify themselves—in terms of their cities, but even more of their families and occupations—as well as the formation of complex social, cultural, and economic networks based on particular cities, empires, and ideas. These could be relatively small and closed, like the circle of the tophet, or on the other hand, they could, like the network of Melqart, create shared religious and political connections throughout the Mediterranean—with other Levantine settlements, with other settlers, and with local populations. Identification with a variety of social and cultural traditions is one recurrent characteristic of the people and cities we call Phoenician, and this continued into the Hellenistic and Roman periods, when “being Phoenician” was deployed as a political and cultural tool, although it was still not claimed as an ethnic identity.

Another story could go further, to read a lack of collective identity, culture, and political organization among Phoenician-speakers as a positive choice, a form of resistance against larger regional powers. James C. Scott has recently argued in The Art of Not Being Governed (2009) that self-governing people living on the peripheries and borders of expansionary states in that region tend to adopt strategies to avoid incorporation and to minimize taxation, conscription, and forced labor. Scott’s focus is on the highlands of Southeast Asia, an area now sometimes known as Zomia, and its relationship with the great plains states of the region such as China and Burma. He describes a series of tactics used by the hill people to avoid state power, including “their physical dispersion in rugged terrain, their mobility, their cropping practices, their kinship structure, their pliable ethnic identities . . . their flexible social structure, their religious heterodoxy, their egalitarianism and even the nonliterate, oral cultures.” The constant reconstruction of identity is a core theme in his work: “ethnic identities in the hills are politically crafted and designed to position a group vis-à-vis others in competition for power and resources.” 4 Political integration in Zomia, when it has happened at all, has usually consisted of small confederations: such alliances, he points out, are common but short-lived, and are often preserved in local place names such as “Twelve Tai Lords” (Sipsong Chutai) or “Nine Towns” (Ko Myo)—information that throws new light on the federal meetings recorded in fourth-century BCE Tripolis (“Three Cities”). 5

In fact, many aspects of Scott’s analysis feel familiar in the world of the ancient Mediterranean, on the periphery of the great agricultural empires of Mesopotamia and Iran, and despite all its differences from Zomia, another potential candidate for the label of “shatterzone.” The validity of Scott’s model for upland Southeast Asia itself—a matter of considerable debate since the book’s publication—is largely irrelevant for our purposes; 6 what is interesting here is how useful it might be for thinking about the mountainous region of the northern Levant, and the places of refuge in and around the Mediterranean.

In addition to outright rebellion, we could argue that the inhabitants of the Levant employed a variety of strategies to evade the heaviest excesses of imperial power. 7 One was to organize themselves in small city-states with flimsy political links and weak hierarchies, requiring larger powers to engage in multiple negotiations and arrangements, and providing the communities involved with multiple small and therefore obscure opportunities for the evasion of taxation and other responsibilities—“divide that ye be not ruled,” as Scott puts it. 8 A cosmopolitan approach to culture and language in those cities would complement such an approach, committing to no particular way of doing or being or even looking, keeping loyalties vague and options open. One of the more controversial aspects of Scott’s model could even explain why there is no evidence for Phoenician literature despite earlier Near Eastern traditions of myth and epic. He argues that the populations he studies are in some cases not so much nonliterate as postliterate: “Given the considerable advantages in plasticity of oral over written histories and genealogies, it is at least conceivable to see the loss of literacy and of written texts as a more or less deliberate adaptation to statelessness.” 9

Another available option was to take to the sea, a familiar but forbidding terrain where the experience and knowledge of Levantine sailors could make them and their activities invisible and unaccountable to their overlords further east. The sea also offered an escape route from more local sources of power, and the stories we hear of the informal origins of western settlements such as Carthage and Lepcis, whether or not they are true, suggest an appreciation of this point. A distaste even for self-government could also explain a phenomenon I have drawn attention to throughout the book: our “Phoenicians” not only fail to visibly identify as Phoenician, they often omit to identify at all.

It is striking in this light that the first surviving visible expression of an explicitly “Phoenician” identity was imposed by the Carthaginians on their subjects as they extended state power to a degree unprecedented among Phoenician-speakers, that it was then adopted by Tyre as a symbol of colonial success, and that it was subsequently exploited by Roman rulers in support of their imperial activities. This illustrates another uncomfortable aspect of identity formation: it is often a cultural bullying tactic, and one that tends to benefit those already in power more than those seeking self-empowerment. Modern European examples range from the linguistic and cultural education strategies that turned “peasants into Frenchmen” in the late nineteenth century, 10 to the eugenic Lebensborn program initiated by the Nazis in mid-twentieth-century central Europe to create more Aryan children through procreation between German SS officers and “racially pure” foreign women. 11 Such examples also underline the difficulty of distinguishing between internal and external conceptions of identity when apparently internal identities are encouraged from above, or even from outside, just as the developing modern identity as Phoenician involved the gradual solidification of the identity of the ancient Phoenicians.

It seems to me that attempts to establish a clear distinction between “emic” and “etic” identity are part of a wider tendency to treat identities as ends rather than means, and to focus more on how they are constructed than on why. Identity claims are always, however, a means to another end, and being “Phoenician” is in all the instances I have surveyed here a political rather than a personal statement. It is sometimes used to resist states and empires, from Roman Africa to Hugh O’Donnell’s Ireland, but more often to consolidate them, lending ancient prestige and authority to later regimes, a strategy we can see in Carthage’s Phoenician coinage, the emperor Elagabalus’s installation of a Phoenician sun god at Rome, British appeals to Phoenician maritime power, and Hannibal Qadhafi’s cruise ship.

In the end, it is modern nationalism that has created the Phoenicians, along with much else of our modern idea of the ancient Mediterranean. Phoenicianism has served nationalist purposes since the early modern period: the fully developed notion of Phoenician ethnicity may be a nineteenth-century invention, a product of ideologies that sought to establish ancient peoples or “nations” at the heart of new nation-states, but its roots, like those of nationalism itself, are deeper. As origin myth or cultural comparison, aggregative or oppositional, imperialist and anti-imperialist, Phoenicianism supported the expansion of the early modern nation of Britain, as well as the position of the nation of Ireland as separate and respected within that empire; it helped to consolidate the nation of Lebanon under French imperial mandate, premised on a regional Phoenician identity agreed on between local and French intellectuals, but it also helped to construct the nation of Tunisia in opposition to European colonialism.

Paradoxes of State and Civilization Narratives

Below is a passage from a recent book by James C. Scott, Against the Grain.

The book is about agriculture, sedentism, and early statism. The author questions the standard narrative. In doing so, he looks more closely at what the evidence actually shows us about civilization, specifically in terms of supposed collapses and dark ages (elsewhere in the book, he also discusses how non-state ‘barbarians’ are connected to, influenced by, and defined according to states).

Oddly, Scott never mentions Göbekli Tepe. It is an ancient archaeological site that offers intriguing evidence of civilization preceding and hence not requiring agriculture, sedentism, or statism. As has been said of it, “First came the temple, then the city.” That would seem to fit into the book’s framework.

The other topic not mentioned, less surprisingly, is Julian Jaynes’ theory of bicameralism. Jaynes’ view might complicate Scott’s interpretations. Scott goes into great detail about domestication and slavery, specifically in the archaic civilizations such as first seen with the walled city-states. But Jaynes pointed out that authoritarianism as we know it didn’t seem to exist early on, as the bicameral mind made social conformity possible through non-individualistic identity and collective experience (explained in terms of the hypothesis of archaic authorization).

Scott’s focus is more on external factors. From my perusal of the book, he doesn’t seem to fully take into account social science research, cultural studies, anthropology, philology, etc. The thesis of the book could have been further developed by exploring other areas, although maybe the narrow focus is useful for emphasizing the central point about agriculture. There is a deeper issue, though, that the author does touch upon. What does it mean to be a domesticated human? After all, that is what civilization is about.

He does offer an interesting take on human domestication. Basically, he doesn’t believe most humans ever took the yoke of civilization willingly. There must be systems of force and control in place to make people submit. I might agree, even as I’m not sure that this is the central issue. It’s less about how people submit in body than how they submit in mind. Whether or not we are sheep, there is no shepherd. Even the rulers of the state are sheep.

The temple comes first. Before civilization proper, before walled city-states, before large-scale settlement, before agriculture, before even pottery, there was a temple. What does the temple represent?

* * *

Against the Grain
by James C. Scott
pp. 22-27

PARADOXES OF STATE AND CIVILIZATION NARRATIVES

A foundational question underlying state formation is how we (Homo sapiens sapiens) came to live amid the unprecedented concentrations of domesticated plants, animals, and people that characterize states. From this wide-angle view, the state form is anything but natural or given. Homo sapiens appeared as a subspecies about 200,000 years ago and is found outside of Africa and the Levant no more than 60,000 years ago. The first evidence of cultivated plants and of sedentary communities appears roughly 12,000 years ago. Until then—that is to say for ninety-five percent of the human experience on earth—we lived in small, mobile, dispersed, relatively egalitarian, hunting-and-gathering bands. Still more remarkable, for those interested in the state form, is the fact that the very first small, stratified, tax-collecting, walled states pop up in the Tigris and Euphrates Valley only around 3,100 BCE, more than four millennia after the first crop domestications and sedentism. This massive lag is a problem for those theorists who would naturalize the state form and assume that once crops and sedentism, the technological and demographic requirements, respectively, for state formation were established, states/empires would immediately arise as the logical and most efficient units of political order. 4

These raw facts trouble the version of human prehistory that most of us (I include myself here) have unreflectively inherited. Historical humankind has been mesmerized by the narrative of progress and civilization as codified by the first great agrarian kingdoms. As new and powerful societies, they were determined to distinguish themselves as sharply as possible from the populations from which they sprang and that still beckoned and threatened at their fringes. In its essentials, it was an “ascent of man” story. Agriculture, it held, replaced the savage, wild, primitive, lawless, and violent world of hunter-gatherers and nomads. Fixed-field crops, on the other hand, were the origin and guarantor of the settled life, of formal religion, of society, and of government by laws. Those who refused to take up agriculture did so out of ignorance or a refusal to adapt. In virtually all early agricultural settings the superiority of farming was underwritten by an elaborate mythology recounting how a powerful god or goddess entrusted the sacred grain to a chosen people.

Once the basic assumption of the superiority and attraction of fixed-field farming over all previous forms of subsistence is questioned, it becomes clear that this assumption itself rests on a deeper and more embedded assumption that is virtually never questioned. And that assumption is that sedentary life itself is superior to and more attractive than mobile forms of subsistence. The place of the domus and of fixed residence in the civilizational narrative is so deep as to be invisible; fish don’t talk about water! It is simply assumed that weary Homo sapiens couldn’t wait to finally settle down permanently, could not wait to end hundreds of millennia of mobility and seasonal movement. Yet there is massive evidence of determined resistance by mobile peoples everywhere to permanent settlement, even under relatively favorable circumstances. Pastoralists and hunting-and-gathering populations have fought against permanent settlement, associating it, often correctly, with disease and state control. Many Native American peoples were confined to reservations only on the heels of military defeat. Others seized historic opportunities presented by European contact to increase their mobility, the Sioux and Comanche becoming horseback hunters, traders, and raiders, and the Navajo becoming sheep-based pastoralists. Most peoples practicing mobile forms of subsistence—herding, foraging, hunting, marine collecting, and even shifting cultivation—while adapting to modern trade with alacrity, have bitterly fought permanent settlement. At the very least, we have no warrant at all for supposing that the sedentary “givens” of modern life can be read back into human history as a universal aspiration. 5

The basic narrative of sedentism and agriculture has long survived the mythology that originally supplied its charter. From Thomas Hobbes to John Locke to Giambattista Vico to Lewis Henry Morgan to Friedrich Engels to Herbert Spencer to Oswald Spengler to social Darwinist accounts of social evolution in general, the sequence of progress from hunting and gathering to nomadism to agriculture (and from band to village to town to city) was settled doctrine. Such views nearly mimicked Julius Caesar’s evolutionary scheme from households to kindreds to tribes to peoples to the state (a people living under laws), wherein Rome was the apex, with the Celts and then the Germans ranged behind. Though they vary in details, such accounts record the march of civilization conveyed by most pedagogical routines and imprinted on the brains of schoolgirls and schoolboys throughout the world. The move from one mode of subsistence to the next is seen as sharp and definitive. No one, once shown the techniques of agriculture, would dream of remaining a nomad or forager. Each step is presumed to represent an epoch-making leap in mankind’s well-being: more leisure, better nutrition, longer life expectancy, and, at long last, a settled life that promoted the household arts and the development of civilization. Dislodging this narrative from the world’s imagination is well nigh impossible; the twelve-step recovery program required to accomplish that beggars the imagination. I nevertheless make a small start here.

It turns out that the greater part of what we might call the standard narrative has had to be abandoned once confronted with accumulating archaeological evidence. Contrary to earlier assumptions, hunters and gatherers—even today in the marginal refugia they inhabit—are nothing like the famished, one-day-away-from-starvation desperados of folklore. Hunters and gatherers have, in fact, never looked so good—in terms of their diet, their health, and their leisure. Agriculturalists, on the contrary, have never looked so bad—in terms of their diet, their health, and their leisure. 6 The current fad of “Paleolithic” diets reflects the seepage of this archaeological knowledge into the popular culture. The shift from hunting and foraging to agriculture—a shift that was slow, halting, reversible, and sometimes incomplete—carried at least as many costs as benefits. Thus while the planting of crops has seemed, in the standard narrative, a crucial step toward a utopian present, it cannot have looked that way to those who first experienced it: a fact some scholars see reflected in the biblical story of Adam and Eve’s expulsion from the Garden of Eden.

The wounds the standard narrative has suffered at the hands of recent research are, I believe, life threatening. For example, it has been assumed that fixed residence—sedentism—was a consequence of crop-field agriculture. Crops allowed populations to concentrate and settle, providing a necessary condition for state formation. Inconveniently for the narrative, sedentism is actually quite common in ecologically rich and varied, preagricultural settings—especially wetlands bordering the seasonal migration routes of fish, birds, and larger game. There, in ancient southern Mesopotamia (Greek for “between the rivers”), one encounters sedentary populations, even towns, of up to five thousand inhabitants with little or no agriculture. The opposite anomaly is also encountered: crop planting associated with mobility and dispersal except for a brief harvest period. This last paradox alerts us again to the fact that the implicit assumption of the standard narrative—namely that people couldn’t wait to abandon mobility altogether and “settle down”—may also be mistaken.

Perhaps most troubling of all, the civilizational act at the center of the entire narrative, domestication, turns out to be stubbornly elusive. Hominids have, after all, been shaping the plant world—largely with fire—since before Homo sapiens. What counts as the Rubicon of domestication? Is it tending wild plants, weeding them, moving them to a new spot, broadcasting a handful of seeds on rich silt, depositing a seed or two in a depression made with a dibble stick, or ploughing? There appears to be no “aha!” or “Edison light bulb” moment. There are, even today, large stands of wild wheat in Anatolia from which, as Jack Harlan famously showed, one could gather enough grain with a flint sickle in three weeks to feed a family for a year. Long before the deliberate planting of seeds in ploughed fields, foragers had developed all the harvest tools, winnowing baskets, grindstones, and mortars and pestles to process wild grains and pulses. 7 For the layman, dropping seeds in a prepared trench or hole seems decisive. Does discarding the stones of an edible fruit into a patch of waste vegetable compost near one’s camp, knowing that many will sprout and thrive, count?

For archaeo-botanists, evidence of domesticated grains depended on finding grains with nonbrittle rachis (favored intentionally and unintentionally by early planters because the seedheads did not shatter but “waited for the harvester”) and larger seeds. It now turns out that these morphological changes seem to have occurred well after grain crops had been cultivated. What had appeared previously to be unambiguous skeletal evidence of fully domesticated sheep and goats has also been called into question. The result of these ambiguities is twofold. First, it makes the identification of a single domestication event both arbitrary and pointless. Second, it reinforces the case for a very, very long period of what some have called “low-level food production” of plants not entirely wild and yet not fully domesticated either. The best analyses of plant domestication abolish the notion of a singular domestication event and instead argue, on the basis of strong genetic and archaeological evidence, for processes of cultivation lasting up to three millennia in many areas and leading to multiple, scattered domestications of most major crops (wheat, barley, rice, chick peas, lentils). 8

While these archaeological findings leave the standard civilizational narrative in shreds, one can perhaps see this early period as part of a long process, still continuing, in which we humans have intervened to gain more control over the reproductive functions of the plants and animals that interest us. We selectively breed, protect, and exploit them. One might arguably extend this argument to the early agrarian states and their patriarchal control over the reproduction of women, captives, and slaves. Guillermo Algaze puts the matter even more boldly: “Early Near Eastern villages domesticated plants and animals. Uruk urban institutions, in turn, domesticated humans.” 9

How Universal Is The Mind?

One expression of the misguided nature vs. nurture debate is how we understand our humanity. In wondering about the universality of Western views, we have already framed the issue in terms of Western dualism. The moment we begin speaking in specific terms, from mind to psyche, we’ve already smuggled in cultural preconceptions and biases.

Sabrina Golonka discusses several other linguistic cultures (Korean, Japanese, and Russian) in comparison to English. She suggests that dualism, even if variously articulated, underlies each conceptual tradition — a general distinction between visible and invisible. But all of those are highly modernized societies built on millennia of civilizational projects, from imperialism to industrialization. It would be even more interesting and insightful to look into the linguistic worldviews of indigenous cultures.

The Piraha, for example, are linguistically limited to speaking about what they have directly experienced or what those they personally know have directly experienced. They don’t talk about what is ‘invisible’, whether within the human sphere or beyond in the world, and as such they aren’t prone to theoretical speculations.

What is clear is that the Piraha’s mode of perception and description is far different, even to the point that what they see is sometimes invisible to those who aren’t Piraha. There is an anecdote shared by Daniel Everett. The Piraha crowded on the riverbank pointing to the spirit they saw on the other side, but Everett and his family saw nothing. That casts doubt on the framework of visible vs invisible. The Piraha were fascinated by what becomes invisible, such as a person disappearing around the bend of a trail, although their fascination ended at that liminal point at the edge of the visible, not extending beyond it.

Another useful example would be the Australian Aborigines. The Songlines were traditionally integrated with their sense of identity and reality, signifying an experience that is invisible within the reality tunnel of WEIRD society (Western, Educated, Industrialized, Rich, Democratic). Prior to contact, individualism as we know it may have been entirely unknown, for the Songlines express a profoundly collective sense of being in the world.

If any kind of dualism between visible and invisible did exist within the Aboriginal worldview, it more likely would have operated on a communal level of experience. In their culture, ritual songs are learned and then what they represent becomes visible to the initiated, however this process might be understood within Aboriginal languages. A song makes some aspect of the world visible, which is to invoke a particular reality and the beings that inhabit it. This is what Westerners would interpret as states of mind, but that is clearly an inadequate understanding of a fully immersive and embodied experience.

Western psychology has made non-Western experience invisible to most Westerners. There is the invisible we talk about within our own cultural worldview, what we perceive as known and familiar, no matter how intangible. But even more important is the unknown and unfamiliar that is so fundamentally invisible that we are incapable of talking about it. This doesn’t merely limit our understanding. Entire ways of being in the world are precluded by the words and concepts we use. Our sense of our own humanity is the lesser for it and, as cultural languages go extinct, this state of affairs worsens with the near-complete monocultural destruction of the very alternatives that most powerfully challenge our assumptions.

* * *

How Universal Is The Mind?
by Sabrina Golonka

So, back to the mind and our current view of cognition. Cross-linguistic research shows that, generally speaking, every culture has a folk model of a person consisting of visible and invisible (psychological) aspects (Wierzbicka, 2005). While there is agreement that the visible part of the person refers to the body, there is considerable variation in how different cultures think about the invisible (psychological) part. In the West, and, specifically, in the English-speaking West, the psychological aspect of personhood is closely related to the concept of “the mind” and the modern view of cognition. But, how universal is this conception? How do speakers of other languages think about the psychological aspect of personhood? […]

In a larger sense, the fact that there seems to be a universal belief that people consist of visible and invisible aspects explains much of the appeal of cognitive psychology over behaviourism. Cognitive psychology allows us to invoke invisible, internal states as causes of behaviour, which fits nicely with the broad, cultural assumption that the mind causes us to act in certain ways.

To the extent that you agree that the modern conception of “cognition” is strongly related to the Western, English-speaking view of “the mind”, it is worth asking what cognitive psychology would look like if it had developed in Japan or Russia. Would text-books have chapter headings on the ability to connect with other people (kokoro) or feelings or morality (dusa) instead of on decision-making and memory? This possibility highlights the potential arbitrariness of how we’ve carved up the psychological realm – what we take for objective reality is revealed to be shaped by culture and language.

I recently wrote a blog about a related topic. In Pāli and Sanskrit – ancient Indian languages – there is no collective term for emotions. They do have words for all of the basic emotions and some others, but they do not think of them as a category distinct from thought. I have yet to think through all of the implications of this observation but clearly the ancient Indian view on psychology must have been very different to ours.

Han 21 December 2011 at 17:06

Very interesting post. Have you looked into Julian Jaynes’s strange and marvelous book “The Origin of Consciousness in the Breakdown of the Bicameral Mind”? Even if you regard bicameralism as iffy, there’s an interesting section on the creation of metaphorical spaces — body-words that become “containers” for feelings, thoughts, attributes etc. The culturally distinct descriptors of the “invisible” may be related to historical accidents that vary from place to place.

Simon 9 January 2012 at 06:33

Also relevant might be Lakoff and Johnson’s “Philosophy in the Flesh” looking at, in their formulation, the inevitably metaphorical nature of thought and speech and the ultimate grounding of (almost) all metaphors in our physical experience from embodiment in the world.

Vestiges of an Earlier Mentality: Different Psychologies

“The Self as Interiorized Social Relations: Applying a Jaynesian Approach to Problems of Agency and Volition”
By Brian J. McVeigh

(II) Vestiges of an Earlier Mentality: Different Psychologies

If what Jaynes has proposed about bicamerality is correct, we should expect to find remnants of this extinct mentality. In any case, an examination of the ethnopsychologies of other societies should at least challenge our assumptions. What kinds of metaphors do they employ to discuss the self? Where is agency localized? To what extent do they even “psychologize” the individual, positing an “interior space” within the person? If agency is a socio-historical construction (rather than a bio-evolutionary product), we should expect some cultural variability in how it is conceived. At the same time, we should also expect certain parameters within which different theories of agency are built.

Ethnographies are filled with descriptions of very different psychologies. For example, about the Maori, Jean Smith writes that

it would appear that generally it was not the “self” which encompassed the experience, but experience which encompassed the “self” … Because the “self” was not in control of experience, a man’s experience was not felt to be integral to him; it happened in him but was not of him. A Maori individual was not so much the experiencer of his experience as the observer of it. 22

Furthermore, “bodily organs were endowed with independent volition.” 23 Renato Rosaldo states that the Ilongots of the Philippines rarely concern themselves with what we refer to as an “inner self” and see no major differences between public presentation and private feeling. 24

Perhaps the most intriguing picture of just how radically different mental concepts can be is found in anthropologist Maurice Leenhardt’s intriguing book Do Kamo, about the Canaque of New Caledonia, who are “unaware” of their own existence: the “psychic or psychological aspect of man’s actions are events in nature. The Canaque sees them as outside of himself, as externalized. He handles his existence similarly: he places it in an object — a yam, for instance — and through the yam he gains some knowledge of his existence, by identifying himself with it.” 25

Speaking of the Dinka, anthropologist Godfrey Lienhardt writes that “the man is the object acted upon,” and “we often find a reversal of European expressions which assume the human self, or mind, as subject in relation to what happens to it.” 26 Concerning the mind itself,

The Dinka have no conception which at all closely corresponds to our popular modern conception of the “mind,” as mediating and, as it were, storing up the experiences of the self. There is for them no such interior entity to appear, on reflection, to stand between the experiencing self at any given moment and what is or has been an exterior influence upon the self. So it seems that what we should call in some cases the memories of experiences, and regard therefore as in some way intrinsic and interior to the remembering person and modified in their effect upon him by that interiority, appear to the Dinka as exteriority acting upon him, as were the sources from which they derived. 27

The above mentioned ethnographic examples may be interpreted as merely colorful descriptions, as exotic and poetic folk psychologies. Or, we may take a more literal view, and entertain the idea that these ethnopsychological accounts are vestiges of a distant past when individuals possessed radically different mentalities. For example, if it is possible to be a person lacking interiority in which a self moves about making conscious decisions, then we must at least entertain the idea that entire civilizations existed whose members had a radically different mentality. The notion of a “person without a self” is admittedly controversial and open to misinterpretation. Here allow me to stress that I am not suggesting that in today’s world there are groups of people whose mentality is distinct from our own. However, I am suggesting that remnants of an earlier mentality are evident in extant ethnopsychologies, including our own. 28

* * *

Text from:

Reflections on the Dawn of Consciousness:
Julian Jaynes’s Bicameral Mind Theory Revisited
Edited by Marcel Kuijsten
Chapter 7, Kindle Locations 3604-3636

See also:

Survival and Persistence of Bicameralism
Piraha and Bicameralism

Ian Cheng on Julian Jaynes

Down an Internet Rabbit Hole With an Artist as Your Guide
by Daniel McDermon

The art of Ian Cheng, for example, is commonly described in relation to video games, a clear influence. But the SI: Visions episode about him touches only lightly on that connection and on Mr. Cheng’s career, which includes a solo exhibition earlier this year at MoMA PS1. Instead, viewers go on a short but heady intellectual journey, narrated by Mr. Cheng, who discusses improv theater and the esoteric theories of the psychologist Julian Jaynes.

Jaynes, Mr. Cheng said, posits that ancient people weren’t conscious in the way that modern humans are. “You and I hear an internal voice and we perceive it to be a voice that comes from us,” Mr. Cheng says in the video. But Jaynes argued that those voices might well have been perceived as other people.

In that theory, Mr. Cheng explained in an interview, “The mind is actually composed of many sub-people inside of you, and any one of those people is getting the spotlight at any given time.” It’s a model of consciousness that is echoed in the film “Inside Out,” in which an adolescent girl’s mind comprises five different characters.

This conception of consciousness and motivation helped him build out the triad of digital simulations that were shown at MoMA PS1. In those works, Mr. Cheng created characters and landscapes, but the narrative that unfolds is beyond his control. He has referred to them as “video games that play themselves.”

“Lack of the historical sense is the traditional defect in all philosophers.”

Human, All Too Human: A Book for Free Spirits
by Friedrich Wilhelm Nietzsche

The Traditional Error of Philosophers.—All philosophers make the common mistake of taking contemporary man as their starting point and of trying, through an analysis of him, to reach a conclusion. “Man” involuntarily presents himself to them as an aeterna veritas, as a passive element in every hurly-burly, as a fixed standard of things. Yet everything uttered by the philosopher on the subject of man is, in the last resort, nothing more than a piece of testimony concerning man during a very limited period of time. Lack of the historical sense is the traditional defect in all philosophers. Many innocently take man in his most childish state as fashioned through the influence of certain religious and even of certain political developments, as the permanent form under which man must be viewed. They will not learn that man has evolved, that the intellectual faculty itself is an evolution, whereas some philosophers make the whole cosmos out of this intellectual faculty. But everything essential in human evolution took place aeons ago, long before the four thousand years or so of which we know anything: during these man may not have changed very much. However, the philosopher ascribes “instinct” to contemporary man and assumes that this is one of the unalterable facts regarding man himself, and hence affords a clue to the understanding of the universe in general. The whole teleology is so planned that man during the last four thousand years shall be spoken of as a being existing from all eternity, and with reference to whom everything in the cosmos from its very inception is naturally ordered. Yet everything evolved: there are no eternal facts as there are no absolute truths. Accordingly, historical philosophising is henceforth indispensable, and with it honesty of judgment.

What Locke Lacked
by Louise Mabille

Locke is indeed a Colossus of modernity, but one whose twin projects of providing a concept of human understanding and political foundation undermine each other. The specificity of the experience of perception alone undermines the universality and uniformity necessary to create the subject required for a justifiable liberalism. Since mere physical perspective can generate so much difference, it is only to be expected that political differences would be even more glaring. However, no political order would ever come to pass without obliterating essential differences. The birth of liberalism was as violent as the Empire that would later be justified in its name, even if its political traces are not so obvious. To interpret is to see in a particular way, at the expense of all other possibilities of interpretation. Perspectives that do not fit are simply ignored, or as that other great resurrectionist of modernity, Freud, would concur, simply driven underground. We ourselves are the source of this interpretative injustice, or more correctly, our need for a world in which it is possible to live, is. To a certain extent, then, man is the measure of the world, but only his world. Man is thus a contingent measure and our measurements do not refer to an original, underlying reality. What we call reality is the result not only of our limited perspectives upon the world, but the interplay of those perspectives themselves. The liberal subject is thus a result of, and not a foundation for, the experience of reality. The subject is identified as origin of meaning only through a process of differentiation and reduction, a course through which the will is designated as a psychological property.

Locke takes the existence of the subject of free will – free to exercise political choice such as rising against a tyrant, choosing representatives, or deciding upon political direction – simply for granted. Furthermore, he seems to think that everyone should agree as to what the rules are according to which these events should happen. For him, the liberal subject underlying these choices is clearly fundamental and universal.

Locke’s philosophy of individualism posits the existence of a discrete and isolated individual, with private interests and rights, independent of his linguistic or socio-historical context. C. B. Macpherson identifies a distinctly possessive quality to Locke’s individualist ethic, notably in the way in which the individual is conceived as proprietor of his own personhood, possessing capacities such as self-reflection and free will. Freedom becomes associated with possession, which the Greeks would associate with slavery, and society conceived in terms of a collection of free and equal individuals who are related to each other through their means of achieving material success – which Nietzsche, too, would associate with slave morality. […]

There is a central tenet to John Locke’s thinking that, as conventional as it has become, remains a strange strategy. Like Thomas Hobbes, he justifies modern society by contrasting it with an original state of nature. For Hobbes, as we have seen, the state of nature is but a hypothesis, a conceptual tool in order to elucidate a point. For Locke, however, the state of nature is a very real historical event, although not a condition of a state of war. Man was social by nature, rational and free. Locke drew this inspiration from Richard Hooker’s Laws of Ecclesiastical Polity, notably from his idea that church government should be based upon human nature, and not the Bible, which, according to Hooker, told us nothing about human nature. The social contract is a means to escape from nature, friendlier though it be on the Lockean account. For Nietzsche, however, we have never made the escape: we are still holus-bolus in it: ‘being conscious is in no decisive sense the opposite of the instinctive – most of the philosopher’s conscious thinking is secretly directed and compelled into definite channels by his instincts. Behind all logic too, and its apparent autonomy there stand evaluations’ (BGE, 3). Locke makes a singular mistake in thinking the state of nature a distant event. In fact, Nietzsche tells us, we have never left it. We now only wield more sophisticated weapons, such as the guilty conscience […]

Truth originates when humans forget that they are ‘artistically creating subjects’ or products of law or stasis and begin to attach ‘invincible faith’ to their perceptions, thereby creating truth itself. For Nietzsche, the key to understanding the ethic of the concept, the ethic of representation, is conviction […]

Few convictions have proven to be as strong as the conviction of the existence of a fundamental subjectivity. For Nietzsche, it is an illusion, a bundle of drives loosely collected under the name of ‘subject’ —indeed, it is nothing but these drives, willing, and actions in themselves—and it cannot appear as anything else except through the seduction of language (and the fundamental errors of reason petrified in it), which understands and misunderstands all action as conditioned by something which causes actions, by a ‘Subject’ (GM I 13). Subjectivity is a form of linguistic reductionism, and when using language, ‘[w]e enter a realm of crude fetishism when we summon before consciousness the basic presuppositions of the metaphysics of language — in plain talk, the presuppositions of reason. Everywhere reason sees a doer and doing; it believes in will as the cause; it believes in the ego, in the ego as being, in the ego as substance, and it projects this faith in the ego-substance upon all things — only thereby does it first create the concept of ‘thing’ (TI, ‘Reason in Philosophy’ 5). As Nietzsche also states in WP 484, the habit of adding a doer to a deed is a Cartesian leftover that begs more questions than it solves. It is indeed nothing more than an inference according to habit: ‘There is activity, every activity requires an agent, consequently – (BGE, 17). Locke himself found the continuous existence of the self problematic, but did not go as far as Hume’s dissolution of the self into a number of ‘bundles’. After all, even if identity shifts occurred behind the scenes, he required a subject with enough unity to be able to enter into the Social Contract. This subject had to be something more than merely an ‘eternal grammatical blunder’ (D, 120), and willing had to be understood as something simple. For Nietzsche, it is ‘above all complicated, something that is a unit only as a word, a word in which the popular prejudice lurks, which has defeated the always inadequate caution of philosophers’ (BGE, 19).

Nietzsche’s critique of past philosophers
by Michael Lacewing

Nietzsche is questioning the very foundations of philosophy. To accept his claims means being a new kind of philosopher, ones whose ‘taste and inclination’, whose values, are quite different. Throughout his philosophy, Nietzsche is concerned with origins, both psychological and historical. Much of philosophy is usually thought of as an a priori investigation. But if Nietzsche can show, as he thinks he can, that philosophical theories and arguments have a specific historical basis, then they are not, in fact, a priori. What is known a priori should not change from one historical era to the next, nor should it depend on someone’s psychology. Plato’s aim, the aim that defines much of philosophy, is to be able to give complete definitions of ideas – ‘what is justice?’, ‘what is knowledge?’. For Plato, we understand an idea when we have direct knowledge of the Form, which is unchanging and has no history. If our ideas have a history, then the philosophical project of trying to give definitions of our concepts, rather than histories, is radically mistaken. For example, in §186, Nietzsche argues that philosophers have consulted their ‘intuitions’ to try to justify this or that moral principle. But they have only been aware of their own morality, of which their ‘justifications’ are in fact only expressions. Morality and moral intuitions have a history, and are not a priori. There is no one definition of justice or good, and the ‘intuitions’ that we use to defend this or that theory are themselves as historical, as contentious as the theories we give – so they offer no real support. The usual ways philosophers discuss morality misunderstand morality from the very outset. The real issues of understanding morality only emerge when we look at the relation between this particular morality and that. There is no world of unchanging ideas, no truths beyond the truths of the world we experience, nothing that stands outside or beyond nature and history.

GENEALOGY AND PHILOSOPHY

Nietzsche develops a new way of philosophizing, which he calls a ‘morphology and evolutionary theory’ (§23), and later calls ‘genealogy’. (‘Morphology’ means the study of the forms something, e.g. morality, can take; ‘genealogy’ means the historical line of descent traced from an ancestor.) He aims to locate the historical origin of philosophical and religious ideas and show how they have changed over time to the present day. His investigation brings together history, psychology, the interpretation of concepts, and a keen sense of what it is like to live with particular ideas and values. In order to best understand which of our ideas and values are particular to us, not a priori or universal, we need to look at real alternatives. In order to understand these alternatives, we need to understand the psychology of the people who lived with them. And so Nietzsche argues that traditional ways of doing philosophy fail – our intuitions are not a reliable guide to the ‘truth’, to the ‘real’ nature of this or that idea or value. And not just our intuitions, but the arguments, and style of arguing, that philosophers have used are unreliable. Philosophy needs to become, or be informed by, genealogy. A lack of any historical sense, says Nietzsche, is the ‘hereditary defect’ of all philosophers.

MOTIVATIONAL ANALYSIS

Having long kept a strict eye on the philosophers, and having looked between their lines, I say to myself… most of a philosopher’s conscious thinking is secretly guided and channelled into particular tracks by his instincts. Behind all logic, too, and its apparent tyranny of movement there are value judgements, or to speak more clearly, physiological demands for the preservation of a particular kind of life. (§3)

A person’s theoretical beliefs are best explained, Nietzsche thinks, by evaluative beliefs, particular interpretations of certain values, e.g. that goodness is this and the opposite of badness. These values are best explained as ‘physiological demands for the preservation of a particular kind of life’. Nietzsche holds that each person has a particular psychophysical constitution, formed by both heredity and culture. […] Different values, and different interpretations of these values, support different ways of life, and so people are instinctively drawn to particular values and ways of understanding them. On the basis of these interpretations of values, people come to hold particular philosophical views. §2 has given us an illustration of this: philosophers come to hold metaphysical beliefs about a transcendent world, the ‘true’ and ‘good’ world, because they cannot believe that truth and goodness could originate in the world of normal experience, which is full of illusion, error, and selfishness. Therefore, there ‘must’ be a pure, spiritual world and a spiritual part of human beings, which is the origin of truth and goodness.

PHILOSOPHY AND VALUES

But ‘must’ there be a transcendent world? Or is this just what the philosopher wants to be true? Every great philosophy, claims Nietzsche, is ‘the personal confession of its author’ (§6). The moral aims of a philosophy are the ‘seed’ from which the whole theory grows. Philosophers pretend that their opinions have been reached by ‘cold, pure, divinely unhampered dialectic’ when in fact, they are seeking reasons to support their pre-existing commitment to ‘a rarefied and abstract version of their heart’s desire’ (§5), viz. that there is a transcendent world, and that good and bad, true and false are opposites. Consider: Many philosophical systems are of doubtful coherence, e.g. how could there be Forms, and if there were, how could we know about them? Or again, in §11, Nietzsche asks ‘how are synthetic a priori judgments possible?’. The term ‘synthetic a priori’ was invented by Kant. According to Nietzsche, Kant says that such judgments are possible, because we have a ‘faculty’ that makes them possible. What kind of answer is this?? Furthermore, no philosopher has ever been proved right (§25). Given the great difficulty of believing either in a transcendent world or in human cognitive abilities necessary to know about it, we should look elsewhere for an explanation of why someone would hold those beliefs. We can find an answer in their values. There is an interesting structural similarity between Nietzsche’s argument and Hume’s. Both argue that there is no rational explanation of many of our beliefs, and so they try to find the source of these beliefs outside or beyond reason. Hume appeals to imagination and the principle of ‘Custom’. Nietzsche appeals instead to motivation and ‘the bewitchment of language’ (see below). So Nietzsche argues that philosophy is not driven by a pure ‘will to truth’ (§1), to discover the truth whatever it may be. Instead, a philosophy interprets the world in terms of the philosopher’s values.
For example, the Stoics argued that we should live ‘according to nature’ (§9). But they interpret nature by their own values, as an embodiment of rationality. They do not see the senselessness, the purposelessness, the indifference of nature to our lives […]

THE BEWITCHMENT OF LANGUAGE

We said above that Nietzsche criticizes past philosophers on two grounds. We have looked at the role of motivation; the second ground is the seduction of grammar. Nietzsche is concerned with the subject-predicate structure of language, and with it the notion of a ‘substance’ (picked out by the grammatical ‘subject’) to which we attribute ‘properties’ (identified by the predicate). This structure leads us into a mistaken metaphysics of ‘substances’. In particular, Nietzsche is concerned with the grammar of ‘I’. We tend to think that ‘I’ refers to some thing, e.g. the soul. Descartes makes this mistake in his cogito – ‘I think’, he argues, refers to a substance engaged in an activity. But Nietzsche repeats the old objection that this is an illegitimate inference (§16) that rests on many unproven assumptions – that I am thinking, that some thing is thinking, that thinking is an activity (the result of a cause, viz. I), that an ‘I’ exists, that we know what it is to think. So the simple sentence ‘I think’ is misleading. In fact, ‘a thought comes when ‘it’ wants to, and not when ‘I’ want it to’ (§17). Even ‘there is thinking’ isn’t right: ‘even this ‘there’ contains an interpretation of the process and is not part of the process itself. People are concluding here according to grammatical habit’. But our language does not allow us just to say ‘thinking’ – this is not a whole sentence. We have to say ‘there is thinking’; so grammar constrains our understanding. Furthermore, Kant shows that rather than the ‘I’ being the basis of thinking, thinking is the basis out of which the appearance of an ‘I’ is created (§54). Once we recognise that there is no soul in a traditional sense, no ‘substance’, something constant through change, something unitary and immortal, ‘the way is clear for new and refined versions of the hypothesis about the soul’ (§12), that it is mortal, that it is multiplicity rather than identical over time, even that it is a social construct and a society of drives.

Nietzsche makes a similar argument about the will (§19). Because we have this one word ‘will’, we think that what it refers to must also be one thing. But the act of willing is highly complicated. First, there is an emotion of command, for willing is commanding oneself to do something, and with it a feeling of superiority over that which obeys. Second, there is the expectation that the mere commanding on its own is enough for the action to follow, which increases our sense of power. Third, there is obedience to the command, from which we also derive pleasure. But we ignore the feeling of compulsion, identifying the ‘I’ with the commanding ‘will’.

Nietzsche links the seduction of language to the issue of motivation in §20, arguing that ‘the spell of certain grammatical functions is the spell of physiological value judgements’. So even the grammatical structure of language originates in our instincts, different grammars contributing to the creation of favourable conditions for different types of life. So what values are served by these notions of the ‘I’ and the ‘will’? The ‘I’ relates to the idea that we have a soul, which participates in a transcendent world. It functions in support of the ascetic ideal. The ‘will’, and in particular our inherited conception of ‘free will’, serves a particular moral aim.

Hume and Nietzsche: Moral Psychology (short essay)
by epictetus_rex

1. Metaphilosophical Motivation

Both Hume and Nietzsche advocate a kind of naturalism. This is a weak naturalism, for it does not seek to give science authority over philosophical inquiry, nor does it commit itself to a specific ontological or metaphysical picture. Rather, it seeks to (a) place the human mind firmly in the realm of nature, as subject to the same mechanisms that drive all other natural events, and (b) investigate the world in a way that is roughly congruent with our best current conception(s) of nature […]

Furthermore, the motivation for this general position is common to both thinkers. Hume and Nietzsche saw old rationalist/dualist philosophies as both absurd and harmful: such systems were committed to extravagant and contradictory metaphysical claims which hinder philosophical progress. Furthermore, they alienated humanity from its position in nature—an effect Hume referred to as “anxiety”—and underpinned religious or “monkish” practises which greatly accentuated this alienation. Both Nietzsche and Hume believe quite strongly that coming to see ourselves as we really are will banish these bugbears from human life.

To this end, both thinkers ask us to engage in honest, realistic psychology. “Psychology is once more the path to the fundamental problems,” writes Nietzsche (BGE 23), and Hume agrees:

“the only expedient, from which we can hope for success in our philosophical researches, is to leave the tedious lingering method, which we have hitherto followed, and instead of taking now and then a castle or village on the frontier, to march up directly to the capital or center of these sciences, to human nature itself.” (T Intro)

2. Selfhood

Hume and Nietzsche militate against the notion of a unified self, both at-a-time and, a fortiori, over time.

Hume’s quest for a Newtonian “science of the mind” led him to classify all mental events as either impressions (sensory) or ideas (copies of sensory impressions, distinguished from the former by diminished vivacity or force). The self, or ego, as he says, is just “a kind of theatre, where several perceptions successively make their appearance; pass, re-pass, glide away, and mingle in an infinite variety of postures and situations. There is properly no simplicity in it at one time, nor identity in different; whatever natural propension we may have to imagine that simplicity and identity.” (Treatise 4.6) […]

For Nietzsche, the experience of willing lies in a certain kind of pleasure, a feeling of self-mastery and increase of power that comes with all success. This experience leads us to mistakenly posit a simple, unitary cause, the ego. (BGE 19)

The similarities here are manifest: our minds do not have any intrinsic unity to which the term “self” can properly refer, rather, they are collections or “bundles” of events (drives) which may align with or struggle against one another in a myriad of ways. Both thinkers use political models to describe what a person really is. Hume tells us we should “more properly compare the soul to a republic or commonwealth, in which the several members [impressions and ideas] are united by ties of government and subordination, and give rise to persons, who propagate the same republic in the incessant change of its parts” (T 261)

3. Action and The Will

Nietzsche and Hume attack the old platonic conception of a “free will” in lock-step with one another. This picture, roughly, involves a rational intellect which sits above the appetites and ultimately chooses which appetites will express themselves in action. This will is usually not considered to be part of the natural/empirical order, and it is this consequence which irks both Hume and Nietzsche, who offer two seamlessly interchangeable refutations […]

Since we are nothing above and beyond events, there is nothing for this “free will” to be: it is a causa sui, “a sort of rape and perversion of logic… the extravagant pride of man has managed to entangle itself profoundly and frightfully with just this nonsense” (BGE 21).

When they discover an erroneous or empty concept such as “Free will” or “the self”, Nietzsche and Hume engage in a sort of error-theorizing which is structurally the same. Peter Kail (2006) has called this a “projective explanation”, whereby belief in those concepts is “explained by appeal to independently intelligible features of psychology”, rather than by reference to the way the world really is.

The Philosophy of Mind
INSTRUCTOR: Larry Hauser
Chapter 7: Egos, bundles, and multiple selves

  • Who dat?  “I”
    • Locke: “something, I know not what”
    • Hume: the no-self view … “bundle theory”
    • Kant’s transcendental ego: a formal (nonempirical) condition of thought that the “I” must accompany every perception.
      • Intentional mental state: I think that snow is white.
        • to think: a relation between
          • a subject = “I”
          • a propositional content thought =  snow is white
      • Sensations: I feel the coldness of the snow.
        • to feel: a relation between
          • a subject = “I”
          • a quale = the cold-feeling
    • Friedrich Nietzsche
      • A thought comes when “it” will and not when “I” will. Thus it is a falsification of the evidence to say that the subject “I” conditions the predicate “think.”
      • It is thought, to be sure, but that this “it” should be that old famous “I” is, to put it mildly, only a supposition, an assertion. Above all it is not an “immediate certainty.” … Our conclusion is here formulated out of our grammatical custom: “Thinking is an activity; every activity presumes something which is active, hence ….” 
    • Lichtenberg: “it’s thinking” a la “it’s raining”
      • a mere grammatical requirement
      • no proof of a thinking self

[…]

  • Ego vs. bundle theories (Derek Parfit (1987))
    • Ego: “there really is some kind of continuous self that is the subject of my experiences, that makes decisions, and so on.” (95)
      • Religions: Christianity, Islam, Hinduism
      • Philosophers: Descartes, Locke, Kant & many others (the majority view)
    • Bundle: “there is no underlying continuous and unitary self.” (95)
      • Religion: Buddhism
      • Philosophers: Hume, Nietzsche, Lichtenberg, Wittgenstein, Kripke(?), Parfit, Dennett {a stellar minority}
  • Hume v. Reid
    • David Hume: For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure.  I never can catch myself at any time without a perception, and never can observe anything but the perception.  (Hume 1739, Treatise I, VI, iv)
    • Thomas Reid: I am not thought, I am not action, I am not feeling: I am something which thinks and acts and feels. (1785)

Dark Triad Domination

It has been noted that some indigenous languages have words that can be interpreted as what, in English, is referred to as psychopathic, sociopathic, narcissistic, Machiavellian, etc. This is the region of the Dark Triad. One Inuit language has the word ‘kunlangeta’, meaning “his mind knows what to do but he does not do it.” That could be thought of as describing a psychopath’s possession of cognitive empathy while lacking affective empathy. Or consider the Yoruba word ‘arankan’ that “is applied to a person who always goes his own way regardless of others, who is uncooperative, full of malice, and bullheaded.”

These are tribal societies. Immense value is placed on kinship loyalty, culture of trust, community survival, collective well-being, and public good. Even though they aren’t oppressive authoritarian states, the modern Western notion of hyper-individualism wouldn’t make much sense within these close-knit groups. Sacrifice of individual freedom and rights is a given under such social conditions, since individuals are intimately related to one another and physically dependent upon one another. Actually, it wouldn’t likely be experienced as sacrifice at all since it would simply be the normal state of affairs, the shared reality within which they exist — their identity being social rather than individual.

This got me thinking about psychopathy and modern society. Research has found that, at least in some Western countries, the rate of psychopathy is not only high in prison populations but equally as high among the economic and political elite. My father left upper management in a major corporation because of how ruthless the backstabbing was, a win-at-all-costs social Darwinism. This is what defines a country like the United States, as these social dominators are the most revered and emulated individuals. Psychopaths and such, instead of being eliminated or banished, are promoted and empowered.

What occurred to me is that the difference in tribal societies is that hyper-individualism is seen not only as abnormal but as dangerous and so intolerable. Maybe the heavy focus on individualism in the modern West inevitably leads to the psychopathological traits of the Dark Triad. As such, that would mean there is something severely abnormal and dysfunctional about Western societies (WEIRD: Western, Educated, Industrialized, Rich, Democratic). Psychopaths, in particular, are the ultimate individualists and so they will be the ultimate winners in an individualistic culture — their relentless confidence and ruthless competitiveness, their Machiavellian manipulations and persuasive charm supporting a narcissistic optimism and leading to success.

There are a couple of ways of looking at this. First off, there might be something about urbanization itself or a correlated factor that exacerbates mental illness. Studies have found, for example, an increase in psychosis across the recent generations of city-dwellers — precisely during the period of populations being further urbanized and concentrated. It reminds one of the study done on crowding large numbers of rats in a small contained cage until they turned anti-social, aggressive, and violent. If these rats were humans, we might describe this behavior in terms of psychopathy or sociopathy.

There is a second thing to consider, as discussed by Barbara Oakley in her book Evil Genes (pp. 265-6). About rural populations, she writes that, “Psychopathy is rare in those settings, notes psychologist David Cooke, who has studied psychopathy across cultures.” And she continues:

“But what about more urban environments? Cooke’s research has shown, surprisingly, that there are more psychopaths from Scotland in the prisons of England and Wales than there are in Scottish prisons. (Clearly, this is not to say that the Scottish are more given to psychopathy than anyone else.) Studies of migration records showed that many Scottish psychopaths had migrated to the more populated metropolitan areas of the south. Cooke hypothesized that, in the more crowded metropolitan areas, the psychopath could attack or steal with little danger that the victim would recognize or catch him. Additionally, the psychopath’s impulsivity and need for stimulation could also play a role in propelling the move to the dazzling delights of the big city — he would have no affection for family and friends to keep him tethered back home. Densely populated areas, apparently, are the equivalent for psychopaths of ponds and puddles for malarial mosquitoes.”

As Oakley’s book is on genetics, she goes in an unsurprising direction in pointing out how some violent individuals have been able to pass on their genetics to large numbers of descendants. The most famous example being Genghis Khan. She writes that (p. 268),

“These recent discoveries reinforce the findings of the anthropologist Laura Betzig. Her 1986 Despotism and Differential Reproduction provides a cornucopia of evidence documenting the increased capacity of those with more power — and frequently, Machiavellian tendencies — to have offspring. […] As Machiavellian researcher Richard Christie and his colleague Florence Geis aptly note: “[H]igh population density and highly competitive environments have been found to increase the use of antisocial and Machiavellian strategies, and may in fact foster the ability of those who possess those strategies to reproduce.” […] Betzig’s ultimate point is not that the corrupt attain power but that those corrupted individuals who achieved power in preindustrial agricultural societies had far more opportunity to reproduce, generally through polygyny, and pass on their genes. In fact, the more Machiavellian, that is, despotic, a man might be, the more polygynous he tended to be — grabbing and keeping for himself as many beautiful women as he could. Some researchers have posited that envy is itself a useful, possibly genetically linked trait, “serving a key role in survival, motivating achievement, serving the conscience of self and other, and alerting us to inequities that, if fueled, can lead to escalated violence.” Thus, genes related to envy — not to mention other more problematic temperaments — might have gradually found increased prevalence in such environments.”

That kind of genetic hypothesis is highly speculative, to say the least. There could be some truth value in such hypotheses, if one wanted to give the benefit of the doubt, but we have no direct evidence that such is the case. At present, these speculations are yet more just-so stories and they will remain so until we can better control confounding factors in order to directly ascertain causal factors. Anyway, genetic determinism in this simplistic sense is largely moot at this point, as the science is moving on into new understandings. Besides being unhelpful, such speculations are unnecessary. We already have plenty of social science research that proves changing environmental conditions alter social behavior — besides what I’ve already mentioned, there are such examples as the fascinating rat park research. There is no debate to be had about the immense influence of external factors, such as socioeconomic class and high inequality: Power Causes Brain Damage by Justin Renteria, How Wealth Reduces Compassion by Daisy Grewal, Got Money? Then You Might Lack Compassion by Jeffrey Kluger, Why the Rich Don’t Give to Charity by Ken Stern, Rich People Literally See the World Differently by Drake Baer, The rich really DO ignore the poor by Cheyenne Macdonald, Propagandopoly: Monopoly as an Ideological Tool by Naomi Russo, A ‘Rigged’ Game Of Monopoly Reveals How Feeling Wealthy Changes Our Behavior [TED VIDEO] by Planetsave, etc.

Knowing the causes is important. But knowing the consequences is just as important. No matter what increases Dark Triad behaviors, they can have widespread and long-lasting repercussions, maybe even permanently altering entire societies in how they function. Following her speculations, Oakley gets down to the nitty-gritty (p. 270):

“Questions we might reasonably ask are — has the percentage of Machiavellians and other more problematic personality types increased in the human population, or in certain human populations, since the advent of agriculture? And if the answer is yes, does the increase in these less savory types change a group’s culture? In other words, is there a tipping point of Machiavellian and emote control behavior that can subtly or not so subtly affect the way the members of a society interact? Certainly a high expectation of meeting a “cheater,” for example, would profoundly impact the trust that appears to form the grease of modern democratic societies and might make the development of democratic processes in certain areas more difficult. Crudely put, an increase in successfully sinister types from 2 percent, say, to 4 percent of a population would double the pool of Machiavellians vying for power. And it is the people in power who set the emotional tone, perhaps through mirroring and emotional contagion, for their followers and those around them. As Judith Rich Harris points out, higher-status members of a group are looked at more, which means they have more influence on how a person becomes socialized.”

The key factor in much of this seems to be concentration. Simply concentrating populations, humans or rats, leads to social problems related to mental health issues. On top of that, there is the troubling concern of what kind of people are being concentrated and where they are being concentrated — psychopaths being concentrated not only in big cities and prisons but worse still in positions of wealth and power, authority and influence. We live in a society that creates the conditions for the Dark Triad to increase and flourish. This is how the success of those born psychopaths encourages others to follow their example in developing into sociopaths, which in turn makes the Dark Triad mindset into a dominant ethos within mainstream culture.

The main thing on my mind is individualism. It’s been on my mind a lot lately, such as in terms of the bundle theory of the mind and the separate individual, connected to my long-term interest in community and the social nature of humans. In relation to individualism, there is the millennia-old cultural divide between Germanic ‘freedom’ and Roman ‘liberty’. But because Anglo-American society mixed up the two, this became incorrectly framed by Isaiah Berlin in terms of positive and negative. In Contemporary Political Theory, J. C. Johari writes that (p. 266), “Despite this all, it may be commented that though Berlin advances the argument that the two aspects of liberty cannot be so distinguished in practical terms, one may differ from him and come to hold that his ultimate preference is for the defence of the negative view of liberty. Hence, he obviously belongs to the category of Mill and Hayek.” He states this “is evident from his emphatic affirmation” in the following assertion by Berlin:

“The fundamental sense of freedom is freedom from chains, from imprisonment, from enslavement by others. The rest is extension of this sense or else metaphor. To strive to be free is to seek to remove obstacles; to struggle for personal freedom is to seek to curb interference, exploitation, enslavement by men whose ends are theirs, not one’s own. Freedom, at least in its political sense, is coterminous with the absence of bullying or domination.”

Berlin makes a common mistake here. Liberty was defined by not being a slave in a slave-based society, which is what existed in the Roman Empire. But that isn’t freedom, an entirely different term with an etymology related to ‘friend’ and with a meaning that indicated membership in an autonomous community — such freedom meant not being under the oppression of a slave-based society (e.g., German tribes remaining independent of the Roman Empire). Liberty, not freedom, was determined by one’s individual status of lacking oppression in an oppressive social order. This is why liberty has a negative connotation for it is what you lack, rather than what you possess. A homeless man starving alone on the street with no friend in the world to help him and no community to support him, such a man has liberty but not freedom. He is ‘free’ to do what he wants under those oppressive conditions and constraints, as no one is physically detaining him.

This notion of liberty has had a purchase on the American mind because of the history of racial and socioeconomic oppression. After the Civil War, blacks had negative liberty in no longer being slaves but they definitely did not have positive freedom through access to resources and opportunities, instead being shackled by systemic and institutional racism that maintained their exploited status as a permanent underclass — along with slavery overtly continuing in other forms through false criminal charges leading to prison labor, such that the criminal charges justified blaming the individual for their own lack of freedom which maintained the outward perception of negative liberty. Other populations such as Native Americans faced a similar dilemma. But is one actually free when the chains holding one down are invisible but still all too real? If liberty is an abstraction detached from lived experience and real world results, of what value is such liberty? The nature of negative liberty has always had a superficial and illusory quality about it in how it is maintained through public narrative. Unlike freedom, liberty as a social construct is highly effective as a tool for social control and oppression.

This point is made by another critic of Berlin’s perspective. “It is hard for me to see that Berlin is consistent on this point,” writes L. H. Crocker (Positive Liberty, p. 69). “Surely not all alterable human failures to open doors are cases of bullying. After all, it is often through neglect that opportunities fail to be created for the disadvantaged. It is initially more plausible that all failures to open doors are the result of domination in some sense or another.” I can’t help but think that Dark Triad individuals would feel right at home in a culture of liberty where individuals have the ‘freedom’ to oppress and be oppressed. Embodying this sick mentality, Margaret Thatcher once gave perfect voice to the sociopathic worldview — speaking of the victims of disadvantage and desperation, she claimed that, “They’re casting their problem on society. And, you know, there is no such thing as society.” That is to say, there is no freedom.

The question, then, is whether or not we want freedom. A society is only free to the degree that as a society freedom is demanded. To deny society itself is an attempt to deny the very basis of freedom, but that is just a trick of rhetoric. A free people know their own freedom by acting freely, even if that means fighting the oppressors who seek to deny that freedom. Thatcher intentionally conflated society and government, something never heard in the clear-eyed wisdom of a revolutionary social democrat like Thomas Paine: “Society in every state is a blessing, but government, even in its best stage, is but a necessary evil; in its worst state an intolerable one.” These words expressed the values of negative liberty as they made perfect sense for someone living in an empire built on colonialism, corporatism, and slavery. But the same words gave hint to a cultural memory of Germanic positive freedom. It wasn’t a principled libertarian hatred of governance but rather a principled radical protest against a sociopathic social order. As Paine made clear, this unhappy situation wasn’t the natural state of humanity, neither inevitable nor desirable, much less tolerable.

The Inuits would find a way for psychopaths to ‘accidentally’ fall off the ice, never to trouble the community again. As for the American revolutionaries, they preferred more overt methods, from tar and feathering to armed revolt. So, now to regain our freedom as a people, what recourse do we have in abolishing the present Dark Triad domination?

* * *

Here are some pieces on individualism and community, as contrasted between far different societies. These involve issues of mental health (from depression to addiction), and social problems (from authoritarianism to capitalist realism) — as well as other topics, including carnival and revolution.

Self, Other, & World

Retrieving the Lost Worlds of the Past:
The Case for an Ontological Turn
by Greg Anderson

“[…] This ontological individualism would have been scarcely intelligible to, say, the inhabitants of precolonial Bali or Hawai’i, where the divine king or chief, the visible incarnation of the god Lono, was “the condition of possibility of the community,” and thus “encompasse[d] the people in his own person, as a projection of his own being,” such that his subjects were all “particular instances of the chief’s existence.” 12 It would have been barely imaginable, for that matter, in the world of medieval Europe, where conventional wisdom proverbially figured sovereign and subjects as the head and limbs of a single, primordial “body politic” or corpus mysticum. 13 And the idea of a natural, presocial individual would be wholly confounding to, say, traditional Hindus and the Hagen people of Papua New Guinea, who objectify all persons as permeable, partible “dividuals” or “social microcosms,” as provisional embodiments of all the actions, gifts, and accomplishments of others that have made their lives possible.

“We alone in the modern capitalist west, it seems, regard individuality as the true, primordial estate of the human person. We alone believe that humans are always already unitary, integrated selves, all born with a natural, presocial disposition to pursue a rationally calculated self-interest and act competitively upon our no less natural, no less presocial rights to life, liberty, and private property. We alone are thus inclined to see forms of sociality, like relations of kinship, nationality, ritual, class, and so forth, as somehow contingent, exogenous phenomena, not as essential constituents of our very subjectivity, of who or what we really are as beings. And we alone believe that social being exists to serve individual being, rather than the other way round. Because we alone imagine that individual humans are free-standing units in the first place, “unsocially sociable” beings who ontologically precede whatever “society” our self-interest prompts us to form at any given time.”

What Kinship Is-And Is Not
by Marshall Sahlins, p. 2

“In brief, the idea of kinship in question is “mutuality of being”: people who are intrinsic to one another’s existence— thus “mutual person(s),” “life itself,” “intersubjective belonging,” “transbodily being,” and the like. I argue that “mutuality of being” will cover the variety of ethnographically documented ways that kinship is locally constituted, whether by procreation, social construction, or some combination of these. Moreover, it will apply equally to interpersonal kinship relations, whether “consanguineal” or “affinal,” as well as to group arrangements of descent. Finally, “mutuality of being” will logically motivate certain otherwise enigmatic effects of kinship bonds— of the kind often called “mystical”— whereby what one person does or suffers also happens to others. Like the biblical sins of the father that descend on the sons, where being is mutual, there experience is more than individual.”

Music and Dance on the Mind

We aren’t as different from ancient humanity as it might seem. Our societies have changed drastically, suppressing old urges and potentialities. Yet the same basic human nature still lurks within us, hidden in the underbrush along the well-trod paths of the mind. The hive mind is what the human species naturally falls back upon, from millennia of collective habit. The problem we face is that we’ve lost the ability to express our natural predisposition toward group-mindedness well, too easily getting locked into groupthink, a tendency easily manipulated.

Considering this, we have good reason to be wary, not knowing what we could tap into. We don’t understand our own minds and so we naively underestimate the power of humanity’s social nature. With the right conditions, hiving is easy to elicit but hard to control or shut down. The danger is that the more we idolize individuality the more prone we become to what is so far beyond the individual. It is the glare of hyper-individualism that casts the shadow of authoritarianism.

Pacifiers, Individualism & Enculturation

I’ve often thought that individualism, in particular hyper-individualism, isn’t the natural state of human nature. By this, I mean that it isn’t how human nature manifested for the hundreds of thousands of years prior to modern Western civilization. Julian Jaynes theorizes that, even in early Western civilization, humans didn’t have a clear sense of separate individuality. He points out that in the earliest literature humans were all the time hearing voices outside of themselves (giving them advice, telling them what to do, making declarations, chastising them, etc.), maybe not unlike the way we hear a voice in our head.

We moderns have internalized those external voices of collective culture. This seems normal to us. This is not just about pacifiers. It’s about technology in general. The most profound technology ever invented was written text (along with the binding of books and the printing press). All the time I see my little niece absorbed in a book, even though she can’t yet read. Like pacifiers, books are tools of enculturation that help create the individual self. Instead of mommy’s nipple, the baby soothes itself with a pacifier. Instead of voices in the world, the child becomes focused on text. In both cases, it is a process of internalizing.

All modern civilization is built on this process of individualization. I don’t know if it is overall good or bad. I’m sure much of our destructive tendencies are caused by the relationship between individualization and objectification. Nature as a living world that could speak to us has become mere matter without mind or soul. So, the cost of this process has been high… but then again, the innovative creativeness has exploded as this individualizing process has increasingly taken hold in recent centuries.

“illusion of a completed, unitary self”

The Voices Within: The History and Science of How We Talk to Ourselves
by Charles Fernyhough, Kindle Locations 3337-3342

“And we are all fragmented. There is no unitary self. We are all in pieces, struggling to create the illusion of a coherent “me” from moment to moment. We are all more or less dissociated. Our selves are constantly constructed and reconstructed in ways that often work well, but often break down. Stuff happens, and the center cannot hold. Some of us have more fragmentation going on, because of those things that have happened; those people face a tougher challenge of pulling it all together. But no one ever slots in the last piece and makes it whole. As human beings, we seem to want that illusion of a completed, unitary self, but getting there is hard work. And anyway, we never get there.”

Delirium of Hyper-Individualism

Individualism is a strange thing. For anyone who has spent much time meditating, it’s obvious that there is no there there. It slips through one’s grasp like an ancient philosopher trying to study aether. The individual self is the modernization of the soul. Like the ghost in the machine and the god in the gaps, it is a theological belief defined by its absence in the world. It’s a social construct, a statement that is easily misunderstood.

In modern society, individualism has been raised up to an entire ideological worldview. It is all-encompassing, having infiltrated nearly every aspect of our social lives and become internalized as a cognitive frame. Traditional societies didn’t have this obsession with an idealized self as isolated and autonomous. Go back far enough and the records seem to show societies that didn’t even have a concept, much less an experience, of individuality.

Yet for all its dominance, the ideology of individualism is superficial. It doesn’t explain much of our social order and personal behavior. We don’t act as if we actually believe in it. It’s a convenient fiction that we so easily disregard when inconvenient, as if it isn’t all that important after all. In our most direct experience, individuality simply makes no sense. We are social creatures through and through. We don’t know how to be anything else, no matter what stories we tell ourselves.

The ultimate value of this individualistic ideology is, ironically, as social control and social justification.

It’s All Your Fault, You Fat Loser!

Capitalist Realism: Is there no alternative?
By Mark Fisher, pp. 18-20

“[…] In what follows, I want to stress two other aporias in capitalist realism, which are not yet politicized to anything like the same degree. The first is mental health. Mental health, in fact, is a paradigm case of how capitalist realism operates. Capitalist realism insists on treating mental health as if it were a natural fact, like weather (but, then again, weather is no longer a natural fact so much as a political-economic effect). In the 1960s and 1970s, radical theory and politics (Laing, Foucault, Deleuze and Guattari, etc.) coalesced around extreme mental conditions such as schizophrenia, arguing, for instance, that madness was not a natural, but a political, category. But what is needed now is a politicization of much more common disorders. Indeed, it is their very commonness which is the issue: in Britain, depression is now the condition that is most treated by the NHS. In his book The Selfish Capitalist, Oliver James has convincingly posited a correlation between rising rates of mental distress and the neoliberal mode of capitalism practiced in countries like Britain, the USA and Australia. In line with James’s claims, I want to argue that it is necessary to reframe the growing problem of stress (and distress) in capitalist societies. Instead of treating it as incumbent on individuals to resolve their own psychological distress, instead, that is, of accepting the vast privatization of stress that has taken place over the last thirty years, we need to ask: how has it become acceptable that so many people, and especially so many young people, are ill? The ‘mental health plague’ in capitalist societies would suggest that, instead of being the only social system that works, capitalism is inherently dysfunctional, and that the cost of it appearing to work is very high.”

There is always an individual to blame. It sucks to be an individual these days, I tell ya. I should know because I’m one of those faulty miserable individuals. I’ve been one my whole life. If it weren’t for all of us pathetic and depraved individuals, capitalism would be utopia. I beat myself up all the time for failing the great dream of capitalism. Maybe I need to buy more stuff.

“The other phenomenon I want to highlight is bureaucracy. In making their case against socialism, neoliberal ideologues often excoriated the top-down bureaucracy which supposedly led to institutional sclerosis and inefficiency in command economies. With the triumph of neoliberalism, bureaucracy was supposed to have been made obsolete; a relic of an unlamented Stalinist past. Yet this is at odds with the experiences of most people working and living in late capitalism, for whom bureaucracy remains very much a part of everyday life. Instead of disappearing, bureaucracy has changed its form; and this new, decentralized, form has allowed it to proliferate. The persistence of bureaucracy in late capitalism does not in itself indicate that capitalism does not work – rather, what it suggests is that the way in which capitalism does actually work is very different from the picture presented by capitalist realism.”

Neoliberalism: Dream & Reality

[…] in the book Capitalist Realism by Mark Fisher (p. 20):

“[…] But incoherence at the level of what Brown calls ‘political rationality’ does nothing to prevent symbiosis at the level of political subjectivity, and, although they proceeded from very different guiding assumptions, Brown argues that neoliberalism and neoconservatism worked together to undermine the public sphere and democracy, producing a governed citizen who looks to find solutions in products, not political processes. As Brown claims,

“the choosing subject and the governed subject are far from opposites … Frankfurt school intellectuals and, before them, Plato theorized the open compatibility between individual choice and political domination, and depicted democratic subjects who are available to political tyranny or authoritarianism precisely because they are absorbed in a province of choice and need-satisfaction that they mistake for freedom.”

“Extrapolating a little from Brown’s arguments, we might hypothesize that what held the bizarre synthesis of neoconservatism and neoliberalism together was their shared objects of abomination: the so called Nanny State and its dependents. Despite evincing an anti-statist rhetoric, neoliberalism is in practice not opposed to the state per se – as the bank bail-outs of 2008 demonstrated – but rather to particular uses of state funds; meanwhile, neoconservatism’s strong state was confined to military and police functions, and defined itself against a welfare state held to undermine individual moral responsibility.”

[…] what Robin describes touches upon my recent post about the morality-punishment link. As I pointed out, the world of Star Trek: Next Generation imagines the possibility of a social order that serves humans, instead of the other way around. I concluded that, “Liberals seek to promote freedom, not just freedom to act but freedom from being punished for acting freely. Without punishment, though, the conservative sees the world lose all meaning and society to lose all order.” The neoliberal vision subordinates the individual to the moral order. The purpose of forcing the individual into a permanent state of anxiety and fear is to preoccupy their minds and their time, to redirect all the resources of the individual back into the system itself. The emphasis on the individual isn’t because individualism is important as a central ideal but because the individual is the weak point that must be carefully managed. Also, focusing on the individual deflects our gaze from the structure and its attendant problems.

This brings me to how this relates to corporations in neoliberalism (Fisher, pp. 69-70):

“For this reason, it is a mistake to rush to impose the individual ethical responsibility that the corporate structure deflects. This is the temptation of the ethical which, as Žižek has argued, the capitalist system is using in order to protect itself in the wake of the credit crisis – the blame will be put on supposedly pathological individuals, those ‘abusing the system’, rather than on the system itself. But the evasion is actually a two step procedure – since structure will often be invoked (either implicitly or openly) precisely at the point when there is the possibility of individuals who belong to the corporate structure being punished. At this point, suddenly, the causes of abuse or atrocity are so systemic, so diffuse, that no individual can be held responsible. This was what happened with the Hillsborough football disaster, the Jean Charles De Menezes farce and so many other cases. But this impasse – it is only individuals that can be held ethically responsible for actions, and yet the cause of these abuses and errors is corporate, systemic – is not only a dissimulation: it precisely indicates what is lacking in capitalism. What agencies are capable of regulating and controlling impersonal structures? How is it possible to chastise a corporate structure? Yes, corporations can legally be treated as individuals – but the problem is that corporations, whilst certainly entities, are not like individual humans, and any analogy between punishing corporations and punishing individuals will therefore necessarily be poor. And it is not as if corporations are the deep-level agents behind everything; they are themselves constrained by/ expressions of the ultimate cause-that-is-not-a-subject: Capital.”

Sleepwalking Through Our Dreams

The modern self is not normal, by historical and evolutionary standards. Extremely unnatural and unhealthy conditions have developed, and our minds have correspondingly grown malformed, like feet bound from childhood. Our hyper-individuality is built on disconnection and, in place of human connection, we take on various addictions, not just to drugs and alcohol but also to work, consumerism, entertainment, social media, and on and on. The more we cling to an unchanging sense of bounded self, the more burdened we become trying to hold it all together, hunched over with the load we carry on our shoulders. We are possessed by the identities we possess.

This addiction angle interests me. Our addictions are the result of our isolated selves. Yet even as addiction attempts to fill the emptiness, reaching out beyond ourselves toward something, anything, in a compulsive relationship devoid of the human, it isolates us further. As Johann Hari explained in Chasing the Scream (Kindle Locations 3521-3544):

There were three questions I had never understood. Why did the drug war begin when it did, in the early twentieth century? Why were people so receptive to Harry Anslinger’s message? And once it was clear that it was having the opposite effect to the one that was intended— that it was increasing addiction and supercharging crime— why was it intensified, rather than abandoned?

I think Bruce Alexander’s breakthrough may hold the answer.

“Human beings only become addicted when they cannot find anything better to live for and when they desperately need to fill the emptiness that threatens to destroy them,” Bruce explained in a lecture in London in 2011. “The need to fill an inner void is not limited to people who become drug addicts, but afflicts the vast majority of people of the late modern era, to a greater or lesser degree.”

A sense of dislocation has been spreading through our societies like a bone cancer throughout the twentieth century. We all feel it: we have become richer, but less connected to one another. Countless studies prove this is more than a hunch, but here’s just one: the average number of close friends a person has has been steadily falling. We are increasingly alone, so we are increasingly addicted. “We’re talking about learning to live with the modern age,” Bruce believes. The modern world has many incredible benefits, but it also brings with it a source of deep stress that is unique: dislocation. “Being atomized and fragmented and all on [your] own— that’s no part of human evolution and it’s no part of the evolution of any society,” he told me.

And then there is another kicker. At the same time that our bonds with one another have been withering, we are told— incessantly, all day, every day, by a vast advertising-shopping machine— to invest our hopes and dreams in a very different direction: buying and consuming objects. Gabor tells me: “The whole economy is based around appealing to and heightening every false need and desire, for the purpose of selling products. So people are always trying to find satisfaction and fulfillment in products.” This is a key reason why, he says, “we live in a highly addicted society.” We have separated from one another and turned instead to things for happiness— but things can only ever offer us the thinnest of satisfactions.

This is where the drug war comes in. These processes began in the early twentieth century— and the drug war followed soon after. The drug war wasn’t just driven, then, by a race panic. It was driven by an addiction panic— and it had a real cause. But the cause wasn’t a growth in drugs. It was a growth in dislocation.

The drug war began when it did because we were afraid of our own addictive impulses, rising all around us because we were so alone. So, like an evangelical preacher who rages against gays because he is afraid of his own desire to have sex with men, are we raging against addicts because we are afraid of our own growing vulnerability to addiction?

In The Secret Life of Puppets, Victoria Nelson makes some useful observations about reading addiction, specifically in terms of formulaic genres. She discusses Sigmund Freud’s repetition compulsion and Lenore Terr’s post-traumatic games. She sees genre reading as a ritual-like enactment that can’t lead to resolution, and so the addictive behavior becomes entrenched. This would apply to many other forms of entertainment and consumption. And it fits into Derrick Jensen’s discussion of abuse, trauma, and the victimization cycle.

I would broaden her argument in another way. People have feared the written text ever since it was invented. In the 18th century, there took hold a moral panic about reading addiction in general and that was before any fiction genres had developed (Frank Furedi, The Media’s First Moral Panic). The written word is unchanging and so creates the conditions for repetition compulsion. Every time a text is read, it is the exact same text.

That is far different from oral societies. And it is quite telling that oral societies have a much more fluid sense of self. The Piraha, for example, don’t cling to their own sense of self or to that of others. When a Piraha individual is possessed by a spirit or meets a spirit who gives them a new name, the self that was there is no longer there. When asked where that person is, the Piraha will say that he or she isn’t there, even if the individual’s body is standing right there in front of them. They also don’t have a storytelling tradition or concern for the past.

Another thing that the Piraha apparently lack is mental illness, specifically depression along with suicidal tendencies. According to Barbara Ehrenreich from Dancing in the Streets, there wasn’t much written about depression even in the Western world until the suppression of religious and public festivities, such as Carnival. One of the most important aspects of Carnival and similar festivities was the masking, shifting, and reversal of social identities. Along with this, there was the losing of individuality within the group. And during the Middle Ages, an amazing number of days in the year were dedicated to communal celebrations. The ending of this era coincided with numerous societal changes, including the increase of literacy with the spread of the movable type printing press.

Another thing happened with the suppression of festivities. Local community began to break down as power became centralized in far-off places and the classes became divided, which Ehrenreich details. The aristocracy used to be inseparable from their feudal roles, and this meant participating in local festivities where, as part of the celebration, a king might wrestle with a blacksmith. As the divides between people grew into vast chasms, the social identities held and the social roles played became hardened into place. This went along with a growing inequality of wealth and power. And as research has shown, wherever there is inequality, there are also high rates of social problems and mental health issues.

It’s maybe unsurprising that what followed from this was colonial imperialism and a racialized social order, class conflict and revolution. A society formed that was simultaneously rigid in certain ways and destabilized in others. Individuals became increasingly atomized and isolated. With the loss of kinship and community, the cheap replacement we got is identity politics. The natural human bonds are lost or constrained. Social relations are narrowed down. Correspondingly, our imaginations are hobbled and we can’t envision society being any other way. Most tragic of all, we forget that human society used to be far different, a collective amnesia forcing us into a collective trance. Our entire sense of reality is held in the vise grip of the historical moment we find ourselves in.

Social Conditions of an Individual’s Condition

A wide variety of research and data points to a basic conclusion. Environmental conditions (physical, social, political, and economic) are of paramount importance. So, why do we treat as sick individuals those who suffer the consequences of the externalized costs of society?

Here is the sticking point. Systemic and collective problems are in some ways the easiest to deal with. The problems, once understood, are essentially simple and their solutions tend to be straightforward. Even so, the very largeness of these problems makes them hard for us to confront. We want someone to blame. But who do we blame when the entire society is dysfunctional?

If we recognize the problems as symptoms, we are forced to acknowledge our collective agency and shared fate. Those who understand this are up against countervailing forces that maintain the status quo. Even if a psychiatrist realizes that their patient is experiencing the symptoms of larger social issues, how is that psychiatrist supposed to help the patient? Who is going to diagnose the entire society and demand it seek rehabilitation?

Winter Season and Holiday Spirit

With this revelry and reversal come licentiousness and transgression, drunkenness and bawdiness, fun and games, song and dance, feasting and festival. It is a time for celebration of this year’s harvest and blessing of next year’s harvest. Bounty and community. Death and rebirth. The old year must be brought to a close and the new year welcomed. This is the period when gods, ancestors, spirits, and demons must be solicited, honored, appeased, or driven out. The noise of song, gunfire, and such serves many purposes.

In the heart of winter, some of the most important religious events took place. This includes Christmas, of course, but also the various celebrations around the same time. A particular winter festival season began on All Hallows’ Eve (i.e., Halloween) and ended with Twelfth Night. This included carnival-like revelry and a Lord of Misrule. There was also the tradition of going house to house, of singing and pranks, of demanding treats or gifts, with threats if they weren’t forthcoming. It was a time of community and sharing, and those who didn’t willingly participate might be punished. Winter, a harsh time of need, was when the group took precedence. […]

I’m also reminded of Santa Claus as St. Nick. This invokes an image of jollity and generosity. And it connects to wintertime as a period of community needs and interdependence, of sharing and gifting, of hospitality and kindness. This included the enforcement of social norms, which could easily transform into the challenging of social norms.

It’s maybe in this context that we should think of the masked vigilantes participating in the Boston Tea Party. As with carnival, a tradition of politics out-of-doors had developed, often occurring on the town commons. And on those town commons, large trees became identified as liberty trees — under which people gathered, upon which notices were nailed, and sometimes where effigies were hung. This was an old tradition that originated in Northern Europe, where a tree was the center of a community, the place of law-giving and community decision-making. In Europe, the commons had become the place of festivals and celebrations, such as carnival. And so the commons came to be the site of revolutionary fervor as well.

The most famous Liberty Tree was a great elm near Boston Common. Many consider it the birthplace of the American Revolution, as it was the site of early acts of defiance. This is where the Sons of Liberty met, organized, and protested, which would eventually lead to that even greater act of defiance on Saturnalia eve, the Boston Tea Party. One of the participants in the Boston Tea Party and later in the Revolutionary War, Samuel Sprague, is buried in the Boston Common.

There is something many don’t understand about the American Revolution. It wasn’t so much a fight against oppression in general, and certainly not about mere taxation in particular. What angered those Bostonians and many other colonists was that they had become accustomed to community-centered self-governance and this was being challenged. The tea tax wasn’t just an imposition of imperial power but also of colonial corporatism. The East India Company was not acting as a moral member of the community, taking advantage by monopolizing trade. Winter had long been the time of year when bad actors in the community would be punished. Selfishness was not to be tolerated.

Those Boston Tea Partiers were simply teaching a lesson about the Christmas spirit. And in the festival tradition, they chose the guise of Native Americans, which to their minds would have symbolized freedom and an inversion of power. What revolution meant to them was a demand for the return of what had been taken from them, making the world right again. It was revelry with a purpose.

* * *

As addiction is key, below is some further material on individualism and social problems, mental health and abnormal psychology. It seems that high rates of addiction are caused by the same or related factors involved in depression, anxiety, the dark triad, etc. It’s a pattern of dysfunction found most strongly in WEIRD societies and increasingly in other developed societies, such as Japan as its traditional social order breaks down (e.g., the increasing number of elderly Japanese dying alone and forgotten). This pattern is seen most clearly in the weirdest of the WEIRD, such as with sociopathic organizations like Amazon, which I’d bet has a high prevalence of addiction among its employees.

Drug addiction makes possible human adaptation to inhuman conditions. It’s part of a victimization cycle that allows victimizers to not only take power but to enforce the very conditions of victimization. The first step is isolating the victim by creating a fractured society of dislocation, disconnection, and division. Psychopaths rule by imposing a sociopathic social order, a sociopathic economic and political system. This is the environment in which the dark triad flourishes and, in coping with the horror of it, so many turn to addiction to numb the pain and distress, anxiety and fear. Addiction is the ‘normal’ state of existence under the isolated individualism of social Darwinism and late stage capitalism.

Addiction is the expression of disconnection, the embodiment of isolation. Without these anti-social conditions, the dark triad could never take hold and dominate all of society.

“The opposite of addiction is not sobriety. The opposite of addiction is connection.”
~ Johann Hari

“We are all so much together, but we are all dying of loneliness.”
~ Albert Schweitzer

The New Individualism
by Anthony Elliott and Charles Lemert
pp. 117-118

Giddens tells us that reflexivity, powered by processes of globalization, stands closest to autonomy. In a world in which tradition has more thoroughly been swept away than ever before, contingency appears unavoidable. And with contingency comes the potential to remake the world and negotiate lifestyle options — about who to be, how to act, whom to love and how to live together. The promised autonomy of reflexivity is, however, also a problem, since choice necessarily brings with it ambivalence, doubt and uncertainty. There is no way out of this paradox, though of the various, necessarily unsuccessful, attempts people make to avoid the dilemmas of reflexivity Giddens identifies ‘addiction’ as being of key importance to the present age. As he writes:

Once institutional reflexivity reaches into virtually all parts of everyday social life, almost any pattern or habit can become an addiction. The idea of addiction makes little sense in a traditional culture, where it is normal to do today what one did yesterday . . . Addictions, then, are a negative index of the degree to which the reflexive project of the self moves to the centre-stage in late modernity.

Reflexivity’s promise of freedom carries with it the burden of continual choice and deals with all the complexities of emotional life. ‘Every addiction’, writes Giddens, ‘is a defensive reaction, and an escape, a recognition of lack of autonomy that casts a shadow over the competence of the self.’

How Individualism Undermines Our Health Care
from Shared Justice

Addictions Originate in Unhappiness—and Compassion Could Be the Cure
by Gabor Maté

Dislocation Theory of Addiction
by Bruce K. Alexander

Addiction, Environmental Crisis, and Global Capitalism
by Bruce K. Alexander

Healing Addiction Through Community: A Much Longer Road Than it Seems?
by Bruce K. Alexander

What Lab Experiments Can Tell Us About the Cause and Cure for Addiction
by Mark

#7 Theory of Dislocation
by Ross Banister

‘The globalisation of addiction’ by Bruce Alexander
review by Mike Jay

The cost of the loneliness epidemic
from Broccoli & Brains

The Likely Cause of Addiction Has Been Discovered, and It Is Not What You Think
by Johann Hari

The Politics of Loneliness
by Michael Bader

America’s deadly epidemic of loneliness
by Michael Bader

Addiction and Modernity: A Comment on a Global Theory of Addiction
by Robert Granfield

The Addicted Narcissist: How Substance Addiction Contributes to Pathological Narcissism With Implications for Treatment
by Kim Laurence

Edge of the Depths

“In Science there are no ‘depths’; there is surface everywhere.”
~ Rudolf Carnap

I was reading Richard S. Hallam’s Virtual Selves, Real Persons. I’ve enjoyed it, but I find a point of disagreement, or maybe merely of doubt and questioning. He emphasizes persons as being real, in that they are somehow pre-existing and separate. He distinguishes the person from selves, although this distinction isn’t necessarily relevant to my thoughts here.

I’m not sure to what degree our views diverge, as I find much of the text insightful and a wonderful overview. However, to demonstrate my misgivings: the author mentions David Hume’s bundle theory only a couple of times on a few pages (in a several-hundred-page book), a rather slight discussion for such a key perspective. He does give a bit more space to Julian Jaynes’ bicameral theory, but even Jaynes is confined to one fairly small section and not fully integrated into the author’s larger analysis.

The commonality between Hume and Jaynes is that both perceived conscious identity as being more nebulous — no there there. In my own experience, that feels more right to me. As one dives down into the psyche, the waters become quite murky, so dark that one can’t even see one’s hands in front of one’s face, much less know what one might be attempting to grasp. Notions of separateness, at a great enough depth, fade away — one finds oneself floating in darkness with no certain sense of distance or direction. I don’t know how to explain this to anyone who hasn’t experienced altered states of mind, from extended meditation to psychedelic trips.

This is far from a new line of thought for me, but it kept jumping out at me as I read Hallam’s book. His writing is scholarly to a high degree and, for me, that is never a criticism. The downside is that a scholarly perspective alone can’t be taken into the depths. Jaynes solved this dilemma by maintaining a dual focus, intellectual argument balanced with a sense of wonder — speaking of our search for certainty, he said, “Beyond that, there is only awe.”

I keep coming back to that. For all I appreciate of Hallam’s book, I never once experienced awe. Then again, he probably wasn’t attempting to communicate awe. So, it’s not exactly that I judge this as a failing, even if it can feel like an inadequacy from the perspective of human experience, or at least my experience. In the throes of awe, we are humbled into an existential state of ignorance. A term like ‘separation’ becomes yet another word. To take consciousness directly and fully is to lose any sense of separateness, for then there is consciousness alone — not my consciousness and your consciousness, just consciousness.

I could and have made more intellectual arguments about consciousness and how strange it can be. It’s not clear to me, as it is clear to some, that there is any universal experience of consciousness (human or otherwise). There seems to be a wide variety of states of mind found across diverse societies and species. Consider animism that seems so alien to the modern sensibility. What does ‘separation’ mean in an animate world that doesn’t assume the individual as the starting point of human existence?

I don’t need to rationally analyze any of this. Rationality just as easily turns into rationalization, justifying what we think we already know. All I can say is that, intuitively, Hume’s bundle theory makes more sense of what I know directly within my own mind, whatever that may say about the minds of others. That viewpoint can’t be scientifically proven, for the experience behind it is inscrutable, not an object to be weighed and measured, however fascinating brain scans remain. Consciousness can’t be found by pulling apart Hume’s bundle any more than a frog’s soul can be found by dissecting its beating heart — consciousness having a metaphysical status similar to that of the soul. Something like the bundle theory either makes sense or it doesn’t. Consciousness is a mystery, no matter how unsatisfying that may seem. Science can take us to the edge of the depths, but that is where it stops. To step off that edge requires something else entirely.

Actually, stepping off rarely happens, since few, if any, ever choose to sink into the depths. One slips and falls, and the depths envelop one. Severe depression was my initiation experience, the weight dragging me down. There are many possible entry points to this otherness. When that happens, thoughts on consciousness stop being intellectual speculation and thought experiment. One knows consciousness as well as one will ever know it when one drowns in it. If one thrashes one’s way back to the surface, then and only then can one offer meaningful insight; more likely, one is lost in silence, water still caught in one’s throat.

This is why Julian Jaynes, for all of his brilliance and insight, reached the end of his life filled with frustration at what felt like a failure to communicate. As his historical argument went, individuals don’t change their mindsets so much as the social system that maintains a particular mindset is changed, which in the case of bicameralism meant the collapse of the Bronze Age civilizations. Until our society faces a similar crisis and is collectively thrown into the depths, separation will remain the dominant mode of experience and understanding. As for what might replace it, that is anyone’s guess.

Here we stand, our footing not entirely secure, at the edge of the depths.