Western Individuality Before the Enlightenment Age

The Culture Wars of the Late Renaissance: Skeptics, Libertines, and Opera
by Edward Muir
Introduction
pp. 5-7

One of the most disturbing sources of late-Renaissance anxiety was the collapse of the traditional hierarchic notion of the human self. Ancient and medieval thought depicted reason as governing the lower faculties of the will, the passions, and the body. Renaissance thought did not so much promote “individualism” as it cut away the intellectual props that presented humanity as the embodiment of a single divine idea, thereby forcing a desperate search for identity in many. John Martin has argued that during the Renaissance, individuals formed their sense of selfhood through a difficult negotiation between inner promptings and outer social roles. Individuals during the Renaissance looked both inward for emotional sustenance and outward for social assurance, and the friction between the inner and outer selves could sharpen anxieties.2 The fragmentation of the self seems to have been especially acute in Venice, where the collapse of aristocratic marriage structures led to the formation of what Virginia Cox has called the single self, most clearly manifest in the works of several women writers who argued for the moral and intellectual equality of women with men.3 As a consequence of the fragmented understanding of the self, such thinkers as Montaigne became obsessed with what was then the new concept of human psychology, a term in fact coined in this period.4 A crucial problem in the new psychology was to define the relation between the body and the soul, in particular to determine whether the soul died with the body or was immortal. At the University of Padua, with its tradition of Averroist readings of Aristotle, some members of the philosophy faculty recurrently questioned the Christian doctrine of the immortality of the soul as philosophically unsound. Other hierarchies of the human self came into question.
Once reason was dethroned, the passions were given a higher value, so that the heart could be understood as a greater force than the mind in determining human conduct. When the body itself slipped out of its long-despised position, the sexual drives of the lower body were liberated and thinkers were allowed to consider sex, independent of its role in reproduction, a worthy manifestation of nature. The Paduan philosopher Cesare Cremonini’s personal motto, “Intus ut libet, foris ut moris est,” does not quite translate to “If it feels good, do it,” but it comes very close. The collapse of the hierarchies of human psychology even altered the understanding of the human senses. The sense of sight lost its primacy as the superior faculty, the source of “enlightenment”; the Venetian theorists of opera gave that place in the hierarchy to the sense of hearing, the faculty that most directly channeled sensory impressions to the heart and passions.

Historical and Philosophical Issues in the Conservation of Cultural Heritage
edited by Nicholas Price, M. Kirby Talley, and Alessandra Melucco Vaccaro
Reading 5: “The History of Art as a Humanistic Discipline”
by Erwin Panofsky
pp. 83-85

Nine days before his death Immanuel Kant was visited by his physician. Old, ill and nearly blind, he rose from his chair and stood trembling with weakness and muttering unintelligible words. Finally his faithful companion realized that he would not sit down again until the visitor had taken a seat. This he did, and Kant then permitted himself to be helped to his chair and, after having regained some of his strength, said, ‘Das Gefühl für Humanität hat mich noch nicht verlassen’—’The sense of humanity has not yet left me’. The two men were moved almost to tears. For, though the word Humanität had come, in the eighteenth century, to mean little more than politeness and civility, it had, for Kant, a much deeper significance, which the circumstances of the moment served to emphasize: man’s proud and tragic consciousness of self-approved and self-imposed principles, contrasting with his utter subjection to illness, decay and all that is implied in the word ‘mortality.’

Historically the word humanitas has had two clearly distinguishable meanings, the first arising from a contrast between man and what is less than man; the second between man and what is more. In the first case humanitas means a value, in the second a limitation.

The concept of humanitas as a value was formulated in the circle around the younger Scipio, with Cicero as its belated, yet most explicit spokesman. It meant the quality which distinguishes man, not only from animals, but also, and even more so, from him who belongs to the species homo without deserving the name of homo humanus; from the barbarian or vulgarian who lacks pietas and παιδεία, that is, respect for moral values and that gracious blend of learning and urbanity which we can only circumscribe by the discredited word “culture.”

In the Middle Ages this concept was displaced by the consideration of humanity as being opposed to divinity rather than to animality or barbarism. The qualities commonly associated with it were therefore those of frailty and transience: humanitas fragilis, humanitas caduca.

Thus the Renaissance conception of humanitas had a two-fold aspect from the outset. The new interest in the human being was based both on a revival of the classical antithesis between humanitas and barbaritas, or feritas, and on a survival of the mediaeval antithesis between humanitas and divinitas. When Marsilio Ficino defines man as a “rational soul participating in the intellect of God, but operating in a body,” he defines him as the one being that is both autonomous and finite. And Pico’s famous ‘speech’ ‘On the Dignity of Man’ is anything but a document of paganism. Pico says that God placed man in the center of the universe so that he might be conscious of where he stands, and therefore free to decide ‘where to turn.’ He does not say that man is the center of the universe, not even in the sense commonly attributed to the classical phrase, “man the measure of all things.”

It is from this ambivalent conception of humanitas that humanism was born. It is not so much a movement as an attitude which can be defined as the conviction of the dignity of man, based on both the insistence on human values (rationality and freedom) and the acceptance of human limitations (fallibility and frailty); from this, two postulates result: responsibility and tolerance.

Small wonder that this attitude has been attacked from two opposite camps whose common aversion to the ideas of responsibility and tolerance has recently aligned them in a united front. Entrenched in one of these camps are those who deny human values: the determinists, whether they believe in divine, physical or social predestination, the authoritarians, and those “insectolatrists” who profess the all-importance of the hive, whether the hive be called group, class, nation or race. In the other camp are those who deny human limitations in favor of some sort of intellectual or political libertinism, such as aestheticists, vitalists, intuitionists and hero-worshipers. From the point of view of determinism, the humanist is either a lost soul or an ideologist. From the point of view of authoritarianism, he is either a heretic or a revolutionary (or a counterrevolutionary). From the point of view of “insectolatry,” he is a useless individualist. And from the point of view of libertinism he is a timid bourgeois.

Erasmus of Rotterdam, the humanist par excellence, is a typical case in point. The church suspected and ultimately rejected the writings of this man who had said: “Perhaps the spirit of Christ is more largely diffused than we think, and there are many in the community of saints who are not in our calendar.” The adventurer Ulrich von Hutten despised his ironical skepticism and his unheroic love of tranquillity. And Luther, who insisted that “no man has power to think anything good or evil, but everything occurs in him by absolute necessity,” was incensed by a belief which manifested itself in the famous phrase: “What is the use of man as a totality [that is, of man endowed with both a body and a soul], if God would work in him as a sculptor works in clay, and might just as well work in stone?”

Food and Faith in Christian Culture
edited by Ken Albala and Trudy Eden
Chapter 3: “The Food Police”
Sumptuary Prohibitions On Food In The Reformation
by Johanna B. Moyer
pp. 80-83

Protestants too employed a disease model to explain the dangers of luxury consumption. Luxury damaged the body politic, leading to “most incurable sickness of the universal body” (33). Protestant authors also employed Galenic humor theory, arguing that “continuous superfluous expense” unbalanced the humors, leading to fever and illness (191). However, Protestants used this model less often than Catholic authors who attacked luxury. Moreover, those Protestants who did employ the Galenic model used it in a different manner than their Catholic counterparts.

Protestants also drew parallels between the damage caused by luxury to the human body and the damage excess inflicted on the French nation. Rather than a disease metaphor, however, many Protestant authors saw luxury more as a “wound” to the body politic. For Protestants the danger of luxury was not only the buildup of humors within the body politic of France but the constant “bleeding out” of humor from the body politic in the form of cash to pay for imported luxuries. The flow of cash mimicked the flow of blood from a wound in the body. Most Protestants did not see luxury foodstuffs as the problem; indeed, most saw food in moderation as healthy for the body. Even luxury apparel could be healthy for the body politic in moderation, if it was domestically produced and consumed. Such luxuries circulated the “blood” of the body politic, creating employment and feeding the lower orders. 72 De La Noue made this distinction clear. He dismissed the need to individually discuss the damage done by each kind of luxury that was rampant in France in his time as being as pointless “as those who have invented auricular confession have divided mortal and venial sins into an infinity of roots and branches.” Rather, he argued, the damage done by luxury was in its “entire bulk” to the patrimonies of those who purchased luxuries and to the kingdom of France (116). For the Protestants, luxury did not pose an internal threat to the body and salvation of the individual. Rather, the use of luxury posed an external threat to the group, to the body politic of France.

The Reformation And Sumptuary Legislation

Catholics, as we have seen, called for antiluxury regulations on food and banqueting, hoping to curb overeating and the damage done by gluttony to the body politic. Although some Protestants also wanted to restrict food and banqueting, more often French Protestants called for restrictions on clothing and foreign luxuries. These differing views of luxury during and after the French Wars of Religion not only give insight into the theological differences between these two branches of Christianity but also provide insight into the larger pattern of the sumptuary regulation of food in Europe in this period. Sumptuary restrictions were one means by which Catholics and Protestants enforced their theology in the post-Reformation era.

Although Catholicism is often correctly cast as the branch of Reformation Christianity that gave the individual the least control over their salvation, it was also true that the individual Catholic’s path to salvation depended heavily on ascetic practices. The responsibility for following these practices fell on the individual believer. Sumptuary laws on food in Catholic areas reinforced this responsibility by emphasizing what foods should and should not be eaten and mirrored the central theological practice of fasting for the atonement of sin. Perhaps the historiographical cliché that it was only Protestantism which gave the individual believer control of his or her salvation needs to be qualified. The arithmetical piety of Catholicism ultimately placed the onus on the individual to atone for each sin. Moreover, sumptuary legislation tried to steer the Catholic believer away from the more serious sins that were associated with overeating, including gluttony, lust, anger, and pride.

Catholic theology meshed nicely with the revival of Galenism that swept through Europe in this period. Galenists preached that meat eating, overeating, and the imbalance in humors which accompanied these practices, led to behavioral changes, including an increased sex drive and increased aggression. These physical problems mirrored the spiritual problems that luxury caused, including fornication and violence. This is why so many authors blamed the French nobility for the luxury problem in France. Nobles were seen not only as more likely to bear the expense of overeating but also as more prone to violence. 73

Galenism also meshed nicely with Catholicism because Catholicism was a very physical religion in which the control of the physical body figured prominently in the believer’s path to salvation. Not surprisingly, by the seventeenth century, Protestants gravitated away from Galenism toward the chemical view of the body offered by Paracelsus. 74 Catholic sumptuary law embodied a Galenic view of the body in which sin and disease were equated, and it therefore pushed regulations that advocated each person’s control of his or her own body.

Protestant legislators, conversely, were not interested in the individual diner. Sumptuary legislation in Protestant areas ran the gamut from control of communal displays of eating, in places like Switzerland and Germany, to little or no concern with restrictions on luxury foods, as in England. For Protestants, it was the communal role of food and luxury use that was important. Hence the laws in Protestant areas targeted food in the context of weddings, baptisms, and even funerals. The English did not even bother to enact sumptuary restrictions on food after their break with Catholicism. The French Protestants who wrote on luxury glossed over the deleterious effects of meat eating, even proclaiming it to be healthful for the body while producing diatribes against the evils of imported luxury apparel. The use of Galenism in the French Reformed treatises suggests that Protestants too were concerned with a “body,” but it was not the individual body of the believer that worried Protestant legislators. Sumptuary restrictions were designed to safeguard the mystical body of believers, or the “Elect” in the language of Calvinism. French Protestants used the Galenic model of the body to discuss the damage that luxury did to the body of believers in France, but ultimately to safeguard the economic welfare of all French subjects. The Calvinists of Switzerland used sumptuary legislation on food to protect those predestined for salvation from the dangerous eating practices of members of the community whose overeating suggested they might not be saved.

Ultimately, sumptuary regulations in the Reformation spoke to the Christian practice of fasting. Fasting served very different functions in Protestant and Catholic theology. Raymond Mentzer has suggested that Protestants “modified” the Catholic practice of fasting during the Reformation. The major reformers, including Luther, Calvin, and Zwingli, all rejected fasting as a path to salvation. 75 For Protestants, fasting was a “liturgical rite,” part of the cycle of worship and a practice that served to “bind the community.” Fasting was often a response to adversity, as during the French Wars of Religion. For Catholics, fasting was an individual act, just as sumptuary legislation in Catholic areas targeted individual diners. However, for Protestants, fasting was a communal act, “calling attention to the body of believers.” 76 The symbolic nature of fasting, Mentzer argues, reflected Protestant rejection of transubstantiation. Catholics continued to believe that God was physically present in the host, but Protestants believed His presence was only spiritual. When Catholics took Communion, they fasted to cleanse their own bodies so as to receive the real, physical body of Christ. Protestants, on the other hand, fasted as spiritual preparation because it was their spirits that connected with the spirit of Christ in the Eucharist. 77

Vestiges of an Earlier Mentality: Different Psychologies

“The Self as Interiorized Social Relations Applying a Jaynesian Approach to Problems of Agency and Volition”
By Brian J. McVeigh

If what Jaynes has proposed about bicamerality is correct, we should expect to find remnants of this extinct mentality. In any case, an examination of the ethnopsychologies of other societies should at least challenge our assumptions. What kinds of metaphors do they employ to discuss the self? Where is agency localized? To what extent do they even “psychologize” the individual, positing an “interior space” within the person? If agency is a socio-historical construction (rather than a bio-evolutionary product), we should expect some cultural variability in how it is conceived. At the same time, we should also expect certain parameters within which different theories of agency are built.

Ethnographies are filled with descriptions of very different psychologies. For example, about the Maori, Jean Smith writes that

it would appear that generally it was not the “self” which encompassed the experience, but experience which encompassed the “self” … Because the “self” was not in control of experience, a man’s experience was not felt to be integral to him; it happened in him but was not of him. A Maori individual was not so much the experiencer of his experience as the observer of it. 22

Furthermore, “bodily organs were endowed with independent volition.” 23 Renato Rosaldo states that the Ilongots of the Philippines rarely concern themselves with what we refer to as an “inner self” and see no major differences between public presentation and private feeling. 24

Perhaps the most intriguing picture of just how radically different mental concepts can be is found in anthropologist Maurice Leenhardt’s book Do Kamo, about the Canaque of New Caledonia, who are “unaware” of their own existence: the “psychic or psychological aspect of man’s actions are events in nature. The Canaque sees them as outside of himself, as externalized. He handles his existence similarly: he places it in an object — a yam, for instance — and through the yam he gains some knowledge of his existence, by identifying himself with it.” 25

Speaking of the Dinka, anthropologist Godfrey Lienhardt writes that “the man is the object acted upon,” and “we often find a reversal of European expressions which assume the human self, or mind, as subject in relation to what happens to it.” 26 Concerning the mind itself,

The Dinka have no conception which at all closely corresponds to our popular modern conception of the “mind,” as mediating and, as it were, storing up the experiences of the self. There is for them no such interior entity to appear, on reflection, to stand between the experiencing self at any given moment and what is or has been an exterior influence upon the self. So it seems that what we should call in some cases the memories of experiences, and regard therefore as in some way intrinsic and interior to the remembering person and modified in their effect upon him by that interiority, appear to the Dinka as exteriority acting upon him, as were the sources from which they derived. 27

The above-mentioned ethnographic examples may be interpreted as merely colorful descriptions, as exotic and poetic folk psychologies. Or we may take a more literal view and entertain the idea that these ethnopsychological accounts are vestiges of a distant past when individuals possessed radically different mentalities. For example, if it is possible to be a person lacking an interiority in which a self moves about making conscious decisions, then we must at least entertain the idea that entire civilizations existed whose members had a radically different mentality. The notion of a “person without a self” is admittedly controversial and open to misinterpretation. Here allow me to stress that I am not suggesting that in today’s world there are groups of people whose mentality is distinct from our own. However, I am suggesting that remnants of an earlier mentality are evident in extant ethnopsychologies, including our own. 28

* * *

Text from:

Reflections on the Dawn of Consciousness:
Julian Jaynes’s Bicameral Mind Theory Revisited
Edited by Marcel Kuijsten
Chapter 7, Kindle Locations 3604-3636

See also:

Survival and Persistence of Bicameralism
Piraha and Bicameralism

Human Nature: Categories & Biases

There is something compelling about seemingly opposing views. There is Mythos vs Logos, Apollonian vs Dionysian, Fox vs Hedgehog, Socratic vs Sophistic, Platonic vs Aristotelian, Spinoza vs Locke, Paine vs Burke, Jung vs Freud, nature vs nurture, biology vs culture, determinism vs free will, parenting style vs peer influence, etc.

And these perceived divisions overlap in various ways, a long developing history of ideas, worldviews, and thinkers. It’s a dance. One side will take the lead and then the other. The two sides will take different forms, the dividing lines shifting.

In more recent decades, we’ve come to more often think in terms of political ideologies. The greatest of them all is liberal vs conservative. But since World War II, there has been a growing obsession with authoritarianism and anti-authoritarianism. And there is the newer area of social dominance orientation (SDO). Some prefer focusing on progressive vs reactionary as more fundamental, as it relates to the history of the revolutionary and counterrevolutionary.

With the advent of social science and neuroscience, we’ve increasingly put all of this in new frames. Always popular, there are the left and right brain hemispheres, along with more specific brain anatomy (e.g., conservatives on average have a larger amygdala). Then there is the personality research: Myers-Briggs, trait theory, boundary types, etc. — of those three, trait theory being the most widely used.

Part of it is that humans simply like to categorize. It’s how we attempt to make sense of the world. And there is nothing that preoccupies human curiosity more than humanity itself, our shared inheritance of human ideas and human nature. For as long as humans have been writing and probably longer, there have been categorizations to slot humans into.

My focus has most often been toward personality, along with social science more generally. What also interests me is that one’s approach to such issues also comes in different varieties. With that in mind, I wanted to briefly compare two books. Both give voice to two sides of my own thinking. The first I’ll discuss is The Liberal’s Guide to Conservatives by J. Scott Wagner. And the second is A Skeptic’s Guide to the Mind by Robert Burton.

Wagner’s book is the kind of overview I wish I’d had earlier last decade. But a book like this gets easier to write as time goes on. Many points of confusion have been further clarified, if not always resolved, by more recent research. Then again, often this has just made us more clear about what exactly is our confusion.

What is useful about a book like this is that it helps show what we do know at the moment. Or simply what we think we know, until further research is done to confirm or disconfirm present theories. But at least some of it allows a fair amount of certainty that we are looking at significant patterns in the data.

It’s a straightforward analysis with a simple purpose. The author is on the political left and he wants to help those who share his biases to understand those on the political right who have different biases. A noble endeavor, as always. He covers a lot of territory and it is impressive. I won’t even attempt to summarize it all. I’m already broadly familiar with the material, as this area of study involves models and theories that have been researched for a long time.

What most stood out to me was his discussion of authoritarianism and social dominance orientation (SDO). For some reason, that seems more important than all the rest. Those taken together represent the monkey wrench thrown into the gears of the human mind. I was amused when Wagner opined that,

Unlike all that subtlety around “social conformity-autonomy” and authoritarianism, the SDO test is straightforward: not to put too fine a point on it, but to me, the questions measure how much of a jerk you are. (Kindle Locations 3765-3767)

He holds no love for SDOs. And for good reason. Combine the worst aspects from the liberal elite of the classical liberal variety as found in a class-based pseudo-meritocracy. Remove any trace of liberal-minded tolerance, empathy, kindness, and compassion. And then wrap this all up with in-group domination. Serve with a mild sauce of near sociopathy.

The worst part of it is that SDOs are disproportionately found among those with wealth and power, authority and privilege. These people are found among the ruling elite for the simple reason that they want to be a ruling elite. Unless society stops them from dominating, they will dominate. It’s their nature, like the scorpion that stings the frog carrying him across the river. The scorpion can’t help itself.

All of that is important info. I do wish more people would read books like these. There is no way for the public, conservative and liberal alike, to come together in defense against threats to the public good when they don’t understand or often even clearly see those threats.

Anyway, Wagner’s book offers a systematizing approach, with a more practical emphasis that offers useful insight. He shows what differentiates people and what those demarcations signify. He offers various explanations and categorizations, models and theories. You could even take professional tests that will show your results on the various scales discussed, in order to see where you fit in the scheme of personality traits and ideological predispositions. Reading his book will help you understand why conflicts are common and communication difficult. But he doesn’t leave it at that, as he shares personal examples and helpful advice.

Now for the other approach, more contrarian in nature. This is exemplified by the other book I’ve been reading, the one by Robert Burton (who I quoted in a recent post). As Wagner brings info together, Burton dissects it into its complicated messy details (Daniel Everett has a similar purpose). Yet Burton also is seeking to be of use, in promoting clear thinking and a better scientific understanding. His is a challenge not just to the public but also to scientific researchers.

Rather than promising answers to age-old questions about the mind, it is my goal to challenge the underlying assumptions that drive these questions. In the end, this is a book questioning the nature of the questions about the mind that we seem compelled to ask yet are scientifically unable to answer. (p. 7)

Others like Wagner show the answers so far found for the questions we ask. Burton’s motive is quite the opposite, to question those answers. This is in the hope of improving both questions and answers.

Here is what I consider the core insight from Burton’s analysis (pp. 105-7):

“Henrich’s team showed the illusion to members of sixteen different social groups including fourteen from small-scale societies such as native African tribes. To see how strong the illusion was in each of these groups, they determined how much longer the “shorter” line needed to be for the observer to conclude that the two lines were equal. (You can test yourself at this website— http://www.michaelbach.de/ot/sze_muelue/index.html.) By measuring the amount of lengthening necessary for the illusion to disappear, they were able to chart differences between various societies. At the far end of the spectrum— those requiring the greatest degree of lengthening in order to perceive the two lines as equal (20 percent lengthening)— were American college undergraduates, followed by the South African European sample from Johannesburg. At the other end of the spectrum were members of a Kalahari Desert tribe, the San foragers. For the San tribe members, the lines looked equal; no line adjustment was necessary, as they experienced no sense of illusion. The authors’ conclusion: “This work suggests that even a process as apparently basic as visual perception can show substantial variation across populations. If visual perception can vary, what kind of psychological processes can we be sure will not vary?” 14

“Challenging the entire field of psychology, Henrich and colleagues have come to some profoundly disquieting conclusions. Lifelong members of societies that are Western, educated, industrialized, rich, democratic (the authors coined the acronym WEIRD) reacted differently from others in experiment after experiment involving measures of fairness, antisocial punishment, and cooperation, as well as when responding to visual illusions and questions of individualism and conformity. “The fact that WEIRD people are the outliers in so many key domains of the behavioral sciences may render them one of the worst subpopulations one could study for generalizing about Homo sapiens.” The researchers found that 96 percent of behavioral science experiment subjects are from Western industrialized countries, even though those countries have just 12 percent of the world’s population, and that 68 percent of all subjects are Americans.

“Jonathan Haidt, University of Virginia psychologist and prepublication reviewer of the article, has said that Henrich’s study “confirms something that many researchers knew all along but didn’t want to admit or acknowledge because its implications are so troublesome.” 15 Henrich feels that either many behavioral psychology studies have to be redone on a far wider range of cultural groups— a daunting proposition— or they must be understood to offer insight only into the minds of rich, educated Westerners.

“Results of a scientific study that offer universal claims about human nature should be independent of location, cultural factors, and any outside influences. Indeed, one of the prerequisites of such a study would be to test the physical principles under a variety of situations and circumstances. And yet, much of what we know or believe we know about human behavior has been extrapolated from the study of a small subsection of the world’s population known to have different perceptions in such disparate domains as fairness, moral choice, even what we think about sharing. 16 If we look beyond the usual accusations and justifications— from the ease of inexpensively studying undergraduates to career-augmenting shortcuts— we are back at the recurrent problem of a unique self-contained mind dictating how it should study itself.”

I don’t feel much need to add to that. The implications of it are profound. This possibly throws everything up in the air. We might be forced to change what we think we know. I will point out Jonathan Haidt being quoted in that passage. Like that of many other social scientists, Haidt’s own research has been limited in scope, something that has been pointed out before (by me and others). But at least those like Haidt are acknowledging the problem and putting some effort into remedying it.

These are exciting times. There is the inevitable result that, as we come to know more, we come to realize how little we know and how limited is what we know (or think we know). We become more circumspect in our knowledge.

Still, that doesn’t lessen the significance of what we’ve so far learned. Even with the WEIRD bias disallowing generalization about a universal human nature, the research done remains relevant to showing the psychological patterns and social dynamics in WEIRD societies. So, for us modern Westerners, the social science is as applicable as it ever was. But what it shows is that there is nothing inevitable about human nature; rather, there is immense potential for diverse expressions of our shared humanity.

If you combine these two books, you will have greater understanding than either alone. They can be seen as opposing views, but at a deeper level they share a common purpose, that of gaining better insight into ourselves and others.

The Psychology and Anthropology of Consciousness

“There is in my opinion no tenable argument against the hypothesis that psychic functions which today seem conscious to us were once unconscious and yet worked as if they were conscious. We could also say that all the psychic phenomena to be found in man were already present in the natural unconscious state. To this it might be objected that it would then be far from clear why there is such a thing as consciousness at all.”
~ Carl Jung, On the Nature of the Psyche 

An intriguing thought by Jung. Many have considered this possibility. It leads to questions about what consciousness is and what purpose it serves. A recent exploration of this is The User Illusion by Tor Nørretranders, in which the author proposes that consciousness doesn’t determine what we do so much as veto what we don’t do: it casts the final vote before an action is taken, but action itself requires no consciousness. As such, consciousness is useful and advantageous, just not absolutely necessary. It keeps you from eating that second cookie or saying something cruel.

Another related perspective is that of Julian Jaynes’ bicameral mind theory. I say related because Jaynes influenced Nørretranders. About Jung, Jaynes was aware of his writings and stated disagreement with some ideas: “Jung had many insights indeed, but the idea of the collective unconscious and of the archetypes has always seemed to me to be based on the inheritance of acquired characteristics, a notion not accepted by biologists or psychologists today.” (Quoted by Philip Ardery in “Ramifications of Julian Jaynes’s theory of consciousness for traditional general semantics.”) What these three thinkers agree about is that the unconscious mind is much more expansive and capable, more primary and important than is normally assumed. There is so much more to our humanity than the limits of interiorized self-awareness.

What interested me was the anthropological angle. Here is something I wrote earlier:

“Julian Jaynes had written about the comparison of shame and guilt cultures. He was influenced by E. R. Dodds (and Bruno Snell). Dodds in turn based some of his own thinking about the Greeks on the work of Ruth Benedict, who originated the shame and guilt culture comparison in her writings on Japan and the United States. Benedict, like Margaret Mead, had been taught by Franz Boas. Boas developed some of the early anthropological thinking that saw societies as distinct cultures.”

Boas founded a school of thought about the primacy of culture, the first major challenge to race realism and eugenics. He gave the anthropology field new direction and inspired a generation of anthropologists. This was the same era during which Jung was formulating his own views.

As with Jung before him, Jaynes drew upon the work of anthropologists. Both also influenced anthropologists, but Jung’s influence of course came earlier. Even though some of these early anthropologists were wary of Jungian psychology, such as the archetypes and the collective unconscious, they saw personality typology as a revolutionary framework (those influenced also included the likes of Edward Sapir and Benjamin Lee Whorf, Sapir having been a student of Boas and Whorf a student of Sapir, with Boas perhaps the source of introducing linguistic relativism into American thought). Through personality types, it was possible to begin understanding what fundamentally made one mind different from another, a necessary factor in distinguishing one culture from another.

In Jung and the Making of Modern Psychology, Sonu Shamdasani describes this meeting of minds (Kindle Locations 4706-4718):

“The impact of Jung’s typology on Ruth Benedict may be found in her concept of Apollonian and Dionysian culture patterns which she first put forward in 1928 in “Psychological Types in the cultures of the Southwest,” and subsequently elaborated in Patterns of Culture. Mead recalled that their conversations on this topic had in part been shaped by Sapir and Oldenweiser’s discussion of Jung’s typology in Toronto in 1924 as well as by Seligman’s article cited above (1959, 207). In Patterns of Culture, Benedict discussed Wilhelm Worringer’s typification of empathy and abstraction, Oswald Spengler’s of the Apollonian and the Faustian and Friedrich Nietzsche’s of the Apollonian and the Dionysian. Conspicuously, she failed to cite Jung explicitly, though while criticizing Spengler, she noted that “It is quite as convincing to characterize our cultural type as thoroughly extravert … as it is to characterize it as Faustian” (1934, 54-55). One gets the impression that Benedict was attempting to distance herself from Jung, despite drawing some inspiration from his Psychological Types.

“In her autobiography, Mead recalls that in the period that led up to her Sex and Temperament, she had a great deal of discussion with Gregory Bateson concerning the possibility that aside from sex difference, there were other types of innate differences which “cut across sex lines” (1973, 216). She stated that: “In my own thinking I drew on the work of Jung, especially his fourfold scheme for grouping human beings as psychological types, each related to the others in a complementary way” (217). Yet in her published work, Mead omitted to cite Jung’s work. A possible explanation for the absence of citation of Jung by Benedict and Mead, despite the influence of his typological model, was that they were developing diametrically opposed concepts of culture and its relation to the personality to Jung’s. Ironically, it is arguably through such indirect and half-acknowledged conduits that Jung’s work came to have its greatest impact upon modern anthropology and concepts of culture. This short account of some anthropological responses to Jung may serve to indicate that when Jung’s work was engaged with by the academic community, it was taken to quite different destinations, and underwent a sea change.”

It was Benedict’s Patterns of Culture that was a major source of influence on Jaynes. It created a model for comparing and contrasting different kinds of societies. Benedict was studying two modern societies, but Dodds came to see how it could be applied to different societies across time, even into the ancient world. That was a different way of thinking and opened up new possibilities of understanding. It set the stage for Jaynes’ radical proposal, that consciousness itself was built on culture. From types of personalities to types of cultures.

All of that is just something that caught my attention. I find fascinating such connections, how ideas get passed on and develop. None of that was the original reason for this post, though. I was doing my regular perusing of the web and came across some stuff of interest. This post is simply an excuse to share some of it.

This topic is always on my mind. The human psyche is amazing. It’s easy to forget what a miracle it is to be conscious and to forget the power of the unconscious that underlies it. There is so much more to our humanity than we can begin to comprehend. Such things as dissociation and voice hearing aren’t limited to crazy people or, if they are, then we’re all a bit crazy.

* * *

Other Multiplicity
by Mark and Rana Manning, Legion Theory

When the corpus callosum is severed in adults, we create separate consciousnesses which can act together cooperatively within a single body. In Multiple Personality Disorder (MPD), or Dissociative Identity Disorder (DID), as it is now known, psychological trauma to the developing mind also creates separate consciousnesses which can act together cooperatively within a single body. And in both cases, in most normal social situations, the individual would provide no reason for someone to suspect that they were not dealing with someone with a unitary consciousness.

The Third Man Factor: Surviving the Impossible
by John Geiger
pp. 161-162

For modern humans generally, however, the stress threshold for triggering a bicameral hallucination is much higher, according to Jaynes: “Most of us need to be over our heads in trouble before we would hear voices.” 10 Yet, he said, “contrary to what many an ardent biological psychiatrist wishes to think, they occur in normal individuals also.” 11 Recent studies have supported him, with some finding that a large minority of the general population, between 30 and 40 percent, report having experienced auditory hallucinations. These often involve hearing one’s own name, but also phrases spoken from the rear of a car, and the voices of absent friends or dead relatives. 12 Jaynes added that it is “absolutely certain that such voices do exist and that experiencing them is just like hearing actual sound.” Even today, though they are loath to admit it, completely normal people hear voices, he said, “often in times of stress.”

Jaynes pointed to an example in which normally conscious individuals have experienced vestiges of bicameral mentality, notably, “shipwrecked sailors during the war who conversed with an audible God for hours in the water until they were saved.” 13 In other words, it emerges in normal people confronting high stress and stimulus reduction in extreme environments. A U.S. study of combat veterans with post-traumatic stress disorder found a majority (65 percent) reported hearing voices, sometimes “command hallucinations to which individuals responded with a feeling of automatic obedience.”

Gods, voice-hearing and the bicameral mind
by Jules Evans, Philosophy for Life

Although humans evolved into a higher state of subjective consciousness, vestiges of the bicameral mind still remain, most obviously in voice-hearing. As much as 10% of the population hear voices at some point in their lives, much higher than the clinical incidence of schizophrenia (1%). For many people, voice-hearing is not debilitating and can be positive and encouraging.

Sensing a voice or presence often emerges in stressful situations – anecdotally, it’s relatively common for the dying to see the spirits of dead loved ones, likewise as many as 35% of people who have recently lost a loved one say they have a sense of the departed’s continued presence. Mountaineers in extreme conditions often report a sensed presence guiding them (known as the Third Man Factor).

And around 65% of children say they have had ‘imaginary friends’ or toys that play a sort of guardian-angel role in their lives – Jaynes thought children evolve from bicameral to conscious, much as Piaget thought young children are by nature animist.

Earslips: Of Mishearings and Mondegreens
by Steven Connor, personal blog

The processing of the sounds of the inanimate world as voices may strike us as a marginal or anomalous phenomenon. However, some recent work designed to explain why THC, the active component of cannabis, might sometimes trigger schizophrenia, points in another direction. Zerrin Atakan of London’s Institute of Psychiatry conducted experiments which suggest that subjects who had been given small doses of THC were much less able to inhibit involuntary actions. She suggests that THC may induce psychotic hallucinations, especially the auditory hallucinations which are classically associated with paranoid delusion, by suppressing the response inhibition which would normally prevent us from reacting to nonvocal sounds as though they were voices. The implications of this argument are intriguing; for it seems to imply that, far from only occasionally or accidentally hearing voices in sounds, we have in fact continuously and actively to inhibit this tendency. Perhaps, without this filter, the wind would always and for all of us be whispering ‘Mary’, or ‘Malcolm’.

Hallucinations and Sensory Overrides
by T. M. Luhrmann, Stanford University

Meanwhile, the absence of cultural categories to describe inner experience does limit the kinds of psychotic phenomena people experience. In the West, those who are psychotic sometimes experience symptoms that are technically called “thought insertion” and “thought withdrawal”, the sense that some external force has placed thoughts in one’s mind or taken them out. Thought insertion and withdrawal are standard items in symptom checklists. Yet when Barrett (2004) attempted to translate the item in Borneo, he could not. The Iban do not have an elaborated idea of the mind as a container, and so the idea that someone could experience external thoughts as placed within the mind or removed from it was simply not available to them.

Hallucinatory ‘voices’ shaped by local culture, Stanford anthropologist says
by Clifton B. Parker, Stanford University

Why the difference? Luhrmann offered an explanation: Europeans and Americans tend to see themselves as individuals motivated by a sense of self identity, whereas outside the West, people imagine the mind and self interwoven with others and defined through relationships.

“Actual people do not always follow social norms,” the scholars noted. “Nonetheless, the more independent emphasis of what we typically call the ‘West’ and the more interdependent emphasis of other societies has been demonstrated ethnographically and experimentally in many places.”

As a result, hearing voices in a specific context may differ significantly for the person involved, they wrote. In America, the voices were an intrusion and a threat to one’s private world – the voices could not be controlled.

However, in India and Africa, the subjects were not as troubled by the voices – they seemed on one level to make sense in a more relational world. Still, differences existed between the participants in India and Africa; the former’s voice-hearing experience emphasized playfulness and sex, whereas the latter more often involved the voice of God.

The religiosity or urban nature of the culture did not seem to be a factor in how the voices were viewed, Luhrmann said.

“Instead, the difference seems to be that the Chennai (India) and Accra (Ghana) participants were more comfortable interpreting their voices as relationships and not as the sign of a violated mind,” the researchers wrote.

Tanya Luhrmann, hearing voices in Accra and Chenai
by Greg Downey, Neuroanthropology

local theory of mind—the features of perception, intention, and inference that the community treats as important—and local practices of mental cultivation will affect both the kinds of unusual sensory experiences that individuals report and the frequency of those experiences. Hallucinations feel unwilled. They are experienced as spontaneous and uncontrolled. But hallucinations are not the meaningless biological phenomena they are understood to be in much of the psychiatric literature. They are shaped by explicit and implicit learning around the ways that people pay attention with their senses. This is an important anthropological finding because it demonstrates that cultural ideas and practices can affect mental experience so deeply that they lead to the override of ordinary sense perception.

How Universal Is The Mind?
by Salina Golonka, Notes from Two Scientific Psychologists

To the extent that you agree that the modern conception of “cognition” is strongly related to the Western, English-speaking view of “the mind”, it is worth asking what cognitive psychology would look like if it had developed in Japan or Russia. Would textbooks have chapter headings on the ability to connect with other people (kokoro) or feelings or morality (dusa) instead of on decision-making and memory? This possibility highlights the potential arbitrariness of how we’ve carved up the psychological realm – what we take for objective reality is revealed to be shaped by culture and language.

A puppet is a magical object. It is not a toy, is it? Here they see it as puppet theatre, as puppets for kids. But it’s just not like that. These native tribes — in Africa or Oceania, etc. — the shamans use puppets in communication not only with the upper world, with the gods, but even in relation when they treat a sick person. Those shamans, when they dress as some demon or some deity, they incarnate genuinely. They are either the totem animal or the demon. (via Matt Cardin)

Social Disorder, Mental Disorder

“It is no measure of health to be well adjusted to a profoundly sick society.”
~ Jiddu Krishnamurti

“The opposite of addiction is not sobriety. The opposite of addiction is connection.”
~ Johann Hari

On Staying Sane in a Suicidal Culture
by Dahr Jamail

Our situation so often feels hopeless. So much has spun out of control, and pathology surrounds us. At least one in five Americans are taking psychiatric medications, and the number of children taking adult psychiatric drugs is soaring.

From the perspective of Macy’s teachings, it seems hard to argue that this isn’t, at least in part, active denial of what is happening to the world and how challenging it is for both adults and children to deal with it emotionally, spiritually and psychologically.

These disturbing trends, which are increasing, are something she is very mindful of. As she wrote in World as Lover, World as Self, “The loss of certainty that there will be a future is, I believe, the pivotal psychological reality of our time.”

What does depression feel like? Trust me – you really don’t want to know
by Tim Lott

Admittedly, severely depressed people can connect only tenuously with reality, but repeated studies have shown that mild to moderate depressives have a more realistic take on life than most “normal” people, a phenomenon known as “depressive realism”. As Neel Burton, author of The Meaning of Madness, put it, this is “the healthy suspicion that modern life has no meaning and that modern society is absurd and alienating”. In a goal-driven, work-oriented culture, this is deeply threatening.

This viewpoint can have a paralysing grip on depressives, sometimes to a psychotic extent – but perhaps it haunts everyone. And therefore the bulk of the unafflicted population may never really understand depression. Not only because they (understandably) lack the imagination, and (unforgivably) fail to trust in the experience of the sufferer – but because, when push comes to shove, they don’t want to understand. It’s just too … well, depressing.

The Mental Disease of Late-Stage Capitalism
by Joe Brewer

A great irony of this deeply corrupt system of wealth hoarding is that the “weapon of choice” is how we feel about ourselves as we interact with our friends. The elites don’t have to silence us. We do that ourselves by refusing to talk about what is happening to us. Fake it until you make it. That’s the advice we are given by the already successful who have pigeon-holed themselves into the tiny number of real opportunities society had to offer. Hold yourself accountable for the crushing political system that was designed to divide us against ourselves.

This great lie that we whisper to ourselves is how they control us. Our fear that other impoverished people (which is most of us now) will look down on us for being impoverished too. This is how we give them the power to keep humiliating us.

I say no more of this emotional racket. If I am going to be responsible for my fate in life, let it be because I chose to stand up and fight — that I helped dismantle the global architecture of wealth extraction that created this systemic corruption of our economic and political systems.

Now more than ever, we need spiritual healing. As this capitalist system destroys itself, we can step aside and find healing by living honestly and without fear. They don’t get to tell us how to live. We can share our pain with family and friends. We can post it on social media. Shout it from the rooftops if we feel like it. The pain we feel is capitalism dying. It hurts us because we are still in it.

Neoliberalism – the ideology at the root of all our problems
by George Monbiot

So pervasive has neoliberalism become that we seldom even recognise it as an ideology. We appear to accept the proposition that this utopian, millenarian faith describes a neutral force; a kind of biological law, like Darwin’s theory of evolution. But the philosophy arose as a conscious attempt to reshape human life and shift the locus of power.

Neoliberalism sees competition as the defining characteristic of human relations. It redefines citizens as consumers, whose democratic choices are best exercised by buying and selling, a process that rewards merit and punishes inefficiency. It maintains that “the market” delivers benefits that could never be achieved by planning.

Attempts to limit competition are treated as inimical to liberty. Tax and regulation should be minimised, public services should be privatised. The organisation of labour and collective bargaining by trade unions are portrayed as market distortions that impede the formation of a natural hierarchy of winners and losers. Inequality is recast as virtuous: a reward for utility and a generator of wealth, which trickles down to enrich everyone. Efforts to create a more equal society are both counterproductive and morally corrosive. The market ensures that everyone gets what they deserve.

We internalise and reproduce its creeds. The rich persuade themselves that they acquired their wealth through merit, ignoring the advantages – such as education, inheritance and class – that may have helped to secure it. The poor begin to blame themselves for their failures, even when they can do little to change their circumstances.

Never mind structural unemployment: if you don’t have a job it’s because you are unenterprising. Never mind the impossible costs of housing: if your credit card is maxed out, you’re feckless and improvident. Never mind that your children no longer have a school playing field: if they get fat, it’s your fault. In a world governed by competition, those who fall behind become defined and self-defined as losers.

Among the results, as Paul Verhaeghe documents in his book What About Me? are epidemics of self-harm, eating disorders, depression, loneliness, performance anxiety and social phobia. Perhaps it’s unsurprising that Britain, in which neoliberal ideology has been most rigorously applied, is the loneliness capital of Europe. We are all neoliberals now.

Neoliberalism has brought out the worst in us
by Paul Verhaeghe

We tend to perceive our identities as stable and largely separate from outside forces. But over decades of research and therapeutic practice, I have become convinced that economic change is having a profound effect not only on our values but also on our personalities. Thirty years of neoliberalism, free-market forces and privatisation have taken their toll, as relentless pressure to achieve has become normative. If you’re reading this sceptically, I put this simple statement to you: meritocratic neoliberalism favours certain personality traits and penalises others.

There are certain ideal characteristics needed to make a career today. The first is articulateness, the aim being to win over as many people as possible. Contact can be superficial, but since this applies to most human interaction nowadays, this won’t really be noticed.

It’s important to be able to talk up your own capacities as much as you can – you know a lot of people, you’ve got plenty of experience under your belt and you recently completed a major project. Later, people will find out that this was mostly hot air, but the fact that they were initially fooled is down to another personality trait: you can lie convincingly and feel little guilt. That’s why you never take responsibility for your own behaviour.

On top of all this, you are flexible and impulsive, always on the lookout for new stimuli and challenges. In practice, this leads to risky behaviour, but never mind, it won’t be you who has to pick up the pieces. The source of inspiration for this list? The psychopathy checklist by Robert Hare, the best-known specialist on psychopathy today.

What About Me?: The Struggle for Identity in a Market-Based Society
by Paul Verhaeghe
Kindle Locations 2357-2428

Hypotheses such as these, however plausible, are not scientific. If we want to demonstrate the link between a neo-liberal society and, say, mental disorders, we need two things. First, we need a yardstick that indicates the extent to which a society is neo-liberal. Second, we need to develop criteria to measure the increase or decrease of psychosocial wellbeing in society. Combine these two, and you would indeed be able to see whether such a connection existed. And by that I don’t mean a causal connection, but a striking pattern; a rise in one being reflected in the other, or vice versa.

This was exactly the approach used by Richard Wilkinson, a British social epidemiologist, in two pioneering studies (the second carried out with Kate Pickett). The gauge they used was eminently quantifiable: the extent of income inequality within individual countries. This is indeed a good yardstick, as neo-liberal policy is known to cause a spectacular rise in such inequality. Their findings were unequivocal: an increase of this kind has far-reaching consequences for nearly all health criteria. Its impact on mental health (and consequently also mental disorders) is by no means an isolated phenomenon. This finding is just as significant as the discovery that mental disorders are increasing.
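The method Verhaeghe describes is, at bottom, simple correlation: pair an inequality yardstick with a wellbeing criterion for each country and check whether the two series move together. A minimal sketch in Python, with invented figures purely for illustration (these are not Wilkinson and Pickett’s data):

```python
# Sketch of the correlational approach described above: pair an income-
# inequality yardstick with a wellbeing criterion per country and measure
# how strongly they move together. All figures below are invented for
# illustration only.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical countries: (income-inequality index, mental-distress rate %)
data = [
    (0.25, 8.0),
    (0.28, 9.5),
    (0.33, 12.0),
    (0.36, 14.5),
    (0.41, 18.0),
]

inequality = [row[0] for row in data]
distress = [row[1] for row in data]

r = pearson(inequality, distress)
print(f"correlation: {r:.2f}")
```

A coefficient near +1 would show the kind of striking pattern the passage mentions, a rise in one reflected in the other; as Verhaeghe stresses, this demonstrates association, not causation.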

As social epidemiologists, Wilkinson and Pickett studied the connection between society and health in the broad sense of the word. Stress proves to be a key factor here. Research has revealed its impact, both on our immune systems and our cardiovascular systems. Tracing the causes of stress is difficult, though, especially given that we live in the prosperous and peaceful West. If we take a somewhat broader view, most academics agree on the five factors that determine our health: early childhood; the fears and cares we experience; the quality of our social relationships; the extent to which we have control over our lives; and, finally, our social status. The worse you score in these areas, the worse your health and the shorter your life expectancy are likely to be.

In his first book, The Impact of Inequality: how to make sick societies healthier, Wilkinson scrutinises the various factors involved, rapidly coming to what would be the central theme of his second book — that is, income inequality. A very striking conclusion is that in a country, or even a city, with high income inequality, the quality of social relationships is noticeably diminished: there is more aggression, less trust, more fear, and less participation in the life of the community. As a psychoanalyst, I was particularly interested in his quest for the factors that play a role at individual level. Low social status proves to have a determining effect on health. Lack of control over one’s work is a prominent stress factor. A low sense of control is associated with poor relationships with colleagues and greater anger and hostility — a phenomenon that Richard Sennett had already described (the infantilisation of adult workers). Wilkinson discovered that this all has a clear impact on health, and even on life expectancy. Which in turn ties in with a classic finding of clinical psychology: powerlessness and helplessness are among the most toxic emotions.

Too much inequality is bad for your health

A number of conclusions are forced upon us. In a prosperous part of the world like Western Europe, it isn’t the quality of health care (the number of doctors and hospitals) that determines the health of the population, but the nature of social and economic life. The better social relationships are, the better the level of health. Excessive inequality is more injurious to health than any other factor, though this is not simply a question of differences between social classes. If anything, it seems to be more of a problem within groups that are presumed to be equal (for example, civil servants and academics). This finding conflicts with the general assumption that income inequality only hurts the underclass — the losers — while those higher up the social ladder invariably benefit. That’s not the case: its negative effects are statistically visible in all sectors of the population, hence the subtitle of Wilkinson’s second work: why more equal societies almost always do better.

In that book, Wilkinson and Pickett adopt a fairly simple approach. Using official statistics, they analyse the connection between income inequality and a host of other criteria. The conclusions are astounding, almost leaping off the page in table after table: the greater the level of inequality in a country or even region, the more mental disorders, teenage pregnancies, child mortality, domestic and street violence, crime, drug abuse, and medication. And the greater the inequality is, the worse physical health and educational performance are, the more social mobility declines, along with feelings of security, and the unhappier people are.

Both books, especially the latter, provoked quite a response in the Anglo-Saxon world. Many saw in them proof of what they already suspected. Many others were more negative, questioning everything from the collation of data to the statistical methods used to reach conclusions. Both authors refuted the bulk of the criticism — which, given the quality of their work, was not a very difficult task. Much of it targeted what was not in the books: the authors were not urging a return to some kind of ‘all animals are equal’ Eastern-bloc state. What critics tended to forget was that their analysis was of relative differences in income, with negative effects becoming most manifest in the case of extreme inequality. Moreover, it is not income inequality itself that produces these effects, but the stress factors associated with it.

Roughly the same inferences can be drawn from Sennett’s study, though it is more theoretical and less underpinned with figures. His conclusion is fairly simple, and can be summed up in the title of what I regard as his best book: Respect in a World of Inequality. Too much inequality leads to a loss of respect, including self-respect — and, in psychosocial terms, this is about the worst thing that can happen to anyone.

This emerges very powerfully from a single study of the social determinants of health, which is still in progress. Nineteen eighty-six saw the start of the second ‘Whitehall Study’ that systematically monitored over 10,000 British civil servants, to establish whether there was a link between their health and their work situations. At first sight, this would seem to be a relatively homogenous group, and one that definitely did not fall in the lowest social class. The study’s most striking finding is that the lower the rank and status of someone within that group, the lower their life expectancy, even when taking account of such factors as smoking, diet, and physical exercise. The most obvious explanation is that the lowest-ranked people experienced the most stress. Medical studies confirm this: individuals in this category have higher cortisol levels (increased stress) and more coagulation-factor deficiencies (and thus are at greater risk of heart attacks).

My initial question was, ‘Is there a demonstrable connection between today’s society and the huge rise in mental disorders?’ As all these studies show, the answer is yes. Even more important is the finding that this link goes beyond mental health. The same studies show highly negative effects on other health parameters. As so often is the case, a parallel can be found in fiction — in this instance, in Alan Lightman’s novel The Diagnosis. During an interview, the author posed the following rhetorical question: ‘Who, experiencing for years the daily toll of intense corporate pressure, could truly escape severe anxiety?’* (I think it may justifiably be called rhetorical, when you think how many have had to find out its answer for themselves.)

A study by a research group at Heidelberg University very recently came to similar conclusions, finding that people’s brains respond differently to stress according to whether they have had an urban or rural upbringing.3 What’s more, people in the former category prove more susceptible to phobias and even schizophrenia. So our brains are differently shaped by the environment in which we grow up, making us potentially more susceptible to mental disorders. Another interesting finding emerged from the way the researchers elicited stress. While the subjects of the experiment were wrestling with the complex calculations they had been asked to solve, some of them were told (falsely) that their scores were lagging behind those of the others, and were asked to hurry up because the experiments were expensive. All the neo-liberal factors were in place: emphasis on productivity, evaluation, competition, and cost reduction.

Capitalist Realism: Is there no alternative?
by Mark Fisher
pp. 19-22

Mental health, in fact, is a paradigm case of how capitalist realism operates. Capitalist realism insists on treating mental health as if it were a natural fact, like weather (but, then again, weather is no longer a natural fact so much as a political-economic effect). In the 1960s and 1970s, radical theory and politics (Laing, Foucault, Deleuze and Guattari, etc.) coalesced around extreme mental conditions such as schizophrenia, arguing, for instance, that madness was not a natural, but a political, category. But what is needed now is a politicization of much more common disorders. Indeed, it is their very commonness which is the issue: in Britain, depression is now the condition that is most treated by the NHS. In his book The Selfish Capitalist, Oliver James has convincingly posited a correlation between rising rates of mental distress and the neoliberal mode of capitalism practiced in countries like Britain, the USA and Australia. In line with James’s claims, I want to argue that it is necessary to reframe the growing problem of stress (and distress) in capitalist societies. Instead of treating it as incumbent on individuals to resolve their own psychological distress, instead, that is, of accepting the vast privatization of stress that has taken place over the last thirty years, we need to ask: how has it become acceptable that so many people, and especially so many young people, are ill? The ‘mental health plague’ in capitalist societies would suggest that, instead of being the only social system that works, capitalism is inherently dysfunctional, and that the cost of it appearing to work is very high. […]

By contrast with their forebears in the 1960s and 1970s, British students today appear to be politically disengaged. While French students can still be found on the streets protesting against neoliberalism, British students, whose situation is incomparably worse, seem resigned to their fate. But this, I want to argue, is a matter not of apathy, nor of cynicism, but of reflexive impotence. They know things are bad, but more than that, they know they can’t do anything about it. But that ‘knowledge’, that reflexivity, is not a passive observation of an already existing state of affairs. It is a self-fulfilling prophecy.

Reflexive impotence amounts to an unstated worldview amongst the British young, and it has its correlate in widespread pathologies. Many of the teenagers I worked with had mental health problems or learning difficulties. Depression is endemic. It is the condition most dealt with by the National Health Service, and is afflicting people at increasingly younger ages. The number of students who have some variant of dyslexia is astonishing. It is not an exaggeration to say that being a teenager in late capitalist Britain is now close to being reclassified as a sickness. This pathologization already forecloses any possibility of politicization. By privatizing these problems – treating them as if they were caused only by chemical imbalances in the individual’s neurology and/ or by their family background – any question of social systemic causation is ruled out.

Many of the teenage students I encountered seemed to be in a state of what I would call depressive hedonia. Depression is usually characterized as a state of anhedonia, but the condition I’m referring to is constituted not by an inability to get pleasure so much as by an inability to do anything else except pursue pleasure. There is a sense that ‘something is missing’ – but no appreciation that this mysterious, missing enjoyment can only be accessed beyond the pleasure principle. In large part this is a consequence of students’ ambiguous structural position, stranded between their old role as subjects of disciplinary institutions and their new status as consumers of services. In his crucial essay ‘Postscript on Societies of Control’, Deleuze distinguishes between the disciplinary societies described by Foucault, which were organized around the enclosed spaces of the factory, the school and the prison, and the new control societies, in which all institutions are embedded in a dispersed corporation.

pp. 32-38

The ethos espoused by McCauley is the one which Richard Sennett examines in The Corrosion of Character: The Personal Consequences of Work in the New Capitalism, a landmark study of the affective changes that the post-Fordist reorganization of work has brought about. The slogan which sums up the new conditions is ‘no long term’. Where formerly workers could acquire a single set of skills and expect to progress upwards through a rigid organizational hierarchy, now they are required to periodically re-skill as they move from institution to institution, from role to role. As the organization of work is decentralized, with lateral networks replacing pyramidal hierarchies, a premium is put on ‘flexibility’. Echoing McCauley’s mockery of Hanna in Heat (‘How do you expect to keep a marriage?’), Sennett emphasizes the intolerable stresses that these conditions of permanent instability put on family life. The values that family life depends upon – obligation, trustworthiness, commitment – are precisely those which are held to be obsolete in the new capitalism. Yet, with the public sphere under attack and the safety nets that a ‘Nanny State’ used to provide being dismantled, the family becomes an increasingly important place of respite from the pressures of a world in which instability is a constant. The situation of the family in post-Fordist capitalism is contradictory, in precisely the way that traditional Marxism expected: capitalism requires the family (as an essential means of reproducing and caring for labor power; as a salve for the psychic wounds inflicted by anarchic social-economic conditions), even as it undermines it (denying parents time with children, putting intolerable stress on couples as they become the exclusive source of affective consolation for each other). […]

The psychological conflict raging within individuals cannot but have casualties. Marazzi is researching the link between the increase in bi-polar disorder and post-Fordism and, if, as Deleuze and Guattari argue, schizophrenia is the condition that marks the outer edges of capitalism, then bi-polar disorder is the mental illness proper to the ‘interior’ of capitalism. With its ceaseless boom and bust cycles, capitalism is itself fundamentally and irreducibly bi-polar, periodically lurching between hyped-up mania (the irrational exuberance of ‘bubble thinking’) and depressive come-down. (The term ‘economic depression’ is no accident, of course.) To a degree unprecedented in any other social system, capitalism both feeds on and reproduces the moods of populations. Without delirium and confidence, capital could not function.

It seems that with post-Fordism, the ‘invisible plague’ of psychiatric and affective disorders that has spread, silently and stealthily, since around 1750 (i.e. the very onset of industrial capitalism) has reached a new level of acuteness. Here, Oliver James’s work is important. In The Selfish Capitalist, James points to significant rises in the rates of ‘mental distress’ over the last 25 years. ‘By most criteria’, James reports,

rates of distress almost doubled between people born in 1946 (aged thirty-six in 1982) and 1970 (aged thirty in 2000). For example, 16 per cent of thirty-six-year-old women in 1982 reported having ‘trouble with nerves, feeling low, depressed or sad’, whereas 29 per cent of thirty-year-olds reported this in 2000 (for men it was 8 per cent in 1982, 13 per cent in 2000).

Another British study James cites compared levels of psychiatric morbidity (which includes neurotic symptoms, phobias and depression) in samples of people in 1977 and 1985. ‘Whereas 22 per cent of the 1977 sample reported psychiatric morbidity, this had risen to almost a third of the population (31 per cent) by 1986’. Since these rates are much higher in countries that have implemented what James calls ‘selfish’ capitalism than in other capitalist nations, James hypothesizes that it is selfish (i.e. neoliberalized) capitalist policies and culture that are to blame. […]

James’s conjectures about aspirations, expectations and fantasy fit with my own observations of what I have called ‘hedonic depression’ in British youth.

It is telling, in this context of rising rates of mental illness, that New Labour committed itself, early in its third term in government, to removing people from Incapacity Benefit, implying that many, if not most, claimants are malingerers. In contrast with this assumption, it doesn’t seem unreasonable to infer that most of the people claiming Incapacity Benefit – and there are well in excess of two million of them – are casualties of Capital. A significant proportion of claimants, for instance, are people psychologically damaged as a consequence of the capitalist realist insistence that industries such as mining are no longer economically viable. (Even considered in brute economic terms, though, the arguments about ‘viability’ seem rather less than convincing, especially once you factor in the cost to taxpayers of incapacity and other benefits.) Many have simply buckled under the terrifyingly unstable conditions of post-Fordism.

The current ruling ontology denies any possibility of a social causation of mental illness. The chemico-biologization of mental illness is of course strictly commensurate with its de-politicization. Considering mental illness an individual chemico-biological problem has enormous benefits for capitalism. First, it reinforces Capital’s drive towards atomistic individualization (you are sick because of your brain chemistry). Second, it provides an enormously lucrative market in which multinational pharmaceutical companies can peddle their pharmaceuticals (we can cure you with our SSRIs). It goes without saying that all mental illnesses are neurologically instantiated, but this says nothing about their causation. If it is true, for instance, that depression is constituted by low serotonin levels, what still needs to be explained is why particular individuals have low levels of serotonin. This requires a social and political explanation; and the task of repoliticizing mental illness is an urgent one if the left wants to challenge capitalist realism.

It does not seem fanciful to see parallels between the rising incidence of mental distress and new patterns of assessing workers’ performance. We will now take a closer look at this ‘new bureaucracy’.

The Opposite of Addiction is Connection
by Robert Weiss LCSW, CSAT-S

Not for Alexander. He was bothered by the fact that the cages in which the rats were isolated were small, with no potential for stimulation beyond the heroin. Alexander thought: Of course they all got high. What else were they supposed to do? In response to this perceived shortcoming, Alexander created what we now call “the rat park,” a cage approximately 200 times larger than the typical isolation cage, with hamster wheels and multi-colored balls to play with, plenty of tasty food to eat, and spaces for mating and raising litters.[ii] And he put not one rat, but 20 rats (of both genders) into the cage. Then, and only then, did he mirror the old experiments, offering one bottle of pure water and one bottle of heroin water. And guess what? The rats ignored the heroin. They were much more interested in typical communal rat activities such as playing, fighting, eating, and mating. Essentially, with a little bit of social stimulation and connection, addiction disappeared. Heck, even rats who’d previously been isolated and sucking on the heroin water left it alone once they were introduced to the rat park.

The Human Rat Park

One of the reasons that rats are routinely used in psychological experiments is that they are social creatures in many of the same ways that humans are social creatures. They need stimulation, company, play, drama, sex, and interaction to stay happy. Humans, however, add an extra layer to this equation. We need to be able to trust and to emotionally attach.

This human need for trust and attachment was initially studied and developed as a psychological construct in the 1950s, when John Bowlby tracked the reactions of small children when they were separated from their parents.[iii] In a nutshell, he found that infants, toddlers, and young children have an extensive need for safe and reliable caregivers. If children have that, they tend to be happy in childhood and well-adjusted (emotionally healthy) later in life. If children don’t have that, it’s a very different story. In other words, it is clear from Bowlby’s work and the work of later researchers that the level and caliber of trust and connection experienced in early childhood carries forth into adulthood. Those who experience secure attachment as infants, toddlers, and small children nearly always carry that with them into adulthood, and they are naturally able to trust and connect in healthy ways. Meanwhile, those who don’t experience secure early-life attachment tend to struggle with trust and connection later in life. In other words, securely attached individuals tend to feel comfortable in and to enjoy the human rat park, while insecurely attached people typically struggle to fit in and connect.

The Opposite Of Addiction is Connection
By Jonathan Davis

If connection is the opposite of addiction, then an examination of the neuroscience of human connection is in order. Published in 2000, A General Theory Of Love is a collaboration between three professors of psychiatry at the University of California, San Francisco. A General Theory Of Love reveals that humans require social connection for optimal brain development, and that babies cared for in a loving environment are psychologically and neurologically ‘immunised’ by love. When things get difficult in adult life, the neural wiring developed during a love-filled childhood confers increased emotional resilience. Conversely, those who grow up in an environment where loving care is unstable or absent are less likely to be resilient in the face of emotional distress.

How does this relate to addiction? Gabor Maté observes an extremely high rate of childhood trauma in the addicts he works with, and trauma is the extreme opposite of growing up in a consistently safe and loving environment. He asserts that it is extremely common for people with addictions to have a reduced capacity for dealing with emotional distress, and hence an increased risk of drug dependence.

How Our Ability To Connect Is Impaired By Trauma

Trauma is well-known to cause interruption to healthy neural wiring, in both the developing and mature brain. A deeper issue here is that people who have suffered trauma, particularly children, can be left with an underlying sense that the world is no longer safe, or that people can no longer be trusted. This erosion (or complete destruction) of a sense of trust, that our family, community and society will keep us safe, results in isolation – leading to the very lack of connection Johann Hari suggests is the opposite of addiction. People who use drugs compulsively do so to avoid the pain of past trauma and to replace the absence of connection in their life.

Social Solutions To Addiction

The solution to the problem of addiction on a societal level is both simple and fairly easy to implement. If a person is born into a life that is lacking in love and support on a family level, or if due to some other trauma they have become isolated and suffer from addiction, there must be a cultural response to make sure that person knows that they are valued by their society (even if they don’t feel valued by their family). Portugal has demonstrated this with a 50% drop in addiction thanks to programs that are specifically designed to re-create connection between the addict and their community.

The real cause of addiction has been discovered – and it’s not what you think
by Johann Hari

This has huge implications for the hundred-year-old war on drugs. This massive war – which, as I saw, kills people from the malls of Mexico to the streets of Liverpool – is based on the claim that we need to physically eradicate a whole array of chemicals because they hijack people’s brains and cause addiction. But if drugs aren’t the driver of addiction – if, in fact, it is disconnection that drives addiction – then this makes no sense.

Ironically, the war on drugs actually increases all those larger drivers of addiction: for example, I went to a prison in Arizona – ‘Tent City’ – where inmates are detained in tiny stone isolation cages (“The Hole”) for weeks and weeks on end, to punish them for drug use. It is as close to a human recreation of the cages that guaranteed deadly addiction in rats as I can imagine. And when those prisoners get out, they will be unemployable because of their criminal record – guaranteeing they will be cut off ever more. I watched this playing out in the human stories I met across the world.

There is an alternative. You can build a system that is designed to help drug addicts to reconnect with the world – and so leave behind their addictions.

This isn’t theoretical. It is happening. I have seen it. Nearly fifteen years ago, Portugal had one of the worst drug problems in Europe, with 1 percent of the population addicted to heroin. They had tried a drug war, and the problem just kept getting worse. So they decided to do something radically different. They resolved to decriminalize all drugs, and transfer all the money they used to spend on arresting and jailing drug addicts, and spend it instead on reconnecting them – to their own feelings, and to the wider society. The most crucial step is to get them secure housing, and subsidized jobs – so they have a purpose in life, and something to get out of bed for. I watched as they are helped, in warm and welcoming clinics, to learn how to reconnect with their feelings, after years of trauma and stunning them into silence with drugs.

One example I learned about was a group of addicts who were given a loan to set up a removals firm. Suddenly, they were a group, all bonded to each other, and to the society, and responsible for each other’s care.

The results of all this are now in. An independent study by the British Journal of Criminology found that since total decriminalization, addiction has fallen, and injecting drug use is down by 50 percent. I’ll repeat that: injecting drug use is down by 50 percent. Decriminalization has been such a manifest success that very few people in Portugal want to go back to the old system. The main campaigner against the decriminalization back in 2000 was Joao Figueira – the country’s top drug cop. He offered all the dire warnings that we would expect from the Daily Mail or Fox News. But when we sat together in Lisbon, he told me that everything he predicted had not come to pass – and he now hopes the whole world will follow Portugal’s example.

This isn’t only relevant to addicts. It is relevant to all of us, because it forces us to think differently about ourselves. Human beings are bonding animals. We need to connect and love. The wisest sentence of the twentieth century was E.M. Forster’s: “only connect.” But we have created an environment and a culture that cut us off from connection, or offer only the parody of it offered by the Internet. The rise of addiction is a symptom of a deeper sickness in the way we live–constantly directing our gaze towards the next shiny object we should buy, rather than the human beings all around us.

The writer George Monbiot has called this “the age of loneliness.” We have created human societies where it is easier for people to become cut off from all human connections than ever before. Bruce Alexander, the creator of Rat Park, told me that for too long, we have talked exclusively about individual recovery from addiction. We need now to talk about social recovery—how we all recover, together, from the sickness of isolation that is sinking on us like a thick fog.

But this new evidence isn’t just a challenge to us politically. It doesn’t just force us to change our minds. It forces us to change our hearts.

* * *

Social Conditions of an Individual’s Condition

Society and Dysfunction

It’s All Your Fault, You Fat Loser!

Liberal-mindedness, Empathetic Imagination, and Capitalist Realism

Ideological Realism & Scarcity of Imagination

The Unimagined: Capitalism and Crappiness

To Put the Rat Back in the Rat Park

Rationalizing the Rat Race, Imagining the Rat Park

The Desperate Acting Desperately

To Grow Up Fast

Morality-Punishment Link

An Invisible Debt Made Visible

Trends in Depression and Suicide Rates

From Bad to Worse: Trends Across Generations

Republicans: Party of Despair

Rate And Duration of Despair

Views of the Self

A Rant: The Brief Discussion of the Birth of An Error
by Skepoet2

Collectivism vs. Individualism is the primary and fundamental misreading of human self-formation to come out of the Enlightenment, and picking sides in that dumb-ass binary has been the primary driver of bad politics, left and right, for the last 250 years.

The Culture Wars of the Late Renaissance: Skeptics, Libertines, and Opera
by Edward Muir
Kindle Locations 80-95

One of the most disturbing sources of late-Renaissance anxiety was the collapse of the traditional hierarchic notion of the human self. Ancient and medieval thought depicted reason as governing the lower faculties of the will, the passions, and the body. Renaissance thought did not so much promote “individualism” as it cut away the intellectual props that presented humanity as the embodiment of a single divine idea, thereby forcing a desperate search for identity in many. John Martin has argued that during the Renaissance, individuals formed their sense of selfhood through a difficult negotiation between inner promptings and outer social roles. Individuals during the Renaissance looked both inward for emotional sustenance and outward for social assurance, and the friction between the inner and outer selves could sharpen anxieties.2 The fragmentation of the self seems to have been especially acute in Venice, where the collapse of aristocratic marriage structures led to the formation of what Virginia Cox has called the single self, most clearly manifest in the works of several women writers who argued for the moral and intellectual equality of women with men.3 As a consequence of the fragmented understanding of the self, such thinkers as Montaigne became obsessed with what was then the new concept of human psychology, a term in fact coined in this period.4 A crucial problem in the new psychology was to define the relation between the body and the soul, in particular to determine whether the soul died with the body or was immortal. With its tradition of Averroist readings of Aristotle, some members of the philosophy faculty at the University of Padua recurrently questioned the Christian doctrine of the immortality of the soul as unsound philosophically. Other hierarchies of the human self came into question.
Once reason was dethroned, the passions were given a higher value, so that the heart could be understood as a greater force than the mind in determining human conduct. When the body itself slipped out of its long-despised position, the sexual drives of the lower body were liberated and thinkers were allowed to consider sex, independent of its role in reproduction, a worthy manifestation of nature. The Paduan philosopher Cesare Cremonini’s personal motto, “Intus ut libet, foris ut moris est,” does not quite translate to “If it feels good, do it,” but it comes very close. The collapse of the hierarchies of human psychology even altered the understanding of the human senses. The sense of sight lost its primacy as the superior faculty, the source of “enlightenment”; the Venetian theorists of opera gave that place in the hierarchy to the sense of hearing, the faculty that most directly channeled sensory impressions to the heart and passions.

Amusing Ourselves to Death: Public Discourse in the Age of Show Business
by Neil Postman

That does not say much unless one connects it to the more important idea that form will determine the nature of content. For those readers who may believe that this idea is too “McLuhanesque” for their taste, I offer Karl Marx from The German Ideology. “Is the Iliad possible,” he asks rhetorically, “when the printing press and even printing machines exist? Is it not inevitable that with the emergence of the press, the singing and the telling and the muse cease; that is, the conditions necessary for epic poetry disappear?”

Meta-Theory
by bcooney

When I read Jaynes’s book for the first time last year I was struck by the opportunities his theory affords for marrying materialism to psychology, linguistics, and philosophy. The idea that mentality is dependent on social relations, that power structures in society are related to the structure of mentality and language, and the idea that we can only understand mentality historically and socially are all ideas that appeal to me as a historical materialist.

Consciousness: Breakdown Or Breakthrough?
by ignosympathnoramus

The “Alpha version” of consciousness involved memory having authority over the man, instead of the man having authority over his memory. Bicameral man could remember some powerful admonishment from his father, but he could not recall it at will. He experienced this recollection as an external event; namely a visitation from either his father or a god. It was a sort of third-person-perspective group-think where communication was not intentional or conscious but, just like our “blush response,” unconscious and betraying of our deepest being. You can see this in the older versions of the Iliad, where, for instance, we do not learn about Achilles’ suicidal impulse by his internal feelings, thoughts, or his speaking, but instead, by the empathic understanding of his friend. Do you have to “think” in order to empathize, or does it just come on its own, in a rush of feeling? Well, that used to be consciousness. Think about it, whether you watch your friend blush or you blush yourself, the experience is remarkably similar, and seems to be nearly third-person in orientation. What you are recognizing in your friend’s blush the Greeks would have recognized as possession by a god, but it is important to notice that you have no more control over it than the Greeks did. They all used the same name for the same god (emotion) and this led to a relatively stable way of viewing human volition, that is, until it came into contact with other cultures with other “gods.” When this happens, you either have war, or you have conversion. That is, unless you can develop an operating system better than Alpha. We have done so, but at the cost of making us all homeless or orphaned. How ironic that in the modern world the biggest problem is that there are entirely too many individuals in the world, and yet their biggest problem is somehow having too few people to give each individual the support and family-type-structure that humans need to feel secure and thrive. 
We simply don’t have a shared themis that would allow each of us to view the other as “another self,” to use Aristotle’s phrase, or if we do, we realize that “another self” means “another broken and lost orphan like me.” It is in the nature of self-consciousness to not trust yourself, to remain skeptical, to resist immediate impulse. You cannot order your Will if you simply trust it and cave to every inclination. However, this paranoia is hardly conducive to social trust or to loving another as if he were “another self,” for that would only amount to him being another system of forces that we have to interpret, organize or buffer ourselves from. How much easier it is to empathize and care about your fellow citizens when they are not individuals, but vehicles for the very same muses, daimons, and gods that animate you! The matter is rather a bit worse than this, though. Each child discovers and secures his “inner self” by the discovery of his ability to lie, which further undermines social trust!

Marx’s theory of human nature
by Wikipedia

Marx’s theory of human nature has an important place in his critique of capitalism, his conception of communism, and his ‘materialist conception of history’. Karl Marx, however, does not refer to “human nature” as such, but to Gattungswesen, which is generally translated as ‘species-being’ or ‘species-essence’. What Marx meant by this is that humans are capable of making or shaping their own nature to some extent. According to a note from the young Marx in the Manuscripts of 1844, the term is derived from Ludwig Feuerbach’s philosophy, in which it refers both to the nature of each human and of humanity as a whole.[1] However, in the sixth Thesis on Feuerbach (1845), Marx criticizes the traditional conception of “human nature” as a “species” which incarnates itself in each individual, in favor of a conception of human nature as formed by the totality of “social relations”. Thus, the whole of human nature is not understood, as in classical idealist philosophy, as permanent and universal: the species-being is always determined in a specific social and historical formation, with some aspects being biological.

The strange case of my personal marxism (archive 2012)
by Skepoet2

It is the productive capacity within a community that allows a community to exist, but communities are more than their productive capacities, and subjectivities are different from subjects. Therefore, it is best to think of the schema we have given societies in terms of integrated wholes, and societies are produced by their histories, both ecological and cultural. The separation of the ecological and the cultural is what Ken Wilber would call a “right-hand” and “left-hand” distinction: the empirical experience of what is outside of us but limits us (the subject here being collective), and what is within us that limits us.

THE KOSMOS TRILOGY VOL. II: EXCERPT A
AN INTEGRAL AGE AT THE LEADING EDGE
by Ken Wilber

One of the easiest ways to get a sense of the important ideas that Marx was advancing is to look at more recent research (such as Lenski’s) on the relation of techno-economic modes of production (foraging, horticultural, herding, maritime, agrarian, industrial, informational) to cultural practices such as slavery, bride price, warfare, patrifocality, matrifocality, gender of prevailing deities, and so on. With frightening uniformity, similar techno-economic modes have similar probabilities of those cultural practices (showing just how strongly the particular probability waves are tetra-meshed).

For example, over 90% of societies that have female-only deities are horticultural societies. 97% of herding societies, on the other hand, are strongly patriarchal. 37% of foraging tribes have bride price, but 86% of advanced horticultural do. 58% of known foraging tribes engaged in frequent or intermittent warfare, but an astonishing 100% of simple horticultural did so.

The existence of slavery is perhaps most telling. Around 10% of foraging tribes have slavery, but 83% of advanced horticultural do. The only societal type to completely outlaw slavery was patriarchal industrial societies, 0% of which sanction slavery.
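The correlations Wilber cites can be gathered into a small lookup table. A sketch in Python (the percentages are restated from the excerpt above; the table layout and the `rate` helper are my own, for illustration only):

```python
# Quoted rates (percent) at which societies of a given techno-economic mode
# exhibit a given cultural practice, per the figures in the excerpt above.
# Note: the female-deity figure runs the other way in the text ("over 90% of
# societies that have female-only deities are horticultural"), so it is omitted.
PRACTICE_RATES = {
    ("herding", "strong patriarchy"): 97,
    ("foraging", "bride price"): 37,
    ("advanced horticultural", "bride price"): 86,
    ("foraging", "warfare"): 58,
    ("simple horticultural", "warfare"): 100,
    ("foraging", "slavery"): 10,
    ("advanced horticultural", "slavery"): 83,
    ("patriarchal industrial", "slavery"): 0,
}

def rate(mode: str, practice: str):
    """Return the quoted percentage for a (mode, practice) pair, or None."""
    return PRACTICE_RATES.get((mode, practice))

print(rate("advanced horticultural", "slavery"))  # 83
print(rate("patriarchal industrial", "slavery"))  # 0
```

Laid out this way, the point of the excerpt is easy to see: the techno-economic mode acts like an index into a table of cultural probabilities.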

Who’s correct about human nature, the left or the right?
by Ed Rooksby

So what, if anything, is human nature? Marx provides a much richer account. He is often said to have argued that there is no such thing as human nature. This is not true. Though he did think that human behaviour was deeply informed by social environment, this is not to say that human nature does not exist. In fact, it is our capacity to adapt and transform our social practices and behaviours that makes us distinctive as a species, and it is in this capacity that our specifically human nature is to be located.

For Marx, we are essentially creative and producing beings. It is not just that we produce the means of our survival; it is also that we engage in creative and productive activity over and above what is strictly necessary for survival and find fulfilment in this activity. This activity is inherently social – most of what we produce is produced collectively in some sense or another. In opposition to the individualist basis of liberal thought, then, we are fundamentally social creatures.

Indeed, for Marx, human consciousness and thus our very notion of individual identity is collectively generated. We become consciously aware of ourselves as a discrete entity only through language – and language is inherently inter-subjective; it is a social practice. What we think – including what we think about ourselves – is governed by what we do and what we do is always done socially and collectively. It is for this reason that Marx refers to our “species-being” – what we are can only be understood properly in social terms because what we are is a property and function of the human species as a whole.

Marx, then, has a fairly expansive view of human nature – it is in our nature to be creatively adaptable and for our understanding of what is normal in terms of behaviour to be shaped by the social relations around us. This is not to say that any social system is as preferable as any other. We are best able to flourish in conditions that allow us to express our sociability and creativity.

Marx’s Critique of Religion
by Cris Campbell

Alienated consciousness makes sense only in contrast to un-alienated consciousness. Marx’s conception of the latter, though somewhat vague, derives from his understanding of primitive communism. It is here that Marx’s debt to anthropology is most clear. In foraging or “primitive” societies, people are whole – they are un-alienated because resources are freely available and work directly transforms those resources into usable goods. This directness and immediacy – with no interventions or distortions between the resource, work, and result – makes for creative, fulfilled, and unified people. Society is, as a consequence, tightly bound. There are no class divisions which pit one person or group against another. Because social relations are always reflected back into people’s lives, unified societies make for unified individuals. People are not alienated; they have direct, productive, and creative relationships with resources, work, things, and others. This communalism is, for Marx, most conducive to human happiness and well-being.

This unity is shattered when people begin claiming ownership of resources. Private property introduces division into formerly unified societies, and classes develop. When this occurs people are no longer free to appropriate and produce as they please. Creativity and fulfillment are crushed when labor is separated from life and becomes an isolated commodity. Humans who labor for something other than their needs, or for someone else, become alienated from resources, work, things, and others. When these divided social relations are reflected back into people’s lives, the result is discord and disharmony. People, in other words, feel alienated. As economies develop and become more complex, life becomes progressively more specialized and splintered. The alienation becomes so intense that something is required to soothe it; otherwise, life becomes unbearable.

It is at this point (which anthropologists recognize as the Neolithic transition) that religion arises. But religion is not, Marx asserts, merely a soothing palliative – it also masks the economically and socially stratified conditions that cause alienation:

“Precisely as a consequence of man’s loss of spontaneous self-activity, religion arises as a compensatory mechanism for explaining what alienated man cannot explain and for promising him elsewhere what he cannot achieve here. Thus because man does not create himself through his productive labor, he supposes that he is created by a power beyond. Because man lacks power, he attributes power to something beyond himself. Like all forms of man’s self-alienation, religion displaces reality with illusion. The reason is that man, the alienated being, requires an ideology that will simultaneously conceal his situation from him and confer upon it significance. Religion is man’s oblique and doomed effort at humanization, a search for divine meaning in the face of human meaninglessness.”

Related posts from my blog:

Facing Shared Trauma and Seeking Hope

Society: Precarious or Persistent?

Plowing the Furrows of the Mind

Démos, The People

Making Gods, Making Individuals

On Being Strange

To Put the Rat Back in the Rat Park

Rationalizing the Rat Race, Imagining the Rat Park

Whose Human Nature?

Kenan Malik made a defense of unrestricted free speech. I agreed with his basic argument. But that wasn’t what got me thinking.

In the comments section, I noticed that a couple of people didn’t understand what Malik was trying to communicate. They were conflating the issue of free speech with every other issue that touches on speech, as if the only way to enforce control over society were by strictly controlling what people are allowed to say, and, I assume, harshly punishing anyone who disobeys by speaking freely. One of these conflated issues was human nature (see this comment and my responses).

The one commenter I had in mind seemed to base his views on a couple of basic beliefs: first, that there is a singular human nature which can be known and upon which laws should be based; and second, that human nature is unchanging, uncontrollable, and unimprovable, so that all one can do is constrain its expression.

This kind of thinking always seems bizarre to me. It’s a fairly typical conservative worldview: the belief that human nature simply is what it is and can be nothing else. So liberals and left-wingers are perceived as utopian perfection-seekers merely for pointing out that human psychology is diverse, plastic, and full of potential.

I was thinking about this more in my own experience, though, and not just as a liberal. I’ve long realized I’m not normal and I’ve never thought that my own psychology should be considered normative for the human race. If all humans were like me, society would have some serious problems. I don’t presume most people are like me or should be like me.

Here is what I see in others who have strong beliefs about human nature, both descriptively and prescriptively. I often suspect they are projecting, taking what they know in their own experience and assuming others are like them. My self-perceived abnormality has safeguarded me from projecting onto others, at least in my understanding of human nature.

Economic Predispositions?

Predisposed: Liberals, Conservatives, and the Biology of Political Differences
John R. Hibbing, Kevin B. Smith, and John R. Alford
Kindle Locations 1295-1332

Classical economic theory has very precise predictions about what you will do with the money. In the dictator game you will not give the stranger anything. Why should you, since you will probably never see him or her again? The rational thing to do is to maximize your benefit and that means holding onto the fistful of dollar bills. You cannot be similarly Scrooge-like in the ultimatum game, though, because the stranger has a veto. Give the stranger nothing and you are likely to get nothing. The problem is how much to give. Economic theory predicts that you will give the least amount required to avoid a veto. If you are holding 20 one-dollar bills, that amounts to a measly dollar. Here’s the logic: Walking away with a dollar is better than walking away with nothing, so a dollar should be enough to prevent a rational stranger from exercising a veto.

These sorts of games have been repeated thousands of times in an amazing variety of contexts, and with an amazing variety of twists and minor modifications. The clear message from all this research—a message that is surprising only to economists—is that classical economic theory stinks at predicting how people will divide their 20 dollars. People are wildly more generous to strangers than they need to be. The average amount passed along in a dictator game is not zero but rather about $8 of the $20; in other words, pretty close to an even split and way more than rational maximizing behavior would suggest.

The results of ultimatum games are even more interesting. Remember, a rational person should accept any positive amount because one dollar is more than no dollars. In reality it is very common for small offers to be rejected. If you keep $19 and offer just $1, many strangers will exercise their veto and your 19 bucks will go poof. Splits of $18–$2, $17–$3, $16–$4 also are frequently turned down; even $15–$5 splits are occasionally nixed. What all this tells us is that people routinely deviate from rationality in order to be generous to a powerless stranger or to stick it to a greedy bastard. These findings probably are not big news to you but they create serious problems for the theory that humans are rational maximizing actors because, well, they don’t seem to act very rationally.

This basic message stays the same even when researchers tinker with the setting or format of the basic script. These games have been played in Siberia, in Western universities, and in hunter-gatherer societies.3 The stakes of the games have been altered by taking them to regions of the world where $20 is the equivalent of several months’ wages.4 The $20 has been described as a blind (an unseen resource) or a pot rather than as a fund belonging to the divider.5 The physical attractiveness of the “stranger” has been altered.6 And the “stranger” has been rendered less strange by altering the extent to which the players know each other.7 These changes make a difference, driving non-maximizing behavior up or down, but none alters the basic conclusion that people are not the single-minded pursuers of profit that economic theory holds them out to be.

Just as Milgram’s results are presented as indicating that people are subservient to authority, the divide the dollar outcomes are presented as evidence that people are irrational; and just as the common interpretation of Milgram’s research is mistaken, so too is the common interpretation of the research on economic games. A closer look at the game results indicates tremendous individual variation in the decisions people make—even when the locale and experimental manipulations are the same. Some people are simply more generous than other people; some are more punitive; some are more strategic; some are more consistent; and some are more sensitive to the setting.

A significant minority of people—our best guess is around 20 percent—play economic games in a manner that is quite consistent with classic microeconomic theory in that they do not share unless they have to and they do not punish those who do not share with them. Others are relentlessly generous and the decisions of still others are variable and contingent upon context. The common conclusion growing out of the economic games research—that people are not rational maximizers—badly misses the point. Whether the topic is obeying authority figures or sharing resources with strangers, the real message of empirical research on human behavior is that people are fundamentally different. “People” are not lemmings in the face of authority—but some are. “People” are not rational maximizers—but some are.
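The gap between the textbook prediction and the observed mix of players can be sketched as a toy simulation. A hedged illustration in Python: the three player types and their acceptance thresholds are invented for the example; only the roughly 20 percent share of theory-consistent “maximizers” comes from the passage above:

```python
POT = 20  # dollars to be divided in the ultimatum game

def accepts(offer: int, player_type: str) -> bool:
    """Would a responder of this (hypothetical) type accept the offer?"""
    if player_type == "maximizer":   # classical theory: any positive amount beats zero
        return offer >= 1
    if player_type == "punitive":    # vetoes greedy splits, forgoing money to punish
        return offer >= 8
    return offer >= 5                # "contextual": rejects only very low offers

# Illustrative population: ~20% behave as classical micro theory predicts.
population = ["maximizer"] * 20 + ["punitive"] * 30 + ["contextual"] * 50

offer = 1  # proposer keeps $19, offers the "rational" minimum of $1
accept_rate = sum(accepts(offer, p) for p in population) / len(population)
print(accept_rate)  # 0.2 -- only the maximizers accept, so the $19 usually goes poof
```

The design point is exactly the book’s: the aggregate result (most $1 offers rejected) says nothing about any individual, because the population is a mix of types, one of which really does behave as the theory predicts.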

Kindle Locations 1344-1361

Milgram’s focus on the situation as the key explanation of behavior and his abject indifference to behavioral variation within the same situation is disconcertingly typical of social science research. As an illustration of the value that could be added if this research tendency were altered, consider a fascinating study conducted some time ago by economist Kevin McCabe and colleagues. They had participants play a variant of divide the dollar games called a “trust” game while their brains were being imaged. The twist in this case was that players sometimes interacted with another human being and sometimes with a computer that was programmed to follow a preset sequence. McCabe found that people’s brain activation patterns are quite different in these two situations.

Told they are playing a computer, little activity registered in the emotional (or limbic) areas of the brain or in the prefrontal cortex of participants. In this situation the brain appears to be on autopilot, doing nothing more than calculating the way to get the most money (in other words, to be rational). Against a human being, in contrast, limbic areas such as the amygdala are activated, as is the prefrontal cortex, which presumably must resolve the conflict created by the rational desire to acquire more money and the emotional feelings that might accompany an exchange situation.9

If it ended there, this research would be another example of the kind of approach that we are cautioning against: general statements that “people” display different brain activation patterns depending on the situation. This particular study, however, has a feature that illustrates the value of looking at individual differences. When the five most uncooperative individuals, as determined by the decisions they made in earlier economic games, were observed in the scanner, their brain activation patterns, unlike other participants’, tended to be no different when they were playing against another human being than when they were playing against a computer. Thus, at least some people appear to be surprisingly devoid of the emotional responses that typically accompany human interaction.10

Kindle Locations 1452-1467

Classical economic theory is in much the same boat. We have already noted this theory’s spectacularly inaccurate predictions with regard to various divide the dollar games. Classical microeconomic theory ends up in the same situational place as behaviorism and gets there much faster than evolutionary psychology. This is because it, too, is built on a worldview of presumed human universality, specifically humans as preference-maximizing machines. We might prefer beer and you might prefer wine, but the reasons we have different preferences are not of interest to most economists. They are more excited by the presumed universal process people employ to maximize those preferences in a given situation (rational utility maximization, as it’s called in the trade).

Classical economists rarely recognize the relevance of behavioral morphs. While psychologists study introverts and extroverts and political scientists study liberals and conservatives, economists have no parallel widely accepted terms that are indicative of fundamental economic types.22 The situation determines what people need to do to maximize preferences so there is no need to worry about the fiddle-faddle of people having different preferences in the same situation. Preferences are taken as given (in other words, assumed away), and when deciding what to do, it is assumed that all humans crank through a universal cost-benefit calculation. The perceived pros and cons in that calculation are determined not by variation in personality, or neural architecture, or cognitive processing styles, but by the situation. As Dennis Mueller wisely notes, “homo economicus … bears a close resemblance to Skinner’s rat.”23 The point is that broad swathes of the most prominent social science theories are based on the assumption that the human condition is monolithic and that any variations in human behavior are exclusively the product of the situation. The problem with this assertion is that it is simply not true.

Kindle Locations 3420-3422

At least among males, the more buff you are, the more likely you are to push strongly for positions that further your own economic interest (socialistic redistribution if you are poor; laissez-faire capitalism if you are rich).44

Kindle Locations 4567-4577

We believe that traits such as orientation toward out-groups, openness to new experiences, and a heightened negativity bias fit more naturally with social than economic issues, and we tend to agree with Congressman Weaver that economic positions are typically secondary. He points out that “ethnocentrics do not give a fig for individual rights” and sees the connection between conservatism and free market principles as a relatively recent development. Similarly, he does not view Marxism as connecting to the deeper forces shaping empathics and believes that accounts that do make this connection “totally ignore our biological origins.”55 The deep forces that shape political predispositions likely do not act directly on controversies over the role of government in society (after all, for how long in evolutionary time has the size of government been an issue?) or, relatedly, on controversies over the glories of the free market relative to the social welfare state. But if the issue becomes whether or not to open up a country’s social welfare system to recent or future out-group members (that is, immigrants), deeper forces quickly come into play. Economic issues are certainly crucial in modern politics—sometimes the most crucial—but this does not mean fault lines on these issues are as biologically rooted as social issues.

 

Political Appetitions

Appetition

Definitions
n. Desire; a longing for, or seeking after, something.

Etymologies
From Latin appetītiō (“a longing for or desire”).

Leibniz’s Philosophy of Mind
Stanford Encyclopedia of Philosophy

Appetitions are explained as “tendencies from one perception to another” (Principles of Nature and Grace, sec. 2 (1714)). Thus, we represent the world in our perceptions, and these representations are linked with an internal principle of activity and change (Monadology, sec. 15 (1714)) which, in its expression in appetitions, urges us ever onward in the constantly changing flow of mental life. More technically explained, the principle of action, that is, the primitive force which is our essence, expresses itself in momentary derivative forces involving two aspects: on the one hand, there is a representative aspect (perception), by which the many without are expressed within the one, the simple substance; on the other, there is a dynamical aspect, a tendency or striving towards new perceptions, which inclines us to change our representative state, to move towards new perceptions. (See Carlin 2004.)

Leibniz: truth, knowledge and metaphysics
Academic Dictionaries and Encyclopedias

This is the famous doctrine of unconscious perceptions. Here it is helpful to recall Leibniz’s hierarchical arrangement of monads. All monads perceive, but they differ vastly in terms of the quality of their perceptions. Human minds or spirits are distinguished not only by reason but also by ‘apperception’, which means consciousness or perhaps even self-consciousness. But though Leibniz holds that human minds are set apart from lower monads by their capacity for (self-)conscious awareness, he further believes that they also have unconscious or little perceptions (petites perceptions); such perceptions are little because they are low in intensity. Not merely do large stretches of our mental life consist wholly in little perceptions, but even conscious mental states are composed of such perceptions. The doctrine of unconscious perceptions is perhaps Leibniz’s principal innovation in psychology, and it is of course profoundly anti-Cartesian in its implications. For Descartes subscribes to the view that the mind is transparent to itself; he is explicit that there is nothing in the mind of which we are not conscious.80 In the New Essays on Human Understanding, his reply to Locke, Leibniz remarks that there are ‘thousands of indications’ in favour of unconscious perceptions.81

Predisposed: Liberals, Conservatives, and the Biology of Political Differences
By John R. Hibbing, Kevin B. Smith, and John R. Alford
Kindle Locations 429-488

People are not fully conscious of their predispositions. Gottfried Leibniz, a seventeenth-century mathematician and scientist, called them “appetitions” and argued that, though unconscious, appetitions drive human actions. His ideas so troubled Descartes-addled Enlightenment minds that they were not published until well after Leibniz’s death. Even then, they were not taken seriously for a long time. Recent science, though, is fully on board with Leibniz’s ideas and is providing ever-increasing evidence that people grossly overestimate the role in their decisions of rational, conscious thought, just as they grossly overestimate the extent to which sensory input is objective.

Neuroscientist David Eagleman goes so far as to claim that “the brain is properly thought of as a mostly closed system that runs on its own internally generated activity … internal data is not generated by external sensory data but merely modulated by it.”14 Noting that people often do things because of forces of which they are not aware and then produce a bogus reason for these actions after the fact, Steven Pinker refers to the portion of the brain involved in constructing this post hoc narrative as the “baloney generator.”15 The baloney generator is so effective that people believe they know the reasons for their actions and beliefs even when these reasons are inaccurate and patently untrue.16

Need examples of physiology affecting attitudes and behavior, even when people think they are being rational? Consider this: Job applicant resumes reviewed on heavy clipboards are judged more worthy than identical resumes on lighter clipboards; holding a warm or hot drink can influence whether opinions of other people are positive or negative; when people reach out to pick up an orange while smelling strawberries they unwittingly spread their fingers less widely—as if they were picking up a strawberry rather than an orange.17 People sitting in a messy, smelly room tend to make harsher moral judgments than those who are in a neutral room; disgusting ambient odors also increase expressed dislike of gay men.18 Judges’ sentencing practices are measurably more lenient when they are fresh and haven’t just dealt with a string of prior cases.19 Sitting on a hard, uncomfortable chair leads people to be less flexible in their stances than if they are seated on a soft, comfortable chair, and people reminded of physical cleansing, perhaps by being located near a hand sanitizer, are more likely to render stern judgments than those who were not given such a reminder.20 People even can be made to change their moral judgments as a result of hypnotic suggestion.21

In all these cases the baloney generator can produce a convincing case that the pertinent decision was made on the merits rather than as a result of irrelevant factors. People actively deny that a chunky clipboard has anything to do with their assessment of job applicants or that a funky odor has anything to do with their moral judgments. Judges certainly refuse to believe that the length of time since their last break has anything to do with their sentencing decisions; after all, they are meting out objective justice. Leibniz was right, though, and the baloney generator is full of it. The way we respond—biologically, physiologically, and in many cases unwittingly—to our environments influences attitudes and behavior. People much prefer to believe, however, that their decisions and opinions are rational rather than rationalized.

This desire to believe we are rational is certainly in effect when it comes to politics, where an unwillingness to acknowledge the role of extraneous forces of which we may not even be aware is especially strong. Many pretend that politics is a product of citizens taking their civic obligations seriously, sifting through political messages and information, and then carefully and deliberately considering the candidates and issue positions before making a consciously informed decision. Doubtful. In truth, people’s political judgments are affected by all kinds of factors they assume to be wholly irrelevant.

Compared to people (not just judges) with full stomachs, those who have not eaten for several hours are more sympathetic to the plight of welfare recipients.22 Americans whose polling place happens to be a church are more likely to vote for right-of-center candidates and ideas than those whose polling place is a public school.23 People are more likely to accept the realities of global warming if their air conditioning is broken.24 Italians insisting they were neutral in the lead-up to a referendum on expanding a U.S. military base, but who implicitly associated pictures of the base with negative terms, were more likely to vote against the referendum; in other words, people who genuinely believed themselves to be undecided were not.25 People shown a cartoon happy face for just a few milliseconds (too quick to register consciously) list fewer arguments against immigration than those individuals who were shown a frowning cartoon face.26 Political views are influenced not only by forces believed to be irrelevant but by forces that have not entered into conscious awareness. People think they know the reasons they vote for the candidates they do or espouse particular political positions or beliefs, but there is at least a slice of baloney in that thinking.

Responses to political stimuli are animated by emotional and not always conscious bodily processes. Political scientist Milt Lodge studies “hot cognition” or “automaticity.” His research shows that people tag familiar objects and concepts with an emotional response and that political stimuli such as a picture of Sarah Palin or the word “Obamacare” are particularly likely to generate emotional or affective (and therefore physiologically detectable) responses. In fact, Lodge and his colleague Charles Taber claim that “all political leaders, groups, issues, symbols, and ideas previously thought about and evaluated in the past become affectively charged—positively or negatively.”27 Responses to a range of individual concepts and objects frequently become integrated in a network that can be thought of as the tangible manifestation of a broader political ideology.

The fact that extraneous forces that may not have crossed the threshold of awareness (sometimes called sub-threshold) shape political orientations and actions makes it possible for individual variation in nonpolitical variables to affect politics. If hotter ambient temperatures in a room increase acceptance of global warming, maybe people whose internal thermostats incline them to feeling hot are also more likely to be accepting of global warming. Likewise, sensitivity to clutter and disorder, to smell, to disgust, and to threats becomes potentially relevant to political views. Since elements of these sensitivities often are outside of conscious awareness, it becomes possible that political views are shaped by psychological and physiological patterns.

To Unfurl the Flag of Liberalism

I have a conjecture about liberalism and conservatism. My speculation is about the psychological side of things, moreso than the political.

As I’ve often pointed out, liberals are prone to conservative-mindedness when under conditions of social stress or cognitive overload (here is the most recent post on the topic). But not all liberals seem to be prone to this and maybe under normal conditions most liberals aren’t prone to it.

I’m not sure. I just know there is something in liberal psychology that makes this a very real possibility. Examples of it can be seen in politics all the time, especially during times like these when conditions are far from perfect for the liberal predisposition. Then again, when are conditions ever perfect for, or even much conducive to, the liberal predisposition?

Liberals rarely if ever get the opportunity to fully be themselves, to let their liberal flag wave fully unfurled.

My thoughts relate to the issue of fear. Some studies have shown that conservatives have a larger amygdala, the part of the brain that deals with fear. What I was wondering is whether there are different kinds of fear, not all of them labeled as fear as such. Some of these kinds of ‘fear’ might be more relevant to the liberal mindset.

Let me use an example to clarify one possibility. There is often an inverse relationship between homicide and suicide rates. It has been theorized that this depends on whether anger and aggression are turned outward or inward. Similarly, maybe there is a difference between fear turned outward and fear turned inward, the latter experienced as anxiety or in other ways.

I suffer from social anxiety, though the emphasis is more on the anxiety part. There is a lot of fear involved, but it is very internalized. I don’t project my fears onto outside factors or people so much, at least not in any specific way. I don’t fear other countries, cultures, ethnicities, religions, etc. I’m a typical liberal in that sense. At the same time, I have tons of internalized fears that express as anxieties, doubts and guilt.

When I look at conservative-minded liberals, what I see is liberalism turning on itself. Such people seem to let their doubts about liberalism get the better of them. That is one of the things liberals seem really good at: doubt. It can be quite undermining and self-destructive, especially in movement politics and party politics, but also on a personal level.

Liberals don’t have the kind of righteous certainty and proud confidence that is more common among conservatives. Liberals not only have endless doubts, but we’re talented at rationalizing our doubts. We have as many good reasons to doubt as conservatives have to believe. Liberals tend to approach things more indirectly, like a fox circling around and around, backtracking, and then circling around some more. This hedging-your-bets mentality has its benefits in that it moderates extremes and allows for a carefulness that dampens arrogance and zeal. But it is also a stumbling block.

Many liberals seem afraid of being caught up in radicalism or even getting called radical. Liberals are sensitive. We don’t want our feelings hurt and we don’t want to hurt the feelings of others. We just want everyone to get along. Direct confrontation seems dangerous, and maybe for good reason. Conservatives do seem better at winning that game. So, why should liberals play into their own weakness?

I’m wondering about this because I want a liberalism that can win, not simply not lose. I want a fighting liberalism of the variety expressed by Thomas Paine and Martin Luther King Jr., a liberalism not afraid of a few bruises or hurt feelings. Certain liberals do express liberalism that strongly at times, but it sure is rare.

Why is that? Is it just fear? Or is there something else going on here?