Verbal Behavior

There is a somewhat interesting discussion of the friendship between B.F. Skinner and W.V.O. Quine. The piece explores their shared interests and possible influences on one another. It’s not exactly an area of personal interest, but it got me thinking about Julian Jaynes.

Skinner is famous for his behaviorist research. When behaviorism is mentioned, what immediately comes to mind for most people is Pavlov’s dog. But behaviorism wasn’t limited to animals and simple responses to stimuli. Skinner developed his theory toward verbal behavior as well. As Michael Karson explains,

“Skinner called his behaviorism “radical,” (i.e., thorough or complete) because he rejected then-behaviorism’s lack of interest in private events. Just as Galileo insisted that the laws of physics would apply in the sky just as much as on the ground, Skinner insisted that the laws of psychology would apply just as much to the psychologist’s inner life as to the rat’s observable life.

“Consciousness has nothing to do with the so-called and now-solved philosophical problem of mind-body duality, or in current terms, how the physical brain can give rise to immaterial thought. The answer to this pseudo-problem is that even though thought seems to be immaterial, it is not. Thought is no more immaterial than sound, light, or odor. Even educated people used to believe, a long time ago, that these things were immaterial, but now we know that sound requires a material medium to transmit waves, light is made up of photons, and odor consists of molecules. Thus, hearing, seeing, and smelling are not immaterial activities, and there is nothing in so-called consciousness besides hearing, seeing, and smelling (and tasting and feeling). Once you learn how to see and hear things that are there, you can also see and hear things that are not there, just as you can kick a ball that is not there once you have learned to kick a ball that is there. Engaging in the behavior of seeing and hearing things that are not there is called imagination. Its survival value is obvious, since it allows trial and error learning in the safe space of imagination. There is nothing in so-called consciousness that is not some version of the five senses operating on their own. Once you have learned to hear words spoken in a way that makes sense, you can have thoughts; thinking is hearing yourself make language; it is verbal behavior and nothing more. It’s not private speech, as once was believed; thinking is private hearing.”

It’s amazing how much this resonates with Jaynes’ bicameral theory. This maybe shouldn’t be surprising. After all, Jaynes was trained in behaviorism and early on did animal research. He was mentored by the behaviorist Frank A. Beach and was friends with Edwin Boring, who wrote a book about consciousness in relation to behaviorism. Reading about Skinner’s ideas about verbal behavior, I was reminded of Jaynes’ view of authorization as it relates to linguistic commands and how they become internalized to form an interiorized mind-space (i.e., Jaynesian consciousness).

I’m not the only person to think along these lines. On Reddit, someone wrote: “It is possible that before there were verbal communities that reinforced the basic verbal operants in full, people didn’t have complete “thinking” and really ran on operant auto-pilot since they didn’t have a full covert verbal repertoire and internal reinforcement/shaping process for verbal responses covert or overt, but this would be aeons before 2-3 kya. Wonder if Jaynes ever encountered Skinner’s “Verbal Behavior”…” Jaynes only references Skinner once in his book on bicameralism and consciousness. But he discusses behaviorism in general to some extent.

In the introduction, he describes behaviorism in this way: “From the outside, this revolt against consciousness seemed to storm the ancient citadels of human thought and set its arrogant banners up in one university after another. But having once been a part of its major school, I confess it was not really what it seemed. Off the printed page, behaviorism was only a refusal to talk about consciousness. Nobody really believed he was not conscious. And there was a very real hypocrisy abroad, as those interested in its problems were forcibly excluded from academic psychology, as text after text tried to smother the unwanted problem from student view. In essence, behaviorism was a method, not the theory that it tried to be. And as a method, it exorcised old ghosts. It gave psychology a thorough house cleaning. And now the closets have been swept out and the cupboards washed and aired, and we are ready to examine the problem again.” As dissatisfying as animal research was for Jaynes, it nonetheless set the stage for deeper questioning by way of a broader approach. It made possible new understanding.

Like Skinner, he wanted to take the next step, shifting from behavior to experience. Even their strategies for accomplishing this appear to have been similar. Sensory experience itself becomes internalized, according to both of their theories. For Jaynes, perception of external space becomes the metaphorical model for a sense of internal space. When Karson says of Skinner’s view that “thinking is hearing yourself make language,” that seems close to Jaynes’ discussion of hearing voices as it develops into an ‘I’ and a ‘me’, the sense of identity split into subject and object that he asserted was required for one to hear one’s own thoughts.

I don’t know Skinner’s thinking in detail or how it changed over time. He too pushed beyond the bounds of behavioral research. It’s not clear that Jaynes ever acknowledged this commonality. In his 1990 afterword to his book, Jaynes makes his one mention of Skinner without pointing out Skinner’s work on verbal behavior:

“This conclusion is incorrect. Self-awareness usually means the consciousness of our own persona over time, a sense of who we are, our hopes and fears, as we daydream about ourselves in relation to others. We do not see our conscious selves in mirrors, even though that image may become the emblem of the self in many cases. The chimpanzees in this experiment and the two-year old child learned a point-to-point relation between a mirror image and the body, wonderful as that is. Rubbing a spot noticed in the mirror is not essentially different from rubbing a spot noticed on the body without a mirror. The animal is not shown to be imagining himself anywhere else, or thinking of his life over time, or introspecting in any sense — all signs of a conscious life.

“This less interesting, more primitive interpretation was made even clearer by an ingenious experiment done in Skinner’s laboratory (Epstein, 1981). Essentially the same paradigm was followed with pigeons, except that it required a series of specific trainings with the mirror, whereas the chimpanzee or child in the earlier experiments was, of course, self-trained. But after about fifteen hours of such training when the contingencies were carefully controlled, it was found that a pigeon also could use a mirror to locate a blue spot on its body which it could not see directly, though it had never been explicitly trained to do so. I do not think that a pigeon because it can be so trained has a self-concept.”

Jaynes was making the simple, if oft overlooked, point that perception of body is not the same thing as consciousness of mind. A behavioral response to one’s own body isn’t fundamentally different from a behavioral response to anything else. Behavioral responses are found in every species. This isn’t helpful in exploring consciousness itself. Skinner too, it seems, wanted to get beyond this level of basic behavioral research. Interestingly, without any mention of Skinner, Jaynes does use Skinner’s exact phrase in speaking about the unconscious learning of ‘verbal behavior’ (Book One, Chapter 1):

“Another simple experiment can demonstrate this. Ask someone to sit opposite you and to say words, as many words as he can think of, pausing two or three seconds after each of them for you to write them down. If after every plural noun (or adjective, or abstract word, or whatever you choose) you say “good” or “right” as you write it down, or simply “mmm-hmm” or smile, or repeat the plural word pleasantly, the frequency of plural nouns (or whatever) will increase significantly as he goes on saying words. The important thing here is that the subject is not aware that he is learning anything at all. [13] He is not conscious that he is trying to find a way to make you increase your encouraging remarks, or even of his solution to that problem. Every day, in all our conversations, we are constantly training and being trained by each other in this manner, and yet we are never conscious of it.”

This is just a passing comment, one example among many, and he states that “Such unconscious learning is not confined to verbal behavior.” He doesn’t further explore language in this immediate section or use the phrase ‘verbal behavior’ again in any other section, although the notion of verbal behavior is central to the entire book. But a decade after the original publication of his book, Jaynes wrote a paper where he does talk about Skinner’s ideas about language:

“One needs language for consciousness. We think consciousness is learned by children between two and a half and five or six years in what we can call the verbal surround, or the verbal community as B.F. Skinner calls it. It is an aspect of learning to speak. Mental words are out there as part of the culture and part of the family. A child fits himself into these words and uses them even before he knows the meaning of them. A mother is constantly instilling the seeds of consciousness in a two- and three-year-old, telling the child to stop and think, asking him “What shall we do today?” or “Do you remember when we did such and such or were somewhere?” And all this while metaphor and analogy are hard at work. There are many different ways that different children come to this, but indeed I would say that children without some kind of language are not conscious.”
(Jaynes, J. 1986. “Consciousness and the Voices of the Mind.” Canadian Psychology, 27, 128–148.)

I don’t have access to that paper. That quote comes from an article by John E. Limber: “Language and consciousness: Jaynes’s “Preposterous idea” reconsidered.” It is found in Reflections on the Dawn of Consciousness edited by Marcel Kuijsten (pp. 169-202).

Anyway, the point Jaynes makes is that language is required for consciousness as an inner sense of self because language is required to hear ourselves think. So verbal behavior is a necessary, if not sufficient, condition for the emergence of consciousness as we know it. As long as verbal behavior remains an external event, conscious experience won’t follow. Humans have to learn to hear themselves as they hear others, to split themselves into a speaker and a listener.

This relates to what makes possible the differentiation between hearing a voice being spoken by someone in the external world and hearing a voice as a memory of someone in one’s internal mind-space. Without this distinction, imagination isn’t possible, for anything imagined would become a hallucination, internal and external hearing being conflated or rather never separated. Jaynes proposes this is why ancient texts regularly describe people as hearing voices of deities and deified kings, spirits and ancestors. The bicameral person, according to the theory, hears their own voice without being conscious that it is their own thought.

All of that emerges from those early studies of animal behavior. Behaviorism plays a key role simply in placing the emphasis on behavior. From there, one can come to the insight that consciousness is a neurocognitive behavior modeled on physical and verbal behavior. The self is a metaphor built on embodied experience in the world. This relates to many similar views, such as that humans learn a theory of mind within themselves by first developing a theory of mind in perceiving others. This goes along with attention schema and the attribution of consciousness. And some have pointed out what is called the double subject fallacy, a hidden form of dualism that infects neuroscience. However described, it gets at the same issue.

It all comes down to our being both social animals and inhabitants of the world. Human development begins with a focus outward, culture and language determining what kind of identity forms. How we learn to behave is who we become.

Vestiges of an Earlier Mentality: Different Psychologies

“The Self as Interiorized Social Relations: Applying a Jaynesian Approach to Problems of Agency and Volition”
By Brian J. McVeigh

(II) Vestiges of an Earlier Mentality: Different Psychologies

If what Jaynes has proposed about bicamerality is correct, we should expect to find remnants of this extinct mentality. In any case, an examination of the ethnopsychologies of other societies should at least challenge our assumptions. What kinds of metaphors do they employ to discuss the self? Where is agency localized? To what extent do they even “psychologize” the individual, positing an “interior space” within the person? If agency is a socio-historical construction (rather than a bio-evolutionary product), we should expect some cultural variability in how it is conceived. At the same time, we should also expect certain parameters within which different theories of agency are built.

Ethnographies are filled with descriptions of very different psychologies. For example, about the Maori, Jean Smith writes that

it would appear that generally it was not the “self” which encompassed the experience, but experience which encompassed the “self” … Because the “self” was not in control of experience, a man’s experience was not felt to be integral to him; it happened in him but was not of him. A Maori individual was not so much the experiencer of his experience as the observer of it. 22

Furthermore, “bodily organs were endowed with independent volition.” 23 Renato Rosaldo states that the Ilongots of the Philippines rarely concern themselves with what we refer to as an “inner self” and see no major differences between public presentation and private feeling. 24

Perhaps the most intriguing picture of just how radically different mental concepts can be is found in anthropologist Maurice Leenhardt’s book Do Kamo, about the Canaque of New Caledonia, who are “unaware” of their own existence: the “psychic or psychological aspect of man’s actions are events in nature. The Canaque sees them as outside of himself, as externalized. He handles his existence similarly: he places it in an object — a yam, for instance — and through the yam he gains some knowledge of his existence, by identifying himself with it.” 25

Speaking of the Dinka, anthropologist Godfrey Lienhardt writes that “the man is the object acted upon,” and “we often find a reversal of European expressions which assume the human self, or mind, as subject in relation to what happens to it.” 26 Concerning the mind itself,

The Dinka have no conception which at all closely corresponds to our popular modern conception of the “mind,” as mediating and, as it were, storing up the experiences of the self. There is for them no such interior entity to appear, on reflection, to stand between the experiencing self at any given moment and what is or has been an exterior influence upon the self. So it seems that what we should call in some cases the memories of experiences, and regard therefore as in some way intrinsic and interior to the remembering person and modified in their effect upon him by that interiority, appear to the Dinka as exteriority acting upon him, as were the sources from which they derived. 27

The above-mentioned ethnographic examples may be interpreted as merely colorful descriptions, as exotic and poetic folk psychologies. Or, we may take a more literal view, and entertain the idea that these ethnopsychological accounts are vestiges of a distant past when individuals possessed radically different mentalities. For example, if it is possible to be a person lacking interiority in which a self moves about making conscious decisions, then we must at least entertain the idea that entire civilizations existed whose members had a radically different mentality. The notion of a “person without a self” is admittedly controversial and open to misinterpretation. Here allow me to stress that I am not suggesting that in today’s world there are groups of people whose mentality is distinct from our own. However, I am suggesting that remnants of an earlier mentality are evident in extant ethnopsychologies, including our own. 28

* * *

Text from:

Reflections on the Dawn of Consciousness:
Julian Jaynes’s Bicameral Mind Theory Revisited
Edited by Marcel Kuijsten
Chapter 7, Kindle Locations 3604-3636

See also:

Survival and Persistence of Bicameralism
Piraha and Bicameralism

“Lack of the historical sense is the traditional defect in all philosophers.”

Human, All Too Human: A Book for Free Spirits
by Friedrich Wilhelm Nietzsche

The Traditional Error of Philosophers.—All philosophers make the common mistake of taking contemporary man as their starting point and of trying, through an analysis of him, to reach a conclusion. “Man” involuntarily presents himself to them as an aeterna veritas as a passive element in every hurly-burly, as a fixed standard of things. Yet everything uttered by the philosopher on the subject of man is, in the last resort, nothing more than a piece of testimony concerning man during a very limited period of time. Lack of the historical sense is the traditional defect in all philosophers. Many innocently take man in his most childish state as fashioned through the influence of certain religious and even of certain political developments, as the permanent form under which man must be viewed. They will not learn that man has evolved, that the intellectual faculty itself is an evolution, whereas some philosophers make the whole cosmos out of this intellectual faculty. But everything essential in human evolution took place aeons ago, long before the four thousand years or so of which we know anything: during these man may not have changed very much. However, the philosopher ascribes “instinct” to contemporary man and assumes that this is one of the unalterable facts regarding man himself, and hence affords a clue to the understanding of the universe in general. The whole teleology is so planned that man during the last four thousand years shall be spoken of as a being existing from all eternity, and with reference to whom everything in the cosmos from its very inception is naturally ordered. Yet everything evolved: there are no eternal facts as there are no absolute truths. Accordingly, historical philosophising is henceforth indispensable, and with it honesty of judgment.

What Locke Lacked
by Louise Mabille

Locke is indeed a Colossus of modernity, but one whose twin projects of providing a concept of human understanding and political foundation undermine each other. The specificity of the experience of perception alone undermines the universality and uniformity necessary to create the subject required for a justifiable liberalism. Since mere physical perspective can generate so much difference, it is only to be expected that political differences would be even more glaring. However, no political order would ever come to pass without obliterating essential differences. The birth of liberalism was as violent as the Empire that would later be justified in its name, even if its political traces are not so obvious. To interpret is to see in a particular way, at the expense of all other possibilities of interpretation. Perspectives that do not fit are simply ignored, or as that other great resurrectionist of modernity, Freud, would concur, simply driven underground. We ourselves are the source of this interpretative injustice, or more correctly, our need for a world in which it is possible to live, is. To a certain extent, then, man is the measure of the world, but only his world. Man is thus a contingent measure and our measurements do not refer to an original, underlying reality. What we call reality is the result not only of our limited perspectives upon the world, but the interplay of those perspectives themselves. The liberal subject is thus a result of, and not a foundation for, the experience of reality. The subject is identified as origin of meaning only through a process of differentiation and reduction, a course through which the will is designated as a psychological property.

Locke takes the existence of the subject of free will – free to exercise political choice such as rising against a tyrant, choosing representatives, or deciding upon political direction – simply for granted. Furthermore, he seems to think that everyone should agree as to what the rules are according to which these events should happen. For him, the liberal subject underlying these choices is clearly fundamental and universal.

Locke’s philosophy of individualism posits the existence of a discrete and isolated individual, with private interests and rights, independent of his linguistic or socio-historical context. C. B. Macpherson identifies a distinctly possessive quality to Locke’s individualist ethic, notably in the way in which the individual is conceived as proprietor of his own personhood, possessing capacities such as self-reflection and free will. Freedom becomes associated with possession, which the Greeks would associate with slavery, and society conceived in terms of a collection of free and equal individuals who are related to each other through their means of achieving material success — which Nietzsche, too, would associate with slave morality. […]

There is a central tenet to John Locke’s thinking that, as conventional as it has become, remains a strange strategy. Like Thomas Hobbes, he justifies modern society by contrasting it with an original state of nature. For Hobbes, as we have seen, the state of nature is but a hypothesis, a conceptual tool in order to elucidate a point. For Locke, however, the state of nature is a very real historical event, although not a condition of a state of war. Man was social by nature, rational and free. Locke drew this inspiration from Richard Hooker’s Laws of Ecclesiastical Polity, notably from his idea that church government should be based upon human nature, and not the Bible, which, according to Hooker, told us nothing about human nature. The social contract is a means to escape from nature, friendlier though it be on the Lockean account. For Nietzsche, however, we have never made the escape: we are still holus-bolus in it: ‘being conscious is in no decisive sense the opposite of the instinctive – most of the philosopher’s conscious thinking is secretly directed and compelled into definite channels by his instincts. Behind all logic too, and its apparent autonomy there stand evaluations’ (BGE, 3). Locke makes a singular mistake in thinking the state of nature a distant event. In fact, Nietzsche tells us, we have never left it. We now only wield more sophisticated weapons, such as the guilty conscience […]

Truth originates when humans forget that they are ‘artistically creating subjects’ or products of law or stasis and begin to attach ‘invincible faith’ to their perceptions, thereby creating truth itself. For Nietzsche, the key to understanding the ethic of the concept, the ethic of representation, is conviction […]

Few convictions have proven to be as strong as the conviction of the existence of a fundamental subjectivity. For Nietzsche, it is an illusion, a bundle of drives loosely collected under the name of ‘subject’ —indeed, it is nothing but these drives, willing, and actions in themselves—and it cannot appear as anything else except through the seduction of language (and the fundamental errors of reason petrified in it), which understands and misunderstands all action as conditioned by something which causes actions, by a ‘Subject’ (GM I 13). Subjectivity is a form of linguistic reductionism, and when using language, ‘[w]e enter a realm of crude fetishism when we summon before consciousness the basic presuppositions of the metaphysics of language — in plain talk, the presuppositions of reason. Everywhere reason sees a doer and doing; it believes in will as the cause; it believes in the ego, in the ego as being, in the ego as substance, and it projects this faith in the ego-substance upon all things — only thereby does it first create the concept of ‘thing’ (TI, ‘Reason in Philosophy’ 5). As Nietzsche also states in WP 484, the habit of adding a doer to a deed is a Cartesian leftover that begs more questions than it solves. It is indeed nothing more than an inference according to habit: ‘There is activity, every activity requires an agent, consequently – (BGE, 17). Locke himself found the continuous existence of the self problematic, but did not go as far as Hume’s dissolution of the self into a number of ‘bundles’. After all, even if identity shifts occurred behind the scenes, he required a subject with enough unity to be able to enter into the Social Contract. This subject had to be something more than merely an ‘eternal grammatical blunder’ (D, 120), and willing had to be understood as something simple. 
For Nietzsche, it is ‘above all complicated, something that is a unit only as a word, a word in which the popular prejudice lurks, which has defeated the always inadequate caution of philosophers’ (BGE, 19).

Nietzsche’s critique of past philosophers
by Michael Lacewing

Nietzsche is questioning the very foundations of philosophy. To accept his claims means being a new kind of philosopher, one whose ‘taste and inclination’, whose values, are quite different. Throughout his philosophy, Nietzsche is concerned with origins, both psychological and historical. Much of philosophy is usually thought of as an a priori investigation. But if Nietzsche can show, as he thinks he can, that philosophical theories and arguments have a specific historical basis, then they are not, in fact, a priori. What is known a priori should not change from one historical era to the next, nor should it depend on someone’s psychology. Plato’s aim, the aim that defines much of philosophy, is to be able to give complete definitions of ideas – ‘what is justice?’, ‘what is knowledge?’. For Plato, we understand an idea when we have direct knowledge of the Form, which is unchanging and has no history. If our ideas have a history, then the philosophical project of trying to give definitions of our concepts, rather than histories, is radically mistaken. For example, in §186, Nietzsche argues that philosophers have consulted their ‘intuitions’ to try to justify this or that moral principle. But they have only been aware of their own morality, of which their ‘justifications’ are in fact only expressions. Morality and moral intuitions have a history, and are not a priori. There is no one definition of justice or good, and the ‘intuitions’ that we use to defend this or that theory are themselves as historical, as contentious as the theories we give – so they offer no real support. The usual ways philosophers discuss morality misunderstand morality from the very outset. The real issues of understanding morality only emerge when we look at the relation between this particular morality and that. There is no world of unchanging ideas, no truths beyond the truths of the world we experience, nothing that stands outside or beyond nature and history.

GENEALOGY AND PHILOSOPHY

Nietzsche develops a new way of philosophizing, which he calls a ‘morphology and evolutionary theory’ (§23), and later calls ‘genealogy’. (‘Morphology’ means the study of the forms something, e.g. morality, can take; ‘genealogy’ means the historical line of descent traced from an ancestor.) He aims to locate the historical origin of philosophical and religious ideas and show how they have changed over time to the present day. His investigation brings together history, psychology, the interpretation of concepts, and a keen sense of what it is like to live with particular ideas and values. In order to best understand which of our ideas and values are particular to us, not a priori or universal, we need to look at real alternatives. In order to understand these alternatives, we need to understand the psychology of the people who lived with them. And so Nietzsche argues that traditional ways of doing philosophy fail – our intuitions are not a reliable guide to the ‘truth’, to the ‘real’ nature of this or that idea or value. And not just our intuitions, but the arguments, and style of arguing, that philosophers have used are unreliable. Philosophy needs to become, or be informed by, genealogy. A lack of any historical sense, says Nietzsche, is the ‘hereditary defect’ of all philosophers.

MOTIVATIONAL ANALYSIS

Having long kept a strict eye on the philosophers, and having looked between their lines, I say to myself… most of a philosopher’s conscious thinking is secretly guided and channelled into particular tracks by his instincts. Behind all logic, too, and its apparent tyranny of movement there are value judgements, or to speak more clearly, physiological demands for the preservation of a particular kind of life. (§3) A person’s theoretical beliefs are best explained, Nietzsche thinks, by evaluative beliefs, particular interpretations of certain values, e.g. that goodness is this and the opposite of badness. These values are best explained as ‘physiological demands for the preservation of a particular kind of life’. Nietzsche holds that each person has a particular psychophysical constitution, formed by both heredity and culture. […] Different values, and different interpretations of these values, support different ways of life, and so people are instinctively drawn to particular values and ways of understanding them. On the basis of these interpretations of values, people come to hold particular philosophical views. §2 has given us an illustration of this: philosophers come to hold metaphysical beliefs about a transcendent world, the ‘true’ and ‘good’ world, because they cannot believe that truth and goodness could originate in the world of normal experience, which is full of illusion, error, and selfishness. Therefore, there ‘must’ be a pure, spiritual world and a spiritual part of human beings, which is the origin of truth and goodness. Philosophy and values But ‘must’ there be a transcendent world? Or is this just what the philosopher wants to be true? Every great philosophy, claims Nietzsche, is ‘the personal confession of its author’ (§6). The moral aims of a philosophy are the ‘seed’ from which the whole theory grows. 
Philosophers pretend that their opinions have been reached by ‘cold, pure, divinely unhampered dialectic’ when in fact, they are seeking reasons to support their pre-existing commitment to ‘a rarefied and abstract version of their heart’s desire’ (§5), viz. that there is a transcendent world, and that good and bad, true and false are opposites. Consider: Many philosophical systems are of doubtful coherence, e.g. how could there be Forms, and if there were, how could we know about them? Or again, in §11, Nietzsche asks ‘how are synthetic a priori judgments possible?’. The term ‘synthetic a priori’ was invented by Kant. According to Nietzsche, Kant says that such judgments are possible, because we have a ‘faculty’ that makes them possible. What kind of answer is this? Furthermore, no philosopher has ever been proved right (§25). Given the great difficulty of believing either in a transcendent world or in human cognitive abilities necessary to know about it, we should look elsewhere for an explanation of why someone would hold those beliefs. We can find an answer in their values. There is an interesting structural similarity between Nietzsche’s argument and Hume’s. Both argue that there is no rational explanation of many of our beliefs, and so they try to find the source of these beliefs outside or beyond reason. Hume appeals to imagination and the principle of ‘Custom’. Nietzsche appeals instead to motivation and ‘the bewitchment of language’ (see below). So Nietzsche argues that philosophy is not driven by a pure ‘will to truth’ (§1), to discover the truth whatever it may be. Instead, a philosophy interprets the world in terms of the philosopher’s values. For example, the Stoics argued that we should live ‘according to nature’ (§9). But they interpret nature by their own values, as an embodiment of rationality. They do not see the senselessness, the purposelessness, the indifference of nature to our lives […]

THE BEWITCHMENT OF LANGUAGE

We said above that Nietzsche criticizes past philosophers on two grounds. We have looked at the role of motivation; the second ground is the seduction of grammar. Nietzsche is concerned with the subject-predicate structure of language, and with it the notion of a ‘substance’ (picked out by the grammatical ‘subject’) to which we attribute ‘properties’ (identified by the predicate). This structure leads us into a mistaken metaphysics of ‘substances’.

In particular, Nietzsche is concerned with the grammar of ‘I’. We tend to think that ‘I’ refers to some thing, e.g. the soul. Descartes makes this mistake in his cogito – ‘I think’, he argues, refers to a substance engaged in an activity. But Nietzsche repeats the old objection that this is an illegitimate inference (§16) that rests on many unproven assumptions – that I am thinking, that some thing is thinking, that thinking is an activity (the result of a cause, viz. I), that an ‘I’ exists, that we know what it is to think. So the simple sentence ‘I think’ is misleading. In fact, ‘a thought comes when ‘it’ wants to, and not when ‘I’ want it to’ (§17). Even ‘there is thinking’ isn’t right: ‘even this ‘there’ contains an interpretation of the process and is not part of the process itself. People are concluding here according to grammatical habit’. But our language does not allow us just to say ‘thinking’ – this is not a whole sentence. We have to say ‘there is thinking’; so grammar constrains our understanding. Furthermore, Kant shows that rather than the ‘I’ being the basis of thinking, thinking is the basis out of which the appearance of an ‘I’ is created (§54).
Once we recognise that there is no soul in a traditional sense, no ‘substance’, something constant through change, something unitary and immortal, ‘the way is clear for new and refined versions of the hypothesis about the soul’ (§12), that it is mortal, that it is multiplicity rather than identical over time, even that it is a social construct and a society of drives.

Nietzsche makes a similar argument about the will (§19). Because we have this one word ‘will’, we think that what it refers to must also be one thing. But the act of willing is highly complicated. First, there is an emotion of command, for willing is commanding oneself to do something, and with it a feeling of superiority over that which obeys. Second, there is the expectation that the mere commanding on its own is enough for the action to follow, which increases our sense of power. Third, there is obedience to the command, from which we also derive pleasure. But we ignore the feeling of compulsion, identifying the ‘I’ with the commanding ‘will’.

Nietzsche links the seduction of language to the issue of motivation in §20, arguing that ‘the spell of certain grammatical functions is the spell of physiological value judgements’. So even the grammatical structure of language originates in our instincts, different grammars contributing to the creation of favourable conditions for different types of life. So what values are served by these notions of the ‘I’ and the ‘will’? The ‘I’ relates to the idea that we have a soul, which participates in a transcendent world. It functions in support of the ascetic ideal. The ‘will’, and in particular our inherited conception of ‘free will’, serves a particular moral aim […]

Hume and Nietzsche: Moral Psychology (short essay)
by epictetus_rex

1. Metaphilosophical Motivation

Both Hume and Nietzsche advocate a kind of naturalism. This is a weak naturalism, for it does not seek to give science authority over philosophical inquiry, nor does it commit itself to a specific ontological or metaphysical picture. Rather, it seeks to (a) place the human mind firmly in the realm of nature, as subject to the same mechanisms that drive all other natural events, and (b) investigate the world in a way that is roughly congruent with our best current conception(s) of nature […]

Furthermore, the motivation for this general position is common to both thinkers. Hume and Nietzsche saw the old rationalist/dualist philosophies as both absurd and harmful: such systems were committed to extravagant and contradictory metaphysical claims which hinder philosophical progress. Moreover, they alienated humanity from its position in nature—an effect Hume referred to as “anxiety”—and underpinned religious or “monkish” practices which greatly accentuated this alienation. Both Nietzsche and Hume believe quite strongly that coming to see ourselves as we really are will banish these bugbears from human life.

To this end, both thinkers ask us to engage in honest, realistic psychology. “Psychology is once more the path to the fundamental problems,” writes Nietzsche (BGE 23), and Hume agrees:

“the only expedient, from which we can hope for success in our philosophical researches, is to leave the tedious lingering method, which we have hitherto followed, and instead of taking now and then a castle or village on the frontier, to march up directly to the capital or center of these sciences, to human nature itself.” (T Intro)

2. Selfhood

Hume and Nietzsche militate against the notion of a unified self, both at-a-time and, a fortiori, over time.

Hume’s quest for a Newtonian “science of the mind” led him to classify all mental events as either impressions (sensory) or ideas (copies of sensory impressions, distinguished from the former by diminished vivacity or force). The self, or ego, as he says, is just “a kind of theatre, where several perceptions successively make their appearance; pass, re-pass, glide away, and mingle in an infinite variety of postures and situations. There is properly no simplicity in it at one time, nor identity in different; whatever natural propension we may have to imagine that simplicity and identity.” (Treatise 4.6) […]

For Nietzsche, the experience of willing lies in a certain kind of pleasure, a feeling of self-mastery and increase of power that comes with all success. This experience leads us to mistakenly posit a simple, unitary cause, the ego. (BGE 19)

The similarities here are manifest: our minds do not have any intrinsic unity to which the term “self” can properly refer; rather, they are collections or “bundles” of events (drives) which may align with or struggle against one another in a myriad of ways. Both thinkers use political models to describe what a person really is. Hume tells us we should “more properly compare the soul to a republic or commonwealth, in which the several members [impressions and ideas] are united by ties of government and subordination, and give rise to persons, who propagate the same republic in the incessant change of its parts” (T 261).

3. Action and The Will

Nietzsche and Hume attack the old Platonic conception of a “free will” in lock-step with one another. This picture, roughly, involves a rational intellect which sits above the appetites and ultimately chooses which appetites will express themselves in action. This will is usually not considered to be part of the natural/empirical order, and it is this consequence which irks both Hume and Nietzsche, who offer two seamlessly interchangeable refutations […]

Since we are nothing above and beyond events, there is nothing for this “free will” to be: it is a causa sui, “a sort of rape and perversion of logic… the extravagant pride of man has managed to entangle itself profoundly and frightfully with just this nonsense” (BGE 21).

When they discover an erroneous or empty concept such as “free will” or “the self”, Nietzsche and Hume engage in a sort of error-theorizing which is structurally the same. Peter Kail (2006) has called this a “projective explanation”, whereby belief in those concepts is “explained by appeal to independently intelligible features of psychology”, rather than by reference to the way the world really is.

The Philosophy of Mind
INSTRUCTOR: Larry Hauser
Chapter 7: Egos, bundles, and multiple selves

  • Who dat?  “I”
    • Locke: “something, I know not what”
    • Hume: the no-self view … “bundle theory”
    • Kant’s transcendental ego: a formal (nonempirical) condition of thought that the “I” must accompany every perception.
      • Intentional mental state: I think that snow is white.
        • to think: a relation between
          • a subject = “I”
          • a propositional content thought =  snow is white
      • Sensations: I feel the coldness of the snow.
        • to feel: a relation between
          • a subject = “I”
          • a quale = the cold-feeling
    • Friedrich Nietzsche
      • A thought comes when “it” will and not when “I” will. Thus it is a falsification of the evidence to say that the subject “I” conditions the predicate “think.”
      • It is thought, to be sure, but that this “it” should be that old famous “I” is, to put it mildly, only a supposition, an assertion. Above all it is not an “immediate certainty.” … Our conclusion is here formulated out of our grammatical custom: “Thinking is an activity; every activity presumes something which is active, hence ….” 
    • Lichtenberg: “it’s thinking” a la “it’s raining”
      • a mere grammatical requirement
      • no proof of a thinking self

[…]

  • Ego vs. bundle theories (Derek Parfit (1987))
    • Ego: “there really is some kind of continuous self that is the subject of my experiences, that makes decisions, and so on.” (95)
      • Religions: Christianity, Islam, Hinduism
      • Philosophers: Descartes, Locke, Kant & many others (the majority view)
    • Bundle: “there is no underlying continuous and unitary self.” (95)
      • Religion: Buddhism
      • Philosophers: Hume, Nietzsche, Lichtenberg, Wittgenstein, Kripke(?), Parfit, Dennett {a stellar minority}
  • Hume v. Reid
    • David Hume: For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure.  I never can catch myself at any time without a perception, and never can observe anything but the perception.  (Hume 1739, Treatise I, VI, iv)
    • Thomas Reid: I am not thought, I am not action, I am not feeling: I am something which thinks and acts and feels. (1785)

“Beyond that, there is only awe.”

“What is the meaning of life?” This question has no answer except in the history of how it came to be asked. There is no answer because words have meaning, not life or persons or the universe itself. Our search for certainty rests in our attempts at understanding the history of all individual selves and all civilizations. Beyond that, there is only awe.
~ Julian Jaynes, 1988, Life Magazine

That is always a nice quote. Jaynes never seemed like an ideologue about his own speculations. In his controversial book, published more than a decade earlier (1976), he titled his introduction “The Problem of Consciousness”. That is what frames his thought: confronting a problem. The whole issue of consciousness remains problematic to this day and likely will be for a long time. After a lengthy analysis of complex issues, he concludes his book with some humbling thoughts:

For what is the nature of this blessing of certainty that science so devoutly demands in its very Jacob-like wrestling with nature? Why should we demand that the universe make itself clear to us? Why do we care?

To be sure, a part of the impulse to science is simple curiosity, to hold the unheld and watch the unwatched. We are all children in the unknown.

Following that, he makes a plea for understanding. Not just understanding of the mind but also of experience. It is a desire to grasp what makes us human, the common impulses that bind us, underlying both religion and science. There is a tender concern being given voice, probably shaped and inspired by his younger self having pored over his deceased father’s Unitarian sermons.

As individuals we are at the mercies of our own collective imperatives. We see over our everyday attentions, our gardens and politics, and children, into the forms of our culture darkly. And our culture is our history. In our attempts to communicate or to persuade or simply interest others, we are using and moving about through cultural models among whose differences we may select, but from whose totality we cannot escape. And it is in this sense of the forms of appeal, of begetting hope or interest or appreciation or praise for ourselves or for our ideas, that our communications are shaped into these historical patterns, these grooves of persuasion which are even in the act of communication an inherent part of what is communicated. And this essay is no exception.

That humility feels genuine. His book was far beyond mere scholarship. It was an expression of decades of questioning and self-questioning, about what it means to be human and what it might have meant for others throughout the millennia.

He never got around to writing another book on the topic, despite his stated plans to do so. But during the last decade of his life, he wrote an afterword to his original work. It was placed in the 1990 edition, fourteen years after the original publication. He had faced much criticism, and one senses a tired frustration in those last years. Elsewhere, he complained about the expectation to explain himself and make himself understood to people who, for whatever reason, didn’t understand. Still, he realized that was the nature of his job as an academic scholar working at a major university. In the afterword, he wrote:

A favorite practice of some professional intellectuals when at first faced with a theory as large as the one I have presented is to search for that loose thread which, when pulled, will unravel all the rest. And rightly so. It is part of the discipline of scientific thinking. In any work covering so much of the terrain of human nature and history, hustling into territories jealously guarded by myriad aggressive specialists, there are bound to be such errancies, sometimes of fact but I fear more often of tone. But that the knitting of this book is such that a tug on such a bad stitch will unravel all the rest is more of a hope on the part of the orthodox than a fact in the scientific pursuit of truth. The book is not a single hypothesis.

Interestingly, Jaynes doesn’t present the bicameral mind as an overarching context for the hypotheses he lists. In fact, it is just one among several hypotheses, and not even the first to be mentioned. That shouldn’t be surprising, since decades of his thought and research, including laboratory studies on animal behavior, preceded the formulation of the bicameral hypothesis. Here are the four hypotheses:

  1. Consciousness is based on language.
  2. The bicameral mind.
  3. The dating.
  4. The double brain.

He states: “I wish to emphasize that these four hypotheses are separable. The last, for example, could be mistaken (at least in the simplified version I have presented) and the others true. The two hemispheres of the brain are not the bicameral mind but its present neurological model. The bicameral mind is an ancient mentality demonstrated in the literature and artifacts of antiquity.” Each hypothesis is connected to the others but must be dealt with separately. The central element of his project is consciousness, as that is the key problem. And as problems go, it is a doozy. Calling it a problem is like calling the moon a chunk of rock and the sun a warm fire.

Related to these hypotheses, earlier in his book, Jaynes proposes a useful framework. He calls it the General Bicameral Paradigm. “By this phrase,” he explains, “I mean an hypothesized structure behind a large class of phenomena of diminished consciousness which I am interpreting as partial holdovers from our earlier mentality.” There are four components:

  1. “the collective cognitive imperative, or belief system, a culturally agreed-on expectancy or prescription which defines the particular form of a phenomenon and the roles to be acted out within that form;”
  2. “an induction or formally ritualized procedure whose function is the narrowing of consciousness by focusing attention on a small range of preoccupations;”
  3. “the trance itself, a response to both the preceding, characterized by a lessening of consciousness or its loss, the diminishing of the analog or its loss, resulting in a role that is accepted, tolerated, or encouraged by the group; and”
  4. “the archaic authorization to which the trance is directed or related to, usually a god, but sometimes a person who is accepted by the individual and his culture as an authority over the individual, and who by the collective cognitive imperative is prescribed to be responsible for controlling the trance state.”

Jaynes cautions the reader not to assume that these components are “to be considered as a temporal succession necessarily, although the induction and trance usually do follow each other. But the cognitive imperative and the archaic authorization pervade the whole thing. Moreover, there is a kind of balance or summation among these elements, such that when one of them is weak the others must be strong for the phenomena to occur. Thus, as through time, particularly in the millennium following the beginning of consciousness, the collective cognitive imperative becomes weaker (that is, the general population tends toward skepticism about the archaic authorization), we find a rising emphasis on and complication of the induction procedures, as well as the trance state itself becoming more profound.”

This general bicameral paradigm is partly based on the insights he gained from studying ancient societies. But ultimately it can be considered separately from that. All you have to understand is that these are a basic set of cognitive abilities and tendencies that have been with humanity for a long time. These are the vestiges of human evolution and societal development. They can be combined and expressed in multiple ways. Our present society is just one of many possible manifestations. Human nature is complex and human potential is immense, and so diversity is to be expected among human neurocognition, behavior, and culture.

An important example of the general bicameral paradigm is hypnosis. It isn’t just an amusing trick done for magic shows. Hypnosis shows something profoundly odd, disturbing even, about the human mind. Also, it goes far beyond the individual, for it is about how humans relate. It demonstrates the power of authority figures, in whatever form they take, and indicates the significance of what Jaynes calls authorization. By the way, this leads down the dark pathways of authoritarianism, brainwashing, propaganda, and punishment. On the last of these, Jaynes writes:

If we can regard punishment in childhood as a way of instilling an enhanced relationship to authority, hence training some of those neurological relationships that were once the bicameral mind, we might expect this to increase hypnotic susceptibility. And this is true. Careful studies show that those who have experienced severe punishment in childhood and come from a disciplined home are more easily hypnotized, while those who were rarely punished or not punished at all tend to be less susceptible to hypnosis.

He discusses the history of hypnosis beginning with Mesmer. In this, he shows how metaphor took different form over time. And, accordingly, it altered shared experience and behavior.

Now it is critical here to realize and to understand what we might call the paraphrandic changes which were going on in the people involved, due to these metaphors. A paraphrand, you will remember, is the projection into a metaphrand of the associations or paraphiers of a metaphier. The metaphrand here is the influences between people. The metaphiers, or what these influences are being compared to, are the inexorable forces of gravitation, magnetism, and electricity. And their paraphiers of absolute compulsions between heavenly bodies, of unstoppable currents from masses of Leyden jars, or of irresistible oceanic tides of magnetism, all these projected back into the metaphrand of interpersonal relationships, actually changing them, changing the psychological nature of the persons involved, immersing them in a sea of uncontrollable control that emanated from the ‘magnetic fluids’ in the doctor’s body, or in objects which had ‘absorbed’ such from him.

It is at least conceivable that what Mesmer was discovering was a different kind of mentality that, given a proper locale, a special education in childhood, a surrounding belief system, and isolation from the rest of us, possibly could have sustained itself as a society not based on ordinary consciousness, where metaphors of energy and irresistible control would assume some of the functions of consciousness.

How is this even possible? As I have mentioned already, I think Mesmer was clumsily stumbling into a new way of engaging that neurological patterning I have called the general bicameral paradigm with its four aspects: collective cognitive imperative, induction, trance, and archaic authorization.

Through authority and authorization, immense power and persuasion can be wielded. Jaynes argues that it is central to the human mind, but that in developing consciousness we learned how to partly internalize the process. Even so, Jaynesian self-consciousness is never a permanent, continuous state and the power of individual self-authorization easily morphs back into external forms. This is far from idle speculation, considering authoritarianism still haunts the modern mind. I might add that the ultimate power of authoritarianism, as Jaynes makes clear, isn’t overt force and brute violence. Outward forms of power are only necessary to the degree that external authorization is relatively weak, as is typically the case in modern societies.

This touches upon the issue of rhetoric, although Jaynes never mentioned the topic. It’s disappointing since his original analysis of metaphor has many implications. Fortunately, others have picked up where he left off (see Ted Remington, Brian J. McVeigh, and Frank J. D’Angelo). Authorization in the ancient world came through a poetic voice, but today it is most commonly heard in rhetoric.

Still, that old time religion can be heard in the words and rhythm of any great speaker. Just listen to how a recorded speech of Martin Luther King, Jr. can pull you in with its musicality. Or if you prefer a dark example, consider the persuasive power of Adolf Hitler: even some Jews admitted to being caught up in his speeches. This is why Plato feared the poets and banished them from his utopia of enlightened rule. Poetry would inevitably undermine and subsume the high-minded rhetoric of philosophers. “[P]oetry used to be divine knowledge,” as Guerini et al. write in Echoes of Persuasion. “It was the sound and tenor of authorization and it commanded where plain prose could only ask.”

Metaphor grows naturally in poetic soil, but its seeds are planted in every aspect of language and thought, bearing fruit in our perceptions and actions. This is a thousandfold true on the collective level of society and politics. Metaphors are most powerful when we don’t see them as metaphors. So, the most persuasive rhetoric is that which hides its metaphorical frame and obfuscates any attempts to bring it to light.

Going far back into the ancient world, metaphors didn’t need to be hidden in this sense, because there was no conceptual understanding of metaphors as metaphors. Instead, metaphors were taken literally. The way people spoke about reality was inseparable from their experience of reality, and they had no way of stepping back from their cultural biases, as the cultural worldviews they existed within were all-encompassing. It’s only with the later rise of multicultural societies, especially the vast multi-ethnic trade empires, that people began to think in terms of multiple perspectives. Such a society was developing among the trading and colonizing city-states of Greece in the centuries leading up to Hellenism.

That is the well known part of Jaynes’ speculations, the basis of his proposed bicameral mind. And Jaynes considered it extremely relevant to the present.

Marcel Kuijsten wrote that, “Jaynes maintained that we are still deep in the midst of this transition from bicamerality to consciousness; we are continuing the process of expanding the role of our internal dialogue and introspection in the decision-making process that was started some 3,000 years ago. Vestiges of the bicameral mind — our longing for absolute guidance and external control — make us susceptible to charismatic leaders, cults, trends, and persuasive rhetoric that relies on slogans to bypass logic” (“Consciousness, Hallucinations, and the Bicameral Mind: Three Decades of New Research”, Reflections on the Dawn of Consciousness, Kindle Locations 2210-2213). Considering the present, in Authoritarian Grammar and Fundamentalist Arithmetic, Ben G. Price puts it starkly: “Throughout, tyranny asserts its superiority by creating a psychological distance between those who command and those who obey. And they do this with language, which they presume to control.” Price’s point is that this knowledge, even as it can serve as an intellectual defense, might just lead to even more effective authoritarianism.

We’ve grown less fearful of rhetoric because we see ourselves as savvy, experienced consumers of media. The cynical modern mind is always on guard, our well-developed and rigid state of consciousness offering a continuous psychological buffer against the intrusions of the world. So we like to think. I remember, back in 7th grade, being taught how the rhetoric of advertising is used to manipulate us. But we are over-confident. Consciousness operates at the surface of the psychic depths. We are better at rationalizing than at being rational, something we may understand intellectually, but rarely do we fully acknowledge its psychological and societal significance. That is the usefulness of theories like bicameralism: they remind us that we are out of our depths. In the ancient world, there was a profound mistrust between the poetic and the rhetorical, and for good reason. We would be wise to learn from that clash of mindsets and worldviews.

We shouldn’t be so quick to assume we understand our own minds, the kind of vessel we find ourselves on. Nor should we allow ourselves to get too comfortable within the worldview we’ve always known, the safe harbor of our familiar patterns of mind. It’s hard to think about these issues because they touch upon our own being, the surface of consciousness along with the depths below it. It is like the nearly impossible task of fathoming the ocean floor with a rope and a weight, a task that grows easier the closer we hug the shoreline. But what might we find if we cast ourselves out on open waters? What new lands might be found, lands to be newly discovered and lands already inhabited?

We moderns love certainty. And it’s true we possess more accumulated knowledge than any civilization before us. Yet we’ve partly made the unfamiliar familiar by remaking the world in our own image. There is no place on earth that remains entirely untouched. Only a couple hundred small, isolated tribes remain uncontacted, representing foreign worldviews not known or studied, but even they live under unnatural conditions of stress as the larger world closes in on them. Most of the ecological and cultural diversity that once existed has been obliterated from the face of the earth, most of it leaving not a single trace or record. Populations beyond count faced extermination by outside influences and forces before they ever got a chance to meet an outsider. Plagues, environmental destruction, and societal collapse wiped them out, often in short periods of time.

Those other cultures might have gifted us with insights about our humanity that are now lost forever, just as extinct species might have held answers to questions not yet asked and medicines for diseases not yet understood. Almost all that is left now is a nearly complete monoculture, the differences ever shrinking into the constraints of capitalist realism. If not for scientific studies done on the last isolated tribal peoples, we would never know how much diversity exists within human nature. Many of the conclusions that earlier social scientists drew were based mostly on studies of white, middle-class college kids in Western countries, what some have called the WEIRD: Western, Educated, Industrialized, Rich, and Democratic. But many of those conclusions have since proven wrong, biased, or limited.

When Jaynes first thought about such matters, the social sciences were still getting established as serious fields of study. He entered college around 1940, when behaviorism was a dominant paradigm. It was only in the prior decades that the very idea of ‘culture’ began to take hold among anthropologists. He was influenced by anthropologists, directly and indirectly. One indirect influence came by way of E. R. Dodds, a classical scholar who, in writing his 1951 The Greeks and the Irrational, found inspiration in Ruth Benedict’s anthropological work comparing cultures (Benedict taking this perspective through the combination of the ideas of Franz Boas and Carl Jung). Still, anthropology was young, and the fascinating cases so well known today were unknown back then (e.g., Daniel Everett’s recent books on the Pirahã). So, following Dodds’ example, Jaynes turned to ancient societies and their literature.

His ideas were forming at the same time the social sciences were gaining respectability and maturity. It was a time when many scholars and other intellectuals were more fully questioning Western civilization. But it was also the time when Western ascendancy was becoming clear, with the Ottoman Empire ending after WWI and the Japanese Empire after WWII. The whole world was falling under Western cultural influence. And traditional societies were in precipitous decline. That was the dawning of the age of monoculture.

We are the inheritors of the world that was created from that wholesale destruction of all that came before. And even what came before was built on millennia of collapsing civilizations. Jaynes focused on the earliest example of mass destruction and chaos, which led him to see a stark division between what came before and after. How do we understand why we came to be the way we are when so much has been lost? We are forced back on our own ignorance. Jaynes apparently understood that and so considered awe to be the proper response. We know the world through our own humanity, but we can only know our own humanity through the cultural worldview we are born into. It is our words that have meaning, was Jaynes’ response, “not life or persons or the universe itself.” That is to say we bring meaning to what we seek to understand. Meaning is created, not discovered. And the kind of meaning we create depends on our cultural worldview.

In Monoculture, F. S. Michaels writes (pp. 1-2):

THE HISTORY OF HOW we think and act, said twentieth-century philosopher Isaiah Berlin, is, for the most part, a history of dominant ideas. Some subject rises to the top of our awareness, grabs hold of our imagination for a generation or two, and shapes our entire lives. If you look at any civilization, Berlin said, you will find a particular pattern of life that shows up again and again, that rules the age. Because of that pattern, certain ideas become popular and others fall out of favor. If you can isolate the governing pattern that a culture obeys, he believed, you can explain and understand the world that shapes how people think, feel and act at a distinct time in history.

The governing pattern that a culture obeys is a master story — one narrative in society that takes over the others, shrinking diversity and forming a monoculture. When you’re inside a master story at a particular time in history, you tend to accept its definition of reality. You unconsciously believe and act on certain things, and disbelieve and fail to act on other things. That’s the power of the monoculture; it’s able to direct us without us knowing too much about it.

Over time, the monoculture evolves into a nearly invisible foundation that structures and shapes our lives, giving us our sense of how the world works. It shapes our ideas about what’s normal and what we can expect from life. It channels our lives in a certain direction, setting out strict boundaries that we unconsciously learn to live inside. It teaches us to fear and distrust other stories; other stories challenge the monoculture simply by existing, by representing alternate possibilities.

Jaynes argued that ideas are more than mere concepts. Ideas are embedded in language and metaphor. And ideas take form not just as culture but as entire worldviews built on interlinked patterns of attitudes, thought, perception, behavior, and identity. Taken together, this is the reality tunnel we exist within.

It takes a lot to shake us loose from these confines of the mind. Certain practices, from meditation to imbibing psychedelics, can temporarily or permanently alter the matrix of our identity. Jaynes, for reasons of his own, came to question the inevitability of the society around him, which allowed him to see that other possibilities might exist. The direction his queries took him landed him in foreign territory, outside of the idolized individualism of Western modernity.

His ideas might have been less challenging in a different society. We modern Westerners identify ourselves with our thoughts, the internalized voice of egoic consciousness. And we see this as the greatest prize of civilization, the hard-won rights and freedoms of the heroic individual. It’s the story we tell. But in other societies, such as in the East, there are traditions that teach the self is distinct from thought. From the Buddhist perspective of dependent (co-)origination, it is a much less radical notion that the self arises out of thought, instead of the other way around, and that thought itself simply arises. A Buddhist would have a much easier time intuitively grasping the theory of bicameralism, that thoughts are greater than and precede the self.

Maybe we modern Westerners need to practice a sense of awe, to inquire more deeply. Jaynes offers a different way of thinking that doesn’t even require us to look to another society. If he is correct, this radical worldview is at the root of Western Civilization. Maybe the traces of the past are still with us.

* * *

The Origin of Rhetoric in the Breakdown of the Bicameral Mind
by Ted Remington

Endogenous Hallucinations and the Bicameral Mind
by Rick Straussman

Consciousness and Dreams
by Marcel Kuijsten, Julian Jaynes Society

Ritual and the Consciousness Monoculture
by Sarah Perry, Ribbonfarm

“I’m Nobody”: Lyric Poetry and the Problem of People
by David Baker, The Virginia Quarterly Review

It is in fact dangerous to assume a too similar relationship between those ancient people and us. A fascinating difference between the Greek lyricists and ourselves derives from the entity we label “the self.” How did the self come to be? Have we always been self-conscious, of two or three or four minds, a stew of self-aware voices? Julian Jaynes thinks otherwise. In The Origin of Consciousness in the Breakdown of the Bicameral Mind—that famous book my poetry friends adore and my psychologist friends shrink from—Jaynes surmises that the early classical mind, still bicameral, shows us the coming-into-consciousness of the modern human, shows our double-minded awareness as, originally, a haunted hearing of voices. To Jaynes, thinking is not the same as consciousness: “one does one’s thinking before one knows what one is to think about.” That is, thinking is not synonymous with consciousness or introspection; it is rather an automatic process, notably more reflexive than reflective. Jaynes proposes that epic poetry, early lyric poetry, ritualized singing, the conscience, even the voices of the gods, all are one part of the brain learning to hear, to listen to, the other.

Auditory Hallucinations: Psychotic Symptom or Dissociative Experience?
by Andrew Moskowitz & Dirk Corstens

Voices heard by persons diagnosed schizophrenic appear to be indistinguishable, on the basis of their experienced characteristics, from voices heard by persons with dissociative disorders or by persons with no mental disorder at all.

Neuroimaging, auditory hallucinations, and the bicameral mind.
by L. Sher, Journal of Psychiatry and Neuroscience

Olin suggested that recent neuroimaging studies “have illuminated and confirmed the importance of Jaynes’ hypothesis.” Olin believes that recent reports by Lennox et al and Dierks et al support the bicameral mind. Lennox et al reported a case of a right-handed subject with schizophrenia who experienced a stable pattern of hallucinations. The authors obtained images of repeated episodes of hallucination and observed its functional anatomy and time course. The patient’s auditory hallucination occurred in his right hemisphere but not in his left.

What Is It Like to Be Nonconscious?: A Defense of Julian Jaynes
by Gary William, Phenomenology and the Cognitive Sciences

To explain the origin of consciousness is to explain how the analog “I” began to narratize in a functional mind-space. For Jaynes, to understand the conscious mind requires that we see it as something fleeting rather than something always present. The constant phenomenality of what-it-is-like to be an organism is not equivalent to consciousness and, subsequently, consciousness must be thought in terms of the authentic possibility of consciousness rather than its continual presence.

Defending Damasio and Jaynes against Block and Gopnik
by Emilia Barile, Phenomenology Lab

When Jaynes says that there was “nothing it is like” to be preconscious, he certainly didn’t mean to say that nonconscious animals are somehow not having subjective experience in the sense of “experiencing” or “being aware” of the world. When Jaynes said there is “nothing it is like” to be preconscious, he means that there is no sense of mental interiority and no sense of autobiographical memory. Ask yourself what it is like to be driving a car and then suddenly wake up and realize that you have been zoned out for the past minute. Was there something it is like to drive on autopilot? This depends on how we define “what it is like”.

“The Evolution of the Analytic Topoi: A Speculative Inquiry”
by Frank J. D’Angelo
from Essays on Classical Rhetoric and Modern Discourse
ed. Robert J. Connors, Lisa S. Ede, & Andrea A. Lunsford
pp. 51-5

The first stage in the evolution of the analytic topoi is the global stage. Of this stage we have scanty evidence, since we must assume the ontogeny of invention in terms of spoken language long before the individual is capable of anything like written language. But some hints of how logical invention might have developed can be found in the work of Eric Havelock. In his Preface to Plato, Havelock, in recapitulating the educational experience of the Homeric and post-Homeric Greek, comments that the psychology of the Homeric Greek is characterized by a high degree of automatism.

He is required as a civilised being to become acquainted with the history, the social organisation, the technical competence and the moral imperatives of his group. This in turn is able to function only as a fragment of the total Hellenic world. It shares a consciousness in which he is keenly aware that he, as a Hellene, […] in his memory. Such is poetic tradition, essentially something he accepts uncritically, or else it fails to survive in his living memory. Its acceptance and retention are made psychologically possible by a mechanism of self-surrender to the poetic performance and of self-identification with the situations and the stories related in the performance. . . . His receptivity to the tradition has thus, from the standpoint of inner psychology, a degree of automatism which however is counter-balanced by a direct and unfettered capacity for action in accordance with the paradigms he has absorbed. 6

Preliterate man was apparently unable to think logically. He acted, or as Julian Jaynes, in The Origin of Consciousness in the Breakdown of the Bicameral Mind, puts it, “reacted” to external events. “There is in general,” writes Jaynes, “no consciousness in the Iliad . . . and in general therefore, no words for consciousness or mental acts.” 7 There was, in other words, no subjective consciousness in Iliadic man. His actions were not rooted in conscious plans or in reasoning. We can only speculate, then, based on the evidence given by Havelock and Jaynes that logical invention, at least in any kind of sophisticated form, could not take place until the breakdown of the bicameral mind, with the invention of writing. If ancient peoples were unable to introspect, then we must assume that the analytic topoi were a discovery of literate man. Eric Havelock, however, warns that the picture he gives of Homeric and post-Homeric man is oversimplified and that there are signs of a latent mentality in the Greek mind. But in general, Homeric man was more concerned to go along with the tradition than to make individual judgments.

For Iliadic man to be able to think, he must think about something. To do this, states Havelock, he had to be able to revolt against the habit of self-identification with the epic poem. But identification with the poem at this time in history was necessary psychologically (identification was necessary for memorization). Before thought could occur, the abstractions implicit in the epic story, as acts or events that are carried out by important people, had to be abstracted from the narrative flux. “Thus the autonomous subject who no longer recalls and feels, but knows, can now be confronted with a thousand abstract laws, principles, topics, and formulas which become the objects of his knowledge.” 8

The analytic topoi, then, were implicit in oral poetic discourse. They were “experienced” in the patterns of epic narrative, but once they are abstracted they can become objects of thought as well as of experience. As Eric Havelock puts it,

If we view them [these abstractions] in relation to the epic narrative from which, as a matter of historical fact, they all emerged they can all be regarded as in one way or another classifications of an experience which was previously “felt” in an unclassified medley. This was as true of justice as of motion, of goodness as of body or space, of beauty as of weight or dimension. These categories turn into linguistic counters, and become used as a matter of course to relate one phenomenon to another in a non-epic, non-poetic, non-concrete idiom. 9

The invention of the alphabet made it easier to report experience in a non-epic idiom. But it might be a simplification to suppose that the advent of alphabetic technology was the only influence on the emergence of logical thinking and the analytic topics, although perhaps it was the major influence. Havelock contends that the first “proto-thinkers” of Greece were the poets who at first used rhythm and oral formulas to attempt to arrange experience in categories, rather than in narrative events. He mentions in particular that it was Hesiod who first parts company with the narrative in the Theogony and Works and Days. In Works and Days, Hesiod uses a cataloging technique, consisting of proverbs, aphorisms, wise sayings, exhortations, and parables, intermingled with stories. But this effect of cataloging that goes “beyond the plot of a story in order to impose a rough logic of topics . . . presumes that Hesiod is […]” 10

The kind of material found in the catalogs of Hesiod was more like the cumulative commonplace material of the Renaissance than the abstract topics that we are familiar with today. Walter Ong notes that “the oral performer, poet or orator needed a stock of material to keep him going. The doctrine of the commonplaces is, from one point of view, the codification of ways of assuring and managing this stock.” 11 We already know what some of the material was like: stock epithets, figures of speech, exempla, proverbs, sententiae, quotations, praises or censures of people and things, and brief treatises on virtues and vices. By the time we get to the invention of printing, there are vast collections of this commonplace material, so vast, relates Ong, that scholars could probably never survey it all. Ong goes on to observe that

print gave the drive to collect and classify such excerpts a potential previously undreamed of. . . . the ranging of items side by side on a page once achieved, could be multiplied as never before. Moreover, printed collections of such commonplace excerpts could be handily indexed; it was worthwhile spending days or months working up an index because the results of one’s labors showed fully in thousands of copies. 12

To summarize, then, in oral cultures rhetorical invention was bound up with oral performance. At this stage, both the cumulative topics and the analytic topics were implicit in epic narrative. Then the cumulative commonplaces begin to appear, separated out by a cataloging technique from poetic narrative, in sources such as the Theogony and Works and Days. Eric Havelock points out that in Hesiod, the catalog “has been isolated or abstracted . . . out of a thousand contexts in the rich reservoir of oral tradition. … A general world view is emerging in isolated or ‘abstracted’ form.” 13 Apparently, what we are witnessing is the emergence of logical thinking. Julian Jaynes describes the kind of thought to be found in the Works and Days as “preconscious hypostases.” Certain lines in Hesiod, he maintains, exhibit “some kind of bicameral struggle.” 14

The first stage, then, of rhetorical invention is that in which the analytic topoi are embedded in oral performance in the form of commonplace material as “relationships” in an undifferentiated matrix. Oral cultures preserve this knowledge by constantly repeating the fixed sayings and formulae. Mnemonic patterns, patterns of repetition, are not added to the thought of oral cultures. They are what the thought consists of.

Emerging selves: Representational foundations of subjectivity
by Wolfgang Prinz, Consciousness and Cognition

What, then, may mental selves be good for and why have they emerged during evolution (or, perhaps, human evolution or even early human history)? Answers to these questions used to take the form of stories explaining how the mental self came about and what advantages were associated with it. In other words, these are theories that construct hypothetical scenarios offering plausible explanations for why certain (groups of) living things that initially do not possess a mental self gain fitness advantages when they develop such an entity—with the consequence that they move from what we can call a self-less to a self-based or “self-morphic” state.

Modules for such scenarios have been presented occasionally in recent years by, for example, Dennett, 1990 and Dennett, 1992, Donald (2001), Edelman (1989), Jaynes (1976), Metzinger, 1993 and Metzinger, 2003, or Mithen (1996). Despite all the differences in their approaches, they converge around a few interesting points. First, they believe that the transition between the self-less and self-morphic state occurred at some stage during the course of human history—and not before. Second, they emphasize the cognitive and dynamic advantages accompanying the formation of a mental self. And, third, they also discuss the social and political conditions that promote or hinder the constitution of this self-morphic state. In the scenario below, I want to show how these modules can be keyed together to form a coherent construction. […]

Thus, where do thoughts come from? Who or what generates them, and how are they linked to the current perceptual situation? This brings us to a problem that psychology describes as the problem of source attribution (Heider, 1958).

One obvious suggestion is to transfer the schema for interpreting externally induced messages to internally induced thoughts as well. Accordingly, thoughts are also traced back to human sources and, likewise, to sources that are present in the current situation. Such sources can be construed in completely different ways. One solution is to trace the occurrence of thoughts back to voices—the voices of gods, priests, kings, or ancestors, in other words, personal authorities that are believed to have an invisible presence in the current situation. Another solution is to locate the source of thoughts in an autonomous personal authority bound to the body of the actor: the self.

These two solutions to the attribution problem differ in many ways: historically, politically, and psychologically. In historical terms, the former must be markedly older than the latter. The transition from one solution to the other and the mentalities associated with them are the subject of Julian Jaynes’s speculative theory of consciousness. He even considers that this transfer occurred during historical times: between the Iliad and the Odyssey. In the Iliad, according to Jaynes, the frame of mind of the protagonists is still structured in a way that does not perceive thoughts, feelings, and intentions as products of a personal self, but as the dictates of supernatural voices. Things have changed in the Odyssey: Odysseus possesses a self, and it is this self that thinks and acts. Jaynes maintains that the modern consciousness of Odysseus could emerge only after the self had taken over the position of the gods (Jaynes, 1976; see also Snell, 1975).

Moreover, it is obvious why the political implications of the two solutions differ so greatly: Societies whose members attribute their thoughts to the voices of mortal or immortal authorities produce castes of priests or nobles that claim to be the natural authorities or their authentic interpreters and use this to derive legitimization for their exercise of power. It is only when the self takes the place of the gods that such castes become obsolete, and authoritarian constructions are replaced by other political constructions that base the legitimacy for their actions on the majority will of a large number of subjects who are perceived to be autonomous.

Finally, an important psychological difference is that the development of a self-concept establishes the precondition for individuals to become capable of perceiving themselves as persons with a coherent biography. Once established, the self becomes involved in every re-presentation and representation as an implicit personal source, and just as the same body is always present in every perceptual situation, it is the same mental self that remains identical across time and place. […]

According to the cognitive theories of schizophrenia developed in the last decade (Daprati et al., 1997; Frith, 1992), these symptoms can be explained with the same basic pattern that Julian Jaynes uses in his theory to characterize the mental organization of the protagonists in the Iliad. Patients with delusions suffer from the fact that the standardized attribution schema that localizes the sources of thoughts in the self is not available to them. Therefore, they need to explain the origins of their thoughts, ideas, and desires in another way (see, e.g., Stephens & Graham, 2000). They attribute them to person sources that are present but invisible—such as relatives, physicians, famous persons, or extraterrestrials. Frequently, they also construct effects and mechanisms to explain how the thoughts proceeding from these sources are communicated, by, for example, voices or pictures transmitted over rays or wires, and nowadays frequently also over phones, radios, or computers. […]

As bizarre as these syndromes seem against the background of our standard concept of subjectivity and personhood, they fit perfectly with the theoretical idea that mental selves are not naturally given but rather culturally constructed and, in fact, set up in attribution processes. The unity and consistency of the self are not a natural necessity but a cultural norm, and when individuals are exposed to unusual developmental and life conditions, they may well develop deviant attribution patterns. Whether these deviations are due to disturbances in attribution to persons or to disturbances in dual representation cannot be decided here. Both biological and societal conditions are involved in the formation of the self, and when they take an unusual course, the causes could lie in both domains.


“The Varieties of Dissociative Experience”
by Stanley Krippner
from Broken Images Broken Selves: Dissociative Narratives In Clinical Practice
pp. 339-341

In his provocative description of the evolution of humanity’s conscious awareness, Jaynes (1976) asserted that ancient people’s “bicameral mind” enabled them to experience auditory hallucinations— the voices of the deities— but they eventually developed an integration of the right and left cortical hemispheres. According to Jaynes, vestiges of this dissociation can still be found, most notably among the mentally ill, the extremely imaginative, and the highly suggestible. Even before the development of the cortical hemispheres, the human brain had slowly evolved from a “reptilian brain” (controlling breathing, fighting, mating, and other fixed behaviors), to the addition of an “old-mammalian brain,” (the limbic system, which contributed emotional components such as fear, anger, and affection), to the superimposition of a “new-mammalian brain” (responsible for advanced sensory processing and thought processes). MacLean (1977) describes this “triune brain” as responsible, in part, for distress and inefficiency when the parts do not work well together. Both Jaynes’ and MacLean’s theories are controversial, but I believe that there is enough autonomy in the limbic system and in each of the cortical hemispheres to justify Ornstein’s (1986) conclusion that human beings are much more complex and intricate than they imagine, consisting of “an uncountable number of small minds” (p. 72), sometimes collaborating and sometimes competing. Donald’s (1991) portrayal of mental evolution also makes use of the stylistic differences of the cerebral hemisphere, but with a greater emphasis on neuropsychology than Jaynes employs. Mithen’s (1996) evolutionary model is a sophisticated account of how specialized “cognitive domains” reached the point that integrated “cognitive fluidity” (apparent in art and the use of symbols) was possible.

James (1890) spoke of a “multitude” of selves, and some of these selves seem to go their separate ways in posttraumatic stress disorder (PTSD) (see Greening, Chapter 5), dissociative identity disorder (DID) (see Levin, Chapter 6), alien abduction experiences (see Powers, Chapter 9), sleep disturbances (see Barrett, Chapter 10), psychedelic drug experiences (see Greenberg, Chapter 11), death terrors (see Lapin, Chapter 12), fantasy proneness (see Lynn, Pintar, & Rhue, Chapter 13), near-death experiences (NDEs) (see Greyson, Chapter 7), and mediumship (see Grosso, Chapter 8). Each of these conditions can be placed into a narrative construction, and the value of these frameworks has been described by several authors (e.g., Barclay, Chapter 14; Lynn, Pintar, & Rhue, Chapter 13; White, Chapter 4). Barclay (Chapter 14) and Powers (Chapter 15) have addressed the issue of narrative veracity and validation, crucial issues when stories are used in psychotherapy. The American Psychiatric Association’s Board of Trustees (1993) felt constrained to issue an official statement that “it is not known what proportion of adults who report memories of sexual abuse were actually abused” (p. 2). Some reports may be fabricated, but it is more likely that traumatic memories may be misconstrued and elaborated (Steinberg, 1995, p. 55). Much of the same ambiguity surrounds many other narrative accounts involving dissociation, especially those described by White (Chapter 4) as “exceptional human experiences.”

Nevertheless, the material in this book makes the case that dissociative accounts are not inevitably uncontrolled and dysfunctional. Many narratives considered “exceptional” from a Western perspective suggest that dissociation once served and continues to serve adaptive functions in human evolution. For example, the “sham death” reflex found in animals with slow locomotor abilities effectively offers protection against predators with greater speed and agility. Uncontrolled motor responses often allow an animal to escape from dangerous or frightening situations through frantic, trial-and-error activity (Kretchmer, 1926). Many evolutionary psychologists have directed their attention to the possible value of a “multimodular” human brain that prevents painful, unacceptable, and disturbing thoughts, wishes, impulses, and memories from surfacing into awareness and interfering with one’s ongoing contest for survival (Nesse & Lloyd, 1992, p. 610). Ross (1991) suggests that Western societies suppress this natural and valuable capacity at their peril.

The widespread prevalence of dissociative reactions argues for their survival value, and Ludwig (1983) has identified seven of them: (1) The capacity for automatic control of complex, learned behaviors permits organisms to handle a much greater work load in as smooth a manner as possible; habitual and learned behaviors are permitted to operate with a minimum expenditure of conscious control. (2) The dissociative process allows critical judgment to be suspended so that, at times, gratification can be more immediate. (3) Dissociation seems ideally suited for dealing with basic conflicts when there is no instant means of resolution, freeing an individual to take concerted action in areas lacking discord. (4) Dissociation enables individuals to escape the bounds of reality, providing for inspiration, hope, and even some forms of “magical thinking.” (5) Catastrophic experiences can be isolated and kept in check through dissociative defense mechanisms. (6) Dissociative experiences facilitate the expression of pent-up emotions through a variety of culturally sanctioned activities. (7) Social cohesiveness and group action often are facilitated by dissociative activities that bind people together through heightened suggestibility.

Each of these potentially adaptive functions may be life-depotentiating as well as life-potentiating; each can be controlled as well as uncontrolled. A critical issue for the attribution of dissociation may be the dispositional set of the experiencer-in-context along with the event’s adaptive purpose. Salamon (1996) described her mother’s ability to disconnect herself from unpleasant surroundings or facts, a proclivity that led to her ignoring the oncoming imprisonment of Jews in Nazi Germany but that, paradoxically, enabled her to survive her years in Auschwitz. Gergen (1991) has described the jaundiced eye that modern Western science has cast toward Dionysian revelry, spiritual experiences, mysticism, and a sense of bonded unity with nature, a hostility he predicts may evaporate in the so-called “postmodern” era, which will “open the way to the full expression of all discourses” (pp. 246– 247). For Gergen, this postmodern lifestyle is epitomized by Proteus, the Greek sea god, who could change his shape from wild boar to dragon, from fire to flood, without obvious coherence through time. This is all very well and good, as long as this dissociated existence does not leave— in its wake— a residue of broken selves whose lives have lost any intentionality or meaning, who live in the midst of broken images, and whose multiplicity has resulted in nihilistic affliction and torment rather than in liberation and fulfillment (Glass, 1993, p. 59).

 

 

Choral Singing and Self-Identity

I haven’t previously given any thought to choral singing. I’ve never been much of a singer, not even in private. My desire to sing in public with a group of others is next to non-existent. So, it never occurred to me what might be the social experience and psychological result of being in a choir.

It appears something odd goes on in such a situation. Maybe it’s not odd, but it has come to seem odd to us moderns. This kind of group activity has become uncommon. In our hyper-individualistic society, we forget how much we are social animals. Our individualism is dependent on highly unusual conditions that wouldn’t have existed for most of civilization.

I was reminded a while back of this social aspect when reading about Galen in the Roman Empire. Individualism is not the normal state or, one might argue, the healthy state of humanity. It is rather difficult to create individuals and, even then, our individuality is superficial and tenuous. Humans so quickly lump themselves into groups.

This evidence about choral singing makes me wonder about earlier societies. What role did music play, specifically group singing (along with dancing and ritual), in creating particular kinds of cultures and social identities? And how might that have related to pre-literate memory systems that rooted people in a concrete sense of the world, such as Aboriginal songlines?

I’ve been meaning to write about Lynne Kelly’s book, Knowledge and Power in Prehistoric Societies. This is part of my long term focus on what sometimes is called the bicameral mind and the issue of its breakdown. Maybe choral singing touches upon the bicameral mind.

* * *

It’s better together: The psychological benefits of singing in a choir
by N.A. Stewart & A.J. Lonsdale, Psychology of Music

Previous research has suggested that singing in a choir might be beneficial for an individual’s psychological well-being. However, it is unclear whether this effect is unique to choral singing, and little is known about the factors that could be responsible for it. To address this, the present study compared choral singing to two other relevant leisure activities, solo singing and playing a team sport, using measures of self-reported well-being, entitativity, need fulfilment and motivation. Questionnaire data from 375 participants indicated that choral singers and team sport players reported significantly higher psychological well-being than solo singers. Choral singers also reported that they considered their choirs to be a more coherent or ‘meaningful’ social group than team sport players considered their teams. Together these findings might be interpreted to suggest that membership of a group may be a more important influence on the psychological well-being experienced by choral singers than singing. These findings may have practical implications for the use of choral singing as an intervention for improving psychological well-being.

More Evidence of the Psychological Benefits of Choral Singing
by Tom Jacobs, Pacific Standard

The synchronistic physical activity of choristers appears to create an unusually strong bond, giving members the emotionally satisfying experience of temporarily “disappearing” into a meaningful, coherent body. […]

The first finding was that choral singers and team sports players “reported significantly higher levels of well-being than solo singers.” While this difference was found on only one of the three measures of well-being, it does suggest that activities “pursued as part of a group” are associated with greater self-reported well-being.

Second, they found choral singers appear to “experience a greater sense of being part of a meaningful, or ‘real’ group, than team sports players.” This perception, which is known as “entitativity,” significantly predicted participants’ scores on all three measures of well-being. […]

The researchers suspect this feeling arises naturally from choral singers’ “non-conscious mimicry of others’ actions.” This form of physical synchrony “has been shown to lead to self-other merging,” they noted, “which may encourage choral singers to adopt a ‘we perspective’ rather than an egocentric perspective.”

Not surprisingly, choral singers experienced the lowest autonomy of the three groups. Given that autonomy can be very satisfying, this may explain why overall life-satisfaction scores were similar for choral singers (who reported little autonomy but strong bonding), and sports team members (who experienced moderate levels of both bonding and autonomy).

Self, Other, & World

The New Science of the Mind:
From Extended Mind to Embodied Phenomenology
by Mark Rowlands
Kindle Locations 54-62

The new way of thinking about the mind is inspired by, and organized around, not the brain but some combination of the ideas that mental processes are (1) embodied, (2) embedded, (3) enacted, and (4) extended. Shaun Gallagher has referred to this, in conversation, as the 4e conception of the mind.4 The idea that mental processes are embodied is, very roughly, the idea that they are partly constituted by, partly made up of, wider (i.e., extraneural) bodily structures and processes. The idea that mental processes are embedded is, again roughly, the idea that mental processes have been designed to function only in tandem with a certain environment that lies outside the brain of the subject. In the absence of the right environmental scaffolding, mental processes cannot do what they are supposed to do, or can only do what they are supposed to do less than optimally. The idea that mental processes are enacted is the idea that they are made up not just of neural processes but also of things that the organism does more generally—that they are constituted in part by the ways in which an organism acts on the world and the ways in which the world, as a result, acts back on that organism. The idea that mental processes are extended is the idea that they are not located exclusively inside an organism’s head but extend out, in various ways, into the organism’s environment.

On animism, multinaturalism, & cosmopolitics
by Adrian J Ivakhiv

If, as Latour argues, we are no longer to rely on the singular foundation of a nature that speaks to us through the singular voice of science, then we are thrown into a world in which humans are thought to resemble, in some measure, all other entities (think Darwin alongside Amazonian shamanism) and to radically differ, though in ways that are bridgeable through translation. This would be a world that demands an ontological politics, or a cosmopolitics, by which the choices open to us with respect to the different ways we can entangle ourselves with places, non-humans, technologies, and the material world as a whole, become ethically inflected open questions.

Can “Late Antiquity” Be Saved?
by Philip Rousseau

This issue of a Eurocentric “take” on the late Roman world now finds itself swept up into what has been termed the “ontological turn,” which I suppose is where the “new humanities” come in. More and more people are becoming familiar with this debate. It centers chiefly on a conviction that, in any one place at any one time, the people alive there and then had a sense of “reality” — a word we’re quite rightly not entirely happy with — that was unique to themselves. Indeed, more than a sense: their understanding of “that which is the case” was not simply a symbolizing reaction to a set of experiences that we otherwise share with them although respond to differently — that is, the material, anthropocentric, individualized world that we tend to suppose has always been “out there.” They (like many now) lived in a world (rather than just in a frame of mind) that was itself totally different from the world that we (whoever “we” are) experience. Actually (another tell-tale word), phenomenology, cognitive science, and quantum physics, if nothing else, have shown us what a far from enduring particularity the “out there” world is.

Retrieving the Lost Worlds of the Past:
The Case for an Ontological Turn
by Greg Anderson

Our discipline’s grand historicist project, its commitment to producing a kind of cumulative biography of our species, imposes strict limits on the kinds of stories we can tell about the past. Most immediately, our histories must locate all of humanity’s diverse lifeworlds within the bounds of a single, universal “real world” of time, space, and experience. To do this, they must render experiences in all those past lifeworlds duly commensurable and mutually intelligible. And to do this, our histories must use certain commonly accepted models and categories, techniques and methods. The fundamental problem here is that all of these tools of our practice presuppose a knowledge of experience that is far from universal, as postcolonial theorists and historians like Dipesh Chakrabarty have so well observed. In effect, these devices require us to “translate” the experiences of all past lifeworlds into the experiences of just one lifeworld, namely those of a post-Enlightenment “Europe,” the world of our own secular, capitalist modernity. In so doing, they actively limit our ability to represent the past’s many non-secular, non-capitalist, non-modern “ways of being human.” […]

This ontological individualism would have been scarcely intelligible to, say, the inhabitants of precolonial Bali or Hawai’i, where the divine king or chief, the visible incarnation of the god Lono, was “the condition of possibility of the community,” and thus “encompasse[d] the people in his own person, as a projection of his own being,” such that his subjects were all “particular instances of the chief’s existence.” 12 It would have been barely imaginable, for that matter, in the world of medieval Europe, where conventional wisdom proverbially figured sovereign and subjects as the head and limbs of a single, primordial “body politic” or corpus mysticum. 13 And the idea of a natural, presocial individual would be wholly confounding to, say, traditional Hindus and the Hagen people of Papua New Guinea, who objectify all persons as permeable, partible “dividuals” or “social microcosms,” as provisional embodiments of all the actions, gifts, and accomplishments of others that have made their lives possible.

We alone in the modern capitalist west, it seems, regard individuality as the true, primordial estate of the human person. We alone believe that humans are always already unitary, integrated selves, all born with a natural, presocial disposition to pursue a rationally calculated self-interest and act competitively upon our no less natural, no less presocial rights to life, liberty, and private property. We alone are thus inclined to see forms of sociality, like relations of kinship, nationality, ritual, class, and so forth, as somehow contingent, exogenous phenomena, not as essential constituents of our very subjectivity, of who or what we really are as beings. And we alone believe that social being exists to serve individual being, rather than the other way round. Because we alone imagine that individual humans are free-standing units in the first place, “unsocially sociable” beings who ontologically precede whatever “society” our self-interest prompts us to form at any given time.

Beyond Nature and Culture
by Philippe Descola
Kindle Locations 241-262.

Not so very long ago one could delight in the curiosities of the world without making any distinction between the information obtained from observing animals and that which the mores of antiquity or the customs of distant lands presented. “Nature was one” and reigned everywhere, distributing equally among humans and nonhumans a multitude of technical skills, ways of life, and modes of reasoning. Among the educated at least, that age came to an end a few decades after Montaigne’s death, when nature ceased to be a unifying arrangement of things, however disparate, and became a domain of objects that were subject to autonomous laws that formed a background against which the arbitrariness of human activities could exert its many-faceted fascination. A new cosmology had emerged, a prodigious collective invention that provided an unprecedented framework for the development of scientific thought and that we, at the beginning of the twenty-first century, continue, in a rather offhand way, to protect. The price to be paid for that simplification included one aspect that it has been possible to overlook, given that we have not been made to account for it: while the Moderns were discovering the lazy propensity of barbaric and savage peoples to judge everything according to their own particular norms, they were masking their own ethnocentricity behind a rational approach to knowledge, the errors of which at that time escaped notice. It was claimed that everywhere and in every age, an unchanging mute and impersonal nature established its grip, a nature that human beings strove to interpret more or less plausibly and from which they endeavored to profit, with varying degrees of success. Their widely diverse conventions and customs could now make sense only if they were related to natural regularities that were more or less well understood by those affected by them. 
It was decreed, but with exemplary discretion, that our way of dividing up beings and things was a norm to which there were no exceptions. Carrying forward the work of philosophy, of whose predominance it was perhaps somewhat envious, the fledgling discipline of anthropology ratified the reduction of the multitude of existing things to two heterogeneous orders of reality and, on the strength of a plethora of facts gathered from every latitude, even bestowed upon that reduction the guarantee of universality that it still lacked. Almost without noticing, anthropology committed itself to this way of proceeding, such was the fascination exerted by the shimmering vision of “cultural diversity,” the listing and study of which now provided it with its raison d’être. The profusion of institutions and modes of thought was rendered less formidable and its contingency more bearable if one took the view that all these practices— the logic of which was sometimes so hard to discover— constituted so many singular responses to a universal challenge: namely, that of disciplining and profiting from the biophysical potentialities offered by bodies and their environment.

What Kinship Is-And Is Not
by Marshall Sahlins
p. 2

In brief, the idea of kinship in question is “mutuality of being”: people who are intrinsic to one another’s existence— thus “mutual person(s),” “life itself,” “intersubjective belonging,” “transbodily being,” and the like. I argue that “mutuality of being” will cover the variety of ethnographically documented ways that kinship is locally constituted, whether by procreation, social construction, or some combination of these. Moreover, it will apply equally to interpersonal kinship relations, whether “consanguineal” or “affinal,” as well as to group arrangements of descent. Finally, “mutuality of being” will logically motivate certain otherwise enigmatic effects of kinship bonds— of the kind often called “mystical”— whereby what one person does or suffers also happens to others. Like the biblical sins of the father that descend on the sons, where being is mutual, there experience is more than individual.

The Habitus Process
A Biopsychosocial Conception
By Andreas Pickel

The habitus-personality complex is linked (at the top) to a social system. As I have proposed earlier, habitus is an emergent property of a social system. The habitus-personality complex is also linked (at the bottom) to a biopsychic system which generates a personality as an emergent property. Thus there is a bottom-up causality and a top-down causality at work. The habitus-personality complex, while composed of two emergent properties (bottom-up: personality; top down: habitus), can also be seen as a process. In this view, the habitus mechanism refers to the working of system-specific patterns of wanting, feeling, thinking, doing and interacting, while the personality mechanism refers to individual forms of wanting, feeling, thinking, doing and interacting. The two simultaneously operating mechanisms produce self-consciousness and identity, and what Elias calls the “we”-“I” balance in a personality (Elias 1991).

How Forests Think:
Toward an Anthropology Beyond the Human
By Eduardo Kohn
p. 6

Attending to our relations with those beings that exist in some way beyond the human forces us to question our tidy answers about the human. The goal here is neither to do away with the human nor to reinscribe it but to open it. In rethinking the human we must also rethink the kind of anthropology that would be adequate to this task. Sociocultural anthropology in its various forms as it is practiced today takes those attributes that are distinctive to humans— language, culture, society, and history— and uses them to fashion the tools to understand humans. In this process the analytical object becomes isomorphic with the analytics. As a result we are not able to see the myriad ways in which people are connected to a broader world of life, or how this fundamental connection changes what it might mean to be human. And this is why expanding ethnography to reach beyond the human is so important. An ethnographic focus not just on humans or only on animals but also on how humans and animals relate breaks open the circular closure that otherwise confines us when we seek to understand the distinctively human by means of that which is distinctive to humans.

Vibrant Matter:
A Political Ecology of Things
by Jane Bennett
pp. 8-10

I may have met a relative of Odradek while serving on a jury, again in Baltimore, for a man on trial for attempted homicide. It was a small glass vial with an adhesive-covered metal lid: the Gunpowder Residue Sampler. This object/witness had been dabbed on the accused’s hand hours after the shooting and now offered to the jury its microscopic evidence that the hand had either fired a gun or been within three feet of a gun firing. Expert witnesses showed the sampler to the jury several times, and with each appearance it exercised more force, until it became vital to the verdict. This composite of glass, skin cells, glue, words, laws, metals, and human emotions had become an actant. Actant, recall, is Bruno Latour’s term for a source of action; an actant can be human or not, or, most likely, a combination of both. Latour defines it as “something that acts or to which activity is granted by others. It implies no special motivation of human individual actors, nor of humans in general.” 24 An actant is neither an object nor a subject but an “intervener,” 25 akin to the Deleuzean “quasi-causal operator.” 26 An operator is that which, by virtue of its particular location in an assemblage and the fortuity of being in the right place at the right time, makes the difference, makes things happen, becomes the decisive force catalyzing an event. Actant and operator are substitute words for what in a more subject-centered vocabulary are called agents. Agentic capacity is now seen as differentially distributed across a wider range of ontological types. This idea is also expressed in the notion of “deodand,” a figure of English law from about 1200 until it was abolished in 1846. In cases of accidental death or injury to a human, the nonhuman actant—for example, the carving knife that fell into human flesh or the carriage that trampled the leg of a pedestrian—became deodand (literally, “that which must be given to God”).
In recognition of its peculiar efficacy (a power that is less masterful than agency but more active than recalcitrance), the deodand, a materiality “suspended between human and thing,” 27 was surrendered to the crown to be used (or sold) to compensate for the harm done. According to William Pietz, “any culture must establish some procedure of compensation, expiation, or punishment to settle the debt created by unintended human deaths whose direct cause is not a morally accountable person, but a nonhuman material object. This was the issue thematized in public discourse by . . . the law of deodand.” 28

There are of course differences between the knife that impales and the man impaled, between the technician who dabs the sampler and the sampler, between the array of items in the gutter of Cold Spring Lane and me, the narrator of their vitality. But I agree with John Frow that these differences need “to be flattened, read horizontally as a juxtaposition rather than vertically as a hierarchy of being. It’s a feature of our world that we can and do distinguish . . . things from persons. But the sort of world we live in makes it constantly possible for these two sets of kinds to exchange properties.” 29 And to note this fact explicitly, which is also to begin to experience the relationship between persons and other materialities more horizontally, is to take a step toward a more ecological sensibility.

pp. 20-21

Thing-power perhaps has the rhetorical advantage of calling to mind a childhood sense of the world as filled with all sorts of animate beings, some human, some not, some organic, some not. It draws attention to an efficacy of objects in excess of the human meanings, designs, or purposes they express or serve. Thing-power may thus be a good starting point for thinking beyond the life-matter binary, the dominant organizational principle of adult experience. The term’s disadvantage, however, is that it also tends to overstate the thinginess or fixed stability of materiality, whereas my goal is to theorize a materiality that is as much force as entity, as much energy as matter, as much intensity as extension. Here the term out-side may prove more apt. Spinoza’s stones, an absolute Wild, the oozing Meadowlands, the nimble Odradek, the moving deodand, a processual minerality, an incalculable nonidentity— none of these are passive objects or stable entities (though neither are they intentional subjects). 1 They allude instead to vibrant materials.

A second, related disadvantage of thing-power is its latent individualism, by which I mean the way in which the figure of “thing” lends itself to an atomistic rather than a congregational understanding of agency. While the smallest or simplest body or bit may indeed express a vital impetus, conatus or clinamen, an actant never really acts alone. Its efficacy or agency always depends on the collaboration, cooperation, or interactive interference of many bodies and forces. A lot happens to the concept of agency once nonhuman things are figured less as social constructions and more as actors, and once humans themselves are assessed not as autonoms but as vital materialities.

Animism – The Seed Of Religion
by Edward Clodd
Kindle Locations 363-369

In the court held in ancient times in the Prytaneum at Athens to try any object, such as an axe or a piece of wood or stone which, independent of any human agency, had caused death, the offending thing was condemned and cast in solemn form beyond the border. Dr. Frazer cites the amusing instance of a cock which was tried at Basle in 1474 for having laid an egg, and which, being found guilty, was burnt as a sorcerer. “The recorded pleadings in the case are said to be very voluminous.” And only as recently as 1846 there was abolished in England the law of deodand, whereby not only a beast that kills a man, but a cart-wheel that runs over him, or a tree that crushes him, were deo dandus, or “given to God,” being forfeited and sold for the poor. The adult who, in momentary rage, kicks over the chair against which he has stumbled, is one with the child who beats the door against which he knocks his head, or who whips the “naughty” rocking-horse that throws him.

Architectural Agents:
The Delusional, Abusive, Addictive Lives of Buildings
by Annabel Jane Wharton
Kindle Locations 148-155

The deodand was common in medieval legal proceedings. Before the recognition of mitigating circumstances and degrees of murder and manslaughter, causing the death of a person was a capital offense. At that time the deodand may well have functioned as a legal ploy by which the liability for a death might be assessed in a just manner. Although the deodand seems to have almost disappeared in England by the eighteenth century, the law was revived in the nineteenth century. With industrialization the deodand was redeployed as a means of levying penalties for the many deaths caused by mechanical devices, particularly locomotives. 9 Legislation was passed to protect industrial interests, and the deodand was eliminated by an act of Parliament in 1846. Premodern intuitions about the animation of things allowed those things a semblance of the moral agency associated with culpability. With the rational repression of that intuition in modernity, the legal system required revision.

Kindle Locations 245-253

Now, as in the past, buildings may be immobile, but they are by no means passive. Our habitus— the way we live in the world— is certainly informed by our relations with other human beings. 24 But spatial objects also model our lives. Some structures, like Bentham’s infamous Panopticon, are insidiously manipulative. 25 But most buildings, like most people, can both confirm our familiar patterns of behavior and modify them. We build a classroom to accommodate a certain kind of learning; the classroom in turn molds the kind of learning that we do or even that we can imagine. Modifications in the room might lead to innovations in teaching practices. Buildings, in this sense, certainly have social agency. Indeed, the acts of buildings may be compared with the acts of their human counterparts insofar as those acts are similarly overdetermined— that is, fraught with more conditions in their social circumstances or individual histories than are necessary to account for the ways in which they work.

Hyperobjects:
Philosophy and Ecology after the End of the World
by Timothy Morton
Kindle Locations 547-568

While hyperobjects are near, they are also very uncanny. Some days, global warming fails to heat me up. It is strangely cool or violently stormy. My intimate sensation of prickling heat at the back of my neck is only a distorted print of the hot hand of global warming. I do not feel “at home” in the biosphere. Yet it surrounds me and penetrates me, like the Force in Star Wars. The more I know about global warming, the more I realize how pervasive it is. The more I discover about evolution, the more I realize how my entire physical being is caught in its meshwork. Immediate, intimate symptoms of hyperobjects are vivid and often painful, yet they carry with them a trace of unreality. I am not sure where I am anymore. I am at home in feeling not at home. Hyperobjects, not some hobbit hole, not some national myth of the homeland, have finally forced me to see the truth in Heidegger.

The more I struggle to understand hyperobjects, the more I discover that I am stuck to them. They are all over me. They are me. I feel like Neo in The Matrix, lifting to his face in horrified wonder his hand coated in the mirrorlike substance into which the doorknob has dissolved, as his virtual body begins to disintegrate. “Objects in mirror are closer than they appear.” The mirror itself has become part of my flesh. Or rather, I have become part of the mirror’s flesh, reflecting hyperobjects everywhere. I can see data on the mercury and other toxins in my blood. At Taipei Airport, a few weeks after the Fukushima disaster, I am scanned for radiation since I have just transited in Tokyo. Every attempt to pull myself free by some act of cognition renders me more hopelessly stuck to hyperobjects. Why?

They are already here. I come across them later, I find myself poisoned with them, I find my hair falling out. Like an evil character in a David Lynch production, or a ghost in M. Night Shyamalan’s The Sixth Sense, hyperobjects haunt my social and psychic space with an always-already. My normal sense of time as a container, or a racetrack, or a street, prevents me from noticing this always-already, from which time oozes and flows, as I shall discuss in a later section (“Temporal Undulation”). What the demonic Twin Peaks character Bob reveals, for our purposes, is something about hyperobjects, perhaps about objects in general. 2 Hyperobjects are agents. 3 They are indeed more than a little demonic, in the sense that they appear to straddle worlds and times, like fiber optic cables or electromagnetic fields. And they are demonic in that through them causalities flow like electricity.

We haven’t thought this way about things since the days of Plato.

Views of the Self

A Rant: The Brief Discussion of the Birth of An Error
by Skepoet2

Collectivism vs. Individualism is the primary and fundamental misreading of human self-formation out of the Enlightenment and picking sides in that dumb-ass binary has been the primary driver of bad politics left and right for the last 250 years.

The Culture Wars of the Late Renaissance: Skeptics, Libertines, and Opera
by Edward Muir
Kindle Locations 80-95

One of the most disturbing sources of late-Renaissance anxiety was the collapse of the traditional hierarchic notion of the human self. Ancient and medieval thought depicted reason as governing the lower faculties of the will, the passions, and the body. Renaissance thought did not so much promote “individualism” as it cut away the intellectual props that presented humanity as the embodiment of a single divine idea, thereby forcing a desperate search for identity in many. John Martin has argued that during the Renaissance, individuals formed their sense of selfhood through a difficult negotiation between inner promptings and outer social roles. Individuals during the Renaissance looked both inward for emotional sustenance and outward for social assurance, and the friction between the inner and outer selves could sharpen anxieties. 2 The fragmentation of the self seems to have been especially acute in Venice, where the collapse of aristocratic marriage structures led to the formation of what Virginia Cox has called the single self, most clearly manifest in the works of several women writers who argued for the moral and intellectual equality of women with men. As a consequence of the fragmented understanding of the self, such thinkers as Montaigne became obsessed with what was then the new concept of human psychology, a term in fact coined in this period.4 A crucial problem in the new psychology was to define the relation between the body and the soul, in particular to determine whether the soul died with the body or was immortal. With its tradition of Averroist readings of Aristotle, some members of the philosophy faculty at the University of Padua recurrently questioned the Christian doctrine of the immortality of the soul as unsound philosophically. Other hierarchies of the human self came into question.

Once reason was dethroned, the passions were given a higher value, so that the heart could be understood as a greater force than the mind in determining human conduct. When the body itself slipped out of its long-despised position, the sexual drives of the lower body were liberated and thinkers were allowed to consider sex, independent of its role in reproduction, a worthy manifestation of nature. The Paduan philosopher Cesare Cremonini’s personal motto, “Intus ut libet, foris ut moris est,” does not quite translate to “If it feels good, do it;” but it comes very close. The collapse of the hierarchies of human psychology even altered the understanding of the human senses. The sense of sight lost its primacy as the superior faculty, the source of “enlightenment”; the Venetian theorists of opera gave that place in the hierarchy to the sense of hearing, the faculty that most directly channeled sensory impressions to the heart and passions.

Amusing Ourselves to Death: Public Discourse in the Age of Show Business
by Neil Postman

That does not say much unless one connects it to the more important idea that form will determine the nature of content. For those readers who may believe that this idea is too “McLuhanesque” for their taste, I offer Karl Marx from The German Ideology. “Is the Iliad possible,” he asks rhetorically, “when the printing press and even printing machines exist? Is it not inevitable that with the emergence of the press, the singing and the telling and the muse cease; that is, the conditions necessary for epic poetry disappear?”

Meta-Theory
by bcooney

When I read Jaynes’s book for the first time last year I was struck by the opportunities his theory affords for marrying materialism to psychology, linguistics, and philosophy. The idea that mentality is dependent on social relations, that power structures in society are related to the structure of mentality and language, and the idea that we can only understand mentality historically and socially are all ideas that appeal to me as a historical materialist.

Consciousness: Breakdown Or Breakthrough?
by ignosympathnoramus

The “Alpha version” of consciousness involved memory having authority over the man, instead of the man having authority over his memory. Bicameral man could remember some powerful admonishment from his father, but he could not recall it at will. He experienced this recollection as an external event; namely a visitation from either his father or a god. It was a sort of third-person-perspective group-think where communication was not intentional or conscious but, just like our “blush response,” unconscious and betraying of our deepest being. You can see this in the older versions of the Iliad, where, for instance, we do not learn about Achilles’ suicidal impulse by his internal feelings, thoughts, or his speaking, but instead, by the empathic understanding of his friend. Do you have to “think” in order to empathize, or does it just come on its own, in a rush of feeling? Well, that used to be consciousness. Think about it, whether you watch your friend blush or you blush yourself, the experience is remarkably similar, and seems to be nearly third-person in orientation. What you are recognizing in your friend’s blush the Greeks would have recognized as possession by a god, but it is important to notice that you have no more control over it than the Greeks did. They all used the same name for the same god (emotion) and this led to a relatively stable way of viewing human volition, that is, until it came into contact with other cultures with other “gods.” When this happens, you either have war, or you have conversion. That is, unless you can develop an operating system better than Alpha. We have done so, but at the cost of making us all homeless or orphaned. How ironic that in the modern world the biggest problem is that there are entirely too many individuals in the world, and yet their biggest problem is somehow having too few people to give each individual the support and family-type-structure that humans need to feel secure and thrive. 
We simply don’t have a shared themis that would allow each of us to view the other as “another self,” to use Aristotle’s phrase, or if we do, we realize that “another self” means “another broken and lost orphan like me.” It is in the nature of self-consciousness to not trust yourself, to remain skeptical, to resist immediate impulse. You cannot order your Will if you simply trust it and cave to every inclination. However, this paranoia is hardly conducive to social trust or to loving another as if he were “another self,” for that would only amount to him being another system of forces that we have to interpret, organize or buffer ourselves from. How much easier it is to empathize and care about your fellow citizens when they are not individuals, but vehicles for the very same muses, daimons, and gods that animate you! The matter is rather a bit worse than this, though. Each child discovers and secures his “inner self” by the discovery of his ability to lie, which further undermines social trust!

Marx’s theory of human nature
by Wikipedia

Marx’s theory of human nature has an important place in his critique of capitalism, his conception of communism, and his ‘materialist conception of history’. Karl Marx, however, does not refer to “human nature” as such, but to Gattungswesen, which is generally translated as ‘species-being’ or ‘species-essence’. What Marx meant by this is that humans are capable of making or shaping their own nature to some extent. According to a note from the young Marx in the Manuscripts of 1844, the term is derived from Ludwig Feuerbach’s philosophy, in which it refers both to the nature of each human and of humanity as a whole.[1] However, in the sixth Thesis on Feuerbach (1845), Marx criticizes the traditional conception of “human nature” as a “species” which incarnates itself in each individual, in favor of a conception of human nature as formed by the totality of “social relations”. Thus, the whole of human nature is not understood, as in classical idealist philosophy, as permanent and universal: the species-being is always determined in a specific social and historical formation, with some aspects being biological.

The strange case of my personal marxism (archive 2012)
by Skepoet2

It is the production capacity within a community that allows a community to exist, but communities are more than their productive capacities and subjectivities are different from subjects. Therefore, it is best to think of the schema we have given societies in terms of integrated wholes, and societies are produced by their histories both ecological and cultural. The separation of the ecological and the cultural is what Ken Wilber would call “right-hand” and “left-hand” distinctions: or, the empirical experience of what is outside of us but limits us–the subject here being collective–and what is within us that limits us.

THE KOSMOS TRILOGY VOL. II: EXCERPT A
AN INTEGRAL AGE AT THE LEADING EDGE
by Ken Wilber

One of the easiest ways to get a sense of the important ideas that Marx was advancing is to look at more recent research (such as Lenski’s) on the relation of techno-economic modes of production (foraging, horticultural, herding, maritime, agrarian, industrial, informational) to cultural practices such as slavery, bride price, warfare, patrifocality, matrifocality, gender of prevailing deities, and so on. With frightening uniformity, similar techno-economic modes have similar probabilities of those cultural practices (showing just how strongly the particular probability waves are tetra-meshed).

For example, over 90% of societies that have female-only deities are horticultural societies. 97% of herding societies, on the other hand, are strongly patriarchal. 37% of foraging tribes have bride price, but 86% of advanced horticultural do. 58% of known foraging tribes engaged in frequent or intermittent warfare, but an astonishing 100% of simple horticultural did so.

The existence of slavery is perhaps most telling. Around 10% of foraging tribes have slavery, but 83% of advanced horticultural do. The only societal type to completely outlaw slavery were patriarchal industrial societies, 0% of which sanction slavery.

Who’s correct about human nature, the left or the right?
by Ed Rooksby

So what, if anything, is human nature? Marx provides a much richer account. He is often said to have argued that there is no such thing as human nature. This is not true. Though he did think that human behaviour was deeply informed by social environment, this is not to say that human nature does not exist. In fact it is our capacity to adapt and transform in terms of social practices and behaviours that makes us distinctive as a species and in which our specifically human nature is to be located.

For Marx, we are essentially creative and producing beings. It is not just that we produce for our means of survival, it is also that we engage in creative and productive activity over and above what is strictly necessary for survival and find fulfilment in this activity. This activity is inherently social – most of what we produce is produced collectively in some sense or another. In opposition to the individualist basis of liberal thought, then, we are fundamentally social creatures.

Indeed, for Marx, human consciousness and thus our very notion of individual identity is collectively generated. We become consciously aware of ourselves as a discrete entity only through language – and language is inherently inter-subjective; it is a social practice. What we think – including what we think about ourselves – is governed by what we do and what we do is always done socially and collectively. It is for this reason that Marx refers to our “species-being” – what we are can only be understood properly in social terms because what we are is a property and function of the human species as a whole.

Marx, then, has a fairly expansive view of human nature – it is in our nature to be creatively adaptable and for our understanding of what is normal in terms of behaviour to be shaped by the social relations around us. This is not to say that any social system is as preferable as any other. We are best able to flourish in conditions that allow us to express our sociability and creativity.

Marx’s Critique of Religion
by Cris Campbell

Alienated consciousness makes sense only in contrast to un-alienated consciousness. Marx’s conception of the latter, though somewhat vague, derives from his understanding of primitive communism. It is here that Marx’s debt to anthropology is most clear. In foraging or “primitive” societies, people are whole – they are un-alienated because resources are freely available and work directly transforms those resources into useable goods. This directness and immediateness – with no interventions or distortions between the resource, work, and result – makes for creative, fulfilled, and unified people. Society is, as a consequence, tightly bound. There are no class divisions which pit one person or group against another. Because social relations are always reflected back into people’s lives, unified societies make for unified individuals. People are not alienated because they have direct, productive, and creative relationships with resources, work, things, and others. This communalism is, for Marx, most conducive to human happiness and well-being.

This unity is shattered when people begin claiming ownership of resources. Private property introduces division into formerly unified societies and classes develop. When this occurs people are no longer free to appropriate and produce as they please. Creativity and fulfillment is crushed when labor is separated from life and becomes an isolated commodity. Humans who labor for something other than their needs, or for someone else, become alienated from resources, work, things, and others. When these divided social relations are reflected back into people’s lives, the result is discord and disharmony. People, in other words, feel alienated. As economies develop and become more complex, life becomes progressively more specialized and splintered. The alienation becomes so intense that something is required to soothe it; otherwise, life becomes unbearable.

It is at this point (which anthropologists recognize as the Neolithic transition) that religion arises. But religion is not, Marx asserts, merely a soothing palliative – it also masks the economically and socially stratified conditions that cause alienation:

“Precisely as a consequence of man’s loss of spontaneous self-activity, religion arises as a compensatory mechanism for explaining what alienated man cannot explain and for promising him elsewhere what he cannot achieve here. Thus because man does not create himself through his productive labor, he supposes that he is created by a power beyond. Because man lacks power, he attributes power to something beyond himself. Like all forms of man’s self-alienation, religion displaces reality with illusion. The reason is that man, the alienated being, requires an ideology that will simultaneously conceal his situation from him and confer upon it significance. Religion is man’s oblique and doomed effort at humanization, a search for divine meaning in the face of human meaninglessness.”

Related posts from my blog:

Facing Shared Trauma and Seeking Hope

Society: Precarious or Persistent?

Plowing the Furrows of the Mind

Démos, The People

Making Gods, Making Individuals

On Being Strange

To Put the Rat Back in the Rat Park

Rationalizing the Rat Race, Imagining the Rat Park

I’m a Confused Hypocrite

“Identity is the Ur-form of ideology.”
~ Theodor Adorno

I was considering my confused identity.

I typically identify as a liberal, but I always mean that in the broadest sense. First and foremost, I’m psychologically liberal. This means I’m generous in attitude, if not always perfectly so in practice. More specifically, I am or seek to be: open-minded, curious, not ideologically dogmatic, lacking in group loyalty (especially in terms of groupthink, although I’m strong in personal loyalty to those I care about), tolerant of differences, tolerant of cognitive dissonance (tolerant of the differences within myself and hence tolerant of my own confused identity), etc.

I’m accepting of ambiguity even as I’m desirous of clarity. I’m critical of hypocrisy and try to avoid it, but I know that I fail. I’m inconsistent and it seems to me all humans are inconsistent. Inconsistency isn’t problematic as such. Rather, it is the unawareness of one’s inconsistency. I try to lessen my sin of hypocrisy with a dose of humility.

One of my favorite sayings is, “It’s complex.” That is my way of saying that, although I have many opinions based on what I hope is good info and careful thought, in the end I just don’t feel all that certain about lots of things. I could be wrong, to put it lightly. No doubt, there is more that I don’t know than I do know.

On a more personal level, I’m both an idealist and a philosophical pessimist. I’m a radical skeptic (zetetic), which translates to my being skeptical of even skepticism. I’m an equal opportunity agnostic. I question and doubt everything, and that can be a quite demoralizing attitude at times when coupled with my streak of depression. I’m agnostic about belief and unbelief. I sometimes identify as an agnostic gnostic, just for shits and giggles.

All in all, my liberalism is one of the most central aspects to my identity. It is at the heart of my confusion. In ideological terms, I have many tendencies and concerns. I’m equal parts: progressive, communitarian, civil libertarian, social democrat, “rat park” municipal socialist, and on and on.

I’m socially globalist/internationalist (a humanitarian or maybe better yet a Gaian), but I simultaneously lean toward minarchism in my politics and anarchism in my economics (e.g., anarcho-syndicalism). Yet I’m not against big government or big anything on principle. It’s more of a practical emphasis, of wanting to bring the world back down to the human level, to the level of lived experience and living reality (non-human included), of personal relationships and communities, of a sense of place and a sense of home.

On the other hand, I can’t say I’m against such things as bureaucracy or technocracy, per se. Nor am I necessarily opposed to capitalism and big biz. I really don’t care about such things in and of themselves. What I do care about is democracy and hence freedom, which to me are always most fundamentally personal and interpersonal, not mere abstractions or theories.

If any particular ideological system can be made to align with and support democracy and freedom, then more power to it. I don’t feel I’m in a position to predict what new forms and directions society might take, but I wouldn’t mind a bureaucracy and technocracy of the variety portrayed in Star Trek: The Next Generation. As for capitalism and big biz, I just want a genuine free market which means a democratized and socially responsible economics, whatever that may be (obviously, present capitalism and big biz fail that standard to an extreme degree).

My biggest concern is about externalized costs, the free rider problem, and the precautionary principle (all of which I consider to be most fundamentally conservative-minded and so, at least superficially, opposed to my more typical liberal predisposition). More than anything, I’d like to live in a society that (1) is wise and (2) is not self-destructive. Our present society is highly dysfunctional and I feel that I have internalized much of that dysfunction. I’m a confused person because that seems like the inevitable fate right now of any person who is self-aware and of a concerned attitude.

I want to live in a world that is worth caring about. I want to live in a society that considers me worth caring about.

Because I’m a liberal, I’d like to believe such a world and society is possible. The disconnect between what I’d like to believe and present reality is more than a bit disconcerting. It’s downright irritating and frustrating. It would be easier to be righteous than confused, but I’m never able to maintain an attitude of righteousness for very long. Righteousness is more tiresome than even depression.

My inner child just wants the bad people to stop doing bad things. But my cynical adult self points out that I’m part of this problematic society. How can I be anything other than confused and hypocritical? How can I not fail my own idealistic standards and aspirations? Still, apathetically accepting the status quo of soul-crushing misery and injustice would be a far worse fate.

The Racial Line and Racial Identity

Beyond Biracial: When Blackness Is a Small, Nearly Invisible Fraction
by Jenée Desmond-Harris
from The Root

“Their racial mixture can feel too fragmented for old, no-longer-politically-correct terms like “mulatto” and even the irreverent hybrids like “blewish” and “blexican” that the “biracial boom” crowd created to rename themselves. Making things even more complicated for 2014’s cohort of people with just one black-identified grandparent is the dearth of cultural references providing a blueprint for how they might identify. As Ian Stewart, the 31-year-old son of a biracial father and white mother, puts it, “There’s a lot out there for the half-black, but a lot less out there for the quarter.”

[ . . . ]

““What is different today than in, say, 1945 is the way in which we have a much more fluid understanding of race,” says Joseph. She’s referring to our ever loosening attachment to the strict red, yellow, brown, black and white racial categories conceived of by 18th-century German scientist Johann Friedrich Blumenbach, whose now debunked idea of natural divisions provided the basis for those who would push for biology-based racism.

[ . . . ]

“An extreme example of how muddy this can get: A self-confessed white supremacist attempting to launch an all-white community in North Dakota made headlines when it was revealed that he had 14 percent sub-Saharan African ancestry. (DNA tests can reveal the geographical origins of ancestors, a piece of information that is, contrary to popular belief, not the same as race.)

[ . . . ]

“When it comes to racial identity in America, “The mix itself is one piece, but the appearance thing has always been big,” says Miletsky. In fact, scientists say that people register race in about a tenth of a second, even before they discern gender.

How Many ‘White’ People Are Passing?
by Henry Louis Gates
from The Root

Here’s how Scott Hadly reported Kasia Bryc’s findings on the 23andme website on March 4, 2014: “Bryc found that about 4 percent of whites have at least 1 percent or more of African ancestry, known as “’hidden African ancestry.’”

“Although it is a relatively small percentage,” Hadly continues, “the percentage indicates that an individual with at least 1 percent African ancestry had an African ancestor within the last six generations, or in the last 200 years [meaning since the time of American slavery]. This data also suggests that individuals with mixed parentage at some point were absorbed into the white population,” which is a very polite way of saying that they “passed.”

[ . . . ]

How many ostensibly “white” Americans walking around today would be classified as “black” under the one-drop rule? Judging by the last U.S. Census (pdf), 7,872,702. To put that in context, that number is equal to roughly 20 percent, or a fifth, of the total number of people identified as African American (pdf) in the same census count!

[ . . . ]

“Southern states with the highest African American populations tended to have the highest percentages of hidden African ancestry,” Hadly writes of Bryc’s findings. “In South Carolina at least 13 percent of self-identified whites have 1 percent or more African ancestry, while in Louisiana the number is a little more than 12 percent. In Georgia and Alabama the number is about 9 percent. The differences perhaps point to different social and cultural histories within the south.”

If we apply those percentages to the last federal census (pdf), that means 487,253 “white” people in Georgia, 385,156 “white” people in South Carolina, 328,186 in Louisiana and 288,396 in Alabama are actually “black,” according to the one-drop rule. And that is a lot of the white people in these states! (It’s also worth noting that the percentage of “hidden blacks” who self-identify as white in South Carolina—13 percent—is the same as the percentage of people nationwide who self-identified as black in the 2010 U.S. Census.)

Nothing Is Inevitable

What is the relationship between who we used to be and who we became, who we might have been and who we might yet become? What defines who we are as a whole? Is there an essence to our identity, a center to our being? If so, is our ‘character’ destiny, does that center hold? After it is all over, who ultimately judges a life and what it means?

I’ve often contemplated these questions. It seems strange how I ended up where I now find myself, a path that I followed because other ways were blocked or hidden, difficult or treacherous. I really have no clue why I am the way I am, this self that is built on all that came before.

From moment to moment, I’ve acted according to what has made sense or seemed necessary in each given situation. This isn’t to imply there weren’t choices made, but it can feel as if life only offers forced choices. Certainly, I didn’t choose the larger context into which I was born, all the apparently random and incomprehensible variables, the typically unseen constraints upon every thought and action. Nor did I even choose the person I am who does the choosing.

I simply am who I am.

It’s hard for me to imagine myself as being different, but it isn’t entirely beyond my capacity. I sense, even if only in a haze, other possibilities and directions. I try to grasp that sense of unlived lives, potentials that on some level remain in the lived present. It is important not to forget all the choices made and that are continually made. Life is a set of endless choices, even if we don’t like the choices perceived or understand their implications. But choices once made tend to lose their sense of having been chosen.

We look at our personal and collective pasts with bias, most especially the bias of knowing what resulted. The telling of history, our own and that of others, has the air of inevitability. We read the ending into the beginning.

Historians don’t usually talk about what didn’t happen and might have happened, the flukes of circumstance that pushed events one direction rather than another. The same is true for all of us in making sense of the past. We comfort ourselves with the narrative of history as if it offers us an answer for why events happened that way, why people did what they did, why success or failure followed. We judge the individuals and societies of the past with 20/20 hindsight. But as the narrators of their story, we aren’t always reliable.

Before I go further about history, let me return to the present. I was involved in a debate that became slightly heated. The fundamental difference of opinion had to do with how society and human nature are defined and perceived, the specific topic having been victimization.

I mentioned the author Derrick Jensen as he offers the best commentary on victimization that I’ve ever come across. But one person responded that, “Lastly I just can’t have a serious conversation about Derrick Jensen. I’m sorry.” Though they never explained their dismissive comment, I suspect I know what they meant.

The thing about Jensen is that there is a distinction between his earliest writings and his more recent writings. He began as an ordinary guy asking questions and looking at the world with a sense of wonder, considering the panorama of data with a voraciousness that is rare. Then he found an answer and it was all downhill from there. The answer he found was a cynical view of society, in which he hoped for the collapse of civilization. The answer was anarcho-primitivism.

Jensen’s answer is less than satisfying. It is sad he went down that road. He wasn’t always like that. In his early writings, there is a profound sense of beauty and love of humanity, all of humanity. Yes, there was more than a hint of darkness in his first couple of books, but it was only a shadow of doubt, a potential that had not yet fully manifested, that had not yet become untethered from hope. His younger self didn’t dream of destruction.

I knew Jensen’s early writings years before he began his cynical phase. Nothing he could write would negate the worthiness of what he wrote before. But if all you knew were his later writings, it is perfectly understandable that your criticisms might be harsh.

I had the opposite experience in my discovery of George Orwell.

I mostly knew him as a name, having never read his works for myself. I had seen the movie adaptation of Nineteen Eighty-Four and had come across quotes from his writings in various places. But I knew nothing about Orwell as a man and a writer. He was just another famous dead white guy who said some interesting stuff.

Recently, I decided to lessen my ignorance and read something by him. I randomly chose Homage to Catalonia which had an introduction by Lionel Trilling. At the same time, I did web searches about Orwell, about that particular book, and about Trilling’s intro. This led me to info about Orwell having colluded with the British government when he became an informant. He informed on people in his own social circle and, having been a critic of the British Empire, he had to have known the consequences could have destroyed lives.

That marred any respect I might have had for Orwell, maybe permanently.

So, why could I be so critical of Orwell while being so forgiving of Jensen? Well, for one, Jensen has never colluded with an oppressive government against those who voiced dissent. It might also be the simple fact that I have no personal connection to Orwell’s writings. Jensen’s writings, on the other hand, helped shape my mind at a still tender age when I was looking for answers. I have a sense of knowing Jensen’s experience and worldview, and hence a sense of knowing why he turned to cynicism. But maybe I should also be more forgiving of Orwell and more accepting of his all too human weaknesses, or at least more willing to separate his early writings from his later actions.

My basic sense is that nothing in life is inevitable. As such, it wasn’t inevitable that the lives of Jensen and Orwell happened as they did. Almost anything could have intervened at any moment along the way and redirected their lives, forced different choices upon them, allowed them to see new possibilities. And, in the case of Jensen, that is still possible, for he remains alive.

Now, for the historical aspect, let me continue on the level of individuals and then shift to a broader perspective.

I’ll use my two favorite examples: Edmund Burke and Thomas Paine. They are long dead and so that air of inevitability hangs heavy over their respective histories. Whatever they might have become, it would be hard for either to surprise us at this point, unless previously unknown documents were to be found.

Both Burke and Paine began as progressive reformers. There was nothing in their childhoods or even their young adulthoods that would have portended the pathways of their lives, that would have predicted Burke becoming what some have deemed a reactionary conservative, an anti-revolutionary defender of the status quo, and that would have predicted Paine becoming a revolutionary, a radical rabble-rouser, and one of the greatest threats to tyranny. Before all of that, they were friends and allies. They wrote letters to one another. Paine even visited Burke at his home. If events hadn’t intervened, they both might have remained partners in seeking progressive reform in Britain and her colonies.

What drove them apart began with the American Revolution and came to its head in the French Revolution. They came to opposite views of the historical forces that were playing out before them. Burke responded in fear and Paine in hope. But these responses were dependent on so many circumstantial factors. Change any single thing and a chain of events would have shifted into a new pattern, a new context.

Like Paine, Burke at one point considered going to America, and yet unlike Paine he never got around to it. His early life didn’t hit any major bumps as Paine’s did. There is no evidence that Paine had seriously considered going to America until all of his other options had been denied. There is nothing inevitable at all about these two lives. It was a chance meeting with Benjamin Franklin that sent Paine toward his seeming destiny. And it was the lack of such a similar chance meeting that kept Burke in Britain.

Along these lines, it was a complex web of events and factors that led the two revolutions down divergent paths. The French Revolution wasn’t fated to transform into Robespierre’s Reign of Terror and Napoleon’s empire. Likewise, the American Revolution wasn’t fated to end in the further institutionalization of slavery that would lead to a bloody civil war, wasn’t fated to lead to imperial expansion, Indian removal and genocide.

There is no shared character that predetermines a people’s fate. The potential of individual members of a nation is magnified by all of their potential combined, the choices and actions of each affecting those of others, millions of paths intertwining like a flock of birds shifting along unseen currents in the wind. History is a thing of luck and chance, infinite possibilities bounded by necessity and circumstance, an interplay of forces that can’t be controlled or predicted. People act never knowing for sure what may or may not come of it.

People and nations are filled with near infinite potential. None of us knows what might have been or what yet might become. Nothing is inevitable.

The path we seem to be on may change in an instant, may change in ways we can’t even imagine. But, no matter what changes, it will never alter all that came before. The many facets of our lives, individual and shared, offer diverse windows onto the world we see. Even as the past doesn’t change, our relationship to the past does and along with it our understanding, along with it the memories we recollect and the stories we tell. And from understanding, one hopes, comes empathy and compassion.

We are who we are, all the many selves we hold within, all the many identities we have taken. The past and the future, the potential and the manifest meet in the world in which we live. No story is completely told while the actors remain.