“…consciousness is itself the result of learning.”

As above, so below
by Axel Cleeremans

A central aspect of the entire hierarchical predictive coding approach, though this is not readily apparent in the corresponding literature, is the emphasis it puts on learning mechanisms. In other works (Cleeremans, 2008, 2011), I have defended the idea that consciousness is itself the result of learning. From this perspective, agents become conscious in virtue of learning to redescribe their own activity to themselves. Taking seriously the proposal that consciousness is inherently dynamical opens up the mesmerizing possibility that conscious awareness is itself a product of plasticity-driven dynamics. In other words, from this perspective, we learn to be conscious. To dispel possible misunderstandings of this proposal right away, I am not suggesting that consciousness is something that one learns like one would learn about the Hundred Years War, that is, as an academic endeavour, but rather that consciousness is the result (vs. the starting point) of continuous and extended interaction with the world, with ourselves, and with others. The brain, from this perspective, continuously (and unconsciously) learns to anticipate the consequences of its own activity on itself, on the environment, and on other brains, and it is in the practical knowledge that accrues through such interactions that conscious experience is rooted. This perspective, in short, endorses the enactive approach introduced by O’Regan and Noë (2001), but extends it both inwards (the brain learning about itself) and further outwards (the brain learning about other brains), so connecting with the central ideas put forward by the predictive coding approach to cognition. In this light, the conscious mind is the brain’s (implicit, enacted) theory about itself, expressed in a language that other minds can understand.

The theory rests on several assumptions and is articulated over three core ideas. A first assumption is that information processing as carried out by neurons is intrinsically unconscious. There is nothing in the activity of individual neurons that makes it so that their activity should produce conscious experience. Important consequences of this assumption are (1) that conscious and unconscious processing must be rooted in the same set of representational systems and neural processes, and (2) that tasks in general will always involve both conscious and unconscious influences, for awareness cannot be “turned off” in normal participants.

A second assumption is that information processing as carried out by the brain is graded and cascades (McClelland, 1979) in a continuous flow (Eriksen & Schultz, 1979) over the multiple levels of a heterarchy (Fuster, 2008) extending from posterior to anterior cortex as evidence accumulates during an information processing episode. An implication of this assumption is that consciousness takes time.

The third assumption is that plasticity is mandatory: The brain learns all the time, whether we intend to or not. Each experience leaves a trace in the brain (Kreiman, Fried, & Koch, 2002).

The social roots of consciousness
by Axel Cleeremans

How does this ability to represent the mental states of other agents get going? While there is considerable debate about this issue, it is probably fair to say that one crucial mechanism involves learning about the consequences of the actions that one directs towards other agents. In this respect, interactions with the natural world are fundamentally different from interactions with other agents, precisely because other agents are endowed with unobservable internal states. If I let a spoon drop on a hard floor, the sound that results will always be the same, within parameters that vary only over a limited range. The consequences of my action are thus more or less entirely predictable. But if I smile at someone, the consequences that may result are many. Perhaps the person will smile back at me, but it may also be the case that the person will ignore me or that she will display puzzlement, or even that she will be angry at me. It all depends on the context and on the unobservable mental states that the person currently entertains. Of course, there is a lot I can learn about the space of possible responses based on my knowledge of the person, my history of prior interactions with her, and on the context in which my interactions take place. But the point is simply to say that in order to successfully predict the consequences of the actions that I direct towards other agents, I have to build a model of how these agents work. And this is complex because, unlike what is the case for interactions with the natural world, it is an inverse problem: The same action may result in many different reactions, and those different reactions can themselves be caused by many different internal states.

Based on these observations, one provocative claim about the relationships between self-awareness and one’s ability to represent the mental states of other agents (“theory of mind”, as it is called) is thus that theory of mind comes first, as the philosopher Peter Carruthers has defended. That is, it is in virtue of my learning to correctly anticipate the consequences of the actions that I direct towards other agents that I end up developing models of the internal states of such agents, and it is in virtue of the existence of such models that I become able to gain insight about myself (more specifically: about my self). Thus, by this view, self-awareness, and perhaps subjective experience itself, is a consequence of theory of mind as it develops over extended periods of social intercourse.

Vestiges of an Earlier Mentality: Different Psychologies

“The Self as Interiorized Social Relations Applying a Jaynesian Approach to Problems of Agency and Volition”
By Brian J. McVeigh

If what Jaynes has proposed about bicamerality is correct, we should expect to find remnants of this extinct mentality. In any case, an examination of the ethnopsychologies of other societies should at least challenge our assumptions. What kinds of metaphors do they employ to discuss the self? Where is agency localized? To what extent do they even “psychologize” the individual, positing an “interior space” within the person? If agency is a socio-historical construction (rather than a bio-evolutionary product), we should expect some cultural variability in how it is conceived. At the same time, we should also expect certain parameters within which different theories of agency are built.

Ethnographies are filled with descriptions of very different psychologies. For example, about the Maori, Jean Smith writes that

it would appear that generally it was not the “self” which encompassed the experience, but experience which encompassed the “self” … Because the “self” was not in control of experience, a man’s experience was not felt to be integral to him; it happened in him but was not of him. A Maori individual was not so much the experiencer of his experience as the observer of it. 22

Furthermore, “bodily organs were endowed with independent volition.” 23 Renato Rosaldo states that the Ilongots of the Philippines rarely concern themselves with what we refer to as an “inner self” and see no major differences between public presentation and private feeling. 24

Perhaps the most intriguing picture of just how radically different mental concepts can be is found in anthropologist Maurice Leenhardt’s book Do Kamo, about the Canaque of New Caledonia, who are “unaware” of their own existence: the “psychic or psychological aspect of man’s actions are events in nature. The Canaque sees them as outside of himself, as externalized. He handles his existence similarly: he places it in an object — a yam, for instance — and through the yam he gains some knowledge of his existence, by identifying himself with it.” 25

Speaking of the Dinka, anthropologist Godfrey Lienhardt writes that “the man is the object acted upon,” and “we often find a reversal of European expressions which assume the human self, or mind, as subject in relation to what happens to it.” 26 Concerning the mind itself,

The Dinka have no conception which at all closely corresponds to our popular modern conception of the “mind,” as mediating and, as it were, storing up the experiences of the self. There is for them no such interior entity to appear, on reflection, to stand between the experiencing self at any given moment and what is or has been an exterior influence upon the self. So it seems that what we should call in some cases the memories of experiences, and regard therefore as in some way intrinsic and interior to the remembering person and modified in their effect upon him by that interiority, appear to the Dinka as exteriority acting upon him, as were the sources from which they derived. 27

The above-mentioned ethnographic examples may be interpreted as merely colorful descriptions, as exotic and poetic folk psychologies. Or, we may take a more literal view, and entertain the idea that these ethnopsychological accounts are vestiges of a distant past when individuals possessed radically different mentalities. For example, if it is possible to be a person lacking interiority in which a self moves about making conscious decisions, then we must at least entertain the idea that entire civilizations existed whose members had a radically different mentality. The notion of a “person without a self” is admittedly controversial and open to misinterpretation. Here allow me to stress that I am not suggesting that in today’s world there are groups of people whose mentality is distinct from our own. However, I am suggesting that remnants of an earlier mentality are evident in extant ethnopsychologies, including our own. 28

* * *

Text from:

Reflections on the Dawn of Consciousness:
Julian Jaynes’s Bicameral Mind Theory Revisited
Edited by Marcel Kuijsten
Chapter 7, Kindle Locations 3604-3636

See also:

Survival and Persistence of Bicameralism
Piraha and Bicameralism

“Lack of the historical sense is the traditional defect in all philosophers.”

Human, All Too Human: A Book for Free Spirits
by Friedrich Wilhelm Nietzsche

The Traditional Error of Philosophers.—All philosophers make the common mistake of taking contemporary man as their starting point and of trying, through an analysis of him, to reach a conclusion. “Man” involuntarily presents himself to them as an aeterna veritas, as a passive element in every hurly-burly, as a fixed standard of things. Yet everything uttered by the philosopher on the subject of man is, in the last resort, nothing more than a piece of testimony concerning man during a very limited period of time. Lack of the historical sense is the traditional defect in all philosophers. Many innocently take man in his most childish state as fashioned through the influence of certain religious and even of certain political developments, as the permanent form under which man must be viewed. They will not learn that man has evolved, that the intellectual faculty itself is an evolution, whereas some philosophers make the whole cosmos out of this intellectual faculty. But everything essential in human evolution took place aeons ago, long before the four thousand years or so of which we know anything: during these man may not have changed very much. However, the philosopher ascribes “instinct” to contemporary man and assumes that this is one of the unalterable facts regarding man himself, and hence affords a clue to the understanding of the universe in general. The whole teleology is so planned that man during the last four thousand years shall be spoken of as a being existing from all eternity, and with reference to whom everything in the cosmos from its very inception is naturally ordered. Yet everything evolved: there are no eternal facts as there are no absolute truths. Accordingly, historical philosophising is henceforth indispensable, and with it honesty of judgment.

What Locke Lacked
by Louise Mabille

Locke is indeed a Colossus of modernity, but one whose twin projects of providing a concept of human understanding and political foundation undermine each other. The specificity of the experience of perception alone undermines the universality and uniformity necessary to create the subject required for a justifiable liberalism. Since mere physical perspective can generate so much difference, it is only to be expected that political differences would be even more glaring. However, no political order would ever come to pass without obliterating essential differences. The birth of liberalism was as violent as the Empire that would later be justified in its name, even if its political traces are not so obvious. To interpret is to see in a particular way, at the expense of all other possibilities of interpretation. Perspectives that do not fit are simply ignored, or as that other great resurrectionist of modernity, Freud, would concur, simply driven underground. We ourselves are the source of this interpretative injustice, or more correctly, our need for a world in which it is possible to live, is. To a certain extent, then, man is the measure of the world, but only his world. Man is thus a contingent measure and our measurements do not refer to an original, underlying reality. What we call reality is the result not only of our limited perspectives upon the world, but the interplay of those perspectives themselves. The liberal subject is thus a result of, and not a foundation for, the experience of reality. The subject is identified as origin of meaning only through a process of differentiation and reduction, a course through which the will is designated as a psychological property.

Locke takes the existence of the subject of free will – free to exercise political choice such as rising against a tyrant, choosing representatives, or deciding upon political direction – simply for granted. Furthermore, he seems to think that everyone should agree as to what the rules are according to which these events should happen. For him, the liberal subject underlying these choices is clearly fundamental and universal.

Locke’s philosophy of individualism posits the existence of a discrete and isolated individual, with private interests and rights, independent of his linguistic or socio-historical context. C. B. Macpherson identifies a distinctly possessive quality to Locke’s individualist ethic, notably in the way in which the individual is conceived as proprietor of his own personhood, possessing capacities such as self-reflection and free will. Freedom becomes associated with possession, which the Greeks would associate with slavery, and society conceived in terms of a collection of free and equal individuals who are related to each other through their means of achieving material success – which Nietzsche, too, would associate with slave morality. […]

There is a central tenet to John Locke’s thinking that, as conventional as it has become, remains a strange strategy. Like Thomas Hobbes, he justifies modern society by contrasting it with an original state of nature. For Hobbes, as we have seen, the state of nature is but a hypothesis, a conceptual tool in order to elucidate a point. For Locke, however, the state of nature is a very real historical event, although not a condition of a state of war. Man was social by nature, rational and free. Locke drew this inspiration from Richard Hooker’s Laws of Ecclesiastical Polity, notably from his idea that church government should be based upon human nature, and not the Bible, which, according to Hooker, told us nothing about human nature. The social contract is a means to escape from nature, friendlier though it be on the Lockean account. For Nietzsche, however, we have never made the escape: we are still holus-bolus in it: ‘being conscious is in no decisive sense the opposite of the instinctive – most of the philosopher’s conscious thinking is secretly directed and compelled into definite channels by his instincts. Behind all logic too, and its apparent autonomy there stand evaluations’ (BGE, 3). Locke makes a singular mistake in thinking the state of nature a distant event. In fact, Nietzsche tells us, we have never left it. We now only wield more sophisticated weapons, such as the guilty conscience […]

Truth originates when humans forget that they are ‘artistically creating subjects’ or products of law or stasis and begin to attach ‘invincible faith’ to their perceptions, thereby creating truth itself. For Nietzsche, the key to understanding the ethic of the concept, the ethic of representation, is conviction […]

Few convictions have proven to be as strong as the conviction of the existence of a fundamental subjectivity. For Nietzsche, it is an illusion, a bundle of drives loosely collected under the name of ‘subject’ —indeed, it is nothing but these drives, willing, and actions in themselves—and it cannot appear as anything else except through the seduction of language (and the fundamental errors of reason petrified in it), which understands and misunderstands all action as conditioned by something which causes actions, by a ‘Subject’ (GM I 13). Subjectivity is a form of linguistic reductionism, and when using language, ‘[w]e enter a realm of crude fetishism when we summon before consciousness the basic presuppositions of the metaphysics of language — in plain talk, the presuppositions of reason. Everywhere reason sees a doer and doing; it believes in will as the cause; it believes in the ego, in the ego as being, in the ego as substance, and it projects this faith in the ego-substance upon all things — only thereby does it first create the concept of “thing”’ (TI, ‘Reason in Philosophy’ 5). As Nietzsche also states in WP 484, the habit of adding a doer to a deed is a Cartesian leftover that begs more questions than it solves. It is indeed nothing more than an inference according to habit: ‘There is activity, every activity requires an agent, consequently –’ (BGE, 17). Locke himself found the continuous existence of the self problematic, but did not go as far as Hume’s dissolution of the self into a number of ‘bundles’. After all, even if identity shifts occurred behind the scenes, he required a subject with enough unity to be able to enter into the Social Contract. This subject had to be something more than merely an ‘eternal grammatical blunder’ (D, 120), and willing had to be understood as something simple. For Nietzsche, it is ‘above all complicated, something that is a unit only as a word, a word in which the popular prejudice lurks, which has defeated the always inadequate caution of philosophers’ (BGE, 19).

Nietzsche’s critique of past philosophers
by Michael Lacewing

Nietzsche is questioning the very foundations of philosophy. To accept his claims means being a new kind of philosopher, one whose ‘taste and inclination’, whose values, are quite different. Throughout his philosophy, Nietzsche is concerned with origins, both psychological and historical. Much of philosophy is usually thought of as an a priori investigation. But if Nietzsche can show, as he thinks he can, that philosophical theories and arguments have a specific historical basis, then they are not, in fact, a priori. What is known a priori should not change from one historical era to the next, nor should it depend on someone’s psychology. Plato’s aim, the aim that defines much of philosophy, is to be able to give complete definitions of ideas – ‘what is justice?’, ‘what is knowledge?’. For Plato, we understand an idea when we have direct knowledge of the Form, which is unchanging and has no history. If our ideas have a history, then the philosophical project of trying to give definitions of our concepts, rather than histories, is radically mistaken. For example, in §186, Nietzsche argues that philosophers have consulted their ‘intuitions’ to try to justify this or that moral principle. But they have only been aware of their own morality, of which their ‘justifications’ are in fact only expressions. Morality and moral intuitions have a history, and are not a priori. There is no one definition of justice or good, and the ‘intuitions’ that we use to defend this or that theory are themselves as historical, as contentious as the theories we give – so they offer no real support. The usual ways philosophers discuss morality misunderstand morality from the very outset. The real issues of understanding morality only emerge when we look at the relation between this particular morality and that. There is no world of unchanging ideas, no truths beyond the truths of the world we experience, nothing that stands outside or beyond nature and history.

GENEALOGY AND PHILOSOPHY

Nietzsche develops a new way of philosophizing, which he calls a ‘morphology and evolutionary theory’ (§23), and later calls ‘genealogy’. (‘Morphology’ means the study of the forms something, e.g. morality, can take; ‘genealogy’ means the historical line of descent traced from an ancestor.) He aims to locate the historical origin of philosophical and religious ideas and show how they have changed over time to the present day. His investigation brings together history, psychology, the interpretation of concepts, and a keen sense of what it is like to live with particular ideas and values. In order to best understand which of our ideas and values are particular to us, not a priori or universal, we need to look at real alternatives. In order to understand these alternatives, we need to understand the psychology of the people who lived with them. And so Nietzsche argues that traditional ways of doing philosophy fail – our intuitions are not a reliable guide to the ‘truth’, to the ‘real’ nature of this or that idea or value. And not just our intuitions, but the arguments, and style of arguing, that philosophers have used are unreliable. Philosophy needs to become, or be informed by, genealogy. A lack of any historical sense, says Nietzsche, is the ‘hereditary defect’ of all philosophers.

MOTIVATIONAL ANALYSIS

Having long kept a strict eye on the philosophers, and having looked between their lines, I say to myself… most of a philosopher’s conscious thinking is secretly guided and channelled into particular tracks by his instincts. Behind all logic, too, and its apparent tyranny of movement there are value judgements, or to speak more clearly, physiological demands for the preservation of a particular kind of life. (§3)

A person’s theoretical beliefs are best explained, Nietzsche thinks, by evaluative beliefs, particular interpretations of certain values, e.g. that goodness is this and the opposite of badness. These values are best explained as ‘physiological demands for the preservation of a particular kind of life’. Nietzsche holds that each person has a particular psychophysical constitution, formed by both heredity and culture. […] Different values, and different interpretations of these values, support different ways of life, and so people are instinctively drawn to particular values and ways of understanding them. On the basis of these interpretations of values, people come to hold particular philosophical views. §2 has given us an illustration of this: philosophers come to hold metaphysical beliefs about a transcendent world, the ‘true’ and ‘good’ world, because they cannot believe that truth and goodness could originate in the world of normal experience, which is full of illusion, error, and selfishness. Therefore, there ‘must’ be a pure, spiritual world and a spiritual part of human beings, which is the origin of truth and goodness.

PHILOSOPHY AND VALUES

But ‘must’ there be a transcendent world? Or is this just what the philosopher wants to be true? Every great philosophy, claims Nietzsche, is ‘the personal confession of its author’ (§6). The moral aims of a philosophy are the ‘seed’ from which the whole theory grows. Philosophers pretend that their opinions have been reached by ‘cold, pure, divinely unhampered dialectic’ when in fact they are seeking reasons to support their pre-existing commitment to ‘a rarefied and abstract version of their heart’s desire’ (§5), viz. that there is a transcendent world, and that good and bad, true and false are opposites. Consider: many philosophical systems are of doubtful coherence, e.g. how could there be Forms, and if there were, how could we know about them? Or again, in §11, Nietzsche asks ‘how are synthetic a priori judgments possible?’. The term ‘synthetic a priori’ was invented by Kant. According to Nietzsche, Kant says that such judgments are possible because we have a ‘faculty’ that makes them possible. What kind of answer is this? Furthermore, no philosopher has ever been proved right (§25). Given the great difficulty of believing either in a transcendent world or in the human cognitive abilities necessary to know about it, we should look elsewhere for an explanation of why someone would hold those beliefs. We can find an answer in their values. There is an interesting structural similarity between Nietzsche’s argument and Hume’s. Both argue that there is no rational explanation of many of our beliefs, and so they try to find the source of these beliefs outside or beyond reason. Hume appeals to imagination and the principle of ‘Custom’. Nietzsche appeals instead to motivation and ‘the bewitchment of language’ (see below). So Nietzsche argues that philosophy is not driven by a pure ‘will to truth’ (§1), to discover the truth whatever it may be. Instead, a philosophy interprets the world in terms of the philosopher’s values.
For example, the Stoics argued that we should live ‘according to nature’ (§9). But they interpret nature by their own values, as an embodiment of rationality. They do not see the senselessness, the purposelessness, the indifference of nature to our lives […]

THE BEWITCHMENT OF LANGUAGE

We said above that Nietzsche criticizes past philosophers on two grounds. We have looked at the role of motivation; the second ground is the seduction of grammar. Nietzsche is concerned with the subject-predicate structure of language, and with it the notion of a ‘substance’ (picked out by the grammatical ‘subject’) to which we attribute ‘properties’ (identified by the predicate). This structure leads us into a mistaken metaphysics of ‘substances’. In particular, Nietzsche is concerned with the grammar of ‘I’. We tend to think that ‘I’ refers to some thing, e.g. the soul. Descartes makes this mistake in his cogito – ‘I think’, he argues, refers to a substance engaged in an activity. But Nietzsche repeats the old objection that this is an illegitimate inference (§16) that rests on many unproven assumptions – that I am thinking, that some thing is thinking, that thinking is an activity (the result of a cause, viz. I), that an ‘I’ exists, that we know what it is to think. So the simple sentence ‘I think’ is misleading. In fact, ‘a thought comes when “it” wants to, and not when “I” want it to’ (§17). Even ‘there is thinking’ isn’t right: ‘even this “there” contains an interpretation of the process and is not part of the process itself. People are concluding here according to grammatical habit’. But our language does not allow us just to say ‘thinking’ – this is not a whole sentence. We have to say ‘there is thinking’; so grammar constrains our understanding. Furthermore, Kant shows that rather than the ‘I’ being the basis of thinking, thinking is the basis out of which the appearance of an ‘I’ is created (§54). Once we recognise that there is no soul in a traditional sense, no ‘substance’, something constant through change, something unitary and immortal, ‘the way is clear for new and refined versions of the hypothesis about the soul’ (§12), that it is mortal, that it is multiplicity rather than identical over time, even that it is a social construct and a society of drives.

Nietzsche makes a similar argument about the will (§19). Because we have this one word ‘will’, we think that what it refers to must also be one thing. But the act of willing is highly complicated. First, there is an emotion of command, for willing is commanding oneself to do something, and with it a feeling of superiority over that which obeys. Second, there is the expectation that the mere commanding on its own is enough for the action to follow, which increases our sense of power. Third, there is obedience to the command, from which we also derive pleasure. But we ignore the feeling of compulsion, identifying the ‘I’ with the commanding ‘will’.

Nietzsche links the seduction of language to the issue of motivation in §20, arguing that ‘the spell of certain grammatical functions is the spell of physiological value judgements’. So even the grammatical structure of language originates in our instincts, different grammars contributing to the creation of favourable conditions for different types of life. So what values are served by these notions of the ‘I’ and the ‘will’? The ‘I’ relates to the idea that we have a soul, which participates in a transcendent world. It functions in support of the ascetic ideal. The ‘will’, and in particular our inherited conception of ‘free will’, serves a particular moral aim.

Hume and Nietzsche: Moral Psychology (short essay)
by epictetus_rex

1. Metaphilosophical Motivation

Both Hume and Nietzsche advocate a kind of naturalism. This is a weak naturalism, for it does not seek to give science authority over philosophical inquiry, nor does it commit itself to a specific ontological or metaphysical picture. Rather, it seeks to (a) place the human mind firmly in the realm of nature, as subject to the same mechanisms that drive all other natural events, and (b) investigate the world in a way that is roughly congruent with our best current conception(s) of nature […]

Furthermore, the motivation for this general position is common to both thinkers. Hume and Nietzsche saw old rationalist/dualist philosophies as both absurd and harmful: such systems were committed to extravagant and contradictory metaphysical claims which hinder philosophical progress. Furthermore, they alienated humanity from its position in nature—an effect Hume referred to as “anxiety”—and underpinned religious or “monkish” practises which greatly accentuated this alienation. Both Nietzsche and Hume believe quite strongly that coming to see ourselves as we really are will banish these bugbears from human life.

To this end, both thinkers ask us to engage in honest, realistic psychology. “Psychology is once more the path to the fundamental problems,” writes Nietzsche (BGE 23), and Hume agrees:

the only expedient, from which we can hope for success in our philosophical researches, is to leave the tedious lingering method, which we have hitherto followed, and instead of taking now and then a castle or village on the frontier, to march up directly to the capital or center of these sciences, to human nature itself. (T Intro)

2. Selfhood

Hume and Nietzsche militate against the notion of a unified self, both at-a-time and, a fortiori, over time.

Hume’s quest for a Newtonian “science of the mind” led him to classify all mental events as either impressions (sensory) or ideas (copies of sensory impressions, distinguished from the former by diminished vivacity or force). The self, or ego, as he says, is just “a kind of theatre, where several perceptions successively make their appearance; pass, re-pass, glide away, and mingle in an infinite variety of postures and situations. There is properly no simplicity in it at one time, nor identity in different; whatever natural propension we may have to imagine that simplicity and identity.” (Treatise 4.6) […]

For Nietzsche, the experience of willing lies in a certain kind of pleasure, a feeling of self-mastery and increase of power that comes with all success. This experience leads us to mistakenly posit a simple, unitary cause, the ego. (BGE 19)

The similarities here are manifest: our minds do not have any intrinsic unity to which the term “self” can properly refer; rather, they are collections or “bundles” of events (drives) which may align with or struggle against one another in a myriad of ways. Both thinkers use political models to describe what a person really is. Hume tells us we should “more properly compare the soul to a republic or commonwealth, in which the several members [impressions and ideas] are united by ties of government and subordination, and give rise to persons, who propagate the same republic in the incessant change of its parts” (T 261).

3. Action and The Will

Nietzsche and Hume attack the old platonic conception of a “free will” in lock-step with one another. This picture, roughly, involves a rational intellect which sits above the appetites and ultimately chooses which appetites will express themselves in action. This will is usually not considered to be part of the natural/empirical order, and it is this consequence which irks both Hume and Nietzsche, who offer two seamlessly interchangeable refutations […]

Since we are nothing above and beyond events, there is nothing for this “free will” to be: it is a causa sui, “a sort of rape and perversion of logic… the extravagant pride of man has managed to entangle itself profoundly and frightfully with just this nonsense” (BGE 21).

When they discover an erroneous or empty concept such as “free will” or “the self”, Nietzsche and Hume engage in a sort of error-theorizing which is structurally the same. Peter Kail (2006) has called this a “projective explanation”, whereby belief in those concepts is “explained by appeal to independently intelligible features of psychology”, rather than by reference to the way the world really is.

The Philosophy of Mind
INSTRUCTOR: Larry Hauser
Chapter 7: Egos, bundles, and multiple selves

  • Who dat?  “I”
    • Locke: “something, I know not what”
    • Hume: the no-self view … “bundle theory”
    • Kant’s transcendental ego: a formal (nonempirical) condition of thought that the “I” must accompany every perception.
      • Intentional mental state: I think that snow is white.
        • to think: a relation between
          • a subject = “I”
          • a propositional content thought =  snow is white
      • Sensations: I feel the coldness of the snow.
        • to feel: a relation between
          • a subject = “I”
          • a quale = the cold-feeling
    • Friedrich Nietzsche
      • A thought comes when “it” will and not when “I” will. Thus it is a falsification of the evidence to say that the subject “I” conditions the predicate “think.”
      • It is thought, to be sure, but that this “it” should be that old famous “I” is, to put it mildly, only a supposition, an assertion. Above all it is not an “immediate certainty.” … Our conclusion is here formulated out of our grammatical custom: “Thinking is an activity; every activity presumes something which is active, hence ….” 
    • Lichtenberg: “it’s thinking” a la “it’s raining”
      • a mere grammatical requirement
      • no proof of a thinking self

[…]

  • Ego vs. bundle theories (Derek Parfit (1987))
    • Ego: “there really is some kind of continuous self that is the subject of my experiences, that makes decisions, and so on.” (95)
      • Religions: Christianity, Islam, Hinduism
      • Philosophers: Descartes, Locke, Kant & many others (the majority view)
    • Bundle: “there is no underlying continuous and unitary self.” (95)
      • Religion: Buddhism
      • Philosophers: Hume, Nietzsche, Lichtenberg, Wittgenstein, Kripke(?), Parfit, Dennett {a stellar minority}
  • Hume v. Reid
    • David Hume: For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure.  I never can catch myself at any time without a perception, and never can observe anything but the perception.  (Hume 1739, Treatise I, VI, iv)
    • Thomas Reid: I am not thought, I am not action, I am not feeling: I am something which thinks and acts and feels. (1785)

Edge of the Depths

“In Science there are no ‘depths’; there is surface everywhere.”
~ Rudolf Carnap

I was reading Richard S. Hallam’s Virtual Selves, Real Persons. I’ve enjoyed it, but I find a point of disagreement or maybe merely doubt and questioning. He emphasizes persons as being real, in that they are somehow pre-existing and separate. He distinguishes the person from selves, although this distinction isn’t necessarily relevant to my thoughts here.

I’m not sure to what degree our views diverge, as I find much of the text to be insightful and a wonderful overview. However, to demonstrate my misgivings: the author mentions David Hume’s bundle theory only a couple of times on a few pages (in a several-hundred-page book), a rather slight discussion for such a key perspective. He does give a bit more space to Julian Jaynes’ bicameral theory, but even Jaynes is isolated to one fairly small section and not fully integrated into the author’s larger analysis.

The commonality between Hume and Jaynes is that they perceived conscious identity as being more nebulous — no there there. In my own experience, that feels more right to me. As one dives down into the psyche, the waters become quite murky, so dark that one can’t even see one’s hands in front of one’s face, much less know what one might be attempting to grasp. Notions of separateness, at a great enough depth, fade away — one finds oneself floating in darkness with no certain sense of distance or direction. I don’t know how to explain this to anyone who hasn’t experienced altered states of mind, from extended meditation to psychedelic trips.

This is far from a new line of thought for me, but it kept jumping out at me as I read Hallam’s book. His writing is scholarly to a high degree and, for me, that is never a criticism. The downside is that a scholarly perspective alone can’t be taken into the depths. Jaynes solved this dilemma by maintaining a dual focus, intellectual argument balanced with a sense of wonder — speaking of our search for certainty, he said: “Beyond that, there is only awe.”

I keep coming back to that. For all I appreciate of Hallam’s book, I never once experienced awe. Then again, he probably wasn’t attempting to communicate awe. So, it’s not exactly that I judge this as a failing, even if it can feel like an inadequacy from the perspective of human experience or at least my experience. In the throes of awe, we are humbled into an existential state of ignorance. A term like ‘separation’ becomes yet another word. To take consciousness directly and fully is to lose any sense of separateness, for then there is consciousness alone — not my consciousness and your consciousness, just consciousness.

I could and have made more intellectual arguments about consciousness and how strange it can be. It’s not clear to me, as it is clear to some, that there is any universal experience of consciousness (human or otherwise). There seems to be a wide variety of states of mind found across diverse societies and species. Consider animism, which seems so alien to the modern sensibility. What does ‘separation’ mean in an animate world that doesn’t assume the individual as the starting point of human existence?

I don’t need to rationally analyze any of this. Rationality just as easily turns into rationalization, justifying what we think we already know. All I can say is that, intuitively, Hume’s bundle theory makes more sense of what I know directly within my own mind, whatever that may say about the minds of others. That viewpoint can’t be scientifically proven, for the experience behind it is inscrutable, not an object to be weighed and measured, even as brain scans remain fascinating. Consciousness can’t be found by pulling apart Hume’s bundle any more than a frog’s soul can be found by dissecting its beating heart — consciousness having a similar metaphysical status as the soul. Something like the bundle theory either makes sense or not. Consciousness is a mystery, no matter how unsatisfying that may seem. Science can take us to the edge of the depths, but that is where it stops. To step off that edge requires something else entirely.

Actually, stepping off rarely happens, since few, if any, ever choose to sink into the depths. One slips and falls and the depths envelop one. Severe depression was my initiation experience, the weight dragging me down. There are many possible entry points to this otherness. When that happens, thoughts on consciousness stop being intellectual speculation and thought experiment. One knows consciousness as well as one will ever know it when one drowns in it. If one thrashes one’s way back to the surface, then and only then can one offer meaningful insight, but more likely one is lost in silence, water still choking in one’s throat.

This is why Julian Jaynes, for all of his brilliance and insight, reached the end of his life filled with frustration at what felt like a failure to communicate. As his historical argument went, individuals don’t change their mindsets so much as the social system that maintains a particular mindset is changed, which in the case of bicameralism meant the collapse of the Bronze Age civilizations. Until our society faces a similar crisis and is collectively thrown into the depths, separation will remain the dominant mode of experience and understanding. As for what might replace it, that is anyone’s guess.

Here we stand, our footing not entirely secure, at the edge of the depths.

Delirium of Hyper-Individualism

Individualism is a strange thing. For anyone who has spent much time meditating, it’s obvious that there is no there there. It slips through one’s grasp like the aether that ancient philosophers tried to study. The individual self is the modernization of the soul. Like the ghost in the machine and the god of the gaps, it is a theological belief defined by its absence in the world. It’s a social construct, a claim that is easily misunderstood.

In modern society, individualism has been raised up to an entire ideological worldview. It is all-encompassing, having infiltrated nearly every aspect of our social lives and become internalized as a cognitive frame. Traditional societies didn’t have this obsession with an idealized self as isolated and autonomous. Go back far enough and the records seem to show societies that didn’t even have a concept, much less an experience, of individuality.

Yet for all its dominance, the ideology of individualism is superficial. It doesn’t explain much of our social order and personal behavior. We don’t act as if we actually believe in it. It’s a convenient fiction that we so easily disregard when inconvenient, as if it isn’t all that important after all. In our most direct experience, individuality simply makes no sense. We are social creatures through and through. We don’t know how to be anything else, no matter what stories we tell ourselves.

The ultimate value of this individualistic ideology is, ironically, as social control and social justification.

The wealthy, the powerful and privileged, even the mere middle class to a lesser degree — they get to be individuals when everything goes right. They get all the credit and all the benefits. All of society serves them because they deserve it. But when anything goes wrong, they hire lawyers who threaten anyone who challenges them or they settle out of court, they use their crony connections and regulatory capture to avoid consequences, they declare bankruptcy when one of their business ventures fails, and they endlessly scapegoat those far below them in the social hierarchy.

The profits and benefits are privatized while the costs are externalized. This is socialism for the rich and capitalism for the poor, with the middle class getting some combination of the two. This is why democratic rhetoric justifies plutocracy while authoritarianism keeps the masses in line. This stark reality is hidden behind the utopian ideal of individualism with its claims of meritocracy and a just world.

The fact of the matter is that no individual ever became successful. Let’s do an experiment. Take an individual baby, let’s say the little white male baby of wealthy parents with their superior genetics. Now leave that baby in the woods to raise himself into adulthood and bootstrap himself into a self-made man. I wonder how well that would work for his survival and future prospects. If privilege and power, if opportunity and resources, if social capital and collective inheritance, if public goods and the commons have no major role to play such that the individual is solely responsible to himself, we should expect great things from this self-raised wild baby.

But if it turns out that hyper-individualism is total bullshit, we should instead expect that baby to die of exposure and starvation or become the prey of a predator feeding its own baby without any concerns for individuality. Even simply leaving a baby untouched and neglected in an orphanage will cause failure to thrive and death. Without social support, our very will to live disappears. Social science research has demonstrated the immense social and environmental influences on humans. For a long time now there has been no real debate about this social reality of our shared humanity.

So why does this false belief and false idol persist? What horrible result do we fear if we were ever to be honest with ourselves? I get that the ruling elite are ruled by their own egotistic pride and narcissism. I get that the comfortable classes are attached to their comforting lies. But why do the rest of us go along with their self-serving delusions? It is the strangest thing in the world for a society to deny it is a society.

“illusion of a completed, unitary self”

The Voices Within:
The History and Science of How We Talk to Ourselves
by Charles Fernyhough
Kindle Locations 3337-3342

And we are all fragmented. There is no unitary self. We are all in pieces, struggling to create the illusion of a coherent “me” from moment to moment. We are all more or less dissociated. Our selves are constantly constructed and reconstructed in ways that often work well, but often break down. Stuff happens, and the center cannot hold. Some of us have more fragmentation going on, because of those things that have happened; those people face a tougher challenge of pulling it all together. But no one ever slots in the last piece and makes it whole. As human beings, we seem to want that illusion of a completed, unitary self, but getting there is hard work. And anyway, we never get there.

Kindle Locations 3357-3362

This is not an uncommon story among people whose voices go away. Someone is there, and then they’re not there anymore. I was reminded of what I had been told about the initial onset of voice-hearing: how it can be like dialing into a transmission that has always been present. “Once you hear the voices,” wrote Mark Vonnegut of his experiences, “you realise they’ve always been there. It’s just a matter of being tuned to them.” If you can tune in to something, then perhaps you can also tune out.

Kindle Locations 3568-3570

It is also important to bear in mind that for many voice-hearers the distinction between voices and thoughts is not always clear-cut. In our survey, a third of the sample reported either a combination of auditory and thought-like voices, or experiences that fell somewhere between auditory voices and thoughts.

* * *

Charles Fernyhough recommends the best books on Streams of Consciousness
from Five Books

Charles Fernyhough Listens in on Thought Itself in ‘The Voices Within’
by Raymond Tallis, WSJ

Neuroscience: Listening in on yourself
by Douwe Draaisma, Nature

The Song Of The Psyche
by Megan Sherman, Huffington Post

Bundle Theory: Embodied Mind, Social Nature

I was listening to the audio version of Susan Blackmore’s Consciousness: A Very Short Introduction. It’s less than five hours long and so I listened to it multiple times to get a good sense of it. I’ve read plenty about the topic and I’m already generally familiar with the material, but it was still helpful getting an overview.

One part that interested me was about split brain research, something that always interests me. The roles of and relationship between the hemispheres indicates much about how our minds operate. Blackmore discussed one often referenced study where split brain patients had information given separately to each hemisphere in order to see how the individual would explain their behavior. As the left hemisphere typically controls linguistic communication, individuals couldn’t give accurate reasons for what was done by their right hemisphere.

The author wrote that (pp. 72-3),

“In this way, the verbal left brain covered up its ignorance by confabulating. It did the same when the other half was shown an emotional picture – making up a plausible excuse for laughing, smiling, blushing, or whatever emotional reaction had been provoked. This might help to explain how these patients can appear so normal. But it should also make us wonder about ourselves. Our brains consist of lots of relatively independent modules, and the verbal part does not have access to everything that goes on, yet it frequently supplies convincing reasons for our actions. How many of these are plausible confabulations rather than true reasons, and can we tell?

“From these experiments, Sperry concluded that his patients had two conscious entities in one head; each having private sensations and free will. In contrast, Gazzaniga argued that only the left hemisphere sustains ‘the interpreter’, which uses language, organizes beliefs, and ascribes actions and intentions to people. Only this hemisphere has ‘high-level consciousness’, leaving the other hemisphere with many abilities and skills but without true consciousness.”

She points out that there is no way to resolve this issue. We can’t prove what is really going on here, even as it touches upon our most personal experience. But she adds that, “Bundle theory does away with the problem altogether. There is neither one self nor two selves inside the split brain; there are experiences but there is no one who is having them” (p. 74). What this means is that our experience of an egoic consciousness is overlaid on the entire experiential field, one experience presenting itself as all experience, or else an interpretation of experience that alters what we experience and how we experience it. The self as coherent individuality is a mirage. That isn’t to say it is meaningless. Our minds naturally look for patterns, even or especially within our own minds. Meaning is always what we bring to our experience.

As for actual reading, as opposed to listening to audiobooks, my focus has still been on Daniel Everett’s recent publication, Dark Matter of the Mind. It is a difficult read in many parts because much of the linguistics scholarship goes over my head and the academic language can get tiresome, but I’ve been determined to finish it and I’m now near the last chapter. Parts of it are quite interesting, such as his mentioning the theory that “gestures and speech were equally and simultaneously implicated in the evolution of language” (Kindle Location 5102). He then details the relevance of gestures and embodied communication (Kindle Locations 5108-5111):

““Mead’s loop,” wherein one’s own gestures are responded to by one’s own mirror neurons in the same way that these neurons respond to the actions of others, thus bringing one’s own actions into the realm of the social and contributing crucially to the development of a theory of mind— being able to interpret the actions of others under the assumption that others have minds like we do and think according to similar processes.”

That is what came to mind while listening to what Blackmore had to say about bundle theory of experience. The parts of the ‘self’ don’t form a coherent whole so much as they are involved in intimate contact and communication.

Our experience is social at the most fundamental level, a social phenomenon within each person’s body and social connection to the bodies of others. Our embodied selves are shifting realities with blurred boundaries, out of which patterns of social order and social identities form. As others have argued, we develop a theory of mind within ourselves by first sussing out a theory of mind about others. So, our sense of self is built on our sense of others, which is to say that our understanding of the relationships between experiences within our own embodied minds is inseparable from our understanding of our relationships with the larger world.

It’s hard to get at what this might mean. But one important factor is that of language. As Julian Jaynes argued in his book about the bicameral mind, “language is an organ of perception, not simply a means of communication” (p. 50, Kindle edition). Perception is always embodied. In The Master and His Emissary, Iain McGilchrist offers a summary that resonates with what I shared above by Everett (pp. 122-123):

“language originates as an embodied expression of emotion, that is communicated by one individual ‘inhabiting’ the body, and  therefore the emotional world, of another; a bodily skill, further, that is acquired by each of us through imitation, by the emotional identification and intuitive harmonisation of the bodily states of the one who learns with the one from whom it is learnt; a skill moreover that originates in the brain as an analogue of bodily movement, and involves the same processes, and even the same brain areas, as certain highly expressive gestures, as well as involving neurones (mirror neurones) that are activated equally when we carry out an action and when we see another carry it out (so that in the process we can almost literally be said to share one another’s bodily experience and inhabit one another’s bodies); a process, finally, that anthropologists see as derived from music, in turn an extension of grooming, which binds us together as physically embodied beings through a form of extended body language that is emotionally compelling across a large number of individuals within the group.”

Both Everett and McGilchrist are concerned with the evolution and development of language. They see it as inseparable from the embodied mind and the enculturated self. As Everett discusses the importance of gesture, McGilchrist explores the role of music and poetry. There is a strong argument that non-linguistic communication (gesture and/or poetry-music) was well established and highly effective among the earliest hominids, including pre-linguistic homo sapiens. It seems likely that this was the base upon which was built language as we know it.

Jaynes argues that written language was one of the factors that weakened the bicameral mind, a particular pre-egoic bundle theory. Prior to that, oral culture dominated; and in oral culture, language is intertwined with other aspects of human experience and behavior. Some of the evidence supporting this is how ancient humans sometimes spoke of body parts as having their own minds (a way of talking that continued into the late Axial Age, as in the New Testament canon, such that hands and eyes aren’t necessarily considered part of an integrally whole self; and it should be noted that the New Testament tradition was passed on orally for a number of generations before being written down). This is an experience still spoken of by some of those with schizophrenia and dissociative identity disorder. Even otherwise normal people will have voice-hearing experiences where the voices heard aren’t located in the head, but sometimes in or around other parts of the body.

Most of human cognition and behavior is unconscious. The same goes for most of human communication, and much of that non-conscious communication is also non-linguistic. This is the bodily or embodied unconscious. This relates to the social nature of our psyches, as with rapport, where people mimic each other unawares (gestures, posture, breathing, etc.), and how yawns and laughter can be contagious. What I’m wondering is how the body-mind creates rapport with itself in order to coordinate its vast multitudinous complexity.

Because of hemispheric divisions, for example, parts of the mind act rather independently. The corpus callosum doesn’t just allow the hemispheres to communicate, for it also inhibits and restricts that communication, in ways and for reasons we don’t yet fully understand. Even when the corpus callosum is entirely cut, making direct neurological communication impossible, the two hemispheres are able to coordinate behavior such that a person appears normal, even as two separate minds seem to be operating within the skull. Without directly communicating with one another, how do the hemispheres accomplish this?

The simplest answer is that both hemispheres have access to the sensory organs on the opposite side of the body and so can indirectly observe what the other hemisphere is doing (and, in the case of the left hemisphere, hear its explanations). But interestingly, the two divided hemispheres can come to different conclusions based on their separate input and processing. They can also act independently, a literal scenario of the left hand not knowing what the right hand is doing.

Here is a different kind of example from Everett (Kindle Locations 5071-5076):

“At age nineteen, IW suddenly lost all sense of touch and proprioception below the neck due to an infection. The experiments conducted by McNeill and his colleagues show that IW is unable to control instrumental movements when he cannot see his hands (though when he can see his hands, he has learned how to use this visual information to control them in a natural-appearing manner). What is fascinating is that IW, when speaking, uses a number of (what IW refers to as) “throwaway gestures” that are well coordinated, unplanned, nonvisually reliant, speech-connected gestures. McNeill concludes that at a minimum, this case provides evidence that speech gestures are different from other uses of the hands— even other gesturing uses of the hands.”

So, gestures are connected to speech. And gestures happen spontaneously. But even without proprioception, other senses can be used to bridge the gap between conscious and unconscious expression. There are clearly different areas of behavior, cognition, and communication that relate in different ways. We are embodied minds and we know our minds through our bodies. And most of what our mind does is never accessed or controlled by consciousness. As research has shown, consciousness often only plays a role after behavior has already been initiated (less a power of will than a power of won’t).

So, what kind of mind is it that we have or rather that has us?

“Beyond that, there is only awe.”

“What is the meaning of life?” This question has no answer except in the history of how it came to be asked. There is no answer because words have meaning, not life or persons or the universe itself. Our search for certainty rests in our attempts at understanding the history of all individual selves and all civilizations. Beyond that, there is only awe.
~ Julian Jaynes, 1988, Life Magazine

That is always a nice quote. Jaynes never seemed like an ideologue about his own speculations. In his controversial book, published more than a decade earlier (1976), he titled his introduction “The Problem of Consciousness”. That is what frames his thought: confronting a problem. The whole issue of consciousness remains problematic to this day and likely will be for a long time. After a lengthy analysis of complex issues, he concludes his book with some humbling thoughts:

For what is the nature of this blessing of certainty that science so devoutly demands in its very Jacob-like wrestling with nature? Why should we demand that the universe make itself clear to us? Why do we care?

To be sure, a part of the impulse to science is simple curiosity, to hold the unheld and watch the unwatched. We are all children in the unknown.

Following that, he makes a plea for understanding. Not just understanding of the mind but also of experience. It is a desire to grasp what makes us human, the common impulses that bind us, underlying both religion and science. There is a tender concern being given voice, probably shaped and inspired by his younger self having pored over his deceased father’s Unitarian sermons.

As individuals we are at the mercies of our own collective imperatives. We see over our everyday attentions, our gardens and politics, and children, into the forms of our culture darkly. And our culture is our history. In our attempts to communicate or to persuade or simply interest others, we are using and moving about through cultural models among whose differences we may select, but from whose totality we cannot escape. And it is in this sense of the forms of appeal, of begetting hope or interest or appreciation or praise for ourselves or for our ideas, that our communications are shaped into these historical patterns, these grooves of persuasion which are even in the act of communication an inherent part of what is communicated. And this essay is no exception.

That humility feels genuine. His book was far beyond mere scholarship. It was an expression of decades of questioning and self-questioning, about what it means to be human and what it might have meant for others throughout the millennia.

He never got around to writing another book on the topic, despite his stated plans to do so. But during the last decade of his life, he wrote an afterword to his original work. It was placed in the 1990 edition, fourteen years after the original publication. He had faced much criticism, and one senses a tired frustration in those last years. Elsewhere, he complained about the expectation to explain himself and make himself understood to people who, for whatever reason, didn’t understand. Still, he realized that was the nature of his job as an academic scholar working at a major university. In the afterword, he wrote:

A favorite practice of some professional intellectuals when at first faced with a theory as large as the one I have presented is to search for that loose thread which, when pulled, will unravel all the rest. And rightly so. It is part of the discipline of scientific thinking. In any work covering so much of the terrain of human nature and history, hustling into territories jealously guarded by myriad aggressive specialists, there are bound to be such errancies, sometimes of fact but I fear more often of tone. But that the knitting of this book is such that a tug on such a bad stitch will unravel all the rest is more of a hope on the part of the orthodox than a fact in the scientific pursuit of truth. The book is not a single hypothesis.

Interestingly, Jaynes doesn’t present the bicameral mind as an overarching context for the hypotheses he lists. In fact, it is just one among several hypotheses and not even the first to be mentioned. That shouldn’t be surprising, since decades of his thought and research, including laboratory studies on animal behavior, preceded the formulation of the bicameral hypothesis. Here are the four hypotheses:

  1. Consciousness is based on language.
  2. The bicameral mind.
  3. The dating.
  4. The double brain.

He states: “I wish to emphasize that these four hypotheses are separable. The last, for example, could be mistaken (at least in the simplified version I have presented) and the others true. The two hemispheres of the brain are not the bicameral mind but its present neurological model. The bicameral mind is an ancient mentality demonstrated in the literature and artifacts of antiquity.” Each hypothesis is connected to the others but must be dealt with separately. The key element of his project is consciousness, as that is the central problem. And as problems go, it is a doozy. Calling it a problem is like calling the moon a chunk of rock and the sun a warm fire.

Related to these hypotheses, earlier in his book, Jaynes proposes a useful framework. He calls it the General Bicameral Paradigm. “By this phrase,” he explains, “I mean an hypothesized structure behind a large class of phenomena of diminished consciousness which I am interpreting as partial holdovers from our earlier mentality.” There are four components:

  1. “the collective cognitive imperative, or belief system, a culturally agreed-on expectancy or prescription which defines the particular form of a phenomenon and the roles to be acted out within that form;”
  2. “an induction or formally ritualized procedure whose function is the narrowing of consciousness by focusing attention on a small range of preoccupations;”
  3. “the trance itself, a response to both the preceding, characterized by a lessening of consciousness or its loss, the diminishing of the analog or its loss, resulting in a role that is accepted, tolerated, or encouraged by the group; and”
  4. “the archaic authorization to which the trance is directed or related to, usually a god, but sometimes a person who is accepted by the individual and his culture as an authority over the individual, and who by the collective cognitive imperative is prescribed to be responsible for controlling the trance state.”

The point is made that the reader shouldn’t assume these four components are “to be considered as a temporal succession necessarily, although the induction and trance usually do follow each other. But the cognitive imperative and the archaic authorization pervade the whole thing. Moreover, there is a kind of balance or summation among these elements, such that when one of them is weak the others must be strong for the phenomena to occur. Thus, as through time, particularly in the millennium following the beginning of consciousness, the collective cognitive imperative becomes weaker (that is, the general population tends toward skepticism about the archaic authorization), we find a rising emphasis on and complication of the induction procedures, as well as the trance state itself becoming more profound.”

This general bicameral paradigm is partly based on the insights he gained from studying ancient societies. But ultimately it can be considered separately from that. All you have to understand is that these are a basic set of cognitive abilities and tendencies that have been with humanity for a long time. These are the vestiges of human evolution and societal development. They can be combined and expressed in multiple ways. Our present society is just one of many possible manifestations. Human nature is complex and human potential is immense, and so diversity is to be expected among human neurocognition, behavior, and culture.

An important example of the general bicameral paradigm is hypnosis. It isn’t just an amusing trick done for magic shows. Hypnosis shows something profoundly odd, disturbing even, about the human mind. Also, it goes far beyond the individual, for it is about how humans relate. It demonstrates the power of authority figures, in whatever form they take, and indicates the significance of what Jaynes calls authorization. By the way, this leads down the dark pathways of authoritarianism, brainwashing, propaganda, and punishment. As for the latter, Jaynes writes:

If we can regard punishment in childhood as a way of instilling an enhanced relationship to authority, hence training some of those neurological relationships that were once the bicameral mind, we might expect this to increase hypnotic susceptibility. And this is true. Careful studies show that those who have experienced severe punishment in childhood and come from a disciplined home are more easily hypnotized, while those who were rarely punished or not punished at all tend to be less susceptible to hypnosis.

He discusses the history of hypnosis beginning with Mesmer. In this, he shows how metaphor took different forms over time and, accordingly, altered shared experience and behavior.

Now it is critical here to realize and to understand what we might call the paraphrandic changes which were going on in the people involved, due to these metaphors. A paraphrand, you will remember, is the projection into a metaphrand of the associations or paraphiers of a metaphier. The metaphrand here is the influences between people. The metaphiers, or what these influences are being compared to, are the inexorable forces of gravitation, magnetism, and electricity. And their paraphiers of absolute compulsions between heavenly bodies, of unstoppable currents from masses of Leyden jars, or of irresistible oceanic tides of magnetism, all these projected back into the metaphrand of interpersonal relationships, actually changing them, changing the psychological nature of the persons involved, immersing them in a sea of uncontrollable control that emanated from the ‘magnetic fluids’ in the doctor’s body, or in objects which had ‘absorbed’ such from him.

It is at least conceivable that what Mesmer was discovering was a different kind of mentality that, given a proper locale, a special education in childhood, a surrounding belief system, and isolation from the rest of us, possibly could have sustained itself as a society not based on ordinary consciousness, where metaphors of energy and irresistible control would assume some of the functions of consciousness.

How is this even possible? As I have mentioned already, I think Mesmer was clumsily stumbling into a new way of engaging that neurological patterning I have called the general bicameral paradigm with its four aspects: collective cognitive imperative, induction, trance, and archaic authorization.

Through authority and authorization, immense power and persuasion can be wielded. Jaynes argues that it is central to the human mind, but that in developing consciousness we learned how to partly internalize the process. Even so, Jaynesian self-consciousness is never a permanent, continuous state and the power of individual self-authorization easily morphs back into external forms. This is far from idle speculation, considering authoritarianism still haunts the modern mind. I might add that the ultimate power of authoritarianism, as Jaynes makes clear, isn’t overt force and brute violence. Outward forms of power are only necessary to the degree that external authorization is relatively weak, as is typically the case in modern societies.

This touches upon the issue of rhetoric, although Jaynes never mentioned the topic. It’s disappointing since his original analysis of metaphor has many implications. Fortunately, others have picked up where he left off (see Ted Remington, Brian J. McVeigh, and Frank J. D’Angelo). Authorization in the ancient world came through a poetic voice, but today it is most commonly heard in rhetoric.

Still, that old time religion can be heard in the words and rhythm of any great speaker. Just listen to how a recorded speech of Martin Luther King Jr. can pull you in with its musicality. Or, if you prefer a dark example, consider the persuasive power of Adolf Hitler; even some Jews admitted they got caught up listening to his speeches. This is why Plato feared the poets and banished them from his utopia of enlightened rule. Poetry would inevitably undermine and subsume the high-minded rhetoric of philosophers. “[P]oetry used to be divine knowledge,” as Guerini et al. state in Echoes of Persuasion. “It was the sound and tenor of authorization and it commanded where plain prose could only ask.”

Metaphor grows naturally in poetic soil, but its seeds are planted in every aspect of language and thought, bearing fruit in our perceptions and actions. This is a thousandfold true on the collective level of society and politics. Metaphors are most powerful when we don’t see them as metaphors. So, the most persuasive rhetoric is that which hides its metaphorical frame and obfuscates any attempts to bring it to light.

Going far back into the ancient world, metaphors didn’t need to be hidden in this sense. The reason is that there was no intellectual capacity or conceptual understanding of metaphors as metaphors. Instead, metaphors were taken literally. The way people spoke about reality was inseparable from their experience of reality, and they had no way of stepping back from their cultural biases, as the cultural worldviews they existed within were all-encompassing. It’s only with the later rise of multicultural societies, especially the vast multi-ethnic trade empires, that people began to think in terms of multiple perspectives. Such a society was developing in the trading and colonizing city-states of Greece in the centuries leading up to Hellenism.

That is the well known part of Jaynes’ speculations, the basis of his proposed bicameral mind. And Jaynes considered it extremely relevant to the present.

Marcel Kuijsten wrote that, “Jaynes maintained that we are still deep in the midst of this transition from bicamerality to consciousness; we are continuing the process of expanding the role of our internal dialogue and introspection in the decision-making process that was started some 3,000 years ago. Vestiges of the bicameral mind — our longing for absolute guidance and external control — make us susceptible to charismatic leaders, cults, trends, and persuasive rhetoric that relies on slogans to bypass logic” (“Consciousness, Hallucinations, and the Bicameral Mind: Three Decades of New Research”, Reflections on the Dawn of Consciousness, Kindle Locations 2210-2213). Considering the present, in Authoritarian Grammar and Fundamentalist Arithmetic, Ben G. Price puts it starkly: “Throughout, tyranny asserts its superiority by creating a psychological distance between those who command and those who obey. And they do this with language, which they presume to control.” The point made by the latter is that this knowledge, even as it can be used as an intellectual defense, might just lead to even more effective authoritarianism.

We’ve grown less fearful of rhetoric because we see ourselves as savvy, experienced consumers of media. The cynical modern mind is always on guard, our well-developed and rigid state of consciousness offering a continuous psychological buffer against the intrusions of the world. So we like to think. I remember, back in 7th grade, being taught how the rhetoric of advertising is used to manipulate us. But we are over-confident. Consciousness operates at the surface of the psychic depths. We are better at rationalizing than being rational, something we may understand intellectually but whose psychological and societal significance we rarely acknowledge in full. That is the usefulness of theories like bicameralism, as they remind us that we are out of our depths. In the ancient world, there was a profound mistrust between the poetic and the rhetorical, and for good reason. We would be wise to learn from that clash of mindsets and worldviews.

We shouldn’t be so quick to assume we understand our own minds, the kind of vessel we find ourselves on. Nor should we allow ourselves to get too comfortable within the worldview we’ve always known, the safe harbor of our familiar patterns of mind. It’s hard to think about these issues because they touch upon our own being, the surface of consciousness along with the depths below it. This is the nearly impossible task of fathoming the ocean floor using a rope and a weight, an easier task the closer we hug the shoreline. But what might we find if we cast ourselves out on open waters? What new lands might be found, lands to be newly discovered and lands already inhabited?

We moderns love certainty. And it’s true we possess more knowledge than any civilization before us has accumulated. Yet we’ve partly made the unfamiliar familiar by remaking the world in our own image. There is no place on earth that remains entirely untouched. Only a couple hundred small isolated tribes are still uncontacted, representing foreign worldviews not known or studied, but even they live under unnatural conditions of stress as the larger world closes in on them. Most of the ecological and cultural diversity that once existed has been obliterated from the face of the earth, most of it having left not a single trace or record, simply gone. Populations beyond count have faced extermination by outside influences and forces before they ever got a chance to meet an outsider. Plagues, environmental destruction, and societal collapse wiped them out, often in short periods of time.

Those other cultures might have gifted us with insights about our humanity that are now lost forever, just as extinct species might have held answers to questions not yet asked and medicines for diseases not yet understood. Almost all that is left now is a nearly complete monoculture, with the differences ever shrinking into the constraints of capitalist realism. If not for scientific studies done on the last of isolated tribal peoples, we would never know how much diversity exists within human nature. Many of the conclusions that earlier social scientists had made were based mostly on studies involving white, middle-class college kids in Western countries, subjects some have called WEIRD: Western, Educated, Industrialized, Rich, and Democratic. But many of those conclusions have since proven wrong, biased, or limited.

When Jaynes first thought about such matters, the social sciences were still getting established as serious fields of study. He entered college around 1940, when behaviorism was a dominant paradigm. It was only in the prior decades that the very idea of ‘culture’ began to take hold among anthropologists. He was influenced by anthropologists, directly and indirectly. One indirect influence came by way of E. R. Dodds, a classical scholar, who in writing his 1951 The Greeks and the Irrational found inspiration in Ruth Benedict’s anthropological work comparing cultures (Benedict arrived at this perspective by combining the ideas of Franz Boas and Carl Jung). Still, anthropology was young, and the fascinating cases so well known today were unknown back then (e.g., Daniel Everett’s recent books on the Pirahã). So, in following Dodds’ example, Jaynes turned to ancient societies and their literature.

His ideas were forming at the same time the social sciences were gaining respectability and maturity. It was a time when many scholars and other intellectuals were more fully questioning Western civilization. But it was also the time when Western ascendancy was becoming clear, with the Ottoman Empire ending after WWI and the Japanese Empire after WWII. The whole world was falling under Western cultural influence. And traditional societies were in precipitous decline. That was the dawning of the age of monoculture.

We are the inheritors of the world that was created from that wholesale destruction of all that came before. And even what came before was built on millennia of collapsing civilizations. Jaynes focused on the earliest example of mass destruction and chaos, which led him to see a stark division between what came before and what came after. How do we understand why we came to be the way we are when so much has been lost? We are forced back on our own ignorance. Jaynes apparently understood that and so considered awe to be the proper response. We know the world through our own humanity, but we can only know our own humanity through the cultural worldview we are born into. It is our words that have meaning, was Jaynes’ response, “not life or persons or the universe itself.” That is to say we bring meaning to what we seek to understand. Meaning is created, not discovered. And the kind of meaning we create depends on our cultural worldview.

In Monoculture, F. S. Michaels writes (pp. 1-2):

THE HISTORY OF HOW we think and act, said twentieth-century philosopher Isaiah Berlin, is, for the most part, a history of dominant ideas. Some subject rises to the top of our awareness, grabs hold of our imagination for a generation or two, and shapes our entire lives. If you look at any civilization, Berlin said, you will find a particular pattern of life that shows up again and again, that rules the age. Because of that pattern, certain ideas become popular and others fall out of favor. If you can isolate the governing pattern that a culture obeys, he believed, you can explain and understand the world that shapes how people think, feel and act at a distinct time in history.

The governing pattern that a culture obeys is a master story — one narrative in society that takes over the others, shrinking diversity and forming a monoculture. When you’re inside a master story at a particular time in history, you tend to accept its definition of reality. You unconsciously believe and act on certain things, and disbelieve and fail to act on other things. That’s the power of the monoculture; it’s able to direct us without us knowing too much about it.

Over time, the monoculture evolves into a nearly invisible foundation that structures and shapes our lives, giving us our sense of how the world works. It shapes our ideas about what’s normal and what we can expect from life. It channels our lives in a certain direction, setting out strict boundaries that we unconsciously learn to live inside. It teaches us to fear and distrust other stories; other stories challenge the monoculture simply by existing, by representing alternate possibilities.

Jaynes argued that ideas are more than mere concepts. Ideas are embedded in language and metaphor. And ideas take form not just as culture but as entire worldviews built on interlinked patterns of attitudes, thought, perception, behavior, and identity. Taken together, this is the reality tunnel we exist within.

It takes a lot to shake us loose from these confines of the mind. Certain practices, from meditation to imbibing psychedelics, can temporarily or permanently alter the matrix of our identity. Jaynes, for reasons of his own, came to question the inevitability of the society around him, which allowed him to see that other possibilities may exist. The direction his queries took him landed him in foreign territory, outside the idolized individualism of Western modernity.

His ideas might have been less challenging in a different society. We modern Westerners identify ourselves with our thoughts, the internalized voice of egoic consciousness. And we see this as the greatest prize of civilization, the hard-won rights and freedoms of the heroic individual. It’s the story we tell. But in other societies, such as in the East, there are traditions that teach the self is distinct from thought. From the Buddhist perspective of dependent (co-)origination, it is a much less radical notion that the self arises out of thought, instead of the other way around, and that thought itself simply arises. A Buddhist would have a much easier time intuitively grasping the theory of bicameralism, that thoughts are greater than and precede the self.

Maybe we modern Westerners need to practice a sense of awe, to inquire more deeply. Jaynes offers a different way of thinking that doesn’t even require us to look to another society. If he is correct, this radical worldview is at the root of Western Civilization. Maybe the traces of the past are still with us.

* * *

The Origin of Rhetoric in the Breakdown of the Bicameral Mind
by Ted Remington

Endogenous Hallucinations and the Bicameral Mind
by Rick Strassman

Consciousness and Dreams
by Marcel Kuijsten, Julian Jaynes Society

Ritual and the Consciousness Monoculture
by Sarah Perry, Ribbonfarm

“I’m Nobody”: Lyric Poetry and the Problem of People
by David Baker, The Virginia Quarterly Review

It is in fact dangerous to assume a too similar relationship between those ancient people and us. A fascinating difference between the Greek lyricists and ourselves derives from the entity we label “the self.” How did the self come to be? Have we always been self-conscious, of two or three or four minds, a stew of self-aware voices? Julian Jaynes thinks otherwise. In The Origin of Consciousness in the Breakdown of the Bicameral Mind—that famous book my poetry friends adore and my psychologist friends shrink from—Jaynes surmises that the early classical mind, still bicameral, shows us the coming-into-consciousness of the modern human, shows our double-minded awareness as, originally, a haunted hearing of voices. To Jaynes, thinking is not the same as consciousness: “one does one’s thinking before one knows what one is to think about.” That is, thinking is not synonymous with consciousness or introspection; it is rather an automatic process, notably more reflexive than reflective. Jaynes proposes that epic poetry, early lyric poetry, ritualized singing, the conscience, even the voices of the gods, all are one part of the brain learning to hear, to listen to, the other.

Auditory Hallucinations: Psychotic Symptom or Dissociative Experience?
by Andrew Moskowitz & Dirk Corstens

Voices heard by persons diagnosed schizophrenic appear to be indistinguishable, on the basis of their experienced characteristics, from voices heard by persons with dissociative disorders or by persons with no mental disorder at all.

Neuroimaging, auditory hallucinations, and the bicameral mind.
by L. Sher, Journal of Psychiatry and Neuroscience

Olin suggested that recent neuroimaging studies “have illuminated and confirmed the importance of Jaynes’ hypothesis.” Olin believes that recent reports by Lennox et al and Dierks et al support the bicameral mind. Lennox et al reported a case of a right-handed subject with schizophrenia who experienced a stable pattern of hallucinations. The authors obtained images of repeated episodes of hallucination and observed its functional anatomy and time course. The patient’s auditory hallucination occurred in his right hemisphere but not in his left.

What Is It Like to Be Nonconscious?: A Defense of Julian Jaynes
by Gary Williams, Phenomenology and the Cognitive Sciences

To explain the origin of consciousness is to explain how the analog “I” began to narratize in a functional mind-space. For Jaynes, to understand the conscious mind requires that we see it as something fleeting rather than something always present. The constant phenomenality of what-it-is-like to be an organism is not equivalent to consciousness and, subsequently, consciousness must be thought in terms of the authentic possibility of consciousness rather than its continual presence.

Defending Damasio and Jaynes against Block and Gopnik
by Emilia Barile, Phenomenology Lab

When Jaynes says that there was “nothing it is like” to be preconscious, he certainly didn’t mean to say that nonconscious animals are somehow not having subjective experience in the sense of “experiencing” or “being aware” of the world. When Jaynes said there is “nothing it is like” to be preconscious, he means that there is no sense of mental interiority and no sense of autobiographical memory. Ask yourself what it is like to be driving a car and then suddenly wake up and realize that you have been zoned out for the past minute. Was there something it is like to drive on autopilot? This depends on how we define “what it is like”.

“The Evolution of the Analytic Topoi: A Speculative Inquiry”
by Frank J. D’Angelo
from Essays on Classical Rhetoric and Modern Discourse
ed. Robert J. Connors, Lisa S. Ede, & Andrea A. Lunsford
pp. 51-5

The first stage in the evolution of the analytic topoi is the global stage. Of this stage we have scanty evidence, since we must assume the ontogeny of invention in terms of spoken language long before the individual is capable of anything like written language. But some hints of how logical invention might have developed can be found in the work of Eric Havelock. In his Preface to Plato, Havelock, in recapitulating the educational experience of the Homeric and post-Homeric Greek, comments that the psychology of the Homeric Greek is characterized by a high degree of automatism.

He is required as a civilised being to become acquainted with the history, the social organisation, the technical competence and the moral imperatives of his group. This in turn is able to function only as a fragment of the total Hellenic world. It shares a consciousness in which he is keenly aware that he, as a Hellene, […] in his memory. Such is poetic tradition, essentially something he accepts uncritically, or else it fails to survive in his living memory. Its acceptance and retention are made psychologically possible by a mechanism of self-surrender to the poetic performance and of self-identification with the situations and the stories related in the performance. . . . His receptivity to the tradition has thus, from the standpoint of inner psychology, a degree of automatism which however is counter-balanced by a direct and unfettered capacity for action in accordance with the paradigms he has absorbed. 6

Preliterate man was apparently unable to think logically. He acted, or as Julian Jaynes, in The Origin of Consciousness in the Breakdown of the Bicameral Mind, puts it, “reacted” to external events. “There is in general,” writes Jaynes, “no consciousness in the Iliad . . . and in general therefore, no words for consciousness or mental acts.” 7 There was, in other words, no subjective consciousness in Iliadic man. His actions were not rooted in conscious plans or in reasoning. We can only speculate, then, based on the evidence given by Havelock and Jaynes that logical invention, at least in any kind of sophisticated form, could not take place until the breakdown of the bicameral mind, with the invention of writing. If ancient peoples were unable to introspect, then we must assume that the analytic topoi were a discovery of literate man. Eric Havelock, however, warns that the picture he gives of Homeric and post-Homeric man is oversimplified and that there are signs of a latent mentality in the Greek mind. But in general, Homeric man was more concerned to go along with the tradition than to make individual judgments.

For Iliadic man to be able to think, he must think about something. To do this, states Havelock, he had to be able to revolt against the habit of self-identification with the epic poem. But identification with the poem at this time in history was necessary psychologically (identification was necessary for memorization). What is implicit in the epic story as acts or events that are carried out by important people must be abstracted from the narrative flux. “Thus the autonomous subject who no longer recalls and feels, but knows, can now be confronted with a thousand abstract laws, principles, topics, and formulas which become the objects of his knowledge.” 8

The analytic topoi, then, were implicit in oral poetic discourse. They were “experienced” in the patterns of epic narrative, but once they are abstracted they can become objects of thought as well as of experience. As Eric Havelock puts it,

If we view them [these abstractions] in relation to the epic narrative from which, as a matter of historical fact, they all emerged they can all be regarded as in one way or another classifications of an experience which was previously “felt” in an unclassified medley. This was as true of justice as of motion, of goodness as of body or space, of beauty as of weight or dimension. These categories turn into linguistic counters, and become used as a matter of course to relate one phenomenon to another in a non-epic, non-poetic, non-concrete idiom. 9

The invention of the alphabet made it easier to report experience in a non-epic idiom. But it might be a simplification to suppose that the advent of alphabetic technology was the only influence on the emergence of logical thinking and the analytic topics, although perhaps it was the major influence. Havelock contends that the first “proto-thinkers” of Greece were the poets who at first used rhythm and oral formulas to attempt to arrange experience in categories, rather than in narrative events. He mentions in particular that it was Hesiod who first parts company with the narrative in the Theogony and Works and Days. In Works and Days, Hesiod uses a cataloging technique, consisting of proverbs, aphorisms, wise sayings, exhortations, and parables, intermingled with stories. But this effect of cataloging that goes “beyond the plot of a story in order to impose a rough logic of topics . . .” presumes that Hesiod is […] 10

The kind of material found in the catalogs of Hesiod was more like the cumulative commonplace material of the Renaissance than the abstract topics that we are familiar with today. Walter Ong notes that “the oral performer, poet or orator needed a stock of material to keep him going. The doctrine of the commonplaces is, from one point of view, the codification of ways of assuring and managing this stock.” 11 We already know what some of the material was like: stock epithets, figures of speech, exempla, proverbs, sententiae, quotations, praises or censures of people and things, and brief treatises on virtues and vices. By the time we get to the invention of printing, there are vast collections of this commonplace material, so vast, relates Ong, that scholars could probably never survey it all. Ong goes on to observe that

print gave the drive to collect and classify such excerpts a potential previously undreamed of. . . . the ranging of items side by side on a page once achieved, could be multiplied as never before. Moreover, printed collections of such commonplace excerpts could be handily indexed; it was worthwhile spending days or months working up an index because the results of one’s labors showed fully in thousands of copies. 12

To summarize, then, in oral cultures rhetorical invention was bound up with oral performance. At this stage, both the cumulative topics and the analytic topics were implicit in epic narrative. Then the cumulative commonplaces begin to appear, separated out by a cataloging technique from poetic narrative, in sources such as the Theogony and Works and Days. Eric Havelock points out that in Hesiod, the catalog “has been isolated or abstracted . . . out of a thousand contexts in the rich reservoir of oral tradition. … A general world view is emerging in isolated or ‘abstracted’ form.” 13 Apparently, what we are witnessing is the emergence of logical thinking. Julian Jaynes describes the kind of thought to be found in the Works and Days as “preconscious hypostases.” Certain lines in Hesiod, he maintains, exhibit “some kind of bicameral struggle.” 14

The first stage, then, of rhetorical invention is that in which the analytic topoi are embedded in oral performance in the form of commonplace material as “relationships” in an undifferentiated matrix. Oral cultures preserve this knowledge by constantly repeating the fixed sayings and formulae. Mnemonic patterns, patterns of repetition, are not added to the thought of oral cultures. They are what the thought consists of.

Emerging selves: Representational foundations of subjectivity
by Wolfgang Prinz, Consciousness and Cognition

What, then, may mental selves be good for and why have they emerged during evolution (or, perhaps, human evolution or even early human history)? Answers to these questions used to take the form of stories explaining how the mental self came about and what advantages were associated with it. In other words, these are theories that construct hypothetical scenarios offering plausible explanations for why certain (groups of) living things that initially do not possess a mental self gain fitness advantages when they develop such an entity—with the consequence that they move from what we can call a self-less to a self-based or “self-morphic” state.

Modules for such scenarios have been presented occasionally in recent years by, for example, Dennett, 1990 and Dennett, 1992, Donald (2001), Edelman (1989), Jaynes (1976), Metzinger, 1993 and Metzinger, 2003, or Mithen (1996). Despite all the differences in their approaches, they converge around a few interesting points. First, they believe that the transition between the self-less and self-morphic state occurred at some stage during the course of human history—and not before. Second, they emphasize the cognitive and dynamic advantages accompanying the formation of a mental self. And, third, they also discuss the social and political conditions that promote or hinder the constitution of this self-morphic state. In the scenario below, I want to show how these modules can be keyed together to form a coherent construction. […]

Thus, where do thoughts come from? Who or what generates them, and how are they linked to the current perceptual situation? This brings us to a problem that psychology describes as the problem of source attribution (Heider, 1958).

One obvious suggestion is to transfer the schema for interpreting externally induced messages to internally induced thoughts as well. Accordingly, thoughts are also traced back to human sources and, likewise, to sources that are present in the current situation. Such sources can be construed in completely different ways. One solution is to trace the occurrence of thoughts back to voices—the voices of gods, priests, kings, or ancestors, in other words, personal authorities that are believed to have an invisible presence in the current situation. Another solution is to locate the source of thoughts in an autonomous personal authority bound to the body of the actor: the self.

These two solutions to the attribution problem differ in many ways: historically, politically, and psychologically. In historical terms, the former must be markedly older than the latter. The transition from one solution to the other and the mentalities associated with them are the subject of Julian Jaynes’s speculative theory of consciousness. He even considers that this transfer occurred during historical times: between the Iliad and the Odyssey. In the Iliad, according to Jaynes, the frame of mind of the protagonists is still structured in a way that does not perceive thoughts, feelings, and intentions as products of a personal self, but as the dictates of supernatural voices. Things have changed in the Odyssey: Odysseus possesses a self, and it is this self that thinks and acts. Jaynes maintains that the modern consciousness of Odysseus could emerge only after the self had taken over the position of the gods (Jaynes, 1976; see also Snell, 1975).

Moreover, it is obvious why the political implications of the two solutions differ so greatly: Societies whose members attribute their thoughts to the voices of mortal or immortal authorities produce castes of priests or nobles that claim to be the natural authorities or their authentic interpreters and use this to derive legitimization for their exercise of power. It is only when the self takes the place of the gods that such castes become obsolete, and authoritarian constructions are replaced by other political constructions that base the legitimacy for their actions on the majority will of a large number of subjects who are perceived to be autonomous.

Finally, an important psychological difference is that the development of a self-concept establishes the precondition for individuals to become capable of perceiving themselves as persons with a coherent biography. Once established, the self becomes involved in every re-presentation and representation as an implicit personal source, and just as the same body is always present in every perceptual situation, it is the same mental self that remains identical across time and place. […]

According to the cognitive theories of schizophrenia developed in the last decade (Daprati et al., 1997; Frith, 1992), these symptoms can be explained with the same basic pattern that Julian Jaynes uses in his theory to characterize the mental organization of the protagonists in the Iliad. Patients with delusions suffer from the fact that the standardized attribution schema that localizes the sources of thoughts in the self is not available to them. Therefore, they need to explain the origins of their thoughts, ideas, and desires in another way (see, e.g., Stephens & Graham, 2000). They attribute them to person sources that are present but invisible—such as relatives, physicians, famous persons, or extraterrestrials. Frequently, they also construct effects and mechanisms to explain how the thoughts proceeding from these sources are communicated, by, for example, voices or pictures transmitted over rays or wires, and nowadays frequently also over phones, radios, or computers. […]

As bizarre as these syndromes seem against the background of our standard concept of subjectivity and personhood, they fit perfectly with the theoretical idea that mental selves are not naturally given but rather culturally constructed, and in fact set up in attribution processes. The unity and consistency of the self are not a natural necessity but a cultural norm, and when individuals are exposed to unusual developmental and life conditions, they may well develop deviant attribution patterns. Whether these deviations are due to disturbances in attribution to persons or to disturbances in dual representation cannot be decided here. Both biological and societal conditions are involved in the formation of the self, and when they take an unusual course, the causes could lie in both domains.


“The Varieties of Dissociative Experience”
by Stanley Krippner
from Broken Images Broken Selves: Dissociative Narratives In Clinical Practice
pp. 339-341

In his provocative description of the evolution of humanity’s conscious awareness, Jaynes (1976) asserted that ancient people’s “bicameral mind” enabled them to experience auditory hallucinations— the voices of the deities— but they eventually developed an integration of the right and left cortical hemispheres. According to Jaynes, vestiges of this dissociation can still be found, most notably among the mentally ill, the extremely imaginative, and the highly suggestible. Even before the development of the cortical hemispheres, the human brain had slowly evolved from a “reptilian brain” (controlling breathing, fighting, mating, and other fixed behaviors), to the addition of an “old-mammalian brain,” (the limbic system, which contributed emotional components such as fear, anger, and affection), to the superimposition of a “new-mammalian brain” (responsible for advanced sensory processing and thought processes). MacLean (1977) describes this “triune brain” as responsible, in part, for distress and inefficiency when the parts do not work well together. Both Jaynes’ and MacLean’s theories are controversial, but I believe that there is enough autonomy in the limbic system and in each of the cortical hemispheres to justify Ornstein’s (1986) conclusion that human beings are much more complex and intricate than they imagine, consisting of “an uncountable number of small minds” (p. 72), sometimes collaborating and sometimes competing. Donald’s (1991) portrayal of mental evolution also makes use of the stylistic differences of the cerebral hemisphere, but with a greater emphasis on neuropsychology than Jaynes employs. Mithen’s (1996) evolutionary model is a sophisticated account of how specialized “cognitive domains” reached the point that integrated “cognitive fluidity” (apparent in art and the use of symbols) was possible.

James (1890) spoke of a “multitude” of selves, and some of these selves seem to go their separate ways in posttraumatic stress disorder (PTSD) (see Greening, Chapter 5), dissociative identity disorder (DID) (see Levin, Chapter 6), alien abduction experiences (see Powers, Chapter 9), sleep disturbances (see Barrett, Chapter 10), psychedelic drug experiences (see Greenberg, Chapter 11), death terrors (see Lapin, Chapter 12), fantasy proneness (see Lynn, Pintar, & Rhue, Chapter 13), near-death experiences (NDEs) (see Greyson, Chapter 7), and mediumship (see Grosso, Chapter 8). Each of these conditions can be placed into a narrative construction, and the value of these frameworks has been described by several authors (e.g., Barclay, Chapter 14; Lynn, Pintar, & Rhue, Chapter 13; White, Chapter 4). Barclay (Chapter 14) and Powers (Chapter 15) have addressed the issue of narrative veracity and validation, crucial issues when stories are used in psychotherapy. The American Psychiatric Association’s Board of Trustees (1993) felt constrained to issue an official statement that “it is not known what proportion of adults who report memories of sexual abuse were actually abused” (p. 2). Some reports may be fabricated, but it is more likely that traumatic memories may be misconstrued and elaborated (Steinberg, 1995, p. 55). Much of the same ambiguity surrounds many other narrative accounts involving dissociation, especially those described by White (Chapter 4) as “exceptional human experiences.”

Nevertheless, the material in this book makes the case that dissociative accounts are not inevitably uncontrolled and dysfunctional. Many narratives considered “exceptional” from a Western perspective suggest that dissociation once served and continues to serve adaptive functions in human evolution. For example, the “sham death” reflex found in animals with slow locomotor abilities effectively offers protection against predators with greater speed and agility. Uncontrolled motor responses often allow an animal to escape from dangerous or frightening situations through frantic, trial-and-error activity (Kretchmer, 1926). Many evolutionary psychologists have directed their attention to the possible value of a “multimodular” human brain that prevents painful, unacceptable, and disturbing thoughts, wishes, impulses, and memories from surfacing into awareness and interfering with one’s ongoing contest for survival (Nesse & Lloyd, 1992, p. 610). Ross (1991) suggests that Western societies suppress this natural and valuable capacity at their peril.

The widespread prevalence of dissociative reactions argues for their survival value, and Ludwig (1983) has identified seven of them: (1) The capacity for automatic control of complex, learned behaviors permits organisms to handle a much greater work load in as smooth a manner as possible; habitual and learned behaviors are permitted to operate with a minimum expenditure of conscious control. (2) The dissociative process allows critical judgment to be suspended so that, at times, gratification can be more immediate. (3) Dissociation seems ideally suited for dealing with basic conflicts when there is no instant means of resolution, freeing an individual to take concerted action in areas lacking discord. (4) Dissociation enables individuals to escape the bounds of reality, providing for inspiration, hope, and even some forms of “magical thinking.” (5) Catastrophic experiences can be isolated and kept in check through dissociative defense mechanisms. (6) Dissociative experiences facilitate the expression of pent-up emotions through a variety of culturally sanctioned activities. (7) Social cohesiveness and group action often are facilitated by dissociative activities that bind people together through heightened suggestibility.

Each of these potentially adaptive functions may be life-depotentiating as well as life-potentiating; each can be controlled as well as uncontrolled. A critical issue for the attribution of dissociation may be the dispositional set of the experiencer-in-context along with the event’s adaptive purpose. Salamon (1996) described her mother’s ability to disconnect herself from unpleasant surroundings or facts, a proclivity that led to her ignoring the oncoming imprisonment of Jews in Nazi Germany but that, paradoxically, enabled her to survive her years in Auschwitz. Gergen (1991) has described the jaundiced eye that modern Western science has cast toward Dionysian revelry, spiritual experiences, mysticism, and a sense of bonded unity with nature, a hostility he predicts may evaporate in the so-called “postmodern” era, which will “open the way to the full expression of all discourses” (pp. 246– 247). For Gergen, this postmodern lifestyle is epitomized by Proteus, the Greek sea god, who could change his shape from wild boar to dragon, from fire to flood, without obvious coherence through time. This is all very well and good, as long as this dissociated existence does not leave— in its wake— a residue of broken selves whose lives have lost any intentionality or meaning, who live in the midst of broken images, and whose multiplicity has resulted in nihilistic affliction and torment rather than in liberation and fulfillment (Glass, 1993, p. 59).

 

 

Confusion on Consciousness

There are many difficulties in dealing with Julian Jaynes’ theory of the bicameral mind, first argued in The Origin of Consciousness in the Breakdown of the Bicameral Mind. It attacks head-on the most daunting of challenges to our humanity: What is consciousness? From that, many questions follow.

Jaynes’ book has often been discussed, for decades at this point. Almost anyone who has heard about the idea of bicameralism has an opinion on it, whether or not they’ve read much about it. The book itself is a work of scholarship, and so few have bothered reading it. To be honest, it took me many years to finally get around to looking seriously at it, and even then I’ve never read it in a linear fashion (then again, I almost never read any book in a linear fashion).

There are a number of essays that deal solely with the issue of misunderstandings about the theory of bicameralism and post-bicameral consciousness. Confusion is to be expected, considering the complexity of the subject matter, involving multiple areas of scholarship. It was an ambitious work, to say the least. Few could attempt such a massive project. You have to give Jaynes credit for having the intellectual courage and vision to pull it off, even if you ultimately disagree with the conclusions.

Let me give some examples of the confusion that easily follows. The first one comes from a book that discusses bicameralism a bit: The Fall by Steve Taylor. I was only skimming it out of curiosity when I came across this quote (p. 142):

If pre-historic people had no self-consciousness, as Julian Jaynes suggests, they would also have had no awareness of death. But this wasn’t the case, of course, as their funerals, graves and afterlife beliefs testify.

That misses the point of the bicameral theory (as I explained in a comment to a review by Frank S. Robinson). If ancient societies actually were bicameral, they wouldn’t have had our dualistic experience of life and death. It’s not an issue of awareness of death, since death wouldn’t have been perceived as post-bicameral people perceived it. Dead people, in a sense, didn’t die.

Bicameral people, according to the theory, kept hearing the voices of people they had known in life. The memory of the person was experienced as still being part of the world. They wouldn’t merely remember the voice of a loved one, a priest, or a king, for memory to them would have been the voice of the person still speaking within their experience. That voice would go on speaking, until those who had known the person also died and there was no living memory left to call them back into existence.

Their burial practices, therefore, were done with such care because the person in question was still present to them. Such burial practices are in no way evidence against Jaynes’ theory. And their beliefs about an afterlife were continuous with their beliefs about the living world, no absolute demarcation required. The criticism by Taylor isn’t an actual counter-argument.

This is common. Few people seem able to grasp what Jaynes was trying to explain. That doesn’t mean valid criticisms can’t be made. But it is interesting that those who disagree with Jaynes so rarely make valid criticisms. The best critiques come from those like Iain McGilchrist who, in proposing a slightly different theory, are looking closely at the same kind of evidence that Jaynes knew so well. The problem is that those who dismiss Jaynes would also likely dismiss McGilchrist or anyone else who sincerely attempts to deal with this evidence.

Here is another example I came across. It’s from an essay, “Do Animals Need a ‘Theory of Mind’?” by Michael Bavidge and Ian Ground, in the book Against Theory of Mind, edited by I. Leudar and A. Costall (p. 177):

As an illustration of closet-Cartesianism in the discussion of TToM [Theory ‘Theory of Mind’] consider the controversy over mirror experiments on chimps. Julian Jaynes writes:

“that a mirror-educated chimpanzee immediately rubs off a spot on his forehead when he sees it in a mirror is not […] clear evidence for self-awareness, at least in its usual sense […] Our conscious selves are not our bodies […] we do not see our conscious selves in mirrors. Gallup’s chimpanzee has learnt a point to point relation between a mirror image and his body, wonderful as that is.”
(Jaynes, 1978, quoted in Kennedy, 1992, p. 108)

Here straightforwardly dualist thoughts — that ‘our conscious selves are not our bodies’ and ‘we do not see our conscious selves in mirrors’ — are used to object to the claim that chimps might have a concept of self.

Of course, viewed in a different light, Jaynes’ objection can be given a sense. As Hume pointed out, conscious selves could never appear in anything like a mirror:

“For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe any thing but the perception.”
(Hume, 2004, Book I, Part 4, Section 6)

That is, the self, chimp or human, conceived as the conscious owner of experience, could never be data at all, not even in an inner ‘mirror of introspection’. More likely, however, Jaynes simply thinks that selves just are the sort of things that could only appear in inner mirrors: this is Cartesianism disguised as stringent scientific methodology.

It’s hard to even make sense of what is being criticized.

Jaynes is making an argument about societies that were prior to Cartesianism and other forms of abstract dualistic thought. He hypothesizes that internal experience was metaphorically based on external experience. The point of the argument for bicameralism is to explain the close relationship between inner and outer, specifically in terms of identity formation.

If anything, that is the opposite of Cartesianism. It’s not clear that Bavidge and Ground even grasp what they are trying to criticize. This is compounded by the fact that they are responding to a quote that comes from yet another book, indicating they might not even have read Jaynes’ book or sought to understand any of the context around the quote. For the sake of clarity, here is more of the context (from the Afterword of the 1990 and later editions):

This conclusion is incorrect. Self-awareness usually means the consciousness of our own persona over time, a sense of who we are, our hopes and fears, as we daydream about ourselves in relation to others. We do not see our conscious selves in mirrors, even though that image may become the emblem of the self in many cases. The chimpanzees in this experiment and the two-year old child learned a point-to-point relation between a mirror image and the body, wonderful as that is. Rubbing a spot noticed in the mirror is not essentially different from rubbing a spot noticed on the body without a mirror. The animal is not shown to be imagining himself anywhere else, or thinking of his life over time, or introspecting in any sense — all signs of a conscious life.

This less interesting, more primitive interpretation was made even clearer by an ingenious experiment done in Skinner’s laboratory (Epstein, 1981). Essentially the same paradigm was followed with pigeons, except that it required a series of specific trainings with the mirror, whereas the chimpanzee or child in the earlier experiments was, of course, self-trained. But after about fifteen hours of such training when the contingencies were carefully controlled, it was found that a pigeon also could use a mirror to locate a blue spot on its body which it could not see directly, though it had never been explicitly trained to do so. I do not think that a pigeon because it can be so trained has a self-concept.

As can be seen, some important points were left out in the cut-up quote from John S. Kennedy’s book (The New Anthropomorphism). Besides, that brief mention is the only time Kennedy discusses Jaynes at all. Like Bavidge and Ground, Kennedy showed no evidence of grappling with the challenges of bicameral theory.

Such meager partial quotes and superficial commentary are the most that such people ever learn about Jaynes' theory of bicameralism. It's brought up only to be dismissed, often in just a few sentences, on the assumption that others must have already analyzed it elsewhere and so there is no point in taking it seriously now. It's crazy talk, plain absurd, and obviously wrong. All respectable thinkers already know this and so don't need to read the book in order to understand what was disproven long ago. This is intellectual laziness rooted in mainstream thought, or rather thoughtlessness.

In the full passage and throughout the rest of his book, Jaynes makes clear that a metaphorically imagined, interiorized, spatialized, and narrativized self-conscious identity (what Jaynes means by “consciousness”) isn't necessary to respond to a perceived spot on the body, whether perceived directly or in a mirror. The confusion is that few people trying to make sense of Jaynes' theory ever bother to understand his definition and explanation of consciousness, a more complicated issue than most realize, since our folk-psychology assumptions are rarely questioned. To put it simply, few people ever become conscious of their own beliefs and biases about consciousness, since their subjective perceptions are inseparable from their cultural conceptions.

Part of the struggle here is the strangeness of the evidence itself. Jaynes didn’t begin with a conclusion and then look for proof to confirm it. He came across ancient texts that described experiences that didn’t match what modern Westerners assume to be reality. That is a problem requiring a solution, even if one prefers a different kind of explanation.

So, what are we to do with such extreme inconsistencies between past and present uses of language in describing experience and identity? If we don't attempt to take the words of other people at face value, how do we avoid simply projecting our own assumptions and biases when interpreting those words? How can we ever come to terms with a foreign worldview that doesn't match our cultural expectations and frameworks of understanding? What if ancient humans weren't (and chimpanzees aren't) just simpler versions of modern Westerners?

Jaynes' answer to these questions and others could be wrong, partly or entirely. The debate about this hasn't ended; it has barely begun. But most people don't yet have even the conceptual framework and basic knowledge to understand what the debate is about, much less the capacity to join it. This is a tough nut to crack. Even four decades after its original publication, The Origin of Consciousness in the Breakdown of the Bicameral Mind should not be underestimated. That book was just an opening shot, as impressive as it was for its time. Dozens of books have been inspired by it and have brought the theory up to date, either with new evidence or entirely reformulated into new theories.

As with everything, if it is worth having an opinion about, it is worth spending the time to learn about and understand. Plus, it's fascinating. Let loose the reins of your imagination and let your curiosity get the better of you. Take it as a thought experiment. What if the human mind did radically change in the past? And what if it still has the potential for radical change? How would we know and recognize this? What harm would come from honestly and carefully looking at the evidence that doesn't fit our preconceptions?

How do we make the strange familiar?

I’ve been simultaneously looking at two books: This Is Your Brain on Parasites by Kathleen McAuliffe and Stranger Than We Can Imagine by John Higgs. The two relate, with the latter offering a larger context for the former. The theme of both might well be summed up with the word ‘strange’. The world is strange and becoming ever stranger. We are becoming aware of how utterly bizarre the world is, both within us and all around us.

The first is not only about parasites, despite the catchy title; it goes far beyond that. After all, most of the genetic material we carry around with us, including within our brains, is non-human. It's not merely that we are part of environments, for we are environments. We are mobile ecosystems with boundaries that are fluid and permeable.

For a popular science book, it covers a surprising amount of territory, and it does so with more depth than one might expect. Much of the research discussed is preliminary and exploratory, as the various scientific fields involved have been slow to emerge. This might be because of how much they challenge the world as we know it and society as it is presently ordered. The author also details other psychological factors, such as the resistance humans have to dealing with topics of perceived disgust.

To summarize the book, McAuliffe explores the conclusions and implications of research on parasitism and microbiomes in terms of neurocognitive functioning, behavioral tendencies, personality traits, political ideologies, population patterns, social structures, and culture. She relays some speculations from those involved in these fields, and what makes the speculations interesting is how they demonstrate the potential challenges of these new understandings. Whether or not we wish to take the knowledge and speculations seriously, the real-world consequences will remain to be dealt with somehow.

The most obvious line of thought is the powerful influence of environments. The world around us doesn't just affect us. It shapes who we are at a deep level and so shapes our entire society. There is no way to separate the social world from the natural world. This isn't fatalism, since we also shape our environments. The author points to the possibility that Western societies have liberalized at least partly because of the creation of healthier conditions that allow human flourishing. Not that long ago, all of the West was dominated by fairly extreme forms of social conservatism, violent ethnocentrism, authoritarian systems, and the like. Yet in the generations following the creation of sewer systems, clean water, environmental regulations, and improved healthcare, there was a revolution in Western social values along with vast improvements in human development.

In terms of intelligence, some call this the Moral Flynn Effect, a convergence of diverse improvements. And there is no reason to assume it will stop or won't spread further. We know the problems we face. We basically understand what those problems are, what causes them, and what alleviates them, even if it doesn't entirely eliminate them. So we know what we should do, assuming we actually wanted to create a better world. Most importantly, we have the monetary wealth, natural resources, and human capacity to implement what needs to be done. It's not a mystery, not beyond our comprehension and ability. But the general public has so far lacked this knowledge, for it takes a while for new information and understandings to spread; Enlightenment ideas, for example, developed over centuries, and it wasn't until movable-type printing became common that revolutions began. The ruling elite, as in the past, will join in solving these problems when fear of the masses forces them to finally act. Or else the present ruling elite will itself be eliminated, as happened in previous societies.

What is compelling about this book are the many causal links and correlations it shows. It matches closely with what is seen in other fields, forming a picture that can't be ignored. It's probably no accident that ethnocentric populations, socially conservative societies, authoritarian governments, and strict religions all happen to be found where there are high rates of disease, parasites, toxins, malnutrition, stress, poverty, inequality, and the like: all conditions that stunt and/or alter physical, neurocognitive, and psychological development.

For anti-democratic ruling elites, there is probably an intuitive or even conscious understanding that the only way to maintain social control is by keeping the masses to some degree unhealthy and stunted. If you let people develop more of their potential, they will start demanding more. If you let intelligence increase and education improve, individuals will start thinking for themselves and the public will start imagining new possibilities.

Maybe it's unsurprising that American conservatives have seen the greatest threat not just in public education but, more importantly, in public health. The political right doesn't fear the failures of the political left, the supposed waste of tax money. No, what they fear is that key leftist policies have been proven to work. The healthier, smarter, and better educated people become, the more they develop attitudes of social liberalism and anti-authoritarianism, which opens the possibility of radical imagination and radical action. Until people are free to more fully develop their potentials, freedom is a meaningless and empty abstraction. The last thing the political right wants, and sadly this includes many mainstream ‘liberals’, is a genuinely free population.

This creates a problem. The trajectory of Western civilization for centuries has been the improvement of all these conditions, which seems to near-inevitably create a progressive society. That isn't to say the West is perfect. Far from it. There is nothing special about the West, and even in the West there are still large parts of the population living in severe deprivation and oppression. But imagine what kind of world it would be if universal healthcare and education were provided to every person on the planet. This is within the realm of possibility at this very moment, if we chose to invest our resources in this way. In a single generation, we could transform civilization and solve (or at least shrink to manageable size) the worst social problems. There is absolutely nothing stopping us but ourselves. Instead, Western governments have been using their vast wealth and power to dominate other countries, making the world a worse place in the process and helping to create the very conditions that further undermine any hope for freedom and democracy. Blowing up hospitals, destroying infrastructure, and banning trade won't lead to healthier and more peaceful populations; if anything, the complete opposite.

A thought occurred to me. If environmental conditions are so important to how individuals and societies form, then maybe political ideologies are less key than we think, or at least not important in the way we normally think about them. Our beliefs about our society might be more result than cause (limited healthcare availability in the American South, for example, may be a central factor in maintaining its historical conservatism and authoritarianism). We have a hard time thinking outside of the conditions that have shaped our very minds.

That isn’t to say there is no feedback loop whereby ideology reinforces the conditions that made it possible. The point is that free individuals aren't fully possible in an unfree society, one in which individuals aren't free on a practical level to develop toward optimal health and ability. As such, fights over ideology miss an important point. The actual fight needs to be over the conditions that precede any particular ideological framing and conflict. On a practical level, we would be better off investing money and resources where they are needed most, in ways that practically improve lives, rather than simply imprisoning populations into submission or bombing entire societies into oblivion, either of which worsens the problems both for those people and for everyone else. The best way to fight crime and terrorism would be to improve the lives of all people. Imagine that!

The only reason we can have a public debate now is that conditions have finally improved just enough for these issues to be comprehensible, as we have begun to see their real-world impact in improving society. It would have been fruitless to attempt a public debate about public goods such as healthcare and education in centuries past, when even the notion of a ‘public’ still seemed radical. The conditions for a public with a voice to be heard had to be created first. Once that was in place, it is unsurprising that it took radicals like socialists to push to the next level by proposing public sanitation and public bakeries, based on the idea that health was a priority: if not an individual right, then a social responsibility. Now these kinds of socialist policies have become the norm in Western societies, the most basic level of a social safety net.

As I began reading McAuliffe's book, I came across Higgs' book. It wasn't immediately apparent that there was a connection between the two. Reading some reviews and interviews showed the importance Higgs placed on the role (hyper-)individualism has played this past century. And upon perusing the book, it became clear that he understood how this reaches beyond philosophy and politics, touching upon every aspect of our society, most certainly including science.

It was useful to think about the issue of micro-organisms in a larger historical context. McAuliffe doesn't shy away from the greater implications, but her writing is focused on a single area of study. To both of these books, we could also add such things as the research on epigenetics, which might further help transform our entire understanding of humanity. Taken together, it is clear that we are teetering on the edge of a paradigm shift, of an extent seen only a few times before. We live in a transitional era, but it isn't a smooth transition. As Higgs argues, the 20th century was a rupture, with what developed not being fully explicable in terms of what came before.

We are barely beginning to scratch the surface of our own ignorance, which is to say our potential new knowledge. We know just enough to realize how wrong mainstream views have been in the past. Our society was built upon and has been operating according to beliefs that have been proven partial, inaccurate, and false. The world is more complex and fascinating than we previously acknowledged.

Realizing we have been so wrong, how do we make it right going forward? What will it take for us to finally confront what we’ve ignored for so long? How do we make the strange familiar?

* * *

Donald Trump: Stranger Than We Can Imagine?
by David McConkey

Why Jeremy Corbyn makes sense in the age of the selfie
by John Higgs

Stranger Than We Can Imagine:
Making Sense of the Twentieth Century
by John Higgs
pp. 308-310

In the words of the American social physicist Alex Pentland, “It is time that we dropped the fiction of individuals as the unit of rationality, and recognised that our rationality is largely determined by the surrounding social fabric. Instead of being actors in markets, we are collaborators in determining the public good.” Pentland and his team distributed smartphones loaded with tracking software to a number of communities in order to study the vast amount of data the daily interactions of large groups generated. They found that the overriding factor in a whole range of issues, from income to weight gain and voting intentions, was not individual free will but the influence of others. The most significant factor deciding whether you would eat a doughnut was not willpower or good intentions, but whether everyone else in the office took one. As Pentland discovered, “The single biggest factor driving adoption of new behaviours was the behaviour of peers. Put another way, the effects of this implicit social learning were roughly the same size as the influence of your genes on your behaviour, or your IQ on your academic performance.”

A similar story is told by the research into child development and neuroscience. An infant is not born with language, logic and an understanding of how to behave in society. They are instead primed to acquire these skills from others. Studies of children who have been isolated from the age of about six months, such as those abandoned in the Romanian orphanages under the dictatorship of Nicolae Ceauşescu, show that they can never recover from the lost social interaction at that crucial age. We need others, it turns out, in order to develop to the point where we’re able to convince ourselves that we don’t need others.

Many aspects of our behaviour only make sense when we understand their social role. Laughter, for example, creates social bonding and strengthens ties within a group. Evolution did not make us make those strange noises for our own benefit. In light of this, it is interesting that there is so much humour on the internet.

Neuroscientists have come to view our sense of “self,” the idea that we are a single entity making rational decisions, as no more than a quirk of the mind. Brain-scanning experiments have shown that the mental processes that lead to an action, such as deciding to press a button, occur a significant period before the conscious brain believes it makes the decision to press the button. This does not indicate a rational individual exercising free will. It portrays the conscious mind as more of a spin doctor than a decision maker, rationalising the actions of the unconscious mind after the fact. As the Canadian-British psychologist Bruce Hood writes, “Our brain creates the experience of our self as a model – a cohesive, integrated character – to make sense of the multitude of experiences that assault our senses throughout our lifetime.”

In biology an “individual” is an increasingly complicated word to define. A human body, for example, contains ten times more non-human bacteria than it does human cells. Understanding the interaction between the two, from the immune system to the digestive organs, is necessary to understand how we work. This means that the only way to study a human is to study something more than that human.

Individualism trains us to think of ourselves as isolated, self-willed units. That description is not sufficient, either biologically, socially, psychologically, emotionally or culturally. This can be difficult to accept if you were raised in the twentieth century, particularly if your politics use the idea of a free individual as your primary touchstone. The promotion of individualism can become a core part of a person’s identity, and something that must be defended. This is ironic, because where did that idea come from? Was it created by the person who defends their individualism? Does it belong to them? In truth, that idea was, like most ideas, just passing through.

* * *

Social Conditions of an Individual’s Condition

Uncomfortable Questions About Ideology

To Put the Rat Back in the Rat Park

Rationalizing the Rat Race, Imagining the Rat Park

Social Disorder, Mental Disorder

The Desperate Acting Desperately

Homelessness and Mental Illness

It’s All Your Fault, You Fat Loser!

Morality-Punishment Link

Denying the Agency of the Subordinate Class

Freedom From Want, Freedom to Imagine

Ideological Realism & Scarcity of Imagination

The Unimagined: Capitalism and Crappiness

Neoliberalism: Dream & Reality

Moral Flynn Effect?

Racists Losing Ground: Moral Flynn Effect?

Immoral/Amoral Flynn Effect?

Of Mice and Men and Environments

What do we inherit? And from whom?

Radical & Moderate Enlightenments: Revolution & Reaction, Science & Religion

No One Knows