The Mind in the Body

“[In the Old Testament], human faculties and bodily organs enjoy a measure of independence that is simply difficult to grasp today without dismissing it as merely poetic speech or, even worse, ‘primitive thinking.’ […] In short, the biblical character presents itself to us more as parts than as a whole”
(Robert A. Di Vito, “Old Testament Anthropology and the Construction of Personal Identity”, pp. 227-228)

The Axial Age was a transitional stage following the collapse of the Bronze Age civilizations. In that transition, new mindsets mixed with old: what came before trying to contain the rupture, what was forming not yet fully born. Writing, texts, and laws were replacing voices that had gone quiet. Ancient forms of authorization were no longer as viscerally real and psychologically compelling. But the transition was long and slow, and in many ways continues to this day (e.g., authoritarianism as vestigial bicameralism).

One aspect was the changing experience of identity, as experienced within the body and the world. But let me take a step back. Hunter-gatherer societies commonly share the attribute of animism, in which the world is alive with voices, and along with this a sense of identity that, involving sensory immersion not limited to the body, extends into the surrounding environment. The bicameral mind seems to have been a reworking of this mentality for the emerging agricultural villages and city-states. Instead of the body as part of the natural environment, there was the body politic, with the community as a coherent whole, a living organism. Without a metaphorical framing of inside and outside as the crux of identity, as would later develop, self and other were defined by permeable collectivism rather than rigid individualism (the bundle theory of mind taken to the extreme of a bundle theory of society).

In the late Bronze Age, large and expansive theocratic hierarchies formed, and writing took on an increasingly greater role. All of this combined to make the bicameral order precarious. The act of writing and reading texts was still integrated with voice-hearing traditions, a text being the literal ‘word’ of a god, spirit, or ancestor. But writing the voices down began the process of creating psychological distance, the text itself starting to take on authority of its own. This became a competing metaphorical framing: truth and reality as text.

This transformed the perception of the body. The voices became harder to decipher. Hearing a voice of authority speak to you required little interpretation, but a text emphasizes the need for interpretation. Reading became a way of thinking about the world and about one’s way of being in the world. Divination and similar practices were the attempt to read the world. Clouds or lightning, the flight of birds or the organs of a sacrificial animal — these were texts to be read.

Likewise, the body became a repository of voices, although initially not quite a unitary whole. Different aspects of self and spirits, different energies and forces were located and contained in various organs and body parts — to the extent that they had minds of their own, a potentially distressing condition and sometimes interpreted as possession. As the bicameral community was a body politic, the post-bicameral body initiated the internalization of community. But this body as community didn’t at first have a clear egoic ruler — the need for this growing stronger as external authorization further weakened. Eventually, it became necessary to locate the ruling self in a particular place within, such as the heart or throat or head. This was a forceful suppression of the many voices and hence a disallowing of the perception of self as community. The narrative of individuality began to be told.

Even today, we go on looking for a voice in some particular location. Noam Chomsky’s theory of a language organ is an example of this. We struggle for authorization within consciousness, as the ancient grounding of authorization in the world and in community has been lost, cast into the shadows.

Still, dissociation having taken hold, the voices never disappear, and they continue to demand being heard, if only as symptoms of physical and psychological disease. Or else we let the thousand voices of media tell us how to think and what to do. Ultimately, trying to contain authorization within us is impossible, and so authorization spills back out into the world, the return of the repressed. Our sense of individualism is much more of a superficial rationalization than we’d like to admit. The social nature of our humanity can’t be denied.

As with post-bicameral humanity, we are still trying to navigate this complex and confounding social reality. Maybe that is why Axial Age religions, in first articulating the dilemma of conscious individuality, remain compelling in what they taught. The Axial Age prophets gave voice to our own ambivalence, and maybe that is what gives the ego such power over us. We moderns haven’t become disconnected and dissociated merely because of some recent affliction — such a state of mind is what we inherited, as the foundation of our civilization.

* * *

“Therefore when thou doest thine alms, do not sound a trumpet before thee, as the hypocrites do in the synagogues and in the streets, that they may have glory of men. Verily I say unto you, They have their reward. But when thou doest alms, let not thy left hand know what thy right hand doeth: That thine alms may be in secret: and thy Father which seeth in secret himself shall reward thee openly.” (Matthew 6:2-4)

“Wherefore if thy hand or thy foot offend thee, cut them off, and cast them from thee: it is better for thee to enter into life halt or maimed, rather than having two hands or two feet to be cast into everlasting fire. And if thine eye offend thee, pluck it out, and cast it from thee: it is better for thee to enter into life with one eye, rather than having two eyes to be cast into hell fire.” (Matthew 18:8-9)

The Prince of Medicine
by Susan P. Mattern
pp. 232-233

He mentions speaking with many women who described themselves as “hysterical,” that is, having an illness caused, as they believed, by a condition of the uterus (hystera in Greek) whose symptoms varied from muscle contractions to lethargy to nearly complete asphyxia (Loc. Affect. 6.5, 8.414K). Galen, very aware of Herophilus’s discovery of the broad ligaments anchoring the uterus to the pelvis, denied that the uterus wandered around the body like an animal wreaking havoc (the Hippocratics imagined a very actively mobile womb). But the uterus could, in his view, become withdrawn in some direction or inflamed; and in one passage he recommends the ancient practice of fumigating the vagina with sweet-smelling odors to attract the uterus, endowed in this view with senses and desires of its own, to its proper place; this technique is described in the Hippocratic Corpus but also evokes folk or shamanistic medicine.

“Between the Dream and Reality”:
Divination in the Novels of Cormac McCarthy

by Robert A. Kottage
pp. 50-52

A definition of haruspicy is in order. Known to the ancient Romans as the Etrusca disciplina or “Etruscan art” (P.B. Ellis 221), haruspicy originally included all three types of divination practiced by the Etruscan hierophant: interpretation of fulgura (lightnings), of monstra (birth defects and unusual meteorological occurrences), and of exta (internal organs) (Hammond). Of these, the practice still commonly associated with the term is the examination of organs, as evidenced by its OED definition: “The practice or function of a haruspex; divination by inspection of the entrails of victims” (“haruspicy”). A detailed science of liver divination developed in the ancient world, and instructional bronze liver models formed by the Etruscans—as well as those made by their predecessors the Hittites and Babylonians—have survived (Hammond). Any unusual features were noted and interpreted by those trained in the esoteric art: “Significant for the exta were the size, shape, colour, and markings of the vital organs, especially the livers and gall bladders of sheep, changes in which were believed by many races to arise supernaturally… and to be susceptible of interpretation by established rules” (Hammond). Julian Jaynes, in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind, comments on the unique quality of haruspicy as a form of divination, arriving as it did at the dawn of written language: “Extispicy [divining through exta] differs from other methods in that the metaphrand is explicitly not the speech or actions of the gods, but their writing. The baru [Babylonian priest] first addressed the gods… with requests that they ‘write’ their message upon the entrails of the animal” (Jaynes 243). Jaynes also remarks that organs found to contain messages of import would sometimes be sent to kings, like letters from the gods (Jaynes 244). Primitive man sought (and found) meaning everywhere.

The logic behind the belief was simple: the whole universe is a single, harmonious organism, with the thoughts and intentions of the intangible gods reflected in the tangible world. For those illiterate to such portents, a lightning bolt or the birth of a hermaphrodite would have been untranslatable; but for those with proper training, the cosmos was as alive with signs as any language:

The Babylonians believed that the decisions of their gods, like those of their kings, were arbitrary, but that mankind could at least guess their will. Any event on earth, even a trivial one, could reflect or foreshadow the intentions of the gods because the universe is a living organism, a whole, and what happens in one part of it might be caused by a happening in some distant part. Here we see a germ of the theory of cosmic sympathy formulated by Posidonius. (Luck 230)

This view of the capricious gods behaving like a human king is reminiscent of the evil archons of gnosticism; however, unlike gnosticism, the notion of cosmic sympathy implies an illuminated and vastly “readable” world, even in the darkness of matter. The Greeks viewed pneuma as “the substance that penetrates and unifies all things. In fact, this tension holds bodies together, and every coherent thing would collapse without it” (Lawrence)—a notion that diverges from the gnostic idea of pneuma as spiritual light temporarily trapped in the pall of physicality.

Proper vision, then, is central to all the offices of the haruspex. The world cooperates with the seer by being illuminated, readable.

p. 160

Jaynes establishes the important distinction between the modern notion of chance commonly associated with coin flipping and the attitude of the ancient Mesopotamians toward sortilege:

We are so used to the huge variety of games of chance, of throwing dice, roulette wheels, etc., all of them vestiges of this ancient practice of divination by lots, that we find it difficult to really appreciate the significance of this practice historically. It is a help here to realize that there was no concept of chance whatever until very recent times…. [B]ecause there was no chance, the result had to be caused by the gods whose intentions were being divined. (Jaynes 240)

In a world devoid of luck, proper divination is simply a matter of decoding the signs—bad readings are never the fault of the gods, but can only stem from the reader.

The Consciousness of John’s Gospel
A Prolegomenon to a Jaynesian-Jamesonian Approach

by Jonathan Bernier

When reading the prologue’s historical passages, one notes a central theme: the Baptist witnesses to the light coming into the world. Put otherwise, the historical witnesses to the cosmological. This, I suggest, can be understood as an example of what Jaynes (1976: 317–338) calls ‘the quest for authorization.’ As the bicameral mind broke down, as exteriorised thought ascribed to other-worldly agents gave way to interiorised thought ascribed to oneself, as the voices of the gods spoke less frequently, people sought out new means, extrinsic to themselves, by which to authorise belief and practice; they quite literally did not trust themselves. They turned to oracles and prophets, to auguries and haruspices, to ecstatics and ecstasy. Proclamatory prophecy of the sort practiced by John the Baptist should be understood in terms of the bicameral mind: the Lord God of Israel, external to the Baptist, issued imperatives to the Baptist, and then the Baptist, external to his audience, relayed those divine imperatives to his listeners. Those who chose to follow the Baptist’s imperatives operated according to the logic of the bicameral mind, as described by Jaynes (1976: 84–99): the divine voice speaks, therefore I act. That voice just happens now to be mediated through the prophet, and not apprehended directly in the way that the bicameral mind apprehended the voices and visions. The Baptist as witness to God’s words and Word is the Baptist as bicameral vestige.

By way of contrast, the Word-become-flesh can be articulated in terms of the bicameral mind giving way to consciousness. The Jesus of the prologue represents the apogee of interiorised consciousness: the Word is not just inside him, but he in fact is the Word. 1:17 draws attention to an implication consequent to this indwelling of the Word: with the divine Word – and thus also the divine words – dwelling fully within oneself, what need is there for that set of exteriorised thoughts known as the Mosaic Law? […]

[O]ne notes Jaynes’ (1976: 301, 318) suggestion that the Mosaic Law represents a sort of half-way house between bicameral exteriority and conscious interiority: no longer able to hear the voices, the ancient Israelites sought external authorisation in the written word; eventually, however, as the Jewish people became increasingly acclimated to conscious interiority, they became increasingly ambivalent towards the need for and role of such exteriorised authorisation. Jaynes (1976: 318) highlights Jesus’ place in this emerging ambivalence; however, in 1:17 it is not so much that exteriorised authorisation is displaced by interiorised consciousness but that Torah as exteriorised authority is replaced by Jesus as exteriorised authority. Jesus, the fully conscious Word-made-flesh, might displace the Law, but it is not altogether clear that he offers his followers a full turn towards interiorised consciousness; one might, rather, read 1:17 as a bicameral attempt to re-contain the cognition revolution of which Jaynes considers Jesus to be a flag-bearer.

The Discovery of the Mind
by Bruno Snell
pp. 6-8

We find it difficult to conceive of a mentality which made no provision for the body as such. Among the early expressions designating what was later rendered as soma or ‘body’, only the plurals γυῖα, μέλεα, etc. refer to the physical nature of the body; for chros is merely the limit of the body, and demas represents the frame, the structure, and occurs only in the accusative of specification. As it is, early Greek art actually corroborates our impression that the physical body of man was comprehended, not as a unit but as an aggregate. Not until the classical art of the fifth century do we find attempts to depict the body as an organic unit whose parts are mutually correlated. In the preceding period the body is a mere construct of independent parts variously put together.6 It must not be thought, however, that the pictures of human beings from the time of Homer are like the primitive drawings to which our children have accustomed us, though they too simply add limb to limb.

Our children usually represent the human shape as shown in fig. 1, whereas fig. 2 reproduces the Greek concept as found on the vases of the geometric period. Our children first draw a body as the central and most important part of their design; then they add the head, the arms and the legs. The geometric figures, on the other hand, lack this central part; they are nothing but μέλεα καὶ γυῖα, i.e. limbs with strong muscles, separated from each other by means of exaggerated joints. This difference is of course partially dependent upon the clothes they wore, but even after we have made due allowance for this the fact remains that the Greeks of this early period seem to have seen in a strangely ‘articulated’ way. In their eyes the individual limbs are clearly distinguished from each other, and the joints are, for the sake of emphasis, presented as extraordinarily thin, while the fleshy parts are made to bulge just as unrealistically. The early Greek drawing seeks to demonstrate the agility of the human figure, the drawing of the modern child its compactness and unity.

Thus the early Greeks did not, either in their language or in the visual arts, grasp the body as a unit. The phenomenon is the same as with the verbs denoting sight; in the latter, the activity is at first understood in terms of its conspicuous modes, of the various attitudes and sentiments connected with it, and it is a long time before speech begins to address itself to the essential function of this activity. It seems, then, as if language aims progressively to express the essence of an act, but is at first unable to comprehend it because it is a function, and as such neither tangibly apparent nor associated with certain unambiguous emotions. As soon, however, as it is recognized and has received a name, it has come into existence, and the knowledge of its existence quickly becomes common property. Concerning the body, the chain of events may have been somewhat like this: in the early period a speaker, when faced by another person, was apparently satisfied to call out his name: this is Achilles, or to say: this is a man. As a next step, the most conspicuous elements of his appearance are described, namely his limbs as existing side by side; their functional correlation is not apprehended in its full importance until somewhat later. True enough, the function is a concrete fact, but its objective existence does not manifest itself so clearly as the presence of the individual corporeal limbs, and its prior significance escapes even the owner of the limbs himself. With the discovery of this hidden unity, of course, it is at once appreciated as an immediate and self-explanatory truth.

This objective truth, it must be admitted, does not exist for man until it is seen and known and designated by a word; until, thereby, it has become an object of thought. Of course the Homeric man had a body exactly like the later Greeks, but he did not know it qua body, but merely as the sum total of his limbs. This is another way of saying that the Homeric Greeks did not yet have a body in the modern sense of the word; body, soma, is a later interpretation of what was originally comprehended as μέλη or γυῖα, i.e. as limbs. Again and again Homer speaks of fleet legs, of knees in speedy motion, of sinewy arms; it is in these limbs, immediately evident as they are to his eyes, that he locates the secret of life.7

Hebrew and Buddhist Selves:
A Constructive Postmodern Study

by Nicholas F. Gier

Finally, at least two biblical scholars–in response to the question “What good is this pre-modern self?”–have suggested that the Hebrew view (we add the Buddhist and the Chinese) can be used to counterbalance the dysfunctional elements of modern selfhood. Both Robert Di Vito and Jacqueline Lapsley have called this move “postmodern,” based, as they contend, on the concept of intersubjectivity.[3] In his interpretation of Charles S. Peirce as a constructive postmodern thinker, Peter Ochs observes that Peirce reaffirms the Hebraic view that relationality is knowledge at its most basic level. As Ochs states: “Peirce did not read Hebrew, but the ancient Israelite term for ‘knowledge’–yidiah–may convey Peirce’s claim better than any term he used. For the biblical authors, ‘to know’ is ‘to have intercourse with’–with the world, with one’s spouse, with God.”[4]

The view that the self is self-sufficient and self-contained is a seductive abstraction that contradicts the very facts of our interdependent existence.  Modern social atomism was most likely the result of modeling the self on an immutable transcendent deity (more Greek than biblical) and/or the inert isolated atom of modern science. […]

It is surprising to discover that the Buddhist skandhas are more mental in character, while the Hebrew self is more material in very concrete ways. For example, the Psalmist says that “all my inner parts (=heart-mind) bless God’s holy name” (103.1); his kidneys (=conscience) chastise him (16.7); and broken bones rejoice (51.8). Hebrew bones offer us the most dramatic example of a view of human essence most contrary to Christian theology. One’s essential core is not immaterial and invisible; rather, it is one’s bones, the most enduring remnant of a person’s being. When the nepeš “rejoices in the Lord” at Ps. 35.9, the poet, in typical parallel fashion, then has the bones speak for her in v. 10. Jeremiah describes his passion for Yahweh as a “fire” in his heart (lēb) that is also in his bones (20.9), just as we say that a great orator has “fire in his belly.” The bones of the exiles will form the foundation of those who will be restored by Yahweh’s rûah in Ezekiel 37, and later Pharisaic Judaism speaks of the bones of the deceased “sprouting” with new life in their resurrected bodies.[7] The bones of the prophet Elisha have special healing powers (2 Kgs. 13.21). Therefore, the cult of relic bones does indeed have scriptural basis, and we also note the obvious parallel to the worship of the Buddha’s bones.

With all these body parts functioning in various ways, it is hard to find, as Robert A. Di Vito suggests, “a true ‘center’ for the [Hebrew] person . . . a ‘consciousness’ or a self-contained ‘self.’”[8] Di Vito also observes that the Hebrew word for face (pānîm) is plural, reflecting all the ways in which a person appears in multifarious social interactions. The plurality of faces in Chinese culture is similar, including the “loss of face” when a younger brother fails to defer to his elder brother, who would have a different “face” with respect to his father. One may be tempted to say that the jīva is the center of the Buddhist self, but that would not be accurate because this term simply designates the functioning of all the skandhas together.

Both David Kalupahana and Peter Harvey demonstrate how much influence material form (rūpa) has on Buddhist personality, even at the highest stage of spiritual development.[9] It is Zen Buddhists, however, who match the earthy Hebrew rhetoric about the human person. When Bodhidharma (d. 534 CE) prepared to depart from his body, he asked four of his disciples what they had learned from him. As each of them answered they were offered a part of his body: his skin, his flesh, his bones, and his marrow. The Zen monk Nangaku also compared the achievements of his six disciples to six parts of his body. Deliberately inverting the usual priority of mind over body, the Zen monk Dogen (1200-1253) declared that “The Buddha Way is therefore to be attained above all through the body.”[10] Interestingly enough, the Hebrews rank the flesh, skin, bones, and sinews as the most essential parts of the body-soul.[11] The great Buddhist dialectician Nagarjuna (2nd Century CE) appears to be the source of Bodhidharma’s body correlates, but it is clear that Nagarjuna meant them as metaphors.[12] In contrast it seems clear that, although dead bones rejoicing is most likely a figure of speech, the Hebrews were convinced that we think, feel, and perceive through and with all parts of our bodies.

In Search of a Christian Identity
by Robert Hamilton

The essential points here are the “social disengagement” of the modern self, away from identifying solely with roles defined by the family group, and the development of a “personal unity” within the individual. Morally speaking, we are no longer empty vessels to be filled up by some god, or servant of god; we are now responsible for our own actions and decisions, in light of our own moral compass. I would like to mention Julian Jaynes’s seminal work, The Origin of Consciousness in the Breakdown of the Bicameral Mind, as a pertinent hypothesis for an attempt to understand the enormous distance between the modern sense of self and that of the ancient mind, with its largely absent subjective state.[13]

“The preposterous hypothesis we have come to in the previous chapter is that at one time human nature was split in two, an executive part called a god, and a follower part called a man.”[14]

This hypothesis sits very well with Di Vito’s description of the permeable personal identity of Old Testament characters, who are “taken over,” or possessed, by Yahweh.[15] The evidence of the Old Testament stories points in this direction, where we have patriarchal family leaders, like Abraham and Noah, going around making morally contentious decisions (in today’s terms) based on their internal dialogue with a god – Jehovah.[16] As Jaynes postulates later in his book, today we would call this behaviour schizophrenia. Di Vito, later in the article, confirms that:

“Of course, this relative disregard for autonomy in no way limits one’s responsibility for conduct–not even when Yhwh has given ‘statutes that were not good’ in order to destroy Israel” (Ezek 20:25-26).[17]

Cognitive Perspectives on Early Christology
by Daniel McClellan

The insights of CSR [cognitive science of religion] also better inform our reconstruction of early Jewish concepts of agency, identity, and divinity. Almost twenty years ago, Robert A. Di Vito argued from an anthropological perspective that the “person” in the Hebrew Bible “is more radically decentered, ‘dividual,’ and undefined with respect to personal boundaries … [and] in sharp contrast to modernity, it is identified more closely with, and by, its social roles.”40 Personhood was divisible and permeable in the Hebrew Bible, and while there was diachronic and synchronic variation in certain details, the same is evident in the literature of Second Temple Judaism and early Christianity. This is most clear in the widespread understanding of the spirit (רוח) and the soul (נפש) – often used interchangeably – as the primary loci of a person’s agency or capacity to act.41 Both entities were usually considered primarily constitutive of a person’s identity, but also distinct from their physical body and capable of existence apart from it.42 The physical body could also be penetrated or overcome by external “spirits,” and such possession imposed the agency and capacities of the possessor.43 The God of Israel was largely patterned after this concept of personhood,44 and was similarly partible, with God’s glory (Hebrew: כבוד; Greek: δόξα), wisdom (חכמה/σοφία), spirit (רוח/πνεῦμα), word (דבר/λόγος), presence (שכינה), and name (שם/ὄνομα) operating as autonomous and sometimes personified loci of agency that could presence the deity and also possess persons (or cultic objects45) and/or endow them with special status or powers.46

Did Christianity lead to schizophrenia?
Psychosis, psychology and self reference

by Roland Littlewood

This new deity could be encountered anywhere—“Wherever two are gathered in my name” (Matthew 18.20)—for Christianity was universal and individual (“neither Jew nor Greek… bond nor free… male or female, for you are all one man in Christ Jesus” says St. Paul). And ultimate control rested with Him, Creator and Master of the whole universe, throughout the whole universe. No longer was there any point in threatening your recalcitrant (Egyptian) idol for not coming up with the goods (Cumont, 1911/1958, p. 93): as similarly in colonial Africa, at least according to the missionaries (Peel, 2000). If God was independent of social context and place, then so was the individual self at least in its conversations with God (as Dilthey argues). Religious status was no longer signalled by external signs (circumcision), or social position (the higher stages of the Roman priesthood had been occupied by aspiring politicians in the course of their career: “The internal status of the officiating person was a matter of… indifference to the celestial spirits” [Cumont, 1911/1958, p. 91]). “Now it is not our flesh that we must circumcise, we must crucify ourselves, exterminate and mortify our unreasonable desires” (John Chrysostom, 1979), “circumcise your heart” says “St. Barnabas” (2003, p. 45) for religion became internal and private. Like the African or Roman self (Mauss, 1938/1979), the Jewish self had been embedded in a functioning society, individually decentred and socially contextualised (Di Vito, 1999); it survived death only through its bodily descendants: “But Abram cried, what can you give me, seeing I shall die childless” (Genesis 15.2). To die without issue was extinction in both religious systems (Madigan & Levenson, 2008). But now an enduring part of the self, or an associate of it—the soul—had some sort of ill-defined association with what might be called body and consciousness. In its earthly body it was in potential communication with God.
Like God it was immaterial and immortal. (The associated resurrection of the physical body, though an essential part of Christian dogma, has played an increasingly less important part in the Church [cf. Stroumsa, 1990].) For 19th-century pagan Yoruba who already accepted some idea of a hereafter, each village has its separate afterlife which had to be fused by the missionaries into a more universal schema (Peel, 2000, p. 175). If the conversation with God was one to one, then each self-aware individual had then to make up their own mind on adherence—and thus the detached observer became the surveyor of the whole world (Dumont, 1985). Sacral and secular became distinct (separate “functions” as Dumont calls them), further presaging a split between psychological faculties. The idea of the self/soul as an autonomous unit facing God became the basis, via the stages Mauss (1938/1979) briefly outlines, for a political philosophy of individualism (MacFarlane, 1978). The missionaries in Africa constantly attempted to reach the inside of their converts, but bemoaned that the Yoruba did not seem to have any inward core to the self (Peel, 2000, Chapter 9).

Embodying the Gospel:
Two Exemplary Practices

by Joel B. Green
pp. 12-16

Philosopher Charles Taylor’s magisterial account of the development of personal identity in the West provides a useful point of entry into this discussion. He shows how modern assumptions about personhood in the West developed from Augustine in the fourth and fifth centuries, through major European philosophers in the seventeenth and eighteenth centuries (e.g., Descartes, Locke, Kant), and into the present. The result is a modern human “self defined by the powers of disengaged reason—with its associated ideals of self-responsible freedom and dignity—of self-exploration, and of personal commitment.”2 These emphases provide a launching point for our modern conception of “inwardness,” that is, the widespread view that people have an inner self, which is the authentic self.

Given this baseline understanding of the human person, it would seem only natural to understand conversion in terms of interiority, and this is precisely what William James has done for the modern West. In his enormously influential 1901–02 Gifford Lectures at Edinburgh University, published in 1902 under the title The Varieties of Religious Experience, James identifies salvation as the resolution of a person’s inner, subjective crisis. Salvation for James is thus an individual, instantaneous, feeling-based, interior experience.3 Following James, A.D. Nock’s celebrated study of conversion in antiquity reached a similar conclusion: “By conversion we mean the reorientation of the soul of an individual, his [sic] deliberate turning from indifference or from an earlier form of piety to another, a turning which involves a consciousness that a great change is involved, that the old was wrong and the new is right.” Nock goes on to write of “a passion of willingness and acquiescence, which removes the feeling of anxiety, a sense of perceiving truths not known before, a sense of clean and beautiful newness within and without and an ecstasy of happiness . . .”4 In short, what is needed is a “change of heart.”

However pervasive they may be in the contemporary West, whether inside or outside the church, such assumptions actually sit uneasily with Old and New Testament portraits of humanity. Let me mention two studies that press our thinking in an alternative direction. Writing with reference to Old Testament anthropology, Robert Di Vito finds that the human “(1) is deeply embedded, or engaged, in its social identity, (2) is comparatively decentered and undefined with respect to personal boundaries, (3) is relatively transparent, socialized, and embodied (in other words, is altogether lacking in a sense of ‘inner depths’), and (4) is ‘authentic’ precisely in its heteronomy, in its obedience to another and dependence upon another.”5 Two aspects of Di Vito’s summary are of special interest: first, his emphasis on a more communitarian experience of personhood; and second, his emphasis on embodiment. Were we to take seriously what these assumptions might mean for embracing and living out the Gospel, we might reflect more on what it means to be saved within the community of God’s people and, indeed, what it means to be saved in relation to the whole of God’s creation. We might also reflect less on conversion as decision-making and more on conversion as pattern-of-life.

The second study, by Klaus Berger, concerns the New Testament. Here, Berger investigates the New Testament’s “historical psychology,” repeatedly highlighting both the ease with which we read New Testament texts against modern understandings of humanity and the problems resident in our doing so.6 His list of troublesome assumptions—troublesome because they are more at home in the contemporary West than in the ancient Mediterranean world—includes these dualities, sometimes even dichotomies: doing and being, identity and behavior, internal and external. A more integrated understanding of people, the sort we find in the New Testament world, he insists, would emphasize life patterns that hold together believing, thinking, feeling, and behaving, and allow for a clear understanding that human behavior in the world is both simply and profoundly embodied belief. Perspectives on human transformation that take their point of departure from this “psychology” would emphasize humans in relationship with other humans, the bodily nature of human allegiances and commitments, and the fully integrated character of human faith and life. […]

Given how John’s message is framed in an agricultural context, it is not a surprise that his point turns on an organic metaphor rather than a mechanical one. The resulting frame has no room for prioritizing inner (e.g., “mind” or “heart”) over outer (e.g., “body” or “behavior”), nor of fitting disparate pieces together to manufacture a “product,” nor of correlating status and activity as cause and effect. Organic metaphors neither depend on nor provoke images of hierarchical systems but invite images of integration, interrelation, and interdependence. Consistent with this organic metaphor, practices do not occupy a space outside the system of change, but are themselves part and parcel of the system. In short, John’s agricultural metaphor inseparably binds “is” and “does” together.

Resurrection and the Restoration of Israel:
The Ultimate Victory of the God of Life
by Jon Douglas Levenson
pp. 108-114

In our second chapter, we discussed one of the prime warrants often adduced either for the rejection of resurrection (by better-informed individuals) or for its alleged absence, and the alleged absence of any notion of the afterlife, in Judaism (by less informed individuals). That warrant is the finality of death in the Hebrew Bible, or at least in most of it, and certainly in what is from a Jewish point of view its most important subsection, the first five books. For no resurrections take place therein, and predictions of a general resurrection at the end of time can be found in the written Torah only through ingenious derash of the sort that the rabbinic tradition itself does not univocally endorse or replicate in its translations. In the same chapter, we also identified one difficulty with this notion that the Pentateuch exhibits no possibility of an afterlife but supports, instead, the absolute finality of death, and to this point we must now return. I am speaking of the difficulty of separating individuals from their families (including the extended family that is the nation). If, in fact, individuals are fundamentally and inextricably embedded within their families, then their own deaths, however terrifying in prospect, will lack the finality that death carries with it in a culture with a more individualistic, atomistic understanding of the self. What I am saying here is something more radical than the truism that in the Hebrew Bible, parents draw consolation from the thought that their descendants will survive them (e.g., Gen 48:11), just as, conversely, the parents are plunged into a paralyzing grief at the thought that their progeny have perished (e.g., Gen 37:33–35; Jer 31:15). This is, of course, the case, and probably more so in the ancient world, where children were the support of one’s old age, than in modern societies, where the state and the pension fund fill many roles previously concentrated in the family.
That to which I am pointing, rather, is that the self of an individual in ancient Israel was entwined with the self of his or her family in ways that are foreign to the modern West, and became foreign to some degree already long ago.

Let us take as an example the passage in which Jacob is granted ‘‘the blessing of Abraham,’’ his grandfather, according to the prayer of Isaac, his father, to ‘‘possess the land where you are sojourning, which God assigned to Abraham’’ (Gen 28:1–4). The blessing on Abraham, as we have seen, can be altogether and satisfactorily fulfilled in Abraham’s descendants. Thus, too, can Ezekiel envision the appointment of ‘‘a single shepherd over [Israel] to tend them—My servant David,’’ who had passed away many generations before (Ezek 34:23). Can we, without derash, see in this a prediction that David, king of Judah and Israel, will be raised from the dead? To do so is to move outside the language of the text and the culture of Israel at the time of Ezekiel, which does not speak of the resurrections of individuals at all. But to say, as the School of Rabbi Ishmael said about ‘‘to Aaron’’ in Num 18:28,1 that Ezekiel means only one who is ‘‘like David’’—a humble shepherd boy who comes to triumph in battle and rises to royal estate, vindicating his nation and making it secure and just—is not quite the whole truth, either. For biblical Hebrew is quite capable of saying that one person is ‘‘like’’ another or descends from another’s lineage (e.g., Deut 18:15; 2 Kgs 22:2; Isa 11:1) without implying identity of some sort. The more likely interpretation, rather, is that Ezekiel here predicts the miraculous appearance of a royal figure who is not only like David but also of David, a person of Davidic lineage, that is, who functions as David redivivus. This is not the resurrection of a dead man, to be sure, but neither is it the appearance of some unrelated person who only acts like David, or of a descendant who is ‘‘a chip off the old block.’’ David is, in one obvious sense, dead and buried (1 Kgs 2:10), and his death is final and irreversible. 
In another sense, harder for us to grasp, however, his identity survives him and can be manifested again in a descendant who acts as he did (or, to be more precise, as Ezekiel thought he acted) and in whom the promise to David is at long last fulfilled. For David’s identity was not restricted to the one man of that name but can reappear to a large measure in kin who share it.

This is obviously not reincarnation. For that term implies that the ancient Israelites believed in something like the later Jewish and Christian ‘‘soul’’ or like the notion (such as one finds in some religions) of a disembodied consciousness that can reappear in another person after its last incarnation has died. In the Hebrew Bible, however, there is nothing of the kind. The best approximation is the nepes, the part of the person that manifests his or her life force or vitality most directly. James Barr defines the nepes as ‘‘a superior controlling centre which accompanies, exposes and directs the existence of that totality [of the personality] and one which, especially, provides the life to the whole.’’2 Although the nepes does exhibit a special relationship to the life of the whole person, it is doubtful that it constitutes ‘‘a superior controlling center.’’ As Robert Di Vito points out, ‘‘in the OT, human faculties and bodily organs enjoy a measure of independence that is simply difficult to grasp today without dismissing it as merely poetic speech or, even worse, ‘primitive thinking.’’’ Thus, the eye talks or thinks (Job 24:15) and even mocks (Prov 30:17), the ear commends or pronounces blessed (Job 29:11), blood cries out (Gen 4:10), the nepes (perhaps in the sense of gullet or appetite) labors (Prov 16:26) or pines (Ps 84:3), kidneys rejoice and lips speak (Prov 23:16), hands shed blood (Deut 21:7), the heart and flesh sing (Ps 84:3), all the psalmist’s bones say, ‘‘Lord, who is like you?’’ (Ps 35:10), tongue and lips lie or speak the truth (Prov 12:19, 22), hearts are faithful (Neh 9:8) or wayward (Jer 5:23), and so forth.3 The point is not that the individual is simply an agglomeration of distinct parts. It is, rather, that the nepes is one part of the self among many and does not control the entirety, as the old translation ‘‘soul’’ might lead us to expect.4 A similar point might be made about the modern usage of the term person.

[4. It is less clear to me that this is also Di Vito’s point. He writes, for example: ‘‘The biblical character presents itself to us more as parts than as a whole . . . accordingly, in the OT one searches in vain for anything really corresponding to the Platonic localization of desire and emotion in a central ‘locale,’ like the ‘soul’ under the hegemony of reason, a unified and self-contained center from which the individual’s activities might flow, a ‘self’ that might finally assert its control’’ (‘‘Old Testament Anthropology,’’ 228).]

All of the organs listed above, Di Vito points out, are ‘‘susceptible to moral judgment and evaluation.’’5 Not only that, parts of the body besides the nepes can actually experience emotional states. As Aubrey R. Johnson notes, ‘‘Despondency, for example, is felt to have a shriveling effect upon the bones . . . just as they are said to decay or become soft with fear or distress, and so may be referred to as being themselves troubled or afraid’’ (e.g., Ezek 37:11; Hab 3:16; Jer 23:9; Ps 31:11). In other words, ‘‘the various members and secretions of the body . . . can all be thought of as revealing psychical properties,’’6 and this is another way of saying that the nepes does not really correspond to Barr’s ‘‘superior controlling centre’’ at all. For many of the functions here attributed to the nepes are actually distributed across a number of parts of the body. The heart, too, often functions as the ‘‘controlling centre,’’ determining, for example, whether Israel will follow God’s laws or not (e.g., Ezek 11:19). The nepes in the sense of the life force of the body is sometimes identified with the blood, rather than with an insensible spiritual essence of the sort that words like ‘‘soul’’ or ‘‘person’’ imply. It is in light of this that we can best understand the Pentateuchal laws that forbid the eating of blood on the grounds that it is the equivalent of eating life itself, eating, that is, an animal that is not altogether dead (Lev 17:11, 14; Deut 12:23; cf. Gen 9:4–5). If the nepes ‘‘provides the life to the whole,’’7 so does the blood, with which laws like these, in fact, equate it. The bones, which, as we have just noted, can experience emotional states, function likewise on occasion. When a dead man is hurriedly thrown into Elisha’s grave in 2 Kgs 13:21, it is contact with the wonder-working prophet’s bones that brings about his resurrection. 
And when the primal man at long last finds his soul mate, he exclaims not that she (unlike the animals who have just been presented to him) shares a nepes with him but rather that she ‘‘is bone of my bones / And flesh of my flesh’’ (Gen 2:23).

In sum, even if the nepes does occasionally function as a ‘‘controlling centre’’ or a provider of life, it does not do so uniquely. The ancient Israelite self is more dynamic and internally complex than such a formulation allows. It should also be noticed that unlike the ‘‘soul’’ in most Western philosophy, the biblical nepes can die. When the non-Israelite prophet Balaam expresses his wish to ‘‘die the death of the upright,’’ it is his nepes that he hopes will share their fate (Num 23:10), and the same applies to Samson when he voices his desire to die with the Philistines whose temple he then topples upon all (Judg 16:30). Indeed, ‘‘to kill the nepes’’ functions as a term for homicide in biblical Hebrew, in which context, as elsewhere, it indeed has a meaning like that of the English ‘‘person’’ (e.g., Num 31:19; Ezek 13:19).8 As Hans Walter Wolff puts it, nepes ‘‘is never given the meaning of an indestructible core of being, in contradistinction to the physical life . . . capable of living when cut off from that life.’’9 Like heart, blood, and bones, the nepes can cease to function. It is not quite correct to say, however, that this is because it is ‘‘physical’’ rather than ‘‘spiritual,’’ for the other parts of the self that we consider physical— heart, blood, bones, or whatever—are ‘‘spiritual’’ as well—registering emotions, reacting to situations, prompting behavior, expressing ideas, each in its own way. 
A more accurate summary statement would be Johnson’s: ‘‘The Israelite conception of man [is] as a psycho-physical organism.’’10 ‘‘For some time at least [after a person’s death] he may live on as an individual (apart from his possible survival within the social unit),’’ observes Johnson, ‘‘in such scattered elements of his personality as the bones, the blood and the name.’’11 It would seem to follow that if ever he is to return ‘‘as a psycho-physical organism,’’ it will have to be not through reincarnation of his soul in some new person but through the resurrection of the body, with all its parts reassembled and revitalized. For in the understanding of the Hebrew Bible, a human being is not a spirit, soul, or consciousness that happens to inhabit this body or that—or none at all. Rather, the unity of body and soul (to phrase the point in the unhappy dualistic vocabulary that is still quite removed from the way the Hebrew Bible thought about such things) is basic to the person. It thus follows that however distant the resurrection of the dead may be from the understanding of death and life in ancient Israel, the concept of immortality in the sense of a soul that survives death is even more distant. And whatever the biblical problems with the doctrine of resurrection—and they are formidable—the biblical problems with the immortality that modern Jewish prayer books prefer (as we saw in our first chapter) are even greater.

Di Vito points, however, to an aspect of the construction of the self in ancient Israel that does have some affinities with immortality. This is the thorough embeddedness of that individual within the family and the corollary difficulty in the context of this culture of isolating a self apart from the kin group. Drawing upon Charles Taylor’s highly suggestive study The Sources of the Self,12 Di Vito points out that ‘‘salient features of modern identity, such as its pronounced individualism, are grounded in modernity’s location of the self in the ‘inner depths’ of one’s interiority rather than in one’s social role or public relations.’’13 Cautioning against the naïve assumption that ancient Israel adhered to the same conception of the self, Di Vito develops four points of contrast between modern Western and ancient Israelite thinking on this point. In the Hebrew Bible,

the subject (1) is deeply embedded, or engaged, in its social identity, (2) is comparatively decentered and undefined with respect to personal boundaries, (3) is relatively transparent, socialized, and embodied (in other words, is altogether lacking in a sense of ‘‘inner depths’’), and (4) is ‘‘authentic’’ precisely in its heteronomy, in its obedience to another and dependence upon another.14

Although Di Vito’s formulation is overstated and too simple—is every biblical figure, even David, presented as ‘‘altogether lacking in a sense of ‘inner depths’’’?—his first and last points are highly instructive and suggest that the familial and social understanding of ‘‘life’’ in the Hebrew Bible is congruent with larger issues in ancient Israelite culture. ‘‘Life’’ and ‘‘death’’ mean different things in a culture like ours, in which the subject is not so ‘‘deeply embedded . . . in its social identity’’ and in which authenticity tends to be associated with cultivation of individual traits at the expense of conformity, and with the attainment of personal autonomy and independence.

The contrast between the biblical and the modern Western constructions of personal identity is glaring when one considers the structure of what Di Vito calls ‘‘the patriarchal family.’’ This ‘‘system,’’ he tells us, ‘‘with strict subordination of individual goals to those of the extended lineal group, is designed to ensure the continuity and survival of the family.’’15 In this, of course, such a system stands in marked contrast to liberal political theory that has developed over the past three and a half centuries, which, in fact, virtually assures that people committed to that theory above all else will find the Israelite system oppressive. For the liberal political theory is one that has increasingly envisioned a system in which society is composed of only two entities, the state and individual citizens, all of whom have equal rights quite apart from their familial identities and roles. Whether or not one affirms such an identity or plays the role that comes with it (or any role different from that of other citizens) is thus relegated to the domain of private choice. Individuals are guaranteed the freedom to renounce the goals of ‘‘the extended lineal group’’ and ignore ‘‘the continuity and survival of the family,’’ or, increasingly, to redefine ‘‘family’’ according to their own private preferences. In this particular modern type of society, individuals may draw consolation from the thought that their group (however defined) will survive their own deaths. As we have had occasion to remark, there is no reason to doubt that ancient Israelites did so, too. But in a society like ancient Israel, in which ‘‘the subject . . . is deeply embedded, or engaged, in its social identity,’’ ‘‘with strict subordination of individual goals to those of the extended lineal group,’’ the loss of the subject’s own life and the survival of the familial group cannot but have a very different resonance from the one most familiar to us.
For even though the subject’s death is irreversible—his or her nepes having died just like the rest of his or her body/soul—his or her fulfillment may yet occur, for identity survives death. God can keep his promise to Abraham or his promise to Israel associated with the gift of David even after Abraham or David, as an individual subject, has died. Indeed, in light of Di Vito’s point that ‘‘the subject . . . is comparatively decentered and undefined with respect to personal boundaries,’’ the very distinction between Abraham and the nation whose covenant came through him (Genesis 15; 17), or between David and the Judean dynasty whom the Lord has pledged never to abandon (2 Sam 7:8–16; Ps 89:20–38), is too facile.

Our examination of personal identity in the earlier literature of the Hebrew Bible thus suggests that the conventional view is too simple: death was not final and irreversible after all, at least not in the way in which we are inclined to think of these matters. This is not, however, because individuals were believed to possess an indestructible essence that survived their bodies. On the one hand, the body itself was thought to be animated in ways foreign to modern materialistic and biologistic thinking, but, on the other, even its most spiritual part, its nepeš (life force) or its nĕšāmâ (breath), was mortal. Rather, the boundary between individual subjects and the familial/ethnic/national group in which they dwelt, to which they were subordinate, and on which they depended was so fluid as to rob death of some of the horror it has in more individualistic cultures, influenced by some version of social atomism. In more theological texts, one sees this in the notion that subjects can die a good death, ‘‘old and contented . . . and gathered to [their] kin,’’ like Abraham, who lived to see a partial—though only a partial—fulfillment of God’s promise of land, progeny, and blessing upon him, or like Job, also ‘‘old and contented’’ after his adversity came to an end and his fortunes—including progeny—were restored (Gen 25:8; Job 42:17). If either of these patriarchal figures still felt terror in the face of his death, even after his afflictions had been reversed, the Bible gives us no hint of it.16 Death in situations like these is not a punishment, a cause for complaint against God, or the provocation of an existential crisis. But neither is it death as later cultures, including our own, conceive it.

Given this embeddedness in family, there is in Israelite culture, however, a threat that is the functional equivalent to death as we think of it. This is the absence or loss of descendants.

The Master and His Emissary
by Iain McGilchrist
pp. 263-264

Whoever it was that composed or wrote them [the Homeric epics], they are notable for being the earliest works of Western civilisation that exemplify a number of characteristics that are of interest to us. For in their most notable qualities – their ability to sustain a unified theme and produce a single, whole coherent narrative over a considerable length, in their degree of empathy, and insight into character, and in their strong sense of noble values (Scheler’s Lebenswerte and above) – they suggest a more highly evolved right hemisphere.

That might make one think of the importance to the right hemisphere of the human face. Yet, despite this, there are in Homeric epic few descriptions of faces. There is no doubt about the reality of the emotions experienced by the figures caught up in the drama of the Iliad or the Odyssey: their feelings of pride, hate, envy, anger, shame, pity and love are the stuff of which the drama is made. But for the most part these emotions are conveyed as relating to the body and to bodily gesture, rather than the face – though there are moments, such as at the reunion of Penelope and Odysseus at the end of the Odyssey, when we seem to see the faces of the characters, Penelope’s eyes full of tears, those of Odysseus betraying the ‘ache of longing rising from his breast’. The lack of emphasis on the face might seem puzzling at a time of increasing empathic engagement, but I think there is a reason for this.

In Homer, as I mentioned in Part I, there was no word for the body as such, nor for the soul or the mind, for that matter, in the living person. The sōma was what was left on the battlefield, and the psuchē was what took flight from the lips of the dying warrior. In the living person, when Homer wants to speak of someone’s mind or thoughts, he refers to what is effectively a physical organ – Achilles, for example, ‘consulting his thumos’. Although the thumos is a source of vital energy within that leads us to certain actions, the thumos has fleshly characteristics such as requiring food and drink, and a bodily situation, though this varies. According to Michael Clarke’s Flesh and Spirit in the Songs of Homer, Homeric man does not have a body or a mind: ‘rather this thought and consciousness are as inseparable a part of his bodily life as are movement and metabolism’. 15 The body is indistinguishable from the whole person. 16 ‘Thinking, emotion, awareness, reflection, will’ are undertaken in the breast, not the head: ‘the ongoing process of thought is conceived of as if it were precisely identified with the palpable inhalation of the breath, and the half-imagined mingling of breath with blood and bodily fluids in the soft, warm, flowing substances that make up what is behind the chest wall.’ 17 He stresses the importance of flow, of melting and of coagulation. The common ground of meaning is not in a particular static thing but in the ongoing process of living, which ‘can be seen and encapsulated in different contexts by a length of time or an oozing liquid’. These are all images of transition between different states of flux, different degrees of permanence, and allowing the possibility of ambiguity: ‘The relationship between the bodily and mental identity of these entities is subtle and elusive.’ 18 Here there is no necessity for the question ‘is this mind or is it body?’ to have a definitive answer.
Such forbearance, however, had become impossible by the time of Plato, and remains, according to current trends in neurophilosophy, impossible today.

Words suggestive of the mind, the thumos ‘family’, for example, range fluidly and continuously between actor and activity, between the entity that thinks and the thoughts or emotions that are its products. 19 Here Clarke is speaking of terms such as is, aiōn, menos. ‘The life of Homeric man is defined in terms of processes more precisely than of things.’ 20 Menos, for example, refers to force or strength, and can also mean semen, despite being often located in the chest. But it also refers to ‘the force of violent self-propelled motion in something non-human’, perhaps like Scheler’s Drang: again more an activity than a thing. 21

This profound embodiment of thought and emotion, this emphasis on processes that are always in flux, rather than on single, static entities, this refusal of the ‘either/or’ distinction between mind and body, all perhaps again suggest a right-hemisphere-dependent version of the world. But what is equally obvious to the modern mind is the relative closeness of the point of view. And that, I believe, helps to explain why there is little description of the face: to attend to the face requires a degree of detached observation. That there is here a work of art at all, a capacity to frame human existence in this way, suggests, it is true, a degree of distance, as well as a degree of co-operation of the hemispheres in achieving it. But it is the gradual evolution of greater distance in post-Homeric Greek culture that causes the efflorescence, the ‘unpacking’, of both right and left hemisphere capacities in the service of both art and science.

With that distance comes the term closest to the modern, more disembodied, idea of mind, nous (or noos), which is rare in Homer. When nous does occur in Homer, it remains distinct, almost always intellectual, not part of the body in any straightforward sense: according to Clarke it ‘may be virtually identified with a plan or stratagem’. 22 In conformation to the processes of the left hemisphere, it is like the flight of an arrow, directional. 23

By the late fifth and fourth centuries, separate ‘concepts of body and soul were firmly fixed in Greek culture’. 24 In Plato, and thence for the next two thousand years, the soul is a prisoner in the body, as he describes it in the Phaedo, awaiting the liberation of death.

The Great Shift
by James L. Kugel
pp. 163-165

A related belief is attested in the story of Hannah (1 Sam 1). Hannah is, to her great distress, childless, and on one occasion she goes to the great temple at Shiloh to seek God’s help:

The priest Eli was sitting on a seat near the doorpost of the temple of the LORD. In the bitterness of her heart, she prayed to the LORD and wept. She made a vow and said: “O LORD of Hosts, if You take note of Your maidservant’s distress, and if You keep me in mind and do not neglect Your maidservant and grant Your maidservant a male offspring, I will give him to the LORD for all the days of his life; and no razor shall ever touch his head.” Now as she was speaking her prayer before the LORD, Eli was watching her mouth. Hannah was praying in her heart [i.e., silently]; her lips were moving, but her voice could not be heard, so Eli thought she was drunk. Eli said to her: “How long are you going to keep up this drunkenness? Cut out the boozing!” But Hannah answered: “Oh no, sir, I am a woman of saddened spirit. I have drunk no wine or strong drink, but I have been pouring out my heart to the LORD. Don’t take your maidservant for an ill-behaved woman! I have been praying this long because of my great distress.” Eli answered her: “Then go in peace, and may the God of Israel grant you what you have asked of Him.” (1 Sam 1:9–17)

If Eli couldn’t hear her, how did Hannah ever expect God to hear her? But she did. Somehow, even though no sound was coming out of her mouth, she apparently believed that God would hear her vow and, she hoped, act accordingly. (Which He did; “at the turn of the year she bore a son,” 1 Sam 1:20.) This too seemed to defy the laws of physics, just as much as Jonah’s prayer from the belly of the fish, or any prayer uttered at some distance from God’s presumed locale, a temple or other sacred spot.

Many other things could be said about the Psalms, or about biblical prayers in general, but the foregoing three points have been chosen for what they imply for the overall theme of this book. We have already seen a great deal of evidence indicating that people in biblical times believed the mind to be semipermeable, capable of being infiltrated from the outside. This is attested not only in the biblical narratives examined earlier, but it is the very premise on which all of Israel’s prophetic corpus stands. The semipermeable mind is prominent in the Psalms as well; in a telling phrase, God is repeatedly said to penetrate people’s “kidneys and heart” (Pss 7:10, 26:2, 139:13; also Jer 11:20, 17:10, 20:12), entering these messy internal organs 28 where thoughts were believed to dwell and reading—as if from a book—all of people’s hidden ideas and intentions. God just enters and looks around:

You have examined my heart, visited [me] at night;
You have tested me and found no wickedness; my mouth has not transgressed. (Ps 17:3)
Examine me, O LORD, and test me; try my kidneys and my heart. (26:2)

[28. Robert North rightly explained references to a person’s “heart” alone (leb in biblical Hebrew) not as a precise reference to that particular organ, but as “a vaguely known or confused jumble of organs, somewhere in the area of the heart or stomach”: see North (1993), 596.]

Indeed God is so close that inside and outside are sometimes fused:

Let me bless the LORD who has given me counsel; my kidneys have been instructing me at night.
I keep the LORD before me at all times, just at my right hand, so I will not stumble. (Ps 16:7–8)

(Who’s giving this person advice, an external God or an internal organ?)

Such is God’s passage into a person’s semipermeable mind. But the flip side of all this is prayer, when a person’s words, devised on the inside, in the human mind, leave his or her lips in order to reach—somehow—God on the outside. As we have seen, those words were indeed believed to make their way to God; in fact, it was the cry of the victim that in some sense made the world work, causing God to notice and take up the cause of justice and right. Now, the God who did so was also, we have seen, a mighty King, who presumably ranged over all of heaven and earth:

He mounted on a cherub and flew off, gliding on the wings of the wind. (Ps 18:11)

He makes the clouds His chariot, He goes about on the wings of the wind. (Ps 104:3)

Yet somehow, no matter where His travels might take Him, God is also right there, just on the other side of the curtain that separates ordinary from extraordinary reality, allowing Him to hear the sometimes geographically distant cry of the victim or even to hear an inaudible, silent prayer like Hannah’s. The doctrine of divine omnipresence was still centuries away and was in fact implicitly denied in many biblical texts, 29 yet something akin to omnipresence seems to be implied in God’s ability to hear and answer prayers uttered from anywhere, no matter where He is. In fact, this seems implied as well in the impatient, recurrent question seen above, “How long, O LORD?”; the psalmist seems to be saying, “I know You’ve heard me, so when will You answer?”

Perhaps the most striking thing suggested by all this is the extent to which the Psalms’ depiction of God seems to conform to the general contours of the great Outside as described in an earlier chapter. God is huge and powerful, but also all-enfolding and, hence, just a whisper away. Somehow, people in biblical times seem to have just assumed that God, on the other side of that curtain, could hear their prayers, no matter where they were. All this again suggests a sense of self quite different from our own—a self that could not only be permeated by a great, external God, but whose thoughts and prayers could float outward and reach a God who was somehow never far, His domain beginning precisely where the humans’ left off.

One might thus say that, in this and in other ways, the psalmists’ underlying assumptions constitute a kind of biblical translation of a basic way of perceiving that had started many, many millennia earlier, a rephrasing of that fundamental reality in the particular terms of the religion of Israel. That other, primeval sense of reality and this later, more specific version of it found in these psalms present the same basic outline, which is ultimately a way of fitting into the world: the little human (more specifically in the Psalms, the little supplicant) faced with a huge enfolding Outside (in the Psalms, the mighty King) who overshadows everything and has all the power: sometimes kind and sometimes cruel (in the Psalms, sometimes heeding one’s request, but at other times oddly inattentive or sluggish), the Outside is so close as to move in and out of the little human (in the Psalms as elsewhere, penetrating a person’s insides, but also, able to pick up the supplicant’s request no matter where or how uttered). 30

pp. 205-207

The biblical “soul” was not originally thought to be immortal; in fact, the whole idea that human beings have some sort of sacred or holy entity inside them did not exist in early biblical times. But the soul as we conceive of it did eventually come into existence, and how this transformation came about is an important part of the history that we are tracing.

The biblical book of Proverbs is one of the least favorites of ordinary readers. To put the matter bluntly, Proverbs can be pretty monotonous: verse after verse tells you how much better the “righteous” are than the “wicked”: that the righteous tread the strait and narrow, control their appetites, avoid the company of loose women, save their money for a rainy day, and so forth, while the “wicked” always do quite the opposite. In spite of the way the book hammers away at these basic themes, a careful look at specific verses sometimes reveals something quite striking. 1 Here, for example, is what one verse has to say about the overall subject of the present study:

A person’s soul is the lamp of the LORD, who searches out all the innermost chambers. (Prov 20:27)

At first glance, this looks like the old theme of the semipermeable mind, whose innermost chambers are accessible to an inquisitive God. But in this verse, God does not just enter as we have seen Him do so often in previous chapters, when He appeared (apparently in some kind of waking dream) to Abraham or Moses, or put His words in the mouth of Amos or Jeremiah, or in general was held to “inspect the kidneys and heart” (that is, the innermost thoughts) of people. Here, suddenly, God seems to have an ally on the inside: the person’s own soul.

This point was put forward in rather pungent form by an ancient Jewish commentator, Rabbi Aḥa (fourth century CE). He cited this verse to suggest that the human soul is actually a kind of secret agent, a mole planted by God inside all human beings. The soul’s job is to report to God (who is apparently at some remove) on everything that a person does or thinks:

“A person’s soul is the lamp of the LORD, who searches out all the innermost chambers”: Just as kings have their secret agents* who report to the king on each and every thing, so does the Holy One have secret agents who report on everything that a person does in secret . . . The matter may be compared to a man who married the daughter of a king. The man gets up early each morning to greet the king, and the king says, “You did such-and-such a thing in your house [yesterday], then you got angry and you beat your slave . . .” and so on for each and every thing that occurred. The man leaves and says to the people of the palace, “Which of you told the king that I did such-and-so? How does he know?” They reply to him, “Don’t be foolish! You’re married to his daughter and you want to know how he finds out? His own daughter tells him!” So likewise, a person can do whatever he wants, but his soul reports everything back to God. 2

The soul, in other words, is like God’s own “daughter”: she dwells inside a human body, but she reports regularly to her divine “father.” Or, to put this in somewhat more schematic terms: God, who is on the outside, has something that is related or connected to Him on the inside, namely, “a person’s soul.” But wasn’t it always that way?

Before getting to an answer, it will be worthwhile to review in brief something basic that was seen in the preceding chapters. Over a period of centuries, the basic model of God’s interaction with human beings came to be reconceived. After a time, He no longer stepped across the curtain separating ordinary from extraordinary reality. Now He was not seen at all—at first because any sort of visual sighting was held to be lethal, and later because it was difficult to conceive of. God’s voice was still heard, but He Himself was an increasingly immense being, filling the heavens; and then finally (moving ahead to post-biblical times), He was just axiomatically everywhere all at once. This of course clashed with the old idea of the sanctuary (a notion amply demonstrated in ancient Mesopotamian religion as well), according to which wherever else He was, God was physically present in his earthly “house,” that is, His temple. But this ancient notion as well came to be reconfigured in Israel; perched like a divine hologram above the outstretched wings of the cherubim in the Holy of Holies, God was virtually bodiless, issuing orders (like “Let there be light”) that were mysteriously carried out. 3

If conceiving of such a God’s being was difficult, His continued ability to penetrate the minds of humans ought to have been, if anything, somewhat easier to account for. He was incorporeal and omnipresent; 4 what could stand in the way of His penetrating a person’s mind, or being there already? Yet precisely for this reason, Proverbs 20:27 is interesting. It suggests that God does not manage this search unaided: there is something inside the human being that plays an active role in this process, the person’s own self or soul.

p. 390

It is striking that the authors of this study went on specifically to single out the very different sense of self prevailing in the three locales as responsible for the different ways in which voice hearing was treated: “Outside Western culture people are more likely to imagine [a person’s] mind and self as interwoven with others. These are, of course, social expectations, or cultural ‘invitations’—ways in which other people expect people like themselves to behave. Actual people do not always follow social norms. Nonetheless, the more ‘independent’ emphasis of what we typically call the ‘West’ and the more interdependent emphasis of other societies has been demonstrated ethnographically and experimentally many times in many places—among them India and Africa . . .” The passage continues: “For instance, the anthropologist McKim Marriott wanted to be so clear about how much Hindus conceive themselves to be made through relationships, compared with Westerners, that he called the Hindu person a ‘dividual’. His observations have been supported by other anthropologists of South Asia and certainly in south India, and his term ‘dividual’ was picked up to describe other forms of non-Western personhood. The psychologist Glenn Adams has shown experimentally that Ghanaians understand themselves as intrinsically connected through relationships. The African philosopher John Mbiti remarks: ‘only in terms of other people does the [African] individual become conscious of his own being.’” Further, see Markus and Mullally (1997); Nisbett (2004); Marriott (1976); Miller (2007); Trawick (1992); Strathern (1988); Ma and Schoeneman (1997); Mbiti (1969).

The “Other” Psychology of Julian Jaynes
by Brian J. McVeigh
p. 74

The Heart is the Ruler of the Body

We can begin with the word xin1, or heart, though given its broader denotations related to both emotions and thought, a better translation is “heart-mind” (Yu 2003). Xin1 is a pictographic representation of a physical heart, and as we will see below, it forms the most primary and elemental building block for Chinese linguo-concepts having to do with the psychological. The xin1 oversaw the activities of an individual’s psychophysiological existence and was regarded as the ruler of the body — indeed, the person — in the same way a king ruled his people. If individuals cultivate and control their hearts, then the family, state, and world could be properly governed (Yu 2007, 2009b).

Psycho-Physio-Spiritual Aspects of the Person

Under the control of the heart were the wu3shen2 or “five spirits” (shen2, hun2, po4, yi4, zhi4), which dwelt respectively in the heart, liver, lungs, spleen, and kidneys. The five shen2 were implicated in the operations of thinking, perception, and bodily systems and substances. A phonosemantic, shen2 has been variously translated as mind, spirit, supernatural being, consciousness, vitality, expression, soul, energy, god, or numen/numinous. The left-side element of this logograph means manifest, show, demonstrate; we can speculate that whatever was manifested came from a supernatural source; it may have meant “ancestral spirit” (Keightley 1978: 17). The right side provides the sound but also the additional meaning of “to state” or “report to a superior”; again we can speculate that it meant communing with a supernatural superior.

Introspective Illusion

Regarding split-brain research, Susan Blackmore observed that “In this way, the verbal left brain covered up its ignorance by confabulating.” This relates to the theory of introspective illusion (see also change blindness, choice blindness, and bias blind spot). In both cases, the conscious mind turns to confabulation to explain what it has no access to and so does not understand.

This is how we maintain a sense of being in control. Our egoic minds have an immense talent for rationalization, and it can happen instantly, with total confidence in the reason(s) given. That indicates consciousness is a lot less conscious than it seems… or rather, that consciousness isn’t what we think it is.

Our theory of mind, as such, is highly theoretical in the speculative sense. That is to say it isn’t particularly reliable in most cases. First and foremost, what matters is that the story told is compelling, to both us and others (self-justification, in its role within consciousness, is close to Jaynesian self-authorization). We are ruled by our need for meaning, even as our body-minds don’t require meaning to enact behaviors and take actions. We get through our lives just fine mostly on automatic.

According to Julian Jaynes’s theory of the bicameral mind, the purpose of consciousness is to create an internal stage upon which we play out narratives. As this interiorized and narratized space is itself confabulated, that is to say psychologically and socially constructed, this space allows all further confabulations of consciousness. We imaginatively bootstrap our individuality into existence, and that requires a lot of explaining.

* * *

Introspection illusion
Wikipedia

A 1977 paper by psychologists Richard Nisbett and Timothy D. Wilson challenged the directness and reliability of introspection, thereby becoming one of the most cited papers in the science of consciousness.[8][9] Nisbett and Wilson reported on experiments in which subjects verbally explained why they had a particular preference, or how they arrived at a particular idea. On the basis of these studies and existing attribution research, they concluded that reports on mental processes are confabulated. They wrote that subjects had, “little or no introspective access to higher order cognitive processes”.[10] They distinguished between mental contents (such as feelings) and mental processes, arguing that while introspection gives us access to contents, processes remain hidden.[8]

Although some other experimental work followed from the Nisbett and Wilson paper, difficulties with testing the hypothesis of introspective access meant that research on the topic generally stagnated.[9] A ten-year-anniversary review of the paper raised several objections, questioning the idea of “process” they had used and arguing that unambiguous tests of introspective access are hard to achieve.[3]

Updating the theory in 2002, Wilson admitted that the 1977 claims had been too far-reaching.[10] He instead relied on the theory that the adaptive unconscious does much of the moment-to-moment work of perception and behaviour. When people are asked to report on their mental processes, they cannot access this unconscious activity.[7] However, rather than acknowledge their lack of insight, they confabulate a plausible explanation, and “seem” to be “unaware of their unawareness”.[11]

The idea that people can be mistaken about their inner functioning is one applied by eliminative materialists. These philosophers suggest that some concepts, including “belief” or “pain” will turn out to be quite different from what is commonly expected as science advances.

The faulty guesses that people make to explain their thought processes have been called “causal theories”.[1] The causal theories provided after an action will often serve only to justify the person’s behaviour in order to relieve cognitive dissonance. That is, a person may not have noticed the real reasons for their behaviour, even when trying to provide explanations. The result is an explanation that mostly just makes themselves feel better. An example might be a man who discriminates against homosexuals because he is embarrassed that he himself is attracted to other men. He may not admit this to himself, instead claiming his prejudice is because he believes that homosexuality is unnatural.

2017 Report on Consciousness and Moral Patienthood
Open Philanthropy Project

Physicalism and functionalism are fairly widely held among consciousness researchers, but are often debated and far from universal.58 Illusionism seems to be an uncommon position.59 I don’t know how widespread or controversial “fuzziness” is.

I’m not sure what to make of the fact that illusionism seems to be endorsed by a small number of theorists, given that illusionism seems to me to be “the obvious default theory of consciousness,” as Daniel Dennett argues.60 In any case, the debates about the fundamental nature of consciousness are well-covered elsewhere,61 and I won’t repeat them here.

A quick note about “eliminativism”: the physical processes which instantiate consciousness could turn out to be so different from our naive guesses about their nature that, for pragmatic reasons, we might choose to stop using the concept of “consciousness,” just as we stopped using the concept of “phlogiston.” Or, we might find a collection of processes that are similar enough to those presumed by our naive concept of consciousness that we choose to preserve the concept of “consciousness” and simply revise our definition of it, as happened when we eventually decided to identify “life” with a particular set of low-level biological features (homeostasis, cellular organization, metabolism, reproduction, etc.) even though life turned out not to be explained by any élan vital or supernatural soul, as many people throughout history62 had assumed.63 But I consider this only a possibility, not an inevitability.

59. I’m not aware of surveys indicating how common illusionist approaches are, though Frankish (2016a) remarks that:

The topic of this special issue is the view that phenomenal consciousness (in the philosophers’ sense) is an illusion — a view I call illusionism. This view is not a new one: the first wave of identity theorists favoured it, and it currently has powerful and eloquent defenders, including Daniel Dennett, Nicholas Humphrey, Derk Pereboom, and Georges Rey. However, it is widely regarded as a marginal position, and there is no sustained interdisciplinary research programme devoted to developing, testing, and applying illusionist ideas. I think the time is ripe for such a programme. For a quarter of a century at least, the dominant physicalist approach to consciousness has been a realist one. Phenomenal properties, it is said, are physical, or physically realized, but their physical nature is not revealed to us by the concepts we apply to them in introspection. This strategy is looking tired, however. Its weaknesses are becoming evident…, and some of its leading advocates have now abandoned it. It is doubtful that phenomenal realism can be bought so cheaply, and physicalists may have to accept that it is out of their price range. Perhaps phenomenal concepts don’t simply fail to represent their objects as physical but misrepresent them as phenomenal, and phenomenality is an introspective illusion…

[Keith Frankish, Editorial Introduction, Journal of Consciousness Studies, Volume 23, Numbers 11-12, 2016, pp. 9-10(2)]

The Round-Based Community

Yet there’s an even deeper point to be made here, which is that flatness may actually be closer to how we think about the people around us, or even about ourselves.

This is a useful observation from Alec Nevala-Lee (The flat earth society).

I’m willing to bet that perceiving others and oneself as round characters has to do with the capacity for cognitive complexity and tolerance for cognitive dissonance. These are tendencies of the liberal-minded, although research shows that under cognitive overload, from stress to drunkenness, even the liberal-minded will become conservative-minded (e.g., liberals who watched repeated video of the 9/11 terrorist attacks were more likely to support Bush’s war on terror; by the way, identifying a conflict by a single emotion is a rather flat way of looking at the world).

Bacon concludes: “Increasingly, the political party you belong to represents a big part of your identity and is not just a reflection of your political views. It may even be your most important identity.” And this strikes me as only a specific case of the way in which we flatten ourselves out to make our inner lives more manageable. We pick and choose what else we emphasize to better fit with the overall story that we’re telling. It’s just more obvious these days.

So, it’s not only about characters but entire attitudes and worldviews. The ego theory of self itself encourages flatness, as opposed to the (Humean and Buddhist) bundle theory of self. It’s interesting to note how much more complex identity has become in the modern world and how much more accepting we are of allowing people to have multiple identities than in the past. This has happened at the very same time that fluid intelligence has drastically increased, and of course fluid intelligence correlates with liberal-mindedness (correlating as well to FFM openness, MBTI perceiving, Hartmann’s thin boundary type, etc).

Cultures have a way of taking psychological cues from their heads of state. As Forster says of one critical objection to flat characters: “Queen Victoria, they argue, cannot be summed up in a single sentence, so what excuse remains for Mrs. Micawber?” When the president himself is flat—which is another way of saying that he can no longer surprise us on the downside—it has implications both for our literature and for our private lives.

At the moment, the entire society is under extreme duress. This at least temporarily rigidifies the ego boundaries. Complexity of identity becomes less attractive to the average person at such times. Still, the most liberal-minded (typically radical leftists in the US) will be better at maintaining their psychological openness in the face of conflict, fear, and anxiety. As Trump is the ultimate flat character, look to the far left for those who will represent the ultimate round character. Mainstream liberals, as usual, will attempt to play to the middle and shift with the winds, taking up flat and round in turn. It’s a battle of not only ideological but psychological worldviews. And which comes to define our collective identity will dominate our society for the coming generation.

The process is already happening. And it shouldn’t astonish us if we all wake up one day to discover that the world is flat.

It’s an interesting moment. Our entire society is becoming more complex — in terms of identity, demographics, technology, media, and on and on. This requires that we develop the capacity for roundedness or else fall back on the simplifying rhetoric and reaction of conservative-mindedness, with the rigid absolutes of authoritarianism being the furthest reaches of flatness… and, yes, such flatness tends to be memorable (the reason it is so easy to make comparisons to someone like Hitler, who has become an extreme caricature of flatness). This is all the more reason for the liberal-minded to gain awareness of, and intellectual defenses against, the easy attraction of flat identities and worldviews, since in a battle of opposing flat characters the most conservative-minded will always win.

 

Dickinson’s Purse and Sword

John Dickinson is a lesser-known founding father, though he deserves wider recognition considering how important he was at the time. His politics could today be described as moderate conservatism or perhaps status quo liberalism. During the conflict with the British Empire, he hoped the colonial leaders would seek reconciliation. Yet even as he refused to sign the Declaration of Independence, not on principle but out of prudence, he didn’t stand in the way of those who supported it. And once war was under way, he served in the revolutionary armed forces. Afterward, he was a key figure in developing the Articles of Confederation and the Constitution.

Although a Federalist, he was highly suspicious of nationalism, the two being distinguished at the time. It might be noted that, if not for losing the war of rhetoric, the Anti-Federalists would be known as Federalists, for they actually wanted a functioning federation. Indeed, Dickinson made arguments that are more Anti-Federalist in spirit. An example of this is his warning against a centralized government possessing both purse and sword, that is to say a powerful government that has both a standing army and the means of taxation to fund it without any need of consent of the governed. That is the danger the Articles guarded against and the Constitution failed to prevent.

That warning remains unheeded to this day. And so the underlying issue remains silenced, the conflict and tension unresolved. The lack of political foresight and moral courage was what caused the American Revolution, the problems (e.g., division of power) arising in the English Civil War and Glorious Revolution still being problems generations later. The class war and radical ideologies of the 17th century led to the decades of political strife and public outrage prior to the official start of the American Revolution. But the British leadership hoped to keep suppressing the growing unrest, much as present American leadership hopes to do, and probably with the same eventual result.

What is interesting is how such things never go away and how non-radicals like Dickinson can end up giving voice to radical ideas. The idea of the purse strings being held by a free people, i.e., those taxed having the power of self-governance to determine their own taxation, is not that far off from Karl Marx speaking of workers controlling the means of production — both implying that a society is only free to the degree its people are free. Considering Dickinson freed the slaves he inherited, even a reluctant revolutionary such as himself could envision freedom’s more radical implications.

* * *

On a related thought, one of the most radical documents, of course, was Thomas Jefferson’s strongly worded Declaration of Independence. It certainly was radical when it was written and, as with much else from that revolutionary era, maintains its radicalism to this day.

The Articles of Confederation, originally drafted by Dickinson, adhered closely to the guiding vision of the Declaration. Even though Dickinson was against declaring independence until all alternatives had been exhausted, once independence had been declared he was very much committed to following the course of moral principle set down by that initial revolutionary document.

Yet the Constitution, that is the second constitution after the Articles, was directly unconstitutional and downright authoritarian according to the Articles. The men of the Constitutional Convention blatantly disregarded their constitutional mandate by replacing the Articles without constitutional consensus and consent; that is to say, it was a coup (many of the revolutionary soldiers didn’t take this coup lightly and continued the revolutionary war through such acts as Shays’ Rebellion, which was violently put down by the very federal military that the Anti-Federalists warned about).

But worse still, the Constitution ended up being a complete betrayal of the Declaration, which set out the principles that justified a revolution in the first place. As Howard Schwartz put it:

“The Declaration itself, by contrast, never envisioned a Federal government at all. Ironically, then, if one wants to see the political philosophy of the United States in the Declaration of Independence, one should theoretically be against any form of federal government and not just for a particular interpretation of its limited powers.”
(Liberty In America’s Founding Moment, Kindle Locations 5375-5378)

It does seem that the contradiction bothered Dickinson. But he wasn’t a contrarian by nature, much less a rabble-rouser. Once it was determined that a new constitution was going to be passed, he sought the best compromise he thought possible, although on principle he still refused to show consent by being a signatory. As for Jefferson, whether or not he ever thought the Constitution was a betrayal of the Declaration, he assumed any constitution was an imperfect document and that no constitution would or should last beyond his own generation.

* * *

Letters from a Farmer
Letter IX

No free people ever existed, or can ever exist, without keeping, to use a common, but strong expression, “the purse strings,” in their own hands. Where this is the case, they have a constitutional check upon the administration, which may thereby be brought into order without violence: But where such a power is not lodged in the people, oppression proceeds uncontrolled in its career, till the governed, transported into rage, seek redress in the midst of blood and confusion.

Letter II

Nevertheless I acknowledge the proceedings of the convention furnish my mind with many new and strong reasons, against a complete consolidation of the states. They tend to convince me, that it cannot be carried with propriety very far—that the convention have gone much farther in one respect than they found it practicable to go in another; that is, they propose to lodge in the general government very extensive powers—powers nearly, if not altogether, complete and unlimited, over the purse and the sword. But, in its organization, they furnish the strongest proof that the proper limbs, or parts of a government, to support and execute those powers on proper principles (or in which they can be safely lodged) cannot be formed. These powers must be lodged somewhere in every society; but then they should be lodged where the strength and guardians of the people are collected. They can be wielded, or safely used, in a free country only by an able executive and judiciary, a respectable senate, and a secure, full, and equal representation of the people. I think the principles I have premised or brought into view, are well founded—I think they will not be denied by any fair reasoner. It is in connection with these, and other solid principles, we are to examine the constitution. It is not a few democratic phrases, or a few well formed features, that will prove its merits; or a few small omissions that will produce its rejection among men of sense; they will inquire what are the essential powers in a community, and what are nominal ones; where and how the essential powers shall be lodged to secure government, and to secure true liberty.

Letter III

When I recollect how lately congress, conventions, legislatures, and people contended in the cause of liberty, and carefully weighed the importance of taxation, I can scarcely believe we are serious in proposing to vest the powers of laying and collecting internal taxes in a government so imperfectly organized for such purposes. Should the United States be taxed by a house of representatives of two hundred members, which would be about fifteen members for Connecticut, twenty-five for Massachusetts, etc., still the middle and lower classes of people could have no great share, in fact, in taxation. I am aware it is said, that the representation proposed by the new constitution is sufficiently numerous; it may be for many purposes; but to suppose that this branch is sufficiently numerous to guard the rights of the people in the administration of the government, in which the purse and sword are placed, seems to argue that we have forgotten what the true meaning of representation is. I am sensible also, that it is said that congress will not attempt to lay and collect internal taxes; that it is necessary for them to have the power, though it cannot probably be exercised. I admit that it is not probable that any prudent congress will attempt to lay and collect internal taxes, especially direct taxes: but this only proves that the power would be improperly lodged in congress, and that it might be abused by imprudent and designing men.

Letter XVII

It is said, that as the federal head must make peace and war, and provide for the common defense, it ought to possess all powers necessary to that end: that powers unlimited, as to the purse and sword, to raise men and monies, and form the militia, are necessary to that end; and, therefore, the federal head ought to possess them. This reasoning is far more specious than solid: it is necessary that these powers so exist in the body politic, as to be called into exercise whenever necessary for the public safety; but it is by no means true, that the man, or congress of men, whose duty it more immediately is to provide for the common defense, ought to possess them without limitation. But clear it is, that if such men, or congress, be not in a situation to hold them without danger to liberty, he or they ought not to possess them. It has long been thought to be a well-founded position, that the purse and sword ought not to be placed in the same hands in a free government. Our wise ancestors have carefully separated them—placed the sword in the hands of their king, even under considerable limitations, and the purse in the hands of the commons alone: yet the king makes peace and war, and it is his duty to provide for the common defense of the nation. This authority at least goes thus far—that a nation, well versed in the science of government, does not conceive it to be necessary or expedient for the man entrusted with the common defense and general tranquility, to possess unlimitedly the powers in question, or even in any considerable degree.

The Spell of Inner Speech

Inner speech is not a universal trait of humanity, according to Russell T. Hurlburt. That is unsurprising. Others go much further in arguing that inner speech was once non-existent for entire civilizations.

My favorite version of this argument is Julian Jaynes’s theory of the bicameral mind. Jaynes noted that bicameralism can be used as an interpretive frame to understand many of the psychological oddities still found in modern society; his theory goes a long way toward explaining hypnosis, for instance. From that perspective, I’ve long suspected that post-bicameral consciousness isn’t as well established as is generally assumed. David Abram observes (see the end of this post for full context):

“It is important to realize that the now common experience of “silent” reading is a late development in the story of the alphabet, emerging only during the Middle Ages, when spaces were first inserted between the words in a written manuscript (along with various forms of punctuation), enabling readers to distinguish the words of a written sentence without necessarily sounding them out audibly. Before this innovation, to read was necessarily to read aloud, or at the very least to mumble quietly; after the twelfth century it became increasingly possible to internalize the sounds, to listen inwardly to phantom words (or the inward echo of words once uttered).”

Internal experience took a long time to take hold. During the Enlightenment, there was still contentious debate about whether all humans shared a common capacity for inner experience — that is, whether peasants, slaves, and savages (and women) had minds basically the same as those of rich white men, presumably rational actors capable of independent-mindedness and abstract thought. The rigid boundaries of the hyper-individualistic ego-mind required millennia to be built up within the human psyche, initially considered the sole province of an educated elite that, from Plato onward, was portrayed as a patriarchal and paternalistic enlightened aristocracy.

The greatest of radical ideals was to challenge this self-serving claim of the privileged mind by demanding that all be treated as equals before God and government, in that through the ability to read all could have a personal relationship with God and through natural rights all could self-govern. Maybe it wasn’t merely a change in the perception of common humanity but a change within common humanity itself. As modernity came into dominance, the inner sense of self with the accompanying inner speech became an evermore prevalent experience. Something rare among the elite not too many centuries earlier had suddenly become common among the commoners.

With minds of their own, quite literally, the rabble became rabble-rousers who no longer mindlessly bowed down to their betters. The external commands and demands of the ancien régime lost their grip as individuality became the norm. What replaced them was what Jaynes referred to as self-authorization, very much dependent on an inner voice. It is interesting to speculate about why it required such a long incubation period, considering that this new mindset had first taken root back in the Axial Age. It sometimes can be a slow process for new memes to spread across vast geographic populations and seep down into the masses.

So what might the premodern mentality have been like? In the comments on Hurlburt’s piece, I noticed some descriptions of personal experience. One anonymous person mentioned, after brain trauma, “LOSING my inner voice. It is a totally different sensation/experience of reality. […] It is totally unlike anything I had ever known, I felt “simple” my day to day routines where driven only by images related to my goals (example: seeing Toothbrush and knowing my goals is to brush my teeth) and whenever I needed to recite something or create thoughts for communication, it seemed I could only conjure up the first thoughts to come to my mind without any sort of filter. And I would mumble and whisper to myself in Lue of the inner voice. But even when mumbling and whispering there was NO VOICE in my head. Images, occasionally. Other than that I found myself being almost hyper-aware of my surroundings with my incoming visual stimuli as the primary focus throughout my day.”

This person said a close comparison was being in the zone, sometimes referred to as runner’s high. That got me thinking about various factors that can shut down the normal functioning of the egoic mind. Extreme physical activity forces the mind into a mode that isn’t experienced often or extensively by people in the modern world, a state combining exhaustion, endorphins, and ketosis — a state of mind, on the other hand, that would have been far from uncommon before modernity, with some arguing that ketosis was once the normal mode of neurocognitive functioning. Related to this, it has been argued that the abstractions of Enlightenment thought were fueled by the imperial sugar trade, maybe the first time a permanently non-ketogenic mindset was possible in the Western world. What sugar (i.e., glucose) makes possible, especially when mixed with the other popular trade items of tea and coffee, is thinking and reading (i.e., inner experience) for long periods of time without mental tiredness. During the Enlightenment, the modern mind was born out of a drugged-up buzz. That is one interpretation. Whatever the cause, something changed.

Also, in the comment section of that article, I came across a perfect description of self-authorization. Carla said that, “There are almost always words inside my head. In fact, I’ve asked people I live with to not turn on the radio in the morning. When they asked why, they thought my answer was weird: because it’s louder than the voice in my head and I can’t perform my morning routine without that voice.” We are all like that to some extent. But for most of us, self-authorization has become so natural as to largely go unnoticed. Unlike Carla, the average person learns to hear their own inner voice despite external sounds. I’m willing to bet that, if tested, Carla would show results of having thin mental boundaries and probably an accordingly weaker egoic will to force her self-authorization onto situations. Some turn to sugar and caffeine (or else nicotine and other drugs) to help shore up rigid thick boundaries and maintain focus in this modern world filled with distractions — likely a contributing factor to drug addiction.

In his book The Spell of the Sensuous, David Abram emphasizes the connection between sight and sound. By way of reading, seeing words becomes hearing words in one’s own mind. This is made possible by the perceptual tendency to associate sight and sound, the two main indicators of movement, with a living other such as an animal moving through the underbrush. Maybe this is what creates the sense of a living other within, a Jaynesian consciousness as interiorized metaphorical space. The magic of hearing words inside puts a spell on the mind, invoking a sense of inner being separate from the outer world. This is how reading can conjure forth an entire visuospatial experience of a narratized world, sometimes as compellingly real as our mundane lives, or more so. To hear and see, even if only imagined inwardly, is to make real.

Yet many lose the ability to visualize as they age. I wonder if that has to do with how the modern world until recently was almost exclusively focused on text. Only now has a new generation been so fully raised on the visual potency of 24/7 cable and the online world, and unlike past generations they might remain more visually oriented into old age. The loss of visual imagination might have been more of a quirk of printed text, the visual not so much disappearing as being subverted into sound as the ego’s own voice became insular. But even when we are unaware of it, maybe the visual remains as the light in the background that makes interior space visible, like a lamp in a sonorous cave, the lamp barely offering enough light to allow us to follow the sound further into the darkness. According to one commentator, Bertrand Russell “argues that mental imagery is the essence of the meaning of words in most cases” (Bertrand Russell: Unconscious Terrors; Murder, Rage and Mental Imagery). It is the visual that makes the aural come alive with meaning — as Russell put it:

“it is nevertheless the possibility of a memory image in the child and an imagination image in the hearer that makes the essence of the ‘meaning’ of the words. In so far as this is absent, the words are mere counters, capable of meaning, but not at the moment possessing it.”

Jaynes resolves the seeming dilemma by proposing the visuospatial as a metaphorical frame in which the mind operates, rather than as the direct focus of thought. To combine this with Russell’s view: as the visual recedes from awareness, abstract thought recedes from the visceral sense of meaning in the outer world. This is how modern humanity, ever more lost in thought, has lost contact with the larger world of nature and the universe, with a shrinking number of people who still regularly experience a wilderness vista or the full starry sky. Our entire world turns inward and loses its vividness, becomes smaller, the dividing boundaries thicker. Our minds become ruled by Russell’s counters of meaning (i.e., symbolic proxies), rather than by meaning directly. That may be changing, though, in this new era of visually saturated media. Even books, as audiobooks, can now be heard outwardly in the voice of another. The rigid walls of the ego, so carefully constructed over centuries, are being cracked open again. If so, we might see a merging back together of the separated senses, which could manifest as a return of synaesthesia as a common experience and with it a resurgence of metaphorical thought that hews close to the sensory world, the fertile ground of meaning. About a talk by Vilayanur S. Ramachandran, Maureen Seaberg writes (The Sea of Similitude):

“The refined son of an Indian diplomat explains that synesthesia was discovered by Sir Francis Galton, cousin of Charles Darwin, and that its name is derived from the Greek words for joined sensations. Next, he says something that really gets me to thinking – that there is greater cross wiring in the brains of synesthetes. This has enormous implications. “Now, if you assume that this greater cross wiring and concepts are also in different parts of the brain [than just where the synesthesia occurs], then it’s going to create a greater propensity towards metaphorical thinking and creativity in people with synesthesia. And, hence, the eight times more common incidence of synesthesia among poets, artists and novelists,” he says.

“In 2005, Dr. Ramachandran and his colleagues at the University of California at San Diego identified where metaphors are likely generated in the brain by studying people who could no longer understand metaphor because of brain damage. Proving once again the maxim that nature speaks through exceptions, they tested four patients who had experienced injuries to the left angular gyrus region. In May 2005, Scientific American reported on this and pointed out that although the subjects were bright and good communicators, when the researchers presented them with common proverbs and metaphors such as “the grass is always greener on the other side” and “reaching for the stars,” the subjects interpreted the sayings literally almost all the time. Their metaphor centers – now identified – had been compromised by the damage and the people just didn’t get the symbolism. Interestingly, synesthesia has also been found to occur mostly in the fusiform and angular gyrus – it’s in the same neighborhood. […]

“Facility with metaphor is a “thing” in synesthesia. Not only do Rama’s brain studies prove it, but I’ve noticed synesthetes seldom choose the expected, clichéd options when forming the figures of speech that describe a thing in a way that is symbolic to explain an idea or make comparisons. It would be more enviable were it not completely involuntary and automatic. In our brains without borders, it just works that way. Our neuronal nets are more interwoven”

The meeting of synaesthesia and metaphor opens up to our greater, if largely forgotten, humanity. As Jaynes and many others have made clear, those in the distant past and those still living in isolated tribes experience the world far differently than we do. This can be seen in the strange language of ancient texts, which we may dismiss as odd turns of phrase, as mere metaphor. But what if these people so foreign to us took their own metaphors quite literally, so to speak? In another post by Maureen Seaberg (The Shamanic Synesthesia of the Kalahari Bushmen), there are clear examples of this:

“The oldest cultures found that ecstatic experience expands our awareness and in its most special form, the world is experienced through more sensory involvement and presence, he says. “The shaman’s transition into ecstasy brought about what we call synesthesia today. But there was more involved than just passively experiencing it. The ecstatic shaman also performed sound, movement, and made reference to vision, smell, and taste in ways that helped evoke extraordinary experiences in others. They were both recipients and performers of multi-sensory theatres. Of course this is nothing like the weekend workshop shamans of the new age who are day dreaming rather than shaking wildly…. Rhythm, especially syncopated African drumming, excites the whole body to feel more intensely. Hence, it is valued as a means of ‘getting there’. A shaman (an ecstatic performer) played all the senses.” If this seems far afield from Western experience, consider that in Exodus 20:18, as Moses ascended Mt. Sinai to retrieve the tablets, the people present were said to have experienced synesthesia. “And all the people saw the voices” of heaven, it says. And we know synesthesia happens even in non-synesthetes during meditation — a heightened state.”

The metaphorical ground of synaesthesia is immersive and participatory. It is a world alive with meaning. Sacrificing this in order to create our separate, sensory-deprived egoic consciousness was a costly trade, despite all that we gained in wielding power over the world. During the Bronze Age, when written language still had metaphorical mud on its living roots, what Jaynes calls the bicameral mind would have been closer to this animistic mindset. A metaphor in that experiential reality was far more than what we now know of as metaphor. The world was alive with beings and voices. This isn’t only the origin of our humanity, for it remains the very ground of our being, the source of what we have become — language most of all (“First came the temple, then the city.”):

“Looking at an even more basic level, I was reading Mark Changizi’s Harnessed. He argues that (p. 11), “Speech and music culturally evolved over time to be simulacra of nature.” That reminded me of Lynne Kelly’s description of how indigenous people would use vocal techniques and musical instruments to mimic natural sounds, as a way of communicating and passing on complex knowledge of the world. Changizi’s argument is based on the observation that “human speech sounds like solid-object physical events” and that “music sounds like humans moving and behaving (usually expressively)” (p. 19). Certain sounds give information about what is going on in the immediate environment, specifically sounds related to action and movement. This sound-based information processing would make for an optimal basis of language formation. This is given support from evidence that Kelly describes in her own books.

“This also touches upon the intimate relationship language has to music, dance, and gesture. Language is inseparable from our experience of being in the world, involving multiple senses or even synaesthesia. The overlapping of sensory experience may have been more common to earlier societies. Research has shown that synaesthetes have better capacity for memory: “spatial sequence synesthetes have a built-in and automatic mnemonic reference” (Wikipedia). That is relevant considering that memory is central to oral societies, as Kelly demonstrates. And the preliterate memory systems are immensely vast, potentially incorporating the equivalent of thousands of pages of info. Knowledge and memory isn’t just in the mind but within the entire sense of self, sense of community, and sense of place.”

We remain haunted by the past (“Beyond that, there is only awe.”):

“Through authority and authorization, immense power and persuasion can be wielded. Jaynes argues that it is central to the human mind, but that in developing consciousness we learned how to partly internalize the process. Even so, Jaynesian self-consciousness is never a permanent, continuous state and the power of individual self-authorization easily morphs back into external forms. This is far from idle speculation, considering authoritarianism still haunts the modern mind. I might add that the ultimate power of authoritarianism, as Jaynes makes clear, isn’t overt force and brute violence. Outward forms of power are only necessary to the degree that external authorization is relatively weak, as is typically the case in modern societies.

If you are one of those who clearly hears a voice in your head, appreciate all that went into creating and constructing it. This is an achievement of our entire civilization. But also realize how precarious is this modern mind. It’s a strange thing to contemplate. What is that voice that speaks? And who is it that is listening? Now imagine what it would be like if, as with the bicameral gods going silent, your own god-like ego went silent. And imagine this silence spreading across all of society, an entire people suddenly having lost their self-authorization to act, their very sense of identity and social reality. Don’t take for granted that voice within.

* * *

Below is a passage from a book I read long ago, maybe back when it was first published in 1996. The description of cognitive change almost could have been lifted straight out of Julian Jaynes’s book from twenty years earlier (e.g., the observation of the gods becoming silent). Abram doesn’t mention Jaynes, and it’s possible he was unfamiliar with his work, whether or not there was an indirect influence. The kinds of ideas Jaynes was entertaining had been floating around for a long while before him as well. The unique angle that Abram brings to this passage is framing it all within synaesthesia.

The Spell of the Sensuous
by David Abram
p. 69

Although contemporary neuroscientists study “synaesthesia”—the overlap and blending of the senses—as though it were a rare or pathological experience to which only certain persons are prone (those who report “seeing sounds,” “hearing colors,” and the like), our primordial, preconceptual experience, as Merleau-Ponty makes evident, is inherently synaesthetic. The intertwining of sensory modalities seems unusual to us only to the extent that we have become estranged from our direct experience (and hence from our primordial contact with the entities and elements that surround us):

…Synaesthetic perception is the rule, and we are unaware of it only because scientific knowledge shifts the center of gravity of experience, so that we have unlearned how to see, hear, and generally speaking, feel, in order to deduce, from our bodily organization and the world as the physicist conceives it, what we are to see, hear, and feel. 20

pp. 131-144

It is remarkable that none of the major twentieth-century scholars who have directed their attention to the changes wrought by literacy have seriously considered the impact of writing—and, in particular, phonetic writing—upon the human experience of the wider natural world. Their focus has generally centered upon the influence of phonetic writing on the structure and deployment of human language, 53 on patterns of cognition and thought, 54 or upon the internal organization of human societies. 55 Most of the major research, in other words, has focused upon the alphabet’s impact on processes either internal to human society or presumably “internal” to the human mind. Yet the limitation of such research—its restriction within the bounds of human social interaction and personal interiority—itself reflects an anthropocentric bias wholly endemic to alphabetic culture. In the absence of phonetic literacy, neither society, nor language, nor even the experience of “thought” or consciousness, can be pondered in isolation from the multiple nonhuman shapes and powers that lend their influence to all our activities (we need think only of our ceaseless involvement with the ground underfoot, with the air that swirls around us, with the plants and animals that we consume, with the daily warmth of the sun and the cyclic pull of the moon). Indeed, in the absence of formal writing systems, human communities come to know themselves primarily as they are reflected back by the animals and the animate landscapes with which they are directly engaged. This epistemological dependence is readily evidenced, on every continent, by the diverse modes of identification commonly categorized under the single term “totemism.”

It is exceedingly difficult for us literates to experience anything approaching the vividness and intensity with which surrounding nature spontaneously presents itself to the members of an indigenous, oral community. Yet as we saw in the previous chapters, Merleau-Ponty’s careful phenomenology of perceptual experience had begun to disclose, underneath all of our literate abstractions, a deeply participatory relation to things and to the earth, a felt reciprocity curiously analogous to the animistic awareness of indigenous, oral persons. If we wish to better comprehend the remarkable shift in the human experience of nature that was occasioned by the advent and spread of phonetic literacy, we would do well to return to the intimate analysis of sensory perception inaugurated by Merleau-Ponty. For without a clear awareness of what reading and writing amounts to when considered at the level of our most immediate, bodily experience, any “theory” regarding the impact of literacy can only be provisional and speculative.

Although Merleau-Ponty himself never attempted a phenomenology of reading or writing, his recognition of the importance of synaesthesia—the overlap and intertwining of the senses—resulted in a number of experiential analyses directly pertinent to the phenomenon of reading. For reading, as soon as we attend to its sensorial texture, discloses itself as a profoundly synaesthetic encounter. Our eyes converge upon a visible mark, or a series of marks, yet what they find there is a sequence not of images but of sounds, something heard; the visible letters, as we have said, trade our eyes for our ears. Or, rather, the eye and the ear are brought together at the surface of the text—a new linkage has been forged between seeing and hearing which ensures that a phenomenon apprehended by one sense is instantly transposed into the other. Further, we should note that this sensory transposition is mediated by the human mouth and tongue; it is not just any kind of sound that is experienced in the act of reading, but specifically human, vocal sounds—those which issue from the human mouth. It is important to realize that the now common experience of “silent” reading is a late development in the story of the alphabet, emerging only during the Middle Ages, when spaces were first inserted between the words in a written manuscript (along with various forms of punctuation), enabling readers to distinguish the words of a written sentence without necessarily sounding them out audibly. Before this innovation, to read was necessarily to read aloud, or at the very least to mumble quietly; after the twelfth century it became increasingly possible to internalize the sounds, to listen inwardly to phantom words (or the inward echo of words once uttered). 56

Alphabetic reading, then, proceeds by way of a new synaesthetic collaboration between the eye and the ear, between seeing and hearing. To discern the consequences of this new synaesthesia, we need to examine the centrality of synaesthesia in our perception of others and of the earth.

The experiencing body (as we saw in Chapter 2) is not a self-enclosed object, but an open, incomplete entity. This openness is evident in the arrangement of the senses: I have these multiple ways of encountering and exploring the world—listening with my ears, touching with my skin, seeing with my eyes, tasting with my tongue, smelling with my nose—and all of these various powers or pathways continually open outward from the perceiving body, like different paths diverging from a forest. Yet my experience of the world is not fragmented; I do not commonly experience the visible appearance of the world as in any way separable from its audible aspect, or from the myriad textures that offer themselves to my touch. When the local tomcat comes to visit, I do not have distinctive experiences of a visible cat, an audible cat, and an olfactory cat; rather, the tomcat is precisely the place where these separate sensory modalities join and dissolve into one another, blending as well with a certain furry tactility. Thus, my divergent senses meet up with each other in the surrounding world, converging and commingling in the things I perceive. We may think of the sensing body as a kind of open circuit that completes itself only in things, and in the world. The differentiation of my senses, as well as their spontaneous convergence in the world at large, ensures that I am a being destined for relationship: it is primarily through my engagement with what is not me that I effect the integration of my senses, and thereby experience my own unity and coherence. 57 […]

The diversity of my sensory systems, and their spontaneous convergence in the things that I encounter, ensures this interpenetration or interweaving between my body and other bodies—this magical participation that permits me, at times, to feel what others feel. The gestures of another being, the rhythm of its voice, and the stiffness or bounce in its spine all gradually draw my senses into a unique relation with one another, into a coherent, if shifting, organization. And the more I linger with this other entity, the more coherent the relation becomes, and hence the more completely I find myself face-to-face with another intelligence, another center of experience.

In the encounter with the cyclist, as in my experience of the blackbird, the visual focus induced and made possible the participation of the other senses. In different situations, other senses may initiate the synaesthesia: our ears, when we are at an orchestral concert; or our nostrils, when a faint whiff of burning leaves suddenly brings images of childhood autumns; our skin, when we are touching or being touched by a lover. Nonetheless, the dynamic conjunction of the eyes has a particularly ubiquitous magic, opening a quivering depth in whatever we focus upon, ceaselessly inviting the other senses into a concentrated exchange with stones, squirrels, parked cars, persons, snow-capped peaks, clouds, and termite-ridden logs. This power—the synaesthetic magnetism of the visual focus—will prove crucial for our understanding of literacy and its perceptual effects.

The most important chapter of Merleau-Ponty’s last, unfinished work is entitled “The Intertwining—The Chiasm.” The word “chiasm,” derived from an ancient Greek term meaning “crisscross,” is in common use today only in the field of neurobiology: the “optic chiasm” is that anatomical region, between the right and left hemispheres of the brain, where neuronal fibers from the right eye and the left eye cross and interweave. As there is a chiasm between the two eyes, whose different perspectives continually conjoin into a single vision, so—according to Merleau-Ponty—there is a chiasm between the various sense modalities, such that they continually couple and collaborate with one another. Finally, this interplay of the different senses is what enables the chiasm between the body and the earth, the reciprocal participation—between one’s own flesh and the encompassing flesh of the world—that we commonly call perception. 59

Phonetic reading, of course, makes use of a particular sensory conjunction—that between seeing and hearing. And indeed, among the various synaesthesias that are common to the human body, the confluence (or chiasm) between seeing and hearing is particularly acute. For vision and hearing are the two “distance” senses of the human organism. In contrast to touch and proprioception (inner-body sensations), and unlike the chemical senses of taste and smell, seeing and hearing regularly place us in contact with things and events unfolding at a substantial distance from our own visible, audible body.

My visual gaze explores the reflective surfaces of things, their outward color and contour. By following the play of light and shadow, the dance of colors, and the gradients of repetitive patterns, the eyes—themselves gleaming surfaces—keep me in contact with the multiple outward facets, or faces, of the things arrayed about me. The ears, meanwhile, are more inward organs; they emerge from the depths of my skull like blossoms or funnels, and their participation tells me less about the outer surface than the interior substance of things. For the audible resonance of beings varies with their material makeup, as the vocal calls of different animals vary with the size and shape of their interior cavities and hollows. I feel their expressive cries resound in my skull or my chest, echoing their sonorous qualities with my own materiality, and thus learn of their inward difference from myself. Looking and listening bring me into contact, respectively, with the outward surfaces and with the interior voluminosity of things, and hence where these senses come together, I experience, over there, the complex interplay of inside and outside that is characteristic of my own self-experience. It is thus at those junctures in the surrounding landscape where my eyes and my ears are drawn together that I most readily feel myself confronted by another power like myself, another life. […]

Yet our ears and our eyes are drawn together not only by animals, but by numerous other phenomena within the landscape. And, strangely, wherever these two senses converge, we may suddenly feel ourselves in relation with another expressive power, another center of experience. Trees, for instance, can seem to speak to us when they are jostled by the wind. Different forms of foliage lend each tree a distinctive voice, and a person who has lived among them will easily distinguish the various dialects of pine trees from the speech of spruce needles or Douglas fir. Anyone who has walked through cornfields knows the uncanny experience of being scrutinized and spoken to by whispering stalks. Certain rock faces and boulders request from us a kind of auditory attentiveness, and so draw our ears into relation with our eyes as we gaze at them, or with our hands as we touch them—for it is only through a mode of listening that we can begin to sense the interior voluminosity of the boulder, its particular density and depth. There is an expectancy to the ears, a kind of patient receptivity that they lend to the other senses whenever we place ourselves in a mode of listening—whether to a stone, or a river, or an abandoned house. That so many indigenous people allude to the articulate speech of trees or of mountains suggests the ease with which, in an oral culture, one’s auditory attention may be joined with the visual focus in order to enter into a living relation with the expressive character of things.

Far from presenting a distortion of their factual relation to the world, the animistic discourse of indigenous, oral peoples is an inevitable counterpart of their immediate, synaesthetic engagement with the land that they inhabit. The animistic proclivity to perceive the angular shape of a boulder (while shadows shift across its surface) as a kind of meaningful gesture, or to enter into felt conversations with clouds and owls—all of this could be brushed aside as imaginary distortion or hallucinatory fantasy if such active participation were not the very structure of perception, if the creative interplay of the senses in the things they encounter was not our sole way of linking ourselves to those things and letting the things weave themselves into our experience. Direct, prereflective perception is inherently synaesthetic, participatory, and animistic, disclosing the things and elements that surround us not as inert objects but as expressive subjects, entities, powers, potencies.

And yet most of us seem, today, very far from such experience. Trees rarely, if ever, speak to us; animals no longer approach us as emissaries from alien zones of intelligence; the sun and the moon no longer draw prayers from us but seem to arc blindly across the sky. How is it that these phenomena no longer address us, no longer compel our involvement or reciprocate our attention? If participation is the very structure of perception, how could it ever have been brought to a halt? To freeze the ongoing animation, to block the wild exchange between the senses and the things that engage them, would be tantamount to freezing the body itself, stopping it short in its tracks. And yet our bodies still move, still live, still breathe. If we no longer experience the enveloping earth as expressive and alive, this can only mean that the animating interplay of the senses has been transferred to another medium, another locus of participation.

It is the written text that provides this new locus. For to read is to enter into a profound participation, or chiasm, with the inked marks upon the page. In learning to read we must break the spontaneous participation of our eyes and our ears in the surrounding terrain (where they had ceaselessly converged in the synaesthetic encounter with animals, plants, and streams) in order to recouple those senses upon the flat surface of the page. As a Zuñi elder focuses her eyes upon a cactus and hears the cactus begin to speak, so we focus our eyes upon these printed marks and immediately hear voices. We hear spoken words, witness strange scenes or visions, even experience other lives. As nonhuman animals, plants, and even “inanimate” rivers once spoke to our tribal ancestors, so the “inert” letters on the page now speak to us! This is a form of animism that we take for granted, but it is animism nonetheless—as mysterious as a talking stone.

And indeed, it is only when a culture shifts its participation to these printed letters that the stones fall silent. Only as our senses transfer their animating magic to the written word do the trees become mute, the other animals dumb.

But let us be more precise, recalling the distinction between different forms of writing discussed at the start of this chapter. As we saw there, pictographic, ideographic, and even rebuslike writing still makes use of, or depends upon, our sensorial participation with the natural world. As the tracks of moose and bear refer beyond themselves to those entities of whom they are the trace, so the images in early writing systems draw their significance not just from ourselves but from sun, moon, vulture, jaguar, serpent, lightning—from all those sensorial, never strictly human powers, of which the written images were a kind of track or tracing. To be sure, these signs were now inscribed by human hands, not by the hooves of deer or the clawed paws of bear; yet as long as they presented images of paw prints and of clouds, of sun and of serpent, these characters still held us in relation to a more-than-human field of discourse. Only when the written characters lost all explicit reference to visible, natural phenomena did we move into a new order of participation. Only when those images came to be associated, alphabetically, with purely human-made sounds, and even the names of the letters lost all worldly, extrahuman significance, could speech or language come to be experienced as an exclusively human power. For only then did civilization enter into the wholly self-reflexive mode of animism, or magic, that still holds us in its spell:

We know what the animals do, what are the needs of the beaver, the bear, the salmon, and other creatures, because long ago men married them and acquired this knowledge from their animal wives. Today the priests say we lie, but we know better. The white man has been only a short time in this country and knows very little about the animals; we have lived here thousands of years and were taught long ago by the animals themselves. The white man writes everything down in a book so that it will not be forgotten; but our ancestors married animals, learned all their ways, and passed on this knowledge from one generation to another. 60

That alphabetic reading and writing was itself experienced as a form of magic is evident from the reactions of cultures suddenly coming into contact with phonetic writing. Anthropological accounts from entirely different continents report that members of indigenous, oral tribes, after seeing the European reading from a book or from his own notes, came to speak of the written pages as “talking leaves,” for the black marks on the flat, leaflike pages seemed to talk directly to the one who knew their secret.

The Hebrew scribes never lost this sense of the letters as living, animate powers. Much of the Kabbalah, the esoteric body of Jewish mysticism, is centered around the conviction that each of the twenty-two letters of the Hebrew aleph-beth is a magic gateway or guide into an entire sphere of existence. Indeed, according to some kabbalistic accounts, it was by combining the letters that the Holy One, Blessed Be He, created the ongoing universe. The Jewish kabbalists found that the letters, when meditated upon, would continually reveal new secrets; through the process of tzeruf, the magical permutation of the letters, the Jewish scribe could bring himself into successively greater states of ecstatic union with the divine. Here, in other words, was an intensely concentrated form of animism—a participation conducted no longer with the sculpted idols and images worshiped by other tribes but solely with the visible letters of the aleph-beth.

Perhaps the most succinct evidence for the potent magic of written letters is to be found in the ambiguous meaning of our common English word “spell.” As the Roman alphabet spread through oral Europe, the Old English word “spell,” which had meant simply to recite a story or tale, took on the new double meaning: on the one hand, it now meant to arrange, in the proper order, the written letters that constitute the name of a thing or a person; on the other, it signified a magic formula or charm. Yet these two meanings were not nearly as distinct as they have come to seem to us today. For to assemble the letters that make up the name of a thing, in the correct order, was precisely to effect a magic, to establish a new kind of influence over that entity, to summon it forth! To spell, to correctly arrange the letters to form a name or a phrase, seemed thus at the same time to cast a spell, to exert a new and lasting power over the things spelled. Yet we can now realize that to learn to spell was also, and more profoundly, to step under the influence of the written letters ourselves, to cast a spell upon our own senses. It was to exchange the wild and multiplicitous magic of an intelligent natural world for the more concentrated and refined magic of the written word.

The Bulgarian scholar Tzvetan Todorov has written an illuminating study of the Spanish conquest of the Americas, based on extensive study of documents from the first months and years of contact between European culture and the native cultures of the American continent. 61 The lightning-swift conquest of Mexico by Cortéz has remained a puzzle for historians, since Cortéz, leading only a few hundred men, managed to seize the entire kingdom of Montezuma, who commanded several hundred thousand. Todorov concludes that Cortéz’s astonishing and rapid success was largely a result of the discrepancy between the different forms of participation engaged in by the two societies. The Aztecs, whose writing was highly pictorial, necessarily felt themselves in direct communication with an animate, more-than-human environment. “Everything happens as if, for the Aztecs, [written] signs automatically and necessarily proceed from the world they designate…”; the Aztecs are unable to use their spoken words, or their written characters, to hide their true intentions, since these signs belong to the world around them as much as to themselves. 62 To be duplicitous with signs would be, for the Aztecs, to go against the order of nature, against the encompassing speech or logos of an animate world, in which their own tribal discourse was embedded.

The Spaniards, however, suffer no such limitation. Possessed of an alphabetic writing system, they experience themselves not in communication with the sensuous forms of the world, but solely with one another. The Aztecs must answer, in their actions as in their speech, to the whole sensuous, natural world that surrounds them; the Spanish need answer only to themselves.

In contact with this potent new magic, with these men who participate solely with their own self-generated signs, whose speech thus seems to float free of the surrounding landscape, and who could therefore be duplicitous and lie even in the presence of the sun, the moon, and the forest, the Indians felt their own rapport with those sensuous powers, or gods, beginning to falter:

The testimony of the Indian accounts, which is a description rather than an explanation, asserts that everything happened because the Mayas and the Aztecs lost control of communication. The language of the gods has become unintelligible, or else these gods fell silent. “Understanding is lost, wisdom is lost” [from the Mayan account of the Spanish invasion]…. As for the Aztecs, they describe the beginning of their own end as a silence that falls: the gods no longer speak to them. 63

In the face of aggression from this new, entirely self-reflexive form of magic, the native peoples of the Americas—like those of Africa and, later, of Australia—felt their own magics wither and become useless, unable to protect them.

Inequality in the Anthropocene

This post was inspired by an article on the possibility of increasing suicides because of climate change. What occurred to me is that all the social and psychological problems seen with climate change are also seen with inequality (as shown in decades of research) and, to a lesser extent, with extreme poverty — although high poverty with low inequality isn’t necessarily problematic at all (e.g., the physically and psychologically healthy hunter-gatherers who are poor in terms of material wealth and private property).

Related to this, I noticed in one article that a study was mentioned about the chances of war increasing when detrimental weather events are combined with ethnic diversity. And that reminded me of the research that showed diversity only leads to lowered trust when combined with segregation. A major problem with climate-related refugee crises is that they increase segregation, in the form of refugee camps and immigrant ghettoization. That segregation will lead to further conflict and destruction of the social fabric, which in turn will promote further segregation — a vicious cycle that will be hard to pull out of before the crash, especially as the environmental conditions lead to droughts, famines, and plagues.

As economic and environmental conditions worsen, there are some symptoms that will become increasingly apparent and problematic. Based on the inequality and climatology research, we should expect increased stress, anxiety, fear, xenophobia, bigotry, suicide, homicide, aggressive behavior, short-term thinking, reactionary politics, and generally crazy and bizarre behavior. This will likely result in civil unrest, violent conflict, race wars, genocides, terrorism, militarization, civil wars, revolutions, international conflict, resource-based wars, world wars, authoritarianism, ethno-nationalism, right-wing populism, etc.

The only defense against this will be a strong, courageous left-wing response. That would require eliminating not only the derangement of the GOP but also the corruption of the DNC by replacing both with a genuinely democratic and socialist movement. Otherwise, our society will descend into collective madness and our entire civilization will be under existential threat. There is no other option.

* * *

The Great Acceleration and the Great Divergence: Vulnerability in the Anthropocene
by Rob Nixon

Most Anthropocene scholars date the new epoch to the late-eighteenth-century beginnings of industrialization. But there is a second phase to the Anthropocene, the so-called great acceleration, beginning circa 1950: an exponential increase in human-induced changes to the carbon cycle and nitrogen cycle and in ocean acidification, global trade, and consumerism, as well as the rise of international forms of governance like the World Bank and the IMF.

However, most accounts of the great acceleration fail to position it in relation to neoliberalism’s recent ascent, although most of the great acceleration has occurred during the neoliberal era. One marker of neoliberalism has been a widening chasm of inequality between the superrich and the ultrapoor: since the late 1970s, we have been living through what Timothy Noah calls “the great divergence.” Noah’s subject is the economic fracturing of America, the new American gilded age, but the great divergence has scarred most societies, from China and India to Indonesia, South Africa, Nigeria, Italy, Spain, Ireland, Costa Rica, Jamaica, Australia, and Bangladesh.

My central problem with the dominant mode of Anthropocene storytelling is its failure to articulate the great acceleration to the great divergence. We need to acknowledge that the grand species narrative of the Anthropocene—this geomorphic “age of the human”—is gaining credence at a time when, in society after society, the idea of the human is breaking apart economically, as the distance between affluence and abandonment is increasing. It is time to remold the Anthropocene as a shared story about unshared resources. When we examine the geology of the human, let us also pay attention to the geopolitics of the new stratigraphy’s layered assumptions.

Neoliberalism loves watery metaphors: the trickle-down effect, global flows, how a rising tide lifts all boats. But talk of a rising tide raises other specters: the coastal poor, who will never get storm-surge barriers; Pacific Islanders in the front lines of inundation; Arctic peoples, whose livelihoods are melting away—all of them exposed to the fallout from Anthropocene histories of carbon extraction and consumption in which they played virtually no part.

We are not all in this together
by Ian Angus

So the 21st century is being defined by a combination of record-breaking inequality with record-breaking climate change. That combination is already having disastrous impacts on the majority of the world’s people. The line is not only between rich and poor, or comfort and poverty: it is a line between survival and death.

Climate change and extreme weather events are not devastating a random selection of human beings from all walks of life. There are no billionaires among the dead, no corporate executives living in shelters, no stockbrokers watching their children die of malnutrition. Overwhelmingly, the victims are poor and disadvantaged. Globally, 99 percent of weather disaster casualties are in developing countries, and 75 percent of them are women.

The pattern repeats at every scale. Globally, the South suffers far more than the North. Within the South, the very poorest countries, mostly in Africa south of the Sahara, are hit hardest. Within each country, the poorest people—women, children, and the elderly—are most likely to lose their homes and livelihoods from climate change, and most likely to die.

The same pattern occurs in the North. Despite the rich countries’ overall wealth, when hurricanes and heatwaves hit, the poorest neighborhoods are hit hardest, and within those neighborhoods the primary victims are the poorest people.

Chronic hunger, already a severe problem in much of the world, will be made worse by climate change. As Oxfam reports: “The world’s most food-insecure regions will be hit hardest of all.”

Unchecked climate change will lock the world’s poorest people in a downward spiral, leaving hundreds of millions facing malnutrition, water scarcity, ecological threats, and loss of livelihood. Children will be among the primary victims, and the effects will last for lifetimes: studies in Ethiopia, Kenya, and Niger show that being born in a drought year increases a child’s chances of being irreversibly stunted by 41 to 72 percent.

Environmental racism has left black Americans three times more likely to die from pollution
By Bartees Cox

Without a touch of irony, the EPA celebrated Black History Month by publishing a report that finds black communities face dangerously high levels of pollution. African Americans are more likely to live near landfills and industrial plants that pollute water and air and erode quality of life. Because of this, more than half of the 9 million people living near hazardous waste sites are people of color, and black Americans are three times more likely to die from exposure to air pollutants than their white counterparts.

The statistics provide evidence for what advocates call “environmental racism.” Communities of color aren’t suffering by chance, they say. Rather, these conditions are the result of decades of indifference from people in power.

Environmental racism is dangerous. Trump’s EPA doesn’t seem to care.
by P.R. Lockhart

Studies have shown that black and Hispanic children are more likely to develop asthma than their white peers, as are poor children, with research suggesting that higher levels of smog and air pollution in communities of color are a factor. A 2014 study found that people of color live in communities that have more nitrogen dioxide, a pollutant that exacerbates asthma.

The EPA’s own research further supported this. Earlier this year, a paper from the EPA’s National Center for Environmental Assessment found that when it comes to air pollutants that contribute to issues like heart and lung disease, black people are exposed to 1.5 times more of the pollutant than white people, while Hispanic people were exposed to about 1.2 times the amount of non-Hispanic whites. People in poverty had 1.3 times the exposure of those not in poverty.

Trump’s EPA Concludes Environmental Racism Is Real
by Vann R. Newkirk II

Late last week, even as the Environmental Protection Agency and the Trump administration continued a plan to dismantle many of the institutions built to address those disproportionate risks, researchers embedded in the EPA’s National Center for Environmental Assessment released a study indicating that people of color are much more likely to live near polluters and breathe polluted air. Specifically, the study finds that people in poverty are exposed to more fine particulate matter than people living above poverty. According to the study’s authors, “results at national, state, and county scales all indicate that non-Whites tend to be burdened disproportionately to Whites.”

The study focuses on particulate matter, a group of both natural and manmade microscopic suspensions of solids and liquids in the air that serve as air pollutants. Anthropogenic particulates include automobile fumes, smog, soot, oil smoke, ash, and construction dust, all of which have been linked to serious health problems. Particulate matter was named a known definite carcinogen by the International Agency for Research on Cancer, and it’s been named by the EPA as a contributor to several lung conditions, heart attacks, and possible premature deaths. The pollutant has been implicated in asthma prevalence and severity, low birth weights, and high blood pressure.

As the study details, previous works have also linked disproportionate exposure to particulate matter and America’s racial geography. A 2016 study in Environment International found that long-term exposure to the pollutant is associated with racial segregation, with more highly segregated areas suffering higher levels of exposure. A 2012 article in Environmental Health Perspectives found that overall levels of particulate matter exposure for people of color were higher than those for white people. That article also provided a breakdown of just what kinds of particulate matter counts in the exposures. It found that while differences in overall particulate matter by race were significant, differences for some key particles were immense. For example, Hispanics faced rates of chlorine exposure that are more than double those of whites. Chronic chlorine inhalation is known for degrading cardiac function.

The conclusions from scientists at the National Center for Environmental Assessment not only confirm that body of research, but advance it in a top-rate public-health journal. They find that black people are exposed to about 1.5 times more particulate matter than white people, and that Hispanics had about 1.2 times the exposure of non-Hispanic whites. The study found that people in poverty had about 1.3 times more exposure than people above poverty. Interestingly, it also finds that for black people, the proportion of exposure is only partly explained by the disproportionate geographic burden of polluting facilities, meaning the magnitude of emissions from individual factories appears to be higher in minority neighborhoods.

These findings join an ever-growing body of literature that has found that both polluters and pollution are often disproportionately located in communities of color. In some places, hydraulic-fracturing oil wells are more likely to be sited in those neighborhoods. Researchers have found the presence of benzene and other dangerous aromatic chemicals to be linked to race. Strong racial disparities are suspected in the prevalence of lead poisoning.

It seems that almost anywhere researchers look, there is more evidence of deep racial disparities in exposure to environmental hazards. In fact, the idea of environmental justice—or the degree to which people are treated equally and meaningfully involved in the creation of the human environment—was crystallized in the 1980s with the aid of a landmark study illustrating wide disparities in the siting of facilities for the disposal of hazardous waste. Leaders in the environmental-justice movement have posited—in places as prestigious and rigorous as United Nations publications and numerous peer-reviewed journals—that environmental racism exists as the inverse of environmental justice, when environmental risks are allocated disproportionately along the lines of race, often without the input of the affected communities of color.

The idea of environmental racism is, like all mentions of racism in America, controversial. Even in the age of climate change, many people still view the environment mostly as a set of forces of nature, one that cannot favor or disfavor one group or another. And even those who recognize that the human sphere of influence shapes almost every molecule of the places in which humans live, from the climate to the weather to the air they breathe, are often loath to concede that racism is a factor. To many people, racism often connotes purposeful decisions by a master hand, and many see existing segregation as a self-sorting or poverty problem. Couldn’t the presence of landfills and factories in disproportionately black neighborhoods have more to do with the fact that black people tend to be disproportionately poor and thus live in less desirable neighborhoods?

But last week’s study throws more water on that increasingly tenuous line of thinking. While it lacks the kind of complex multivariate design that can really disentangle the exact effects of poverty and race, the finding that race has a stronger effect on exposure to pollutants than poverty indicates that something beyond just the concentration of poverty among black people and Latinos is at play. As the study’s authors write: “A focus on poverty to the exclusion of race may be insufficient to meet the needs of all burdened populations.” Their finding that the magnitude of pollution seems to be higher in communities of color than the number of polluters suggests, indicates that regulations and business decisions are strongly dependent on whether people of color are around. In other words, they might be discriminatory.

This is a remarkable finding, and not only because it could provide one more policy linkage to any number of health disparities, from heart disease to asthma rates in black children that are double those of white children. But the study also stands as an implicit rebuke to the very administration that allowed its release.

Violence: Categories & Data, Causes & Demographics

Most violent crime correlates to social problems in general. Most social problems in general correlate to economic factors such as poverty but even more so inequality. And in a country like the US, most economic factors correlate to social disadvantage and racial oppression, from economic segregation (redlining, sundown towns, etc.) to environmental racism (ghettos located in polluted urban areas, high toxicity rates among minorities, etc.) — consider how areas with historically high rates of slavery at present have higher levels of poverty and inequality, impacting not just blacks but also whites living in those communities.

Socialized Medicine & Externalized Costs

About 40 percent of deaths worldwide are caused by water, air and soil pollution, concludes a Cornell researcher. Such environmental degradation, coupled with the growth in world population, are major causes behind the rapid increase in human diseases, which the World Health Organization has recently reported. Both factors contribute to the malnourishment and disease susceptibility of 3.7 billion people, he says.

Percentages of Suffering and Death

Even accepting the data that Pinker uses, it must be noted that he isn’t including all violent deaths. Consider economic sanctions and neoliberal exploitation, vast poverty and inequality forcing people to work long hours in unsafe and unhealthy conditions, covert operations to overthrow governments and destabilize regions, anthropogenic climate change with its disasters, environmental destruction and ecosystem collapse, loss of arable land and food sources, pollution and toxic dumps, etc. All of this would involve food scarcity, malnutrition, starvation, droughts, rampant disease, refugee crises, diseases related to toxicity and stress, etc; along with all kinds of other consequences to people living in desperation and squalor.

This has all been intentionally caused through governments, corporations, and other organizations seeking power and profit while externalizing costs and harm. In my lifetime, the fatalities from this large-scale, often slow violence and intergenerational trauma could add up to hundreds of millions or maybe billions of lives cut short. Plus, as neoliberal globalization worsens inequality, there is a direct link to higher rates of homicides, suicides, and stress-related diseases for the most impacted populations. Yet none of these deaths would be counted as violent, no matter how horrific they were for the victims. And those like Pinker adding up the numbers would never have to acknowledge this overwhelming reality of suffering. It can’t be seen in the official data on violence, as the causes are disconnected from the effects. But why should only a small part of the harm and suffering get counted as violence?

Learning to Die in the Anthropocene: Reflections on the End of a Civilization
by Roy Scranton
Kindle Locations 860-888 (see here)

Consider: Once among the most modern, Westernized nations in the Middle East, with a robust, highly educated middle class, Iraq has been blighted for decades by imperialist aggression, criminal gangs, interference in its domestic politics, economic liberalization, and sectarian feuding. Today it is being torn apart between a corrupt petrocracy, a breakaway Kurdish enclave, and a self-declared Islamic fundamentalist caliphate, while a civil war in neighboring Syria spills across its borders. These conflicts have likely been caused in part and exacerbated by the worst drought the Middle East has seen in modern history. Since 2006, Syria has been suffering crippling water shortages that have, in some areas, caused 75 percent crop failure and wiped out 85 percent of livestock, left more than 800,000 Syrians without a livelihood, and sent hundreds of thousands of impoverished young men streaming into Syria’s cities. 90 This drought is part of long-term warming and drying trends that are transforming the Middle East. 91 Not just water but oil, too, is elemental to these conflicts. Iraq sits on the fifth-largest proven oil reserves in the world. Meanwhile, the Islamic State has been able to survive only because it has taken control of most of Syria’s oil and gas production. We tend to think of climate change and violent religious fundamentalism as isolated phenomena, but as Retired Navy Rear Admiral David Titley argues, “you can draw a very credible climate connection to this disaster we call ISIS right now.” 92

A few hundred miles away, Israeli soldiers spent the summer of 2014 killing Palestinians in Gaza. Israel has also been suffering drought, while Gaza has been in the midst of a critical water crisis exacerbated by Israel’s military aggression. The International Committee for the Red Cross reported that during summer 2014, Israeli bombers targeted Palestinian wells and water infrastructure. 93 It’s not water and oil this time, but water and gas: some observers argue that Israel’s “Operation Protective Edge” was intended to establish firmer control over the massive Leviathan natural gas field, discovered off the coast of Gaza in the eastern Mediterranean in 2010.94

Meanwhile, thousands of miles to the north, Russian-backed separatists fought fascist paramilitary forces defending the elected government of Ukraine, which was also suffering drought. 95 Russia’s role as an oil and gas exporter in the region and the natural gas pipelines running through Ukraine from Russia to Europe cannot but be key issues in the conflict. Elsewhere, droughts in 2014 sent refugees from Guatemala and Honduras north to the US border, devastated crops in California and Australia, and threatened millions of lives in Eritrea, Somalia, Ethiopia, Sudan, Uganda, Afghanistan, India, Morocco, Pakistan, and parts of China. Across the world, massive protests and riots have swept Bosnia and Herzegovina, Venezuela, Brazil, Turkey, Egypt, and Thailand, while conflicts rage on in Colombia, Libya, the Central African Republic, Sudan, Nigeria, Yemen, and India. And while the world burns, the United States has been playing chicken with Russia over control of Eastern Europe and the melting Arctic, and with China over control of Southeast Asia and the South China Sea, threatening global war on a scale not seen in seventy years. This is our present and future: droughts and hurricanes, refugees and border guards, war for oil, water, gas, and food.

Donald Trump Is the First Demagogue of the Anthropocene
by Robinson Meyer

First, climate change could easily worsen the inequality that has already hollowed out the Western middle class. A recent analysis in Nature projected that the effects of climate change will reduce the average person’s income by 23 percent by the end of the century. The U.S. Environmental Protection Agency predicts that unmitigated global warming could cost the American economy $200 billion this century. (Some climate researchers think the EPA undercounts these estimates.)

Future consumers will not register these costs so cleanly, though—there will not be a single climate-change debit exacted on everyone’s budgets at year’s end. Instead, the costs will seep in through many sources: storm damage, higher power rates, real-estate depreciation, unreliable and expensive food. Climate change could get laundered, in other words, becoming just one more symptom of a stagnant and unequal economy. As quality of life declines, and insurance premiums rise, people could feel that they’re being robbed by an aloof elite.

They won’t even be wrong. It’s just that due to the chemistry of climate change, many members of that elite will have died 30 or 50 years prior. […]

Malin Mobjörk, a senior researcher at the Stockholm International Peace Research Institute, recently described a “growing consensus” in the literature that climate change can raise the risk of violence. And the U.S. Department of Defense already considers global warming a “threat multiplier” for national security. It expects hotter temperatures and acidified oceans to destabilize governments and worsen infectious pandemics.

Indeed, climate change may already be driving mass migrations. Last year, the Democratic presidential candidate Martin O’Malley was mocked for suggesting that a climate-change-intensified drought in the Levant—the worst drought in 900 years—helped incite the Syrian Civil War, thus kickstarting the Islamic State. The evidence tentatively supports him. Since the outbreak of the conflict, some scholars have recognized that this drought pushed once-prosperous farmers into Syria’s cities. Many became unemployed and destitute, aggravating internal divisions in the run-up to the war. […]

They were not disappointed. Heatwaves, droughts, and other climate-related exogenous shocks do correlate to conflict outbreak—but only in countries primed for conflict by ethnic division. In the 30-year period, nearly a quarter of all ethnic-fueled armed conflict coincided with a climate-related calamity. By contrast, in the set of all countries, war only correlated to climatic disaster about 9 percent of the time.

“We cannot find any evidence for a generalizable trigger relationship, but we do find evidence for some risk enhancement,” Schleussner told me. In other words,  climate disaster will not cause a war, but it can influence whether one begins.

Why climate change is very bad for your health
by Geordan Dickinson Shannon

Ecosystems

We don’t live in isolation from other ecosystems. From large-scale weather events, through to the food we eat daily, right down to the minute organisms colonising our skin and digestive systems, we live and breathe in co-dependency with our environment.

A change in the delicate balance of micro-organisms has the potential to lead to disastrous effects. For example, microbial proliferation – which is predicted in warmer temperatures driven by climate change – may lead to more enteric infections (caused by viruses and bacteria that enter the body through the gastrointestinal tract), such as salmonella food poisoning and increased cholera outbreaks related to flooding and warmer coastal and estuarine water.

Changes in temperature, humidity, rainfall, soil moisture and sea-level rise, caused by climate change, are also affecting the transmission of dangerous insect-borne infectious diseases. These include malaria, dengue, Japanese encephalitis, chikungunya, West Nile virus, lymphatic filariasis, plague, tick-borne encephalitis, Lyme disease, rickettsioses, and schistosomiasis.

Through climate change, the pattern of human interaction will likely change and so will our interactions with disease-spreading insects, especially mosquitoes. The World Health Organisation has also stressed the impact of climate change on the reproductive, survival and bite rates of insects, as well as their geographic spread.

Climate refugees

Perhaps the most disastrous effect of climate change on human health is the emergence of large-scale forced migration from the loss of local livelihoods and weather events – something that is recognised by the United Nations High Commission on Human Rights. Sea-level rise, decreased crop yield, and extreme weather events will force many people from their lands and livelihoods, while refugees in vulnerable areas also face amplified conditions such as fewer food supplies and more insect-borne diseases. And those who are displaced put a significant health and economic burden on surrounding communities.

The International Red Cross estimates that there are more environmental refugees than political refugees. Around 36m people were displaced by natural disasters in 2009, a figure that is predicted to rise to more than 50m by 2050. In one worst-case scenario, as many as 200m people could become environmental refugees.

Not a level playing field

Climate change has emerged as a major driver of global health inequalities. As J. Timmons Roberts, professor of Environmental Studies and Sociology at Brown University, put it:

Global warming is all about inequality, both in who will suffer most its effects and in who created the problem in the first place.

Global climate change further polarises the haves and the have-nots. The Intergovernmental Panel on Climate Change predicts that climate change will hit poor countries hardest. For example, the loss of healthy life years in low-income African countries is predicted to be 500 times that in Europe. The number of people in the poorest countries most vulnerable to hunger is predicted by Oxfam International to increase by 20% in 2050. And many of the major killers affecting developing countries, such as malaria, diarrhoeal illnesses, malnutrition and dengue, are highly sensitive to climate change, which would place a further disproportionate burden on poorer nations.

Most disturbingly, countries with weaker health infrastructure – generally situated in the developing world – will be the least able to cope with the effects of climate change. The world’s poorest regions don’t yet have the technical, economic, or scientific capacity to prepare or adapt.

Predictably, those most vulnerable to climate change are not those who contribute most to it. China, the US, and the European Union combined have contributed more than half the world’s total carbon dioxide emissions in the last few centuries. By contrast, and unfairly, countries that contributed the least carbon emissions (measured in per capita emissions of carbon dioxide) include many African nations and small Pacific islands – exactly those countries which will be least prepared and most affected by climate change.

Here’s Why Climate Change Will Increase Deaths by Suicide
by Francis Vergunst, Helen Louise Berry & Massimiliano Orri

Suicide is already among the leading causes of death worldwide. For people aged 15-55 years, it is among the top five causes of death. Worldwide nearly one million people die by suicide each year — more than all deaths from war and murder combined.

Using historical temperature records from the United States and Mexico, the researchers showed that suicide rates increased by 0.7 per cent in the U.S. and by 2.1 per cent in Mexico when the average monthly temperatures rose by 1°C.

The researchers calculated that if global temperatures continue to rise at these rates, between now and 2050 there could be 9,000 to 40,000 additional suicides in the U.S. and Mexico alone. This is roughly equivalent to the number of additional suicides that follow an economic recession.

Spikes during heat waves

It has been known for a long time that suicide rates spike during heat waves. Hotter weather has been linked with higher rates of hospital admissions for self-harm, suicide and violent suicides, as well as increases in population-level psychological distress, particularly in combination with high humidity.

Another recent study, which combined the results of previous research on heat and suicide, concluded there is “a significant and positive association between temperature rises and incidence of suicide.”

Why this is so remains unclear. There is a well-documented link between rising temperatures and interpersonal violence, and suicide could be understood as an act of violence directed at oneself. Lisa Page, a researcher in psychology at King’s College London, notes:

“While speculative, perhaps the most promising mechanism to link suicide with high temperatures is a psychological one. High temperatures have been found to lead individuals to behave in a more disinhibited, aggressive and violent manner, which might in turn result in an increased propensity for suicidal acts.”

Hotter temperatures are taxing on the body. They cause an increase in the stress hormone cortisol, reduce sleep quality and disrupt people’s physical activity routines. These changes can reduce well-being and increase psychological distress.

Disease, water shortages, conflict and war

The effects of hotter temperatures on suicides are symptomatic of a much broader and more expansive problem: the impact of climate change on mental health.

Climate change will increase the frequency and severity of heat waves, droughts, storms, floods and wildfires. It will extend the range of infectious diseases such as Zika virus, malaria and Lyme disease. It will contribute to food and water shortages and fuel forced migration, conflict and war.

These events can have devastating effects on people’s health, homes and livelihoods and directly impact psychological health and well-being.

But effects are not limited to people who suffer direct losses — for example, it has been estimated that up to half of Hurricane Katrina survivors developed post-traumatic stress disorder even when they had suffered no direct physical losses.

The feelings of loss that follow catastrophic events, including a sense of loss of safety, can erode community well-being and further undermine mental health resilience.

The Broken Ladder
by Keith Payne
pp. 3-4 (see here)

[W]hen the level of inequality becomes too large to ignore, everyone starts acting strange.

But they do not act strange in just any old way. Inequality affects our actions and our feelings in the same systematic, predictable fashion again and again. It makes us shortsighted and prone to risky behavior, willing to sacrifice a secure future for immediate gratification. It makes us more inclined to make self-defeating decisions. It makes us believe weird things, superstitiously clinging to the world as we want it to be rather than as it is. Inequality divides us, cleaving us into camps not only of income but also of ideology and race, eroding our trust in one another. It generates stress and makes us all less healthy and less happy.

Picture a neighborhood full of people like the ones I’ve described above: shortsighted, irresponsible people making bad choices; mistrustful people segregated by race and by ideology; superstitious people who won’t listen to reason; people who turn to self-destructive habits as they cope with the stress and anxieties of their daily lives. These are the classic tropes of poverty and could serve as a stereotypical description of the population of any poor inner-city neighborhood or depressed rural trailer park. But as we will see in the chapters ahead, inequality can produce these tendencies even among the middle class and wealthy individuals.

pp. 119-120 (see here)

But how can something as abstract as inequality or social comparisons cause something as physical as health? Our emergency rooms are not filled with people dropping dead from acute cases of inequality. No, the pathways linking inequality to health can be traced through specific maladies, especially heart disease, cancer, diabetes, and health problems stemming from obesity. Abstract ideas that start as macroeconomic policies and social relationships somehow get expressed in the functioning of our cells.

To understand how that expression happens, we have to first realize that people from different walks of life die different kinds of deaths, in part because they live different kinds of lives. We saw in Chapter 2 that people in more unequal states and countries have poor outcomes on many health measures, including violence, infant mortality, obesity and diabetes, mental illness, and more. In Chapter 3 we learned that inequality leads people to take greater risks, and uncertain futures lead people to take an impulsive, live fast, die young approach to life. There are clear connections between the temptation to enjoy immediate pleasures versus denying oneself for the benefit of long-term health. We saw, for example, that inequality was linked to risky behaviors. In places with extreme inequality, people are more likely to abuse drugs and alcohol, more likely to have unsafe sex, and so on. Other research suggests that living in a high-inequality state increases people’s likelihood of smoking, eating too much, and exercising too little.

Essentialism On the Decline

Before getting to the topic of essentialism, let me take an indirect approach. In reading about paleolithic diets and traditional foods, a recurring theme is inflammation, specifically as it relates to the health of the gut-brain network and immune system.

The paradigm change this signifies is that seemingly separate diseases with different diagnostic labels often have underlying commonalities. They share overlapping sets of causal and contributing factors, biological processes and symptoms. This is why simple dietary changes can have a profound effect on numerous health conditions. For some, the diseased state expresses as mood disorders and for others as autoimmune disorders and for still others something entirely else, but there are immense commonalities between them all. The differences have more to do with how dysbiosis and dysfunction happens to develop, where it takes hold in the body, and so what symptoms are experienced.

From a paleo diet perspective in treating both patients and her own multiple sclerosis, Terry Wahls gets at this point in a straightforward manner (p. 47): “In a very real sense, we all have the same disease because all disease begins with broken, incorrect biochemistry and disordered communication within and between our cells. […] Inside, the distinction between these autoimmune diseases is, frankly, fairly arbitrary”. In How Emotions Are Made, Lisa Feldman Barrett wrote (Kindle Locations 3834-3850):

“Inflammation has been a game-changer for our understanding of mental illness. For many years, scientists and clinicians held a classical view of mental illnesses like chronic stress, chronic pain, anxiety, and depression. Each ailment was believed to have a biological fingerprint that distinguished it from all others. Researchers would ask essentialist questions that assume each disorder is distinct: “How does depression impact your body? How does emotion influence pain? Why do anxiety and depression frequently co-occur?” 9

“More recently, the dividing lines between these illnesses have been evaporating. People who are diagnosed with the same-named disorder may have greatly diverse symptoms— variation is the norm. At the same time, different disorders overlap: they share symptoms, they cause atrophy in the same brain regions, their sufferers exhibit low emotional granularity, and some of the same medications are prescribed as effective.

“As a result of these findings, researchers are moving away from a classical view of different illnesses with distinct essences. They instead focus on a set of common ingredients that leave people vulnerable to these various disorders, such as genetic factors, insomnia, and damage to the interoceptive network or key hubs in the brain (chapter 6). If these areas become damaged, the brain is in big trouble: depression, panic disorder, schizophrenia, autism, dyslexia, chronic pain, dementia, Parkinson’s disease, and attention deficit hyperactivity disorder are all associated with hub damage. 10

“My view is that some major illnesses considered distinct and “mental” are all rooted in a chronically unbalanced body budget and unbridled inflammation. We categorize and name them as different disorders, based on context, much like we categorize and name the same bodily changes as different emotions. If I’m correct, then questions like, “Why do anxiety and depression frequently co-occur?” are no longer mysteries because, like emotions, these illnesses do not have firm boundaries in nature.”

What jumped out at me was the conventional view of disease as essentialist, and hence the related essentialism in biology and psychology. This is exemplified by genetic determinism, such as it informs race realism. It’s easy for most well-informed people to dismiss race realists, but essentialism takes on much more insidious forms that are harder to detect and root out. When scientists claimed to find a gay gene, some gay men quickly took this genetic determinism as a defense against the fundamentalist view that homosexuality is a choice and a sin. It turned out that there was no gay gene (by the way, this incident demonstrated how, in reacting to reactionaries, even leftist activists can be drawn into the reactionary mind). Not only is there no gay gene but also no simple and absolute gender divisions at all — as I previously explained (Is the Tide Starting to Turn on Genetics and Culture?):

“Recent research has taken this even further in showing that neither sex nor gender is binary (1, 2, 3, 4, & 5), as genetics and its relationship to environment, epigenetics, and culture is more complex than was previously realized. It’s far from uncommon for people to carry genetics of both sexes, even multiple DNA. It has to do with diverse interlinking and overlapping causal relationships. We aren’t all that certain at this point what ultimately determines the precise process of conditions, factors, and influences in how and why any given gene expresses or not and how and why it expresses in a particular way.”

The attraction of essentialism is powerful. And as shown in numerous cases, the attraction can be found across the political spectrum, as it offers a seemingly strong defense by diverting attention away from other factors. Similar to the gay gene, many people defend neurodiversity as if some people are simply born a particular way, and therefore we can’t and shouldn’t seek to do anything to change or improve their condition, much less cure it or prevent it in future generations.

For example, those on the high-functioning end of the autism spectrum will occasionally defend their condition as a gift in their ability to think and perceive differently. That is fine as far as it goes, but from a scientific perspective we should still find it concerning that conditions like this are on a drastic rise, a rise that can’t be explained by greater rates of diagnosis alone. Whether or not one believes the world would be a better place with more people with autism, this shouldn’t be left as a fatalistic vision of an evolutionary leap, especially considering most on the autism spectrum aren’t high functioning — instead, we should try to understand why it is happening and what it means.

Researchers have found that there are prospective causes to be studied. Consider propionate, a substance discussed by Alanna Collen (10% Human, p. 83): “although propionate was an important compound in the body, it was also used as a preservative in bread products – the very foods many autistic children crave. To top it all off, clostridia species are known to produce propionate. In itself, propionate is not ‘bad’, but MacFabe began to wonder whether autistic children were getting an overdose.” This might explain why antibiotics helped many with autism, as they would have been knocking off the clostridia population that was boosting propionate. To emphasize this point, when rodents were injected with propionate, they exhibited the precise behaviors of autism and they too showed inflammation in the brain. The fact that autistics often have brain inflammation, an unhealthy condition, is strong evidence that autism shouldn’t be taken as mere neurodiversity (and, among autistics, the commonality of inflammation-related gut issues emphasizes this point).

There is no doubt that genetic determinism, like the belief in an eternal soul, can be comforting. We identify with our genes, as we inherit them and are born with them. But to speak of inflammation or propionate or whatever makes it seem like we are victims of externalities. And it means we aren’t isolated individuals to be blamed or to take credit for who we are. To return to Collen (pp. 88-89):

“In health, we like to think we are the products of our genes and experiences. Most of us credit our virtues to the hurdles we have jumped, the pits we have climbed out of, and the triumphs we have fought for. We see our underlying personalities as fixed entities – ‘I am just not a risk-taker’, or ‘I like things to be organised’ – as if these are a result of something intrinsic to us. Our achievements are down to determination, and our relationships reflect the strength of our characters. Or so we like to think.

“But what does it mean for free will and accomplishment, if we are not our own masters? What does it mean for human nature, and for our sense of self? The idea that Toxoplasma, or any other microbe inhabiting your body, might contribute to your feelings, decisions and actions, is quite bewildering. But if that’s not mind-bending enough for you, consider this: microbes are transmissible. Just as a cold virus or a bacterial throat infection can be passed from one person to another, so can the microbiota. The idea that the make-up of your microbial community might be influenced by the people you meet and the places you go lends new meaning to the idea of cultural mind-expansion. At its simplest, sharing food and toilets with other people could provide opportunity for microbial exchange, for better or worse. Whether it might be possible to pick up microbes that encourage entrepreneurship at a business school, or a thrill-seeking love of motorbiking at a race track, is anyone’s guess for now, but the idea of personality traits being passed from person to person truly is mind-expanding.”

This goes beyond the personal level, which makes the proposal all the more unsettling. Our respective societies, communities, etc. might be heavily influenced by environmental factors that we can’t see. A ton of research shows the tremendous impact of parasites, heavy metal toxins, food additives, farm chemicals, hormones, hormone mimics, hormone disruptors, etc. Entire regions might be shaped by even a single species of parasite, such as how higher rates of Toxoplasma gondii infection in New England are directly correlated with higher rates of neuroticism (see What do we inherit? And from whom? & Uncomfortable Questions About Ideology).

Essentialism, though still popular, has taken numerous major hits in recent years. It once was the dominant paradigm and went largely unquestioned. Consider how, early last century, respectable fields and frameworks of study such as anthropology, linguistic relativity, and behaviorism suggested that humans were largely products of environmental and cultural factors. This was the original basis of the attack on racism and race realism. In linguistics, Noam Chomsky overturned this view by positing the essentialist belief that, though never observed, much less proven, there must exist within the human brain a language module with a universal grammar. It was able to defeat and replace the non-essentialist theories because it was more satisfying to the WEIRD ideologies that were becoming a greater force in an increasingly WEIRD society.

Ever since Plato, Western civilization has been drawn toward the extremes of essentialism (as part of the larger Axial Age shift toward abstraction and idealism). Yet there has also long been a countervailing force (even among the ancients, non-essentialist interpretations were common; consider group identity: here, here, here, here, and here). It wasn’t predetermined that essentialism would be so victorious as to have nearly obliterated the memory of all alternatives. It fit the spirit of the times for this past century, but now the public mood is shifting again. It’s no accident that, as social democracy and socialism regains favor, environmentalist explanations are making a comeback. But this is merely the revival of a particular Western tradition of thought, a tradition that is centuries old.

I was reminded of this in reading Liberty in America’s Founding Moment by Howard Schwartz. It’s an interesting shift of gears, since Schwartz doesn’t write about anything related to biology, health, or science. But an environmentalist critique does come out indirectly in his analysis of David Hume (1711-1776). I’ve mostly thought of Hume in terms of his bundle theory of self, possibly borrowed from Buddhism, which he might have learned of from Christian missionaries returning from the East. However he came to it, the bundle theory argued that there is no singular coherent self, contrary to a central tenet of traditional Christian theology. Still, heretical views of the self were hardly new — some detect a possible Western precursor of Humean bundle theory in the ideas of Baruch Spinoza (1632-1677).

Whatever its origins in Western thought, environmentalism has been challenging essentialism since the Enlightenment. And in the case of Hume, there is an early social constructionist view of society and politics, in which what motivates people isn’t some fixed essence. This puts a different spin on things, as Hume’s writings were widely read during the revolutionary era when the United States was founded. Thomas Jefferson, among others, was familiar with Hume and highly recommended his work. Hume represented the opposite position to John Locke. We are now returning to this old battle of ideas.

End of Corporate Personhood and Citizenship

Awkward! The idea of ‘corporate personhood’ relies on the same Amendment that gives birthright citizenship
by Mark Ames
(from REAL Democracy History Calendar: August 27 – September 2)

“[M]ost of the GOP candidates want to change the 14th Amendment to deny birthright citizenship to children born here to foreign parents…

“But beyond the twisted racist dementia fueling this, there’s another problem for these GOP candidates: Section One of the 14th Amendment, granting birthright citizenship to anyone born in the US, is also the same section of the same amendment interpreted by our courts to grant corporations “personhood”…

“So to repeat: GOP candidates from Trump and Bush down the line to Silicon Valley’s boy-disrupter Rand Paul want to revoke citizenship to living humans born in the US to foreign parents; but they support granting citizenship rights and guarantees to artificial persons –corporations – which are really legal fictions granted by the states, allowing a pool of investors legal liability and tax advantages in order to profit more than they otherwise would as mere living humans”…

“And here we are today—where we have an Amendment meant to protect vulnerable and abused minorities now under attack from Lincoln’s party, who at the same time want to use the same section in the same amendment to protect fictitious artificial persons and allow them greater rights and powers than even those of us born here to American parents.”

Now That We’re Talking About Citizenship, Let’s Revoke Corporate Personhood
by C. Robert Gibson
(from REAL Democracy History Calendar: August 20 – 26)

“Thanks to Donald Trump and Jeb Bush, the media is now entertaining discussion on the idea of revoking citizenship for human beings, to the point where the media is calculating the cost of these insane and unconstitutional proposals. If Trump wants to revoke the citizenship of people who are using up all of our resources and not paying taxes, and if the media really wants to have the conversation, let’s start with multinational corporations…

“A constitutional amendment that explicitly states that corporations aren’t people, and that money is not speech would do the trick. The organization Move to Amend is doing just that, and have roughly 535 resolutions that have either been passed at the local/state level or are currently in progress. State legislatures in Delaware, Illinois, Minnesota, Montana, Vermont, and West Virginia have already passed such resolutions.

“Donald Trump has been able to shift the Overton Window of acceptable political discourse far to the right in just a matter of weeks, to where the media is now entertaining discussion on the idea of revoking citizenship for human beings. The left must be just as willing to push the discussion toward revoking corporate citizenship due to the harm they’ve caused to our political process, as well as our public programs that have been slashed to the bone due to corporations avoiding billions in taxes.”

“…we can’t pretend they don’t exist anymore.”

James Bridle (from YouTube transcript):

But the other thing, the thing that really gets to me about this, is that I’m not sure we even really understand how we got to this point. We’ve taken all of this influence, all of these things, and munged them together in a way that no one really intended. And yet, this is also the way that we’re building the entire world.

We’re taking all of this data, a lot of it bad data, a lot of historical data full of prejudice, full of all of our worst impulses of history, and we’re building that into huge data sets and then we’re automating it. And we’re munging it together into things like credit reports, into insurance premiums, into things like predictive policing systems, into sentencing guidelines. This is the way we’re actually constructing the world today out of this data.

And I don’t know what’s worse, that we built a system that seems to be entirely optimized for the absolute worst aspects of human behavior, or that we seem to have done it by accident, without even realizing that we were doing it, because we didn’t really understand the systems that we were building, and we didn’t really understand how to do anything differently with it.

There’s a couple of things I think that really seem to be driving this most fully on YouTube, and the first of those is advertising, which is the monetization of attention without any real other variables at work, any care for the people who are actually developing this content, the centralization of the power, the separation of those things. And I think however you feel about the use of advertising to kind of support stuff, the sight of grown men in diapers rolling around in the sand in the hope that an algorithm that they don’t really understand will give them money for it suggests that this probably isn’t the thing that we should be basing our society and culture upon, and the way in which we should be funding it.

And the other thing that’s kind of the major driver of this is automation, which is the deployment of all of this technology as soon as it arrives, without any kind of oversight, and then once it’s out there, kind of throwing up our hands and going, “Hey, it’s not us, it’s the technology.” Like, “We’re not involved in it.” That’s not really good enough, because this stuff isn’t just algorithmically governed, it’s also algorithmically policed. When YouTube first started to pay attention to this, the first thing they said they’d do about it was that they’d deploy better machine learning algorithms to moderate the content.

Well, machine learning, as any expert in it will tell you, is basically what we’ve started to call software whose workings we don’t really understand. And I think we have enough of that already. We shouldn’t be leaving this stuff up to AI to decide what’s appropriate or not, because we know what happens: it’ll start censoring other things. It’ll start censoring queer content. It’ll start censoring legitimate public speech. What’s allowed in these discourses shouldn’t be left up to unaccountable systems. It’s part of a discussion all of us should be having.

But I’d leave a reminder that the alternative isn’t very pleasant, either. YouTube also announced recently that they’re going to release a version of their kids’ app that would be entirely moderated by humans. Facebook — Zuckerberg said much the same thing at Congress, when pressed about how they were going to moderate their stuff. He said they’d have humans doing it. And what that really means is, instead of having toddlers being the first people to see this stuff, you’re going to have underpaid, precarious contract workers without proper mental health support being damaged by it as well. And I think we can all do quite a lot better than that.

The thought that brings those two things together, really, for me, is agency. By agency, I mean knowing how to act in our own best interests — which is almost impossible to do within systems that we don’t really fully understand. Inequality of power always leads to violence, and we can see inside these systems that inequality of understanding does the same thing. If there’s one thing that we can do to start to improve these systems, it’s to make them more legible to the people who use them, so that all of us have a common understanding of what’s actually going on here.

The thing, though, I think most about these systems is that this isn’t, as I hope I’ve explained, really about YouTube. It’s about everything. These issues of accountability and agency, of opacity and complexity, of the violence and exploitation that inherently results from the concentration of power in a few hands — these are much, much larger issues. And they’re issues not just of YouTube and not just of technology in general, and they’re not even new. They’ve been with us for ages.

But we finally built this system, this global system, the internet, that’s actually showing them to us in this extraordinary way, making them undeniable. Technology has this extraordinary capacity to both instantiate and continue all of our most extraordinary, often hidden desires and biases, encoding them into the world — but it also writes them down, so that we can see them, so that we can’t pretend they don’t exist anymore.

We need to stop thinking about technology as a solution to all of our problems, and instead think of it as a guide to what those problems actually are, so we can start thinking about them properly and begin to address them.

“Everything is Going According to Plan”: Being an Activist in the Anthropocene

“Everything is going according to plan. I don’t know whose plan it is, and I think that it’s a really stupid plan, but everything is going according to it anyway.”
— Dmitry Orlov

GODS & RADICALS

“What If It’s Already Too Late”

I had a terrible thought recently …

“What if it’s already too late?”

Actually, this idea has been haunting me, hovering on the boundary between my conscious and unconscious mind, for some time.

In 2016, Bill McKibben, founder of the climate activist organization 350.org, came to speak at a rally at the BP tar sands refinery in my “backyard” in the highly industrialized northwest corner of Indiana. The occasion was a series of coordinated direct actions around the world against the fossil fuel industry, collectively hailed as the largest direct action in the history of the environmental movement.

What struck me about McKibben’s speech, though, was its tone of … well, hopelessness. Here’s how he concluded his 10-minute speech:

“I wish that I could guarantee you that we’re all going to win in the end, the whole thing. And I can’t, because we…
