On Truth and Bullshit

One of the most salient features of our culture is that there is so much bullshit.

This is how Harry Frankfurt begins his essay, “On Bullshit”. He continues:

“Everyone knows this. Each of us contributes his share. But we tend to take the situation for granted. Most people are rather confident of their ability to recognize bullshit and to avoid being taken in by it. So the phenomenon has not aroused much deliberate concern, or attracted much sustained inquiry. In consequence, we have no clear understanding of what bullshit is, why there is so much of it, or what functions it serves. And we lack a conscientiously developed appreciation of what it means to us. In other words, we have no theory.”

So, what is this bullshit? He works through many definitions of related words. A main point is that bullshit falls “short of lying”, which leads him to insincerity. The bullshitter isn’t a liar, for the bullshitter is concerned with neither truth nor its contrary. No intention to lie is required.

“Someone who lies and someone who tells the truth are playing on the opposite sides, so to speak, in the same game. Each responds to the facts . . . the response of the one is guided by the authority of the truth, while the response of the other defies that authority and refuses to meet its demands. The bullshitter ignores these demands altogether. He does not reject the authority of truth, as the liar does, and oppose himself to it. He pays no attention to it at all.”

Bullshitting is more of a creative act that dances around such concerns of verity:

“For the essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect (apart from authenticity itself) inferior to the real thing. What is not genuine need not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made. This points to a similar and fundamental aspect of the essential nature of bullshit: although it is produced without concern with the truth, it need not be false. The bullshitter is faking things. But this does not mean that he necessarily gets them wrong.”

Bullshit is, first and foremost, insincere. In Frankfurt’s essay, that is some combination of an observation, premise, and conclusion. It is the core issue. But as with bullshit, what is this insincerity? How are we to judge it, from what perspective and according to what standard?

His answer seems to be that bullshit is to sincerity as a lie is to the truth. This implies that the bullshitter knows they are insincere in the way the liar knows they are being untruthful. And as the bullshitter doesn’t care about truth, the liar doesn’t care about sincerity. This assumes that the intention of a speaker can be known, both to the presumed bullshitter and to the one perceiving (or accusing) them as a bullshitter. We know bullshit when we hear it, as we know porn when we see it.

After much analysis, the ultimate conclusion is that “sincerity itself is bullshit.” Bullshit is insincere and sincerity is bullshit. How clever! But there is a genuine point being made. Frankfurt’s ideal is that of truth, not sincerity. Truth and sincerity aren’t polar opposite ideals. They are separate worldviews and attitudes, so the argument goes.

Coming to the end of the essay, I immediately recognized this conflict. It is an old one, going back at least to Socrates, though it was part of larger transcultural changes happening in the post-bicameral Axial Age. Socrates is simply the standard originating point for Western thought, the frame we prefer since Greece represents the earliest known example of a democracy (as a highly organized political system within an advanced civilization).

Socrates, as known through the writings of Plato, is often portrayed as the victim of democracy’s dark populism. The reality, though, is that Plato was severely anti-democratic, and Socrates was associated with the authoritarian forces that sought to destroy Athenian democracy. His fellow Athenians didn’t take kindly to this treasonous threat, whether or not it was just and fair to blame Socrates (we shall never know, since we lack the details of the accusation and evidence, as no official court proceedings are extant).

What we know, from Plato, is that Socrates had issues with the Sophists. So, who were these Sophists? It’s a far more interesting question than it first appears. It turns out that the word has a complicated history. It originally referred to poets, the original teachers of wisdom in archaic Greek society. And it should be recalled that the poets were specifically excluded from Plato’s utopian society because of the danger that, in Plato’s mind, they posed to rationalistic idealism.

What did the poets and Sophists have in common? They both used language to persuade, through language that was concrete rather than abstract, emotional rather than detached. Plato was interested in big ‘T’ absolute Truth, whereas those employing poetry and rhetoric were interested in small ‘t’ relative truths that were on a human scale. Ancient Greek poets and Sophists weren’t necessarily untruthful but simply indifferent to Platonic ideals of Truth.

This does relate to Frankfurt’s theory of bullshit. Small ‘t’ truths are bullshit, or at least are easily seen in this light. The main example he uses demonstrates this. A friend of Ludwig Wittgenstein’s was sick, and she told him, “I feel just like a dog that has been run over.” Wittgenstein saw this as careless use of language, not even meaningful enough to be wrong. It was a human truth, instead of a philosophical Truth.

Her statement expressed a physical and emotional experience. One could even argue that Wittgenstein was wrong that a human cannot know what a hurt dog feels like, as mammals share similar biology and neurology. Besides, for all we know, this friend had had a pet dog run over by a car and was speaking from a closer relationship to that dog than she had to Wittgenstein. Reading this account, Wittgenstein comes off as someone with severe Asperger’s, and indeed plenty of people have speculated elsewhere about this possible diagnosis. Whatever the case, his response was obtuse and callous.

It is hard to know what relevance such an anecdote might have in clarifying the meaning of bullshit. What it does make clear is that there are different kinds of truths.

This is what separated Socrates and Plato on one side and the poets and Sophists on the other. The Sophists had inherited a tradition of teaching from the poets and it was a tradition that became ever more important in the burgeoning democracy. But it was an era when the power of divine voice still clung to the human word. Persuasion was a power not to be underestimated, as the common person back then hadn’t yet developed the thick-boundaried intellectual defensiveness against rhetoric that we moderns take for granted. Plato sought a Truth that was beyond both petty humans and petty gods, a longing to get beyond all the ‘bullshit’.

Yet it might be noted that some even referred to Socrates and Plato as Sophists. They too used rhetoric to persuade. And of course, the Platonic is the foundation of modern religion (e.g., Neoplatonic Alexandrian Jews who helped shape early Christian theology and Biblical exegesis), the great opponent of the Enlightenment tradition of rationality.

This is why some, instead, prefer to emphasize the divergent strategies of Plato and Aristotle, the latter making its own accusations of bullshit against the former. From the Aristotelian view, Platonism is a belief system proclaiming truth while remaining willfully detached from reality. The Platonic concern with Truth, from this perspective, can seem rather meaningless, maybe so meaningless as to not even be false. The Sophists who opposed Socrates and Plato were at least interested in practical knowledge that applied to the real world of human society, dedicated as they were to teaching the skills necessary for a functioning democracy.

As a side note, the closest equivalent to the Sophists today is the liberal arts professor who hopes to instill a broad knowledge in each new generation of students. It’s quite telling that those on the political right are the most likely to make accusations of bullshit against the liberal arts tradition. A traditional university education was founded on philology, the study of languages. And the teaching of rhetoric was standard in education into the early 1900s. Modern Western civilization was built on the values of the Sophists: the ideal of a well-rounded education and the central importance of language, including the ability to speak well and persuasively, to defend an argument logically, and to make a case rhetorically. The Sophists saw that a democratic public had to be an educated public.

Socrates and Plato came from more of what we’d call an aristocratic tradition. They were an enlightened elite, born into wealth, luxury, and privilege. This put them in opposition to the emerging democratic market of ideas. The Sophists were seen as mercenary philosophers who would teach or do anything for money. Socrates didn’t accept money from his students, but then again he was independently wealthy (in that he didn’t have to work, because slaves did the work for him). He wanted pure philosophy, unadulterated by coarse human realities such as making a living and democratic politics.

It’s not that Socrates and Plato were necessarily wrong. Sophists were a diverse bunch, some using their talents for the public good and others not so much. They were simply the well educated members of the perceived meritocracy who offered their expertise in exchange for payment. It seems like a rather normal thing to do in a capitalist society such as ours, but back then a market system was a newfangled notion that seemed radically destabilizing to the social order. Socrates and Plato were basically the reactionaries of their day, nostalgically longing for what they imagined was being lost. Yet they were helping to create an entirely new society, wresting it from the control and authority of tradition. Plato offered a radical utopian vision precisely because he was a reactionary, in terms of how the reactionary is explained by Corey Robin.

Socrates and Plato were challenging the world they were born into. Like all reactionaries, they had no genuine interest in a conservative-minded defense of the status quo. It would take centuries for their influence to grow so large as to become a tradition of its own. Even then, they laid the groundwork for future radicalism during the Renaissance, Protestant Reformation, and Enlightenment Age. Platonic idealism is the seed of modern idealism. What was reactionary in classical Greece fed into a progressive impulse about two millennia later, the anti-democratic leading to the eventual return of democratization. The fight against ‘bullshit’ became the engine of change that overthrew the European ancien régime of Catholicism, feudalism, aristocracy, and monarchy. Utopian visions such as that of Plato’s Republic became increasingly common.

Thinking along these lines brought to mind a recent post of mine, Poised on a Knife Edge. I was once again considering the significance of the ‘great debate’ between Edmund Burke and Thomas Paine. It was Paine who was more the inheritor of Greek idealism, but unlike some of the early Greek idealists he was very much putting idealism in service of democracy, not some utopian vision above and beyond the messiness of public politics. It occurred to me that, to Paine and his allies, Burke’s attack on the French Revolution was ‘bullshit’. The wardrobe of the moral imagination was deemed rhetorical obfuscation, a refusal of the plain speech and plain honest truth favored by Paine (and by Socrates).

Let me explain why this matters. As I began reading Frankfurt’s “On Bullshit”, I was naturally pulled into the view presented. Pretty much everyone hates bullshit. But I considered a different possible explanation for this. Maybe bullshit isn’t more common than before. Maybe it’s even less common in some sense. It’s just that, as a society that idealizes truth, the category of bullshit represents something no longer respected or understood. We’ve lost touch with something within our own human nature. Our hyper-sensitivity in seeing bullshit everywhere, almost a paranoia, is an indication of this.

As much as I love Paine and his vision, I have to give credit where it is due by acknowledging that Burke managed to catch hold of a different kind of truth, a very human truth. He warned us about treading cautiously on the sacred ground of the moral imagination. On this point, I think he was right. We are too careless.

Frankfurt talks about the ‘bullshit artist’. Bullshitters are always artists. And maybe artists are always bullshitters. The imagination, moral or otherwise, is the playground of the bullshitter, and the artist, the master of imagination, is different from the craftsman. The artist always has a bit of the trickster about him, as he plays at the boundaries of the mind. Here is how Frankfurt explains it:

“Wittgenstein once said that the following bit of verse by Longfellow could serve him as a motto:

“In the elder days of art
Builders wrought with greatest care
Each minute and unseen part,
For the Gods are everywhere.

“The point of these lines is clear. In the old days, craftsmen did not cut corners. They worked carefully, and they took care with every aspect of their work. Every part of the product was considered, and each was designed and made to be exactly as it should be. These craftsmen did not relax their thoughtful self-discipline even with respect to features of their work which would ordinarily not be visible. Although no one would notice if those features were not quite right, the craftsmen would be bothered by their consciences. So nothing was swept under the rug. Or, one might perhaps also say, there was no bullshit.

“It does seem fitting to construe carelessly made, shoddy goods as in some way analogues of bullshit. But in what way? Is the resemblance that bullshit itself is invariably produced in a careless or self-indulgent manner, that it is never finely crafted, that in the making of it there is never the meticulously attentive concern with detail to which Longfellow alludes? Is the bullshitter by his very nature a mindless slob? Is his product necessarily messy or unrefined? The word shit does, to be sure, suggest this. Excrement is not designed or crafted at all; it is merely emitted, or dumped. It may have a more or less coherent shape, or it may not, but it is in any case certainly not wrought.

“The notion of carefully wrought bullshit involves, then, a certain inner strain. Thoughtful attention to detail requires discipline and objectivity. It entails accepting standards and limitations that forbid the indulgence of impulse or whim. It is this selflessness that, in connection with bullshit, strikes us as inapposite. But in fact it is not out of the question at all.”

This is logos vs mythos. In religious terms, it is the One True God who creates ex nihilo vs the demiurgic god of this world. And in Platonic terms, it is the ideal forms vs concrete substance, where the latter is a pale imitation of the former. As such, truth is unique whereas bullshit is endless. The philosopher and the poet represent opposing forces. To the philosopher, everything is either philosophically relevant or bullshit. But to the poet (and his kin), this misses the point and overlooks the essence of our humanity. Each side makes sense according to its own perspective. And so each side is correct about what is wrong with the other side.

If all bullshit was eliminated and all further bullshit made impossible, what would be left of our humanity? Maybe our very definition of truth is dependent on bullshit, both as a contrast and an impetus. Without bullshit, we might no longer be able to imagine new truths. But such imagination, if not serving greater understanding, is of uncertain value and potentially dangerous to society. For good or ill, the philosopher, sometimes obtuse and detached, and the artist, sometimes full of bullshit, are the twin representatives of civilization as we know it.

* * *

“I had my tonsils out and was in the Evelyn Nursing Home feeling sorry for myself. Wittgenstein called.”
by Ann Althouse

Short of Lying
by Heinz Brandenburg

Bullshit as the Absence of Truthfulness
by Michael R. Kelly

Democracy is not a truth machine
by Thomas R. Wells

Our ability as individuals to get to true facts merely by considering different arguments is distinctly limited. If we only know of one account of the holocaust – what we were taught in school – we are likely to accept it. But whether it is true or false is a matter of luck rather than our intellectual capacities. Now it is reasonable to suppose that if we were exposed to a diversity of claims about the holocaust then our opinions on the subject would become more clearly our own, and our own responsibility. They would be the product of our own intellectual capacities and character instead of simply reflecting which society we happened to be born into. But so what? Holding sincere opinions about whether the holocaust happened is all very well and Millian, but it has no necessary relation to their truth. As Harry Frankfurt notes in his philosophical essay On Bullshit, sincerity is concerned with being true to oneself, not to the nature of the world: from the perspective of truth seeking, sincerity is bullshit.

Knowing this, we can have no faith that the popularity of certain factual claims among people as ordinary as ourselves is any guide to their truth. Democracy is no more equipped to evaluate facts than rational truths. We can all, of course, hold opinions about the civilisational significance of the holocaust and its status as a justification for the state of Israel, and debate them with others in democratic ways. Yet, when it comes to the facts, neither the sincerity with which individuals believe that ‘the holocaust’ is a myth nor the popularity of such beliefs can make them epistemically respectable. 90% of the population denying the holocaust is irrelevant to its truth status. And vice versa.

Rhetoric and Bullshit
by James Fredal

Frankfurt is also indebted (indirectly) to Plato: Phaedrus is as much about the bullshitter’s (Lysias’s or the non-lover’s) lack of concern for (or “love” for) the truth as is Frankfurt’s brief tome. From the perspective of Plato, Lysias’s speech in praise of the non-lover is just so much bullshit not simply because it is not true, but because Lysias is not concerned with telling the truth so much as he is with gaining the affection and attention of his audience: the beloved boy, the paying student or, more to the point, that lover of speeches, Phaedrus himself.

The non-lover described by Lysias in Phaedrus is best understood as Plato’s allegory for sophists who reject any “natural” truth and who remain committed to contradictory arguments as the practical consequence of their general agnosticism. For Lysias’s non-lover, language is not for telling the truth, because the truth is inaccessible: language is for finding and strengthening positions, for gaining advantage, and for exerting influence over others. Richard Weaver offers a similar reading of Phaedrus that sees the non-lover as representing an attitude toward language use (though for Weaver the non-lover is not a sophist, but a scientist).

Others interested in the bullshitter apply a different, more favorable lens. Daniel Mears, for example, draws on Chandra Mukerji’s study of bullshit among hitchhikers, and more generally on Erving Goffman’s study of self-presentation in the interaction order (for example, “Role Distance” and Interaction Rituals) to highlight bullshit as a form of impression management: what, as Mears notes, Suzanne Eggins and Diana Slade call a “framing device” for the “construction and maintenance of our social identities and social relationships” (qtd. in Mears 279). For Mears, bullshit is the deliberate (albeit playful) creation of possible but ultimately misleading impressions of self or reality, whether for expressive or instrumental reasons (4).

Like Frankfurt, Mears locates the source of bullshit in the speaker herself and her desire to craft a creditable self-image. But whereas Frankfurt sees bullshitting as a species of deception worse than lying (because at least liars have to know the truth if only to lead us away from it, whereas bullshitters have no concern at all for the truth), Mears understands bullshit as a significant social phenomenon that serves several prosocial functions.7 For Mears, we engage in bullshit for purposes of socialization and play, for self-exploration and self-expression, for the resolution of social tensions and cognitive dissonance, and for gaining an advantage in encounters.

Like Mukerji, Mears emphasizes the playful (though often nontrivial and highly consequential) quality of bullshit, much as the ancient sophists composed speeches as “play”: as exercises and exempla, for enjoyment, for display and impression management, and for study separate from the “real world” of politics and law.

Rhetoric Is Not Bullshit
by David J. Tietge
from Bullshit and Philosophy
Kindle Locations 3917-4003

The Truth about Postmodernism

One issue that helps obscure the universality of rhetoric, and thus promotes the pejorative use of ‘rhetoric’, is the popular tendency to oversimplify the “truth-lie” dichotomy. In The Liar’s Tale: A History of Falsehood, Jeremy Campbell reminds us that the reductionistic binary that separates truth from falsity is not only in error, but also that the thoroughly unclear and inconsistent distinction between the true and the false has a long, rich cultural history.180 Those doing much of the speaking in our own era, however, assume that the dividing line between truth and untruth is clear and, more significantly, internalized by the average human. Truth, however, is an elusive concept. While we can cite many examples of truths (that the sky is blue today, that the spoon will fall if dropped, and so forth), these depend on definitions of the words used. The sky is blue because ‘blue’ is the word we use to describe the hue that we have collectively agreed is bluish. We may, however, disagree about what shade of blue the sky is. Is it powder blue? Blue-green? Royal Blue? Interpretive responses to external realities that rely on definition (and language generally) always complicate the true-false binary, especially when we begin to discuss the nature of abstractions involved in, say, religion or metaphysics. The truth of ‘God is good’ depends very heavily upon the speaker’s understanding of God and the nature of goodness, both of which depend upon the speaker’s conceptualization, which may be unique to him, his group, or his cultural environment, and thus neither clear nor truthful to other parties.

Is this rampant relativism? Some might think so, but it is perhaps more useful to suggest that the Absolute Truths that we usually embrace are unattainable because of these complexities of language. Some cultures have seen the linguistic limitations of specifying the Truth. Hinduism has long recognized that language is incapable of revealing Truth; to utter the Truth, it holds, is simultaneously to make it no longer the Truth.

Note here the distinction between capital ‘T’ truth and lower-case ‘t’ truth. Lower-case truths are situational, even personal. They often reflect more the state of mind of the agent making the utterance than the immutable nature of the truth. They are also temporally situated; what may be true now may not be in the future. Truth in this sense is predicated on both perception and stability, and, pragmatically speaking, such truths are transitional and, often, relative. Capital ‘T’ Truths can be traced back at least as far as Plato, and are immutable, pure, and incorruptible. They do not exist in our worldly realm, at least so far as Plato was concerned. This is why Plato was so scornful of rhetoric: he felt that rhetoricians (in particular, the Sophists) were opportunists who taught people how to disguise the Truth with language and persuasion. Whereas Plato imagined a realm in which the worldly flaws and corruption of a physical existence were supplanted by perfect forms, the corporeal domain of human activity was saturated with language, and therefore, could not be trusted to reveal Truth with any certainty.

Contemporary, postmodern interest in truth and meaning turns the tables on Plato and studies meaning and truth in this shifting, less certain domain of human activity. Campbell cites many thinkers from our philosophical past who helped inaugurate this development, but none is more important than Friedrich Nietzsche. For Nietzsche, humans have no “organ” for discerning Truth, but we do have a natural instinct for falsehood. “Truth,” as an abstraction taken from the subjectivity of normal human activities, was a manufactured fiction that we are not equipped to actually find. On the other hand, a natural aptitude for falsehood is an important survival mechanism for many species. Human beings have simply cultivated it in innovative, sophisticated, ways. As the rhetorician George A. Kennedy has noted, “in daily life, many human speech acts are not consciously intentional; they are automatic reactions to situations, culturally (rather than genetically) imprinted in the brain or rising from the subconscious.”181 Our propensity for appropriate (if not truthful) responses to situations is something nourished by an instinct to survive, interact, protect, and socialize. Civilization gives us as many new ways to do this as there are situations that require response.

This is why Nietzsche carefully distinguished Truth from a belief system that only professed to contain the Truth. Ken Gemes notes that Nietzsche co-ordinated the question of Truth around the pragmatics of survival,182 an observation echoed by Kennedy, who provides examples of animals that deceive for self-preservation. Camouflage, for example, can be seen in plants and animals. Many birds imitate the calls of rival species to fool them to distraction and away from their nests or food sources. Deception, it seems, is common in nature. But Nietzsche took doctrinal Truth (note the “T”) to be one of the most insidious deceptions to occur in human culture, especially as it is articulated in religions. It is not a basic lie that is being promulgated, but rather a lie masquerading as the Truth and, according to Nietzsche, performing certain functions. Truth, that is, is a ritualized fiction, a condition manufactured for institutions and the individuals who control them to maintain their power.

Rhetoric and Bullshit

Truth, deception, control over others. This survey of rhetoric thus brings us close to the territory that Harry Frankfurt explores in On Bullshit. For Frankfurt, however, bullshit has little to do with these complexities about truth and Truth that rhetoric helps us identify. Indeed bullshit, for Frankfurt, has little to do with truth at all, insofar as it requires an indifference to truth. Does this mean, then, that language that is not bullshit has settled the matter of truth and has access to truth (or Truth)? Does this lead us to a dichotomy between truth and bullshit that is similar to the dichotomy between truth and falsity that postmodernism criticizes? It may seem that postmodernism has little place in Frankfurt’s view, insofar as he rejects “various forms of skepticism which deny that we have any reliable access to objective reality, and which therefore reject the possibility of knowing how things truly are” (p. 64). Indeed, postmodernism is often vilified as the poster child of relativism and skepticism.

Yet postmodernism is far subtler than a mere denial of “objective reality.” Postmodernism claims, rather, that reality is as much a construct of language as it is objective and unchanging. Postmodernism is less about rejecting beliefs about objective reality than about the intersection between material reality and the human interpretations of it that change, mutate, and shift that reality to our own purposes—the kind of small-t truths that Nietzsche addressed. The common complaint about post-modernism, for example, that it denies “natural laws,” forgets that humans noticed and formulated those laws. Postmodernism attempts to supply a vocabulary to describe this kind of process. It is not just “jargon,” as is so often charged; it is an effort to construct a metalinguistic lexicon for dealing with some very difficult and important epistemological questions.

And, not surprisingly, so is rhetoric. Constructing language that deals with the nature of language is a unique human problem. It is meta-cognition at its most complicated because it requires us to use the same apparatus to decode human texts that is contained in the texts themselves—that is, using words to talk about words, what Kenneth Burke referred to in The Rhetoric of Religion as “logology.”183 In no other area of human thinking is this really the case. Most forms of intellectual exploration involve an extraneous phenomenon, event, agent, or object that requires us to bring language to bear upon it in order to observe, describe, classify, and draw conclusions about its nature, its behavior, or its effect. For example, scientific inquiry usually involves an event or a process in the material world that is separate from the instruments we use to describe it. Historical analysis deals with texts as a matter of disciplinary course, yet most historians rarely question the efficacy or the reliability of the language used to convey an event of the remote (or, for that matter, recent) past. Even linguistics, which uses a scientific model to describe language structure, deals little with meaning or textual analysis.

Law is one of the closest cousins of rhetoric. Words are very much a part of the ebb and flow of legal wrangling, and the attention given to meaning and interpretation is central. Yet, even here, there is little theoretical discussion about how words have meaning or how, based on such theory, that meaning can be variously interpreted. Law is more concerned with the fact that words can be interpreted differently and how different agents might interpret language in different ways. This is why legal documents are often so unreadable; in an attempt to control ambiguity, more words (and more words with specific, technical meanings) must be used so that multiple interpretations can be avoided. If theoretical discussions about how language generates meaning were entered into the equation, the law would be impossible to apply in any practical way. Yet, to understand legal intricacies, every law student should be exposed to rhetoric—not so they can better learn how to manipulate a jury or falsify an important document, but so they understand how tenuous and limited language actually is for dealing with ordinary situations. Moreover, nearly every disciplinary area of inquiry uses language, but only rhetoric (and its associated disciplines, especially philosophy of language and literary/cultural criticism, which have influenced the development of modern rhetoric considerably) analyzes language using a hermeneutical instrument designed to penetrate the words to examine their effects—desired or not—on the people who use them.

What, then, qualifies as “bullshit”? Certainly, as I hope I have shown, rhetoric and bullshit are hardly the same thing. They are not even distant cousins. When a student begins a paper with the sentence, “In today’s society, there are many things that people have different and similar opinions about,” it’s a pretty good guess that there is little of rhetorical value there. About the only conclusion a reader can draw is that the student is neither inspired nor able to hide this fact. This is the extent of the subtext, and it could conceivably qualify as bullshit. In this sense, Frankfurt’s characterization of bullshit as “unavoidable whenever circumstances require someone to talk without knowing what he is talking about” (p. 63) is a useful differentiation.

But aside from these rather artificial instances, if bullshit does occur at the rate Frankfurt suggests, we have an arduous task in separating the bullshit from more interesting and worthy rhetorical situations. We have all met people who, almost from the moment of acquaintance, we know are full of bullshit. It is the salesman syndrome that some people just (naturally, it seems) possess. In one sense, then, poor rhetoric—a rhetoric of transparency or obviousness—can be construed as bullshit. For the person with salesman syndrome is certainly attempting to achieve identification with his audience; he may even be attempting to persuade others that he is upright or trustworthy. But he fails because his bullshit is apparent. He is a bad rhetorician in the sense that he fails to convince others that he should be taken seriously, that his words are worthy of attention and, possibly, action.

Bullshit is something we can all recognize. Rhetoric is not. My remedy for this situation is simple: learn rhetoric.

 

The Sociology of Intellectual Life
by Steve Fuller
pp. 147-8

Harry Frankfurt’s (2005) On Bullshit is the latest contribution to a long, distinguished, yet deeply problematic line of Western thought that has attempted to redeem the idea of intellectual integrity from the cynic’s suspicion that it is nothing but high-minded, self-serving prejudice. I say ‘problematic’ because while Plato’s unflattering portrayal of poets and sophists arguably marked the opening salvo in the philosophical war against bullshit, Plato availed himself of bullshit in promoting the ‘myth of the metals’ as a principle of social stratification in his Republic. This doublethink has not been lost on the neo-conservative followers of the great twentieth-century Platonist Leo Strauss. […]

The bullshit detector aims to convert an epistemic attitude into a moral virtue: reality can be known only by the right sort of person. This idea, while meeting with widespread approval by philosophers strongly tied to the classical tradition of Plato and Aristotle, is not lacking in dissenters. The line of dissent is best seen in the history of ‘rhetoric’, a word Plato coined to demonize Socrates’ dialectical opponents, the sophists. The sophists were prepared to teach anyone the art of winning arguments, provided you could pay the going rate. As a series of sophistic interlocutors tried to make clear to Socrates, possession of the skills required to secure the belief of your audience is the only knowledge you really need to have. Socrates famously attacked this claim on several fronts, which the subsequent history of philosophy has often conflated. In particular, Socrates’ doubts about the reliability of the sophists’ techniques have been run together with a more fundamental criticism: even granting the sophists their skills, they are based on a knowledge of human gullibility, not of reality itself.

Bullshit is sophistry under this charitable reading, which acknowledges that the truth may not be strong enough by itself to counteract an artfully presented claim that is not so much outright false as, in the British idiom, ‘economical with the truth’. In stressing the difference between bullshit and lies, Frankfurt clearly has this conception in mind, though he does sophistry a disservice by casting the bullshitter’s attitude toward the truth as ‘indifference’. On the contrary, the accomplished bullshitter must be a keen student of what people tend to regard as true, if only to cater to those tendencies so as to serve her own ends. What likely offends Frankfurt and other philosophers here is the idea that the truth is just one more tool to be manipulated for personal advantage. Conceptual frameworks are simply entertained and then discarded as their utility passes. The nature of the offence, I suspect, is the divine eye-view implicated in such an attitude – the very idea that one could treat in a detached fashion the terms in which people normally negotiate their relationship to reality. A bullshitter revealed becomes a god unmade.

pp. 152-3

The bullshit detector believes not only that there is a truth but also that her own access to it is sufficiently reliable and general to serve as a standard by which others may be held accountable. Protestants appeared prepared to accept the former but not the latter condition, which is why dissenters were encouraged – or perhaps ostracized – to establish their own ministries. The sophists appeared to deny the former and possibly the latter condition as well. Both Protestants and sophists are prime candidates for the spread of bullshit because they concede that we may normally address reality in terms it does not recognize – or at least do not require it to yield straight ‘yes-or-no’, ‘true-or-false’ answers. In that case, we must make up the difference between the obliqueness of our inquiries and the obtuseness of reality’s responses. That ‘difference’ is fairly seen as bullshit. When crystallized as a philosophy of mind or philosophy of language, this attitude is known as antirealism. Its opposite number, the background philosophy of bullshit detectors, is realism.

The difference in the spirit of the two philosophies is captured as follows: do you believe that everything you say and hear is bullshit unless you have some way of showing whether it is true or false; or rather, that everything said and heard is simply true or false, unless it is revealed to be bullshit? The former is the antirealist, the latter the realist response. Seen in those terms, we might say that the antirealist regards reality as inherently risky and always under construction (Caveat credor: ‘Let the believer beware!’) whereas the realist treats reality as, on the whole, stable and orderly – except for the reprobates who try to circumvent the system by producing bullshit. In this respect, On Bullshit may be usefully read as an ad hominem attack on antirealists. Frankfurt himself makes passing reference to this interpretation near the end of the essay (Frankfurt 2005: 64–65). Yet, he appears happy to promote the vulgar image of antirealism as intellectually, and perhaps morally, slipshod, instead of treating it as the philosophically honorable position that it is.

A case in point is Frankfurt’s presentation of Wittgenstein as one of history’s great bullshit detectors (Frankfurt 2005: 24–34). He offers a telling anecdote in which the Viennese philosopher objects to Fania Pascal’s self-description as having been ‘sick as a dog’. Wittgenstein reportedly told Pascal that she misused language by capitalizing on the hearer’s easy conflation of a literal falsehood with a genuine condition, which is made possible by the hearer’s default anthropocentric bias. Wittgenstein’s objection boils down to claiming that, outside clearly marked poetic contexts, our intellectual end never suffices alone to justify our linguistic means. Frankfurt treats this point as a timeless truth about how language structures reality. Yet, it would be quite easy, especially recalling that this ‘truth’ was uttered seventy years ago, to conclude that Wittgenstein’s irritation betrays a spectacular lack of imagination in the guise of scrupulousness.

Wittgenstein’s harsh judgement presupposes that humans lack any real access to canine psychology, which renders any appeal to dogs purely fanciful. For him, this lack of access is an established fact inscribed in a literal use of language, not an open question, answers to which a figurative use of language might offer clues for further investigation. Nevertheless, scientists informed by the Neo-Darwinian synthesis – which was being forged just at the time of Wittgenstein’s pronouncement – have quite arguably narrowed the gap between the mental lives of humans and animals in research associated with ‘evolutionary psychology’. As this research makes more headway, what Wittgenstein confidently declared to be bullshit in his day may tomorrow appear as having been a prescient truth. But anyone holding such a fluid view of verifiability would derive scant comfort from either Wittgenstein or Frankfurt, who act as if English linguistic intuitions, circa 1935, should count indefinitely as demonstrable truths.

Some philosophers given to bullshit detection are so used to treating any Wittgensteinian utterance as a profundity that it never occurs to them that Wittgenstein may have been himself a grandmaster of bullshit. The great bullshit detectors whom I originally invoked, Nietzsche and Mencken, made themselves vulnerable to critics by speaking from their own self-authorizing standpoint, which supposedly afforded a clear vista for distinguishing bullshit from its opposite. Wittgenstein adopts the classic bullshitter’s technique of ventriloquism, speaking through the authority of someone or something else in order to be spared the full brunt of criticism.

I use ‘adopts’ advisedly, since the deliberateness of Wittgenstein’s rhetoric remains unclear. What was he trying to do: to speak modestly without ever having quite controlled his spontaneously haughty manner, or to exercise his self-regarding superiority as gently as possible so as not to frighten the benighted? Either way, Wittgenstein became – for a certain kind of philosopher – the standard-bearer of linguistic rectitude, where ‘language’ is treated as a proxy for reality itself. Of course, to the bullshitter, this description also fits someone whose strong personality cowed the impressionable into distrusting their own thought processes. As with most successful bullshit, the trick is revealed only after it has had the desired effect and the frame of reference has changed. Thus, Wittgenstein’s precious concern about Pascal’s account of her state of health should strike at least some readers today as akin to a priest’s fretting over a parishioner’s confession of impure thoughts. In each case, the latter is struck by something that lies outside the box in which the former continues to think.

If Wittgenstein was a bullshitter, how did he manage to take in professed enemies of bullshit like Frankfurt? One clue is that most bullshit is forward-looking, and Wittgenstein’s wasn’t. The bullshitter normally refers to things whose prima facie plausibility immunizes the hearer against checking their actual validity. The implication is that the proof is simply ‘out there’ waiting to be found. But is there really such proof? Here the bullshitter is in a race against time. A sufficient delay in checking sources has salvaged the competence and even promoted the prescience of many bullshitters. Such was the spirit of Paul Feyerabend’s (1975) notorious account of Galileo’s ‘discoveries’, which concluded that his Papal Inquisitors were originally justified in their scepticism, even though Galileo’s followers subsequently redeemed his epistemic promissory notes.

In contrast, Wittgenstein’s unique brand of bullshit was backward-looking, always reminding hearers and readers of something they should already know but had perhaps temporarily forgotten. Since Wittgenstein usually confronted his interlocutors with mundane examples, it was relatively easy to convey this impression. The trick lay in immediately shifting the context from the case at hand to what Oxford philosophers in the 1950s called a ‘paradigm case’ that was presented as a self-evident standard of usage against which to judge the case at hand. That Wittgenstein, a non-native speaker of English, impressed one or two generations of Britain’s philosophical elite with just this mode of argumentation remains the envy of the aspiring bullshitter. Ernest Gellner (1959), another émigré from the old Austro-Hungarian Empire, ended up ostracized from the British philosophical establishment for offering a cutting diagnosis of this phenomenon as it was unfolding. He suggested that Wittgenstein’s success testified to his ability to feed off British class anxiety, which was most clearly marked in language use. An academically sublimated form of such language-driven class anxiety remains in the discipline of sociolinguistics (Bernstein 1971–77).

Yet, after nearly a half-century, Gellner’s diagnosis is resisted, despite the palpable weakening of Wittgenstein’s posthumous grip on the philosophical imagination. One reason is that so many living philosophers still ride on Wittgenstein’s authority – if not his mannerisms – that to declare him a bullshitter would amount to career suicide. But a second reason is also operative, one that functions as an insurance policy against future debunkers. Wittgenstein is often portrayed, by himself and others, as mentally unbalanced. You might think that this would render his philosophical deliverances unreliable. On the contrary, Wittgenstein’s erratic disposition is offered as evidence for his spontaneously guileless nature – quite unlike the controlled and calculated character of bullshitters. Bullshit fails to stick to Wittgenstein because he is regarded as an idiot savant.

Democratic Republicanism in Early America

There was much debate and confusion around various terms in early America.

The word ‘democracy’ wasn’t used on a regular basis at the time of the American Revolution, even as the ideal of it was very much in the air. Instead, most people then used the word ‘republic’ to refer to democracy. Some of the founding fathers, such as Thomas Paine, avoided such confusion altogether by speaking directly of ‘democracy’. Thomas Jefferson, the author of the first founding document and third president, formed a political party with both ‘democratic’ and ‘republican’ in its name, demonstrating that he saw no conflict between the two terms.

The reason ‘democracy’ doesn’t come up in founding documents is that the word is too specific, although it gets alluded to when speaking of “the People”, since democracy is literally “people power”. Jefferson, in writing the Declaration of Independence, was particularly clever in avoiding most language that evoked meaning that was too ideologically singular and obvious (e.g., he effectively used rhetoric to avoid the divisive debates for and against belief in natural law). That is because the founding documents were meant to unite a diverse group of people with diverse opinions. Such a vague and ambiguous word as ‘republic’ could mean almost anything to anyone and so was an easy way to paper over disagreements and differing visions. If more specific language had been used that made absolutely clear what they were actually talking about, it would have led to endless conflict, dooming the American experiment from the start.

Yet it was obvious from pamphlets and letters that many American founders and revolutionaries wanted democracy, in whole or part, to the degree they had any understanding of it. Some preferred a civic democracy with some basic social democratic elements and civil rights, while others (mostly Anti-Federalists) pushed for more directly democratic forms of self-governance. The first American constitution, the Articles of Confederation, was clearly a democratic document with self-governance greatly emphasized. Even among those who were wary of democracy and spoke out against it, they nonetheless regularly used democratic rhetoric (invoking democratic ideals, principles, and values) because democracy was a major reason why so many fought the revolution in the first place. If not for democracy, there was little justification for and relevance in starting a new country, beyond a self-serving power grab by a new ruling elite.

Without assuming that a large number of those early Americans had democracy in mind, their speaking of a republic makes no sense. And that is a genuine possibility for at least some of them, as they weren’t always clear in their own minds about what they did and didn’t mean. To be technical (according to even the common understanding from the 1700s), a country is either a democratic republic or a non-democratic republic. The variety of non-democratic republics would include what today we’d call theocracy, fascism, communism, etc. It is a bit uncertain exactly what kind of republic various early Americans envisioned, but one thing is certain: there was immense overlap and conflation between democracy and republicanism in the early American mind. This was the battleground of the fight between Federalists and Anti-Federalists (or, to be more accurate, between pseudo-Federalists and real Federalists).

As a label, stating that something is a republic says nothing at all about what kind of government it is. It says only what a government isn’t: a monarchy. Yet there were even those who argued for a republican monarchy with an elective king, a still more confused idea, in which the king would theoretically serve the citizenry that democratically elected him. Even some of the Federalists talked about this possibility of a republic with elements of a monarchy, strange as it seems to modern Americans. This is what the Anti-Federalists worried about.

Projecting our modern ideological biases onto the past is the opposite of helpful. The earliest American democrats were, by definition, republicans. And most of the earliest American republicans were heavily influenced by democratic political philosophy, even when they denounced it while co-opting it. There was no way to avoid the democratic promise of the American Revolution and the founding documents. Without that promise, we Americans would still be British. That promise remains, yet unfulfilled. The seed of an ideal is hard to kill once planted.

Still, bright ideals cast dark shadows. And the reactionary authoritarianism of the counter-revolutionaries was a powerful force. It is an enemy we still fight. The revolution never ended.

* * *

Democracy Denied: The Untold Story
by Arthur D. Robbins
Kindle Locations 2862-2929

Fascism has been defined as “an authoritarian political ideology (generally tied to a mass movement) that considers individual and other societal interests inferior to the needs of the state, and seeks to forge a type of national unity, usually based on ethnic, religious, cultural, or racial attributes.”[130] If there is a significant difference between fascism thus defined and the society enunciated in Plato’s Republic,[131] in which the state is supreme and submission to a warrior class is the highest virtue, I fail to detect it.[132] What is noteworthy is that Plato’s Republic is probably the most widely known and widely read of political texts, certainly in the United States, and that the word “republic” has come to be associated with democracy and a wholesome and free way of life in which individual self-expression is a centerpiece.

To further appreciate the difficulty that exists in trying to attach specific meaning to the word “republic,” one need only consult the online encyclopedia Wikipedia.[133] There one will find a long list of republics divided by period and type. As of this writing, there are five listings by period (Antiquity, Middle Ages and Renaissance, Early Modern, 19th Century, and 20th Century and Later), encompassing 90 separate republics covered in Wikipedia. The list of republic types is broken down into eight categories (Unitary Republics, Federal Republics, Confederal Republics, Arab Republics, Islamic Republics, Democratic Republics, Socialist Republics, and People’s Republics), with a total of 226 entries. There is some overlap between the lists, but one is still left with roughly 300 republics—and roughly 300 ideas of what, exactly, constitutes a republic.

One might reasonably wonder what useful meaning the word “republic” can possibly have when applied in such diverse political contexts. The word—from “res publica,” an expression of Roman (i.e., Latin) origin—might indeed apply to the Roman Republic, but how can it have any meaning when applied to ancient Athens, which had a radically different form of government existing in roughly the same time frame, and where res publica would have no meaning whatsoever?

Let us recall what was going on in Rome in the time of the Republic. Defined as the period from the expulsion of the Etruscan kings (509 B.C.) until Julius Caesar’s elevation to dictator for life (44 B.C.),[134] the Roman Republic covered a span of close to five hundred years in which Rome was free of despotism. The title rex was forbidden. Anyone taking on kingly airs might be killed on sight. The state of affairs that prevailed during this period reflects the essence of the word “republic”: a condition—freedom from the tyranny of one-man rule—and not a form of government. In fact, The American Heritage College Dictionary offers the following as its first definition for republic: “A political order not headed by a monarch.”

[…] John Adams (1735–1826), second President of the United States and one of the prime movers behind the U.S. Constitution, wrote a three-volume study of government entitled Defence of the Constitutions of Government of the United States of America (published in 1787), in which he relies on the writings of Cicero as his guide in applying Roman principles to American government.[136] From Cicero he learned the importance of “mixed governments,”[137] that is, governments formed from a mixture of monarchy, aristocracy, and democracy. According to this line of reasoning, a republic is a non-monarchy in which there are monarchic, aristocratic, and democratic elements. For me, this is confusing. Why, if one had just shed blood in unburdening oneself of monarchy, with a full understanding of just how pernicious such a form of government can be, would one then think it wise or desirable to voluntarily incorporate some form of monarchy into one’s new “republican” government? If the word “republic” has any meaning at all, it means freedom from monarchy.

The problem with establishing a republic in the United States was that the word had no fixed meaning to the very people who were attempting to apply it. In Federalist No. 6, Alexander Hamilton says, “Sparta, Athens, Rome and Carthage were all republics” (F.P., No. 6, 57). Of the four mentioned, Rome is probably the only one that even partially qualifies according to Madison’s definition from Federalist No. 10 (noted earlier): “a government in which the scheme of representation takes place,” in which government is delegated “to a small number of citizens elected by the rest” (ibid., No. 10, 81–82).

Madison himself acknowledges that there is a “confounding of a republic with a democracy” and that people apply “to the former reasons drawn from the nature of the latter” (ibid., No. 14, 100). He later points out that were one trying to define “republic” based on existing examples, one would be at a loss to determine the common elements. He then goes on to contrast the governments of Holland, Venice, Poland, and England, all allegedly republics, concluding, “These examples … are nearly as dissimilar to each other as to a genuine republic” and show “the extreme inaccuracy with which the term has been used in political disquisitions” (ibid., No. 39, 241).

Thomas Paine offers a different viewpoint: “What is now called a republic, is not any particular form of government. It is wholly characteristical [sic] of the purport, matter, or object for which government ought to be instituted, and on which it is to be employed, res-publica, the public affairs or the public good” (Paine, 369) (italics in the original). In other words, as Paine sees it, “res-publica” describes the subject matter of government, not its form.

Given all the confusion about the most basic issues relating to the meaning of “republic,” what is one to do? Perhaps the wisest course would be to abandon the term altogether in discussions of government. Let us grant the word has important historical meaning and some rhetorical appeal. “Vive la République!” can certainly mean thank God we are free of the tyranny of one-man, hereditary rule. That surely is the sense the word had in early Rome, in the early days of the United States, and in some if not all of the French and Italian republics. Thus understood, “republic” refers to a condition—freedom from monarchy—not a form of government.

* * *

Roger Williams and American Democracy
US: Republic & Democracy (part two and three)
Democracy: Rhetoric & Reality
Pursuit of Happiness and Consent of the Governed
The Radicalism of The Articles of Confederation
The Vague and Ambiguous US Constitution
Wickedness of Civilization & the Role of Government
Spirit of ’76
A Truly Free People
Nature’s God and American Radicalism
What and who is America?
Thomas Paine and the Promise of America
About The American Crisis No. III
Feeding Strays: Hazlitt on Malthus
Inconsistency of Burkean Conservatism
American Paternalism, Honor and Manhood
Revolutionary Class War: Paine & Washington
Paine, Dickinson and What Was Lost
Betrayal of Democracy by Counterrevolution
Revolutions: American and French (part two)
Failed Revolutions All Around
The Haunted Moral Imagination
“Europe, and not England, is the parent country of America.”
“…from every part of Europe.”

The Fight For Freedom Is the Fight To Exist: Independence and Interdependence
A Vast Experiment
America’s Heartland: Middle Colonies, Mid-Atlantic States and the Midwest
When the Ancient World Was Still a Living Memory

Dark Matter of the Mind

The past half year has been spent in anticipation. Daniel Everett has a new book that finally came out the other day: Dark Matter of the Mind. I was curious to read it because Everett is the newest and best-known challenger to mainstream linguistic theory. This interests me because the debate directly touches upon every aspect of our humanity: human nature (vs. nurture), self-identity, consciousness, cognition, perception, behavior, culture, philosophy, etc.

The leading opponent to Everett’s theory is Noam Chomsky, a well-known and well-respected public intellectual. Chomsky is the founder of the so-called cognitive revolution — not that Everett sees it as all that revolutionary: “it was not a revolution in any sense, however popular that narrative has become” (Kindle Location 306). That brings issues of personality, academia, politics, and funding into the conflict. It’s two paradigms clashing, one of which has been dominant for more than half a century.

Now that I’ve been reading the book, I find my response to be mixed. Everett is running headlong into difficult terrain and I must admit he does so competently. He is doing the tough scholarly work that needs to be done. As Bill Benzon explained (at 3 Quarks Daily):

“While the intellectual world is rife with specialized argumentation arrayed around culture and associated concepts (nature, nurture, instinct, learning) these concepts themselves do not have well-defined technical meanings. In fact, I often feel they are destined to go the way of phlogiston, except that, alas, we’ve not yet discovered the oxygen that will allow us to replace them [4]. These concepts are foundational, but the foundation is crumbling. Everett is attempting to clear away the rubble and start anew on cleared ground. That’s what dark matter is, the cleared ground that becomes visible once the rubble has been pushed to the side. Just what we’ll build on it, and how, that’s another question.”

This explanation points to a fundamental problem, if we are to consider it a problem. Earlier in the piece, Benzon wrote that, “OK, I get it, I think, you say, but this dark matter stuff is so vague and metaphorical. You’re right. And it remains that way to the end of the book. And that, I suppose, is my major criticism, though it’s a minor one. “Dark matter” does a lot of conceptual work for Everett, but he discusses it indirectly.” Basically, Everett struggles with a limited framework of terminology and concepts. But that isn’t entirely his fault. It’s not exactly new territory that Everett discovered, just not yet fully explored and mapped out. The main thing he did, in his earliest work, was to bring up evidence that simply did not fit into prevailing theories. And now in a book like this he is trying to make sense of what that evidence indicates and what theory better explains it.

It would have been useful if Everett had been able to give a fuller survey of the relevant scholarship. But if he had, it would have been a larger and more academic book. It is already difficult enough for most readers not familiar with the topic. Besides, I suspect that Everett was pushing against the boundaries of his own knowledge and readings. It was easy for me to see everything that was left out, in relation to numerous other fields beyond his focus of linguistics and anthropology — such as neurocognitive research, consciousness studies, classical studies of ancient texts, voice-hearing and mental health, etc.

The book sometimes felt like reinventing the wheel. Everett’s expertise is in linguistics, and apparently that has been an insular field of study defended by a powerful and entrenched academic establishment. My sense is that linguistics is far behind in development, compared to many other fields. The paradigm shift that is just now happening in linguistics has been for decades creating seismic shifts elsewhere in academia. Some argue that this is because linguistics became enmeshed in Pentagon-funded computer research and so has had a hard time disentangling itself in order to become an independent field once again. Chomsky, as leader of the cognitive revolution, has effectively dissuaded a generation of linguists from doing social science, instead promoting the hard sciences, a problematic position to hold about a rather soft field like linguistics. As anthropologist Chris Knight explains it, in Decoding Chomsky (Chapter 1):

“[O]ne bedrock assumption underlies his work. If you want to be a scientist, Chomsky advises, restrict your efforts to natural science. Social science is mostly fraud. In fact, there is no such thing as social science.[49] As Chomsky asks: ‘Is there anything in the social sciences that even merits the term “theory”? That is, some explanatory system involving hidden structures with non-trivial principles that provide understanding of phenomena? If so, I’ve missed it.’[50]

“So how is it that Chomsky himself is able to break the mould? What special factor permits him to develop insights which do merit the term ‘theory’? In his view, ‘the area of human language . . . is one of the very few areas of complex human functioning’ in which theoretical work is possible.[51] The explanation is simple: language as he defines it is neither social nor cultural, but purely individual and natural. Provided you acknowledge this, you can develop theories about hidden structures – proceeding as in any other natural science. Whatever else has changed over the years, this fundamental assumption has not.”

This makes Everett’s job harder than it should be, in breaking new ground in linguistics and in trying to connect it to the work already done elsewhere, most often in the social sciences. As humans are complex social animals living in a complex world, it is bizarre and plainly counterproductive to study humans in the way one studies a hard science like geology. Humans aren’t isolated biological computers that can operate outside of the larger context of specific cultures and environments. But Chomsky simply assumes all of that is irrelevant on principle. Field research of actual functioning languages, as Everett has done, can be dismissed because it is mere social science. One can sense how difficult it is for Everett in struggling against this dominant paradigm.

Still, even with these limitations of the linguistics field, the book remains a more than worthy read. His using Plato and Aristotle to frame the issue was helpful to an extent, although it also added another variety of limitation. I got a better sense of the conflict of worldviews and how they relate to the larger history of ideas. But in doing so, I became more aware of the problems of that frame, very closely related to the problems of the nature vs nurture debate (for, in reality, nature and nurture are inseparable). He describes linguistic theoreticians like Chomsky as being in the Platonic school of thought. Chomsky surely would agree, as he has already made that connection in his own writings, what he discusses as Plato’s problem and Plato’s answer. Chomsky’s universal grammar is Platonic in nature, for as he has written such “knowledge is ‘remembered’” (“Linguistics, a personal view” from The Chomskyan Turn). This is Plato’s anamnesis and aletheia, an unforgetting of what is true, based on the belief that humans are born with certain kinds of innate knowledge.

That is interesting to think about. But in the end I felt that something was being oversimplified or entirely left out. Everett is arguing against nativism, that there is an inborn predetermined human nature. It’s not so much that he is arguing for a blank slate as he is trying to explain the immense diversity and potential that exists across cultures. But the duality of nativism vs non-nativism lacks the nuance to wrestle down complex realities.

I’m sympathetic to Everett’s view and to his criticisms of the nativist view. But there are cross-cultural patterns that need to be made sense of, even with the exceptions that deviate from those patterns. Dismissing evidence is never satisfying. Along with Chomsky, he throws in the likes of Carl Jung. But the difference between Chomsky and Jung is that the former is an academic devoted to pure theory unsullied by field research while the latter was a practicing psychotherapist who began with the particulars of individual cases. Everett is arguing for a focus on the particulars, upon which to build theory, but that is what Jung did. The criticisms of Chomsky can’t be shifted over to Jung, no matter what one thinks of Jung’s theories.

Part of the problem is that the kind of evidence Jung dealt with remains to be explained. It’s simply a fact that certain repeating patterns are found in human experience, across place and time. That is evidence to be considered, not dismissed, however one wishes to interpret it. Not even most respectable nativist thinkers want to confront this kind of evidence that challenges conventional understandings on all sides. Maybe Jungian theories of archetypes, personality types, etc are incorrect. But how do we study and test such things, going from direct observation to scientific research? And how is the frame of nativism/non-nativism helpful at all?

Maybe there are patterns, not unlike gravity and other natural laws, that are simply native to the world humans inhabit and so might not be entirely or at all native to the human mind, which is to say not in the way that Chomsky makes nativist claims about universal grammar. Rather, these patterns would be native to humans in the way and to the extent humans are native to the world. This could be made to fit into Everett’s own theorizing, as he is attempting to situate the human within larger contexts of culture, environment, and such.

Consider an example from psychedelic studies. It has been found that people under the influence of particular psychedelics often have similar experiences. This is why shamanic cultures speak of psychedelic plants as having spirits that reside within or are expressed through them.

Let me be more specific. DMT is the most common psychedelic in the world, found in numerous plants and even produced in small quantities by the human brain. It’s an example of interspecies co-evolution, plants and humans having chemicals in common. Plants are chemistry factories and they use chemicals for various purposes, including communication with other plants (e.g., chemically warning nearby plants that something is nibbling on their leaves so they can put up their chemical defenses) and communicating with non-plants (e.g., sending out bitter chemicals to help inform the nibbler that they might want to eat elsewhere). Animals didn’t just co-evolve with edible plants but also psychedelic plants. And humans aren’t the only species to imbibe. Maybe chemicals like DMT serve a purpose. And maybe there is a reason so many humans tripping on DMT experience what some describe as self-replicating machine elves or self-transforming fractal elves. Humans have been tripping on DMT for longer than civilization has existed.

DMT is far from being the only psychedelic plant like this. It’s just one of the more common. The reason plant psychedelics do what they do to our brains is because our brains were shaped by evolution to interact with chemicals like this. These chemicals almost seem designed for animal brains, especially DMT which our own brains produce.

That brings up some issues about the whole nativism/non-nativism conflict. Is a common experience many humans have with a psychedelic plant native to humans, native to the plant, or native to the inter-species relationship between human and plant? Where do the machine/fractal elves live, in the plant or in our brain? My tendency is to say that they in some sense ‘exist’ in the relationship between plants and humans, an experiential expression of that relationship, as immaterial and ephemeral as the love felt by two humans. These weird psychedelic beings are a plant-human hybrid, a shared creation of our shared evolution. They are native to our humanity to the extent that we are native to the ecosystems we share with those psychedelic plants.

Other areas of human experience lead down similar strange avenues. Take as another example the observations of Jacques Vallée. When he was a practicing astronomer, he became interested in UFOs as some of his fellow astronomers would destroy rather than investigate anomalous observational data. This led him to look into the UFO field and that led to his studying those claiming alien abduction experiences. What he noted was that the stories told were quite similar to fairy abduction folktales and shamanic accounts of initiation. There seemed to be a shared pattern of experience that was interpreted differently according to culture but that in a large number of cases the basic pattern held.

Or take yet another example. Judith Weissman has noted patterns among the stated experiences of voice-hearers. Another researcher on voice-hearing, Tanya Luhrmann, has studied how voice-hearing both has commonalities and differences across cultures. John Geiger has shown how common voice-hearing can be, even if for most people it is usually only elicited during times of stress. Based on this and the work of others, it is obvious that voice-hearing is a normal capacity existing within all humans. It is actually quite common among children and some theorize it was more common for adults in other societies. Is pointing out the surprisingly common experience of voice-hearing an argument for nativism?

These aspects of our humanity are plain weird. It was the kind of thing that always fascinated Jung. But what do we do with such evidence? It doesn’t prove a universal human nature that is inborn and predetermined. Not everyone has these experiences. But it appears everyone is capable of having these experiences.

This is where mainstream thinking in the field of linguistics shows its limitations. Going by Everett’s descriptions of the Pirahã, it seems likely that voice-hearing is common among them, although they wouldn’t interpret it that way. For them, voice-hearing appears to manifest as full possession and what, to Western outsiders, seems like a shared state of dissociation. It’s odd that as a linguist it didn’t occur to Everett to study the way of speaking of those who were possessed or to think more deeply about the experiential significance of the use of language indicating dissociation. Maybe it was too far outside of his own cultural biases, the same cultural biases that cause many Western voice-hearers to be medicated and institutionalized.

And if we’re going to talk about voice-hearing, we have to bring up Julian Jaynes. Everett probably doesn’t realize it, but his views seem to be in line with the bicameral theory or at least not in explicit contradiction with it on conceptual grounds. He seems to be coming out of the cultural school of thought within anthropology, the same influence on Jaynes. It is precisely Everett’s anthropological field research that distinguishes him from a theoretical linguist like Chomsky who has never formally studied any foreign language nor gone out into the field to test his theories. It was from studying the Pirahã firsthand over many years that the power of culture was impressed upon him. Maybe that is a commonality with Jaynes who began his career doing scientific research, not theorizing.

As I was reading the book, I kept being reminded of Jaynes, despite Everett never mentioning him or related thinkers. It’s largely how he talks about individuals situated in a world and worldview, along with his mentioning of Bourdieu’s habitus. This fits into his emphasis on the culture and nurture side of influences, arguing that people (and languages) are products of their environments. Also, when Everett wrote that his view was there is “nothing to an individual but one’s body” (Kindle Location 328), it occurred to me how this fit into the proposed experience of hypothetical ancient bicameral humans. My thought was confirmed when he stated that his own understanding was most in line with the Buddhist anatman, ‘non-self’. Just a week ago, I wrote the following in reference to Jaynes’ bicameral theory:

“We modern Westerners identify ourselves with our thoughts, the internalized voice of egoic consciousness. And we see this as the greatest prize of civilization, the hard-won rights and freedoms of the heroic individual. It’s the story we tell. But in other societies, such as in the East, there are traditions that teach the self is distinct from thought. From the Buddhist perspective of dependent (co-)origination, it is a much less radical notion that the self arises out of thought, instead of the other way around, and that thought itself simply arises. A Buddhist would have a much easier time intuitively grasping the theory of bicameralism, that thoughts are greater than and precede the self.”

Jaynes considered self-consciousness and self-identity to be products of thought, rather than the other way around. Like Everett’s view, this is an argument against the old Western belief in a human soul that is eternal and immortal, that Platonically precedes individual corporeality. But notions like Chomsky’s universal grammar feel like an attempt to revamp the soul for a scientific era, a universal human nature that precedes any individual, a soul as the spark of God and the divine expressed as a language imprinted on the soul. If I must believe in something existing within me that pre-exists me, then I’d rather go with alien-fairy-elves hiding out in the tangled undergrowth of my neurons.

Anyway, how might Everett’s views of nativism/non-nativism have been different if he had been more familiar with the work of these other researchers and thinkers? The problem is that the nativism/non-nativism framework is itself culturally biased. It’s related to the problem of anthropologists who try to test the color perception of other cultures using tests that are based on Western color perception. Everett’s observations of the Pirahã, by the way, have also challenged that field of study — as he has made the claim that the Pirahã have no color terms and no particular use in discriminating colors. That deals with the relationship of language to cognition and perception. Does language limit our minds? If so, how and to what extent? If not, are we to assume that such things as ‘colors’ are native to how the human brain functions? Would an individual born into and raised in a completely dark room still ‘see’ colors in their mind’s eye?

Maybe the fractal elves produce the colors, consuming the DMT and defecating rainbows. Maybe the alien-fairies abduct us in our sleep and use advanced technology to implant the colors into our brains. Maybe without the fractal elves and alien-fairies, we would finally all be colorblind and our society would be free from racism. Just some alternative theories to consider.

Talking about cultural biases, I was fascinated by some of the details he threw out about the Pirahã, the tribe he had spent the most years studying. He wrote that (Kindle Locations 147-148), “Looking back, I can identify many of the hidden problems it took me years to recognize, problems based in contrasting sets of tacit assumptions held by the Pirahãs and me.” He then lists some of the tacit assumptions held by these people he came to know.

They don’t appear to have any concepts, language, or interest in God or gods, in religion, or anything spiritual/supernatural that wasn’t personally experienced by them or someone they personally know. Their language is very direct and precise about all experience and the source of claims. But they don’t feel like they’re spiritually lost or somehow lacking anything. In fact, Everett describes them as being extremely happy and easygoing, except on the rare occasion when a trader gives them alcohol.

They don’t have any concern or fear about nor do they seek out and talk about death, the dead, ancestral spirits, or the afterlife. They apparently are entirely focused on present experience. They don’t speculate, worry, or even have curiosity about what is outside their experience. Foreign cultures are irrelevant to them, this being an indifference and not hatred of foreigners. It’s just that foreign culture is thought of as good for foreigners, as Pirahã culture is good for the Pirahã. Generally, they seem to lack the standard anxiety that is typical of our society, despite living in and walking around barefoot in one of the most dangerous environments on the planet surrounded by poisonous and deadly creatures. It’s actually malaria that tends to cut their lives short. But they don’t make such comparisons or think of their lives as being cut short.

Their society is based on personal relationships and they “do not like for any individual to tell another individual how to live” (Kindle Locations 149-150). They don’t have governments or, as far as I know, governing councils. They don’t practice social coercion, community-mandated punishments, and enforced norms. They are a very small tribe living in isolation with a way of life that has likely remained basically the same for millennia. Their culture and lifestyle are well-adapted to their environmental niche, and so they don’t tend to encounter many new problems that require them to act differently than in the past. They also don’t practice or comprehend incarceration, torture, capital punishment, mass war, genocide, etc. It’s not that violence never happens in their society, but I get the sense that it’s rare.

In the early years of life, infants and young toddlers live in near constant proximity to their mothers and other adults. They are given near ownership rights of their mothers’ bodies, freely suckling whenever they want without asking permission or being denied. But once weaned, Pirahã are the opposite of coddled. Their mothers simply cut them off from their bodies and the toddlers go through a tantrum period that is ignored by adults. They learn from experience and get little supervision in the process. They quickly become extremely knowledgeable and capable about living in and navigating the world around them. The parents have little fear about their children and it seems to be well-founded, as the children prove themselves able to easily learn self-sufficiency and a willingness to contribute. It reminded me of Jean Liedloff’s continuum concept.

Then, once they become teenagers, they don’t go through a rebellious phase. It seems a smooth transition into adulthood. As he described it in his first book (Don’t Sleep, There Are Snakes, p. 99-100):

“I did not see Pirahã teenagers moping, sleeping in late, refusing to accept responsibility for their own actions, or trying out what they considered to be radically new approaches to life. They in fact are highly productive and conformist members of their community in the Pirahã sense of productivity (good fishermen, contributing generally to the security, food needs, and other aspects of the physical survival of the community). One gets no sense of teenage angst, depression, or insecurity among the Pirahã youth. They do not seem to be searching for answers. They have them. And new questions rarely arise.

“Of course, this homeostasis can stifle creativity and individuality, two important Western values. If one considers cultural evolution to be a good thing, then this may not be something to emulate, since cultural evolution likely requires conflict, angst, and challenge. But if your life is unthreatened (so far as you know) and everyone in your society is satisfied, why would you desire change? How could things be improved? Especially if the outsiders you came into contact with seemed more irritable and less satisfied with life than you. I asked the Pirahãs once during my early missionary years if they knew why I was there. “You are here because this is a beautiful place. The water is pretty. There are good things to eat here. The Pirahãs are nice people.” That was and is the Pirahãs’ perspective. Life is good. Their upbringing, everyone learning early on to pull their own weight, produces a society of satisfied members. That is hard to argue against.”

The most strange and even shocking aspect of Pirahã life is their sexuality. Kids quickly learn about sex. It’s not that people have sex out in the open. But it’s a lifestyle that provides limited privacy. Sexual activity isn’t considered a mere adult activity and children aren’t protected from it. Quite the opposite (Kindle Locations 2736-2745):

“Sexual behavior is another behavior distinguishing Pirahãs from most middle-class Westerners early on. A young Pirahã girl of about five years came up to me once many years ago as I was working and made crude sexual gestures, holding her genitalia and thrusting them at me repeatedly, laughing hysterically the whole time. The people who saw this behavior gave no sign that they were bothered. Just child behavior, like picking your nose or farting. Not worth commenting about.

“But the lesson is not that a child acted in a way that a Western adult might find vulgar. Rather, the lesson, as I looked into this, is that Pirahã children learn a lot more about sex early on, by observation, than most American children. Moreover, their acquisition of carnal knowledge early on is not limited to observation. A man once introduced me to a nine- or ten-year-old girl and presented her as his wife. “But just to play,” he quickly added. Pirahã young people begin to engage sexually, though apparently not in full intercourse, from early on. Touching and being touched seem to be common for Pirahã boys and girls from about seven years of age on. They are all sexually active by puberty, with older men and women frequently initiating younger girls and boys, respectively. There is no evidence that the children then or as adults find this pedophilia the least bit traumatic.”

This seems plain wrong to most Westerners. Then again, to the Pirahã, much of what Westerners do would seem plain wrong or simply incomprehensible. Which is worse, Pirahã pedophilia or Western mass violence and systematic oppression?

What is most odd is that, like death for adults, sexuality for children isn’t considered a traumatizing experience and they don’t act traumatized. It’s apparently not part of their culture to be traumatized. They aren’t a society based on and enmeshed in a worldview of violence, fear, and anxiety. That isn’t how they think about any aspect of their lifeworld. I would assume that, like most tribal people, they don’t have high rates of depression and other mental illnesses. Everett pointed out that in the thirty years he knew the Pirahã there never was a suicide. And when he told them about his stepmother killing herself, they burst out in laughter because it made absolutely no sense to them that someone would take their own life.

That demonstrates the power of culture, environment, and lifestyle. According to Everett, it also demonstrates the power of language, inseparable from the society that shapes and is shaped by it, and demonstrates how little we understand the dark matter of the mind.

* * *

The Amazon’s Pirahã People’s Secret to Happiness: Never Talk of the Past or Future
by Dominique Godrèche, Indian Country

Being Pirahã Means Never Having to Say You’re Sorry
by Christopher Ryan, Psychology Today

The Myth of Teenage Rebellion
by Suzanne Calulu, Patheos

The Suicide Paradox: Full Transcript
from Freakonomics

“Beyond that, there is only awe.”

“What is the meaning of life?” This question has no answer except in the history of how it came to be asked. There is no answer because words have meaning, not life or persons or the universe itself. Our search for certainty rests in our attempts at understanding the history of all individual selves and all civilizations. Beyond that, there is only awe.
~ Julian Jaynes, 1988, Life Magazine

That is always a nice quote. Jaynes never seemed like an ideologue about his own speculations. In his controversial book, more than a decade earlier (1976), he titled his introduction as “The Problem of Consciousness”. That is what frames his thought, confronting a problem. The whole issue of consciousness is still problematic to this day and likely will be so for a long time. After a lengthy analysis of complex issues, he concludes his book with some humbling thoughts:

For what is the nature of this blessing of certainty that science so devoutly demands in its very Jacob-like wrestling with nature? Why should we demand that the universe make itself clear to us? Why do we care?

To be sure, a part of the impulse to science is simple curiosity, to hold the unheld and watch the unwatched. We are all children in the unknown.

Following that, he makes a plea for understanding. Not just understanding of the mind but also of experience. It is a desire to grasp what makes us human, the common impulses that bind us, underlying both religion and science. There is a tender concern being given voice, probably shaped and inspired by his younger self having poured over his deceased father’s Unitarian sermons.

As individuals we are at the mercies of our own collective imperatives. We see over our everyday attentions, our gardens and politics, and children, into the forms of our culture darkly. And our culture is our history. In our attempts to communicate or to persuade or simply interest others, we are using and moving about through cultural models among whose differences we may select, but from whose totality we cannot escape. And it is in this sense of the forms of appeal, of begetting hope or interest or appreciation or praise for ourselves or for our ideas, that our communications are shaped into these historical patterns, these grooves of persuasion which are even in the act of communication an inherent part of what is communicated. And this essay is no exception.

That humility feels genuine. His book was far beyond mere scholarship. It was an expression of decades of questioning and self-questioning, about what it means to be human and what it might have meant for others throughout the millennia.

He never got around to writing another book on the topic, despite his stated plans to do so. But during the last decade of his life, he wrote an afterword to his original work. It was placed in the 1990 edition, fourteen years after the original publication. He had faced much criticism and one senses a tired frustration in those last years. Elsewhere, he complained about the expectation to explain himself and make himself understood to people who, for whatever reason, didn’t understand. Still, he realized that was the nature of his job as an academic scholar working at a major university. In the afterword, he wrote:

A favorite practice of some professional intellectuals when at first faced with a theory as large as the one I have presented is to search for that loose thread which, when pulled, will unravel all the rest. And rightly so. It is part of the discipline of scientific thinking. In any work covering so much of the terrain of human nature and history, hustling into territories jealously guarded by myriad aggressive specialists, there are bound to be such errancies, sometimes of fact but I fear more often of tone. But that the knitting of this book is such that a tug on such a bad stitch will unravel all the rest is more of a hope on the part of the orthodox than a fact in the scientific pursuit of truth. The book is not a single hypothesis.

Interestingly, Jaynes doesn’t state the bicameral mind as an overarching context for the hypotheses he lists. In fact, it is just one among the several hypotheses and not even the first to be mentioned. That shouldn’t be surprising since decades of his thought and research, including laboratory studies done on animal behavior, preceded the formulation of the bicameral hypothesis. Here are the four hypotheses:

  1. Consciousness is based on language.
  2. The bicameral mind.
  3. The dating.
  4. The double brain.

He states that, “I wish to emphasize that these four hypotheses are separable. The last, for example, could be mistaken (at least in the simplified version I have presented) and the others true. The two hemispheres of the brain are not the bicameral mind but its present neurological model. The bicameral mind is an ancient mentality demonstrated in the literature and artifacts of antiquity.” Each hypothesis is connected to the others but must be dealt with separately. The key element to his project is consciousness, as that is the key problem. And as problems go, it is a doozy. Calling it a problem is like calling the moon a chunk of rock and the sun a warm fire.

Related to these hypotheses, earlier in his book, Jaynes proposes a useful framework. He calls it the General Bicameral Paradigm. “By this phrase,” he explains, “I mean an hypothesized structure behind a large class of phenomena of diminished consciousness which I am interpreting as partial holdovers from our earlier mentality.” There are four components:

  1. “the collective cognitive imperative, or belief system, a culturally agreed-on expectancy or prescription which defines the particular form of a phenomenon and the roles to be acted out within that form;”
  2. “an induction or formally ritualized procedure whose function is the narrowing of consciousness by focusing attention on a small range of preoccupations;”
  3. “the trance itself, a response to both the preceding, characterized by a lessening of consciousness or its loss, the diminishing of the analog or its loss, resulting in a role that is accepted, tolerated, or encouraged by the group; and”
  4. “the archaic authorization to which the trance is directed or related to, usually a god, but sometimes a person who is accepted by the individual and his culture as an authority over the individual, and who by the collective cognitive imperative is prescribed to be responsible for controlling the trance state.”

The point is made that the reader shouldn’t assume that they are “to be considered as a temporal succession necessarily, although the induction and trance usually do follow each other. But the cognitive imperative and the archaic authorization pervade the whole thing. Moreover, there is a kind of balance or summation among these elements, such that when one of them is weak the others must be strong for the phenomena to occur. Thus, as through time, particularly in the millennium following the beginning of consciousness, the collective cognitive imperative becomes weaker (that is, the general population tends toward skepticism about the archaic authorization), we find a rising emphasis on and complication of the induction procedures, as well as the trance state itself becoming more profound.”

This general bicameral paradigm is partly based on the insights he gained from studying ancient societies. But ultimately it can be considered separately from that. All you have to understand is that these are a basic set of cognitive abilities and tendencies that have been with humanity for a long time. These are the vestiges of human evolution and societal development. They can be combined and expressed in multiple ways. Our present society is just one of many possible manifestations. Human nature is complex and human potential is immense, and so diversity is to be expected among human neurocognition, behavior, and culture.

An important example of the general bicameral paradigm is hypnosis. It isn’t just an amusing trick done for magic shows. Hypnosis shows something profoundly odd, disturbing even, about the human mind. Also, it goes far beyond the individual for it is about how humans relate. It demonstrates the power of authority figures, in whatever form they take, and indicates the significance of what Jaynes calls authorization. By the way, this leads down the dark pathways of authoritarianism, brainwashing, propaganda, and punishment — as for the latter, Jaynes writes that:

If we can regard punishment in childhood as a way of instilling an enhanced relationship to authority, hence training some of those neurological relationships that were once the bicameral mind, we might expect this to increase hypnotic susceptibility. And this is true. Careful studies show that those who have experienced severe punishment in childhood and come from a disciplined home are more easily hypnotized, while those who were rarely punished or not punished at all tend to be less susceptible to hypnosis.

He discusses the history of hypnosis beginning with Mesmer. In this, he shows how metaphor took different form over time. And, accordingly, it altered shared experience and behavior.

Now it is critical here to realize and to understand what we might call the paraphrandic changes which were going on in the people involved, due to these metaphors. A paraphrand, you will remember, is the projection into a metaphrand of the associations or paraphiers of a metaphier. The metaphrand here is the influences between people. The metaphiers, or what these influences are being compared to, are the inexorable forces of gravitation, magnetism, and electricity. And their paraphiers of absolute compulsions between heavenly bodies, of unstoppable currents from masses of Leyden jars, or of irresistible oceanic tides of magnetism, all these projected back into the metaphrand of interpersonal relationships, actually changing them, changing the psychological nature of the persons involved, immersing them in a sea of uncontrollable control that emanated from the ‘magnetic fluids’ in the doctor’s body, or in objects which had ‘absorbed’ such from him.

It is at least conceivable that what Mesmer was discovering was a different kind of mentality that, given a proper locale, a special education in childhood, a surrounding belief system, and isolation from the rest of us, possibly could have sustained itself as a society not based on ordinary consciousness, where metaphors of energy and irresistible control would assume some of the functions of consciousness.

How is this even possible? As I have mentioned already, I think Mesmer was clumsily stumbling into a new way of engaging that neurological patterning I have called the general bicameral paradigm with its four aspects: collective cognitive imperative, induction, trance, and archaic authorization.

Through authority and authorization, immense power and persuasion can be wielded. Jaynes argues that it is central to the human mind, but that in developing consciousness we learned how to partly internalize the process. Even so, Jaynesian self-consciousness is never a permanent, continuous state and the power of individual self-authorization easily morphs back into external forms. This is far from idle speculation, considering authoritarianism still haunts the modern mind. I might add that the ultimate power of authoritarianism, as Jaynes makes clear, isn’t overt force and brute violence. Outward forms of power are only necessary to the degree that external authorization is relatively weak, as is typically the case in modern societies.

This touches upon the issue of rhetoric, although Jaynes never mentioned the topic. It’s disappointing since his original analysis of metaphor has many implications. Fortunately, others have picked up where he left off (see Ted Remington, Brian J. McVeigh, and Frank J. D’Angelo). Authorization in the ancient world came through a poetic voice, but today it is most commonly heard in rhetoric.

Still, that old-time religion can be heard in the words and rhythm of any great speaker. Just listen to how a recorded speech by Martin Luther King Jr. can pull you in with its musicality. Or, if you prefer a dark example, consider the persuasive power of Adolf Hitler, for even some Jews admitted they got caught up listening to his speeches. This is why Plato feared the poets and banished them from his utopia of enlightened rule. Poetry would inevitably undermine and subsume the high-minded rhetoric of philosophers. “[P]oetry used to be divine knowledge,” as Guerini et al. state in Echoes of Persuasion. “It was the sound and tenor of authorization and it commanded where plain prose could only ask.”

Metaphor grows naturally in poetic soil, but its seeds are planted in every aspect of language and thought, bearing fruit in our perceptions and actions. This is a thousandfold true on the collective level of society and politics. Metaphors are most powerful when we don’t see them as metaphors. So the most persuasive rhetoric is that which hides its metaphorical frame and obfuscates any attempt to bring it to light.

Going far back into the ancient world, metaphors didn’t need to be hidden in this sense. The reason is that there was no intellectual capacity or conceptual understanding of metaphors as metaphors. Instead, metaphors were taken literally. The way people spoke about reality was inseparable from their experience of reality, and they had no way of stepping back from their cultural biases, as the cultural worldviews they existed within were all-encompassing. It’s only with the later rise of multicultural societies, especially the vast multi-ethnic trade empires, that people began to think in terms of multiple perspectives. Such a society was developing in the trading and colonizing city-states of Greece in the centuries leading up to Hellenism.

That is the well-known part of Jaynes’ speculations, the basis of his proposed bicameral mind. And Jaynes considered it extremely relevant to the present.

Marcel Kuijsten wrote that “Jaynes maintained that we are still deep in the midst of this transition from bicamerality to consciousness; we are continuing the process of expanding the role of our internal dialogue and introspection in the decision-making process that was started some 3,000 years ago. Vestiges of the bicameral mind — our longing for absolute guidance and external control — make us susceptible to charismatic leaders, cults, trends, and persuasive rhetoric that relies on slogans to bypass logic” (“Consciousness, Hallucinations, and the Bicameral Mind: Three Decades of New Research”, Reflections on the Dawn of Consciousness, Kindle Locations 2210-2213). Considering the present, in Authoritarian Grammar and Fundamentalist Arithmetic, Ben G. Price puts it starkly: “Throughout, tyranny asserts its superiority by creating a psychological distance between those who command and those who obey. And they do this with language, which they presume to control.” The point made by the latter is that this knowledge, even as it can be used as an intellectual defense, might just lead to even more effective authoritarianism.

We’ve grown less fearful of rhetoric because we see ourselves as savvy, experienced consumers of media. The cynical modern mind is always on guard, our well-developed and rigid state of consciousness offering continuous psychological buffering against the intrusions of the world. So we like to think. I remember, back in 7th grade, being taught how the rhetoric of advertising is used to manipulate us. But we are over-confident. Consciousness operates at the surface of the psychic depths. We are better at rationalizing than being rational, something we may understand intellectually, though we rarely acknowledge its full psychological and societal significance. That is the usefulness of theories like bicameralism: they remind us that we are out of our depths. In the ancient world, there was a profound mistrust between the poetic and the rhetorical, and for good reason. We would be wise to learn from that clash of mindsets and worldviews.

We shouldn’t be so quick to assume we understand our own minds, the kind of vessel we find ourselves on. Nor should we allow ourselves to get too comfortable within the worldview we’ve always known, the safe harbor of our familiar patterns of mind. It’s hard to think about these issues because they touch upon our own being, the surface of consciousness along with the depths below it. It is the difficult task of fathoming the ocean floor with a rope and a weight, a task easier the closer we hug the shoreline. But what might we find if we cast ourselves out on open waters? What new lands might be found, lands to be newly discovered and lands already inhabited?

We moderns love certainty. And it’s true we possess more knowledge than any civilization before us has accumulated. Yet we’ve partly made the unfamiliar familiar by remaking the world in our own image. There is no place on earth that remains entirely untouched. Only a couple hundred small isolated tribes remain uncontacted, representing foreign worldviews not known or studied, but even they live under unnatural conditions of stress as the larger world closes in on them. Most of the ecological and cultural diversity that once existed has been obliterated from the face of the earth, most of it leaving not a single trace or record, simply gone. Populations beyond count faced extermination by outside influences and forces before they ever got a chance to meet an outsider. Plagues, environmental destruction, and societal collapse wiped them out, often in short periods of time.

Those other cultures might have gifted us with insights about our humanity that are now lost forever, just as extinct species might have held answers to questions not yet asked and medicines for diseases not yet understood. Almost all that is left now is a nearly complete monoculture, with the differences ever shrinking into the constraints of capitalist realism. If not for scientific studies done on the last of the isolated tribal peoples, we would never know how much diversity exists within human nature. Many of the conclusions that earlier social scientists had drawn were based mostly on studies involving white, middle-class college kids in Western countries, what some have called the WEIRD: Western, Educated, Industrialized, Rich, and Democratic. But many of those conclusions have since proven wrong, biased, or limited.

When Jaynes first thought on such matters, the social sciences were still getting established as serious fields of study. He entered college around 1940, when behaviorism was a dominant paradigm. It was only in the prior decades that the very idea of ‘culture’ began to take hold among anthropologists. He was influenced by anthropologists, directly and indirectly. One indirect influence came by way of E. R. Dodds, a classical scholar who, in writing his 1951 The Greeks and the Irrational, found inspiration in Ruth Benedict’s anthropological work comparing cultures (Benedict arrived at this perspective by combining the ideas of Franz Boas and Carl Jung). Still, anthropology was young, and the fascinating cases so well known today were unknown back then (e.g., Daniel Everett’s recent books on the Pirahã). So, in following Dodds’ example, Jaynes turned to ancient societies and their literature.

His ideas were forming at the same time the social sciences were gaining respectability and maturity. It was a time when many scholars and other intellectuals were more fully questioning Western civilization. But it was also the time when Western ascendancy was becoming clear, with WWI ending the Ottoman Empire and WWII ending the Japanese Empire. The whole world was falling under Western cultural influence, and traditional societies were in precipitous decline. That was the dawning of the age of monoculture.

We are the inheritors of the world that was created from that wholesale destruction of all that came before. And even what came before was built on millennia of collapsing civilizations. Jaynes focused on the earliest example of mass destruction and chaos, leading him to see a stark division between what came before and after. How do we understand why we came to be the way we are when so much has been lost? We are forced back on our own ignorance. Jaynes apparently understood that, and so considered awe the proper response. We know the world through our own humanity, but we can only know our own humanity through the cultural worldview we are born into. It is our words that have meaning, was Jaynes’ response, “not life or persons or the universe itself.” That is to say, we bring meaning to what we seek to understand. Meaning is created, not discovered. And the kind of meaning we create depends on our cultural worldview.

In Monoculture, F. S. Michaels writes (pp. 1-2):

THE HISTORY OF HOW we think and act, said twentieth-century philosopher Isaiah Berlin, is, for the most part, a history of dominant ideas. Some subject rises to the top of our awareness, grabs hold of our imagination for a generation or two, and shapes our entire lives. If you look at any civilization, Berlin said, you will find a particular pattern of life that shows up again and again, that rules the age. Because of that pattern, certain ideas become popular and others fall out of favor. If you can isolate the governing pattern that a culture obeys, he believed, you can explain and understand the world that shapes how people think, feel and act at a distinct time in history.1

The governing pattern that a culture obeys is a master story — one narrative in society that takes over the others, shrinking diversity and forming a monoculture. When you’re inside a master story at a particular time in history, you tend to accept its definition of reality. You unconsciously believe and act on certain things, and disbelieve and fail to act on other things. That’s the power of the monoculture; it’s able to direct us without us knowing too much about it.

Over time, the monoculture evolves into a nearly invisible foundation that structures and shapes our lives, giving us our sense of how the world works. It shapes our ideas about what’s normal and what we can expect from life. It channels our lives in a certain direction, setting out strict boundaries that we unconsciously learn to live inside. It teaches us to fear and distrust other stories; other stories challenge the monoculture simply by existing, by representing alternate possibilities.

Jaynes argued that ideas are more than mere concepts. Ideas are embedded in language and metaphor. And ideas take form not just as culture but as entire worldviews built on interlinked patterns of attitudes, thought, perception, behavior, and identity. Taken together, this is the reality tunnel we exist within.

It takes a lot to shake us loose from these confines of the mind. Certain practices, from meditation to imbibing psychedelics, can temporarily or permanently alter the matrix of our identity. Jaynes, for reasons of his own, came to question the inevitability of the society around him, which allowed him to see that other possibilities may exist. The direction his queries took him landed him in foreign territory, outside of the idolized individualism of Western modernity.

His ideas might have been less challenging in a different society. We modern Westerners identify ourselves with our thoughts, the internalized voice of egoic consciousness. And we see this as the greatest prize of civilization, the hard-won rights and freedoms of the heroic individual. It’s the story we tell. But in other societies, such as in the East, there are traditions that teach the self is distinct from thought. From the Buddhist perspective of dependent (co-)origination, it is a much less radical notion that the self arises out of thought, instead of the other way around, and that thought itself simply arises. A Buddhist would have a much easier time intuitively grasping the theory of bicameralism, that thoughts are greater than and precede the self.

Maybe we modern Westerners need to practice a sense of awe, to inquire more deeply. Jaynes offers a different way of thinking that doesn’t even require us to look to another society. If he is correct, this radical worldview is at the root of Western Civilization. Maybe the traces of the past are still with us.

* * *

The Origin of Rhetoric in the Breakdown of the Bicameral Mind
by Ted Remington

Endogenous Hallucinations and the Bicameral Mind
by Rick Strassman

Consciousness and Dreams
by Marcel Kuijsten, Julian Jaynes Society

Ritual and the Consciousness Monoculture
by Sarah Perry, Ribbonfarm

“I’m Nobody”: Lyric Poetry and the Problem of People
by David Baker, The Virginia Quarterly Review

It is in fact dangerous to assume a too similar relationship between those ancient people and us. A fascinating difference between the Greek lyricists and ourselves derives from the entity we label “the self.” How did the self come to be? Have we always been self-conscious, of two or three or four minds, a stew of self-aware voices? Julian Jaynes thinks otherwise. In The Origin of Consciousness in the Breakdown of the Bicameral Mind—that famous book my poetry friends adore and my psychologist friends shrink from—Jaynes surmises that the early classical mind, still bicameral, shows us the coming-into-consciousness of the modern human, shows our double-minded awareness as, originally, a haunted hearing of voices. To Jaynes, thinking is not the same as consciousness: “one does one’s thinking before one knows what one is to think about.” That is, thinking is not synonymous with consciousness or introspection; it is rather an automatic process, notably more reflexive than reflective. Jaynes proposes that epic poetry, early lyric poetry, ritualized singing, the conscience, even the voices of the gods, all are one part of the brain learning to hear, to listen to, the other.

Auditory Hallucinations: Psychotic Symptom or Dissociative Experience?
by Andrew Moskowitz & Dirk Corstens

Voices heard by persons diagnosed schizophrenic appear to be indistinguishable, on the basis of their experienced characteristics, from voices heard by persons with dissociative disorders or by persons with no mental disorder at all.

Neuroimaging, auditory hallucinations, and the bicameral mind.
by L. Sher, Journal of Psychiatry and Neuroscience

Olin suggested that recent neuroimaging studies “have illuminated and confirmed the importance of Jaynes’ hypothesis.” Olin believes that recent reports by Lennox et al and Dierks et al support the bicameral mind. Lennox et al reported a case of a right-handed subject with schizophrenia who experienced a stable pattern of hallucinations. The authors obtained images of repeated episodes of hallucination and observed its functional anatomy and time course. The patient’s auditory hallucination occurred in his right hemisphere but not in his left.

What Is It Like to Be Nonconscious?: A Defense of Julian Jaynes
by Gary Williams, Phenomenology and the Cognitive Sciences

To explain the origin of consciousness is to explain how the analog “I” began to narratize in a functional mind-space. For Jaynes, to understand the conscious mind requires that we see it as something fleeting rather than something always present. The constant phenomenality of what-it-is-like to be an organism is not equivalent to consciousness and, subsequently, consciousness must be thought in terms of the authentic possibility of consciousness rather than its continual presence.

Defending Damasio and Jaynes against Block and Gopnik
by Emilia Barile, Phenomenology Lab

When Jaynes says that there was “nothing it is like” to be preconscious, he certainly didn’t mean to say that nonconscious animals are somehow not having subjective experience in the sense of “experiencing” or “being aware” of the world. When Jaynes said there is “nothing it is like” to be preconscious, he means that there is no sense of mental interiority and no sense of autobiographical memory. Ask yourself what it is like to be driving a car and then suddenly wake up and realize that you have been zoned out for the past minute. Was there something it is like to drive on autopilot? This depends on how we define “what it is like”.

“The Evolution of the Analytic Topoi: A Speculative Inquiry”
by Frank J. D’Angelo
from Essays on Classical Rhetoric and Modern Discourse
ed. Robert J. Connors, Lisa S. Ede, & Andrea A. Lunsford
pp. 51-5

The first stage in the evolution of the analytic topoi is the global stage. Of this stage we have scanty evidence, since we must assume the ontogeny of invention in terms of spoken language long before the individual is capable of anything like written language. But some hints of how logical invention might have developed can be found in the work of Eric Havelock. In his Preface to Plato, Havelock, in recapitulating the educational experience of the Homeric and post-Homeric Greek, comments that the psychology of the Homeric Greek is characterized by a high degree of automatism.

He is required as a civilised being to become acquainted with the history, the social organisation, the technical competence and the moral imperatives of his group. This in turn is able to function only as a fragment of the total Hellenic world. It shares a consciousness in which he is keenly aware that he, as a Hellene, [carries this tradition] in his memory. Such is poetic tradition, essentially something he accepts uncritically, or else it fails to survive in his living memory. Its acceptance and retention are made psychologically possible by a mechanism of self-surrender to the poetic performance and of self-identification with the situations and the stories related in the performance. . . . His receptivity to the tradition has thus, from the standpoint of inner psychology, a degree of automatism which however is counter-balanced by a direct and unfettered capacity for action in accordance with the paradigms he has absorbed. 6

Preliterate man was apparently unable to think logically. He acted, or as Julian Jaynes, in The Origin of Consciousness in the Breakdown of the Bicameral Mind, puts it, “reacted” to external events. “There is in general,” writes Jaynes, “no consciousness in the Iliad . . . and in general therefore, no words for consciousness or mental acts.” 7 There was, in other words, no subjective consciousness in Iliadic man. His actions were not rooted in conscious plans or in reasoning. We can only speculate, then, based on the evidence given by Havelock and Jaynes that logical invention, at least in any kind of sophisticated form, could not take place until the breakdown of the bicameral mind, with the invention of writing. If ancient peoples were unable to introspect, then we must assume that the analytic topoi were a discovery of literate man. Eric Havelock, however, warns that the picture he gives of Homeric and post-Homeric man is oversimplified and that there are signs of a latent mentality in the Greek mind. But in general, Homeric man was more concerned to go along with the tradition than to make individual judgments.

For Iliadic man to be able to think, he must think about something. To do this, states Havelock, he had to be able to revolt against the habit of self-identification with the epic poem. But identification with the poem at this time in history was psychologically necessary (identification was necessary for memorization), and what was implicit in the epic story as acts or events carried out by important people had to be abstracted from the narrative flux. “Thus the autonomous subject who no longer recalls and feels, but knows, can now be confronted with a thousand abstract laws, principles, topics, and formulas which become the objects of his knowledge.” 8

The analytic topoi, then, were implicit in oral poetic discourse. They were “experienced” in the patterns of epic narrative, but once they are abstracted they can become objects of thought as well as of experience. As Eric Havelock puts it,

If we view them [these abstractions] in relation to the epic narrative from which, as a matter of historical fact, they all emerged they can all be regarded as in one way or another classifications of an experience which was previously “felt” in an unclassified medley. This was as true of justice as of motion, of goodness as of body or space, of beauty as of weight or dimension. These categories turn into linguistic counters, and become used as a matter of course to relate one phenomenon to another in a non-epic, non-poetic, non-concrete idiom. 9

The invention of the alphabet made it easier to report experience in a non-epic idiom. But it might be a simplification to suppose that the advent of alphabetic technology was the only influence on the emergence of logical thinking and the analytic topics, although perhaps it was the major influence. Havelock contends that the first “proto-thinkers” of Greece were the poets, who at first used rhythm and oral formulas to attempt to arrange experience in categories rather than in narrative events. He mentions in particular that it was Hesiod who first parts company with the narrative, in the Theogony and Works and Days. In Works and Days, Hesiod uses a cataloging technique, consisting of proverbs, aphorisms, wise sayings, exhortations, and parables, intermingled with stories. But this effect of cataloging, which goes “beyond the plot of a story in order to impose a rough logic of topics,” presumes that Hesiod is . . . 10

The kind of material found in the catalogs of Hesiod was more like the cumulative commonplace material of the Renaissance than the abstract topics that we are familiar with today. Walter Ong notes that “the oral performer, poet or orator needed a stock of material to keep him going. The doctrine of the commonplaces is, from one point of view, the codification of ways of assuring and managing this stock.” 11 We already know what some of the material was like: stock epithets, figures of speech, exempla, proverbs, sententiae, quotations, praises or censures of people and things, and brief treatises on virtues and vices. By the time we get to the invention of printing, there are vast collections of this commonplace material, so vast, relates Ong, that scholars could probably never survey it all. Ong goes on to observe that

print gave the drive to collect and classify such excerpts a potential previously undreamed of. . . . the ranging of items side by side on a page once achieved, could be multiplied as never before. Moreover, printed collections of such commonplace excerpts could be handily indexed; it was worthwhile spending days or months working up an index because the results of one’s labors showed fully in thousands of copies. 12

To summarize, then, in oral cultures rhetorical invention was bound up with oral performance. At this stage, both the cumulative topics and the analytic topics were implicit in epic narrative. Then the cumulative commonplaces begin to appear, separated out by a cataloging technique from poetic narrative, in sources such as the Theogony and Works and Days. Eric Havelock points out that in Hesiod, the catalog “has been isolated or abstracted . . . out of a thousand contexts in the rich reservoir of oral tradition. . . . A general world view is emerging in isolated or ‘abstracted’ form.” 13 Apparently, what we are witnessing is the emergence of logical thinking. Julian Jaynes describes the kind of thought to be found in the Works and Days as “preconscious hypostases.” Certain lines in Hesiod, he maintains, exhibit “some kind of bicameral struggle.” 14

The first stage, then, of rhetorical invention is that in which the analytic topoi are embedded in oral performance in the form of commonplace material as “relationships” in an undifferentiated matrix. Oral cultures preserve this knowledge by constantly repeating the fixed sayings and formulae. Mnemonic patterns, patterns of repetition, are not added to the thought of oral cultures. They are what the thought consists of.

Emerging selves: Representational foundations of subjectivity
by Wolfgang Prinz, Consciousness and Cognition

What, then, may mental selves be good for and why have they emerged during evolution (or, perhaps, human evolution or even early human history)? Answers to these questions used to take the form of stories explaining how the mental self came about and what advantages were associated with it. In other words, these are theories that construct hypothetical scenarios offering plausible explanations for why certain (groups of) living things that initially do not possess a mental self gain fitness advantages when they develop such an entity—with the consequence that they move from what we can call a self-less to a self-based or “self-morphic” state.

Modules for such scenarios have been presented occasionally in recent years by, for example, Dennett, 1990 and Dennett, 1992, Donald (2001), Edelman (1989), Jaynes (1976), Metzinger, 1993 and Metzinger, 2003, or Mithen (1996). Despite all the differences in their approaches, they converge around a few interesting points. First, they believe that the transition between the self-less and self-morphic state occurred at some stage during the course of human history—and not before. Second, they emphasize the cognitive and dynamic advantages accompanying the formation of a mental self. And, third, they also discuss the social and political conditions that promote or hinder the constitution of this self-morphic state. In the scenario below, I want to show how these modules can be keyed together to form a coherent construction. […]

Thus, where do thoughts come from? Who or what generates them, and how are they linked to the current perceptual situation? This brings us to a problem that psychology describes as the problem of source attribution ( Heider, 1958).

One obvious suggestion is to transfer the schema for interpreting externally induced messages to internally induced thoughts as well. Accordingly, thoughts are also traced back to human sources and, likewise, to sources that are present in the current situation. Such sources can be construed in completely different ways. One solution is to trace the occurrence of thoughts back to voices—the voices of gods, priests, kings, or ancestors, in other words, personal authorities that are believed to have an invisible presence in the current situation. Another solution is to locate the source of thoughts in an autonomous personal authority bound to the body of the actor: the self.

These two solutions to the attribution problem differ in many ways: historically, politically, and psychologically. In historical terms, the former must be markedly older than the latter. The transition from one solution to the other and the mentalities associated with them are the subject of Julian Jaynes’s speculative theory of consciousness. He even considers that this transfer occurred during historical times: between the Iliad and the Odyssey. In the Iliad, according to Jaynes, the frame of mind of the protagonists is still structured in a way that does not perceive thoughts, feelings, and intentions as products of a personal self, but as the dictates of supernatural voices. Things have changed in the Odyssey: Odysseus possesses a self, and it is this self that thinks and acts. Jaynes maintains that the modern consciousness of Odysseus could emerge only after the self had taken over the position of the gods (Jaynes, 1976; see also Snell, 1975).

Moreover, it is obvious why the political implications of the two solutions differ so greatly: Societies whose members attribute their thoughts to the voices of mortal or immortal authorities produce castes of priests or nobles that claim to be the natural authorities or their authentic interpreters and use this to derive legitimization for their exercise of power. It is only when the self takes the place of the gods that such castes become obsolete, and authoritarian constructions are replaced by other political constructions that base the legitimacy for their actions on the majority will of a large number of subjects who are perceived to be autonomous.

Finally, an important psychological difference is that the development of a self-concept establishes the precondition for individuals to become capable of perceiving themselves as persons with a coherent biography. Once established, the self becomes involved in every re-presentation and representation as an implicit personal source, and just as the same body is always present in every perceptual situation, it is the same mental self that remains identical across time and place. […]

According to the cognitive theories of schizophrenia developed in the last decade (Daprati et al., 1997; Frith, 1992), these symptoms can be explained with the same basic pattern that Julian Jaynes uses in his theory to characterize the mental organization of the protagonists in the Iliad. Patients with delusions suffer from the fact that the standardized attribution schema that localizes the sources of thoughts in the self is not available to them. Therefore, they need to explain the origins of their thoughts, ideas, and desires in another way (see, e.g., Stephens & Graham, 2000). They attribute them to person sources that are present but invisible—such as relatives, physicians, famous persons, or extraterrestrials. Frequently, they also construct effects and mechanisms to explain how the thoughts proceeding from these sources are communicated, by, for example, voices or pictures transmitted over rays or wires, and nowadays frequently also over phones, radios, or computers. […]

As bizarre as these syndromes seem against the background of our standard concept of subjectivity and personhood, they fit perfectly with the theoretical idea that mental selves are not naturally given but rather culturally constructed, and in fact set up in, attribution processes. The unity and consistency of the self are not a natural necessity but a cultural norm, and when individuals are exposed to unusual developmental and life conditions, they may well develop deviant attribution patterns. Whether these deviations are due to disturbances in attribution to persons or to disturbances in dual representation cannot be decided here. Both biological and societal conditions are involved in the formation of the self, and when they take an unusual course, the causes could lie in both domains.


“The Varieties of Dissociative Experience”
by Stanley Krippner
from Broken Images Broken Selves: Dissociative Narratives In Clinical Practice
pp. 339-341

In his provocative description of the evolution of humanity’s conscious awareness, Jaynes (1976) asserted that ancient people’s “bicameral mind” enabled them to experience auditory hallucinations— the voices of the deities— but they eventually developed an integration of the right and left cortical hemispheres. According to Jaynes, vestiges of this dissociation can still be found, most notably among the mentally ill, the extremely imaginative, and the highly suggestible. Even before the development of the cortical hemispheres, the human brain had slowly evolved from a “reptilian brain” (controlling breathing, fighting, mating, and other fixed behaviors), to the addition of an “old-mammalian brain,” (the limbic system, which contributed emotional components such as fear, anger, and affection), to the superimposition of a “new-mammalian brain” (responsible for advanced sensory processing and thought processes). MacLean (1977) describes this “triune brain” as responsible, in part, for distress and inefficiency when the parts do not work well together. Both Jaynes’ and MacLean’s theories are controversial, but I believe that there is enough autonomy in the limbic system and in each of the cortical hemispheres to justify Ornstein’s (1986) conclusion that human beings are much more complex and intricate than they imagine, consisting of “an uncountable number of small minds” (p. 72), sometimes collaborating and sometimes competing. Donald’s (1991) portrayal of mental evolution also makes use of the stylistic differences of the cerebral hemispheres, but with a greater emphasis on neuropsychology than Jaynes employs. Mithen’s (1996) evolutionary model is a sophisticated account of how specialized “cognitive domains” reached the point that integrated “cognitive fluidity” (apparent in art and the use of symbols) was possible.

James (1890) spoke of a “multitude” of selves, and some of these selves seem to go their separate ways in posttraumatic stress disorder (PTSD) (see Greening, Chapter 5), dissociative identity disorder (DID) (see Levin, Chapter 6), alien abduction experiences (see Powers, Chapter 9), sleep disturbances (see Barrett, Chapter 10), psychedelic drug experiences (see Greenberg, Chapter 11), death terrors (see Lapin, Chapter 12), fantasy proneness (see Lynn, Pintar, & Rhue, Chapter 13), near-death experiences (NDEs) (see Greyson, Chapter 7), and mediumship (see Grosso, Chapter 8). Each of these conditions can be placed into a narrative construction, and the value of these frameworks has been described by several authors (e.g., Barclay, Chapter 14; Lynn, Pintar, & Rhue, Chapter 13; White, Chapter 4). Barclay (Chapter 14) and Powers (Chapter 15) have addressed the issue of narrative veracity and validation, crucial issues when stories are used in psychotherapy. The American Psychiatric Association’s Board of Trustees (1993) felt constrained to issue an official statement that “it is not known what proportion of adults who report memories of sexual abuse were actually abused” (p. 2). Some reports may be fabricated, but it is more likely that traumatic memories may be misconstrued and elaborated (Steinberg, 1995, p. 55). Much of the same ambiguity surrounds many other narrative accounts involving dissociation, especially those described by White (Chapter 4) as “exceptional human experiences.”

Nevertheless, the material in this book makes the case that dissociative accounts are not inevitably uncontrolled and dysfunctional. Many narratives considered “exceptional” from a Western perspective suggest that dissociation once served and continues to serve adaptive functions in human evolution. For example, the “sham death” reflex found in animals with slow locomotor abilities effectively offers protection against predators with greater speed and agility. Uncontrolled motor responses often allow an animal to escape from dangerous or frightening situations through frantic, trial-and-error activity (Kretchmer, 1926). Many evolutionary psychologists have directed their attention to the possible value of a “multimodular” human brain that prevents painful, unacceptable, and disturbing thoughts, wishes, impulses, and memories from surfacing into awareness and interfering with one’s ongoing contest for survival (Nesse & Lloyd, 1992, p. 610). Ross (1991) suggests that Western societies suppress this natural and valuable capacity at their peril.

The widespread prevalence of dissociative reactions argues for their survival value, and Ludwig (1983) has identified seven of them: (1) The capacity for automatic control of complex, learned behaviors permits organisms to handle a much greater work load in as smooth a manner as possible; habitual and learned behaviors are permitted to operate with a minimum expenditure of conscious control. (2) The dissociative process allows critical judgment to be suspended so that, at times, gratification can be more immediate. (3) Dissociation seems ideally suited for dealing with basic conflicts when there is no instant means of resolution, freeing an individual to take concerted action in areas lacking discord. (4) Dissociation enables individuals to escape the bounds of reality, providing for inspiration, hope, and even some forms of “magical thinking.” (5) Catastrophic experiences can be isolated and kept in check through dissociative defense mechanisms. (6) Dissociative experiences facilitate the expression of pent-up emotions through a variety of culturally sanctioned activities. (7) Social cohesiveness and group action often are facilitated by dissociative activities that bind people together through heightened suggestibility.

Each of these potentially adaptive functions may be life-depotentiating as well as life-potentiating; each can be controlled as well as uncontrolled. A critical issue for the attribution of dissociation may be the dispositional set of the experiencer-in-context along with the event’s adaptive purpose. Salamon (1996) described her mother’s ability to disconnect herself from unpleasant surroundings or facts, a proclivity that led to her ignoring the oncoming imprisonment of Jews in Nazi Germany but that, paradoxically, enabled her to survive her years in Auschwitz. Gergen (1991) has described the jaundiced eye that modern Western science has cast toward Dionysian revelry, spiritual experiences, mysticism, and a sense of bonded unity with nature, a hostility he predicts may evaporate in the so-called “postmodern” era, which will “open the way to the full expression of all discourses” (pp. 246– 247). For Gergen, this postmodern lifestyle is epitomized by Proteus, the Greek sea god, who could change his shape from wild boar to dragon, from fire to flood, without obvious coherence through time. This is all very well and good, as long as this dissociated existence does not leave— in its wake— a residue of broken selves whose lives have lost any intentionality or meaning, who live in the midst of broken images, and whose multiplicity has resulted in nihilistic affliction and torment rather than in liberation and fulfillment (Glass, 1993, p. 59).


Probability of Reality as We Know It

While I was jogging this morning, a pebble got into my shoe. I was on a sidewalk that wasn’t covered in rocks. The shoes I had on have high tops and were tied tightly. It got me thinking about probability, considering all the perfect conditions that had to come together to produce even such a simple result as a pebble in my shoe.

I had to step on one of the few tiny rocks that happened to be in the right spot. Somehow the rock got kicked up about six inches, where it caught the back edge of my shoe. It had to land just right in order to lodge in the slight space between my foot and the shoe. Then it had to work its way down into my shoe without first getting kicked back out.

It just got me thinking. For any given person at any given moment, a rock getting in their shoe is highly improbable. I run and/or walk numerous times every single day. And I can go years without getting a rock in my shoe. Even when it does happen, it would usually be because I was walking on a gravel road or alley, not on a standard sidewalk. Yet for all of the billions of people who are out and about every single day, the probability of numerous people getting rocks in their shoes at any given moment is quite high.

A more exciting example is getting struck by lightning. The vast majority of people go through their entire lives without getting hit. Still, a minuscule minority of the world’s population gets hit on any given day. Some rare people even get struck by lightning multiple times in their lifetime. Lightning directly hitting any single person is extremely improbable, while lightning directly hitting some person somewhere is extremely probable.
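The inversion between individual odds and population-level odds is simple arithmetic. Here is a rough sketch; the per-person daily probability below is an invented, illustrative number, not an actual lightning statistic, and the calculation assumes each person’s chance is independent:

```python
# How a per-person improbability becomes a population-level near-certainty.
# The per-person daily probability is an illustrative assumption, not real data.
p_person_day = 1e-9          # assumed chance any one person is struck today
population = 8_000_000_000   # roughly the world's population

# Assuming independence, the chance that no one at all is struck today:
p_nobody = (1 - p_person_day) ** population

# So the chance that at least one person somewhere is struck today:
p_somebody = 1 - p_nobody

print(f"any single person: {p_person_day:.0e}")   # vanishingly small
print(f"someone, somewhere: {p_somebody:.4f}")    # close to certain
```

With these made-up numbers, the chance that at least one person is struck on a given day works out to better than 99.9 percent, even though each individual’s chance is one in a billion. The same arithmetic applies to pebbles in shoes.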

Most people don’t go around worrying about lightning, but right at this moment multiple people in the world are probably getting struck. Someone somewhere inevitably will get struck. It could be you, right now where you are. And sometimes lightning comes seemingly out of nowhere with no storm in sight, even on occasion hitting people in their houses.

Probability is dependent on context. So it depends on our perspective, on how we look at the data and how we calculate the probability. Our view of probability tends to be biased by the personal, of course: by what we know and have experienced, by what is familiar to us. It is hard to think about probability in purely rational terms.

Given the right perspective, almost anything can be seen as improbable.

The entire existence of the universe, if one thinks too much about it, starts to seem improbable. Also improbable is life emerging on a particular planet, then that life leading to consciousness, intelligence, and advanced civilizations. Even so, because of the immense number of planets in the immense number of solar systems in the immense number of galaxies, it is probable to the point of near inevitability that there are vast numbers of planets with conscious, intelligent lifeforms and advanced civilizations.

Heck, we might be surrounded by lifeforms on our planet and in our own solar system while being unable to perceive and recognize them. We think of the probability of life, along with all that goes with it, in terms of the life we know immediately around us. But the actual probability is that other lifeforms would be bizarre to us, if we could even discern them. Other lifeforms might simply be beings of energy or fluids, might be too small to detect with our senses or too large to comprehend with our minds. If a gut microbe gained intelligence and you were able to ask it what the probability was that its world was a giant ambling creature, the response would probably be amused laughter, or else it would look at you as though you were crazy. Maybe our own imaginations toward that which is beyond us are as limited as those of the gut microbe.

Another aspect is cultural bias. People living in a society that wears sandals would have a different view of the probability of rocks in their ‘shoes’ than those in a society that wears tall boots. Societies that don’t wear any footwear at all wouldn’t even comprehend the issue of rocks in shoes. The same goes for beings that can’t be seen: in some societies it is common belief that such beings are all around us (ghosts, spirits, demons, elves, supernatural creatures, etc.), and people may claim to know how to interact with them.

How do we determine the probability of bicameral societies having existed in the ancient world? Some say it isn’t even plausible, much less probable. I was reading Hearing Voices by Simon McCarthy-Jones and the author was in this doubting camp. He basically argued that, interpreting ancient non-Western texts based on modern Western preconceptions, it is highly improbable that ancient non-Western societies could exist that contradicted modern Western preconceptions. Uh, well, yeah, I guess. Within that circular logic, it indeed is a coherent opinion. But obviously others disagree based on the possibility of other ways of interpreting the same evidence. For example, unlike McCarthy-Jones, some people would point to the anthropological record to see possible examples of bicameralism or something akin to it, such as the Ugandan Ik and the Amazonian Pirahã.

My point isn’t whether or not bicameral theory is the best possible explanation of the data. But even ignoring the theory, the anthropological record makes absolutely clear there are societies that seem very strange to our modern Western sensibility. Then again, to those other societies, we would appear strange. Considering how perfect conditions have had to be, all of modern Western civilization is highly improbable. If it were possible to re-create the entire world in a vast laboratory, you could run an experiment numerous times and probably never be able to repeat these same results. Supposedly strange societies like the Ik and Pirahã are immensely more probable than our own strange society. Some other societies have lasted for thousands of years and we might be lucky to last the coming century.

Although it’s possible that the world perfectly matches our present beliefs and biases, it is ridiculously improbable that such is the case. Future generations surely will look back on us as we look back on the ignorance and barbarity of ancient societies. So, who are we to hold ourselves up as the norm for all of humanity? And who are we to use our cultural biases to judge all of reality?

We have no way to determine the probability of most things or often even their plausibility. All we know is what we know. And we don’t know what we don’t know. Usually, we don’t even know that we don’t know what we don’t know. Our state of ignorance is almost entirely self-enclosed, as what we know or think we know is inseparable from what we don’t know. As it has been said: The world is not only stranger than we imagine, it is stranger than we can imagine.

The world is full of kicked-up pebbles and lightning strikes, strange lifeforms and even stranger cultures. Everything is improbable from some perspective, until it happens to you or is experienced by you and then it’s the most probable thing in the world. Then it simply is the reality you know.

Shaken and Stirred

I Is an Other
by James Geary
Kindle Locations 303-310

Descartes’s “Cogito ergo sum.”

This phrase is routinely translated as:

I think, therefore I am.

But there is a better translation.

The Latin word cogito is derived from the prefix co (with or together) and the verb agitare (to shake). Agitare is the root of the English words “agitate” and “agitation.” Thus, the original meaning of cogito is “to shake together,” and the proper translation of “Cogito ergo sum” is:

I shake things up, therefore I am.

Staying with the Trouble
by Donna J. Haraway
Kindle Locations 293-303

Trouble is an interesting word. It derives from a thirteenth-century French verb meaning “to stir up,” “to make cloudy,” “to disturb.” We— all of us on Terra— live in disturbing times, mixed-up times, troubling and turbid times. The task is to become capable, with each other in all of our bumptious kinds, of response. Mixed-up times are overflowing with both pain and joy— with vastly unjust patterns of pain and joy, with unnecessary killing of ongoingness but also with necessary resurgence. The task is to make kin in lines of inventive connection as a practice of learning to live and die well with each other in a thick present. Our task is to make trouble, to stir up potent response to devastating events, as well as to settle troubled waters and rebuild quiet places. In urgent times, many of us are tempted to address trouble in terms of making an imagined future safe, of stopping something from happening that looms in the future, of clearing away the present and the past in order to make futures for coming generations. Staying with the trouble does not require such a relationship to times called the future. In fact, staying with the trouble requires learning to be truly present, not as a vanishing pivot between awful or edenic pasts and apocalyptic or salvific futures, but as mortal critters entwined in myriad unfinished configurations of places, times, matters, meanings.

Kafka’s Silence On Job

The Book of “Job”: A Biography
by Mark Larrimore
pp. 235-239

Susman begins and ends her book with quotations from a modern Jewish writer whose entire oeuvre has been interpreted as a commentary on the book of Job even though Job is never mentioned: Franz Kafka. She was one of the first to link Kafka’s evocations of the mute fruitlessness of modern experience to the social, cognitive, and spiritual crises of Job. A line from Kafka’s diary (January 10, 1920) limns Susman’s closing account of that messianic hope which, paradoxically, can arise only from the most total devastation:

It is no refutation of the premonition of a final rescue when the imprisonment is unchanged the following day or even more severe, or even when it is expressly explained that it will never end. For all of that can be the necessary precondition for the final rescue. (238)

Kafka is not an obvious apostle of hope, but Susman’s conception of hope is not the usual one. Kafka’s rigorous evocations of dehumanization are the most powerful accounts of what true hope and true humanity might be, precisely through their absence. “Metamorphosis,” the tale of a man who awakes to find himself transformed into a monstrous insect, describes modern Jewish, and through it, all modern human experience. Its protagonist Gregor Samsa is not only “a Job … ejected entirely from human community” but so estranged from his own humanity that he “cannot present his fate to God and demand to be dealt with in a human way” (152).

Susman first laid out the analogy of Job and Jewish experience in a 1929 essay on Kafka, years before the cataclysm. Like Job the Jews know what only the truly innocent sufferer can know— that individual innocence does not register in the relationship of humanity and God, a relationship defined rather by a general human guilt. Modern Jews, Susman argues, are triply homeless. They are exiled from a homeland, from nature. In refusing to convert to Christianity, they are exiled from history. And the disenchanted modern European civilization to which they have assimilated has itself lost sight of the divine, and of the human. The Christian (or ex-Christian) still has the world and history, for his God once appeared in it. The Jew has nothing but a transcendent God beyond nature and culture, whom she cannot help constantly seeking and addressing. She is Job.

The awful truth is that the only way to encounter God unambiguously is in a suffering that defies nature and history as well as justice, and, thus, alone, can be known to be a divine sign. This is what Kafka and his characters seek. There is no lament in Kafka, only a tireless seeking for the law that might explain and redeem existence, restore humanity. “Herein lies what is so strange, so profoundly religiously shattering about Kafka’s God-remote world,” Susman wrote: “it is seen not from the world, from life, but from God, measured against him and judged by him.” Everything is in “indecipherable and uncanny relation” in the world of Kafka’s writing, and we “never know which link in the endless chain we are touching.” What has been described as Kafka’s “perceptual nightmare” may seem the end of the road for the human project to discern a “depth dimension” to our experience. Yet Susman finds a messianic hope in the way Kafka’s world, so ruthlessly rendered in its “God-remoteness,” nevertheless calls for the affirmation of every part of it as, perhaps, the “necessary precondition for final salvation.”

Margarete Susman’s argument that the book of Job, more than the covenant of David, describes Jewish history and destiny makes Job’s place off the map of history decisive. Job had been a sideshow to the story of Israel, just as the story of the Jews in diaspora had been for the centuries of Christian history. Now that the idols of nation, nature, and history are crumbling, God and human destiny have become legible again— though only in negative, and in the most anguished forms of affliction and marginalization. The lesson is a hard one, bitterly hard. But if we recognize Job and Kafka as prophets, there is still hope for human life.

Our time in western history has been described as a “secular age,” an era in which “naïve” religious faith is no longer possible. Even the most devoted believer in a faith is aware that many others do not share it. Religious communities flourish, to the consternation of secularization theorists, but know the world is not theirs alone— or perhaps theirs at all. The shared default experience is of a neutral world of indifferent natural laws shared by cultures and communities projecting fragile meanings on or beyond it. Every human tradition may be a sideshow.

Job is the modern soul’s guide as it navigates the religious experience of being off the map. He has offered a template for experiences of individual hopelessness since the books of Hours. His relationship with God, based on personal integrity in the absence of communal or covenantal support, resonates with modern disenchantment with religious institutions. And his book records the strange and painful discovery that God’s presence is felt most keenly in what might otherwise seem his absences: in the ethical irrationality of the world, and especially in those experiences of loss and suffering that defy human conceptions of justice or meaning. God may be understood to be the instigator of innocent suffering or to be suffering alongside the innocent— or both. The book of Job is the Schicksalsbuch for all whose God is exiled from what was once his creation.

* * *

The Gesture of Tank Man

Language and Knowledge, Parable and Gesture

Kafka On Parables And Metaphors, Writing And Language

Occam’s Shadow

Occam’s razor sometimes casts a dark shadow.

“Speaking on the myths and misconceptions surrounding the demise of the video game manufacturer Atari, founder Nolan Bushnell notes that “a simple answer that is clear and precise will always have more power in the world than a complex one that is true.” Bushnell’s observation is not limited to the situation with Atari. When it comes to subjects that are not fully understood, it seems to be a reality of human nature that we have a propensity to prefer easy answers and simple “truths” over more complex—and oftentimes more accurate—explanations. This certainly describes the study of the history of psychology: many prefer simplistic answers that ignore inconvenient facts, rather than explanations that take into account the full range of human experience and all its fascinating complexities.

“People often display a strong preference for simple answers and a compulsion to have everything settled (rather than withholding judgment until more information is available); we seem to have an aversion toward unknowns and ambiguity. Yet subjects that we are not entirely familiar with are generally more complex than we first realize. It behooves us to resist the impulse to make snap judgments and succumb to the illusion of mastery for subjects we don’t fully understand. By prematurely making up our mind about a topic we are unfamiliar with, we risk the tendency to oversimplify and to only seek evidence that confirms our existing beliefs. Withholding an opinion on new ideas until we have adequate information to make an informed judgment takes a great deal of effort and self-discipline.”

Gods, Voices and the Bicameral Mind
Edited by Marcel Kuijsten
Introduction, pp. 7-8

On the Origins of Liberalism

The following is my side of a discussion from the comments section of a post by Corey Robin, The Definitive Take on Donald Trump. Considering the topic of the post, it’s odd that the discussion turned into a historical and philosophical analysis of liberalism.

My comments are in response to Jason Bowden. He sees John Locke as more central to American liberalism. I don’t deny Locke’s importance, but I see American liberalism as having more diverse origins.

* * *

“The menu above is liberalism — limited government, individual rights, states rights, balance of powers, paper-worshipping Constitutionalism, privatization, deregulation, market-knows-best, blah blah blah. That’s the tradition of Locke, Jefferson, Godwin, Mill, Spencer, etc. It isn’t the counter-revolutionary tradition of Hobbes, Hume, Maistre, Burke, etc.”

I consider thinkers of that type to be more in the reactionary category. That is particularly true of Locke, but even Jefferson and Godwin were never consistent and moderated their views over time. Also, as far as I know, none of these thinkers came from poverty or even the working class. The same applies to Burke, whose father was a government official; Burke, I might add, began as a strong progressive before his reactionary side was elicited by the French Revolution.

Consider the details of Locke’s political views, as compared to an earlier thinker like Roger Williams:

https://benjamindavidsteele.wordpress.com/2012/12/19/roger-williams-and-american-democracy/

“Basically, Williams was articulating Lockean political philosophy when John Locke was still in diapers. Even Locke never defended Lockean rights as strongly as did Williams. Locke didn’t think Catholics and atheists deserved equal freedom. Locke was involved in writing the constitution of the Carolina Colony which included slavery, something Williams wouldn’t have ever done under any circumstances and no matter the personal benefits. In writing about land rights, Locke defended the rights of colonists to take Native American Land whereas Williams defended against the theft of land from Native Americans.”

That demonstrates the difference between ‘liberal’ and reactionary. There was no liberalism as such when Williams lived, but by his example he helped set the stage for what would become liberalism. Locke came from an entirely different tradition, the one that influenced the Deep South.

https://benjamindavidsteele.wordpress.com/2012/01/13/deep-south-american-hypocrisy-liberal-traditions/

The difference between liberal and reactionary to some degree aligned with the difference between democrat and republican during the revolutionary era, and to some degree it matched up with Anti-Federalist and Federalist. Josiah Tucker, a critic of Locke, wrote:

“Republicans in general . . . for leveling all Distinctions above them, and at the same time for tyrannizing over those, whom Chance or Misfortune have placed below them.”

The more reactionary Enlightenment thinkers and American founders were wary of democracy. Liberals like Thomas Paine, on the other hand, advocated openly for democracy. Paine attributed the failure of the French Revolution to the revolutionaries not having created a democratic constitution when they had the chance. Also in the category of liberals, as opposed to reactionaries, I’d place people like Ethan Allen, Thomas Young, Abraham Clark, etc.

https://benjamindavidsteele.wordpress.com/2014/10/05/natures-god-and-american-radicalism/

https://benjamindavidsteele.wordpress.com/2016/02/23/a-truly-free-people/

Paine, in particular, is the archetype of modern American liberalism and progressivism. Besides supporting democracy in general, he was for extending rights to poor men, women, blacks, Native Americans, non-Protestants, etc., along with being for progressive taxation and a strong welfare state. Paine represents what we mean by liberalism today. But even a classical liberal like Adam Smith pointed to how inequality endangered a free society, and so he argued for progressive taxation and public education.

Someone like Jefferson was more of a fence-sitter. It is hard to categorize him. But he obviously never fully committed himself to the progressive liberalism of his friend, Paine. And as he aged he became considerably more conservative. The same happened with Godwin. It must be understood that both Jefferson and Godwin came from the elite and never betrayed their class. It was class position that distinguished strong progressive liberals from everyone else. Paine, Allen, Young, and Clark were never fully accepted into the more respectable social circles.

“Sometimes I wonder if many Sanders supporters are closet reactionaries and don’t know it yet.”

I support Sanders’ campaign. I do so because I see it as a way of promoting needed debate. It is also good to challenge Clinton’s sense of entitlement to the presidency. But in the end I might vote Green. I’m undecided. I just like how Sanders has been able to shake things up so far.

“The left is defined as groups on the ascent. People benefitting from the established order — CEOs, immigrants, government employees, and the managerial class.”

I can’t say, though, that I feel like I’m part of a group on the ascent. I am a government employee, but my position is about as low as you can get. I have no college degree and I don’t make much money, as I’m only part time. I don’t particularly feel like I’m receiving any immense benefit from the established order, at least no more than the average American.

“A lot of suburban and rural whites have a lot to lose by the way things are going. In one possible political realignment in the future, they could be on the same side — the right.”

I see that as a separate issue. Many other realignments may form in the future, such as between various non-black minorities and whites, especially in terms of the growing Hispanic population. How that all settles out would be speculation.

* * *

What interests me about Williams is that he held to a view similar to Lockean land rights. This was before Locke was even born. I don’t know if the idea was just in the air or where it might have originated. I’m not sure why Locke gets credit for it. It is sad that this philosophical and legal justification came to be used to take Native American land away, when for Williams it was meant to protect Native American rights.

He was an interesting guy, way ahead of his time. I liked how he went to convert the Native Americans and came away convinced that their society was superior to that of the neighboring white settlers. He seemingly gave up on organized religion. He also took religious freedom much further than Locke ever did.

“I’m glad you brought up Roger Williams, because I definitely view progressivism, with its moral self-certainty, as a kind of secular Puritanism.”

That is at least partly true. I might broaden it a bit.

I see progressivism as largely a product of dissenter religions—not just Puritans, but also Quakers, Anabaptists, Pietists, Huguenots, etc. These were people who were tired of religious persecution and religious wars. I’d include Samuel de Champlain in this category, similar to someone like Roger Williams.

I’m most familiar with the Quakers. Having read about John Dickinson, I was fascinated by their separate tradition of living and evolving constitutionalism as a pact of a people with God, not a piece of paper. That is not unlike how many liberals and progressives still like to interpret the US Constitution, minus the God part.

“But Locke, while not a progressive, nor a democrat, brings the conceptual heft.”

I don’t necessarily disagree. I’m not sure how to categorize Locke. He did formalize many ideas and made them useful for the purposes of new laws and constitutions.

I have come to the view that Spinoza was important as well. Someone like Jefferson probably was familiar with Spinoza, but I don’t know how influential his ideas were in the English-speaking world. There were large non-English populations in the American colonies (some colonies were even a majority non-English, such as Pennsylvania). Besides dissenter religions, I couldn’t say what else non-English Europeans brought with them.

“It is a complete “Captain Picard” theory of man, strutting about the galaxy, pleading with everyone to put their irrational biases aside and just be reasonable.”

That might be what differentiated Locke from the likes of Williams and Penn. Religious dissenters weren’t so obsessed with reason in this manner. I suspect that Paine inherited some of this earlier tradition. Paine’s deism wasn’t just about being rational but about knowing God directly, a very Quaker attitude. Paine, besides having a Quaker father, spent two influential periods of his life in a dissenter Puritan town and in Quaker Pennsylvania. Paine’s common sense could relate to his Quaker style of plain speech; it’s about a directness of knowing and communicating. It seems different from how you describe Locke.

“Out of Locke, one gets the instrumental nature of the state, disinterested power, the presumption of liberty when making trade-offs, popular sovereignty, and even government intervention for the public good, providing it meets a threshold of justification.”

In the non-Lockean traditions of dissenter religions and Spinozism, I sense another kind of attitude. It’s not clear to me all that distinguishes them.

Williams definitely had a live-and-let-live attitude, a proto-liberal version of can’t-we-all-just-get-along. He didn’t want war, an oppressive government, or anyone telling anyone else how to live. Instead of banning, imprisoning, or torturing Quakers as the Puritans did, he invited them to public debate—for the time, a radical advocacy of free speech. He expressed many modern liberal and progressive values before almost anyone else in the colonies.

Along these lines, Penn later created the first tolerant multicultural colony in America. Franklin, who was a child when Penn died, complained about the German majority that refused to assimilate. This multiculturalism led to a strong democratic culture.

“Liberals today write books like “Moral Politics” and writers like Dworkin think the Constitution should be interpreted in a moral spirit.”

That moralistic attitude would definitely be a result of dissenter religions. It also would relate to the Constitution being a living document.

“This is why a liberal like Spencer claimed that in reactionary thought, government resides in the “very soul of its system.” Spencer dreamed of a non-coercive world — morality is supposedly prior to government — while conservatism is about borders, culture, hierarchies, identity, etc.”

That is interesting. I’m not familiar with Spencer.

“Even in the United States, the biggest fans of free trade, limited government, and deregulation were southern slavers. The cultural inertia remains. It isn’t an accident that Clinton and Gore, both pimping for NAFTA, are from the south.”

That fits into Locke’s influence. He wrote or co-wrote the constitution for the Carolinas colony. This Southern classical liberalism is, of course, what today we call conservatism—an ideologically mixed bag. But it also shaped Clinton’s New Democrats, which partly returned the Democratic Party to its Southern roots. The early Democratic Party was weakest in New England.

“Liberalism has always been a top-down movement, usually spearheaded by university professors.”

There has also always been a working-class liberalism, often a mix of progressivism, populism, and moral reformism. It’s harder to identify this tradition because the people who have held it weren’t and aren’t those with much power or voice.

The revolutionary era began as a bottom-up movement, a class-based restlessness about not only distant British rule but also the local ruling elite. It was the process of Renaissance and Enlightenment ideas spreading among the dirty masses. Paine was so influential for the very reason that he could be understood by the most uneducated person. The upper-class so-called founders only joined the revolution once it became clear it wasn’t going away.

“If anything, liberalism is aristocratic and Puritan in temperament, an attempt to improve the perceived immorality of rowdy, sinful, shameless, vulgar people.”

There were those like the Quakers and Baptists as well. People of this other strain of liberalism hated haughty Puritanism and aristocracy. I wouldn’t discount this aspect, as this bottom-up liberal tradition has been a powerful force in American society and politics.

* * *

I’ve recently been reading about Abraham Lincoln. I was specifically curious to learn more about his having been influenced by Thomas Paine.

Lincoln was born at the end of Thomas Jefferson’s presidency, only months before Paine’s death. Much later, Jefferson and Adams died when Lincoln was 17 years old. Lincoln read many of the writings of the founders and others from the revolutionary era, including a number of radical thinkers. He was very much a child of the Enlightenment, even embracing a rational irreligiosity with a deistic bent. His mind was preoccupied with the founding generation.

I find interesting the contrast between Lincoln and Paine. Lincoln became a mainstream professional politician, something Paine never would have done. Paine, even with his desire to moderate extremes, was a radical through and through. Lincoln ultimately mistrusted radicalism and had no desire for a second revolution. The government, in his mind, represented the public good. Paine, on the other hand, had a more palpable sense of the people as something distinct from particular governments.

Another difference seems to be related to their respective religious upbringings. They both held progressive views, but their motivations came from different sources.

Lincoln admitted to being a fatalist and that this came from his Baptist childhood with its Calvinist predestination. This fed into his melancholy and sense of doom, oddly combined with a whiggish view of history (i.e., a moral arc). The divine, portrayed in the light of Enlightenment deism, was an almost brutal force of nature that forced moral progress, decimating humans in its wake. Lincoln believed that individuals were helpless pawns, facing a dual fate of inborn character and cosmic forces. The Civil War was the perfect stage for Lincoln’s fatalistic drama of transformation through death and suffering.

Paine had so much more to be melancholy about. He saw one of his childhood friends, convicted of a petty crime, hanged from a scaffold that could be seen from his home. His first wife and child died. His second marriage ended in divorce. He spent many years struggling financially, sometimes unemployed and homeless. He almost died from sickness on his way to the American colonies. Yet, unlike Lincoln, Paine seemed to have an optimistic bent to his nature. He was a dreamer, the opposite of Lincoln’s cold pragmatism. I suspect this at least partly has to do with how much Paine was influenced by dissenter religions, most especially the positive vision of Quakerism, where God is seen as a friend to humanity.

The two represent different strains of Anglo-American progressivism, neither of which is particularly Lockean in mindset. In today’s politics, I’m not sure there is much room for either Lincoln or Paine. Their worldviews are almost alien to the contemporary mind. Politics has become so mechanistic and government so bureaucratic. There isn’t any room left for the vast visions of old school varieties of progressivism. Maybe that is why Trump is so appealing. He brings drama back into politics, no matter how superficial and petty that drama is.

* * *

I follow much of what you say. You describe the gist of the dominant strains of American liberalism and progressivism. But I keep thinking about origins. You wrote that,

“Locke invented liberalism: reasonable citizens updating public policy through reasonableness without resorting to terrorism.”

Did Locke really invent liberalism? To be specific, did he invent what you describe above as liberalism? To Locke, who counted as a citizen, specifically a reasonable citizen?

He had no problem writing or helping to write the constitution for a colony whose economy was dependent on slavery—in fact, a colony where the majority of the population was enslaved. He also didn’t support religious freedom for all, but only for certain religious groups and definitely not for heretics and atheists.

By reasonable citizens, would he have simply meant white male adults who were propertied and adherents of particular acceptable religions? Or did he think peasants, indentured servants, slaves, and indigenous people should be considered part of the reasonable citizenry? The reasonable citizens among the ruling elite and upper classes in the British Empire, including in the colonies, didn’t mind resorting to terrorism. Lockean land rights were even used as justification for taking away the land of various indigenous people. All of colonialism was built on violence, terrorism even, and Locke didn’t seem too bothered by that.

Was Locke genuinely praising reasonableness any more than previous thinkers? Didn’t those with wealth and power always think of themselves as reasonable? I’m sure the highly educated elite in the Roman Empire also thought of themselves as reasonable citizens maintaining order reasonably in their reasonable republic. The rhetoric of a reasonable citizenry goes back to the ancient world, e.g., classical Greece.

What was entirely new that Locke was bringing to the table? As I pointed out, even Lockean land rights as a theory preceded Locke, such as with Roger Williams. Others had also previously argued for social contract theory and against divine sanction, such as Thomas Hobbes. Many of these kinds of ideas had been discussed for generations, centuries, or even millennia—consider Giordano Bruno’s views on science and religion or consider how some trace liberalism back to Epicurus. What made Lockean thought unique? Was it how these ideas were systematized?

Also, what do you think about Benedict Spinoza? Some think Locke was influenced by him. Spinoza began writing long before Locke did. And Locke spent time in Spinoza’s Netherlands, during a time when Spinoza’s work was well known among the type of people Locke associated with. Locke did most of his writing in the Netherlands and in the period following. Some of Spinoza’s ideas would likely have resonated with and influenced Locke, specifically Spinoza’s advocacy of free speech, religious tolerance, separation of church and state, republicanism, etc.

There is always the argument as well that Spinoza and Locke represent separate strains of the Enlightenment, one radical and the other reactionary or moderate. Do you agree with this argument? Or do you prefer the view of there being a single Enlightenment and hence a single Enlightenment basis of mainstream liberalism? Do you think Spinoza had much of any influence in early America, either directly or indirectly? If so, can a Spinozistic element be detected in American political thought?

A number of people argue for an influence, e.g., “Nature’s God.” For example, Spinoza’s collected works were in Thomas Jefferson’s library. Thomas Paine was likely familiar with Spinoza’s ideas, either from reading him or through those around him who had read Spinoza. One can sense Spinozism in deism and maybe in Romanticism, Transcendentalism, Theosophy, New Age spirituality, and New Thought Christianity. Spinoza’s panentheism has aspects of unitarianism and universalism, both of which have been influential over American history—and so maybe it was incorporated into the Unitarian-Universalist tradition. I could see even Quakerism, or more mainstream Christianity, being influenced.

Plus, there is someone like Algernon Sidney. I don’t know much about him. He doesn’t get as much attention from popular works, at least here in the US. From what I can gather, his views were partly in line with Spinoza. Some other related early Enlightenment thinkers are Conyers Middleton and Henry St John, 1st Viscount Bolingbroke.

Your comment got me thinking about all of this. I decided to do a web search. Here are a few things that came up (some that I’m familiar with and others new to me):

Radical Enlightenment
by Jonathan Israel

Spinoza and the Rise of Liberalism
by Lewis Samuel Feuer

Nature’s God
by Matthew Stewart

New Netherland and the Dutch Origins of American Religious Liberty
by Evan Haefeli

The Island at the Center of the World
by Russell Shorto

http://jeffersonandspinoza.blogspot.com/

http://www.thomaspaine.us/pdf/paine_spinoza_bisheff.pdf

https://en.wikipedia.org/wiki/The_Age_of_Reason#Paine.27s_intellectual_debts

https://larvalsubjects.wordpress.com/2008/11/19/spinoza-virtue-and-american-ideology/

https://www.bostonglobe.com/arts/2014/07/04/questioning-america-christian-roots/XVNKjkViIzncq9Rr9T7DMM/story.html

https://www.goodreads.com/work/quotes/25993801-nature-s-god-the-heretical-origins-of-the-american-republic

http://www.latimes.com/entertainment/la-ca-jc-matthew-stewart-20140629-story.html

http://scholarworks.gsu.edu/cgi/viewcontent.cgi?article=1001&context=philosophy_hontheses

http://www.boston.com/news/globe/editorial_opinion/oped/articles/2004/09/14/americas_jewish_founding_father/

http://opinionator.blogs.nytimes.com/2012/02/05/spinozas-vision-of-freedom-and-ours/

http://www.readperiodicals.com/201009/2131675381.html

On Being Strange

The human mind is fascinating. Did you know that? I thought I should mention it, just in case.

The capacity of the human mind leads in various directions. Many have wondered what psychiatric conditions say not just about those who are ‘afflicted’ but about human nature in general.

Take schizophrenia, which is always a popular topic, as it is fairly common. Schizophrenia includes several types of experience that get me thinking.

There is the oceanic feeling that is typical, something schizophrenics share with many mystics, meditation practitioners, and anyone who has imbibed psychedelics. It is a loss of boundaries, or rather a fluidity between self and other.

This is part of a generally fluid way of experiencing reality. Schizophrenics often think others can hear their thoughts and that they can hear the thoughts of others. It also goes along with hearing voices, especially command hallucinations. Instead of thinking ‘I will do such and such,’ they hear ‘You will do such and such’.

This is where we touch upon the theories of Julian Jaynes and Iain McGilchrist. If we take the ancients at their word, we have to conclude that command hallucinations were considered a normal experience. Even today, otherwise normal people experience command hallucinations under extreme duress and stress. What if we all possess immense potential in how we can experience reality and identity? What might this mean for societies, in the ancient world and maybe in the future?

A different aspect is how schizophrenics view the world. People, objects, and concepts aren’t perceived as being individual. Rather, they are experienced as inseparable members of ever larger subclasses. This emphasizes a sense of larger wholeness beyond individuality.

Relatedly, Iain McGilchrist explains this in terms of hemisphere functioning (p. 51):

“At the same time it is the right hemisphere that has the capacity to distinguish specific examples within a category, rather than categories alone: it stores details to distinguish specific instances. The right hemisphere presents individual, unique instances of things and individual, familiar, objects, where the left hemisphere re-presents categories of things, and generic, non-specific objects. In keeping with this, the right hemisphere uses unique referents, where the left hemisphere uses non-unique referents. It is with the right hemisphere that we distinguish individuals of all kinds, places as well as faces. In fact it is precisely its capacity for holistic processing that enables the right hemisphere to recognise individuals. Individuals are, after all, Gestalt wholes: that face, that voice, that gait, that sheer ‘quiddity’ of the person or thing, defying analysis into parts.”

This isn’t just about schizophrenics. This difference between hemispheres exists in everyone, even if it doesn’t normally show so starkly as in psychiatric conditions.

In terms of bicameral societies, this makes me think that it isn’t an issue of there being no boundaries. It simply would be different and larger boundaries. Society itself, instead of the individual, would define self and reality. Individuality wouldn’t be the locus of experience and so individual perspective wouldn’t necessarily be understood as such, much less privileged as the basis of all else. This is shown in the odd examples throughout ancient literature where body parts are spoken of as if they had their own minds, their own thoughts and emotions.

This brings to mind a book I’ve been reading. It’s Evolution and Empathy by Milton E. Brener. He doesn’t reference either Jaynes or McGilchrist, but his thinking is in line with theirs. Brener discusses how the ancients apparently didn’t see spatial relationship between things as we moderns do. Closer and further objects lacked perspective, both being shown the same size. And multiple sides to a person or object would be shown simultaneously (e.g., all wheels of a wagon shown equally or different body parts shown from different angles).

Why did the ancients portray their world in such strange ways? And why do some people even today experience the world in strange ways that seem to match aspects of what the ancients portrayed? Maybe we are all a bit stranger than we realize.

* * *

Here are a few previous posts of mine:

Radical Human Mind: From Animism to Bicameralism and Beyond
Making Gods, Making Individuals
Synesthesia, and Psychedelics, and Civilization! Oh My!
Developmental Differences: Preliminary Thoughts

Also, if this kind of thing fascinates you as it fascinates me, you might want to check out another blog:

Gary Williams’s Minds and Brains