What follows are further thoughts on the contrast between Germanic ‘freedom’ and Latin ‘liberty’ (see previous post: Libertarian Authoritarianism). The one is a non-legal construct of a more general culture, whereas the other is specifically a legal construct that was adapted to other ends, from philosophical ideal to spiritual otherworldliness as salvific emancipation. One important point is that liberty is specifically defined as not being a slave according to the law, whereas freedom is not directly or necessarily about slavery, since freedom is more about what you are than what you are not. Though Germanic tribes had slaves, they were not fundamentally slave-based societies in the legal sense and economic structure of the Roman Empire.
Furthermore, the distinction is partly that ‘freedom’, as a word and a concept, developed in a pre-literate society of Germanic tribes, from which it was imported into England and carried to the American colonies. This freedom was expressed in informal practices of proto-democracy such as out-of-doors politics where people met on the commons to discuss important matters, a tradition that originated in northern Europe. Latin, on the other hand, was a language of literacy and the Roman Empire was one of the most literate societies in the ancient world. Our understanding of ‘liberty’ is strongly influenced by surviving ancient texts written by the literate elite, but the more common sense of ‘freedom’ was, in the past, mostly passed on by the custom of spoken language.
On a related note, Hannah Arendt has been on my mind recently. She spent her early life in Germany, but, as a Jewish refugee from the Nazis, she came to hold strong opinions about these issues. By the time Arendt was growing up in 20th-century Germany, I’m not sure how much of the premodern Germanic notion of freedom remained, but maybe the underlying culture persisted. It meant, as noted, belonging to a free people; and that was part of the problem, as the Jews were perceived as not belonging. The old cultural meaning of freedom was not part of the formal laws of a large centralized nation-state with a court system. One was either free as a member or not, as freedom was defined more by sociocultural relationships and identity.
What was lacking was the complex legalistic and political hierarchy of the Roman Empire, where there were all kinds of nuances, variations, and complexities involving one’s sociopolitical position. Being a Roman slave or a Roman citizen (or something in between), as a legal status, was primarily defined by one’s relationship to the state. Liberty was also an economic matter that signified one owned oneself, as opposed to being owned by another. The metaphor of ownership was not a defining feature of Germanic freedom.
The problem the Jewish people had with the Nazis was a legal issue. The civil rights they once possessed as German citizens suddenly were gone. The civil rights, Arendt argued, that the government gives could likewise be taken away by the government. Something else was required to guarantee and protect human value and dignity. Maybe that has to do with a culture of trust, what she felt was lacking, or something related to it. The Nazis, though, maybe had their own culture of trust, even if Jews were not in their circle of trust. Mere legalities such as civil rights were secondary expressions of culture, rather than culture being shaped by a legal system and its legalistic traditions and mindset.
Arendt may never have considered the difference between liberty and freedom. It would’ve been interesting if she could have drawn upon the cultural history of the ancient Germanic tradition of freedom as community membership, which resonates with the older worldview of a commons. Liberty, originating within a legalistic mindset, has no greater authority to appeal to outside of law, be it actual law (the state) or law as metaphor (natural law). Even invoking natural law, as the Stoics did, can be of limited power; but it was used with greater force when wielded by radical-minded revolutionaries to challenge human law.
A deeper understanding of culture is what is missing, both its benefits and its harms. Maybe the Nazis were going by that culture of freedom, and the Jews, as a perceived different culture, simply did not belong and so were deemed a threat. In a culture demanding a sense of belonging to a shared identity, difference could not be tolerated and diversity not allowed. Certain kinds of legalistic systems, on the other hand, can incorporate multiculturalism, as seen with the Roman Empire and Napoleon’s French Empire, the military of the latter having consisted of soldiers who were primarily non-French. One can legally have citizenship and civil rights without having to share culture.
Also, it might be similar to how different ethnic groups can belong to the same larger Catholic Church, while Protestant traditions have more often been ethnic or nation specific. Catholicism, after all, developed directly out of Roman imperialism. It is true that Catholicism does have more of a legalistic structure to its hierarchy and practices. It was the legalistic view of buying indulgences as an economic contract with the Church as representative of a law-making God that was a major complaint in the Protestant Reformation. Protestants, concentrated in Northwestern Europe, preferred religion to have a more personal and communal expression that was concretely embodied in the congregation, not in a churchly institution of rules and rituals.
Like the Germans, the Scandinavians (and Japanese) have also emphasized the cultural approach. This common culture can allow for effective social democracies but also effective totalitarian regimes. Maybe that is why the American Midwest of Germanic and Scandinavian ancestry was the birthplace of the American Melting Pot, sometimes a cultural assimilation enforced by violent threat and punishment (English-only laws, the Second Klan, etc.); and indeed some early Midwestern literature portrayed the homogenizing force of oppressive conformity. To the Midwestern mind, American identity too often became a hegemony (even making claims upon Standard American English), but at the same time anyone who assimilated (in being allowed to assimilate) was treated as equal. Some have noted that American-style assimilation has made immigration less of a problem than has the more common practice of housing segregation in Europe.
So, it might not be an accident that Southerners have always been the most resistant to assimilating into mainstream American culture, while also being resistant to Northerners’ notions of equality. The hierarchical society of the South does to an extent allow populations to maintain their separate cultures and identities, but does so through a long history of enforced segregation and discrimination under racial laws. That is why there is still a separate black culture and a Scots-Irish culture of the lower classes, as distinct from the Cavalier culture of the ruling class — it’s separate and unequal; i.e., liberty. Assimilation is not an option, even if one wanted it, and the nature of the overall culture disinclines people from wanting it, as seen in how Southerners have continued to self-segregate long after segregation laws ended.
The Southern emphasis on individual liberty exists because it is generally the individual who relates to the state and its laws. The communal aspect of life, in the South, is found not so much in governance as in kinship and church. That is the difference in how, particularly in the Midwest, the Northern attitude tends to mix community and governance more closely, as the communal is seen as cutting across all groups perceived as belonging (maybe why kinship and church are less central in the Midwest; and related to the emphasis on the nuclear family first promoted by the Quakers from the Scandinavian-settled English Midlands). Ethnic culture in the Midwest has disappeared more quickly than in the South. But this greater communal identity also defines individuality as more cultural than legal.
Legalistic individuality, in the modern world, is very capitalist in nature or otherwise expressed in material forms. Liberty-minded individualism is about self-ownership and the propertied self. To own oneself means not to be owned by another. That is why Thomas Jefferson saw individual freedom in terms of yeoman farming, where an individual owned land, as property defined freedom. The more property one has, the more liberty one has as an individual, because one is independent by not being dependent on others but rather by making others dependent on oneself. This relates to how, during the colonial era, Southern governments granted settlers more land based on their number of dependents (family, indentured servants, and slaves).
That is why a business owner and others in the propertied class have greater individuality, in having the resources to act with less constraint, specifically in legal terms, as money and power have always gone hand in hand, particularly in the South. A factory owner with hundreds of employees has more liberty-minded individuality, in the way a plantation aristocrat with hundreds of slaves once did. Inequality before the legal system of power and privilege is what defines liberty. That explains how liberty has taken on such potent significance, as it has been tightly controlled as a rare commodity. Yet the state of dependence is more closely connected to liberty in general, as even aristocrats were trapped within societal expectations and the obligations of social role. Liberty is primarily about one’s legal status and formal position, which can be a highly structured individuality — maybe why Stoics associated the ideal of liberty with the love of fate in denying free will.
As African-American culture was shaped in the South, this legalistic mentality might be why the black movement for freedom emphasized legal changes of civil rights, initially fighting for the negative freedom (i.e., liberty) of not being actively oppressed. They wanted equality before the law, not equality as assimilated cultural membership — besides, whites were more willing to assent to the former than the latter. This same legalistic mentality might go to the heart of why Southerners are so offended by what they describe as illegal immigrants, whereas Northerners are more likely to speak of undocumented immigrants. This is typically described as ideological, conservatism versus liberalism, but maybe it has more to do with the regional divide between the legalistic mind and the cultural mind, where ideological identities have been shaped by regional cultures.
There is also a divide in the ideological perception of protest culture, a democratic phenomenon more common in the North than the South. To the Southern mind, there is an old fear about totalizing ideologies of the North, whereas their own way of life is thought of as a non-ideological tradition. Liberal rhetoric is grounded more in the culture of freedom as an all-encompassing worldview than in a coherent ideological system like that embodied in Southern legalism. This makes it more acceptable to challenge laws in the North, because culture informs the legal system more than the other way around; that is to say, law is secondary (consider the living, as opposed to legalistic, interpretation of the Constitution, which has its origins in Quaker constitutionalism; a constitution is a living agreement of a living generation, not the dead hand of law). That is maybe why there is conservative pushback against a perceived cultural force that threatens their sense of liberty, as the culture of freedom is more vague and pervasive in its influence. The conspiracy theory of Cultural Marxism is maybe the conservative’s attempt to grasp this liberal-minded culture that feels alien to them.
Liberty and freedom are part of an old Anglo-American dialogue, a creative flux of ideas. To lop off one side would be to cripple American society, and yet the two remain uneasy and unresolved in their relationship. Sadly, it’s typically freedom (i.e., positive freedom and sociocultural freedom) that gets short shrift, in how both the left and right too often become caught up in political battles of legalistic conflicts over civil rights and court cases, even to the point that the democratic process becomes legalistic in design, with the culture of freedom and democracy being cast aside. Consider the power that has grown within the Supreme Court to decide not only political but also economic and social issues, cultural and moral issues (e.g., abortion). As democracy has weakened and legalism has further taken hold, we’ve forgotten how freedom and democracy were always first and foremost about culture, with politics being the result, not the cause. The gut-level sense of freedom remains in the larger culture, but liberty-minded legalism has come to rule the government, as well as the economy. That is why there can be such clashes between police and protesters, as each embodies a separate vision of America; and this is why property damage is always featured in the corporate media’s narrative about protests.
The ideal of freedom has such power over the mind. It harkens back to an earlier way of living, a simpler form of society. Freedom as culture is a shared experience of shared identity, maybe drawing upon faint memories of what Julian Jaynes called the bicameral mind. When the Bronze Age was coming to an end, a new kind of rule-based legalism emerged, including laws literally etched into stone as never before seen. But the mentality that preceded it didn’t entirely disappear. We know of it in ourselves from a sense of loss and nostalgia we have a hard time pinpointing. That is why freedom is such a vague concept, as opposed to liberty’s straightforward definition. We are haunted by the promise of freedom, but without quite knowing what it would mean to be free. Our heavy reliance on systems of liberty is, in a sense, a failure to protect and express some deep longing within us, the simple but undeniable need to belong.
Daniel Everett is an expert on the Pirahã, although he has studied other tribal cultures. It’s unsurprising, then, to find him making the same observations in different books. One particular example (seen below) is about bodily form. I bring it up because it contradicts much of the right-wing and reactionary ideology found in genetic determinism, race realism, evolutionary psychology, and present-day HBD (as opposed to the earlier human biodiversity theory originated by Jonathan Marks).
From the second book below, the excerpt is part of a larger section where Everett responds to the evolutionary psychologist John Tooby, the latter arguing that there is no such thing as ‘culture’ and hence everything is genetic or otherwise biological. Everett’s use of the dark matter of the mind is his way of attempting to get at a more complex view. This dark matter is of the mind but also of the body. But he isn’t the only person to make such physiological observations.
The same point was emphasized in reading Ron Schmid’s Primal Nutrition. On page 57, there are some photographs showing healthy individuals from traditional communities. In one set of photographs, four Melanesian boys are shown who look remarkably similar. “These four boys lived on four different islands and were not related. Each had nutrition adequate for the development of the physical pattern typical of Melanesian males; thus their similar appearance.” This demonstrates non-determinism and non-essentialism.
* * *
How Language Began:
The Story of Humanity’s Greatest Invention by Daniel L. Everett pp. 220-221
Culture is the idea that patterns of being – such as eating, sleeping, thinking and posture – have been cultivated. A Dutch individual will be unlike the Belgian, the British, the Japanese, or the Navajo, because of the way that their minds have been cultivated – because of the roles they play in a particular set of values and because of how they define, live out and prioritise these values, the roles of individuals in a society and the knowledge they have acquired.
It would be worth exploring further just how understanding language and culture together can enable us better to understand each. Such an understanding would also help to clarify how new languages or dialects or any other variants of speech come about. I think that this principle ‘you talk like who you talk with’ represents all human behaviour. We also eat like who we eat with, think like those we think with, etc. We take on a wide range of shared attributes – our associations shape how we live and behave and appear – our phenotype. Culture affects our gestures and our talk. It can even affect our bodies. Early American anthropologist Franz Boas studied in detail the relationship between environment, culture and bodily form. Boas made a solid case that human body types are highly plastic and change to adapt to local environmental forces, both ecological and cultural.
Less industrialised cultures show biology-culture connections. Among the Pirahã, facial features range impressionistically from slightly Negroid to East Asian, to Native American. Differences between villages or families may have a biological basis, originating in different tribes merging over the last 200 years. One sizeable group of Pirahãs (perhaps thirty to forty) – usually found occupying a single village – are descendants of the Torá, a Chapakuran-speaking group that emigrated to the Maici-Marmelos rivers as long as two centuries ago. Even today Brazilians refer to this group as Torá, though the Pirahãs refer to them as Pirahãs. They are culturally and linguistically fully integrated into the Pirahãs. Their facial features are somewhat different – broader noses, some with epicanthic folds, large foreheads – giving an overall impression of similarity to East Asian features. ‡ Yet body dimensions across all Pirahãs are constant. Men’s waists are, or were when I worked with them, uniformly 27 inches (68 cm), their average height 5 feet 2 inches (157.5 cm) and their average weight 55 kilos (121 pounds). The Pirahã phenotypes are similar not because all Pirahãs necessarily share a single genotype, but because they share a culture, including values, knowledge of what to eat and values about how much to eat, when to eat and the like.
These examples show that even the body does not escape our earlier observation that studies of culture and human social behaviour can be summed up in the slogan that ‘you talk like who you talk with’ or ‘grow like who you grow with’. And the same would have held for all our ancestors, even erectus .
Dark Matter of the Mind:
The Culturally Articulated Unconscious by Daniel L. Everett Kindle Locations 1499-1576
Thus while Tooby may be absolutely right that to have meaning, “culture” must be implemented in individual minds, this is no indictment of the concept. In fact, this requirement has long been insisted on by careful students of culture, such as Sapir. Yet unlike, say, Sapir, Tooby has no account of how individual minds— like ants in a colony or neurons in a brain or cells in a body— can form a larger entity emerging from multi-individual sets of knowledge, values, and roles. His own nativist views offer little insight into the unique “unconscious patterning of society” (to paraphrase Sapir) that establishes the “social set” to which individuals belong.
The idea of culture, after all, is just that certain patterns of being— eating, sleeping, thinking, posture, and so forth— have been cultivated and that minds arising from one such “field” will not be like minds cultivated in another “field.” The Dutch individual will be unlike the Belgian, the British, the Japanese, or the Navajo, because of the way that his or her mind has been cultivated— because of the roles he or she plays in a particular value grouping, because of the ranking of values that he or she has come to share, and so on.
We must be clear, of course, that the idea of “cultivation” we are speaking of here is not merely of minds, but of entire individuals— their minds a way of talking about their bodies. From the earliest work on ethnography in the US, for example, Boas showed how cultures affect even body shape. And body shape is a good indication that it is not merely cognition that is effected and affected by culture. The uses, experiences, emotions, senses, and social engagements of our bodies forge the patterns of thought we call mind. […]
Exploring this idea that understanding language can help us understand culture, consider how linguists account for the rise of languages, dialects, and all other local variants of speech. Part of their account is captured in linguistic truism that “you talk like who you talk with.” And, I argue, this principle actually impinges upon all human behavior. We not only talk like who we talk with, but we also eat like who we eat with, think like those we think with, and so on. We take on a wide range of shared attributes; our associations shape how we live and behave and appear— our phenotype. Culture can affect our gestures and many other aspects of our talk. Boas (1912a, 1912b) takes up the issue of environment, culture, and bodily form. He provides extensive evidence that human body phenotypes are highly plastic and subject to nongenetic local environmental forces (whether dietary, climatological, or social). Had Boas lived later, he might have studied a very clear and dramatic case; namely, the body height of Dutch citizens before and after World War II. This example is worth a close look because it shows that bodies— like behaviors and beliefs— are cultural products and shapers simultaneously.
The curious case of the Netherlanders fascinates me. The Dutch went from among the shortest peoples of Europe to the tallest in the world in just over one century. One account simplistically links the growth in Dutch height with the change in political system (Olson 2014): “The Dutch growth spurt of the mid-19th century coincided with the establishment of the first liberal democracy. Before this time, the Netherlands had grown rich off its colonies but the wealth had stayed in the hands of the elite. After this time, the wealth began to trickle down to all levels of society, the average income went up and so did the height.” Tempting as this single account may be, there were undoubtedly other factors involved, including gene flow and sexual selection between Dutch and other (mainly European) populations, that contribute to explain European body shape relative to the Dutch. But democracy, a new political change from strengthened and enforced cultural values, is a crucial component of the change in the average height of the Dutch, even though the Dutch genotype has not changed significantly in the past two hundred years. For example, consider figures 2.1 and 2.2. In 1825, US male median height was roughly ten centimeters (roughly four inches) taller than the average Dutch. In the 1850s, the median heights of most males in Europe and the USA were lowered. But then around 1900, they begin to rise again. Dutch male median height lagged behind that of most of the world until the late ’50s and early ’60s, when it began to rise at a faster rate than all other nations represented in the chart. By 1975 the Dutch were taller than Americans. Today, the median Dutch male height (183 cm, or roughly just above six feet) is approximately three inches more than the median American male height (177 cm, or roughly five ten). Thus an apparent biological change turns out to be largely a cultural phenomenon.
To see this culture-body connection even more clearly, consider figure 2.2. In this chart, the correlation between wealth and height emerges clearly (not forgetting that the primary determiner of height is the genome). As wealth grew, so did men (and women). This wasn’t matched in the US, however, even though wealth also grew in the US (precise figures are unnecessary). What emerges from this is that Dutch genes are implicated in the Dutch height transformation, from below average to the tallest people in the world. And yet the genes had to await the right cultural conditions before they could be so dramatically expressed. Other cultural differences that contribute to height increases are: (i) economic (e.g., “white collar”) background; (ii) size of family (more children, shorter children); (iii) literacy of the child’s mother (literate mothers provide better diets); (iv) place of residence (residents of agricultural areas tend to be taller than those in industrial environments— better and more plentiful food); and so on (Khazan 2014). Obviously, these factors all have to do with food access. But looked at from a broader angle, food access is clearly a function of values, knowledge, and social roles— that is, culture.
Just as with the Dutch, less-industrialized cultures show culture-body connections. For example, Pirahã phenotype is also subject to change. Facial features among the Pirahãs range impressionistically from slightly Negroid to East Asian to American Indian (to use terms from physical anthropology). Phenotypical differences between villages or families seem to have a biological basis (though no genetic tests have been conducted). This would be due in part to the fact Pirahã women have trysts with various non-Pirahã visitors (mainly river traders and their crews, but also government workers and contract employees on health assistance assignments, demarcating the Pirahã reservation, etc.). The genetic differences are also partly historical. One sizeable group of Pirahãs (perhaps thirty to forty)— usually found occupying a single village— are descendants of the Torá, a Chapakuran-speaking group that emigrated to the Maici-Marmelos rivers as long as two hundred years ago. Even today Brazilians refer to this group as Torá, though the Pirahãs refer to them as Pirahãs. They are culturally and linguistically fully integrated into the Pirahãs. Their facial features are somewhat different— broader noses; some with epicanthic folds; large foreheads— giving an overall impression of similarity to Cambodian features. This and other evidence show us that the Pirahã gene pool is not closed. 4 Yet body dimensions across all Pirahãs are constant. Men’s waists are or were uniformly 83 centimeters (about 32.5 inches), their average height 157.5 centimeters (five two), and their average weight 55 kilos (about 121 pounds).
I learned about the uniformity in these measurements over the past several decades as I have taken Pirahã men, women, and children to stores in nearby towns to purchase Western clothes, when they came out of their villages for medical help. (The Pirahãs always asked that I purchase Brazilian clothes for them so that they would not attract unnecessary stares and comments.) Thus I learned that the measurements for men were nearly identical. Biology alone cannot account for this homogeneity of body form; culture is implicated as well. For example, Pirahãs raised since infancy outside the village are somewhat taller and much heavier than Pirahãs raised in their culture and communities. Even the body does not escape our earlier observation that studies of culture and human social behavior can be summed up in the slogan that “you talk like who you talk with” or “grow like who you grow with.”
In modern society, we are obsessed with identity, specifically in terms of categorizing and labeling. This leads to a tendency to essentialize identity, but this isn’t supported by the evidence. The only thing we are born as is members of a particular species, Homo sapiens.
What stands out is that other societies have entirely different experiences of collective identity. The most common distinctions, contrary to ethnic and racial ideologies, are those we perceive in the people most similar to us — the (too often violent) narcissism of small differences.
We not only project onto other societies our own cultural assumptions, but we also read anachronisms into the past as our way of rationalizing the present. But if we study closely what we know from history and archaeology, there isn’t any clear evidence for ethnic and racial ideology.
In Search of the Phoenicians by Josephine Quinn pp. 13-17
However, my intention here is not simply to rescue the Phoenicians from their undeserved obscurity. Quite the opposite, in fact: I’m going to start by making the case that they did not in fact exist as a self-conscious collective or “people.” The term “Phoenician” itself is a Greek invention, and there is no good evidence in our surviving ancient sources that these Phoenicians saw themselves, or acted, in collective terms above the level of the city or in many cases simply the family. The first and so far the only person known to have called himself a Phoenician in the ancient world was the Greek novelist Heliodorus of Emesa (modern Homs in Syria) in the third or fourth century CE, a claim made well outside the traditional chronological and geographical boundaries of Phoenician history, and one that I will in any case call into question later in this book.
Instead, then, this book explores the communities and identities that were important to the ancient people we have learned to call Phoenicians, and asks why the idea of being Phoenician has been so enthusiastically adopted by other people and peoples—from ancient Greece and Rome, to the emerging nations of early modern Europe, to contemporary Mediterranean nation-states. It is these afterlives, I will argue, that provide the key to the modern conception of the Phoenicians as a “people.” As Ernest Gellner put it, “Nationalism is not the awakening of nations to self-consciousness: it invents nations where they do not exist”. 7 In the case of the Phoenicians, I will suggest, modern nationalism invented and then sustained an ancient nation.
Identities have attracted a great deal of scholarly attention in recent years, serving as the academic marginalia to a series of crucially important political battles for equality and freedom. 8 We have learned from these investigations that identities are not simple and essential truths into which we are born, but that they are constructed by the social and cultural contexts in which we live, by other people, and by ourselves—which is not to say that they are necessarily freely chosen, or that they are not genuinely and often fiercely felt: to describe something as imagined is not to dismiss it as imaginary. 9 Our identities are also multiple: we identify and are identified by gender, class, age, religion, and many other things, and we can be more than one of any of those things at once, whether those identities are compatible or contradictory. 10 Furthermore, identities are variable across both time and space: we play—and we are assigned—different roles with different people and in different contexts, and they have differing levels of importance to us in different situations. 11
In particular, the common assumption that we all define ourselves as a member of a specific people or “ethnic group,” a collective linked by shared origins, ancestry, and often ancestral territory, rather than simply by contemporary political, social, or cultural ties, remains just that—an assumption. 12 It is also a notion that has been linked to distinctive nineteenth-century European perspectives on nationalism and identity, 13 and one that sits uncomfortably with counterexamples from other times and places. 14
The now-discredited categorization and labeling of African “tribes” by colonial administrators, missionaries, and anthropologists of the nineteenth and twentieth centuries provides many well-known examples, illustrating the way in which the “ethnic assumption” can distort interpretations of other people’s affiliations and self-understanding. 15 The Banande of Zaire, for instance, used to refer to themselves simply as bayira (“cultivators” or “workers”), and it was not until the creation of a border between the British Protectorate of Uganda and the Belgian Congo in 1885 that they came to be clearly delineated from another group of bayira now called Bakonzo. 16 Even more strikingly, the Tonga of Zambia, as they were named by outsiders, did not regard themselves as a unified group differentiated from their neighbors, with the consequence that they tended to disperse and reassimilate among other groups. 17 Where such groups do have self-declared ethnic identities, they were often first imposed from without, by more powerful regional actors. The subsequent local adoption of those labels, and of the very concepts of ethnicity and tribe in some African contexts, illustrates the effects that external identifications can have on internal affiliations and self-understandings. 18 Such external labeling is not of course a phenomenon limited to Africa or to Western colonialism: other examples include the ethnic categorization of the Miao and the Yao in Han China, and similar processes carried out by the state in the Soviet Union. 19
Such processes can be dangerous. When Belgian colonial authorities encountered the central African kingdom of Rwanda, they redeployed labels used locally at the time to identify two closely related groups occupying different positions in the social and political hierarchy to categorize the population instead into two distinct “races” of Hutus (identified as the indigenous farmers) and Tutsis (thought to be a more civilized immigrant population). 20 This was not easy to do, and in 1930 a Belgian census attempting to establish which classification should be recorded on the identity cards of their subjects resorted in some cases to counting cows: possession of ten or more made you a Tutsi. 21 Between April and July 1994, more than half a million Tutsis were killed by Hutus, sometimes using their identity cards to verify the “race” of their victims.
The ethnic assumption also raises methodological problems for historians. The fundamental difficulty with labels like “Phoenician” is that they offer answers to questions about historical explanation before they have even been asked. They assume an underlying commonality between the people they designate that cannot easily be demonstrated; they produce new identities where they did not to our knowledge exist; and they freeze in time particular identities that were in fact in a constant process of construction, from inside and out. As Paul Gilroy has argued, “ethnic absolutism” can homogenize what are in reality significant differences. 22 These labels also encourage historical explanation on a very large and abstract scale, focusing attention on the role of the putative generic identity at the expense of more concrete, conscious, and interesting communities and their stories, obscuring in this case the importance of the family, the city, and the region, not to mention the marking of other social identities such as gender, class, and status. In sum, they provide too easy a way out of actually reading the historical evidence.
As a result, recent scholarship tends to see ethnicity not as a timeless fact about a region or group, but as an ideology that emerges at certain times, in particular social and historical circumstances, and, especially at moments of change or crisis: at the origins of a state, for instance, or after conquest, or in the context of migration, and not always even then. 23 In some cases, we can even trace this development over time: James C. Scott cites the example of the Cossacks on Russia’s frontiers, people used as cavalry by the tsars, Ottomans, and Poles, who “were, at the outset, nothing more and nothing less than runaway serfs from all over European Russia, who accumulated at the frontier. They became, depending on their locations, different Cossack “hosts”: the Don (for the Don River basin) Cossacks, the Azov (Sea) Cossacks, and so on.” 24
Ancient historians and archaeologists have been at the forefront of these new ethnicity studies, emphasizing the historicity, flexibility, and varying importance of ethnic identity in the ancient Mediterranean. 25 They have described, for instance, the emergence of new ethnic groups such as the Moabites and Israelites in the Near East in the aftermath of the collapse of the Bronze Age empires and the “crystallisation of commonalities” among Greeks in the Archaic period. 26 They have also traced subsequent changes in the ethnic content and formulation of these identifications: in relation to “Hellenicity,” for example, scholars have delineated a shift in the fifth century BCE from an “aggregative” conception of Greek identity founded largely on shared history and traditions to a somewhat more oppositional approach based on distinction from non-Greeks, especially Persians, and then another in the fourth century BCE, when Greek intellectuals themselves debated whether Greekness should be based on a shared past or on shared culture and values in the contemporary world. 27 By the Hellenistic period, at least in Egypt, the term “Hellene” (Greek) was in official documents simply an indication of a privileged tax status, and those so labeled could be Jews, Thracians—or, indeed, Egyptians. 28
Despite all this fascinating work, there is a danger that the considerable recent interest in the production, mechanisms, and even decline of ancient ethnicity has obscured its relative rarity. Striking examples of the construction of ethnic groups in the ancient world do not of course mean that such phenomena became the norm. 29 There are good reasons to suppose in principle that without modern levels of literacy, education, communication, mobility, and exchange, ancient communal identities would have tended to form on much smaller scales than those at stake in most modern discussions of ethnicity, and that without written histories and genealogies people might have placed less emphasis on the concepts of ancestry and blood-ties that at some level underlie most identifications of ethnic groups. 30 And in practice, the evidence suggests that collective identities throughout the ancient Mediterranean were indeed largely articulated at the level of city-states and that notions of common descent or historical association were rarely the relevant criterion for constructing “groupness” in these communities: in Greek cities, for instance, mutual identification tended to be based on political, legal, and, to a limited extent, cultural criteria, 31 while the Romans famously emphasized their mixed origins in their foundation legends and regularly manumitted their foreign slaves, whose descendants then became full Roman citizens. 32
This means that some of the best-known “peoples” of antiquity may not actually have been peoples at all. Recent studies have shown that such familiar groups as the Celts of ancient Britain and Ireland and the Minoans of ancient Crete were essentially invented in the modern period by the archaeologists who first studied or “discovered” them, 33 and even the collective identity of the Greeks can be called into question. As S. Rebecca Martin has recently pointed out, “there is no clear recipe for the archetypal Hellene,” and despite our evidence for elite intellectual discussion of the nature of Greekness, it is questionable how much “being Greek” meant to most Greeks: less, no doubt, than to modern scholars. 34 The Phoenicians, I will suggest in what follows, fall somewhere in the middle—unlike the Minoans or the Atlantic Celts, there is ancient evidence for a conception of them as a group, but unlike the Greeks, this evidence is entirely external—and they provide another good case study of the extent to which an assumption of a collective identity in the ancient Mediterranean can mislead. 35
In all the exciting work that has been done on “identity” in the past few decades, there has been too little attention paid to the concept of identity itself. We tend to ask how identities are made, vary, and change, not whether they exist at all. But Rogers Brubaker and Frederick Cooper have pinned down a central difficulty with recent approaches: “it is not clear why what is routinely characterized as multiple, fragmented, and fluid should be conceptualized as ‘identity’ at all.” 1 Even personal identity, a strong sense of one’s self as a distinct individual, can be seen as a relatively recent development, perhaps related to a peculiarly Western individualism. 2 Collective identities, furthermore, are fundamentally arbitrary: the artificial ways we choose to organize the world, ourselves, and each other. However strong the attachments they provoke, they are not universal or natural facts. Roger Rouse has pointed out that in medieval Europe, the idea that people fall into abstract social groupings by virtue of common possession of a certain attribute, and occupy autonomous and theoretically equal positions within them, would have seemed nonsensical: instead, people were assigned their different places in the interdependent relationships of a concrete hierarchy. 3
The truth is that although historians are constantly apprehending the dead and checking their pockets for identity, we do not know how people really thought of themselves in the past, or in how many different ways, or indeed how much. I have argued here that the case of the Phoenicians highlights the extent to which the traditional scholarly perception of a basic sense of collective identity at the level of a “people,” “culture,” or “nation” in the cosmopolitan, entangled world of the ancient Mediterranean has been distorted by the traditional scholarly focus on a small number of rather unusual, and unusually literate, societies.
My starting point was that we have no good evidence for the ancient people that we call Phoenician identifying themselves as a single people or acting as a stable collective. I do not conclude from this absence of evidence that the Phoenicians did not exist, nor that nobody ever called her- or himself a Phoenician under any circumstances: Phoenician-speakers undoubtedly had a larger repertoire of self-classifications than survives in our fragmentary evidence, and it would be surprising if, for instance, they never described themselves as Phoenicians to the Greeks who invented that term; indeed, I have drawn attention to several cases where something very close to that is going on. Instead, my argument is that we should not assume that our “Phoenicians” thought of themselves as a group simply by analogy with models of contemporary identity formation among their neighbors—especially since those neighbors do not themselves portray the Phoenicians as a self-conscious or strongly differentiated collective. We should accept the gaps in our knowledge and fill the space with the stories that we can tell.
The stories I have looked at in this book include the ways that the people of the northern Levant did in fact identify themselves—in terms of their cities, but even more of their families and occupations—as well as the formation of complex social, cultural, and economic networks based on particular cities, empires, and ideas. These could be relatively small and closed, like the circle of the tophet, or on the other hand, they could, like the network of Melqart, create shared religious and political connections throughout the Mediterranean—with other Levantine settlements, with other settlers, and with local populations. Identification with a variety of social and cultural traditions is one recurrent characteristic of the people and cities we call Phoenician, and this continued into the Hellenistic and Roman periods, when “being Phoenician” was deployed as a political and cultural tool, although it was still not claimed as an ethnic identity.
Another story could go further, to read a lack of collective identity, culture, and political organization among Phoenician-speakers as a positive choice, a form of resistance against larger regional powers. James C. Scott has recently argued in The Art of Not Being Governed (2009) that self-governing people living on the peripheries and borders of expansionary states in that region tend to adopt strategies to avoid incorporation and to minimize taxation, conscription, and forced labor. Scott’s focus is on the highlands of Southeast Asia, an area now sometimes known as Zomia, and its relationship with the great plains states of the region such as China and Burma. He describes a series of tactics used by the hill people to avoid state power, including “their physical dispersion in rugged terrain, their mobility, their cropping practices, their kinship structure, their pliable ethnic identities . . . their flexible social structure, their religious heterodoxy, their egalitarianism and even the nonliterate, oral cultures.” The constant reconstruction of identity is a core theme in his work: “ethnic identities in the hills are politically crafted and designed to position a group vis-à-vis others in competition for power and resources.” 4 Political integration in Zomia, when it has happened at all, has usually consisted of small confederations: such alliances, he points out, are common but short-lived, and are often preserved in local place names such as “Twelve Tai Lords” (Sipsong Chutai) or “Nine Towns” (Ko Myo)—information that throws new light on the federal meetings recorded in fourth-century BCE Tripolis (“Three Cities”). 5
In fact, many aspects of Scott’s analysis feel familiar in the world of the ancient Mediterranean, on the periphery of the great agricultural empires of Mesopotamia and Iran, and despite all its differences from Zomia, another potential candidate for the label of “shatterzone.” The validity of Scott’s model for upland Southeast Asia itself —a matter of considerable debate since the book’s publication—is largely irrelevant for our purposes; 6 what is interesting here is how useful it might be for thinking about the mountainous region of the northern Levant, and the places of refuge in and around the Mediterranean.
In addition to outright rebellion, we could argue that the inhabitants of the Levant employed a variety of strategies to evade the worst excesses of imperial power. 7 One was to organize themselves in small city-states with flimsy political links and weak hierarchies, requiring larger powers to engage in multiple negotiations and arrangements, and providing the communities involved with multiple small and therefore obscure opportunities for the evasion of taxation and other responsibilities—“divide that ye be not ruled,” as Scott puts it. 8 A cosmopolitan approach to culture and language in those cities would complement such a strategy, committing to no particular way of doing or being or even looking, keeping loyalties vague and options open. One of the more controversial aspects of Scott’s model could even explain why there is no evidence for Phoenician literature despite earlier Near Eastern traditions of myth and epic. He argues that the populations he studies are in some cases not so much nonliterate as postliterate: “Given the considerable advantages in plasticity of oral over written histories and genealogies, it is at least conceivable to see the loss of literacy and of written texts as a more or less deliberate adaptation to statelessness.” 9
Another available option was to take to the sea, a familiar but forbidding terrain where the experience and knowledge of Levantine sailors could make them and their activities invisible and unaccountable to their overlords further east. The sea also offered an escape route from more local sources of power, and the stories we hear of the informal origins of western settlements such as Carthage and Lepcis, whether or not they are true, suggest an appreciation of this point. A distaste even for self-government could also explain a phenomenon I have drawn attention to throughout the book: our “Phoenicians” not only fail to visibly identify as Phoenician, they often omit to identify at all.
It is striking in this light that the first surviving visible expression of an explicitly “Phoenician” identity was imposed by the Carthaginians on their subjects as they extended state power to a degree unprecedented among Phoenician-speakers, that it was then adopted by Tyre as a symbol of colonial success, and that it was subsequently exploited by Roman rulers in support of their imperial activities. This illustrates another uncomfortable aspect of identity formation: it is often a cultural bullying tactic, and one that tends to benefit those already in power more than those seeking self-empowerment. Modern European examples range from the linguistic and cultural education strategies that turned “peasants into Frenchmen” in the late nineteenth century, 10 to the eugenic Lebensborn program initiated by the Nazis in mid-twentieth-century central Europe to create more Aryan children through procreation between German SS officers and “racially pure” foreign women. 11 Such examples also underline the difficulty of distinguishing between internal and external conceptions of identity when apparently internal identities are encouraged from above, or even from outside, just as the developing modern identity as Phoenician involved the gradual solidification of the identity of the ancient Phoenicians.
It seems to me that attempts to establish a clear distinction between “emic” and “etic” identity are part of a wider tendency to treat identities as ends rather than means, and to focus more on how they are constructed than on why. Identity claims are always, however, a means to another end, and being “Phoenician” is in all the instances I have surveyed here a political rather than a personal statement. It is sometimes used to resist states and empires, from Roman Africa to Hugh O’Donnell’s Ireland, but more often to consolidate them, lending ancient prestige and authority to later regimes, a strategy we can see in Carthage’s Phoenician coinage, the emperor Elagabalus’s installation of a Phoenician sun god at Rome, British appeals to Phoenician maritime power, and Hannibal Qadhafi’s cruise ship.
In the end, it is modern nationalism that has created the Phoenicians, along with much else of our modern idea of the ancient Mediterranean. Phoenicianism has served nationalist purposes since the early modern period: the fully developed notion of Phoenician ethnicity may be a nineteenth-century invention, a product of ideologies that sought to establish ancient peoples or “nations” at the heart of new nation-states, but its roots, like those of nationalism itself, are deeper. As origin myth or cultural comparison, aggregative or oppositional, imperialist and anti-imperialist, Phoenicianism supported the expansion of the early modern nation of Britain, as well as the position of the nation of Ireland as separate and respected within that empire; it helped to consolidate the nation of Lebanon under French imperial mandate, premised on a regional Phoenician identity agreed on between local and French intellectuals, but it also helped to construct the nation of Tunisia in opposition to European colonialism.
One expression of the misguided nature vs. nurture debate is how we understand our humanity. In wondering about the universality of Western views, we have already framed the issue in terms of Western dualism. The moment we begin speaking in specific terms, from mind to psyche, we’ve already smuggled in cultural preconceptions and biases.
Sabrina Golonka discusses several other linguistic cultures (Korean, Japanese, and Russian) in comparison to English. She suggests that dualism, even if variously articulated, underlies each conceptual tradition — a general distinction between visible and invisible. But all of those are highly modernized societies built on millennia of civilizational projects, from imperialism to industrialization. It would be even more interesting and insightful to look into the linguistic worldviews of indigenous cultures.
The Pirahã, for example, are linguistically limited to speaking only about what they directly experience or about what those they personally know have directly experienced. They don’t talk about what is ‘invisible’, whether within the human sphere or beyond it in the world, and as such they aren’t prone to theoretical speculation.
What is clear is that the Pirahã’s mode of perception and description is far different, even to the point that what they see is sometimes invisible to those who aren’t Pirahã. There is an anecdote shared by Daniel Everett: the Pirahã crowded on the riverbank pointing to a spirit they saw on the other side, but Everett and his family saw nothing. That casts doubt on the framework of visible vs. invisible. The Pirahã were fascinated by what becomes invisible, such as a person disappearing around the bend of a trail, although their fascination ended at that liminal point at the edge of the visible, not extending beyond it.
Another useful example is the Australian Aborigines. The Songlines were traditionally integrated with their sense of identity and reality, signifying an experience that is invisible within the reality tunnel of WEIRD society (Western, Educated, Industrialized, Rich, Democratic). Prior to contact, individualism as we know it may have been entirely unknown, for the Songlines express a profoundly collective sense of being in the world.
If any kind of dualism between visible and invisible did exist within the Aboriginal worldview, it more likely would have been on a communal level of experience. In their culture, ritual songs are learned and then what they represent becomes visible to the initiated, however this process might be made sense of within Aboriginal language. A song makes some aspect of the world visible, which is to invoke a particular reality and the beings that inhabit that reality. This is what Westerners would interpret as states of mind, but that is clearly an inadequate understanding of the fully immersive and embodied experience.
Western psychology has made non-Western experience invisible to most Westerners. There is the invisible we talk about within our own cultural worldview, what we perceive as known and familiar, no matter how intangible. But even more important is the unknown and unfamiliar that is so fundamentally invisible that we are incapable of talking about it. This doesn’t merely limit our understanding. Entire ways of being in the world are precluded by the words and concepts we use. Our sense of our own humanity is lesser for it; and as cultural languages go extinct, this state of affairs worsens, with the near-complete monocultural destruction of the very alternatives that most powerfully challenge our assumptions.
So, back to the mind and our current view of cognition. Cross-linguistic research shows that, generally speaking, every culture has a folk model of a person consisting of visible and invisible (psychological) aspects (Wierzbicka, 2005). While there is agreement that the visible part of the person refers to the body, there is considerable variation in how different cultures think about the invisible (psychological) part. In the West, and, specifically, in the English-speaking West, the psychological aspect of personhood is closely related to the concept of “the mind” and the modern view of cognition. But, how universal is this conception? How do speakers of other languages think about the psychological aspect of personhood? […]
In a larger sense, the fact that there seems to be a universal belief that people consist of visible and invisible aspects explains much of the appeal of cognitive psychology over behaviourism. Cognitive psychology allows us to invoke invisible, internal states as causes of behaviour, which fits nicely with the broad, cultural assumption that the mind causes us to act in certain ways.
To the extent that you agree that the modern conception of “cognition” is strongly related to the Western, English-speaking view of “the mind”, it is worth asking what cognitive psychology would look like if it had developed in Japan or Russia. Would textbooks have chapter headings on the ability to connect with other people (kokoro) or on feelings and morality (dusha) instead of on decision-making and memory? This possibility highlights the potential arbitrariness of how we’ve carved up the psychological realm – what we take for objective reality is revealed to be shaped by culture and language.
I recently wrote a blog about a related topic. In Pāli and Sanskrit – ancient Indian languages – there is no collective term for emotions. They do have words for all of the basic emotions and some others, but they do not think of them as a category distinct from thought. I have yet to think through all of the implications of this observation but clearly the ancient Indian view on psychology must have been very different to ours.
Very interesting post. Have you looked into Julian Jaynes’s strange and marvelous book “The Origin of Consciousness in the Breakdown of the Bicameral Mind”? Even if you regard bicameralism as iffy, there’s an interesting section on the creation of metaphorical spaces — body-words that become “containers” for feelings, thoughts, attributes etc. The culturally distinct descriptors of the “invisible” may be related to historical accidents that vary from place to place.
Also relevant might be Lakoff and Johnson’s “Philosophy in the Flesh” looking at, in their formulation, the inevitably metaphorical nature of thought and speech and the ultimate grounding of (almost) all metaphors in our physical experience from embodiment in the world.
When I first came upon the argument that “culture is a racial construct” last year, I was pretty horrified. I saw it as a regurgitated Nazi talking point that was clearly false.
But like other longtime taboo topics such as HBD, eugenics, and White identity, I’ve seen this theory pop up over the past year in some shocking places. First, a scientific magazine revealed that orcas’ genetics are affected by culture and vice versa. Then, I started seeing normies discuss this talking point in comment sections in the Wall Street Journal and even the NY Times.
Finally, a liberal academic has thrown himself into the discussion. Bret Weinstein, a Jewish Leftist whom most people here know as the targeted professor of the Marxist insanity at Evergreen State College, posted this tweet yesterday: “Sex is biological. Gender is cultural. Culture is biological,” and then this one today: “Culture is as adaptive, evolutionary and biological as genes. You’re unlikely to accept it. But if you did you’d see people with 10X clarity.”
This is a pretty remarkable assertion coming from someone like Bret Weinstein. I wonder if the dam will eventually break and rather than being seen as incredibly taboo, this theory will be commonly accepted. If so, it’s probably the best talking point you have for America to prioritize its demographics.
What is so shocking?
This line of thought, taken broadly, has been developing and taking hold in the mainstream for more than a century. Social constructionism was popularized and spread by the anthropologist Franz Boas. I don’t think this guy grasps what this theory means or its implications. That “culture is a racial construct” goes hand in hand with race being a cultural construct, which is to say we understand the world and our own humanity through the lens of ideology, in the sense used by Louis Althusser. As applied to the ideology of pseudo-scientific race realism and gender realism, claims of linear determinism by singular and isolated causal factors are meaningless, because research has shown that all aspects are intertwined factors in how we develop and who we become.
Bret Weinstein makes three assertions: “Sex is biological. Gender is cultural. Culture is biological.” I don’t know his ideological position. But he sounds like a genetic determinist, although this is not clear since he also claims that his assertions have nothing to do with group selection (a standard reductionist approach). Anyway, to make these statements accurate, other statements would need to be added: that biology is epigenetics, epigenetics is environment, and environment is culture. We’d have to throw in other things as well, from the biome to linguistic relativism. To interpret Weinstein generously, without taking his use of ‘is’ too literally: many things are many other things, or rather are closely related, if by that we mean that multiple factors can’t be reduced to one another in that they influence each other in multiple directions and through multiple pathways.
Recent research has taken this even further in showing that neither sex nor gender is binary *, as genetics and its relationship to environment, epigenetics, and culture is more complex than was previously realized. It’s far from uncommon for people to carry the genetics of both sexes, even multiple sets of DNA. It has to do with diverse interlinking and overlapping causal relationships. We aren’t all that certain at this point what ultimately determines the precise process of conditions, factors, and influences in how and why any given gene expresses or not, and how and why it expresses in a particular way. The purpose, or perhaps lack of purpose, of most of the human genome remains entirely unknown, although the junk DNA theory has become highly contested. And most of the genetic material in the human body is non-human: bacteria, viruses, symbiotes, and parasites. The point is that, scientifically speaking, causation is a lot harder to prove than many would like to admit.
The second claim by Weinstein is even more interesting: “Culture is as adaptive, evolutionary and biological as genes.” That could easily be interpreted in alignment with Richard Dawkins’s theory of memetics. The argument is that there are cultural elements that act and spread similarly to genes, like a virus replicating. With the growing research on epigenetics, the microbiome, parasites, and such, the mechanisms for such a thing become more plausible. We are treading in unexplored territory when we combine memetics not just with culture but also with the extended mind and the extended phenotype. Linguistic relativism, for example, has shown that cultural influences can operate through non-biological causes — in that bilingual individuals with the same genetics will think, perceive, and act differently depending on which language they are using. Yes, culture is adaptive, whether or not in the way Weinstein believes.
The problems in this area only occur when one demands a reductionist conclusion. The simplistic thinking of reductionism appeals to the limits of the human mind. But reality has no compulsion to conform to the human mind. Reality is irreducible. And so we need a scientific understanding that deals with, rather than dismisses, complexity. Indeed, the tide is turning.
Intersex people have been treated in different ways by different cultures. Whether or not they were socially tolerated or accepted by any particular culture, the existence of intersex people was known to many ancient and pre-modern cultures and legal systems, and numerous historical accounts exist.
In different cultures, a third or fourth gender may represent very different things. To Native Hawaiians and Tahitians, Māhū is an intermediate state between man and woman, or a “person of indeterminate gender”. The traditional Diné of the Southwestern US acknowledge four genders: feminine woman, masculine woman, feminine man, masculine man. The term “third gender” has also been used to describe hijras of India who have gained legal identity, fa’afafine of Polynesia, and sworn virgins of Albania.
The Greek historian Diodorus Siculus, writing in the first century BCE, described Hermaphroditus as “born with a physical body which is a combination of that of a man and that of a woman”, and as having supernatural properties.
In European societies, Roman law, post-classical canon law, and later common law referred to a person’s sex as male, female, or hermaphrodite, with legal rights as male or female depending on the characteristics that appeared most dominant. The 12th-century Decretum Gratiani states that “Whether an hermaphrodite may witness a testament, depends on which sex prevails”. The foundation of common law, the 17th-century Institutes of the Lawes of England, described how a hermaphrodite could inherit “either as male or female, according to that kind of sexe which doth prevaile.” Legal cases have been described in canon law and elsewhere over the centuries.
In some non-European societies, sex or gender systems with more than two categories may have allowed for other forms of inclusion of both intersex and transgender people. Such societies have been characterized as “primitive”, while Morgan Holmes states that subsequent analysis has been simplistic or romanticized, failing to take account of the ways that subjects of all categories are treated.
During the Victorian era, medical authors introduced the terms “true hermaphrodite” for an individual who has both ovarian and testicular tissue, “male pseudo-hermaphrodite” for a person with testicular tissue, but either female or ambiguous sexual anatomy, and “female pseudo-hermaphrodite” for a person with ovarian tissue, but either male or ambiguous sexual anatomy. Some later shifts in terminology have reflected advances in genetics, while other shifts are suggested to be due to pejorative associations.
So when and why did doctors move from one sex to two? Many scholars set the change during a time known as the “long 18th century”: 1688-1815. This time period covers the Age of Enlightenment in Europe and the period of political revolution that followed. It was during this time that many ideas about man’s inalienable rights were conceived.
Before the long 18th century, Western societies operated under feudalism, which presupposes that people are born unequal. Kings were better than lords, who were better than peasants, and this sense of betterness extended to their physical bodies. “Aristocrats have better bodies, bodies are racialized,” says historian Thomas Laqueur, summing up the idea. “The body is open and fluid and the consequence of a hierarchy in heaven.” The specifics of this corruptible flesh were of less consequence than our souls. We were all servants in the Kingdom of Heaven, which set the hierarchy on earth.
This idea of a natural hierarchy was challenged by the thinkers of the Enlightenment. We see it in the Declaration of Independence: All men are created equal. But it was also understood that women and people of color couldn’t possibly have been created equal. Therefore, it became necessary to conceive of innate biological differences between men and women, white and black. “As political theorists were increasingly invoking a potentially egalitarian language of natural rights in the 18th century, ‘woman’ had to be defined as qualitatively different from men in order that political power would be kept out of women’s reach,” writes Karen Harvey in Cambridge University Press’s Historical Journal.
Sexual difference becomes much more explicit in medical texts once women’s anatomy gets its own words. […] What follows in the long 18th century and into the Victorian era is a solidifying of masculine and feminine as diametrically opposed. When doctors followed the humoral system, it was understood that everyone was a little hot, a little cold, a little country, a little rock and roll. Women were frequently represented as hornier than men. But once everyone has to be shunted into a binary, women are rendered passive and disinclined to sex. “Historically, women had been perceived as lascivious and lustful creatures,” writes Ruth Perry in the amazingly titled academic paper “Colonizing the Breast.” “[B]y the middle of the eighteenth century they were increasingly reimagined as belonging to another order of being: loving but without sexual needs.” Men are horny, therefore women must be the opposite of horny.
Nonbinary and genderfluid people of the 21st century can gain some comfort from the notion that sex and gender divisions weren’t always so rigid. But that understanding is nevertheless tinged with the knowledge that the sexes, fluid though they were, were still ranked. Someone was still coming out a winner, and yet again it was whoever was most masculine.
Several studies have been conducted looking at the gender roles of intersex children.
One such study looked at female infants with congenital adrenal hyperplasia, who had excess male hormone levels but were thought to be female and raised as such by their parents. These girls were more likely to express masculine traits.
Another study looked at 18 infants with the intersex condition 5-alpha reductase deficiency and XY chromosomes who were assigned female at birth. By adulthood, only one individual maintained a female role; all the others were stereotypically male.
In a third study, 14 male children born with cloacal exstrophy were assigned female at birth, including through intersex medical interventions. Upon follow-up between the ages of 5 and 12, eight of them identified as boys, and all of the subjects had at least moderately male-typical attitudes and interests.
Dr. Sandra Lipsitz Bem was a psychologist who developed gender schema theory, combining aspects of social learning theory and the cognitive-development theory of sex role acquisition, to explain how individuals come to use gender as an organizing category in all aspects of their lives. In 1971, she created the Bem Sex-Role Inventory to measure how well an individual conformed to a traditional gender role, characterizing those tested as having a masculine, feminine, androgynous, or undifferentiated personality. She believed that through gender-schematic processing, a person spontaneously sorts attributes and behaviors into masculine and feminine categories, and that individuals therefore process information and regulate their behavior based on whatever definitions of femininity and masculinity their culture provides.
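Bem’s four categories are often derived from a median split on separate masculinity and femininity subscale scores (one of several scoring methods that have been used with the inventory; the item scores and medians below are purely illustrative, not Bem’s actual norms). A minimal sketch of that classification logic:

```python
from statistics import median

def bsri_classify(masc_score, fem_score, masc_median, fem_median):
    """Median-split classification in the style of the Bem Sex-Role Inventory.

    Scores are mean self-ratings (e.g., on a 1-7 Likert scale) over the
    masculine and feminine item subscales; medians come from the sample.
    """
    high_masc = masc_score > masc_median
    high_fem = fem_score > fem_median
    if high_masc and high_fem:
        return "androgynous"
    if high_masc:
        return "masculine"
    if high_fem:
        return "feminine"
    return "undifferentiated"

# Illustrative sample: (masculinity, femininity) mean ratings
sample = [(5.2, 3.1), (3.0, 5.5), (5.8, 5.9), (2.9, 3.3)]
m_med = median(m for m, _ in sample)  # sample median masculinity score
f_med = median(f for _, f in sample)  # sample median femininity score

labels = [bsri_classify(m, f, m_med, f_med) for m, f in sample]
```

High on both scales yields “androgynous”, low on both yields “undifferentiated”, which is what distinguishes Bem’s four-way scheme from a single masculine–feminine axis.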
While there are differences between the sexes in average capabilities of various kinds (e.g., better average balance in females, or greater average physical size and endurance in males), the capabilities of some members of one sex will fall within the range of capabilities needed for tasks conventionally assigned to the other sex. Eve Shapiro, author of Gender Circuits, explains that “gender, like other social categories, is both a personal identity and a cultural set of behaviors, beliefs and values.” […]
Ideas of appropriate behavior according to gender vary among cultures and eras, although some aspects receive more widespread attention than others. R.W. Connell, in Men, Masculinities and Feminism, claims:
There are cultures where it has been normal, not exceptional, for men to have homosexual relations. There have been periods in ‘Western’ history when the modern convention that men suppress displays of emotion did not apply at all, when men were demonstrative about their feeling for their friends. Mateship in the Australian outback last century is a case in point.
There are huge regional differences in attitudes towards appropriate gender roles. In the World Values Survey, respondents were asked if they thought that wage work should be restricted to men only in the case of a job shortage: in Iceland, the proportion that agreed with the proposition was 3.6%, while in Egypt it was 94.9%.
Attitudes have also varied historically. In Europe during the Middle Ages, for example, women were commonly associated with roles related to medicine and healing. With the rise of witch-hunts across Europe and the institutionalization of medicine, these roles became exclusively associated with men, but in the last few decades they have become largely gender-neutral in Western society.
Sex and gender are much more complex and nuanced than people have long believed. Defining sex as a binary treats it like a light switch: on or off. But it’s actually more similar to a dimmer switch, with many people sitting somewhere in between male and female genetically, physiologically, and/or mentally. To reflect this, scientists now describe sex as a spectrum.
The more we have learned about human genetics, the more complicated it has revealed itself to be. Because of this, the idea of binary gender has become less and less tenable. As Claire Ainsworth summarizes in an article for Nature, recent discoveries “have pointed to a complex process of sex determination, in which the identity of the gonad emerges from a contest between two opposing networks of gene activity. Changes in the activity … can tip the balance towards or away from the sex seemingly spelled out by the chromosomes.”
Sex can be much more complicated than it at first seems. According to the simple scenario, the presence or absence of a Y chromosome is what counts: with it, you are male, and without it, you are female. But doctors have long known that some people straddle the boundary — their sex chromosomes say one thing, but their gonads (ovaries or testes) or sexual anatomy say another. Parents of children with these kinds of conditions — known as intersex conditions, or differences or disorders of sex development (DSDs) — often face difficult decisions about whether to bring up their child as a boy or a girl. Some researchers now say that as many as 1 person in 100 has some form of DSD.
When genetics is taken into consideration, the boundary between the sexes becomes even blurrier. Scientists have identified many of the genes involved in the main forms of DSD, and have uncovered variations in these genes that have subtle effects on a person’s anatomical or physiological sex. What’s more, new technologies in DNA sequencing and cell biology are revealing that almost everyone is, to varying degrees, a patchwork of genetically distinct cells, some with a sex that might not match that of the rest of their body. Some studies even suggest that the sex of each cell drives its behaviour, through a complicated network of molecular interactions. “I think there’s much greater diversity within male or female, and there is certainly an area of overlap where some people can’t easily define themselves within the binary structure,” says John Achermann, who studies sex development and endocrinology at University College London’s Institute of Child Health.
These discoveries do not sit well in a world in which sex is still defined in binary terms. Few legal systems allow for any ambiguity in biological sex, and a person’s legal rights and social status can be heavily influenced by whether their birth certificate says male or female.
“The main problem with a strong dichotomy is that there are intermediate cases that push the limits and ask us to figure out exactly where the dividing line is between males and females,” says Arthur Arnold at the University of California, Los Angeles, who studies biological sex differences. “And that’s often a very difficult problem, because sex can be defined a number of ways.”
Many of us learned in high school biology that sex chromosomes determine a baby’s sex, full stop: XX means it’s a girl; XY means it’s a boy. But on occasion, XX and XY don’t tell the whole story.
Today we know that the various elements of what we consider “male” and “female” don’t always line up neatly, with all the XXs—complete with ovaries, vagina, estrogen, female gender identity, and feminine behavior—on one side and all the XYs—testes, penis, testosterone, male gender identity, and masculine behavior—on the other. It’s possible to be XX and mostly male in terms of anatomy, physiology, and psychology, just as it’s possible to be XY and mostly female.
Each embryo starts out with a pair of primitive organs, the proto-gonads, that develop into male or female gonads at about six to eight weeks. Sex differentiation is usually set in motion by a gene on the Y chromosome, the SRY gene, that makes the proto-gonads turn into testes. The testes then secrete testosterone and other male hormones (collectively called androgens), and the fetus develops a prostate, scrotum, and penis. Without the SRY gene, the proto-gonads become ovaries that secrete estrogen, and the fetus develops female anatomy (uterus, vagina, and clitoris).
But the SRY gene’s function isn’t always straightforward. The gene might be missing or dysfunctional, leading to an XY embryo that fails to develop male anatomy and is identified at birth as a girl. Or it might show up on the X chromosome, leading to an XX embryo that does develop male anatomy and is identified at birth as a boy.
Genetic variations can occur that are unrelated to the SRY gene, such as complete androgen insensitivity syndrome (CAIS), in which an XY embryo’s cells respond minimally, if at all, to the signals of male hormones. Even though the proto-gonads become testes and the fetus produces androgens, male genitals don’t develop. The baby looks female, with a clitoris and vagina, and in most cases will grow up feeling herself to be a girl.
Which is this baby, then? Is she the girl she believes herself to be? Or, because of her XY chromosomes—not to mention the testes in her abdomen—is she “really” male? […]
In terms of biology, some scientists think it might be traced to the syncopated pacing of fetal development. “Sexual differentiation of the genitals takes place in the first two months of pregnancy,” wrote Dick Swaab, a researcher at the Netherlands Institute for Neuroscience in Amsterdam, “and sexual differentiation of the brain starts during the second half of pregnancy.” Genitals and brains are thus subjected to different environments of “hormones, nutrients, medication, and other chemical substances,” several weeks apart in the womb, that affect sexual differentiation.
This doesn’t mean there’s such a thing as a “male” or “female” brain, exactly. But at least a few brain characteristics, such as density of the gray matter or size of the hypothalamus, do tend to differ between genders. It turns out transgender people’s brains may more closely resemble brains of their self-identified gender than those of the gender assigned at birth. In one study, for example, Swaab and his colleagues found that in one region of the brain, transgender women, like other women, have fewer cells associated with the regulator hormone somatostatin than men. In another study scientists from Spain conducted brain scans on transgender men and found that their white matter was neither typically male nor typically female, but somewhere in between.
These studies have several problems. They are often small, involving as few as half a dozen transgender individuals. And they sometimes include people who already have started taking hormones to transition to the opposite gender, meaning that observed brain differences might be the result of, rather than the explanation for, a subject’s transgender identity.
Still, one finding in transgender research has been robust: a connection between gender nonconformity and autism spectrum disorder (ASD). According to John Strang, a pediatric neuropsychologist with the Center for Autism Spectrum Disorders and the Gender and Sexuality Development Program at Children’s National Health System in Washington, D.C., children and adolescents on the autism spectrum are seven times more likely than other young people to be gender nonconforming. And, conversely, children and adolescents at gender clinics are six to 15 times more likely than other young people to have ASD.
The past half year has been spent in anticipation. Daniel Everett has a new book that finally came out the other day: Dark Matter of the Mind. I was curious to read it because Everett is the newest and best-known challenger to mainstream linguistics theory. This interests me because linguistics happens to directly touch upon every aspect of our humanity: human nature (vs. nurture), self-identity, consciousness, cognition, perception, behavior, culture, philosophy, etc.
The leading opponent of Everett’s theory is Noam Chomsky, a well-known and well-respected public intellectual. Chomsky is the founder of the so-called cognitive revolution — not that Everett sees it as all that revolutionary: “it was not a revolution in any sense, however popular that narrative has become” (Kindle Location 306). That brings issues of personality, academia, politics, and funding into the conflict. It’s two paradigms clashing, one of them having been dominant for more than half a century.
Now that I’ve been reading the book, I find my response to be mixed. Everett is running headlong into difficult terrain and I must admit he does so competently. He is doing the tough scholarly work that needs to be done. As Bill Benzon explained (at 3 Quarks Daily):
“While the intellectual world is rife with specialized argumentation arrayed around culture and associated concepts (nature, nurture, instinct, learning) these concepts themselves do not have well-defined technical meanings. In fact, I often feel they are destined to go the way of phlogiston, except that, alas, we’ve not yet discovered the oxygen that will allow us to replace them. These concepts are foundational, but the foundation is crumbling. Everett is attempting to clear away the rubble and start anew on cleared ground. That’s what dark matter is, the cleared ground that becomes visible once the rubble has been pushed to the side. Just what we’ll build on it, and how, that’s another question.”
This explanation points to a fundamental problem, if we are to consider it a problem. Earlier in the piece, Benzon wrote that, “OK, I get it, I think, you say, but this dark matter stuff is so vague and metaphorical. You’re right. And it remains that way to the end of the book. And that, I suppose, is my major criticism, though it’s a minor one. “Dark matter” does a lot of conceptual work for Everett, but he discusses it indirectly.” Basically, Everett struggles with a limited framework of terminology and concepts. But that isn’t entirely his fault. It’s not exactly new territory that Everett discovered, just not yet fully explored and mapped out. The main thing he did, in his earliest work, was to bring up evidence that simply did not fit into prevailing theories. And now in a book like this he is trying to make sense of what that evidence indicates and what theory better explains it.
It would have been useful if Everett had been able to give a fuller survey of the relevant scholarship. But if he had, it would have been a larger and more academic book. It is already difficult enough for most readers not familiar with the topic. Besides, I suspect that Everett was pushing against the boundaries of his own knowledge and readings. It was easy for me to see everything that was left out in relation to numerous other fields beyond his focus on linguistics and anthropology, such as neurocognitive research, consciousness studies, classical studies of ancient texts, and voice-hearing and mental health.
The book sometimes felt like reinventing the wheel. Everett’s expertise is in linguistics, and apparently that has been an insular field of study defended by a powerful and entrenched academic establishment. My sense is that linguistics is far behind in development, compared to many other fields. The paradigm shift that is just now happening in linguistics has for decades been creating seismic shifts elsewhere in academia. Some argue that this is because linguistics became enmeshed in Pentagon-funded computer research and so has had a hard time disentangling itself in order to become an independent field once again. Chomsky, as leader of the cognitive revolution, has effectively dissuaded a generation of linguists from doing social science, instead promoting the hard sciences, a problematic position to hold about a rather soft field like linguistics. As anthropologist Chris Knight explains it, in Decoding Chomsky (Chapter 1):
“[O]ne bedrock assumption underlies his work. If you want to be a scientist, Chomsky advises, restrict your efforts to natural science. Social science is mostly fraud. In fact, there is no such thing as social science. As Chomsky asks: ‘Is there anything in the social sciences that even merits the term “theory”? That is, some explanatory system involving hidden structures with non-trivial principles that provide understanding of phenomena? If so, I’ve missed it.’
“So how is it that Chomsky himself is able to break the mould? What special factor permits him to develop insights which do merit the term ‘theory’? In his view, ‘the area of human language . . . is one of the very few areas of complex human functioning’ in which theoretical work is possible. The explanation is simple: language as he defines it is neither social nor cultural, but purely individual and natural. Provided you acknowledge this, you can develop theories about hidden structures – proceeding as in any other natural science. Whatever else has changed over the years, this fundamental assumption has not.”
This makes Everett’s job harder than it should be, in breaking new ground in linguistics and in trying to connect it to the work already done elsewhere, most often in the social sciences. As humans are complex social animals living in a complex world, it is bizarre and plain counterproductive to study humans in the way one studies a hard science like geology. Humans aren’t isolated biological computers that can operate outside of the larger context of specific cultures and environments. But Chomsky simply assumes all of that is irrelevant on principle. Field research of actual functioning languages, as Everett has done, can be dismissed because it is mere social science. One can sense how difficult it is for Everett in struggling against this dominant paradigm.
Still, even with these limitations of the linguistics field, the book remains a more than worthy read. His use of Plato and Aristotle to frame the issue was helpful to an extent, although it also added another kind of limitation. I got a better sense of the conflict of worldviews and how they relate to the larger history of ideas. But in doing so, I became more aware of the problems of that frame, closely related to the problems of the nature vs. nurture debate (for, in reality, nature and nurture are inseparable). He describes linguistic theoreticians like Chomsky as being in the Platonic school of thought. Chomsky surely would agree, as he has already made that connection in his own writings, in what he discusses as Plato’s problem and Plato’s answer. Chomsky’s universal grammar is Platonic in nature, for as he has written, such “knowledge is ‘remembered’” (“Linguistics, a personal view” from The Chomskyan Turn). This is Plato’s anamnesis and aletheia, an unforgetting of what is true, based on the belief that humans are born with certain kinds of innate knowledge.
That is interesting to think about. But in the end I felt that something was being oversimplified or entirely left out. Everett is arguing against nativism, that there is an inborn predetermined human nature. It’s not so much that he is arguing for a blank slate as he is trying to explain the immense diversity and potential that exists across cultures. But the duality of nativism vs non-nativism lacks the nuance to wrestle down complex realities.
I’m sympathetic to Everett’s view and to his criticisms of the nativist view. But there are cross-cultural patterns that need to be made sense of, even with the exceptions that deviate from those patterns. Dismissing evidence is never satisfying. Along with Chomsky, he throws in the likes of Carl Jung. But the difference between Chomsky and Jung is that the former is an academic devoted to pure theory unsullied by field research while the latter was a practicing psychotherapist who began with the particulars of individual cases. Everett is arguing for a focus on the particulars, upon which to build theory, but that is what Jung did. The criticisms of Chomsky can’t be shifted over to Jung, no matter what one thinks of Jung’s theories.
Part of the problem is that the kind of evidence Jung dealt with remains to be explained. It’s simply a fact that certain repeating patterns are found in human experience, across place and time. That is evidence to be considered, not dismissed, however one wishes to interpret it. Even most respectable nativist thinkers don’t want to confront this kind of evidence, which challenges conventional understandings on all sides. Maybe Jungian theories of archetypes, personality types, etc. are incorrect. But how do we study and test such things, going from direct observation to scientific research? And how is the frame of nativism/non-nativism helpful at all?
Maybe there are patterns, not unlike gravity and other natural laws, that are simply native to the world humans inhabit and so might not be entirely, or at all, native to the human mind, which is to say not in the way that Chomsky makes nativist claims about universal grammar. Rather, these patterns would be native to humans in the way, and to the extent, that humans are native to the world. This could be made to fit into Everett’s own theorizing, as he is attempting to situate the human within larger contexts of culture, environment, and such.
Consider an example from psychedelic studies. It has been found that people under the influence of particular psychedelics often have similar experiences. This is why shamanic cultures speak of psychedelic plants as having spirits that reside within or are expressed through them.
Let me be more specific. DMT is the most common psychedelic in the world, found in numerous plants and even produced in small quantities by the human brain. It’s an example of interspecies co-evolution, plants and humans having chemicals in common. Plants are chemistry factories, and they use chemicals for various purposes, including communication with other plants (e.g., chemically telling nearby plants that something is nibbling on their leaves, so put up your chemical defenses) and communication with non-plants (e.g., sending out bitter chemicals to inform the nibbler that it might want to eat elsewhere). Animals didn’t just co-evolve with edible plants but also with psychedelic plants. And humans aren’t the only species to imbibe. Maybe chemicals like DMT serve a purpose. And maybe there is a reason so many humans tripping on DMT experience what some describe as self-replicating machine elves or self-transforming fractal elves. Humans have been tripping on DMT for longer than civilization has existed.
DMT is far from being the only psychedelic plant like this. It’s just one of the more common. The reason plant psychedelics do what they do to our brains is because our brains were shaped by evolution to interact with chemicals like this. These chemicals almost seem designed for animal brains, especially DMT which our own brains produce.
That brings up some issues about the whole nativism/non-nativism conflict. Is a common experience many humans have with a psychedelic plant native to humans, native to the plant, or native to the inter-species relationship between human and plant? Where do the machine/fractal elves live, in the plant or in our brain? My tendency is to say that they in some sense ‘exist’ in the relationship between plants and humans, an experiential expression of that relationship, as immaterial and ephemeral as the love felt by two humans. These weird psychedelic beings are a plant-human hybrid, a shared creation of our shared evolution. They are native to our humanity to the extent that we are native to the ecosystems we share with those psychedelic plants.
Other areas of human experience lead down similar strange avenues. Take as another example the observations of Jacques Vallée. When he was a practicing astronomer, he became interested in UFOs as some of his fellow astronomers would destroy rather than investigate anomalous observational data. This led him to look into the UFO field and that led to his studying those claiming alien abduction experiences. What he noted was that the stories told were quite similar to fairy abduction folktales and shamanic accounts of initiation. There seemed to be a shared pattern of experience that was interpreted differently according to culture but that in a large number of cases the basic pattern held.
Or take yet another example. Judith Weissman has noted patterns among the stated experiences of voice-hearers. Another researcher on voice-hearing, Tanya Luhrmann, has studied how voice-hearing both has commonalities and differences across cultures. John Geiger has shown how common voice-hearing can be, even if for most people it is usually only elicited during times of stress. Based on this and the work of others, it is obvious that voice-hearing is a normal capacity existing within all humans. It is actually quite common among children and some theorize it was more common for adults in other societies. Is pointing out the surprisingly common experience of voice-hearing an argument for nativism?
These aspects of our humanity are plain weird. It was the kind of thing that always fascinated Jung. But what do we do with such evidence? It doesn’t prove a universal human nature that is inborn and predetermined. Not everyone has these experiences. But it appears everyone is capable of having these experiences.
This is where mainstream thinking in the field of linguistics shows its limitations. Going by Everett’s descriptions of the Pirahã, it seems likely that voice-hearing is common among them, although they wouldn’t interpret it that way. For them, voice-hearing appears to manifest as full possession and what, to Western outsiders, seems like a shared state of dissociation. It’s odd that as a linguist it didn’t occur to Everett to study the way of speaking of those who were possessed, or to think more deeply about the experiential significance of the use of language indicating dissociation. Maybe it was too far outside of his own cultural biases, the same cultural biases that cause many Western voice-hearers to be medicated and institutionalized.
And if we’re going to talk about voice-hearing, we have to bring up Julian Jaynes. Everett probably doesn’t realize it, but his views seem to be in line with the bicameral theory or at least not in explicit contradiction with it on conceptual grounds. He seems to be coming out of the cultural school of thought within anthropology, the same influence on Jaynes. It is precisely Everett’s anthropological field research that distinguishes him from a theoretical linguist like Chomsky who has never formally studied any foreign language nor gone out into the field to test his theories. It was from studying the Pirahã firsthand over many years that the power of culture was impressed upon him. Maybe that is a commonality with Jaynes who began his career doing scientific research, not theorizing.
As I was reading the book, I kept being reminded of Jaynes, despite Everett never mentioning him or related thinkers. It’s largely how he talks about individuals situated in a world and worldview, along with his mentioning of Bourdieu’s habitus. This fits into his emphasis on the culture and nurture side of influences, arguing that people (and languages) are products of their environments. Also, when Everett wrote that his view was that there is “nothing to an individual but one’s body” (Kindle Location 328), it occurred to me how this fit the proposed experience of hypothetical ancient bicameral humans. My thought was confirmed when he stated that his own understanding was most in line with the Buddhist anātman, or ‘non-self’. Just a week ago, I wrote the following in reference to Jaynes’ bicameral theory:
“We modern Westerners identify ourselves with our thoughts, the internalized voice of egoic consciousness. And we see this as the greatest prize of civilization, the hard-won rights and freedoms of the heroic individual. It’s the story we tell. But in other societies, such as in the East, there are traditions that teach the self is distinct from thought. From the Buddhist perspective of dependent (co-)origination, it is a much less radical notion that the self arises out of thought, instead of the other way around, and that thought itself simply arises. A Buddhist would have a much easier time intuitively grasping the theory of bicameralism, that thoughts are greater than and precede the self.”
Jaynes considered self-consciousness and self-identity to be products of thought, rather than the other way around. Like Everett’s, this is an argument against the old Western belief in a human soul that is eternal and immortal, one that Platonically precedes individual corporeality. But notions like Chomsky’s universal grammar feel like an attempt to revamp the soul for a scientific era, a universal human nature that precedes any individual, a soul as the spark of God and the divine expressed as a language imprinted on the soul. If I must believe in something existing within me that pre-exists me, then I’d rather go with alien-fairy-elves hiding out in the tangled undergrowth of my neurons.
Anyway, how might Everett’s views on nativism/non-nativism have been different if he had been more familiar with the work of these other researchers and thinkers? The problem is that the nativism/non-nativism framework is itself culturally biased. It’s related to the problem of anthropologists who try to test the color perception of other cultures using tests that are based on Western color perception. Everett’s observations of the Pirahã, by the way, have also challenged that field of study, as he has made the claim that the Pirahã have no color terms and no particular use for discriminating colors. That deals with the relationship of language to cognition and perception. Does language limit our minds? If so, how and to what extent? If not, are we to assume that such things as ‘colors’ are native to how the human brain functions? Would an individual born into and raised in a completely dark room still ‘see’ colors in their mind’s eye?
Maybe the fractal elves produce the colors, consuming the DMT and defecating rainbows. Maybe the alien-fairies abduct us in our sleep and use advanced technology to implant the colors into our brains. Maybe without the fractal elves and alien-fairies, we would finally all be colorblind and our society would be free from racism. Just some alternative theories to consider.
Talking about cultural biases, I was fascinated by some of the details he threw out about the Pirahã, the tribe he had spent the most years studying. He wrote that (Kindle Locations 147-148), “Looking back, I can identify many of the hidden problems it took me years to recognize, problems based in contrasting sets of tacit assumptions held by the Pirahãs and me.” He then lists some of the tacit assumptions held by these people he came to know.
They don’t appear to have any concepts, language, or interest in God or gods, in religion, or anything spiritual/supernatural that wasn’t personally experienced by them or someone they personally know. Their language is very direct and precise about all experience and the source of claims. But they don’t feel like they’re spiritually lost or somehow lacking anything. In fact, Everett describes them as being extremely happy and easygoing, except on the rare occasion when a trader gives them alcohol.
They don’t have any concern or fear about death, the dead, ancestral spirits, or the afterlife, nor do they seek out and talk about such things. They apparently are entirely focused on present experience. They don’t speculate, worry, or even have curiosity about what is outside their experience. Foreign cultures are irrelevant to them, this being indifference toward foreigners rather than hatred. It’s just that foreign culture is thought of as good for foreigners, as Pirahã culture is good for the Pirahã. Generally, they seem to lack the standard anxiety that is typical of our society, despite living in and walking around barefoot in one of the most dangerous environments on the planet, surrounded by poisonous and deadly creatures. It’s actually malaria that tends to cut their lives short. But they don’t engage in much comparison, and so they don’t think of their lives as being cut short.
Their society is based on personal relationships, and they “do not like for any individual to tell another individual how to live” (Kindle Locations 149-150). They don’t have governments or, as far as I know, governing councils. They don’t practice social coercion, community-mandated punishments, or enforced norms. They are a very small tribe living in isolation with a way of life that has likely remained basically the same for millennia. Their culture and lifestyle are well-adapted to their environmental niche, and so they don’t tend to encounter many new problems that require them to act differently than in the past. They also don’t practice or comprehend incarceration, torture, capital punishment, mass war, genocide, etc. It’s not that violence never happens in their society, but I get the sense that it’s rare.
In the early years of life, infants and young toddlers live in near constant proximity to their mothers and other adults. They are given near ownership rights over their mothers’ bodies, freely suckling whenever they want without asking permission or being denied. But once weaned, Pirahã children are the opposite of coddled. Their mothers simply cut them off from their bodies, and the toddlers go through a tantrum period that is ignored by adults. They learn from experience and get little supervision in the process. They quickly become extremely knowledgeable and capable in living in and navigating the world around them. The parents have little fear for their children, and that confidence seems well-founded, as the children prove themselves able to easily learn self-sufficiency and a willingness to contribute. It reminded me of Jean Liedloff’s continuum concept.
Then, once they become teenagers, they don’t go through a rebellious phase. It seems a smooth transition into adulthood. As he described it in his first book (Don’t Sleep, There Are Snakes, p. 99-100):
“I did not see Pirahã teenagers moping, sleeping in late, refusing to accept responsibility for their own actions, or trying out what they considered to be radically new approaches to life. They in fact are highly productive and conformist members of their community in the Pirahã sense of productivity (good fishermen, contributing generally to the security, food needs, and other aspects of the physical survival of the community). One gets no sense of teenage angst, depression, or insecurity among the Pirahã youth. They do not seem to be searching for answers. They have them. And new questions rarely arise.
“Of course, this homeostasis can stifle creativity and individuality, two important Western values. If one considers cultural evolution to be a good thing, then this may not be something to emulate, since cultural evolution likely requires conflict, angst, and challenge. But if your life is unthreatened (so far as you know) and everyone in your society is satisfied, why would you desire change? How could things be improved? Especially if the outsiders you came into contact with seemed more irritable and less satisfied with life than you. I asked the Pirahãs once during my early missionary years if they knew why I was there. “You are here because this is a beautiful place. The water is pretty. There are good things to eat here. The Pirahãs are nice people.” That was and is the Pirahãs’ perspective. Life is good. Their upbringing, everyone learning early on to pull their own weight, produces a society of satisfied members. That is hard to argue against.”
The most strange and even shocking aspect of Pirahã life is their sexuality. Kids quickly learn about sex. It’s not that people have sex out in the open. But it’s a lifestyle that provides limited privacy. Sexual activity isn’t considered a mere adult activity and children aren’t protected from it. Quite the opposite (Kindle Locations 2736-2745):
“Sexual behavior is another behavior distinguishing Pirahãs from most middle-class Westerners early on. A young Pirahã girl of about five years came up to me once many years ago as I was working and made crude sexual gestures, holding her genitalia and thrusting them at me repeatedly, laughing hysterically the whole time. The people who saw this behavior gave no sign that they were bothered. Just child behavior, like picking your nose or farting. Not worth commenting about.
“But the lesson is not that a child acted in a way that a Western adult might find vulgar. Rather, the lesson, as I looked into this, is that Pirahã children learn a lot more about sex early on, by observation, than most American children. Moreover, their acquisition of carnal knowledge early on is not limited to observation. A man once introduced me to a nine- or ten-year-old girl and presented her as his wife. “But just to play,” he quickly added. Pirahã young people begin to engage sexually, though apparently not in full intercourse, from early on. Touching and being touched seem to be common for Pirahã boys and girls from about seven years of age on. They are all sexually active by puberty, with older men and women frequently initiating younger girls and boys, respectively. There is no evidence that the children then or as adults find this pedophilia the least bit traumatic.”
This seems plain wrong to most Westerners. Then again, to the Pirahã, much of what Westerners do would seem plain wrong or simply incomprehensible. Which is worse, Pirahã pedophilia or Western mass violence and systematic oppression?
What is most odd is that, like death for adults, sexuality for children isn’t considered a traumatizing experience and they don’t act traumatized. It’s apparently not part of their culture to be traumatized. They aren’t a society based on and enmeshed in a worldview of violence, fear, and anxiety. That isn’t how they think about any aspect of their lifeworld. I would assume that, like most tribal people, they don’t have high rates of depression and other mental illnesses. Everett pointed out that in the thirty years he knew the Pirahã there never was a suicide. And when he told them about his stepmother killing herself, they burst out in laughter because it made absolutely no sense to them that someone would take their own life.
That demonstrates the power of culture, environment, and lifestyle. According to Everett, it also demonstrates the power of language, inseparable from the society that shapes and is shaped by it, and demonstrates how little we understand the dark matter of the mind.
“What is the meaning of life?” This question has no answer except in the history of how it came to be asked. There is no answer because words have meaning, not life or persons or the universe itself. Our search for certainty rests in our attempts at understanding the history of all individual selves and all civilizations. Beyond that, there is only awe.
~ Julian Jaynes, 1988, Life Magazine
That is always a nice quote. Jaynes never seemed like an ideologue about his own speculations. In his controversial book, more than a decade earlier (1976), he titled his introduction “The Problem of Consciousness”. That is what frames his thought: confronting a problem. The whole issue of consciousness is still problematic to this day and likely will be so for a long time. After a lengthy analysis of complex issues, he concludes his book with some humbling thoughts:
For what is the nature of this blessing of certainty that science so devoutly demands in its very Jacob-like wrestling with nature? Why should we demand that the universe make itself clear to us? Why do we care?
To be sure, a part of the impulse to science is simple curiosity, to hold the unheld and watch the unwatched. We are all children in the unknown.
Following that, he makes a plea for understanding. Not just understanding of the mind but also of experience. It is a desire to grasp what makes us human, the common impulses that bind us, underlying both religion and science. There is a tender concern being given voice, probably shaped and inspired by his younger self having pored over his deceased father’s Unitarian sermons.
As individuals we are at the mercies of our own collective imperatives. We see over our everyday attentions, our gardens and politics, and children, into the forms of our culture darkly. And our culture is our history. In our attempts to communicate or to persuade or simply interest others, we are using and moving about through cultural models among whose differences we may select, but from whose totality we cannot escape. And it is in this sense of the forms of appeal, of begetting hope or interest or appreciation or praise for ourselves or for our ideas, that our communications are shaped into these historical patterns, these grooves of persuasion which are even in the act of communication an inherent part of what is communicated. And this essay is no exception.
That humility feels genuine. His book was far beyond mere scholarship. It was an expression of decades of questioning and self-questioning, about what it means to be human and what it might have meant for others throughout the millennia.
He never got around to writing another book on the topic, despite his stated plans to do so. But during the last decade of his life, he wrote an afterword to his original work. It was placed in the 1990 edition, fourteen years after the original publication. He had faced much criticism, and one senses a tired frustration in those last years. Elsewhere, he complained about the expectation to explain himself and make himself understood to people who, for whatever reason, didn’t understand. Still, he realized that was the nature of his job as an academic scholar working at a major university. In the afterword, he wrote:
A favorite practice of some professional intellectuals when at first faced with a theory as large as the one I have presented is to search for that loose thread which, when pulled, will unravel all the rest. And rightly so. It is part of the discipline of scientific thinking. In any work covering so much of the terrain of human nature and history, hustling into territories jealously guarded by myriad aggressive specialists, there are bound to be such errancies, sometimes of fact but I fear more often of tone. But that the knitting of this book is such that a tug on such a bad stitch will unravel all the rest is more of a hope on the part of the orthodox than a fact in the scientific pursuit of truth. The book is not a single hypothesis.
Interestingly, Jaynes doesn’t state the bicameral mind as an overarching context for the hypotheses he lists. In fact, it is just one among the several hypotheses and not even the first to be mentioned. That shouldn’t be surprising since decades of his thought and research, including laboratory studies done on animal behavior, preceded the formulation of the bicameral hypothesis. Here are the four hypotheses:
Consciousness is based on language.
The bicameral mind.
The dating.
The double brain.
He states that, “I wish to emphasize that these four hypotheses are separable. The last, for example, could be mistaken (at least in the simplified version I have presented) and the others true. The two hemispheres of the brain are not the bicameral mind but its present neurological model. The bicameral mind is an ancient mentality demonstrated in the literature and artifacts of antiquity.” Each hypothesis is connected to the others but must be dealt with separately. The key element to his project is consciousness, as that is the key problem. And as problems go, it is a doozy. Calling it a problem is like calling the moon a chunk of rock and the sun a warm fire.
Related to these hypotheses, earlier in his book, Jaynes proposes a useful framework. He calls it the General Bicameral Paradigm. “By this phrase,” he explains, “I mean an hypothesized structure behind a large class of phenomena of diminished consciousness which I am interpreting as partial holdovers from our earlier mentality.” There are four components:
“the collective cognitive imperative, or belief system, a culturally agreed-on expectancy or prescription which defines the particular form of a phenomenon and the roles to be acted out within that form;”
“an induction or formally ritualized procedure whose function is the narrowing of consciousness by focusing attention on a small range of preoccupations;”
“the trance itself, a response to both the preceding, characterized by a lessening of consciousness or its loss, the diminishing of the analog or its loss, resulting in a role that is accepted, tolerated, or encouraged by the group; and”
“the archaic authorization to which the trance is directed or related to, usually a god, but sometimes a person who is accepted by the individual and his culture as an authority over the individual, and who by the collective cognitive imperative is prescribed to be responsible for controlling the trance state.”
The point is made that the reader shouldn’t assume that they are “to be considered as a temporal succession necessarily, although the induction and trance usually do follow each other. But the cognitive imperative and the archaic authorization pervade the whole thing. Moreover, there is a kind of balance or summation among these elements, such that when one of them is weak the others must be strong for the phenomena to occur. Thus, as through time, particularly in the millennium following the beginning of consciousness, the collective cognitive imperative becomes weaker (that is, the general population tends toward skepticism about the archaic authorization), we find a rising emphasis on and complication of the induction procedures, as well as the trance state itself becoming more profound.”
This general bicameral paradigm is partly based on the insights he gained from studying ancient societies. But ultimately it can be considered separately from that. All you have to understand is that these are a basic set of cognitive abilities and tendencies that have been with humanity for a long time. These are the vestiges of human evolution and societal development. They can be combined and expressed in multiple ways. Our present society is just one of many possible manifestations. Human nature is complex and human potential is immense, and so diversity is to be expected among human neurocognition, behavior, and culture.
An important example of the general bicameral paradigm is hypnosis. It isn’t just an amusing trick done for magic shows. Hypnosis shows something profoundly odd, disturbing even, about the human mind. Also, it goes far beyond the individual for it is about how humans relate. It demonstrates the power of authority figures, in whatever form they take, and indicates the significance of what Jaynes calls authorization. By the way, this leads down the dark pathways of authoritarianism, brainwashing, propaganda, and punishment — as for the latter, Jaynes writes that:
If we can regard punishment in childhood as a way of instilling an enhanced relationship to authority, hence training some of those neurological relationships that were once the bicameral mind, we might expect this to increase hypnotic susceptibility. And this is true. Careful studies show that those who have experienced severe punishment in childhood and come from a disciplined home are more easily hypnotized, while those who were rarely punished or not punished at all tend to be less susceptible to hypnosis.
He discusses the history of hypnosis beginning with Mesmer. In this, he shows how metaphor took different form over time. And, accordingly, it altered shared experience and behavior.
Now it is critical here to realize and to understand what we might call the paraphrandic changes which were going on in the people involved, due to these metaphors. A paraphrand, you will remember, is the projection into a metaphrand of the associations or paraphiers of a metaphier. The metaphrand here is the influences between people. The metaphiers, or what these influences are being compared to, are the inexorable forces of gravitation, magnetism, and electricity. And their paraphiers of absolute compulsions between heavenly bodies, of unstoppable currents from masses of Leyden jars, or of irresistible oceanic tides of magnetism, all these projected back into the metaphrand of interpersonal relationships, actually changing them, changing the psychological nature of the persons involved, immersing them in a sea of uncontrollable control that emanated from the ‘magnetic fluids’ in the doctor’s body, or in objects which had ‘absorbed’ such from him.
It is at least conceivable that what Mesmer was discovering was a different kind of mentality that, given a proper locale, a special education in childhood, a surrounding belief system, and isolation from the rest of us, possibly could have sustained itself as a society not based on ordinary consciousness, where metaphors of energy and irresistible control would assume some of the functions of consciousness.
How is this even possible? As I have mentioned already, I think Mesmer was clumsily stumbling into a new way of engaging that neurological patterning I have called the general bicameral paradigm with its four aspects: collective cognitive imperative, induction, trance, and archaic authorization.
Through authority and authorization, immense power and persuasion can be wielded. Jaynes argues that it is central to the human mind, but that in developing consciousness we learned how to partly internalize the process. Even so, Jaynesian self-consciousness is never a permanent, continuous state and the power of individual self-authorization easily morphs back into external forms. This is far from idle speculation, considering authoritarianism still haunts the modern mind. I might add that the ultimate power of authoritarianism, as Jaynes makes clear, isn’t overt force and brute violence. Outward forms of power are only necessary to the degree that external authorization is relatively weak, as is typically the case in modern societies.
This touches upon the issue of rhetoric, although Jaynes never mentioned the topic. It’s disappointing since his original analysis of metaphor has many implications. Fortunately, others have picked up where he left off (see Ted Remington, Brian J. McVeigh, and Frank J. D’Angelo). Authorization in the ancient world came through a poetic voice, but today it is most commonly heard in rhetoric.
Still, that old-time religion can be heard in the words and rhythm of any great speaker. Just listen to how a recorded speech of Martin Luther King, Jr. can pull you in with its musicality. Or, if you prefer a dark example, consider the persuasive power of Adolf Hitler; even some Jews admitted they got caught up listening to his speeches. This is why Plato feared the poets and banished them from his utopia of enlightened rule. Poetry would inevitably undermine and subsume the high-minded rhetoric of philosophers. “[P]oetry used to be divine knowledge,” as Guerini et al. state in Echoes of Persuasion. “It was the sound and tenor of authorization and it commanded where plain prose could only ask.”
Metaphor grows naturally in poetic soil, but its seeds are planted in every aspect of language and thought, giving fruit to our perceptions and actions. This is a thousandfold true on the collective level of society and politics. Metaphors are most powerful when we don’t see them as metaphors. So, the most persuasive rhetoric is that which hides its metaphorical frame and obfuscates any attempts to bring it to light.
Going far back into the ancient world, metaphors didn’t need to be hidden in this sense. The reason is that there was no intellectual capacity or conceptual understanding of metaphors as metaphors. Instead, metaphors were taken literally. The way people spoke about reality was inseparable from their experience of reality, and they had no way of stepping back from their cultural biases, as the cultural worldviews they existed within were all-encompassing. It’s only with the later rise of multicultural societies, especially the vast multi-ethnic trade empires, that people began to think in terms of multiple perspectives. Such a society was developing among the trading and colonizing city-states of Greece in the centuries leading up to Hellenism.
That is the well known part of Jaynes’ speculations, the basis of his proposed bicameral mind. And Jaynes considered it extremely relevant to the present.
Marcel Kuijsten wrote that, “Jaynes maintained that we are still deep in the midst of this transition from bicamerality to consciousness; we are continuing the process of expanding the role of our internal dialogue and introspection in the decision-making process that was started some 3,000 years ago. Vestiges of the bicameral mind — our longing for absolute guidance and external control — make us susceptible to charismatic leaders, cults, trends, and persuasive rhetoric that relies on slogans to bypass logic” (“Consciousness, Hallucinations, and the Bicameral Mind: Three Decades of New Research”, Reflections on the Dawn of Consciousness, Kindle Locations 2210-2213). Considering the present, in Authoritarian Grammar and Fundamentalist Arithmetic, Ben G. Price puts it starkly: “Throughout, tyranny asserts its superiority by creating a psychological distance between those who command and those who obey. And they do this with language, which they presume to control.” The point made by the latter is that this knowledge, even as it can be used as intellectual defense, might just lead to even more effective authoritarianism.
We’ve grown less fearful of rhetoric because we see ourselves as being savvy, experienced consumers of media. The cynical modern mind is always on guard, our well-developed and rigid state of consciousness offering a continuous psychological buffering against the intrusions of the world. So we like to think. I remember, back in 7th grade, being taught how the rhetoric of advertising is used to manipulate us. But we are over-confident. Consciousness operates at the surface of the psychic depths. We are better at rationalizing than being rational, something we may understand intellectually but rarely do we fully acknowledge the psychological and societal significance of this. That is the usefulness of theories like that of bicameralism, as they remind us that we are out of our depths. In the ancient world, there was a profound mistrust between the poetic and rhetorical, and for good reason. We would be wise to learn from that clash of mindsets and worldviews.
We shouldn’t be so quick to assume we understand our own minds, the kind of vessel we find ourselves on. Nor should we allow ourselves to get too comfortable within the worldview we’ve always known, the safe harbor of our familiar patterns of mind. It’s hard to think about these issues because they touch upon our own being, the surface of consciousness along with the depths below it. It is like the difficult task of fathoming the ocean floor with a rope and a weight, a task that grows easier the closer we hug the shoreline. But what might we find if we cast ourselves out on open waters? What new lands might be found, lands to be newly discovered and lands already inhabited?
We moderns love certainty. And it’s true we possess more knowledge than any civilization before us has accumulated. Yet we’ve partly made the unfamiliar into the familiar by remaking the world in our own image. There is no place on earth that remains entirely untouched. Only a couple hundred small isolated tribes are still uncontacted, representing foreign worldviews not known or studied, but even they live under unnatural conditions of stress as the larger world closes in on them. Most of the ecological and cultural diversity that once existed has been obliterated from the face of the earth, most of it having left not a single trace or record, simply gone. Populations beyond count have faced extermination by outside influences and forces before they ever got a chance to meet an outsider. Plagues, environmental destruction, and societal collapse wiped them out, often in short periods of time.
Those other cultures might have gifted us with insights about our humanity that now are lost forever, just as extinct species might have held answers to questions not yet asked and medicines for diseases not yet understood. Almost all that now is left is a nearly complete monoculture with the differences ever shrinking into the constraints of capitalist realism. If not for scientific studies done on the last of isolated tribal people, we would never know how much diversity exists within human nature. Many of the conclusions that earlier social scientists had made were based mostly on studies involving white, middle class college kids in Western countries, what some have called the WEIRD: Western, Educated, Industrialized, Rich, and Democratic. But many of those conclusions have since proven wrong, biased, or limited.
When Jaynes first thought about such matters, the social sciences were still getting established as serious fields of study. He entered college around 1940, when behaviorism was a dominant paradigm. It was only in the prior decades that the very idea of ‘culture’ began to take hold among anthropologists. He was influenced by anthropologists, directly and indirectly. One indirect influence came by way of E. R. Dodds, a classical scholar who, in writing his 1951 The Greeks and the Irrational, found inspiration in Ruth Benedict’s anthropological work comparing cultures (Benedict came to this perspective by combining the ideas of Franz Boas and Carl Jung). Still, anthropology was young, and the fascinating cases so well known today were unknown back then (e.g., Daniel Everett’s recent books on the Pirahã). So, following Dodds’s example, Jaynes turned to ancient societies and their literature.
His ideas were forming at the same time the social sciences were gaining respectability and maturity. It was a time when many scholars and other intellectuals were more fully questioning Western civilization. But it was also the time when Western ascendancy was becoming clear with the WWI ending of the Ottoman Empire and the WWII ending of the Japanese Empire. The whole world was falling under Western cultural influence. And traditional societies were in precipitous decline. That was the dawning of the age of monoculture.
We are the inheritors of the world that was created from that wholesale destruction of all that came before. And even what came before was built on millennia of collapsing civilizations. Jaynes focused on the earliest example of mass destruction and chaos, which led him to see a stark division between what came before and what came after. How do we understand why we came to be the way we are when so much has been lost? We are forced back on our own ignorance. Jaynes apparently understood that and so considered awe to be the proper response. We know the world through our own humanity, but we can only know our own humanity through the cultural worldview we are born into. It is our words that have meaning, was Jaynes’ response, “not life or persons or the universe itself.” That is to say, we bring meaning to what we seek to understand. Meaning is created, not discovered. And the kind of meaning we create depends on our cultural worldview.
In Monoculture, F. S. Michaels writes (pp. 1-2):
THE HISTORY OF HOW we think and act, said twentieth-century philosopher Isaiah Berlin, is, for the most part, a history of dominant ideas. Some subject rises to the top of our awareness, grabs hold of our imagination for a generation or two, and shapes our entire lives. If you look at any civilization, Berlin said, you will find a particular pattern of life that shows up again and again, that rules the age. Because of that pattern, certain ideas become popular and others fall out of favor. If you can isolate the governing pattern that a culture obeys, he believed, you can explain and understand the world that shapes how people think, feel and act at a distinct time in history.
The governing pattern that a culture obeys is a master story — one narrative in society that takes over the others, shrinking diversity and forming a monoculture. When you’re inside a master story at a particular time in history, you tend to accept its definition of reality. You unconsciously believe and act on certain things, and disbelieve and fail to act on other things. That’s the power of the monoculture; it’s able to direct us without us knowing too much about it.
Over time, the monoculture evolves into a nearly invisible foundation that structures and shapes our lives, giving us our sense of how the world works. It shapes our ideas about what’s normal and what we can expect from life. It channels our lives in a certain direction, setting out strict boundaries that we unconsciously learn to live inside. It teaches us to fear and distrust other stories; other stories challenge the monoculture simply by existing, by representing alternate possibilities.
Jaynes argued that ideas are more than mere concepts. Ideas are embedded in language and metaphor. And ideas take form not just as culture but as entire worldviews built on interlinked patterns of attitudes, thought, perception, behavior, and identity. Taken together, this is the reality tunnel we exist within.
It takes a lot to shake us loose from these confines of the mind. Certain practices, from meditation to imbibing psychedelics, can temporarily or permanently alter the matrix of our identity. Jaynes, for reasons of his own, came to question the inevitability of the society around him, which allowed him to see that other possibilities may exist. The direction his queries took him landed him in foreign territory, outside the idolized individualism of Western modernity.
His ideas might have been less challenging in a different society. We modern Westerners identify ourselves with our thoughts, the internalized voice of egoic consciousness. And we see this as the greatest prize of civilization, the hard-won rights and freedoms of the heroic individual. It’s the story we tell. But in other societies, such as in the East, there are traditions that teach the self is distinct from thought. From the Buddhist perspective of dependent (co-)origination, it is a much less radical notion that the self arises out of thought, instead of the other way around, and that thought itself simply arises. A Buddhist would have a much easier time intuitively grasping the theory of bicameralism, that thoughts are greater than and precede the self.
Maybe we modern Westerners need to practice a sense of awe, to inquire more deeply. Jaynes offers a different way of thinking that doesn’t even require us to look to another society. If he is correct, this radical worldview is at the root of Western Civilization. Maybe the traces of the past are still with us.
It is in fact dangerous to assume a too similar relationship between those ancient people and us. A fascinating difference between the Greek lyricists and ourselves derives from the entity we label “the self.” How did the self come to be? Have we always been self-conscious, of two or three or four minds, a stew of self-aware voices? Julian Jaynes thinks otherwise. In The Origin of Consciousness in the Breakdown of the Bicameral Mind—that famous book my poetry friends adore and my psychologist friends shrink from—Jaynes surmises that the early classical mind, still bicameral, shows us the coming-into-consciousness of the modern human, shows our double-minded awareness as, originally, a haunted hearing of voices. To Jaynes, thinking is not the same as consciousness: “one does one’s thinking before one knows what one is to think about.” That is, thinking is not synonymous with consciousness or introspection; it is rather an automatic process, notably more reflexive than reflective. Jaynes proposes that epic poetry, early lyric poetry, ritualized singing, the conscience, even the voices of the gods, all are one part of the brain learning to hear, to listen to, the other.
Voices heard by persons diagnosed schizophrenic appear to be indistinguishable, on the basis of their experienced characteristics, from voices heard by persons with dissociative disorders or by persons with no mental disorder at all.
Olin suggested that recent neuroimaging studies “have illuminated and confirmed the importance of Jaynes’ hypothesis.” Olin believes that recent reports by Lennox et al and Dierks et al support the bicameral mind. Lennox et al reported a case of a right-handed subject with schizophrenia who experienced a stable pattern of hallucinations. The authors obtained images of repeated episodes of hallucination and observed its functional anatomy and time course. The patient’s auditory hallucination occurred in his right hemisphere but not in his left.
To explain the origin of consciousness is to explain how the analog “I” began to narratize in a functional mind-space. For Jaynes, to understand the conscious mind requires that we see it as something fleeting rather than something always present. The constant phenomenality of what-it-is-like to be an organism is not equivalent to consciousness and, subsequently, consciousness must be thought in terms of the authentic possibility of consciousness rather than its continual presence.
When Jaynes says that there was “nothing it is like” to be preconscious, he certainly did not mean that nonconscious animals somehow lack subjective experience in the sense of “experiencing” or “being aware” of the world. Rather, he meant that there is no sense of mental interiority and no sense of autobiographical memory. Ask yourself what it is like to be driving a car and then suddenly wake up and realize that you have been zoned out for the past minute. Was there something it is like to drive on autopilot? This depends on how we define “what it is like”.
“The Evolution of the Analytic Topoi: A Speculative Inquiry”
by Frank J. D’Angelo
from Essays on Classical Rhetoric and Modern Discourse
ed. Robert J. Connors, Lisa S. Ede, & Andrea A. Lunsford
The first stage in the evolution of the analytic topoi is the global stage. Of this stage we have scanty evidence, since we must assume the ontogeny of invention in terms of spoken language long before the individual is capable of anything like written language. But some hints of how logical invention might have developed can be found in the work of Eric Havelock. In his Preface to Plato, Havelock, in recapitulating the educational experience of the Homeric and post-Homeric Greek, comments that the psychology of the Homeric Greek is characterized by a high degree of automatism.
He is required as a civilised being to become acquainted with the history, the social organisation, the technical competence and the moral imperatives of his group. This in turn is able to function only as a fragment of the total Hellenic world. It shares a consciousness in which he is keenly aware that he, as a Hellene, [. . .] in his memory. Such is poetic tradition, essentially something he accepts uncritically, or else it fails to survive in his living memory. Its acceptance and retention are made psychologically possible by a mechanism of self-surrender to the poetic performance and of self-identification with the situations and the stories related in the performance. . . . His receptivity to the tradition has thus, from the standpoint of inner psychology, a degree of automatism which however is counter-balanced by a direct and unfettered capacity for action in accordance with the paradigms he has absorbed. 6
Preliterate man was apparently unable to think logically. He acted, or as Julian Jaynes, in The Origin of Consciousness in the Breakdown of the Bicameral Mind, puts it, “reacted” to external events. “There is in general,” writes Jaynes, “no consciousness in the Iliad . . . and in general therefore, no words for consciousness or mental acts.” 7 There was, in other words, no subjective consciousness in Iliadic man. His actions were not rooted in conscious plans or in reasoning. We can only speculate, then, based on the evidence given by Havelock and Jaynes that logical invention, at least in any kind of sophisticated form, could not take place until the breakdown of the bicameral mind, with the invention of writing. If ancient peoples were unable to introspect, then we must assume that the analytic topoi were a discovery of literate man. Eric Havelock, however, warns that the picture he gives of Homeric and post-Homeric man is oversimplified and that there are signs of a latent mentality in the Greek mind. But in general, Homeric man was more concerned to go along with the tradition than to make individual judgments.
For Iliadic man to be able to think, he must think about something. To do this, states Havelock, he had to be able to revolt against the habit of self-identification with the epic poem. But identification with the poem at this time in history was psychologically necessary (identification was necessary for memorization). The ideas implicit in the epic story, expressed as acts or events carried out by important people, had to be abstracted from the narrative flux before they could become objects of thought. “Thus the autonomous subject who no longer recalls and feels, but knows, can now be confronted with a thousand abstract laws, principles, topics, and formulas which become the objects of his knowledge.” 8
The analytic topoi, then, were implicit in oral poetic discourse. They were “experienced” in the patterns of epic narrative, but once they are abstracted they can become objects of thought as well as of experience. As Eric Havelock puts it,
If we view them [these abstractions] in relation to the epic narrative from which, as a matter of historical fact, they all emerged they can all be regarded as in one way or another classifications of an experience which was previously “felt” in an unclassified medley. This was as true of justice as of motion, of goodness as of body or space, of beauty as of weight or dimension. These categories turn into linguistic counters, and become used as a matter of course to relate one phenomenon to another in a non-epic, non-poetic, non-concrete idiom. 9
The invention of the alphabet made it easier to report experience in a non-epic idiom. But it might be a simplification to suppose that the advent of alphabetic technology was the only influence on the emergence of logical thinking and the analytic topics, although perhaps it was the major influence. Havelock contends that the first “proto-thinkers” of Greece were the poets who at first used rhythm and oral formulas to attempt to arrange experience in categories, rather than in narrative events. He mentions in particular that it was Hesiod who first parts company with the narrative in the Theogony and Works and Days. In Works and Days, Hesiod uses a cataloging technique, consisting of proverbs, aphorisms, wise sayings, exhortations, and parables, intermingled with stories. But this effect of cataloging that goes “beyond the plot of a story in order to impose a rough logic of topics . . . presumes that Hesiod is . . .” 10
The kind of material found in the catalogs of Hesiod was more like the cumulative commonplace material of the Renaissance than the abstract topics that we are familiar with today. Walter Ong notes that “the oral performer, poet or orator needed a stock of material to keep him going. The doctrine of the commonplaces is, from one point of view, the codification of ways of assuring and managing this stock.” 11 We already know what some of the material was like: stock epithets, figures of speech, exempla, proverbs, sententiae, quotations, praises or censures of people and things, and brief treatises on virtues and vices. By the time we get to the invention of printing, there are vast collections of this commonplace material, so vast, relates Ong, that scholars could probably never survey it all. Ong goes on to observe that
print gave the drive to collect and classify such excerpts a potential previously undreamed of. . . . the ranging of items side by side on a page once achieved, could be multiplied as never before. Moreover, printed collections of such commonplace excerpts could be handily indexed; it was worthwhile spending days or months working up an index because the results of one’s labors showed fully in thousands of copies. 12
To summarize, then, in oral cultures rhetorical invention was bound up with oral performance. At this stage, both the cumulative topics and the analytic topics were implicit in epic narrative. Then the cumulative commonplaces begin to appear, separated out by a cataloging technique from poetic narrative, in sources such as the Theogony and Works and Days. Eric Havelock points out that in Hesiod, the catalog “has been isolated or abstracted . . . out of a thousand contexts in the rich reservoir of oral tradition. . . . A general world view is emerging in isolated or ‘abstracted’ form.” 13 Apparently, what we are witnessing is the emergence of logical thinking. Julian Jaynes describes the kind of thought to be found in the Works and Days as “preconscious hypostases.” Certain lines in Hesiod, he maintains, exhibit “some kind of bicameral struggle.” 14
The first stage, then, of rhetorical invention is that in which the analytic topoi are embedded in oral performance in the form of commonplace material as “relationships” in an undifferentiated matrix. Oral cultures preserve this knowledge by constantly repeating the fixed sayings and formulae. Mnemonic patterns, patterns of repetition, are not added to the thought of oral cultures. They are what the thought consists of.
What, then, may mental selves be good for and why have they emerged during evolution (or, perhaps, human evolution or even early human history)? Answers to these questions used to take the form of stories explaining how the mental self came about and what advantages were associated with it. In other words, these are theories that construct hypothetical scenarios offering plausible explanations for why certain (groups of) living things that initially do not possess a mental self gain fitness advantages when they develop such an entity—with the consequence that they move from what we can call a self-less to a self-based or “self-morphic” state.
Modules for such scenarios have been presented occasionally in recent years by, for example, Dennett, 1990 and Dennett, 1992, Donald (2001), Edelman (1989), Jaynes (1976), Metzinger, 1993 and Metzinger, 2003, or Mithen (1996). Despite all the differences in their approaches, they converge around a few interesting points. First, they believe that the transition between the self-less and self-morphic state occurred at some stage during the course of human history—and not before. Second, they emphasize the cognitive and dynamic advantages accompanying the formation of a mental self. And, third, they also discuss the social and political conditions that promote or hinder the constitution of this self-morphic state. In the scenario below, I want to show how these modules can be keyed together to form a coherent construction. […]
Thus, where do thoughts come from? Who or what generates them, and how are they linked to the current perceptual situation? This brings us to a problem that psychology describes as the problem of source attribution ( Heider, 1958).
One obvious suggestion is to transfer the schema for interpreting externally induced messages to internally induced thoughts as well. Accordingly, thoughts are also traced back to human sources and, likewise, to sources that are present in the current situation. Such sources can be construed in completely different ways. One solution is to trace the occurrence of thoughts back to voices—the voices of gods, priests, kings, or ancestors, in other words, personal authorities that are believed to have an invisible presence in the current situation. Another solution is to locate the source of thoughts in an autonomous personal authority bound to the body of the actor: the self.
These two solutions to the attribution problem differ in many ways: historically, politically, and psychologically. In historical terms, the former must be markedly older than the latter. The transition from one solution to the other and the mentalities associated with them are the subject of Julian Jaynes’s speculative theory of consciousness. He even considers that this transfer occurred during historical times: between the Iliad and the Odyssey. In the Iliad, according to Jaynes, the frame of mind of the protagonists is still structured in a way that does not perceive thoughts, feelings, and intentions as products of a personal self, but as the dictates of supernatural voices. Things have changed in the Odyssey: Odysseus possesses a self, and it is this self that thinks and acts. Jaynes maintains that the modern consciousness of Odysseus could emerge only after the self had taken over the position of the gods (Jaynes, 1976; see also Snell, 1975).
Moreover, it is obvious why the political implications of the two solutions differ so greatly: Societies whose members attribute their thoughts to the voices of mortal or immortal authorities produce castes of priests or nobles that claim to be the natural authorities or their authentic interpreters and use this to derive legitimization for their exercise of power. It is only when the self takes the place of the gods that such castes become obsolete, and authoritarian constructions are replaced by other political constructions that base the legitimacy for their actions on the majority will of a large number of subjects who are perceived to be autonomous.
Finally, an important psychological difference is that the development of a self-concept establishes the precondition for individuals to become capable of perceiving themselves as persons with a coherent biography. Once established, the self becomes involved in every re-presentation and representation as an implicit personal source, and just as the same body is always present in every perceptual situation, it is the same mental self that remains identical across time and place. […]
According to the cognitive theories of schizophrenia developed in the last decade (Daprati et al., 1997; Frith, 1992), these symptoms can be explained with the same basic pattern that Julian Jaynes uses in his theory to characterize the mental organization of the protagonists in the Iliad. Patients with delusions suffer from the fact that the standardized attribution schema that localizes the sources of thoughts in the self is not available to them. Therefore, they need to explain the origins of their thoughts, ideas, and desires in another way (see, e.g., Stephens & Graham, 2000). They attribute them to person sources that are present but invisible—such as relatives, physicians, famous persons, or extraterrestrials. Frequently, they also construct effects and mechanisms to explain how the thoughts proceeding from these sources are communicated, by, for example, voices or pictures transmitted over rays or wires, and nowadays frequently also over phones, radios, or computers. […]
As bizarre as these syndromes seem against the background of our standard concept of subjectivity and personhood, they fit perfectly with the theoretical idea that mental selves are not naturally given but rather culturally constructed, and in fact set up in, attribution processes. The unity and consistency of the self are not a natural necessity but a cultural norm, and when individuals are exposed to unusual developmental and life conditions, they may well develop deviant attribution patterns. Whether these deviations are due to disturbances in attribution to persons or to disturbances in dual representation cannot be decided here. Both biological and societal conditions are involved in the formation of the self, and when they take an unusual course, the causes could lie in both domains.
“The Varieties of Dissociative Experience”
by Stanley Krippner
from Broken Images Broken Selves: Dissociative Narratives In Clinical Practice
In his provocative description of the evolution of humanity’s conscious awareness, Jaynes (1976) asserted that ancient people’s “bicameral mind” enabled them to experience auditory hallucinations— the voices of the deities— but they eventually developed an integration of the right and left cortical hemispheres. According to Jaynes, vestiges of this dissociation can still be found, most notably among the mentally ill, the extremely imaginative, and the highly suggestible. Even before the development of the cortical hemispheres, the human brain had slowly evolved from a “reptilian brain” (controlling breathing, fighting, mating, and other fixed behaviors), to the addition of an “old-mammalian brain,” (the limbic system, which contributed emotional components such as fear, anger, and affection), to the superimposition of a “new-mammalian brain” (responsible for advanced sensory processing and thought processes). MacLean (1977) describes this “triune brain” as responsible, in part, for distress and inefficiency when the parts do not work well together. Both Jaynes’ and MacLean’s theories are controversial, but I believe that there is enough autonomy in the limbic system and in each of the cortical hemispheres to justify Ornstein’s (1986) conclusion that human beings are much more complex and intricate than they imagine, consisting of “an uncountable number of small minds” (p. 72), sometimes collaborating and sometimes competing. Donald’s (1991) portrayal of mental evolution also makes use of the stylistic differences of the cerebral hemisphere, but with a greater emphasis on neuropsychology than Jaynes employs. Mithen’s (1996) evolutionary model is a sophisticated account of how specialized “cognitive domains” reached the point that integrated “cognitive fluidity” (apparent in art and the use of symbols) was possible.
James (1890) spoke of a “multitude” of selves, and some of these selves seem to go their separate ways in posttraumatic stress disorder (PTSD) (see Greening, Chapter 5), dissociative identity disorder (DID) (see Levin, Chapter 6), alien abduction experiences (see Powers, Chapter 9), sleep disturbances (see Barrett, Chapter 10), psychedelic drug experiences (see Greenberg, Chapter 11), death terrors (see Lapin, Chapter 12), fantasy proneness (see Lynn, Pintar, & Rhue, Chapter 13), near-death experiences (NDEs) (see Greyson, Chapter 7), and mediumship (see Grosso, Chapter 8). Each of these conditions can be placed into a narrative construction, and the value of these frameworks has been described by several authors (e.g., Barclay, Chapter 14; Lynn, Pintar, & Rhue, Chapter 13; White, Chapter 4). Barclay (Chapter 14) and Powers (Chapter 15) have addressed the issue of narrative veracity and validation, crucial issues when stories are used in psychotherapy. The American Psychiatric Association’s Board of Trustees (1993) felt constrained to issue an official statement that “it is not known what proportion of adults who report memories of sexual abuse were actually abused” (p. 2). Some reports may be fabricated, but it is more likely that traumatic memories may be misconstrued and elaborated (Steinberg, 1995, p. 55). Much of the same ambiguity surrounds many other narrative accounts involving dissociation, especially those described by White (Chapter 4) as “exceptional human experiences.”
Nevertheless, the material in this book makes the case that dissociative accounts are not inevitably uncontrolled and dysfunctional. Many narratives considered “exceptional” from a Western perspective suggest that dissociation once served and continues to serve adaptive functions in human evolution. For example, the “sham death” reflex found in animals with slow locomotor abilities effectively offers protection against predators with greater speed and agility. Uncontrolled motor responses often allow an animal to escape from dangerous or frightening situations through frantic, trial-and-error activity (Kretchmer, 1926). Many evolutionary psychologists have directed their attention to the possible value of a “multimodular” human brain that prevents painful, unacceptable, and disturbing thoughts, wishes, impulses, and memories from surfacing into awareness and interfering with one’s ongoing contest for survival (Nesse & Lloyd, 1992, p. 610). Ross (1991) suggests that Western societies suppress this natural and valuable capacity at their peril.
The widespread prevalence of dissociative reactions argues for their survival value, and Ludwig (1983) has identified seven of them: (1) The capacity for automatic control of complex, learned behaviors permits organisms to handle a much greater work load in as smooth a manner as possible; habitual and learned behaviors are permitted to operate with a minimum expenditure of conscious control. (2) The dissociative process allows critical judgment to be suspended so that, at times, gratification can be more immediate. (3) Dissociation seems ideally suited for dealing with basic conflicts when there is no instant means of resolution, freeing an individual to take concerted action in areas lacking discord. (4) Dissociation enables individuals to escape the bounds of reality, providing for inspiration, hope, and even some forms of “magical thinking.” (5) Catastrophic experiences can be isolated and kept in check through dissociative defense mechanisms. (6) Dissociative experiences facilitate the expression of pent-up emotions through a variety of culturally sanctioned activities. (7) Social cohesiveness and group action often are facilitated by dissociative activities that bind people together through heightened suggestibility.
Each of these potentially adaptive functions may be life-depotentiating as well as life-potentiating; each can be controlled as well as uncontrolled. A critical issue for the attribution of dissociation may be the dispositional set of the experiencer-in-context along with the event’s adaptive purpose. Salamon (1996) described her mother’s ability to disconnect herself from unpleasant surroundings or facts, a proclivity that led to her ignoring the oncoming imprisonment of Jews in Nazi Germany but that, paradoxically, enabled her to survive her years in Auschwitz. Gergen (1991) has described the jaundiced eye that modern Western science has cast toward Dionysian revelry, spiritual experiences, mysticism, and a sense of bonded unity with nature, a hostility he predicts may evaporate in the so-called “postmodern” era, which will “open the way to the full expression of all discourses” (pp. 246–247). For Gergen, this postmodern lifestyle is epitomized by Proteus, the Greek sea god, who could change his shape from wild boar to dragon, from fire to flood, without obvious coherence through time. This is all very well and good, as long as this dissociated existence does not leave in its wake a residue of broken selves whose lives have lost any intentionality or meaning, who live in the midst of broken images, and whose multiplicity has resulted in nihilistic affliction and torment rather than in liberation and fulfillment (Glass, 1993, p. 59).
“Abstract words are ancient coins whose concrete images in the busy give-and-take of talk have worn away with use.”
~ Julian Jaynes, The Origin of Consciousness in the
Breakdown of the Bicameral Mind
“This blue was the principle that transcended principles. This was the taste, the wish, the Binah that understands, the dainty fingers of personality and the swirling fingerprint lines of individuality, this sigh that returns like a forgotten and indescribable scent that never dies but only you ever knew, this tingle between familiar and strange, this you that never there was word for, this identifiable but untransmittable sensation, this atmosphere without reason, this illicit fairy kiss for which you are more fool than sinner, this only thing that God and Satan mistakenly left you for your own and which both (and everyone else besides) insist to you is worthless— this, your only and invisible, your peculiar— this secret blue.”
~ Quentin S. Crisp, Blue on Blue
Perception is as much cognition as sensation. Colors don’t exist in the world; they are our brain’s way of processing the light waves detected by the eyes. Someone unable to see from birth will never be able to see normal colors, even if they gain sight as an adult. The brain has to learn how to see the world, and that is a process that happens primarily in infancy and childhood.
Radical questions follow from this insight. Do we experience blue, forgiveness, individuality, etc before our culture has the language for it? And, conversely, does the language we use and how we use it indicate our actual experience? Or does it filter and shape it? Did the ancients lack not only perceived blueness but also individuated/interiorized consciousness and artistic perspective because they had no way of communicating and expressing it? If they possessed such things as their human birthright, why did they not communicate them in their texts and show them in their art?
The most ancient people would refer to the sky as black. Some isolated people in more recent times have also been observed offering this same description. This apparently isn’t a strange exception. Guy Deutscher mentions that, in an informal color experiment, his young daughter once pointed to the “pitch-black sky late at night” and declared it blue—that was at the age of four, long after having learned the color names for blue and black. She had the language to make the distinction and yet she made a similar ‘mistake’ as some isolated island people. How could that be? Aren’t ‘black’ and ‘blue’ obviously different?
The ancients described physical appearances in some ways that seem bizarre to the modern sensibility. Homer says the sea appears something like wine and so do sheep. Or else the sea is violet, just as are oxen and iron. Even more strangely, green is the color of honey and the color human faces turn under emotional distress. Yet nowhere in the ancient world is anything blue, for no word for it existed. Things that seem blue to us are either green, black, or simply dark in ancient texts.
It has been argued that Homer’s language, such as his word for ‘bronze’, might not have referred to color at all. But that just adds to the strangeness. We can’t determine what colors he might have been referring to, or even whether he was describing colors at all. There were no abstractly generalized terms dedicated exclusively to color; the same words also described other physical features, psychological experiences, and symbolic values. This might imply that synesthesia was once a more common experience, related to the greater capacity preliterate individuals had for memorizing vast amounts of information (see Knowledge and Power in Prehistoric Societies by Lynne Kelly).
The paucity and confusion of ancient color language indicates that color wasn’t perceived as all that significant, to the degree it was consciously perceived at all, at least not in the way we moderns think about it. Color hue might not have seemed all that relevant in an ancient world that was mostly lacking artificially colored objects and entirely lacking bright garden flowers. Besides the ancient Egyptians, no one in the earliest civilizations had developed blue pigment, and hence no word to describe it. Blue is a rare color in nature. Even water and the sky are rarely a bright clear blue, when blue at all.
This isn’t just about color. There is something extremely bizarre going on here, at least according to what we moderns assume to be the case about the human mind and perception.
Consider the case of the Piraha, as studied by Daniel L. Everett (a man who personally understands the power of their cultural worldview). The Piraha have no color terms, at least not as single words, although they are able to describe colors using multiple words and concrete comparisons—such as red described as being like blood, or green as being like what is not yet ripe. Of course, they’ve been in contact with non-Piraha for a while now, and so no one knows how they would’ve talked about colors before interaction with outsiders.
From a Western perspective, there are many other odd things about the Piraha. Their language does not fit the expectations of what many have thought as universal to all human language. They have no terms for numbers and counting, as well as no “quantifiers like all, each, every, and so on” (Everett, Don’t Sleep, There Are Snakes, p. 119). Originally, they had no pronouns and the pronouns they borrowed from other languages are used limitedly. They refer to ‘say’ in place of ‘think’, which makes one wonder what this indicates about their experience—is their thought an act of speaking?
Along with lacking ancestor worship, they don’t even have words to refer to family members they never personally knew. There are also no creation stories or myths or fiction, nor any apparent notion of the world having been created or of another supernatural world existing. They don’t think in those terms nor, one might presume, perceive reality in those terms. They are epistemological agnostics about anything that neither they nor someone they personally know has directly experienced, and their language is extremely precise in knowledge claims, making early Western philosophers seem simpleminded in comparison. Everett was put in the unfortunate position of having tried to convert them to Christianity, but instead they converted him to atheism. Yet the Piraha live in a world they perceive as filled with spirits. These aren’t otherworldly spirits. They are very much in this world, and when a Piraha speaks as a spirit, they are that spirit. To put it another way, the world is full of diverse and shifting selves.
Color terms refer to abstract, unchanging categories, the very thing that seems least relevant to the Pirahã. They favor a subjective mentality, but that doesn't mean they possess a subjective self similar to that of Western culture. Like many hunter-gatherers, they have a fluid sense of identity that changes along with their names, the former self treated as no longer existing whatsoever, just gone. There is no evidence of belief in a constant self that would survive death, as there is no belief in gods nor in a heaven and hell. Instead of being obsessed with what is beyond, they are endlessly fascinated by what is at the edge of experience, what appears and disappears. In Cultural Constraints on Grammar and Cognition in Pirahã, Everett explains this:
“After discussions and checking of many examples of this, it became clearer that the Pirahã are talking about liminality—situations in which an item goes in and out of the boundaries of their experience. This concept is found throughout Pirahã culture. Pirahã’s excitement at seeing a canoe go around a river bend is hard to describe; they see this almost as traveling into another dimension. It is interesting, in light of the postulated cultural constraint on grammar, that there is an important Pirahã term and cultural value for crossing the border between experience and nonexperience.”
To speak of colors is to speak of particular kinds of perceptions and experiences. The Pirahã culture is practically incomprehensible to us, as the Pirahã represent an alien view of the world. Everett concludes:
“Piraha thus provides striking evidence for the influence of culture on major grammatical structures, contradicting Newmeyer’s (2002:361) assertion (citing “virtually all linguists today”), that “there is no hope of correlating a language’s gross grammatical properties with sociocultural facts about its speakers.” If I am correct, Piraha shows that gross grammatical properties are not only correlated with sociocultural facts but may be determined by them.”
Even so, Everett is not arguing for a strong Whorfian position of linguistic determinism. Then again, Vyvyan Evans states that not even Benjamin Lee Whorf made this argument. In Language, Thought and Reality, Whorf wrote (as quoted by Evans in The Language Myth):
“The tremendous importance of language cannot, in my opinion, be taken to mean necessarily that nothing is back of it of the nature of what has traditionally been called ‘mind’. My own studies suggest, to me, that language, for all its kingly role, is in some sense a superficial embroidery upon deeper processes of consciousness, which are necessary before any communication, signalling, or symbolism whatsoever can occur.”
Anyway, Everett observed that the Pirahã did show patterns in how they linguistically treated certain hues. It's just that there was much diversity and complexity in how they described colors (a dark brown object being described differently from a dark-skinned person), and no consistency among Pirahã speakers in which phrases they used for which colors. Still, like any other humans, they have the capacity for color perception, whether or not their color cognition matches that of other cultures.
To emphasize the point, the following is a similar example, as presented by Vyvyan Evans in The Language Myth (pp. 207-8):
“The colour system in Yélî Dnye has been studied extensively by linguistic anthropologist Stephen Levinson. Levinson argues that the lesson from Rossel Island is that each of the following claims made by Berlin and Kay is demonstrably false:
Claim 1: All languages have basic colour terms
Claim 2: The colour spectrum is so salient a perceptual field that all cultures must systematically and exhaustively name the colour space
Claim 3: For those basic colour terms that exist in any given language, there are corresponding focal colours – there is an ideal hue that is the prototypical shade for a given basic colour term
Claim 4: The emergence of colour terms follows a universal evolutionary pattern
“A noteworthy feature of Rossel Island culture is this: there is little interest in colour. For instance, there is no native artwork or handiwork in colour. The exception to this is hand-woven patterned baskets, which are usually uncoloured, or, if coloured, are black or blue. Moreover, the Rossel language doesn’t have a word that corresponds to the English word colour: the domain of colour appears not to be a salient conceptual category independent of objects. For instance, in Yélî, it is not normally possible to ask what colour something is, as one can in English. Levinson reports that the equivalent question would be: U pââ ló nté? This translates as “Its body, what is it like?” Furthermore, colours are not usually associated with objects as a whole, but rather with surfaces.”
Evans goes into greater detail. Suffice it to say, he makes a compelling argument that this example contradicts and falsifies the main claims of the conventional theory, specifically that of Berlin and Kay. This culture defies expectations. It's one of the many exceptions that appear to disprove the hypothetical rule.
Part of the challenge is that we can't study other cultures as neutral observers. Researchers end up influencing the cultures they study, or else simply projecting their own cultural biases onto them and interpreting the results accordingly. Even the tests used to analyze color perception across cultures are themselves culturally biased. They don't just measure how people divide up hues; in the process of being tested, the subjects are being taught, by the design of the test, a particular way of thinking about color perception. The test can't tell us how people thought about colors prior to the test itself. And obviously, even if the test could accomplish this impossible feat, we have no way of traveling back in time to apply it to ancient people.
We are left with a mystery and no easy way to explore it.
* * *
Here are a few related posts of mine. And below that are other sources of info, including a video at the very bottom.
Since there is no evidence that any language forbids its speakers to think anything, we must look in an entirely different direction to discover how our mother tongue really does shape our experience of the world. Some 50 years ago, the renowned linguist Roman Jakobson pointed out a crucial fact about differences between languages in a pithy maxim: “Languages differ essentially in what they must convey and not in what they may convey.” This maxim offers us the key to unlocking the real force of the mother tongue: if different languages influence our minds in different ways, this is not because of what our language allows us to think but rather because of what it habitually obliges us to think about. […]
For many years, our mother tongue was claimed to be a “prison house” that constrained our capacity to reason. Once it turned out that there was no evidence for such claims, this was taken as proof that people of all cultures think in fundamentally the same way. But surely it is a mistake to overestimate the importance of abstract reasoning in our lives. After all, how many daily decisions do we make on the basis of deductive logic compared with those guided by gut feeling, intuition, emotions, impulse or practical skills? The habits of mind that our culture has instilled in us from infancy shape our orientation to the world and our emotional responses to the objects we encounter, and their consequences probably go far beyond what has been experimentally demonstrated so far; they may also have a marked impact on our beliefs, values and ideologies. We may not know as yet how to measure these consequences directly or how to assess their contribution to cultural or political misunderstandings. But as a first step toward understanding one another, we can do better than pretending we all think the same.
Even things that seem objectively true may only seem so if we’ve been given a framework with which to see it; even the idea that a thing is a thing at all, in fact, is partly a cultural construction. There are other examples of this phenomenon. What we call “red onions” in the U.S., for another example, are seen as blue in parts of Germany. Likewise, optical illusions that consistently trick people in some cultures — such as the Müller-Lyer illusion — don’t often trick people in others.
It wasn’t just the ‘wine-dark sea’. That epithet oinops, ‘wine-looking’ (the version ‘wine-dark’ came from Andrew Lang’s later translation) was applied both to the sea and to oxen, and it was accompanied by other colours just as nonsensical. ‘Violet’, ioeis, (from the flower) was used by Homer of the sea too, but also of wool and iron. Chloros, ‘green’, was used of honey, faces and wood. By far the most common colour words in his reticent vocabulary were black (170 times) and white (100), followed distantly by red (13).
What could account for this alien colour-sense? It wasn’t that Homer (if Homer existed) was blind, for there are parallel usages in other Greek authors.
The image Homer hoped to conjure with his winelike sea greatly depended upon what wine meant to his audience. While the Greeks likely knew of white wine, most ancient wine was red, and in the Homeric epics, red wine is the only wine specifically described. Drunk at feasts, poured onto the earth in sacred rituals, or onto the ashes around funeral pyres, Homeric wine is often mélas, “dark,” or even “black,” a term with broad application, used of a brooding spirit, anger, death, ships, blood, night, and the sea. It is also eruthrós, meaning “red” or the tawny-red hue of bronze; and aíthops, “bright,” “gleaming,” a term also used of bronze and of smoke in firelight. While these terms notably have more to do with light, and the play of light, than with color proper, Homeric wine was clearly dark and red and would have appeared especially so when seen in the terracotta containers in which it was transported. “Winelike sea” cannot mean clear seawater, nor the white splash of sea foam, nor the pale color of a clear sea lapping the shallows of a sandy shore. […]
Homer’s sea, whether háls, thálassa, or póntos, is described as misty, darkly troubled, black-dark, and grayish, as well as bright, deep, clashing, tumultuous, murmuring, and tempestuous—but it is never blue. The Greek word for blue, kuáneos, was not used of the sea until the late sixth or early fifth century BC, in a poem by the lyric poet Simonides—and even here, it is unclear if “blue” is strictly meant, and not, again, “dark”:
the fish straight up from the
dark/blue water leapt
at the beautiful song
After Simonides, the blueness of kuáneos was increasingly asserted, and by the first century, Pliny the Elder was using the Latin form of the word, cyaneus, to describe the cornflower, whose modern scientific name, Centaurea cyanus, still preserves this lineage. But for Homer kuáneos is “dark,” possibly “glossy-dark” with hints of blue, and is used of Hector’s lustrous hair, Zeus’ eyebrows, and the night.
Ancient Greek words for color in general are notoriously baffling: In The Iliad, “chlorós fear” grips the armies at the sound of Zeus’ thunder. The word, according to R. J. Cunliffe’s Homeric lexicon, is “an adjective of color of somewhat indeterminate sense” that is “applied to what we call green”—which is not the same as saying it means “green.” It is also applied “to what we call yellow,” such as honey or sand. The pale green, perhaps, of vulnerable shoots struggling out of soil, the sickly green of men gripped with fear? […]
Rather than being ignorant of color, it seems that the Greeks were less interested in and attentive to hue, or tint, than they were to light. As late as the fourth century BC, Plato named the four primary colors as white, black, red, and bright, and in those cases where a Greek writer lists colors “in order,” they are arranged not by the Newtonian colors of the rainbow—red, orange, yellow, green, blue, indigo, violet—but from lightest to darkest. And The Iliad contains a broad, specialized vocabulary for describing the movement of light: argós meaning “flashing” or “glancing white”; aiólos, “glancing, gleaming, flashing,” or, according to Cunliffe’s Lexicon, “the notion of glancing light passing into that of rapid movement,” and the root of Hector’s most defining epithet, koruthaíolos—great Hector “of the shimmering helm.” Thus, for Homer, the sky is “brazen,” evoking the glare of the Aegean sun and more ambiguously “iron,” perhaps meaning “burnished,” but possibly our sense of a “leaden” sky. Significantly, two of the few unambiguous color terms in The Iliad, and which evoke the sky in accordance with modern sensibilities, are phenomena of light: “Dawn robed in saffron” and dawn shining forth in “rosy fingers of light.”
So too, on close inspection, Homeric terms that appear to describe the color of the sea, have more to do with light. The sea is often glaukós or mélas. In Homer, glaukós (whence glaucoma) is color neutral, meaning “shining” or “gleaming,” although in later Greek it comes to mean “gray.” Mélas (whence melancholy) is “dark in hue, dark,” sometimes, perhaps crudely, translated as “black.” It is used of a range of things associated with water—ships, the sea, the rippled surface of the sea, “the dark hue of water as seen by transmitted light with little or no reflection from the surface.” It is also, as we have seen, commonly used of wine.
So what color is the sea? Silver-pewter at dawn; gray, gray-blue, green-blue, or blue depending on the particular day; yellow or red at sunset; silver-black at dusk; black at night. In other words, no color at all, but rather a phenomenon of reflected light. The phrase “winelike,” then, had little to do with color but must have evoked some attribute of dark wine that would resonate with an audience familiar with the sea—with the póntos, the high sea, that perilous path to distant shores—such as the glint of surface light on impenetrable darkness, like wine in a terracotta vessel. Thus, when Achilles, “weeping, quickly slipping away from his companions, sat/on the shore of the gray salt sea,” stretches forth his hands toward the oínopa pónton, he looks not on the enigmatic “wine-dark sea,” but, more explicitly, and possibly with more weight of melancholy, on a “sea as dark as wine.”
In his writings Homer surprises us by his use of color. His color-descriptive palette was limited to metallic colors, black, white, yellowish green and purplish red, and those colors he often used oddly, leaving us with some questions as to his actual ability to see colors properly (1). He calls the sky “bronze” and the sea and sheep the color of wine, and he applies the adjective chloros (meaning green in our understanding) to honey and a nightingale (2). Chloros is not the only color that Homer uses in this unusual way. He also uses kyanos oddly: “Hector was dragged, his kyanos hair was falling about him” (3). Here it would seem, to our understanding, that Hector’s hair was blue, as we associate the term kyanos with the semi-precious stone lapis lazuli; in our thinking kyanos means cyan (4). But we cannot assume that Hector’s hair was blue. Rather, in light of the way that Homer consistently uses color adjectives, we must think about his meaning: did he indeed see honey as green, did he not see the ocean as blue, and how does his perception of color reflect on himself, his people, and his world?
Homer’s odd color usage was a cultural phenomenon and not simply color blindness on his part: Pindar describes the dew as chloros, and in Euripides chloros describes blood and tears (5). Empedocles, one of the earliest Ancient Greek color theorists, described color as falling into four areas: light or white, black or dark, red, and yellow; Xenophanes described the rainbow as having three bands of color: purple, green/yellow, and red (6). These colors are fairly consistent with the four colors used by Homer in his color descriptions, which leads us to the conclusion that all Ancient Greeks saw color only within the premise of Empedocles’ colors; in some way they lacked the ability to perceive the whole color spectrum. […]
This inability to perceive something because of linguistic restriction is called linguistic relativity (7). Because the Ancient Greeks were not really conscious of seeing, and did not have the words to describe what they unconsciously saw, they simply did not see the full spectrum of color; they were limited by linguistic relativity.
The color spectrum aside, it remains to explain the loose and unconventional application of Homer’s and others’ limited color descriptions; for an answer we look to the work of Eleanor Irwin. In her work, Irwin suggests that besides perceiving less chromatic distinction, the Ancient Greeks perceived less division between color, texture, and shadow; chroma may have been difficult for them to isolate (8). For the Ancient Greeks, the term chloros has been suggested to mean moistness, fluidity, freshness, and living (9). It also seems likely that Ancient Greek perception of color was influenced by the qualities they associated with colors; for instance, the different temperaments associated with colors probably affected the way they applied color descriptions to things. They didn’t simply see color as a surface; they saw it as a spirited thing, and the word for it was often fittingly applied as an adjective meaning something related to the color itself but different from the simplicity of a refined color.
Homer’s descriptions of color in The Iliad and The Odyssey, taken literally, paint an almost psychedelic landscape: in addition to the sea, sheep were also the color of wine; honey was green, as were the fear-filled faces of men; and the sky is often described as bronze. […]
The conspicuous absence of blue is not limited to the Greeks. The color “blue” appears not once in the New Testament, and its appearance in the Torah is questioned (there are two words argued to be types of blue, sappir and tekeleth, but the latter appears to be arguably purple, and neither color is used, for instance, to describe the sky). Ancient Japanese used the same word for blue and green (青 Ao), and even modern Japanese describes, for instance, thriving trees as being “very blue,” retaining this artifact (青々とした: meaning “lush” or “abundant”). […]
Blue certainly existed in the world, even if it was rare, and the Greeks must have stumbled across it occasionally even if they didn’t name it. But the thing is, if we don’t have a word for something, it turns out that to our perception—which becomes our construction of the universe—it might as well not exist. Specifically, neuroscience suggests that it might not just be “good or bad” for which “thinking makes it so,” but quite a lot of what we perceive.
The malleability of our color perception can be demonstrated with a simple diagram, shown here as figure six, “Afterimages”. The more our photoreceptors are exposed to the same color, the more fatigued they become, eventually giving out entirely and creating a reversed “afterimage” (yellow becomes blue, red becomes green). This is really just a parlor trick of sorts, and more purely physical, but it shows how easily shifted our vision is; other famous demonstrations like this selective attention test (its name gives away the trick) emphasize the power our cognitive functions have to suppress what we see. Our brains are pattern-recognizing engines, built around identifying things that are useful to us and discarding the rest of what we perceive as meaningless noise. (And a good thing that they do; deficiencies in this filtering, called sensory gating, are some of what cause neurological dysfunctions such as schizophrenia and autism.)
This suggests the possibility that not only did Homer lack a word for what we know as “blue”—he might never have perceived the color itself. To him, the sky really was bronze, and the sea really was the same color as wine. And because he lacked the concept “blue”—therefore its perception—to him it was invisible, nonexistent. This notion of concepts and language limiting cognitive perception is called linguistic relativism, and is typically used to describe the ways in which various cultures can have difficulty recalling or retaining information about objects or concepts for which they lack identifying language. Very simply: if we don’t have a word for it, we tend to forget it, or sometimes not perceive it at all. […]
So, if we’re all synesthetes, and our minds are extraordinarily plastic, capable of reorienting our entire perception around the addition of a single new concept (“there is a color between green and violet,” “schizophrenia is much more common than previously believed”), the implications of Homer’s wine-dark sea are rich indeed.
We are all creatures of our own time, our realities framed not by the limits of our knowledge but by what we choose to perceive. Do we yet perceive all the colors there are? What concepts are hidden from us by the convention of our language? When a noblewoman of Syracuse looked out across the Mare Siculum, did she see waves of Bacchanalian indigo beneath a sunset of hammered bronze? If a seagull flew east toward Thapsus, did she think of Venus and the fall of Troy?
The myriad details that define our everyday existence may define also the boundaries of our imagination, and with it our dreams, our ethics. We are lenses moving through time, beings of color and shadow.
Why were black, white, and red the first colors to be perceived by our forefathers? The evolutionary explanation is quite straightforward: ancient humans had to distinguish between night and day. And red is important for recognizing blood and danger. Even today, in us moderns, the color red causes an increase in skin galvanic response, a sign of tension and alarm. Green and yellow entered the vocabulary as the need to distinguish ripe fruit from unripe, grasses that are green from grasses that are wilting, etc. But what is the need for naming the color blue? Blue fruits are not very common, and the color of the sky is not really vital for survival.
Some languages have just three basic colors, others have 4, 5, 6, and so on. There’s even a debate as to whether the Pirahã tribe of the Amazon have any specialized color words at all! (If you ask a Pirahã tribe member to label something red, they’ll say that it’s blood-like).
But there’s still a pattern hidden in this diversity. […] You start with a black-and-white world of darks and lights. There are warm colors, and cool colors, but no finer categories. Next, the reds and yellows separate away from white. You can now have a color for fire, or the fiery color of the sunset. There are tribes that have stopped here. Further down, blues and greens break away from black. Forests, skies, and oceans now come of their own in your visual vocabulary. Eventually, these colors separate further. First, red splits from yellow. And finally, blue from green.
The researchers found that there is a real, measurable difference in how we perform on these two tasks. In general, it takes less time to identify the odd blue square than the odd green one. This makes sense to anyone who’s ever tried looking for a tennis ball in the grass. It’s not that hard, but I’d rather the ball be blue. In one case you are jumping categories (blue versus green), and in the other, staying within a category (green versus green).
However, and this is where things start to get a bit strange, this result only holds if the differently colored square was in the right half of the circle. If it was in the left half (as in the example images above), then there’s no difference in reaction times – it takes just as long to spot the odd blue as the odd green. It seems that color categories only matter in the right half of your visual field! […]
The crucial point is that everything that we see in the right half of our vision is processed in the left hemisphere of our brain, and everything we see in the left half is processed by the right hemisphere. And for most of us, the left brain is stronger at processing language. So perhaps the language savvy half of our brain is helping us out. […]
But how do we know that language is the key here? Back to the previous study. The researchers repeated the color circle experiment, but this time threw in a verbal distraction. The subjects were asked to memorize a word before each color test. The idea was to keep their language circuits distracted. And at the same time, other subjects were shown an image to memorize, not a word. In this case, it’s a visual distraction, and the language part of the brain needn’t be disturbed.
They found that when you’re verbally distracted, it suddenly becomes harder to separate blue from green (you’re slower at straddling color categories). In fact, the results showed that people found this more difficult than separating two shades of green. However, if the distraction is visual, not verbal, things are different. It becomes easy to spot the blue among green, so you’re faster at straddling categories.
All of this is only true for your left brain. Meanwhile, your right brain is rather oblivious to these categories (until, of course, the left brain bothers to inform it). The conclusion is that language is somehow enhancing your left brain’s ability to discern different colors with different names. Cultural forces alter our perception in ever so subtle a way, by gently tugging our visual leanings in different directions.
In a category-learning paradigm, there was no evidence that Himba participants perceived the blue–green region of color space in a categorical manner. Like Berinmo speakers, they did not find this division easier to learn than an arbitrary one in the center of the green category. There was also a significant advantage for learning the dumbu-burou division over the yellow-green division. It thus appears that CP for color category boundaries is tightly linked to the linguistic categories of the participant.
Two experiments attempted to reconcile discrepant recent findings relating to children’s color naming and categorization. In a replication of Franklin and colleagues (Journal of Experimental Child Psychology, 90 (2005) 114–141), Experiment 1 tested English toddlers’ naming and memory for blue–green and blue–purple colors. It also found advantages for between-category presentations that could be interpreted as support for universal color categories. However, a different definition of knowing color terms led to quite different conclusions in line with the Whorfian view of Roberson and colleagues (Journal of Experimental Psychology: General, 133 (2004) 554–571). Categorical perception in recognition memory was now found only for children with a fuller understanding of the relevant terms. It was concluded that color naming can both underestimate and overestimate toddlers’ knowledge of color terms. Experiment 2 replicated the between-category recognition superiority found in Himba children by Franklin and colleagues for the blue–purple range. But Himba children, whose language does not have separate terms for green and blue, did not show an across-category advantage for that set; rather, they behaved like English children who did not know their color terms.
It’s interesting that the Berinmo and Himba tribes have the same number of color terms, as well, because that rules out one possible alternative explanation of their data. It could be that as languages develop, they develop a more sophisticated color vocabulary, which eventually approximates the color categories that are actually innately present in our visual systems. We would expect, then, that two languages at similar levels of development (in other words, with the same number of color categories) would exhibit similar effects, but the speakers of the two languages remembered and perceived the colors differently. Thus it appears that languages do not develop toward any single set of universal color categories. In fact, Roberson et al. (2004) reported a longitudinal study that implies that exactly the opposite may be the case. They found that children in the Himba tribe, and English-speaking children in the U.S., initially categorized color chips in a similar way, but as they grew older and more familiar with the color terms of their languages, their categorizations diverged and became more consistent with their color names. This is particularly strong evidence that color names affect color concepts.
The children of the Himba were able to differentiate between many more shades of green than their English counterparts, but did not recognize the color blue as being distinct from green. The research found that the 11 basic English colors have no basis in the visual system, lending further credence to the linguistic theories of Deutscher, Geiger, Gladstone, and other academics.
This is a group of people in Namibia who were asked to do some color matching and similarity judgments for us. It’s a remote part of the world, but not quite so remote that somebody hasn’t got the t-shirt, but it’s pretty remote. That’s the sort of environment they live in, and these are the youngsters that I’m going to show you some particular data on. They are completely monolingual in their own language, which has a tremendous richness in certain types of terms, in cattle terms (I can’t talk about that now), but has a dramatic lack in color terms. They’ve only got five color terms. So all of the particular colors of the world, and this is an illustration which can go from white to black at the top, red to yellow, green, blue, purple, back to red again, if this was shown in terms of the whole colors of the spectrum, but they only have five terms. So they see the world as, perhaps differently than us, perhaps slightly plainer. So we looked at these young children, and we showed them a navy blue color at the top and we asked them to point to the same color again from another group of colors. And those colors included the correct color, but of course sometimes the children made mistakes. What I want to show was that the English children and the Himba children, these people are the Himba of Northwest Namibia, start out from the same place, they have this undefined color space in which, at the beginning of the testing, T1, they make errors in choosing the navy blue, sometimes they’ll choose the blue, sometimes they’ll choose the black, sometimes they’ll choose the purple. Now the purple one, actually if you did a spectral analysis, the blue and the purple, the one on the right, are the closest. And as you can see, as the children got older, the most common error, both for English children and the Himba children, is the increase (that’s going up on the graph) of the purple mistakes. But, their language, the Himba language, has the same word for blue as for black. 
We, of course, have the same word for the navy blue as the blue on the left, only as the children get older, three or four, the English children only ever confuse the navy blue with the blue on the left, whereas the Himba children confuse the navy blue with the black. So, what’s happening? Someone asked yesterday whether the Sapir-Whorf hypothesis had any currency. Well, if it has a little bit of currency, it has it certainly here, in that what is happening, because the names of colors mean different things in the different cultures, because blue and black are the same in the Himba language, the actual similarity does seem to have been altered in the pictorial register. So, the blues that we call blue, and the claim is that there is no natural category called blue, they were just sensations we want to group together, those natural categories don’t exist. But because we have constructed these categories, blues look more similar to us in the pictorial register, whereas to these people in Northwest Namibia, the blues and the blacks look more similar. So, in brief, I’d like to further add more evidence or more claim that we are constructing the world of colors and in some way at least our memory structures do alter, to a modest extent at least, what we’re seeing.
Not only has no evidence emerged to link the 11 basic English colors to the visual system, but the English-Himba data support the theory that color terms are learned relative to language and culture.
First, for children who didn’t know color terms at the start of the study, the pattern of memory errors in both languages was very similar. Crucially, their mistakes were based on perceptual distances between colors rather than a given set of predetermined categories, arguing against an innate origin for the 11 basic color terms of English. The authors write that an 11-color organization may have become common because it efficiently serves cultures with a greater need to communicate more precisely. Still, they write, “even if [it] were found to be optimal and eventually adopted by all cultures, it need not be innate.”
Second, the children in both cultures didn’t acquire color terms in any particular, predictable order–such as the universalist idea that the primary colors of red, blue, green and yellow are learned first.
Third, the authors say that as both Himba and English children started learning their cultures’ color terms, the link between color memory and color language increased. Their rapid perceptual divergence once they acquired color terms strongly suggests that cognitive color categories are learned rather than innate, according to the authors.
The study also spotlights the power of psychological research conducted outside the lab, notes Barbara Malt, PhD, a cognitive psychologist who studies language and thought and also chairs the psychology department at Lehigh University.
“To do this kind of cross-cultural work at all requires a rather heroic effort, [which] psychologists have traditionally left to linguists and anthropologists,” says Malt. “I hope that [this study] will inspire more cognitive and developmental psychologists to go into the field and pursue these kinds of comparisons, which are the only way to really find out which aspects of perception and cognition are universal and which are culture or language specific.”
Another study by MIT scientists in 2007 showed that native Russian speakers, who don’t have one single word for blue, but instead have a word for light blue (goluboy) and dark blue (siniy), can discriminate between light and dark shades of blue much faster than English speakers.
This all suggests that, until they had a word for it, it’s likely that our ancestors didn’t see blue at all. Or, more accurately, they probably saw it as we do now, but they never really noticed it.
MRI experiments confirm that people who process color through their verbal left brains, where the names of colors are accessed, recognize them more quickly. Language molds us into the image of the culture in which we are born.
Both adults and infants are faster at discriminating between two colors from different categories than two colors from the same category, even when between- and within-category chromatic separation sizes are equated. For adults, this categorical perception (CP) is lateralized; the category effect is stronger for the right visual field (RVF)–left hemisphere (LH) than the left visual field (LVF)–right hemisphere (RH). Converging evidence suggests that the LH bias in color CP in adults is caused by the influence of lexical color codes in the LH. The current study investigates whether prelinguistic color CP is also lateralized to the LH by testing 4- to 6-month-old infants. A colored target was shown on a differently colored background, and time to initiate an eye movement to the target was measured. Target background pairs were either from the same or different categories, but with equal target-background chromatic separations. Infants were faster at initiating an eye movement to targets on different-category than same-category backgrounds, but only for targets in the LVF–RH. In contrast, adults showed a greater category effect when targets were presented to the RVF–LH. These results suggest that whereas color CP is stronger in the LH than RH in adults, prelinguistic CP in infants is lateralized to the RH. The findings suggest that language-driven CP in adults may not build on prelinguistic CP, but that language instead imposes its categories on a LH that is not categorically prepartitioned.
In this study we demonstrate that Korean (but not English) speakers show Categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is unique to Korean, these results are not consistent with a suggestion made by Drivonikou [Drivonikou, G. V., Kay, P., Regier, T., Ivry, R. B., Gilbert, A. L., Franklin, A. et al. (2007) Further evidence that Whorfian effects are stronger in the right visual field than in the left. Proceedings of the National Academy of Sciences 104, 1097–1102] that CP effects in the left visual field provide evidence for the existence of a set of universal colour categories. Dividing Korean participants into fast and slow responders demonstrated that fast responders show CP only in the right visual field while slow responders show CP in both visual fields. We argue that this finding is consistent with the view that CP in both visual fields is verbally mediated by the left hemisphere language system.
The other, The Unfolding of Language (2005), deals with the actual evolution of language. […]
Yet, while erosion occurs there is also a creative force in the human development of language. That creativity is revealed in our unique capacity for metaphor. “…metaphor is the indispensable element in the thought-process of every one of us.” (page 117) “It transpired that metaphor is an essential tool of thought, an indispensable conceptual mechanism which allows us to think of abstract concepts in terms of simpler concrete things. It is, in fact, the only way we have of dealing with abstraction.” (page 142) […]
The use of what can be called ‘nouns’ and not just ‘things’ is a fairly recent occurrence in language, reflecting a shift in human experience. This is a ‘fossil’ of linguistics. “The flow from concrete to abstract has created many words for concepts that are no longer physical objects, but nonetheless behave like thing-words in the sentence. The resulting abstract concepts are no longer thing-words, but they inherit their distribution from the thing-words that gave rise to them. A new category of words has thus emerged…which we can now call ‘noun’.” (page 246)
The way language is used, its accepted uses by people through understood rules of grammar, is the residue of collective human experience. “The grammar of a language thus comes to code most compactly and efficiently those constructions that are used most frequently…grammar codes best what it does most often.” (page 261) This is centrally why I hold the grammar of language to be almost a sacred portal into human experience.
In the 2010 work, Deutscher’s emphasis shifts to why different languages reveal that humans actually experience life differently. We do not all feel and act the same way about the things of life. My opinion is that it is a mistake to believe “humanity” thinks, feels and experiences to a high degree of similarity. The fact is, language shows that, as humanity diversified across the earth, it developed a multitude of diverse ways of experiencing.
First of all, “…a growing body of reliable scientific research provides solid evidence that our mother tongue can affect how we think and perceive the world.” (page 7) […]
The author does not go as far as me, nor is he as blunt; I am interjecting much of my personal beliefs in here. Still, “…fundamental aspects of our thought are influenced by cultural conventions of our society, to a much greater extent than is fashionable to admit today….what we find ‘natural’ depends largely on the conventions we have been brought up on.” (page 233) There are clear echoes of Nietzsche in here.
The conclusion is that “habits of speech can create habits of mind.” So, language affects culture fundamentally. But, this is a reciprocal arrangement. Language changes due to cultural experience yet cultural experience is affected by language.
In Through the Language Glass, Guy Deutscher addresses the question as to whether the natural language we speak will have an influence on our thought and our perception. He focuses on perceptions, and specifically the perceptions of colours and perceptions of spatial relations. He is very dismissive of the Sapir-Whorf hypothesis and varieties of linguistic relativity which would say that if the natural language we speak is of a certain sort then we cannot have certain types of concepts or experiences. For example, a proponent of this type of linguistic relativity might say that if your language does not have a word for the colour blue then you cannot perceive something as blue. Nonetheless, Deutscher argues that the natural language we speak will have some influence on how we think and see the world, giving several examples, many of which are fascinating. However, I believe that several of his arguments that dismiss views like the Sapir-Whorf hypothesis are based on serious misunderstandings.
The view that language is the medium in which conceptual thought takes place has a long history in philosophy, and this is the tradition out of which the Sapir-Whorf hypothesis was developed. […]
It is important to note that in this tradition the relation between language and conceptual thought is not seen as one in which the ability to speak a language is one capacity and the ability to think conceptually a completely separate faculty, and in which the first merely has a causal influence on the other. It is rather the view that the ability to speak a language makes it possible to think conceptually and that the ability to speak a language makes it possible to have perceptions of certain kinds, such as those in which what is perceived is subsumed under a concept. For example, it might be said that without language it is possible to see a rabbit but not possible to see it as a rabbit (as opposed to a cat, a dog, a squirrel, or any other type of thing). Thus conceptual thinking and perceptions of these types are seen not as separate from language and incidentally influenced by it but dependent on language and taking their general form from language. This does not mean that speech or writing must be taking place every time a person thinks in concepts or has these types of perception, though. To think that it must is a misunderstanding essentially the same as a common misinterpretation of Kant, which I will discuss in more detail in a later post.
While I take this to be the idea behind the Sapir-Whorf hypothesis, Deutscher evidently interprets that hypothesis as a very different kind of view. According to this view, the ability to speak a language is separate from the ability to think conceptually and from the ability to have the kinds of perceptions described above and it merely influences such thought and perception from without. Furthermore, it is not a relation in which language makes these types of thought and perception possible but one in which thought and perception are actually constrained by language. This interpretation runs through all of Deutscher’s criticisms of linguistic relativity. […]
Certainly many questionable assertions have been made based on the premise that language conditions the way that we think. Whorf apparently made spurious claims about Hopi conceptions of time. Today a great deal of dubious material is being written about the supposed influence of the internet and hypertext media on the way that we think. This is mainly inspired by Marshall McLuhan but generally lacking his originality and creativity. Nevertheless, there have been complex and sophisticated versions of the idea that the natural language that we speak conditions our thought and our perceptions, and these deserve serious attention. There are certainly more complex and sophisticated versions of these ideas than the crude caricature that Deutscher sets up and knocks down. Consequently, I don’t believe that he has given convincing reasons for seeing the relations between language and thought as limited to the types of relations in the examples he gives, interesting though they may be. For instance, he notes that the aboriginal tribes in question would have to always keep in mind where the cardinal directions were and consequently in this sense the language would require them to think a certain way.
If you think about it, there is not a lot of blue in nature. Most people do not have blue eyes, blue flowers do not occur naturally without human intervention, and blue animals are rare — bluebirds and bluejays only live in isolated areas. The sky is blue — or is it? One theory suggests that before humans had words for the color blue, they actually saw the sky as another color. This theory is supported by the fact that if you never describe the color of the sky to a child, and then ask them what color it is, they often struggle to describe its color. Some describe it as colorless or white. It seems that only after being told that the sky is blue, and after seeing other blue objects over a period of time, does one start seeing the sky as blue. […]
Scientists generally agree that humans began to see blue as a color when they started making blue pigments. Cave paintings from 20,000 years ago lack any blue color, since as previously mentioned, blue is rarely present in nature. About 6,000 years ago, humans began to develop blue colorants. Lapis, a semiprecious stone mined in Afghanistan, became highly prized among the Egyptians. They adored the bright blue color of this mineral. They used chemistry to combine the rare lapis with other ingredients, such as calcium and limestone, and generate other saturated blue pigments. It was at this time that an Egyptian word for “blue” emerged.
Slowly, the Egyptians spread their blue dyes throughout the world, passing them on to the Persians, Mesoamericans and Romans. The dyes were expensive — only royalty could afford them. Thus, blue remained rare for many centuries, though it slowly became popular enough to earn its own name in various languages.
Cognitive Variations: Reflections on the Unity and Diversity of the Human Mind
by Geoffrey Lloyd
Kindle Locations 178-208
Standard colour charts and Munsell chips were, of course, used in the research in order to ensure comparability and to discount local differences in the colours encountered in the natural environment. But their use carried major risks, chiefly that of circularity. The protocols of the enquiry presupposed the differences that were supposed to be under investigation, and to that extent and in that regard the investigators just got out what they had put in. That is to say, the researchers presented their interviewees with materials that already incorporated the differentiations the researchers themselves were interested in. Asked to identify, name, or group different items, the respondents’ replies were inevitably matched against those differentiations. Of course the terms in which the replies were made (in the natural languages the respondents used) must have borne some relation to the differences perceived, otherwise they would not have been used in replying to the questions (assuming, as we surely may, that the questions were taken seriously and that the respondents were doing their honest best). But it was assumed that what the respondents were using in their replies were essentially colour terminologies, distinguishing hues, and that assumption was unfounded in general, and in certain cases can be shown to be incorrect.
It was unfounded in general because there are plenty of natural languages in which the basic discrimination relates not to hues, but to luminosities. Ancient Greek is one possible example. Greek colour classifications are rich and varied and were, as we shall see, a matter of dispute among the Greeks themselves. They were certainly capable of drawing distinctions between hues. I have already given one example. When Aristotle analyses the rainbow, where it is clearly hue that separates one end of the spectrum from the other, he identifies three colours using terms that correspond, roughly, to ‘red’, ‘green’, and ‘blue’, with a fourth, corresponding to ‘yellow’, which he treats (as noted) as a mere ‘appearance’ between ‘red’ and ‘green’. But the primary contrariety that figures in ancient Greek (including in Aristotle) is between leukon and melan, which usually relate not to hues so much as to luminosity. Leukos, for instance, is used of the sun and of water, where it is clearly not the case that they share, or were thought to share, the same hue. So the more correct translation of that pair is often ‘bright’ or ‘light’ and ‘dark’, rather than ‘white’ and ‘black’. Berlin and Kay (1969: 70) recognized the range of application of leukon, yet still glossed the term as ‘white’. Even more strangely, they interpreted glaukon as ‘black’. That term is particularly context-dependent, but when Aristotle (On the Generation of Animals 779a26, b34 ff.) tells us that the eyes of babies are glaukon, that corresponds to ‘blue’, where melan, the usual term for ‘black’ or rather ‘dark’, is represented as its antonym, rather than its synonym, as Berlin and Kay would need it to be.
So one possible source of error in the Berlin and Kay methodology was the privileging of hue over luminosity. But that still does not get to the bottom of the problem, which is that in certain cases the respondents were answering in terms whose primary connotations were not colours at all. The Hanunoo had been studied before Berlin and Kay in a pioneering article by Conklin (1955), and Lyons (1995; 1999) has recently reopened the discussion of this material. First Conklin observed that the Hanunoo have no word for colour as such. But (as noted) that does not mean, of course, that they are incapable of discriminating between different hues or luminosities. To do so they use four terms, mabiru, malagti, marara, and malatuy, which may be thought to correspond, roughly, to ‘black’, ‘white’, ‘red’, and ‘green’. Hanunoo was then classified as a stage 3 language, in Berlin and Kay’s taxonomy, one that discriminates between four basic colour terms, indeed those very four. Cf. also Lucy 1992: ch. 5, who similarly criticizes taking purportedly colour terms out of context.
Yet, according to Conklin, chromatic variation was not the primary basis for differentiation of those four terms at all. Rather the two principal dimensions of variation are (1) lightness versus darkness, and (2) wetness versus dryness, or freshness (succulence) versus desiccation. A third differentiating factor is indelibility versus fadedness, referring to permanence or impermanence, rather than to hue as such.
Berlin and Kay only got to their cross-cultural universals by ignoring (they may even sometimes have been unaware of) the primary connotations of the vocabulary in which the respondents expressed their answers to the questions put to them. That is not to say, of course, that the members of the societies concerned are incapable of distinguishing colours whether as hues or as luminosities. That would be to make the mistake that my first philosophical observation was designed to forestall. You do not need colour terms to register colour differences. Indeed Berlin and Kay never encountered (certainly they never reported) a society where the respondents simply had nothing to say when questioned about how their terms related to what they saw on the Munsell chips. But the methodology was flawed in so far as it was assumed that the replies given always gave access to a classification of colour, when sometimes colours were not the primary connotations of the vocabulary used at all.
The Language Myth: Why Language Is Not an Instinct
by Vyvyan Evans
The neo-Whorfians have made four main criticisms of this research tradition as it relates to linguistic relativity. 33 First off, the theoretical construct of the ‘basic colour term’ is based on English. It is then assumed that basic colour terms – based on English – correspond to an innate biological specification. But the assumption that basic colour terms – based on English – correspond to universal semantic constraints, due to our common biology, biases the findings in advance. The ‘finding’ that other languages also have basic colour terms is a consequence of a self-fulfilling prophecy: as English has been ‘found’ to exhibit basic colour terms, all other languages will too. But this is no way to investigate putative cross-linguistic universals; it assumes, much like Chomsky did, that colour in all of the world’s languages will be, underlyingly, English-like. And as we shall see, other languages often do things in startlingly different ways.
Second, the linguistic analysis Berlin and Kay conducted was not very rigorous – to say the least. For most of the languages they ‘examined’, Berlin and Kay relied on second-hand sources, as they had no first-hand knowledge of the languages they were hoping to find basic colour terms in. To give you a sense of the problem, it is not even clear whether many of the putative basic colour terms Berlin and Kay ‘uncovered’, were from the same lexical class; for instance, in English, the basic colour terms – white, black, red and so on – are all adjectives. Yet, for many of the world’s languages, colour expressions often come from different lexical classes. As we shall see shortly, one language, Yélî Dnye, draws its colour terms from several lexical classes, none of which is adjectives. And the Yélî language is far from exceptional in this regard. The difficulty here is that, without a more detailed linguistic analysis, there is relatively little basis for the assumption that what is being compared involves comparable words. And, that being the case, can we still claim that we are dealing with basic colour terms?
Third, many other languages do not conceptualise colour as an abstract domain independent of the objects that colour happens to be a property of. For instance, some languages do not even have a word corresponding to the English word colour – as we shall see later. This shows that colour is often not conceptualised as a stand-alone property in the way that it is in English. In many languages, colour is treated in combination with other surface properties. For English speakers this might sound a little odd. But think about the English ‘colour’ term roan: this encodes a surface pattern, rather than strictly colour – in this case, brown interspersed with white, as when we describe a horse as ‘roan’. Some languages combine colour with other properties, such as desiccation, as in the Old Germanic word saur, which meant yellow and dry. The problem, then, is that in languages with relatively simple colour technology − arguably the majority of the world’s languages − lexical systems that combine colour with other aspects of an object’s appearance are artificially excluded from being basic colour terms – as English is being used as the reference point. And this, then, distorts the true picture of how colour is represented in language, as the analysis only focuses on those linguistic features that correspond to the ‘norm’ derived from English. 34
And finally, the ‘basic colour term’ project is flawed, in so far as it constitutes a riposte to linguistic relativity; as John Lucy has tellingly observed, linguistic relativity is the thesis that language influences non-linguistic aspects of thought: one cannot demonstrate that it is wrong by investigating the effect of our innate colour sense on language. 35 In fact, one has to demonstrate the reverse: that language doesn’t influence psychophysics (in the domain of colour). Hence, the theory of basic colour terms cannot be said to refute the principle of linguistic relativity as ironically, it wasn’t in fact investigating it.
The neo-Whorfian critique, led by John Lucy and others, argued that, at its core, the approach taken by Berlin and Kay adopted an unwarranted ethnocentric approach that biased findings in advance. And, in so doing, it failed to rule out the possibility that what other languages and cultures were doing was developing divergent semantic systems – rather than there being a single universal system – in the domain of colour, albeit an adaptation to a common human set of neurobiological constraints. By taking the English language in general, and in particular the culture of the English-speaking peoples – the British Isles, North America and the Antipodes – as its point of reference, it not only failed to establish what different linguistic systems – especially in non-western cultures – were doing, but led, inevitably, to the conclusion that all languages, even when strikingly diverse in terms of their colour systems, were essentially English-like. 36
The Master and His Emissary: The Divided Brain and the Making of the Western World
by Iain McGilchrist
Consciousness is not the same as inwardness, although there can be no inwardness without consciousness. To return to Patricia Churchland’s statement that it is reasonable to identify the blueness of an object with its disposition to scatter electromagnetic waves preferentially at about 0.46μm, 52 to see it like this, as though from the outside, excluding the ‘subjective’ experience of the colour blue – as though to get the inwardness of consciousness out of the picture – requires a very high degree of consciousness and self-consciousness. The polarity between the ‘objective’ and ‘subjective’ points of view is a creation of the left hemisphere’s analytic disposition. In reality there can be neither absolutely, only a choice between a betweenness which acknowledges itself, and one which denies its own nature. By identifying blueness solely with the behaviour of electromagnetic particles one is not avoiding value, not avoiding betweenness, not avoiding one’s shadow being cast across the picture. One is using the inwardness of consciousness in a very specialised way to strive to empty itself as much as possible of value, of the self. The paradoxical result is an extremely partial, fragmented version of the colour blue, which is neither value-free nor independent of the self’s disposition towards its object.
Another thought-provoking detail about sadness and the right hemisphere involves the perception of colour. Brain regions involved in conscious identification of colour are probably left-sided, perhaps because it involves a process of categorisation and naming; 288 however, it would appear that the perception of colour in mental imagery under normal circumstances activates only the right fusiform area, not the left, 289 and imaging studies, lesion studies and neuropsychological testing all suggest that the right hemisphere is more attuned to colour discrimination and perception. 290 Within this, though, there are hints that the right hemisphere prefers the colour green and the left hemisphere prefers the colour red (as the left hemisphere may prefer horizontal orientation, and the right hemisphere vertical – a point I shall return to in considering the origins of written language in Chapter 8). 291 The colour green has traditionally been associated not just with nature, innocence and jealousy but with – melancholy: ‘She pined in thought, / And with a green and yellow melancholy / She sat like Patience on a monument, / Smiling at grief ‘. 292
Is there some connection between the melancholy tendencies of the right hemisphere and the mediaeval belief that the left side of the body was dominated by black bile? Black bile was, of course, associated with melancholy (literally, Greek melan-, black + chole, bile) and was thought to be produced by the spleen, a left-sided organ. For the same reasons the term spleen itself was, from the fourteenth century to the seventeenth century, applied to melancholy; though, as if intuiting that melancholy, passion, and sense of humour all came from the same place (in fact the right hemisphere, associated with the left side of the body), ‘spleen’ could also refer to each or any of these.
‘There are hints from many sources that the left hemisphere may innately prefer red over green, just as it may prefer horizontal over vertical. I have already discussed the language-horizontal connection. The connection between the left hemisphere and red is also indirect, but is supported by a remarkable convergence of observations from comparative neurology, which has shown appropriate asymmetries between both the hemispheres and even between the eyes (cone photoreceptor differences between the eyes of birds are consistent with a greater sensitivity to movement and to red on the part of the right eye (Hart, 2000)) and from introspective studies over the millennia in three great religions that have all converged in the same direction on an association between action, heat, red, horizontal, far etc and the right side of the body (i.e. the left cerebral hemisphere, given the decussation between cerebral hemisphere and output) compared with inaction, cold, green, vertical, near etc and the left side/right hemisphere respectively’ (Pettigrew, 2001, p. 94).
Louder Than Words: The New Science of How the Mind Makes Meaning
by Benjamin K. Bergen
We perceive objects in the real world in large part through their color. Are the embodied simulations we construct while understanding language in black and white, or are they in color? It seems like the answer should be obvious. When you imagine a yellow trucker hat, you feel the subjective experience of yellowness that looks a lot like yellow as you would perceive it in the real world. But it turns out that color is actually a comparatively fickle visual property of both perceived and imagined objects. Children can’t use color to identify objects until about a year of age, much later than they can use shape. And even once they acquire this ability, as adults, people’s memory for color is substantially less accurate than their memory for shape, and they have to pay closer attention to detect changes in the color of objects than in their shape or location.
And yet, with all this going against it, color still seeps into our embodied simulations, at least briefly. One study looking at color used the same sentence-picture matching method we’ve been talking about. People read sentences that implied particular colors for objects. For instance, John looked at the steak on his plate implies a cooked and therefore appropriately brown steak, while John looked at the steak in the butcher’s window implies an uncooked and therefore red steak. In the key trials, participants then saw a picture of the same object, which could either match or mismatch the color implied by the sentence— that is, the steak could be red or brown. Once again, this method produced an interaction. Curiously, though, the result was slower reactions to matching-color images (unlike the faster reactions to matching shape and orientation images in the previous studies). One explanation for why this effect appears in the opposite direction is that perhaps people processing sentences only mentally simulate color briefly and then suppress color to represent shape and orientation. This might lead to slower responses to a matching color when an image is subsequently presented.
Another example of how languages make people think differently comes from color perception. Languages have different numbers of color categories, and those categories have different boundaries. For instance, in English, we make a categorical distinction between reds and pinks— we have different names for them, and we judge colors to be one or the other (we don’t think of pinks as a type of red or vice versa— they’re really different categories). And because our language makes this distinction, when we use English and we want to identify something by its color, we have to attend to where in the pink-red range it falls. But other languages don’t make this distinction. For instance, Wobé, a language spoken in Ivory Coast, only has one color category that spans English pinks and reds. So to speak Wobé, you don’t need to pay as close attention to colors in the pink-red range to identify them; all you have to do is recognize that they’re in that range, retrieve the right color term, and you’re set.
We can see this phenomenon in reverse when we look at the blue range. For the purposes of English, light blues and dark blues are all blues; perceptibly different shades, no doubt, but all blues nonetheless. Russian, however, splits blue apart in the way that we separate red and pink. There are two distinct color categories in Russian for our blues: goluboy (light blues) and siniy (dark blues). For the purposes of English, you don’t have to worry about what shade of blue something is to describe it successfully. Of course you can be more specific if you want; you can describe a shade as powder blue or deep blue, or any variety of others. But you don’t have to. In Russian, however, you do. To describe the colors of Cal or UCLA, for example, there would be no way in Russian to say they’re both blue; you’d have to say that Cal is siniy and UCLA is goluboy. And to say that, you’d need to pay attention to the shades of blue that each school wears. The words the language makes available mandate that you pay attention to particular perceptual details in order to speak.
The flip side of thinking for speaking is thinking for understanding. Each time someone describes something as siniy or goluboy in Russian, there’s a little bit more information there than when the same things are described as blue in English. So if you think about it, saying that the sky is blue in English is actually less specific than its equivalent would be in Russian— some languages provide more information about certain things each time you read or hear about them.
The fact that different languages encode different information in everyday words could have a variety of effects on how people understand those languages. For one, when a language systematically encodes something, that might lead people to regularly encode that detail as part of their embodied simulations. Russian comprehenders might construct more detailed representations of the shades of blue things than their English-comprehending counterparts. Pormpuraawans might understand language about locations by mentally representing cardinal directions in space while their English-comprehending counterparts use ego-centered mental representations to do the same thing.
Or an alternative possibility is that people might ultimately understand language about the given domain in the same way, regardless of the language, but, in order to get there, they might have to do more mental gymnastics. To get from the word blue in English to the color of the sky might take longer than to go there directly from goluboy in Russian. Or, to take another example, to construct an egocentric idea of where the bay windows are relative to you might be easier when you hear on your right than to your north.
A third possibility, and one that has caught a lot of people’s interest, is that there may be longer-term and more pervasive effects of linguistic differences on people’s cognition, even outside of language. Perhaps, for instance, Pormpuraawan speakers, by dint of years and years of having to pay attention to the cardinal directions, learn to constantly monitor them, even when they’re not using language; perhaps more so than English speakers. Likewise, perhaps the color categories your language provides affect not merely what you attend to and think about when using color words but also what differences you perceive among colors and how easily you distinguish between colors. This is the idea of linguistic relativism, that the language you speak can affect the way you think. The debate about linguistic relativism is a hot one, but the jury is still out on how and when language affects nonlinguistic thought.
All of this is to say that individual languages are demanding of their speakers. To speak and understand a language, you have to think, and languages, to some extent, dictate what things you ought to think, what things you ought to pay attention to, and how you should break the world up into categories. As a result, the routine patterns of thought that an English speaker engages in will differ from those of a Russian or Wobé or Pormpuraaw speaker. Native speakers of these languages are also native thinkers of these languages.
The First Signs: Unlocking the Mysteries of the World’s Oldest Symbols
by Genevieve von Petzinger
Kindle Locations 479-499
Not long after the people of Sima de los Huesos began placing their dead in their final resting place, another group of Homo heidelbergensis, this time in Zambia, began collecting colored minerals from the landscape around them. They not only preferred the color red, but also collected minerals ranging in hue from yellow and brown to black and even to a purple shade with sparkling flecks in it. Color symbolism— associating specific colors with particular qualities, ideas, or meanings— is widely recognized among modern human groups. The color red, in particular, seems to have almost universal appeal. These pieces of rock show evidence of grinding and scraping, as though they had been turned into a powder.
This powdering of colors took place in a hilltop cave called Twin Rivers in what is present-day Zambia between 260,000 and 300,000 years ago. At that time, the environment in the region was very similar to what we find there today: humid and semitropical with expansive grasslands broken by stands of short bushy trees. Most of the area’s colorful rocks, which are commonly known as ochre, contain iron oxide, which is the mineral pigment later used to make the red paint on the walls of caves across Ice Age Europe and beyond. In later times, ochre is often associated with nonutilitarian activities, but since the people of Twin Rivers lived before the emergence of modern humans (Homo sapiens, at 200,000 years ago), they were not quite us yet. If this site were, say, 30,000 years old, most anthropologists would agree that the collection and preparation of these colorful minerals had a symbolic function, but because this site is at least 230,000 years older, there is room for debate.
Part of this uncertainty is owing to the fact that ground ochre is also useful for utilitarian reasons. It can act as an adhesive, say, for gluing together parts of a tool. It also works as an insect repellent and in the tanning of hides, and may even have been used for medicinal purposes, such as stopping the bleeding of wounds.
If the selection of the shades of ochre found at this site were for some mundane purpose, then the color shouldn’t matter, right? Yet the people from the Twin Rivers ranged out across the landscape to find these minerals, often much farther afield than necessary if they just required something with iron oxide in it. Instead, they returned to very specific mineral deposits, especially ones containing bright-red ochre, then carried the ochre with them back to their home base. This use of ochre, and the preference for certain colors, particularly bright red, may have been part of a much earlier tradition, and it is currently one of the oldest examples we have of potential symbolism in an ancestral human species.
Kindle Locations 669-683
Four pieces of bright-red ochre collected from a nearby mineral source were also found in the cave. Three of the four pieces had been heated to at least 575°F in order to convert them from yellow to red. The inhabitants of Skhul had prospected the landscape specifically for yellowish ochre with the right chemical properties to convert into red pigment. The selective gathering of materials and their probable heat-treatment almost certainly indicates a symbolic aspect to this practice, possibly similar to what we saw with the people at Pinnacle Point about 30,000 years earlier. […]
The combination of the oldest burial with grave goods; the preference for bright-red ochre and the apparent ability to heat-treat pigments to achieve it; and what are likely some of the earliest pieces of personal adornment— all these details make the people from Skhul good candidates for being our cognitive equals. And they appear at least 60,000 years before the traditional timing of the “creative explosion.”
Kindle Locations 1583-1609
There is something about the color red. It can represent happiness, anger, good luck, danger, blood, heat, sun, life, and death. Many cultures around the world attach a special significance to red. Its importance is also reflected in many of the languages spoken today. Not all languages include words for a range of colors, and the simplest systems recognize only white and black, or light and dark, but whenever they do include a third color word in their language, it is always red.
This attachment to red seems to be embedded deep within our collective consciousness. Not only did the earliest humans have a very strong preference for brilliant red ochre (except for the inhabitants of Sai Island, in Sudan, who favored yellow), but even earlier ancestral species were already selecting red ochre over other shades. It may also be significant (although we don’t know how) that the pristine quartzite stone tool found in the Pit of Bones in Spain was of an unusual red hue.
This same preference for red is evident on the walls of caves across Europe during the Ice Age. But by this time, artists had added black to their repertoire and the vast majority of paintings were done in one or both of these colors. I find it intriguing that two of the three most common colors recognized and named across all languages are also the ones most often used to create the earliest art. The third shade, though well represented linguistically, is noticeably absent from Ice Age art. Of all the rock art sites currently known in Europe, only a handful have any white paint in them. Since many of the cave walls are a fairly light gray or a translucent yellowy white, it’s possible that the artists saw the background as representing this shade, or that its absence could have been due to the difficulty in obtaining white pigment: the small number of sites that do have white images all used kaolin clay to create this color. (Since kaolin clay was not as widely available as the materials for making red and black paint, it is certainly possible that scarcity was a factor in color choice.)
While the red pigment was created using ochre, the black paint was made using either ground charcoal or the mineral manganese oxide. The charcoal was usually sourced from burnt wood, though in some instances burnt bone was used instead. Manganese is found in mineral deposits, sometimes in the same vicinity as ochre. Veins of manganese can also occasionally be seen embedded right in the rock at some cave sites. Several other colors do appear on occasion— yellow and brown are the most common— though they appear at only about 10 percent of sites.
There is also a deep purple color that I’ve only ever seen in cave art in northern Spain, and even there it’s rare. La Pasiega (the site where I saw the grinding stone) has a series of paintings in this shade of violet in one section of the cave. Mixed in with more common red paintings, there are several purple signs— dots, stacked lines, rectangular grills— along with a single purple bison that was rendered in great detail (see fig. 4 in insert). Eyes, muzzle, horns— all have been carefully depicted, and yet the purple shade is not an accurate representation of a bison’s coloring. Did the artist use this color simply because it’s what he or she had at hand? Or could it be that the color of the animal was being dictated by something other than a need for this creature to be true to life? We know these artists had access to brown and black pigments, but at many sites they chose to paint animals in shades of red or yellow, or even purple, like the bison here at La Pasiega. These choices are definitely suggestive of there being some type of color symbolism at work, and it could even be that creating accurate replicas of real-life animals was not the main goal of these images.
A common argument against the success of certain societies is that it wouldn’t be possible in the United States. It is claimed that what makes them work well is their lack of diversity. Sometimes it will be added that they are small countries, which is to imply they are ‘tribalistic’. Compared to actual tribes, these countries are rather diverse and large. But I get the point being made, and I’m not one to dismiss it out of hand.
Still, not all the data agrees with this conclusion. One example is seen in comparisons of education systems. In the successful social democracies, even the schools with higher rates of diversity and immigrant students tend to have higher test scores than comparable schools in a country like the US. And there is one book that seriously challenges the tribal argument: Segregation and Mistrust by Eric M. Uslaner. Looking at the data, he determined that “[i]t wasn’t diversity but segregation that led to less trust” (Kindle Locations 72-73).
Segregation tends to go along with various forms of inequality: social position, economic class and mobility, political power and representation, access to resources, quality of education, systemic and institutional racism, environmental racism, ghettoization, etc. And around inequality there is, unsurprisingly, a constellation of other social and health problems that negatively impact the segregated most of all but also the entire society in general: increases in food deserts, obesity, stunted neurocognitive development (including brain damage from neurotoxins), mental illnesses, violent crime, teen pregnancies, STDs, high school dropouts, child and spousal abuse, bullying, and the list goes on.
Obviously, none of that creates the conditions for a culture of trust. Segregation and inequality undermine everything that allows for a healthy society. Therefore, lessen inequality and, in proportion, a healthy society will follow. That is even true with high levels of diversity.
Related to this, I recall a study that showed that children raised in diverse communities tended to grow up to be socially liberal adults, which included greater tolerance and acceptance, fundamental traits of social trust.
On the opposite end, a small tribe has high trust within the community, but its members may have little if any trust of anyone outside of it. Is such a small community really more trusting in the larger sense? I don’t know if that has ever been researched.
Such people in tight-knit communities may be willing to do anything for those within their tribe, but a stranger might be killed for no reason other than being an outsider. Take the Puritans, as an example. Theirs were high-trust societies, and from early on they had collectivist tendencies, being community-oriented with a strong shared vision. Yet anyone who didn’t quite fit in would be banished, tortured, or killed.
Maybe there are many kinds of trust, as there are many kinds of social capital, social cohesion, and social order. There are probably few if any societies that excel in all forms of trust. Some forms of trust might even be diametrically opposed to other forms of trust. Besides, trust in some cases, such as in an authoritarian regime, isn’t necessarily a good thing. Low-diversity societies such as Russia, Germany, Japan, and China have their own kinds of potential problems that can endanger the lives of those far outside of their own societies.
Trust is complex. What kind of trust? And to what end?
The debate on causes and consequences of social capital has been recently complemented with an investigation into factors that erode it. Various scholars concluded that diversity, and racial heterogeneity in particular, is damaging for the sense of community, interpersonal trust and formal and informal interactions. However, most of this research does not adequately account for the negative effect of a community’s low socio-economic status on neighbourhood interactions and attitudes. This paper is the first to date empirical examination of the impact of racial context on various dimensions of social capital in British neighbourhoods. Findings show that the low neighbourhood status is the key element undermining all dimensions of social capital, while eroding effect of racial diversity is limited.
At no developmental age are children less racist than in elementary school. But that’s not innocence, exactly, since preschoolers are obsessed with race. At ages 3 and 4, children are mapping their world, putting things and people into categories: size, shape, color. Up, down; day, night; in, out; over, under. They see race as a useful sorting measure and ask their parents to give them words for the differences they see, generally rejecting the adult terms “black” and “white,” and preferring finer (and more accurate) distinctions: “tan,” “brown,” “chocolate,” “pinkish.” They make no independent value judgments about racial difference, obviously, but by 4 they are already absorbing the lessons of a racist culture. All of them know reflexively which race it is preferable to be. Even today, almost three-quarters of a century since the Doll Test, made famous in Brown v. Board of Education, experiments by CNN and Margaret Beale Spencer have found that black and white children still show a bias toward people with lighter skin.
But by the time they have entered elementary school, they are in a golden age. At 7 or 8, children become very concerned with fairness and responsive to lessons about prejudice. This is why the third, fourth, and fifth grades are good moments to teach about slavery and the Civil War, suffrage and the civil-rights movement. Kids at that age tend to be eager to wrestle with questions of inequality, and while they are just beginning to form a sense of racial identity (this happens around 7 for most children, though for some white kids it takes until middle school), it hasn’t yet acquired much tribal force. It’s the closest humans come to a racially uncomplicated self. The psychologist Stephen Quintana studies Mexican-American kids. At 6 to 9 years old, they describe their own racial realities in literal terms and without value judgments. When he asks what makes them Mexican-American, they talk about grandparents, language, food, skin color. When he asks them why they imagine a person might dislike Mexican-Americans, they are baffled. Some can’t think of a single answer. This is one reason cross-racial friendships can flourish in elementary school — childhood friendships that researchers cite as the single best defense against racist attitudes in adulthood. The paradise is short-lived, though. Early in elementary school, kids prefer to connect in twos and threes over shared interests — music, sports, Minecraft. Beginning in middle school, they define themselves through membership in groups, or cliques, learning and performing the fraught social codes that govern adult interactions around race. As early as 10, psychologists at Tufts have shown, white children are so uncomfortable discussing race that, when playing a game to identify people depicted in photos, they preferred to undermine their own performance by staying silent rather than speak racial terms aloud.
The researchers assessed the ideas each group generated after 10 minutes of brainstorming. In same-sex groups, they found, political correctness priming produced less creative ideas. In the mixed groups however, creativity got a boost. “They generated more ideas, and those ideas were more novel,” Duguid told NPR. “Whether it was two men and one woman or two women and one man, the results were consistent.” The creativity of each group’s ideas was assessed by independent, blind raters.
Despite the fact that diversity is so central to the American condition, scholars who’ve studied the cognitive effects of diversity have long made the mistake of treating homogeneity as the norm. Only this year did a group of researchers from MIT, Columbia University, and Northwestern University publish a paper questioning the conventional wisdom that homogeneity represents some kind of objective baseline for comparison or “neutral indicator of the ideal response in a group setting.”
To bolster their argument, the researchers cite a previous study that found that members of homogenous groups tasked with solving a mystery tend to be more confident in their problem-solving skills than their performance actually merits. By contrast, the confidence level of individuals in diverse groups corresponds better with how well their group actually performs. The authors concluded that homogenous groups “were actually further than diverse groups from an objective index of accuracy.”
The researchers also refer to a 2006 experiment showing that homogenous juries made “more factually inaccurate statements and considered a narrower range of information” than racially diverse juries. What these and other findings suggest, wrote the researchers, is that people in diverse groups “are more likely to step outside their own perspective and less likely to instinctively impute their own knowledge onto others” than people in homogenous groups.
Many practices aimed at cultivating multicultural competence in educational and organizational settings (e.g., exchange programs, diversity education in college, diversity management at work) assume that multicultural experience fosters creativity. In line with this assumption, the research reported in this article is the first to empirically demonstrate that exposure to multiple cultures in and of itself can enhance creativity. Overall, the authors found that extensiveness of multicultural experiences was positively related to both creative performance (insight learning, remote association, and idea generation) and creativity-supporting cognitive processes (retrieval of unconventional knowledge, recruitment of ideas from unfamiliar cultures for creative idea expansion). Furthermore, their studies showed that the serendipitous creative benefits resulting from multicultural experiences may depend on the extent to which individuals open themselves to foreign cultures, and that creativity is facilitated in contexts that deemphasize the need for firm answers or existential concerns. The authors discuss the implications of their findings for promoting creativity in increasingly global learning and work environments.
For example, there’s evidence that corporations with better gender and racial representation make more money and are more innovative. And many higher education groups have collected large amounts of evidence on the educational benefits of diversity in support of affirmative action policies.
In one set of studies, Phillips gave small groups of three people a murder mystery to solve. Some of the groups were all white and others had a nonwhite member. The diverse groups were significantly more likely to find the right answer.
Sundown Towns: A Hidden Dimension Of American Racism
by James W. Loewen
In addition to discouraging new people, hypersegregation may also discourage new ideas. Urban theorist Jane Jacobs has long held that the mix of peoples and cultures found in successful cities prompts creativity. An interesting study by sociologist William Whyte shows that sundown suburbs may discourage out-of-the-box thinking. By the 1970s, some executives had grown weary of the long commutes with which they had saddled themselves so they could raise their families in elite sundown suburbs. Rather than move their families back to the city, they moved their corporate headquarters out to the suburbs. Whyte studied 38 companies that left New York City in the 1970s and ’80s, allegedly “to better [the] quality-of-life needs of their employees.” Actually, they moved close to the homes of their CEOs, cutting their average commute to eight miles; 31 moved to the Greenwich-Stamford, Connecticut, area. These are not sundown towns, but adjacent Darien was, and Greenwich and Stamford have extensive formerly sundown neighborhoods that are also highly segregated on the basis of social class. Whyte then compared those 38 companies to 36 randomly chosen comparable companies that stayed in New York City. Judged by stock price, the standard way to measure how well a company is doing, the suburbanized companies showed less than half the stock appreciation of the companies that chose to remain in the city. […]
Research suggests that gay men are also important members of what Richard Florida calls “the creative class”—those who come up with or welcome new ideas and help drive an area economically. Metropolitan areas with the most sundown suburbs also show the lowest tolerance for homosexuality and have the lowest concentrations of “out” gays and lesbians, according to Gary Gates of the Urban Institute. He lists Buffalo, Cleveland, Detroit, Milwaukee, and Pittsburgh as examples. Recently, some cities—including Detroit—have recognized the important role that gay residents can play in helping to revive problematic inner-city neighborhoods, and now welcome them.
The distancing from African Americans embodied by all-white suburbs intensifies another urban problem: sprawl, the tendency for cities to become more spread out and less dense. Sprawl can decrease creativity and quality of life throughout the metropolitan area by making it harder for people to get together for all the human activities—from think tanks to complex commercial transactions to opera—that cities make possible in the first place. Asked in 2000, “What is the most important problem facing the community where you live?” 18% of Americans replied sprawl and traffic, tied for first with crime and violence. Moreover, unlike crime, sprawl is increasing. Some hypersegregated metropolitan areas like Detroit and Cleveland are growing larger geographically while actually losing population.
Research on large, innovative organizations has shown repeatedly that this is the case. For example, business professors Cristian Deszö of the University of Maryland and David Ross of Columbia University studied the effect of gender diversity on the top firms in Standard & Poor’s Composite 1500 list, a group designed to reflect the overall U.S. equity market. First, they examined the size and gender composition of firms’ top management teams from 1992 through 2006. Then they looked at the financial performance of the firms. In their words, they found that, on average, “female representation in top management leads to an increase of $42 million in firm value.” They also measured the firms’ “innovation intensity” through the ratio of research and development expenses to assets. They found that companies that prioritized innovation saw greater financial gains when women were part of the top leadership ranks.
Racial diversity can deliver the same kinds of benefits. In a study conducted in 2003, Orlando Richard, a professor of management at the University of Texas at Dallas, and his colleagues surveyed executives at 177 national banks in the U.S., then put together a database comparing financial performance, racial diversity and the emphasis the bank presidents put on innovation. For innovation-focused banks, increases in racial diversity were clearly related to enhanced financial performance.
Evidence for the benefits of diversity can be found well beyond the U.S. In August 2012 a team of researchers at the Credit Suisse Research Institute issued a report in which they examined 2,360 companies globally from 2005 to 2011, looking for a relationship between gender diversity on corporate management boards and financial performance. Sure enough, the researchers found that companies with one or more women on the board delivered higher average returns on equity, lower gearing (that is, net debt to equity) and better average growth. […]
In 2006 Margaret Neale of Stanford University, Gregory Northcraft of the University of Illinois at Urbana-Champaign and I set out to examine the impact of racial diversity on small decision-making groups in an experiment where sharing information was a requirement for success. Our subjects were undergraduate students taking business courses at the University of Illinois. We put together three-person groups—some consisting of all white members, others with two whites and one nonwhite member—and had them perform a murder mystery exercise. We made sure that all group members shared a common set of information, but we also gave each member important clues that only he or she knew. To find out who committed the murder, the group members would have to share all the information they collectively possessed during discussion. The groups with racial diversity significantly outperformed the groups with no racial diversity. Being with similar others leads us to think we all hold the same information and share the same perspective. This perspective, which stopped the all-white groups from effectively processing the information, is what hinders creativity and innovation.
Other researchers have found similar results. In 2004 Anthony Lising Antonio, a professor at the Stanford Graduate School of Education, collaborated with five colleagues from the University of California, Los Angeles, and other institutions to examine the influence of racial and opinion composition in small group discussions. More than 350 students from three universities participated in the study. Group members were asked to discuss a prevailing social issue (either child labor practices or the death penalty) for 15 minutes. The researchers wrote dissenting opinions and had both black and white members deliver them to their groups. When a black person presented a dissenting perspective to a group of whites, the perspective was perceived as more novel and led to broader thinking and consideration of alternatives than when a white person introduced that same dissenting perspective. The lesson: when we hear dissent from someone who is different from us, it provokes more thought than when it comes from someone who looks like us.
This effect is not limited to race. For example, last year professors of management Denise Lewin Loyd of the University of Illinois, Cynthia Wang of Oklahoma State University, Robert B. Lount, Jr., of Ohio State University and I asked 186 people whether they identified as a Democrat or a Republican, then had them read a murder mystery and decide who they thought committed the crime. Next, we asked the subjects to prepare for a meeting with another group member by writing an essay communicating their perspective. More important, in all cases, we told the participants that their partner disagreed with their opinion but that they would need to come to an agreement with the other person. Everyone was told to prepare to convince their meeting partner to come around to their side; half of the subjects, however, were told to prepare to make their case to a member of the opposing political party, and half were told to make their case to a member of their own party.
The result: Democrats who were told that a fellow Democrat disagreed with them prepared less well for the discussion than Democrats who were told that a Republican disagreed with them. Republicans showed the same pattern. When disagreement comes from a socially different person, we are prompted to work harder. Diversity jolts us into cognitive action in ways that homogeneity simply does not.
For this reason, diversity appears to lead to higher-quality scientific research. This year Richard Freeman, an economics professor at Harvard University and director of the Science and Engineering Workforce Project at the National Bureau of Economic Research, along with Wei Huang, a Harvard economics Ph.D. candidate, examined the ethnic identity of the authors of 1.5 million scientific papers written between 1985 and 2008 using Thomson Reuters’s Web of Science, a comprehensive database of published research. They found that papers written by diverse groups receive more citations and have higher impact factors than papers written by people from the same ethnic group. Moreover, they found that stronger papers were associated with a greater number of author addresses; geographical diversity, and a larger number of references, is a reflection of more intellectual diversity. […]
In a 2006 study of jury decision making, social psychologist Samuel Sommers of Tufts University found that racially diverse groups exchanged a wider range of information during deliberation about a sexual assault case than all-white groups did. In collaboration with judges and jury administrators in a Michigan courtroom, Sommers conducted mock jury trials with a group of real selected jurors. Although the participants knew the mock jury was a court-sponsored experiment, they did not know that the true purpose of the research was to study the impact of racial diversity on jury decision making.
Sommers composed the six-person juries with either all white jurors or four white and two black jurors. As you might expect, the diverse juries were better at considering case facts, made fewer errors recalling relevant information and displayed a greater openness to discussing the role of race in the case. These improvements did not necessarily happen because the black jurors brought new information to the group—they happened because white jurors changed their behavior in the presence of the black jurors. In the presence of diversity, they were more diligent and open-minded.
Jared Dillian wrote an article simply titled “Frrrreeeeeddoommmm.” I think we are supposed to imagine the title being screamed by Mel Gibson as his Braveheart character, William Wallace, is tortured to death. The author compares two states, concluding that he prefers ‘freedom’:
“If you want someone from Connecticut to get all riled up, drive extra slow in the passing lane. Connecticutians are very particular about that. The right lane is for traveling, the left lane is for passing. If you’re in the left lane for any other reason than passing, you are a jerk.
“So if you really want to ruin someone’s day, drive in the left lane at about 50 miles per hour. They will be grumpy for three days straight, I assure you.
“I was telling this story to one of my South Carolina friends—how upset people from Connecticut get about this, and how people from South Carolina basically drive however the hell they want—and he said ruefully, ‘Freedom…’
“He’s a guy who perhaps likes lots of rules to organize society, and perhaps he’d rather live in a world where some law governs how you conduct yourself in every aspect of your life, including how you drive. I tell you what, after growing up in Connecticut and then spending the last six years in the South, I’m enjoying the freedom, even if it means I occasionally get stuck behind some idiot.”
Here is my response. It isn’t exactly a contrarian view. Rather, it’s more of a complexifying view.
I take seriously the freedom to act, even when others think an action is wrong, though that of course depends on other factors. But there is no such thing as absolute freedom, only trade-offs made between benefits and costs. There are always constraints upon our choices and, as social animals, most of our constraints involve a social element, whether or not laws are involved.
Freedom is complex. Freedom from what and/or toward what?
The driving example is perfect. Connecticut has one of the lowest rates of car accidents and fatalities in the country. And South Carolina has one of the highest. Comparing the most dangerous driving state to the safest, a driver is 10 times more likely to die in an accident.
Freedom from death is no small freedom. Yet there is more to life than just freedom from death. Authoritarian countries like Singapore probably have low rates of car accidents and fatalities, but I’d rather not live in an authoritarian country.
There needs to be a balance of freedoms. There is an individual’s freedom to act. And then there is the freedom to not suffer the consequences of the actions of others. There is nothing free in externalized costs or, to put it another way, all costs must be paid by someone. It’s related to the free rider problem and moral hazard.
That is supposed to be the purpose of well-designed (i.e., fair and just) political, legal, and economic systems. Freedom doesn’t just happen. A free society is a creation of choices made by many people over many generations. Every law passed has unintended consequences. But, then again, so does every law repealed or never passed in the first place. There is no escaping unintended consequences.
There is also a cultural component to this. Southern states like South Carolina have a different kind of culture than Northern states like Connecticut. Comparing the two regions, the South is accident prone in general with higher rates of not just car accidents but also such things as gun accidents. In the North, even in states with high gun ownership, there tends to be lower rates of gun accidents.
In Connecticut or Iowa, it’s not just a lower rate of dying in accidents (car, gun, etc.). These kinds of states have lower mortality rates in general and hence, on average, longer lifespans. Maybe it isn’t the different kinds of laws that are the significant causal factor. Instead, maybe it’s the cultural attitude that leads both to particular laws and to particular behaviors. The laws don’t make Connecticut drivers safer. It’s simply that safety-conscious Connecticut drivers want such laws, but they’d likely drive just as safely even without them.
I’m not sure ‘freedom’ is a central issue in examples like this. I doubt Connecticutians feel less free for having safer roads and more orderly driving behavior. It’s probably what they want. They are probably just valuing and emphasizing different freedoms than South Carolinians.
There is the popular saying that your freedom ends at my nose. Even that leaves much room for interpretation. If your factory is polluting the air I breathe, your freedom to pollute has fully entered not only my nose but also my lungs and bloodstream.
It’s no mere coincidence that states with high accident rates also tend to have high pollution rates. And no mere coincidence that states with low accident rates tend to have low pollution rates. These are the kinds of factors that contribute to the disparity of mortality rates.
It also has to do with attitudes toward human life. The South, with its history of slavery, seems to view life as cheap. Worker accident rates are also higher in the South. All of this does have to do with laws, regulations, and unionization (and laws that make union organizing easier or harder). But that leaves the question of why life is perceived differently in some places. Why are Southerners more cavalier about life and death? And why do they explain this cavalier attitude as an expression of liberty?
To many Northerners, this cavalier attitude would be perceived quite differently. It wouldn’t be placed in the frame of ‘liberty’. This relates to the North literally not being part of the Cavalier culture that became the mythos of the South. The Cavaliers fought on the losing side of the English Civil War and many of them escaped to Virginia where they helped establish a particular culture that was later embraced by many Southerners who never descended from Cavaliers*.
Cavalier culture was based on honor culture. It included, for example, dueling and violent fighting. Men had to prove themselves. Recent research shows that Southerners are still more aggressive today, compared to Northerners. This probably relates to higher rates of road rage and, of course, car accidents.
Our culture doesn’t just encourage or discourage freedom. It more importantly shapes our view of freedom.
(*The apparent origin of Dillian’s article title is a bit ironic. William Wallace fought against England, which was then still ruled by a Norman king, which is to say ruled by those whose descendants would later be called Cavaliers in their defense of the king against the Roundheads. The French Normans had introduced such fine traditions as monarchy, aristocracy, and feudalism. But they also introduced a particular variety of honor culture based on class and caste, the very same tradition that became the partly fictionalized origin story of Southern culture.)