Environmentalist Majority

I keep coming back to corporatist politics, centered in Washington and Wall Street, and the corporate media that reports on it. This is what gets called ‘mainstream’. But the reality is that the ideological worldview of concentrated wealth and power is skewed far right compared to the general public, AKA the citizenry… ya know, We the People.

Most Americans are surprisingly far to the left of the plutocratic and kleptocratic establishment. Most Americans support left-wing healthcare reform (single payer or public option), maintaining the Roe vs Wade decision, stronger gun regulations (including among most NRA members), more emphasis on rehabilitation than punishment of criminals, drug legalization or decriminalization, etc. They are definitely to the left of Clinton New Democrats with their corporatist alliance between neoliberalism and neoconservatism. Hillary Clinton, for example, has long had ties to heavily polluting big energy corporations.

Maybe it’s unsurprising to learn that the American public, both left and right, is also to the left on the issue of climate change and global warming. This isn’t the first time I’ve brought up the issue of environmentalism and public opinion. Labels don’t mean what they used to, which adds to the confusion. But when you dig down into the actual issues themselves, public opinion becomes irrefutably clear. Even though few look closely at polls and surveys, awareness of this is slowly trickling out. We might finally be reaching a breaking point in this emerging awareness. The most politicized issues of our time show that the American public supports leftist policies, including on maybe the most politicized issue of all, climate change and global warming.

Yet as the American public steadily marches to the left, the Republican establishment uses big money to push the ‘mainstream’ toward right-wing extremism, and the Democrats pretend that their conservatism represents moderate centrism. The tension can’t be maintained without ripping the country apart. We can only hope that recent events will prove to have been a wake-up call, that maybe the majority of Americans are finally realizing they are the majority, not just silent but silenced.

The environmental issues we are facing are larger than any problems Americans have ever before faced. The reality of it hasn’t fully set in, but that will likely change quickly. It appears to have already changed in the younger generations. Still, you don’t even need to look to the younger generations to realize how much has changed. Trump voters are perceived as being among the most right-wing of Americans. Yet on many issues these political right demographics hold rather leftist views and support rather leftist policies. This shows how the entire American public is far to the left of the entire bi-partisan political establishment.

When even Trump voters support these environmental policies, why aren’t Democratic politicians pushing for what is supported by the majority across the political spectrum? Could it be because those Democratic politicians, like Republican politicians, are dependent on the backing and funding of big biz? Related to this, the data shows Americans are confused about climate change. Could that be because corporate propaganda and public relations campaigns, corporate lies and obfuscation, and corporate media have created this confusion?

It is quite telling that, despite all of this confusion and despite not thinking it will personally harm them, most Americans still support taking major actions to deal with the problem — such as more regulations, controls, and taxes, along with greater use of renewable energy. The corporate media seems to be catching on and is starting to provide better coverage, probably because it is simultaneously being challenged by alternative media that threatens its profit model and being attacked as ‘fake news’ by those like Trump. The conflict is forcing the issue to the surface.

This growing concern among the majority isn’t being primarily driven by self-interest, demographics, ideological worldview, political rhetoric, etc. False equivalency has long dominated public debate, in corporatist politics and corporate media. This is changing. Maybe enough people, including those in power, are realizing that this is not merely a political issue, that there is a real problem that we have to face as a society.

* * *

The ‘Spiral of Silence’ Theory Explains Why People Don’t Speak Up on Things That Matter
By Olga Mecking
New York Magazine

The Spiral Of Silence Keeps People From Speaking Out On The Issues That Matter Most
Curiosity

‘Global warming’ vs ‘climate change’
socomm@cornell

Climate Change
Gallup

Yale Climate Opinion Maps – U.S. 2016
by Peter Howe, Matto Mildenberger, Jennifer Marlon, & Anthony Leiserowitz
Yale Program on Climate Change Communication

Voters Favor Climate-Friendly Candidates
by Geoff Feinberg
Yale Program on Climate Change Communication

Most Clinton, Sanders, Kasich, and Trump Supporters–but not Cruz Supporters–Think Global Warming Is Happening
by Anthony Leiserowitz, Edward Maibach, Connie Roser-Renouf, Geoff Feinberg, & Seth Rosenthal
Yale Program on Climate Change Communication

More than Six in Ten Trump Voters Support Taxing and/or Regulating the Pollution that Causes Global Warming
by Anthony Leiserowitz, Edward Maibach, Connie Roser-Renouf, Matthew Cutler, & Seth Rosenthal
Yale Program on Climate Change Communication

Sanders Supporters Are the Most Likely to Say “Global Warming” Is a Very Important Issue When Deciding Whom to Vote For
by Anthony Leiserowitz, Edward Maibach, Connie Roser-Renouf, Geoff Feinberg, & Seth Rosenthal
Yale Program on Climate Change Communication

Americans Say Schools Should Teach Children About the Causes, Consequences, and Potential Solutions to Global Warming
by Anthony Leiserowitz, Edward Maibach, Connie Roser-Renouf, Seth Rosenthal, & Matthew Cutler
Yale Program on Climate Change Communication

Relatively Few Americans Who Think Global Warming Is Not Happening Think It is a Hoax
Yale Program on Climate Change Communication

Americans Who Think Global Warming Is Not Happening Are Concerned About a Range of Energy and Environmental Issues
Yale Program on Climate Change Communication

Americans Who Think Global Warming Is Not Happening Favor or Do Not Oppose Policies
Yale Program on Climate Change Communication

2016 Election Memo: It’s The Climate, Stupid!
by Elliott Negin
Moyers & Company

Politicians at Sea
by Marina Schauffler
Natural Choices

70 Percent of Americans Have This Surprising View of Global Warming
by Sean Breslin
The Weather Channel

Ready and Organizing: Scientists, and Most Americans, Have Climate Change on Their Minds
by Astrid Caldas
Union of Concerned Scientists

Maps Show Where Americans Care about Climate Change
by Erika Bolstad
Scientific American

Many More Republicans Now Believe in Climate Change
Poll shows a big leap from two years ago
by Evan Lehmann
Scientific American

Half of U.S. Conservatives Say Climate Change Is Real
Trump and Cruz reject global warming, while more Republicans see it as a threat.
by Eric Roston
Bloomberg

Trump doesn’t represent American views on climate change: a visual guide
by John D. Sutter
CNN

Trump supporters don’t like his climate policies
by Dana Nuccitelli
Bulletin of the Atomic Scientists

Did The Pope Change Catholics’ Minds On Climate Change?
by Maggie Koerth-Baker
FiveThirtyEight

Brief exposure to Pope Francis heightens moral beliefs about climate change
by Jonathon P. Schuldt, Adam R. Pearson, Rainer Romero-Canyas, & Dylan Larson-Konar
Pomona College

New poll shows Exxon CEO is closer to public opinion on climate than Trump
by Bill Dawson
Texas Climate News

How Americans Think About Climate Change, in Six Maps
by Nadja Popovich, John Schwartz, & Tatiana Schlossberg
The New York Times

Climate change is a threat – but it won’t hurt me, Americans say
by J.D. Capelouto
Thomson Reuters Foundation

Americans are confused on climate, but support cutting carbon pollution
by Dana Nuccitelli
The Guardian

Well Lookie Here, a Majority of Americans Support Restricting Carbon Pollution from Coal Plants
by Ellie Shechet
Jezebel

Surveys Show Major Gap Between Voters and Their Representatives On Global Warming
by Noa Banayan
Earthjustice

Climate Change Denial ‘a Problem’ for Republicans
by Steve Baragona
VOA News

Climate of Capitulation
by Vivian Thomson
The MIT Press

Conservatives can lead the charge to deal with climate change
by Susan Atkinson
The Pueblo Chieftain

Race Unrealist

I was noticing again a post by RaceRealist from last year: Strong Evidence, Strong Argument: Race IQ and Adoption. It’s in response to a previous post of mine: Weak Evidence, Weak Argument: Race, IQ, Adoption. I don’t want to waste too much time on it, but the intellectual dishonesty and simplemindedness of it amuse me. I’ll do a quick breakdown of it, that is, quick by my standards.

In reference to my post, he describes it as one where “an environmentalist in the B-W IQ debate regurgitates the same old and boring long-refuted studies and the same long-refuted researchers, to attempt to prove that the gap in IQ is purely environmental in nature. I have written on this before, so his reasoning that there is “weak evidence” and “a weak argument on race and IQ” is clearly wrong, as we know the studies and researchers he cites have been disproven. Steele then references another discussion he had on the black-white IQ gap, speaking about people being “uninformed” about a position while arguing it.”

First of all, I’m not a mere ‘environmentalist’ and I’ve never argued for a blank slate view of human nature. Anyone who has seriously studied the topic knows that the nature vs nurture debate is meaningless. There is no way to separate the two because genes never exist outside of nor are expressed separate from environment and epigenetics. Genetics, in an evolutionary sense, are simply a biological aspect of the environment. That is just reality, no matter one’s ideology.

I’ve never denied the role of ‘nature’. I’ve simply pointed out the obvious fact that it isn’t separate from nurture. That RaceRealist has previously expressed his ignorance on this matter is irrelevant. He neither disproves what he disbelieves nor proves what he believes. He just makes a lot of assertions based on weak evidence that he cherrypicks and strong evidence he ignores. I’m not sure how I’m supposed to respond to that in an intelligent and rational way.

“Since he’s saying that there is a “difficulty of replicability” with IQ tests in transracial adoption studies, he hasn’t read the ones for the hereditarian argument and seeing how they show the biological origin of IQ or he’s just being willfully ignorant.”

I have read about them. And I’ve written many posts about the issue. Just do a search in my blog for twin studies, adoption studies, heritability, etc. (or look below at the blog posts I have listed). Any argument RaceRealist could attempt to make I’ve probably dismantled before. I don’t plan on repeating myself. It is pointless for him to deny his own willful ignorance and project it onto others. I’m unimpressed.

“There are no racial biases in education nor policing. Police arrest less black offenders than are reported by the NCVS and affirmative action getting blacks ahead shows that the racial bias is for them, not whites. Saying that it’s “systemic and institutional” is a cop out since you know he doesn’t want to even entertain the idea of the hereditarian hypothesis.”

That is as willfully ignorant as one can get, as the overwhelming evidence is there for all to see, assuming one wants to see. But I can’t force knowledge onto those who don’t want to know. Trust me, I’ve tried. That is what amuses me. I’m laughing here as I write these words. It is so ludicrous. If I can’t have a meaningful debate with ignoramuses like this, I can at least mock them.

“Stereotype threat, my favorite. ST can only be replicated in the lab. “Prejudice” doesn’t matter.”

What the fuck does that even mean? Stereotype threat has been studied no differently than anything else. I don’t know what is meant by “in the lab”. Stereotype threat has been studied, for example, in classrooms. I guess anywhere research happens is in a sense a ‘lab’. Numerous studies have been done and replicated. It’s standard scientific research and well supported.

Prejudice doesn’t matter, he claims. Yet this is the same kind of person who complains about prejudices against whites and right-wingers, as if those supposed prejudices matter a lot. What he really means to say is that he doesn’t think anyone who isn’t like himself matters. He should be honest enough to state the truth, instead of hiding behind politically correct rhetoric.

“What other confounders could be controlled for that you think had a negative impact on the mean IQ of blacks at adolescence throughout adulthood?”

It is shocking that anyone who wants to pretend not to be a complete ignoramus could ask such a question. Does he really not know about confounding factors? Whose ass has his head been shoved up?

The confounding factors have been detailed in thousands of research papers, articles, books, and posts. Many edifying sources can easily be found just by doing a web search for “confounding factors”. If he really wants an answer, he could use the search function on my blog, as I’ve listed confounding factors in numerous blog posts and comments. Even so, most of these confounding factors are obvious to the point of being common sense.

Yet he would, of course, dismiss out of hand any confounding factor for the simple reason that no confounding factor will ever fit into his preconceived belief system. RaceRealist’s entire post dances around the issue of confounding factors, momentarily asking a question of me that he would never ask of himself, much less attempt to answer. He doesn’t want to know. It’s ignorance upon ignorance, all the way down.

““Internalized racial biases” don’t matter since blacks have a higher self-esteeem about their physical attractiveness (Kanazawa, 2011), so “internalized racial biases” (which includes things such as one’s thoughts of one’s self physically) do not matter as they are more confident than are whites. This is due to testosterone, which makes blacks more extroverted than whites who are more extroverted than Asians (Rushton’s Differential-K Theory). If these racial biases were really to manifest themselves to actually sap 15 to 18 (1 to 1.2 SDs) IQ points from blacks, this would show in their self-confidence about themselves. Yet they are more confident, on average, than the other two major races.”

I know what he believes. That has already been made perfectly clear. These just-so stories amuse me endlessly. I really can’t stop laughing. Watching a race realist make an argument is like watching a monkey dressed up like a human doing tricks in the circus. It has the vague appearance of something resembling an argument, but it is simply absurd on the face of it.

I can knock his points down like shooting at a flock of ducks with a machine gun.

The Kanazawa study doesn’t say what he claims it says. Blacks in the study were asked to rate themselves, but they weren’t asked to rate others, so we have no idea how their self-ratings compare to how they would rate anyone else. It could simply be a fluke in how different populations interpret the rating system and so may say nothing about actual perception of self relative to perception of others. Besides, Kanazawa doesn’t acknowledge and discuss confounding factors, much less try to control for them. Kanazawa doesn’t even describe who the test subjects were or make an argument for why they are representative of the broader populations, which would require him to deal with confounding factors.

For example, maybe he was using test subjects that came from different backgrounds of socioeconomic class status, residential conditions, regional cultures, etc. Thomas Sowell argues that blacks adopted the white redneck culture before many of them migrated to states in the North. If that is the case, then multiple factors would need to be controlled for. What results would be seen with poor white Southerners or even poor whites in general? And how would they compare to blacks or at least particular black populations? We don’t know because Kanazawa’s research is near worthless, other than as a preliminary study to demonstrate that a better study needs to be done.

Does this really mean what Kanazawa and RaceRealist think it means? There is no evidence to support their ideologically-biased conclusion.
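To make the confounding-factor criticism concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the variable names, the effect sizes, the assumption that socioeconomic status is the confounder); it is not Kanazawa’s data or design. It simply shows how a naive group comparison can manufacture a difference that collapses once a measured confounder enters the model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented data: socioeconomic status (ses) both influences the
# self-rating and correlates with group membership, which is exactly
# what makes it a confounder.
rng = np.random.default_rng(0)
n = 2_000
ses = rng.normal(size=n)
group = np.where(ses + rng.normal(size=n) > 0, "A", "B")
self_rating = 0.5 * ses + rng.normal(size=n)  # no true group effect
df = pd.DataFrame({"group": group, "ses": ses, "self_rating": self_rating})

# Naive comparison: looks like a real group difference.
naive = smf.ols("self_rating ~ group", data=df).fit()

# Adjusted comparison: the spurious difference collapses toward zero
# once the measured confounder is controlled for. Unmeasured
# confounders would still bias the estimate, which is the deeper problem.
adjusted = smf.ols("self_rating ~ group + ses", data=df).fit()

print("naive estimate:   ", round(naive.params["group[T.B]"], 3))
print("adjusted estimate:", round(adjusted.params["group[T.B]"], 3))
```

The same logic runs in reverse, of course: a study that controls for nothing licenses no strong conclusion in either direction.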

Oppressed populations often respond with pride. Think of the proud Irish when they were under the oppression of the English. Think of the proud Scots-Irish in impoverished Appalachia. For such groups, the personal sense of pride gives them an attitude of self-respect in a social situation that makes it difficult to achieve the more tangible forms of self-worth. If you are part of a privileged demographic, you don’t need as much of an overtly declared sense of self-respect because all of society regularly tells you that you are valued more than others. The privileged, by default, have respect given to them by others. That is not the case for the underprivileged.

If that is true, then an exaggerated concern for self-esteem as a compensatory mechanism might be standard evidence of societal disadvantage and systemic prejudice. Centuries of institutionalized racism could explain why this compensatory mechanism has been so important for the black population. For much of their past, the black population’s sense of self-value was all that they had, as the majority of the black population for most of American history couldn’t even claim the value of self-ownership. This sense of ferociously defended self-value could have been a means of survival under centuries of brutal oppression. If so, it took centuries to develop and so won’t likely disappear quickly, especially considering that the legacy of racial prejudice demonstrably continues to this day, not to mention whatever epigenetic factors may still influence neurocognitive and psychological development.

Then again, there could be an even simpler explanation. Blacks on average deal with a lot more difficulties in life than whites on average, such as higher rates of: poverty, unemployment, police targeting, police brutality, etc. Maybe dealing with immense difficulties and managing to survive builds a sense of self-confidence, a proven belief that the individual can manage problems and that they will get by. Instead of a compensatory mechanism, it would be more directly an expression of survival in a dangerous and difficult world.

This could be easily tested by looking at other poor and disadvantaged populations. But it might be hard to find comparable populations that were historically oppressed in the manner of centuries of racialized slavery, chain gang re-enslavement, Jim Crow laws, race wars, lynching, sundown towns, redlining, etc. Simply being a non-white minority isn’t necessarily comparable. Asian-Americans and Hispanic-Americans didn’t experience oppression to this degree and they don’t show signs of higher self-esteem, the two maybe being causally related.

It’s telling that researchers like Kanazawa never bother to fully test their own hypotheses. And it’s telling that race realists have so little intellectual capacity to analyze research like this to actually understand what it does and does not say, what can and cannot be concluded from it.

The point is we don’t know, as many possible explanations can be articulated and need to be further researched (see: Factors Influencing Racial Comparisons of Self-Esteem, Gray-Little & Hafdahl). Interestingly, according to Twenge and Crocker (Race and self-esteem): “Blacks’ self-esteem increased over time relative to Whites’, with the Black advantage not appearing until the 1980s.” If testosterone explains the racial differences, then what evidence is there that black levels of testosterone increased around 1980, and what caused it? Testosterone levels are a strange argument to make, especially considering that self-perception and self-assessment have been shown to change according to environmental conditions beyond just stereotype threat: television watching, a presidential election, etc. Besides, there is much conflicting research about testosterone differences, some of it showing no notable racial differences, specifically between blacks and whites.

As for Rushton’s differential k theory, there has been much debate about it with research showing different results. But as far as I know, no researcher has yet tested the hypothesis by controlling for all known confounding factors. So, for the time being, it remains an unproven hypothesis. Many have argued that Rushton’s research was designed badly, an inevitable outcome when confounding factors are ignored.

Yet more just-so stories shot down.

“It’s been discussed ad nasueam. The data attempting to say that blacks are just as intelligent are whites are wrong, as I will show below. The data for the hereditarian hypothesis is not weak, as I have detailed on this blog extensively.”

Race realists declare their beliefs ad nauseam. So what? I find it interesting that race realists are only able to make their arguments by ignoring the data that disconfirms or complicates their ideologically motivated conclusions and by ignoring criticisms of the data that they use as a defense. If you can’t make an intellectually honest argument, why would you expect others to treat you as though you were intellectually honest? A good question that RaceRealist should ask himself.

“Race is not a social construct, but a biological reality. If this debate is “about as meaningful as attempting to compare the average magical intelligence of those sorted into each Hogwarts Houses by the magical sorting hat”, why waste youre time writing this post with tons of misinformation?”

Declaring your beliefs doesn’t add anything to the debate. Everyone knows what you believe. The trick is you have to prove what you believe. But that would require you to take the evidence seriously, all of the evidence and not just what is convenient.

“Steele cites Block (2005), a “philosopher of science”. Rushton and Jensen (2005, p. 279) say that those (Block) who say that gene-environment interactions are so hard to entangle, why then, do identical twins raised apart show identical signs of intelligence (among many other heritable items)?”

I’ve written about this before. Identical twin research is some of the worst research around for the reason I constantly repeat, a lack of controlling for confounding factors: most twins raised apart still shared the same in utero environment, sometimes the same early childhood environment, or else were raised in similar environments because they were adopted into similar families in the same or similar communities.

All of this is common knowledge for anyone not utterly ignorant on the matter. How am I supposed to argue against someone’s ignorance when they want to be ignorant? I don’t know. I haven’t figured out how to force the ignorant to not be ignorant. That would be a great trick, if I was capable of doing that.

“Eyferth comes out, of course, which the study has been discredited. To be breif, 20 to 25 percent of the fathers to German women’s children weren’t sub-Saharan African, but French North Africans. 30 percent of blacks got refused in military service in comparison to 3 percent of whites due to rigorous testing for IQ in 70 years ago. One-third of the children were between the ages of 5 and 10 and two-thirds were between the ages of 10 and 13. Heritability estiamtes really begin to increase around puberty as well, so if the Eyferth study would have retested in the following 5 to 8 years to see IQ scores then, the scores would have dropped as that’s when genetic effects start to dominate and environments effects are close to 0.”

That is really amusing. He admits that his race realism means nothing. Because it is inconvenient, he suddenly argues that not all blacks are the same and that we shouldn’t make broad generalizations about all blacks. Were the populations representative? Maybe not. But then that exact criticism has been made against much of the data race realists obsess over. That is the whole point.

Sure, there is a lot of imperfect data out there. That is the core of my argument about why only an ignoramus could state a clear, strong conclusion when we know so little and what we do know is of such uncertain value. Often we can’t even determine how representative various populations are because we don’t know all the confounding factors or how to control for them. That is my whole point. I do find it endlessly humorous that someone like RaceRealist can’t see how this applies to his own arguments.

I can’t help but laugh at the rest of his ‘analysis’ as well. He states that, “20 to 25 percent of the fathers to German women’s children weren’t sub-Saharan African”. So? About one in five American blacks are mostly European in ancestry. And more than one in twenty have no detectable African genetics whatsoever. That means there is a significant number of American blacks with little to no sub-Saharan African ancestry that shows up on genetic tests. Most post-colonial black populations are heavily mixed in various ways.

The issue remains that ignorant race realists like to pretend that all blacks are somehow a single ‘race’ in any meaningful sense. But that is obviously untrue, even according to the data they use. This ignorance is further exacerbated because I have never met a race realist, at least not of this bigoted variety, who even understands what heritability means (hint: it isn’t the same thing as genetic inheritance, as any geneticist knows). Heritability rates would include any confounding factors not controlled for and, of course, most of those confounding factors would be non-genetic. Beyond that, there is no rational reason to assume that genetic factors have any more effect at one age than at another. Such an assumption comes from the lack of basic comprehension about heritability.
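Since confusion about the term comes up constantly, it is worth spelling out the textbook definition. This is the standard decomposition from quantitative genetics, nothing specific to the studies discussed here. Heritability is a population-level ratio of variances, not a statement about genetic causation in any individual:

$$
h^2 = \frac{\operatorname{Var}(G)}{\operatorname{Var}(P)},
\qquad
\operatorname{Var}(P) = \operatorname{Var}(G) + \operatorname{Var}(E) + 2\operatorname{Cov}(G,E)
$$

Any environmental variance that a study design cannot separate from Var(G), such as shared wombs or similar adoptive placements, gets absorbed into the numerator and inflates the estimate. And the covariance term is exactly why a clean nature/nurture split is impossible in principle, which is the point made above.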

We know next to nothing about genetics, since almost all the research is based on measuring correlations. It is rare that direct genetic causation is ever studied and even more rare that it is proven. This is why many researchers have simply given up on finding genetic causes for much of anything. The fact is that genetics never exist or get expressed in isolation from non-genetic factors. The two responses to this are intellectual humility and willful ignorance. I’ve chosen the former and RaceRealist chose the latter.

“Headstart gains are temporary, and there is a fadeout over time.. Arthur Jensen was writing about this 50 years ago. IQ and scholastic achievement gains only last for a few years after Headstart, then genetics starts to take effect as the child grows older.”

Is RaceRealist utterly stupid? I ask that in all seriousness. The only other possibility is that he is being disingenuous. Why would it be surprising that a temporary change in environmental conditions often only has a temporary change in results for individuals temporarily affected? It doesn’t take a genius to figure that out.

I could go on and on, ripping apart every one of RaceRealist’s beliefs. But what is the point? I’ve already disproven this kind of bullshit again and again, as have many others. Such ignorance is infinite. That is why I end up just throwing my hands up in the air and laughing with amusement. I’ll go on mocking such people, as long as I continue to find them amusing. What other use can they serve?

As RaceRealist ends by quoting Rushton and Jensen in response to Nisbett, I’ll turn the table around. Nisbett writes, basically stating they are full of shit:

Rushton and Jensen’s (2005) article is characterized by failure to cite, in any but the most cursory way, strong evidence against their position. Their lengthy presentation of indirectly relevant evidence which, in light of the direct evidence against the hereditarian view they prefer, has little probative value, and their “scorecard” tallies of evidence on various points cannot be sustained by the evidence.

* * * *

If you actually care about knowledge more than ignorance, questioning curiosity more than dogmatic ideology, then you can read what I’ve posted before. I offer a ton of data, quotes, and sources:

Basic Issues First: Race and IQ
Heritability & Inheritance, Genetics & Epigenetics, Etc
What Genetics Does And Doesn’t Tell Us
What do we inherit? And from whom?
Identically Different: A Scientist Changes His Mind
Unseen Influences: Race, Gender, and Twins
Using Intelligence to Assess Intelligence
The IQ Conundrum
HBD Proponents, Racists and Racialists
Racial Perceptions and Genetic Admixtures
To Know Racism
Examining Our Racialized Lives
Racial Reality Tunnel
Race Realism, Social Constructs, and Genetics
Race Realism and Racialized Medicine
The Bouncing Basketball of Race Realism
Race Is Not Real, Except In Our Minds
Racist Realist
To Control or Be Controlled
Disturbing Study Highlights Racism
Racism Without Racists: Victimization & Silence
An Unjust ‘Justice’ System: Victimizing the Innocent
Are Blacks More Criminal, More Deserving of Punishment and Social Control?
War On Drugs Is War On Minorities
Substance Control is Social Control
Institutional Racism & Voting Rights
Black Feminism and Epistemology of Ignorance
Racist Ideology within Racial Terminology
Racecraft: Political Correctness & Free Marketplace of Ideas
Race-Racism Evasion
Racism, Proto-Racism, and Social Constructs
The Racial Line and Racial Identity
Scientific Races and Genetic Diversity
Structural Racism and Personal Responsibility
Working Hard, But For What?
Whose Work Counts? Who Gets Counted?
Worthless Non-Workers
Deep Roots in Dark Soil
“Before the 1890s…”
Opportunity Precedes Achievement, Good Timing Also Helps
Are White Appalachians A Special Case?
Americans Left Behind: IQ, Education, Poverty, Race, & Ethnicity
Class and Race as Proxies
Race & Wealth Gap
No, The Poor Aren’t Undeserving Moral Reprobates
The Desperate Acting Desperately
The Privilege of Even Poor Whites
To Be Poor, To Be Black, To Be Poor and Black
Poverty In Black And White
Black Families: “Broken” and “Weak”
The Myth of Weak and Broken Black Families
Crime and Incarceration, Cause and Correlation
On Racialization of Crime and Violence
Fearful Perceptions
Paranoia of a Guilty Conscience
John Bior Deng: Racism, Classism
Why Are Blacks Concentrated in Inner Cities?
From Slavery to Mass Incarceration
Invisible Men: Mass Incarceration, Race, & Data
Invisible Problems of Invisible People
White Violence, White Data
More Minorities, Less Crime
Conservative Arguments Recycled and Repackaged
Race & Racism: Reality & Imagination, Fear & Hope
Slavery and Eugenics
Slavery and Eugenics: Part 2
Black Superiority
Eugenics: Past & Future
Slavery and Capitalism
12 Years a Slave, 4 Centuries an Oppression
Facing Shared Trauma and Seeking Hope
Society: Precarious or Persistent?
Plowing the Furrows of the Mind
Union Membership, Free Labor, and the Legacy of Slavery

Bias About Bias

A common and contentious issue is the accusation of bias, often in the media but more interestingly in science. But those perceiving bias can’t agree on what the biases are. Some even see biases in how biases are understood. An example of this is how ideologies are labeled, defined, framed, and measured. I’m specifically thinking in terms of opinion polling and social science research.

A certain kind of liberal oddly agrees with conservatives about many criticisms of liberalism. I can be that kind of odd liberal in some ways, as complaining about liberals is one of my favorite activities and I do so very much from a liberal perspective. But there are two areas where I disagree with liberals who critique their fellow liberals.

First, I don’t see a liberal bias in the social sciences or whatever else, at least not in the way it is often argued. And second, I don’t see human nature as being biased toward conservatism (nor, as Jonathan Haidt concludes, that conservatives are more broadly representative of human nature and have a better understanding of it).

* * *

Let me begin with the first.

I agree in one sense, from a larger perspective, and I could go even further. There is a liberal bias in our entire society and in all of modern Western civilization. Liberalism is the dominant paradigm.

As far as that goes, even conservatives today have a liberal bias, which is obvious when one considers how most of conservatism is defined by the liberalism of the past and often not even that far into the past. Conservatives in the modern West are more liberal than liberals used to be — not just more liberal in a vague relative sense, as contemporary conservatives in historical terms are amazingly liberal (politically, socially, and psychologically). Beyond comparisons to the past, the majority who identify as conservative even hold largely liberal positions in terms of present-day standard liberalism.

Being in a society that has been more or less liberal for centuries has a way of making nearly everyone in that society liberal to varying degrees. Our short lives don’t allow us the perspective to be shocked by how liberal we’ve all become. This shows how much Western society has embraced the liberal paradigm. Even the most reactionary politics ends up being defined and shaped by liberalism. We live in a liberal world and, to that extent, we are all liberals in the broad sense.

But this gets into what we even mean by the words we use. A not insignificant issue.

This insight about the relativity of liberalism has been driven home for me. In the context of our present society, using the general population as the measure, those who identify as and are perceived as liberals (specifically in mainstream politics and mainstream media) are really moderate-to-center-right. Sure, the average ‘liberal’ is to the left of the political right, by definition. Then again, the average ‘liberal’ is far to the right of most of the political left (or at least this is true for the liberal class that dominates). Those who supposedly represent liberalism are often neither strongly nor consistently liberal, and so I wonder: In what sense are they liberal? Well, beyond the general fact of their living in a liberal society during a liberal age.

This watered-down liberalism defined by the status quo skewed rightward becomes the defining context of everything in our society (and, assuming the so-called liberals are somewhere in the moderate middle, that still leaves unresolved the issue of what exactly they are in the middle of — middle of elite-promoted mainstream thought? middle of the professional middle-to-upper class?). If social science has a liberal bias, it is this bias of this ‘moderate middle’ or rather what gets portrayed as such. And put that way, it doesn’t sound like much of a bias as described, other than the bias of ideological confusion and self-confirmation, but certainly not a bias toward the political left. As far as leftists go, this supposed liberalism is already pretty far right in its embrace or at least tolerance of neoliberal corporatism and neocon oligarchy. Certainly, the ‘liberals’ of the Democratic Party are in many ways to the right of the American public, with nearly half of the latter not voting (and so we aren’t talking about a ‘liberalism’ that is in the middle of majority opinion).

The question isn’t just what words mean but who gets to define words and who has the power to enforce their definitions onto the rest of society. Liberalism ends up being a boundary, a last line of defense. This far left and no further. Meanwhile, there seems to be no limit to how far our society is allowed to drift right, often with the cooperation of ‘liberal’ New Democrats, until we teeter on the edge of authoritarianism and fascism, although always with liberal rhetoric playing in our ears. The liberal paradigm so dominates our imaginations that we can’t see the illiberal all around us. So, liberalism dominates, even as it doesn’t rule, at least not in a direct and simplistic sense.

With all this in mind, the mainstream may have a ‘liberal’ bias in this way. But it obviously doesn’t have a leftist bias. There is the problem. Leftism has been largely ignored, except for its usefulness as a bogeyman since the Cold War. Mainstream liberalism is as far (maybe further) away from leftism as it is from conservatism. And yet to mainstream thought, leftism isn’t allowed to have an independent identity outside of liberalism, besides when a scapegoat is needed. Ignored in all this is how far left the American public, the silenced majority, actually is — an important detail, one might think.

Social scientists, political scientists, and pollsters routinely include nuanced categories for the political right, distinguishing conservatives from libertarians, authoritarians, and reactionaries. But what about nuanced categories for the political left? They don’t exist, at least not within mainstream thought. There is little if any research and data about American social democrats, socialists, communists, Marxists, anarcho-syndicalists, left-libertarians, etc., as if such people either don’t exist or don’t matter. It’s only been in recent years that pollsters even bothered to ask Americans about some of this, discovering that the majority of certain demographics (younger generations, minorities, etc.) do lean left, including about the terms and labels they favor, such as seeing ‘socialism’ in a positive light.

In social science, we know so little about the political left. The research simply isn’t there. Social science researchers may be ‘liberal’, however we wish to define that, but one gets the sense that few social science researchers are left-liberals and fewer still are leftists. It would be hard for radical left-wingers (or those who are perceived as such within the mainstream) to get into and remain within academia, to get hired and get tenure and then to do social science research. As hierarchical and bureaucratic institutions that often run on a business model and are increasingly privately funded, present-day universities aren’t as welcoming to the most liberal-minded leftist ideologies.

Anarchists, in particular, are practically invisible to social science research. Just as invisible are left-libertarians (many being anarcho-syndicalists), as it is assumed in the mainstream that libertarian is by definition right-wing, despite the fact that even right-libertarians tend to be rather liberal-minded (more liberal-minded than mainstream liberals in many ways). It’s almost impossible to find any social science research on these ideologies and what mindsets might underlie them.

Let’s at least acknowledge our ignorance and not pretend to know more than we do.

* * *

This brings me to the second thing.

Among some liberals (e.g., Jonathan Haidt), it’s assumed that human nature is inherently conservative. What is interesting is that this is, of course, a standard conservative argument. But you never hear the opposite, conservatives arguing human nature is liberal.

The very notion of a singular human nature is itself a conservative worldview. A more liberal-minded view is that human nature either doesn’t exist, not in a monolithic sense at least, or else that human nature is fluid, malleable, and shaped by the environment. The latter view is becoming the dominant view in the social sciences, although there are some holdouts like Haidt.

Mainstream thought changes slowly. The idea of a singular human nature was primarily held by the liberal-minded in centuries past. This is because it was used to defend universal human rights and civil rights, often in terms of inborn natural rights. The Enlightenment thinkers and later revolutionary pamphleteers helped spread the notion that everyone had a human nature and that it was basically the same, no matter if European or otherwise, rich or poor, free or slave, civilized or savage.

As opposed to today, the conservative-minded of that earlier era weren’t open to such thinking. Now conservatives have embraced this former ideologically and psychologically liberal position. Classical liberalism, radical in its opposition to the traditionalism of its day, is now seen by even conservatives as the bedrock tradition of our liberal society.

The very notion of a human nature is the product of civilization, not of a supposed human nature. Prior to the Axial Age, no one talked about a human nature nor is it obvious that they ever acted based on the assumption that such a thing existed. The invention of the idea of a ‘human nature’ was itself a radical act, a reconception of what it means to be human. All of post-Axial Age civilization is built on this earliest of radical visions that was further radicalized during the Enlightenment. Without the Axial Age (and one might argue the breakdown of the bicameral mind that made it possible), there would have been no Greco-Roman democracy, republicanism, philosophy, and science; and so no Renaissance that would have helped inspire the European Enlightenment.

The question isn’t just what is human nature, such as conservative or liberal, individualistic or social, etc. First and foremost, we must ask if such a thing exists. If so, what exactly does it even mean to speak of a ‘human nature’? Those are the kinds of questions that are more likely to be considered by the most liberal-minded, at least in the context of present Western society.

When certain liberals argue for a conservative human nature, I suspect an ulterior motive. The implication seems to be that conservatism is the most primitive and base, uncultured and uncivilized layer of the human psyche. As liberals we must contend with this conservatism and so let’s throw the conservative wolf a bone in hopes of domesticating it into a dog that can be house-broken and house-trained.

This could be seen as turning liberalism into an advanced achievement of modern civilization that transcends beyond a base and primal human nature, as if the difficulties and weaknesses of liberalism prove its worth. Sure, conservatism may be the foundation, but liberalism is the penthouse on the upper floors decked out with the finest of modern conveniences. Liberalism is to conservatism, from this perspective, in the way modern civilization is to ancient tribalism. Whatever one may argue about those earlier societies in relation to human nature, I doubt many want to return to that kind of social order, not even among the most nostalgic of reactionaries.

This is an argument made by Jonathan Haidt in promoting a Whiggish narrative of capitalism, despite his at other times bending over backwards to praise conservatism. Using conservatism as a broad base upon which to build the progressive liberal dream is not exactly what conservatives are hoping for from their ideological movement. This is why Haidt doesn’t grasp that most conservatives don’t want to just get along, for egalitarian tolerance isn’t a conservative-minded attitude.

One might suspect that calling human nature fundamentally conservative is a bit of a backhanded compliment. A wary conservative likely would assume a hidden condescension or else an attempt to butter them up for some ulterior motive. Even with the best of intentions, this seems like a wrong way to think about the ideological situation.

Here is a central problem. Anthropological accounts of tribal societies, I’d argue, don’t confirm the hypothesis of a conservative human nature. Outside of the modern Westernized world, I doubt it makes much sense to use a modern Westernized frame like liberal vs conservative. The approach used by theorists of Darwinian psychology has too many pitfalls, misguiding us with cultural biases and leading to deeply unfalsifiable just-so stories. As John Gray stated so clearly, in The Knowns and the Unknowns (New Republic):

“There is no line of evolutionary development that connects our hominid ancestors with the emergence of the Tea Party. Human beings are not amoebae that have somehow managed to turn themselves into clever primates. They are animals with a history, part of which consists of creating cultures that are widely divergent. Using evolutionary psychology to explain current political conflicts represents local and ephemeral differences as perennial divisions in the human mind. It is hard to think of a more stultifying exercise in intellectual parochialism.

“Like distinctions between right and left, typologies of liberalism and conservatism may apply in societies that are broadly similar. But the meaning that attaches to these terms differs radically according to historical circumstances, and in many contexts they have no meaning at all.”

For example, in thinking about the Pirahã, I don’t see them as being fundamentally conservative, at least as Daniel Everett portrays them. It appears they don’t particularly care about or, in some cases, even comprehend the worldview of what we call conservatism: need for control and closure, ideological dogmatism and rigid belief systems, natural law and universal morality, family values and the sanctity of marriage, organized religion and religiosity (much less literalism and fundamentalism), rituals and traditions, law and order, social roles and authority figures, overtly enforced social norms and community-sanctioned punishments, public shaming and harsh judgment, disciplinarian parenting and indoctrination of children, strict morality and sexual prudery, disgust about uncleanliness and protection against contagion, worry about injury and death, fear-ridden anxiety and heightened threat perception, dislike toward a lack of orderliness and clear guidelines, etc.

Within their society, they don’t have any kind of hierarchies or privileged positions. They have no chiefs, respected committee of elders, governing body, or political system. Any person could be a temporary leader for a particular activity, but the need for a leader is merely pragmatic and rare. Their society is loosely organized with no formal or traditional roles, such as shaman or medicine man. They lack anything resembling a social institution or social structure. They don’t even have such things as initiations into adulthood, traditions of storytelling, etc. The communal aspects of their tribalism are quite basic and mostly in the background. What holds their society together is simply a cultural identity and personal relationships, not outward rules and forms.

Their way of relating to the larger world is casual as well. They don’t have an inordinate amount of worries and concerns about outsiders or hatred and aggression toward them. They don’t seem to obsess about perceived enemies nor foster a worldview of conflict and danger. The worst that they do is complain about those they think treated them unfairly, such as trade deals and land usage, but even that is talked about in a personal way between individuals. Otherwise, their attitude toward non-Pirahã is mostly a casual indifference and the tolerant acceptance that follows from it.

In some key ways, the Pirahã are less conservative-minded and authoritarian than Western liberals. On the other hand, their society is basically conformist and ethnocentric in a typical tribalistic fashion. And they do have some gender role patterns, including in their language. But their pedophilia is gender neutral, not privileging men, as everyone is permitted to participate in sexual play.

Even within the conformity of their group identity, they strongly disapprove of one individual telling another individual what to do. No Pirahã will tell another Pirahã how to be a Pirahã. And if a Pirahã was unhappy being a Pirahã, I doubt another Pirahã would be bothered or try to stop them from leaving. They appear to have a rather live and let live philosophy.

Pointing out a specific area of social science research, I’m not sure how boundary types would be applied to the Pirahã, in that they don’t think about boundaries as modern Westerners do. They live in such a small world that what exists outside of the boundaries of their experience is simply irrelevant, such that they wouldn’t even recognize a boundary as such. Where their experience of the world stops, that is the edge of their world. There is just what they personally know and then there is everything else. Boundaries are explicitly acknowledged liminal spaces and so extremely fuzzy in their worldview, including boundaries of consciousness and identity. The worldviews of either individuality or group-mindedness would likely seem meaningless to them.

Even pointing out the few areas that could be interpreted as ‘conservative’, I wouldn’t think that would be all that helpful. It doesn’t really say much about human nature in a broad sense. What anthropology shows us, more than anything, is that human societies are diverse and human nature contains immense potential.

Consider all of this from the perspective of the outsider.

Jonathan Haidt came to his understanding partly because of an early experience among another traditional culture, India with its ancient Hinduism and caste system. That gave him a contrast to his liberal view of individualism and convinced him that individualism was lacking in something key to human nature.

I agree, as far as that goes. But I’d simply point out that in the United States the political right is often more obsessed with individualism than is the political left.

It’s American liberals who go on and on about community, the commons, social capital, social responsibility, concern for future generations, externalized costs, environmental protection, natural resource conservation, public parks, public good, public welfare, universal healthcare, universal education, child protection, worker protection, labor unions, public infrastructure, collective governance, group rights, defense of minority cultures, Native American tribal autonomy, etc. And a typical response by American conservatives is to accuse progressive liberals of being collectivists (maybe they’re right about this) while declaring the abstract rights and simplistic individualism of classical liberalism, often mixing this up with fundamentalist religion as though the Christian soul was the basis of Enlightenment individualism and the Biblical God the inspiration for the American Revolution.

Ironically, it is liberals in promoting tolerance who so often end up defending traditional religions and cultures against the attacks by modern-minded conservatives. The latter group, through internalizing libertarian and Objectivist ideologies, have become the fiercest advocates of classical liberalism and hyper-individualism.

Comparison between societies doesn’t necessarily tell us much about comparisons of ideologies within a society. If Haidt had instead spent that time in the Amazon with the Pirahã, he probably would have come to very different views. Plus, it always depends on your starting point, the biases you bring with you. Daniel Everett, who did spend years with the Pirahã, was coming from a different place and so ended up with a different view. Everett was a conservative missionary seeking to convert the natives, but instead they deconverted him and he became an atheist. My sense is that meeting a traditional society left Everett way more liberal than he began, causing him to embrace an attitude of cultural relativism, as inspired by the epistemological relativism of the Pirahã.

What Haidt misses is that Western religious conservatives, especially in the United States, tend to be individualistic Protestants (even American Catholics are strongly individualistic). It’s not that Everett necessarily lost his Evangelical individualism in being deconverted, for the traditional society that he met was in some ways even more individualistic, even as it was less individualistic in other ways. The fundamental conflict had little to do with individualism at all. A religious conservative like Everett had been lost in abstractions, based on an abstract religious tradition, but he was blind to these abstractions until he met the Pirahã, who found his abstractions to be useless and irritating.

American conservatism, religious and otherwise, can tell us nothing about traditional societies. As Corey Robin convincingly argues, modern conservatives aren’t traditionalists. Modern conservatism was created in response to the failure of the ancien régime. Conservatives came to power not to revive the old order but to create a new and improved order. It wasn’t a movement to conserve but a reaction to what had already been lost. This was clear even early on, as observed by the French counter-revolutionary Joseph de Maistre when he pointed out that people identifying as conservatives only appeared after revolution had largely destroyed what came before.

Also, keep in mind that individualism and liberalism didn’t appear out of nowhere. Incipient forms of both, as I pointed out earlier, came on the scene back in the Axial Age. Even the India Haidt visited was a fully modern society that had seen millennia of change and progress. Hinduism had long ago fallen under the sway of varying forms of influence from the Axial Age to British Imperialism. And if we are to speculate a bit by considering Julian Jaynes’ bicameral theory, even the hierarchical social orders of recent civilizations were late on the scene in the longer view of vast societal development beginning with agriculture and the first settled communities.

To claim we know the ideological substructure of our humanity is to overlook so many complicating factors, some of which we know but most of which we don’t.

This has been a difficulty in our attempt to understand our own psychological makeup, in how our minds and societies operate. The ultimate bias isn’t political but cultural. Most social science research has been done on the WEIRD (Western, Educated, Industrialized, Rich, And Democratic), primarily white middle class college students. It turns out that very different results are found when other populations are studied, not just countries like India but also tribes like the Pirahã. What we know about ideological groupings, as with human nature, might look far different if we did equally large numbers of studies on the poor, minorities, non-Westerners, independent societies, etc.
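As a toy illustration of the sampling problem (a sketch with invented numbers, not real survey data), consider how an estimate computed only from an over-studied subgroup can diverge from the population value:

```python
import numpy as np

# Invented numbers: three subpopulations whose mean "attitude score"
# differs. Researchers only ever sample the first, WEIRD-like group.
rng = np.random.default_rng(42)
weird = rng.normal(loc=1.0, scale=1.0, size=70_000)
group_b = rng.normal(loc=-0.5, scale=1.5, size=230_000)
group_c = rng.normal(loc=0.2, scale=2.0, size=700_000)
population = np.concatenate([weird, group_b, group_c])

convenience = rng.choice(weird, size=500)          # the typical study
representative = rng.choice(population, size=500)  # the study rarely run

print(f"convenience-sample estimate:    {convenience.mean():+.2f}")
print(f"representative-sample estimate: {representative.mean():+.2f}")
print(f"true population mean:           {population.mean():+.2f}")
```

The convenience estimate isn’t wrong about the subgroup; it is wrong as a claim about ‘human nature’, and that substitution is what the WEIRD critique objects to.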

It’s not just a matter of what kind of human nature we might be talking about. More importantly, the question is exactly whose human nature we are talking about and who is doing the questioning. WEIRD researchers studying WEIRD subjects will lead to WEIRD results and conclusions. That is not exactly helpful. And it is even worse than that, as the biases go deep. Our very approach to human nature, identity, and the mind is shaped by our culture. In a WEIRD culture, that has tended to mean the assumption of an autonomous, bounded individual. As Robert Burton explained it (A Skeptic’s Guide to the Mind, pp. 107-108):

“Results of a scientific study that offer universal claims about human nature should be independent of location, cultural factors, and any outside influences. Indeed, one of the prerequisites of such a study would be to test the physical principles under a variety of situations and circumstances. And yet, much of what we know or believe we know about human behavior has been extrapolated from the study of a small subsection of the world’s population known to have different perceptions in such disparate domains as fairness, moral choice, even what we think about sharing. If we look beyond the usual accusations and justifications — from the ease of inexpensively studying undergraduates to career-augmenting shortcuts — we are back at the recurrent problem of a unique self-contained mind dictating how it should study itself.

“The idea that minds operate according to universal principles is a reflection of the way we study biological systems in general. To understand anatomy, we dissect one body as thoroughly as possible and draw from it a general grasp of human anatomy. Though we expect variations, we see these as exceptions to a general rule. It is to be expected that we see the mind in the same light. One way to circumvent this potentially misleading tendency to draw universal conclusions whenever possible is to subdivide the very idea of a mind into the experiential (how we experience a mind) and the larger conceptual category of the mind — how we think about, describe, and explain what a mind is. What we feel at the personal (experiential) level should not be confused with what a mind might be at a higher level — either as a group or as an extended mind.”

The very belief that the mind can be explained by the mind is a particular worldview. In the context of WEIRD populations being biased toward such a belief, Burton brought up an interesting point (pp. 50-51):

“If each of us has his/ her own innate ease or difficulty with which a sense of causation is triggered, the same data may generate different degrees of a sense of underlying causation in its readers. Though purely speculative, I have a strong suspicion that those with the most easily triggered innate sense of causation are more likely to reduce complex behavior to specific cause-and-effect relationships, while those with lesser degrees of an inherent sense of causation are more comfortable with ambiguous and paradoxical views of human nature. (Of course, for me to make any firm argument as to the cause of the authors’ behavior would be to fall into the same trap.)

“Unfortunately for science, there is no standard methodology for objectively studying subjective phenomena such as the mind. One investigator’s possible correlation is another’s absolute causation. The interpretation of the cause of subjective experience is the philosophical equivalent of asking every researcher if he/ she sees the same red that you do. The degree and nature of neuroscientists’ causal conclusions about the mind are as idiosyncratic as their experience of love, a sunset, or a piece of music.

“There is a great irony that underlies modern neuroscience and philosophy: the stronger an individual’s involuntary mental sense of self, agency, causation, and certainty, the greater that individual’s belief that the mind can explain itself. Given what we understand about inherent biases and subliminal perceptual distortions, hiring the mind as a consultant for understanding the mind feels like the metaphoric equivalent of asking a known con man for his self-appraisal and letter of reference.”

* * *

Here are some further thoughts about liberalism and such.

Maybe our very view of liberal bias has been biased by the ‘liberal class’ that dominates, defines, and studies liberalism. I don’t doubt that there are all kinds of biases related to our living in a modern liberal society as part of post-Enlightenment Western Civilization. But this bias might be wider, deeper, and more complex than we realize.

This class issue has been on my mind a lot lately. We live in a class-obsessed society. Sure, we obsess about class differently than the Indian caste system does about caste, but in some ways we are even more obsessed by class for the very reason that it stands in for so much else. Castes explicitly include factors of ethnicity, religion, and social roles; class, in American society, has to do much more ideological work on its own to accomplish the same end of maintaining a social hierarchy.

Maybe this is why class ideology gets conflated with political ideology, in a way that wouldn't be seen in a different kind of society. Calling oneself a liberal in our society has only an indirect relation to liberal politics and a liberal mentality: many who identify as liberal aren't strongly liberal-minded about politics, while many who are strongly liberal-minded about politics don't identify as liberals.

The word ‘liberal’ doesn’t actually mean what we think it means. The same goes for ‘conservative’. These words are proxies for other things. To be called liberal in America most likely means you are part of the broad liberal class, which typically means you’re a well-educated middle-to-upper class professional, no matter that your politics might be moderate-to-conservative in many ways. A poor person who is liberal across the board, however, is unlikely to identify as a liberal because they aren’t part of the liberal class. This is why rhetoric about the liberal elite has such currency in our society, even as this so-called liberal elite can be surprisingly more conservative than the general public on a wide variety of key issues.

What we forget is that our society is highly unusual and not representative of human nature, not in the slightest. The American liberal class is the product of a society that is based on Social Darwinian pseudo-meritocracy, late capitalism, plutocratic cronyism, and neoliberal corporatism. As I argued earlier, even American universities are hierarchical, bureaucratic institutions. And the Ivy League colleges still use class-based legacy privileges, which is important for maintaining the American social order, as most politicians are Ivy League graduates, as are many recruits of the alphabet-soup agencies (e.g., the CIA). The larger history of Western universities precedes Enlightenment liberalism by centuries, not having been designed with leftist ideologies in mind.

Yet we consider universities to be refuges for the intellectual elite of the liberal class. That is only true in terms of the class social order. The majority of the liberal-minded, of the socially and politically liberal, won’t find a refuge in such a place. In fact, the most strongly liberal-minded would rarely fit into the stultifying, regimented lifestyle of a university. Succeeding in a university career would seem to require fairly strong traits of conservative-mindedness, although some have argued that this was less true decades ago.

As such, liberalism in the United States has taken on much meaning that has nothing directly to do with liberalism itself, specifically when talking about the role of liberalism within human nature. Consider other societies. In feudal Europe or the slaveholding American South, being liberal (psychologically, socially, and politically) would have had nothing whatsoever to do with class; if anything, being too liberal in such societies would have been harmful to your class status and class aspirations.

During the American Revolution, it was actually among the lower classes that the most liberal-minded radicals and rabble-rousers were found. Thomas Paine, a self-taught working-class bloke and often dirt poor, was one of the more liberal-minded among the so-called founding fathers. The more elite founding fathers were too invested in the status quo to go very far in embracing liberalism, and many of them became or always were reactionaries and counter-revolutionaries. The working-class revolutionaries who fought for liberalism didn’t tend to fare well, either before or after the revolutionary era. It took many more generations before a liberal class began to develop and, even then, the most strongly and radically liberal would often be excluded.

This is the point. A liberal class hasn’t always existed, despite liberal-minded traits having been part of human nature for longer than civilization has existed. The status quo ‘liberalism’ of the liberal class in the modern capitalist West is the product of specific conditions. It’s a social construct, as is ‘conservatism’. The entire framework of liberal vs conservative is a social construct that makes no sense outside of the specific society that formed it.

Environments are powerful shapers of the psyche, of attitudes and behavior, of worldviews and politics. All of Western civilization has become increasingly liberal, and a large part of that has to do with improved conditions for larger parts of the population, such as improved health and education even for the poor. In direct correlation with rising IQ, there is increasing liberalism. How class plays into this is that the upper classes see the improvements before the lower classes do, but eventually the improvements trickle down, or at least that is what has happened so far. The average working-class American today is healthier, smarter, better educated, and more liberal than the middle class was in centuries past.

So, even class can only be spoken of as a comparative status at any given point in history because it isn’t an objective reality. The liberalism of the American liberal class, as such, can only be meaningfully discussed within the context of its time and place. This is more about a social order than about political ideologies, per se. That is most obvious in how conservatives embrace the liberalism of the past, for conservative and liberal have no objective meaning and there is no objective way to measure them.

Environments affect us in ways that involve confounding factors, and most of us inherit our environments along with other factors from our parents (epigenetics connecting environmental influences to new generations, even if a child was raised in another environment). Think about cats. For whatever reason, cat ownership is much more common in the Northeast and the Northwest of the United States. And as these are colder regions, people are more likely to keep cats inside. But this habit of having cats as indoor pets is a recent development. It has led to a rise in toxoplasmosis, a parasitic infection — as I’ve discussed before in terms of psychology and ideology:

“When mapped for the US population, there is significant overlap between the rate of toxoplasma gondii infections and the rate of the neuroticism trait. Toxoplasmosis is a known factor strongly correlated with neuroticism, a central factor of personality and mental health. When rates are high enough in a specific population, it can potentially even alter culture, which is related to ideology. Is it a coincidence that liberals have high rates of neuroticism and that one of the areas with high rates of toxoplasmosis is known for its liberalism?”

Are New Englanders a particular kind of liberal simply because that is the way they are? Or if we corrected for the confounding factor of cats and toxoplasmosis, would we find, for example, that there is no causal relation between liberalism and neuroticism?
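To make concrete what 'correcting for' a confounding factor means, here is a minimal sketch in Python, using entirely made-up numbers rather than real survey data (the variable names and effect sizes are my own hypothetical choices). If some third factor independently raised both neuroticism and liberalism scores, the two traits would correlate in the raw data, yet that correlation would shrink toward zero once we compare only people with the same exposure.

```python
# Hypothetical illustration of a confounder, with synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Made-up confounder: regional toxoplasmosis exposure (0 = low, 1 = high).
toxo = rng.binomial(1, 0.3, n)

# Suppose, purely for illustration, exposure independently raises both scores.
neuroticism = 1.5 * toxo + rng.normal(0, 1, n)
liberalism = 1.5 * toxo + rng.normal(0, 1, n)

# The raw correlation looks substantial...
print("raw r:", np.corrcoef(neuroticism, liberalism)[0, 1])

# ...but it vanishes once we 'correct for' the confounder by stratifying,
# i.e., comparing only people with the same exposure level.
for level in (0, 1):
    mask = toxo == level
    print(f"r within exposure={level}:",
          np.corrcoef(neuroticism[mask], liberalism[mask])[0, 1])
```

In this toy setup the within-group correlations hover near zero, which is the pattern the question above gestures at: a genuine correlation in the population, but no causal relation between the two traits themselves.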

Environments aren’t always inherited, as they can change quite easily. Will a New England family that moved to the South still show increased rates of neurotic liberalism several generations later? Probably not. Most of this isn’t intentional, and parents are often perplexed about why their children turn out differently, oblivious to the larger conditions that shape individuals.

My conservative parents raised me in a liberal church and in some liberal towns. And maybe more importantly, they raised me with cats in the house. It wasn’t genetic determinism and inborn nature that made me into a neurotic liberal. Still, the potential for neuroticism and liberalism had to be within me for environmental conditions to make it manifest. And indeed I can see how my neurotic liberalism is just an exaggerated variation of personality traits I did inherit from my conservative parents who are mildly liberal-minded.

Then again, I did inherit much of my broader environment from my parents: born in the United States, spent my formative years in the Midwest, grew up during the Cold War, went to public schools, encouraged to respect education from a young age, my entire life shaped by Western culture and capitalism, etc. So, my parents’ conservatism and my liberalism probably have more in common than not, as compared to the rest of the world’s population and as compared to past societies. Parents and their children share a social order and the way that social order shapes not just people but all the world around them. And in many cases, parents and their children will share the same basic position or place in society.

That is the case with my family, as contact with the broad liberal class has influenced my conservative parents as much as it has influenced me. The same goes for the Midwestern sensibility I share with my parents. My parents’ Midwestern conservatism seemed liberal when our family moved South. And my liberalism is far different from what goes for liberalism in the South. Had various lines of my family remained outside of the Midwest, the following generations would probably have been far different. Choices to move that were made by previous generations of non-Midwesterners led to my parents and me being born Midwesterners.

Then, even later on, living in the South, my parents and I couldn’t shake how growing up in the Midwest had permanently altered us, more powerfully than any political ideology (although less so for my dad, maybe because his mother was a Southerner). This is why it is often easier for me to talk with my conservative parents or with conservative Iowans than with the liberals of the liberal class from other parts of the country.

Context is everything. And this gets me wondering. If all confounding factors were controlled for, what would be left that could be fairly and usefully identified as political ideology?

When feudalism was the dominant paradigm and ruling social order, it simply seemed like reality itself. It was assumed that social and class position were built into human nature. This is one of the earliest sources of racial thinking. The aristocracy and monarchy assumed (based on pseudo-scientific theories and observations of class, ethnicity, and animal husbandry) that feudal serfs were a separate race, i.e., a sub-species. It turns out that they were wrong. But if they had had the ability to measure various factors (from personality to ideology, from physiology to health), they would have noted consistent patterns that supported the belief that the social order was based on a natural order. It was a dogmatic ideology that was systematically enforced and so became a self-fulfilling prophecy.

What if our own society operates in a similar way? Class-based opportunities and disadvantages, privileges and punishments socially and physically construct a shared experience of reality. A cultural worldview then rationalizes and encloses this in a mythos of ideological realism. The sense of identity is framed by this and those who inquire into human nature already have their sense of human nature constrained accordingly. Unless they are confronted by a truly foreign society, their worldview will remain hermetically sealed.

* * *

How many in our society, even among the well-educated, ever manage to escape from this blindered habitus? Not many. Only as the culture itself shifts will more people within the culture be able to explore new understandings. This will then lead to new biases, but one could hope those biases will be more expansive and flexible.

Bias is inevitable. But we have the added problem of being biased in our perception of bias. It’s impossible to fully discern one’s own biases while under their influence, although we can gain the awareness of our predicament. The fact that we are beginning to question the biases of our culture indicates that we are beginning to shift outside of them. It will take at least a few more generations, though, before we can understand this shift and what it means.

Give it some time and liberalism will mean something entirely new. And the conservatives of the future will embrace the liberalism of our present. Some of what we now consider radical or even unimaginable will eventually be normal and commonplace. There will be different sets of biases framed in a different worldview and dominated by a different paradigm.

Most people in the future likely won’t even notice that a shift happened, as it likely will be gradual. They’ll assume that the world they know is in some sense how the world has always been. That assumption will shape their sense of human nature, how they think about it and study it, probably in ways that would surprise us. But one thing is for sure. They’ll look back on our debates about ideological natures and biases in the way we look back on the simplistic and misguided rhetoric of feudalism that defined the classes as separate races.

One thing that is safe to assume is that our society is wrong about most things we’ve taken as obvious truth. The realization of such uncertainty is a step toward new understanding.

Human Nature: Categories & Biases

There is something compelling about seemingly opposing views. There is Mythos vs Logos, Apollonian vs Dionysian, Fox vs Hedgehog, Socratic vs Sophistic, Platonic vs Aristotelian, Spinoza vs Locke, Paine vs Burke, Jung vs Freud, nature vs nurture, biology vs culture, determinism vs free will, parenting style vs peer influence, etc.

And these perceived divisions overlap in various ways, a long developing history of ideas, worldviews, and thinkers. It’s a dance. One side will take the lead and then the other. The two sides will take different forms, the dividing lines shifting.

In more recent decades, we’ve come to more often think in terms of political ideologies. The greatest of them all is liberal vs conservative. But since World War II, there has been a growing obsession with authoritarianism and anti-authoritarianism. And there is the newer area of social dominance orientation (SDO). Some prefer focusing on progressive vs reactionary as more fundamental, as it relates to the history of the revolutionary and counterrevolutionary.

With the advent of social science and neuroscience, we’ve increasingly put all of this in new frames. Always popular, there are the left and right brain hemispheres, along with more specific brain anatomy (e.g., conservatives on average have a larger amygdala). Then there is the personality research: Myers-Briggs, trait theory, boundary types, etc. — of those three, trait theory is the most widely used.

Part of it is that humans simply like to categorize. It’s how we attempt to make sense of the world. And there is nothing that preoccupies human curiosity more than humanity itself, our shared inheritance of human ideas and human nature. For as long as humans have been writing and probably longer, there have been categorizations to slot humans into.

My focus has most often been toward personality, along with social science more generally. What also interests me is that one’s approach to such issues also comes in different varieties. With that in mind, I wanted to briefly compare two books. Both give voice to two sides of my own thinking. The first I’ll discuss is The Liberal’s Guide to Conservatives by J. Scott Wagner. And the second is A Skeptic’s Guide to the Mind by Robert Burton.

Wagner’s book is the kind of overview I wish I’d had earlier last decade. But a book like this gets easier to write as time goes on. Many points of confusion have been further clarified, if not always resolved, by more recent research. Then again, often this has just made us more clear about what exactly is our confusion.

What is useful about a book like this is that it helps show what we do know at the moment. Or simply what we think we know, until further research is done to confirm or disconfirm present theories. But at least some of it allows a fair amount of certainty that we are looking at significant patterns in the data.

It’s a straightforward analysis with a simple purpose. The author is on the political left and he wants to help those who share his biases to understand those on the political right who have different biases. A noble endeavor, as always. He covers a lot of territory and it is impressive. I won’t even attempt to summarize it all. I’m already broadly familiar with the material, as this area of study involves models and theories that have been researched for a long time.

What most stood out to me was his discussion of authoritarianism and social dominance orientation (SDO). For some reason, that seems more important than all the rest. Taken together, those represent the monkey wrench thrown into the gears of the human mind. I was amused when Wagner opined that,

Unlike all that subtlety around “social conformity-autonomy” and authoritarianism, the SDO test is straightforward: not to put too fine a point on it, but to me, the questions measure how much of a jerk you are. (Kindle Locations 3765-3767)

He holds no love for SDOs. And for good reason. Combine the worst aspects of the liberal elite, of the classical liberal variety found in a class-based pseudo-meritocracy. Remove any trace of liberal-minded tolerance, empathy, kindness, and compassion. Then wrap this all up with in-group domination. Serve with a mild sauce of near sociopathy.

The worst part of it is that SDOs are disproportionately found among those with wealth and power, authority and privilege. These people are found among the ruling elite for the simple reason that they want to be a ruling elite. Unless society stops them from dominating, they will dominate. It’s their nature, like the scorpion that stings the frog carrying it across the river. The scorpion can’t help itself.

All of that is important info. I do wish more people would read books like these. There is no way for the public, conservative and liberal alike, to come together in defense against threats to the public good when they don’t understand or often even clearly see those threats.

Anyway, Wagner’s book offers a systematizing approach, with a more practical emphasis that offers useful insight. He shows what differentiates people and what those demarcations signify. He offers various explanations and categorizations, models and theories. You could even take professional tests that will show your results on the various scales discussed, in order to see where you fit in the scheme of personality traits and ideological predispositions. Reading his book will help you understand why conflicts are common and communication difficult. But he doesn’t leave it at that, as he shares personal examples and helpful advice.

Now for the other approach, more contrarian in nature. This is exemplified by the other book I’ve been reading, the one by Robert Burton (whom I quoted in a recent post). Where Wagner brings info together, Burton dissects it into its complicated, messy details (Daniel Everett has a similar purpose). Yet Burton is also seeking to be of use, in promoting clear thinking and a better scientific understanding. His is a challenge not just to the public but also to scientific researchers.

Rather than promising answers to age-old questions about the mind, it is my goal to challenge the underlying assumptions that drive these questions. In the end, this is a book questioning the nature of the questions about the mind that we seem compelled to ask yet are scientifically unable to answer. (p. 7)

Others like Wagner show the answers so far found for the questions we ask. Burton’s motive is quite the opposite, to question those answers. This is in the hope of improving both questions and answers.

Here is what I consider the core insight from Burton’s analysis (pp. 105-107):

“Heinrich’s team showed the illusion to members of sixteen different social groups including fourteen from small-scale societies such as native African tribes. To see how strong the illusion was in each of these groups, they determined how much longer the “shorter” line needed to be for the observer to conclude that the two lines were equal. (You can test yourself at this website— http://www.michaelbach.de/ot/sze_muelue/index.html.) By measuring the amount of lengthening necessary for the illusion to disappear, they were able to chart differences between various societies. At the far end of the spectrum— those requiring the greatest degree of lengthening in order to perceive the two lines as equal (20 percent lengthening)— were American college undergraduates, followed by the South African European sample from Johannesburg. At the other end of the spectrum were members of a Kalahari Desert tribe, the San foragers. For the San tribe members, the lines looked equal; no line adjustment was necessary, as they experienced no sense of illusion. The authors’ conclusion: “This work suggests that even a process as apparently basic as visual perception can show substantial variation across populations. If visual perception can vary, what kind of psychological processes can we be sure will not vary?” 14

“Challenging the entire field of psychology, Heinrich and colleagues have come to some profoundly disquieting conclusions. Lifelong members of societies that are Western, educated, industrialized, rich, democratic (the authors coined the acronym WEIRD) reacted differently from others in experiment after experiment involving measures of fairness, antisocial punishment, and cooperation, as well as when responding to visual illusions and questions of individualism and conformity. “The fact that WEIRD people are the outliers in so many key domains of the behavioral sciences may render them one of the worst subpopulations one could study for generalizing about Homo sapiens.” The researchers found that 96 percent of behavioral science experiment subjects are from Western industrialized countries, even though those countries have just 12 percent of the world’s population, and that 68 percent of all subjects are Americans.

“Jonathan Haidt, University of Virginia psychologist and prepublication reviewer of the article, has said that Heinrich’s study “confirms something that many researchers knew all along but didn’t want to admit or acknowledge because its implications are so troublesome.” 15 Heinrich feels that either many behavioral psychology studies have to be redone on a far wider range of cultural groups— a daunting proposition— or they must be understood to offer insight only into the minds of rich, educated Westerners.

“Results of a scientific study that offer universal claims about human nature should be independent of location, cultural factors, and any outside influences. Indeed, one of the prerequisites of such a study would be to test the physical principles under a variety of situations and circumstances. And yet, much of what we know or believe we know about human behavior has been extrapolated from the study of a small subsection of the world’s population known to have different perceptions in such disparate domains as fairness, moral choice, even what we think about sharing. 16 If we look beyond the usual accusations and justifications— from the ease of inexpensively studying undergraduates to career-augmenting shortcuts— we are back at the recurrent problem of a unique self-contained mind dictating how it should study itself.”

I don’t feel much need to add to that. The implications of it are profound. This possibly throws everything up in the air. We might be forced to change what we think we know. I will point out that Jonathan Haidt is quoted in that passage. Like that of many other social scientists, Haidt’s own research has been limited in scope, something that has been pointed out before (by me and others). But at least those like Haidt are acknowledging the problem and putting some effort into remedying it.

These are exciting times. There is the inevitable result that, as we come to know more, we come to realize how little we know and how limited is what we know (or think we know). We become more circumspect in our knowledge.

Still, that doesn’t lessen the significance of what we’ve so far learned. Even with the WEIRD bias disallowing generalization about a universal human nature, the research done remains relevant to showing the psychological patterns and social dynamics in WEIRD societies. So, for us modern Westerners, the social science is as applicable as it ever was. But it also shows that there is nothing inevitable about human nature, for what has been demonstrated is an immense potential for diverse expressions of our shared humanity.

If you combine these two books, you will have greater understanding than either alone. They can be seen as opposing views, but at a deeper level they share a common purpose, that of gaining better insight into ourselves and others.

Development of Language and Music

Evidence Rebuts Chomsky’s Theory of Language Learning
by Paul Ibbotson and Michael Tomasello

All of this leads ineluctably to the view that the notion of universal grammar is plain wrong. Of course, scientists never give up on their favorite theory, even in the face of contradictory evidence, until a reasonable alternative appears. Such an alternative, called usage-based linguistics, has now arrived. The theory, which takes a number of forms, proposes that grammatical structure is not innate. Instead grammar is the product of history (the processes that shape how languages are passed from one generation to the next) and human psychology (the set of social and cognitive capacities that allow generations to learn a language in the first place). More important, this theory proposes that language recruits brain systems that may not have evolved specifically for that purpose and so is a different idea to Chomsky’s single-gene mutation for recursion.

In the new usage-based approach (which includes ideas from functional linguistics, cognitive linguistics and construction grammar), children are not born with a universal, dedicated tool for learning grammar. Instead they inherit the mental equivalent of a Swiss Army knife: a set of general-purpose tools—such as categorization, the reading of communicative intentions, and analogy making, with which children build grammatical categories and rules from the language they hear around them.

Broca and Wernicke are dead – it’s time to rewrite the neurobiology of language
by Christian Jarrett, BPS Research Digest

Yet the continued dominance of the Classic Model means that neuropsychology and neurology students are often learning outmoded ideas, without getting up to date with the latest findings in the area. Medics too are likely to struggle to account for language-related symptoms caused by brain damage or illness in areas outside of the Classic Model, but which are relevant to language function, such as the cerebellum.

Tremblay and Dick call for a “clean break” from the Classic Model and a new approach that rejects the “language centric” perspective of the past (that saw the language system as highly specialised and clearly defined), and that embraces a more distributed perspective that recognises how much of language function is overlaid on cognitive systems that originally evolved for other purposes.

Signing, Singing, Speaking: How Language Evolved
by Jon Hamilton, NPR

There’s no single module in our brain that produces language. Instead, language seems to come from lots of different circuits. And many of those circuits also exist in other species.

For example, some birds can imitate human speech. Some monkeys use specific calls to tell one another whether a predator is a leopard, a snake or an eagle. And dogs are very good at reading our gestures and tone of voice. Take all of those bits and you get “exactly the right ingredients for making language possible,” Elman says.

We are not the only species to develop speech impediments
by Moheb Costandi, BBC

Jarvis now thinks vocal learning is not an all-or-nothing function. Instead there is a continuum of skill – just as you would expect from something produced by evolution, and which therefore was assembled slowly, piece by piece.

The music of language: exploring grammar, prosody and rhythm perception in zebra finches and budgerigars
by Michelle Spierings, Institute of Biology Leiden

Language is a uniquely human trait. All animals have ways to communicate, but these systems do not bear the same complexity as human language. However, this does not mean that all aspects of human language are specifically human. By studying the language perception abilities of other species, we can discover which parts of language are shared. It are these parts that might have been at the roots of our language evolution. In this thesis I have studied language and music perception in two bird species, zebra finches and budgerigars. For example, zebra finches can perceive the prosodic (intonation) patterns of human language. The budgerigars can learn to discriminate between different abstract (grammar) patterns and generalize these patterns to new sounds. These and other results give us insight in the cognitive abilities that might have been at the very basis of the evolution of human language.

How Music and Language Mimicked Nature to Evolve Us
by Maria Popova, Brain Pickings

Curiously, in the majority of our interaction with the world, we seem to mimic the sounds of events among solid objects. Solid-object events are comprised of hits, slides and rings, producing periodic vibrations. Every time we speak, we find the same three fundamental auditory constituents in speech: plosives (hit-sounds like t, d and p), fricatives (slide-sounds like f, v and sh), and sonorants (ring-sounds like a, u, w, r and y). Changizi demonstrates that solid-object events have distinct “grammar” recurring in speech patterns across different languages and time periods.

But it gets even more interesting with music, a phenomenon perceived as a quintessential human invention — Changizi draws on a wealth of evidence indicating that music is actually based on natural sounds and sound patterns dating back to the beginning of time. Bonus points for convincingly debunking Steven Pinker’s now-legendary proclamation that music is nothing more than “auditory cheesecake.”

Ultimately, Harnessed shows that both speech and music evolved in culture to be simulacra of nature, making our brains’ penchant for these skills appear intuitive.

The sounds of movement
by Bob Holmes, New Scientist

It is this subliminal processing that spoken language taps into, says Changizi. Most of the natural sounds our ancestors would have processed fall into one of three categories: things hitting one another, things sliding over one another, and things resonating after being struck. The three classes of phonemes found in speech – plosives such as p and k, fricatives such as sh and f, and sonorants such as r, m and the vowels – closely resemble these categories of natural sound.

The same nature-mimicry guides how phonemes are assembled into syllables, and syllables into words, as Changizi shows with many examples. This explains why we acquire language so easily: the subconscious auditory processing involved is no different to what our ancestors have done for millions of years.

The hold that music has on us can also be explained by this kind of mimicry – but where speech imitates the sounds of everyday objects, music mimics the sound of people moving, Changizi argues. Primitive humans would have needed to know four things about someone moving nearby: their distance, speed, intent and whether they are coming nearer or going away. They would have judged distance from loudness, speed from the rate of footfalls, intent from gait, and direction from subtle Doppler shifts. Voila: we have volume, tempo, rhythm and pitch, four of the main components of music.

Scientists recorded two dolphins ‘talking’ to each other
by Maria Gallucci, Mashable

While marine biologists have long understood that dolphins communicate within their pods, the new research, which was conducted on two captive dolphins, is the first to link isolated signals to particular dolphins. The findings reveal that dolphins can string together “sentences” using a handful of “words.”

“Essentially, this exchange of [pulses] resembles a conversation between two people,” Vyacheslav Ryabov, the study’s lead researcher, told Mashable.

“The dolphins took turns in producing ‘sentences’ and did not interrupt each other, which gives reason to believe that each of the dolphins listened to the other’s pulses before producing its own,” he said in an email.

“Whistled Languages” Reveal How the Brain Processes Information
by Julien Meyer, Scientific American

Earlier studies had shown that the left hemisphere is, in fact, the dominant language center for both tonal and atonal tongues as well as for nonvocalized click and sign languages. Güntürkün was interested in learning how much the right hemisphere—associated with the processing of melody and pitch—would also be recruited for a whistled language. He and his colleagues reported in 2015 in Current Biology that townspeople from Kuşköy, who were given simple hearing tests, used both hemispheres almost equally when listening to whistled syllables but mostly the left one when they heard vocalized spoken syllables.

Did Music Evolve Before Language?
by Hank Campbell, Science 2.0

Gottfried Schlaug of Harvard Medical School does something a little more direct that may be circumstantial but is a powerful exclamation point for a ‘music came first’ argument. His work with patients who have suffered severe lesions on the left side of their brain showed that while they could not speak – no language skill as we might define it – they were able to sing phrases like “I am thirsty”, sometimes within two minutes of having the phrase mapped to a melody.

Chopin, Bach used human speech ‘cues’ to express emotion in music
by Andrew Baulcomb, Science Daily

“What we found was, I believe, new evidence that individual composers tend to use cues in their music paralleling the use of these cues in emotional speech.” For example, major key or “happy” pieces are higher and faster than minor key or “sad” pieces.

Theory: Music underlies language acquisition
by B.J. Almond, Rice University

Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University’s Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.

“Spoken language is a special type of music,” said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Cognitive Auditory Neuroscience. “Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music.”

How Brains See Music as Language
by Adrienne LaFrance, The Atlantic

What researchers found: The brains of jazz musicians who are engaged with other musicians in spontaneous improvisation show robust activation in the same brain areas traditionally associated with spoken language and syntax. In other words, improvisational jazz conversations “take root in the brain as a language,” Limb said.

“It makes perfect sense,” said Ken Schaphorst, chair of the Jazz Studies Department at the New England Conservatory in Boston. “I improvise with words all the time—like I am right now—and jazz improvisation is really identical in terms of the way it feels. Though it’s difficult to get to the point where you’re comfortable enough with music as a language where you can speak freely.”

Along with the limitations of musical ability, there’s another key difference between jazz conversation and spoken conversation that emerged in Limb’s experiment. During a spoken conversation, the brain is busy processing the structure and syntax of language, as well as the semantics or meaning of the words. But Limb and his colleagues found that brain areas linked to meaning shut down during improvisational jazz interactions. In other words, this kind of music is syntactic but it’s not semantic.

“Music communication, we know it means something to the listener, but that meaning can’t really be described,” Limb said. “It doesn’t have propositional elements or specificity of meaning in the same way a word does. So a famous bit of music—Beethoven’s dun dun dun duuuun—we might hear that and think it means something but nobody could agree what it means.”


Dark Matter of the Mind

The past half year has been spent in anticipation. Daniel Everett has a new book that finally came out the other day: Dark Matter of the Mind. I was so curious to read it because Everett is the newest and best-known challenger to mainstream linguistic theory. This interests me only because it happens to touch directly upon every aspect of our humanity: human nature (vs nurture), self-identity, consciousness, cognition, perception, behavior, culture, philosophy, etc.

The leading opponent to Everett’s theory is Noam Chomsky, a well-known and well-respected public intellectual. Chomsky is the founder of the so-called cognitive revolution — not that Everett sees it as all that revolutionary: “it was not a revolution in any sense, however popular that narrative has become” (Kindle Location 306). That brings into the conflict issues of personality, academia, politics, and funding. It’s two paradigms clashing, one of the paradigms having been dominant for more than a half century.

Now that I’ve been reading the book, I find my response to be mixed. Everett is running headlong into difficult terrain and I must admit he does so competently. He is doing the tough scholarly work that needs to be done. As Bill Benzon explained (at 3 Quarks Daily):

“While the intellectual world is rife with specialized argumentation arrayed around culture and associated concepts (nature, nurture, instinct, learning) these concepts themselves do not have well-defined technical meanings. In fact, I often feel they are destined to go the way of phlogiston, except that, alas, we’ve not yet discovered the oxygen that will allow us to replace them [4]. These concepts are foundational, but the foundation is crumbling. Everett is attempting to clear away the rubble and start anew on cleared ground. That’s what dark matter is, the cleared ground that becomes visible once the rubble has been pushed to the side. Just what we’ll build on it, and how, that’s another question.”

This explanation points to a fundamental problem, if we are to consider it a problem. Earlier in the piece, Benzon wrote that, “OK, I get it, I think, you say, but this dark matter stuff is so vague and metaphorical. You’re right. And it remains that way to the end of the book. And that, I suppose, is my major criticism, though it’s a minor one. “Dark matter” does a lot of conceptual work for Everett, but he discusses it indirectly.” Basically, Everett struggles with a limited framework of terminology and concepts. But that isn’t entirely his fault. It’s not exactly new territory that Everett discovered, just not yet fully explored and mapped out. The main thing he did, in his earliest work, was to bring up evidence that simply did not fit into prevailing theories. And now in a book like this he is trying to make sense of what that evidence indicates and what theory better explains it.

It would have been useful if Everett had been able to give a fuller survey of the relevant scholarship. But if he had, it would have been a larger and more academic book. It is already difficult enough for most readers not familiar with the topic. Besides, I suspect that Everett was pushing against the boundaries of his own knowledge and readings. It was easy for me to see everything that was left out, in relation to numerous other fields beyond his focus of linguistics and anthropology — such as: neurocognitive research, consciousness studies, classical studies of ancient texts, voice-hearing and mental health, etc.

The book sometimes felt like reinventing the wheel. Everett’s expertise is in linguistics, and apparently that has been an insular field of study defended by a powerful and entrenched academic establishment. My sense is that linguistics is far behind in development, compared to many other fields. The paradigm shift that is just now happening in linguistics has for decades been creating seismic shifts elsewhere in academia. Some argue that this is because linguistics became enmeshed in Pentagon-funded computer research and so has had a hard time disentangling itself in order to become an independent field once again. Chomsky, as leader of the cognitive revolution, has effectively dissuaded a generation of linguists from doing social science, instead promoting the hard sciences, a problematic position to hold about a rather soft field like linguistics. As anthropologist Chris Knight explains it, in Decoding Chomsky (Chapter 1):

“[O]ne bedrock assumption underlies his work. If you want to be a scientist, Chomsky advises, restrict your efforts to natural science. Social science is mostly fraud. In fact, there is no such thing as social science.[49] As Chomsky asks: ‘Is there anything in the social sciences that even merits the term “theory”? That is, some explanatory system involving hidden structures with non-trivial principles that provide understanding of phenomena? If so, I’ve missed it.’[50]

“So how is it that Chomsky himself is able to break the mould? What special factor permits him to develop insights which do merit the term ‘theory’? In his view, ‘the area of human language . . . is one of the very few areas of complex human functioning’ in which theoretical work is possible.[51] The explanation is simple: language as he defines it is neither social nor cultural, but purely individual and natural. Provided you acknowledge this, you can develop theories about hidden structures – proceeding as in any other natural science. Whatever else has changed over the years, this fundamental assumption has not.”

This makes Everett’s job harder than it should be, in breaking new ground in linguistics and in trying to connect it to the work already done elsewhere, most often in the social sciences. As humans are complex social animals living in a complex world, it is bizarre and plain counterproductive to study humans in the way one studies a hard science like geology. Humans aren’t isolated biological computers that can operate outside of the larger context of specific cultures and environments. But Chomsky simply assumes all of that is irrelevant on principle. Field research of actual functioning languages, as Everett has done, can be dismissed because it is mere social science. One can sense how difficult it is for Everett in struggling against this dominant paradigm.

Still, even with these limitations of the linguistics field, the book remains a more than worthy read. His using Plato and Aristotle to frame the issue was helpful to an extent, although it also added another variety of limitation. I got a better sense of the conflict of worldviews and how they relate to the larger history of ideas. But in doing so, I became more aware of the problems of that frame, very closely related to the problems of the nature vs nurture debate (for, in reality, nature and nurture are inseparable). He describes linguistic theoreticians like Chomsky as being in the Platonic school of thought. Chomsky surely would agree, as he has already made that connection in his own writings, in what he discusses as Plato’s problem and Plato’s answer. Chomsky’s universal grammar is Platonic in nature, for as he has written, such “knowledge is ‘remembered’” (“Linguistics, a personal view” from The Chomskyan Turn). This is Plato’s anamnesis and aletheia, an unforgetting of what is true, based on the belief that humans are born with certain kinds of innate knowledge.

That is interesting to think about. But in the end I felt that something was being oversimplified or entirely left out. Everett is arguing against nativism, that there is an inborn predetermined human nature. It’s not so much that he is arguing for a blank slate as he is trying to explain the immense diversity and potential that exists across cultures. But the duality of nativism vs non-nativism lacks the nuance to wrestle down complex realities.

I’m sympathetic to Everett’s view and to his criticisms of the nativist view. But there are cross-cultural patterns that need to be made sense of, even with the exceptions that deviate from those patterns. Dismissing evidence is never satisfying. Along with Chomsky, he throws in the likes of Carl Jung. But the difference between Chomsky and Jung is that the former is an academic devoted to pure theory unsullied by field research while the latter was a practicing psychotherapist who began with the particulars of individual cases. Everett is arguing for a focus on the particulars, upon which to build theory, but that is what Jung did. The criticisms of Chomsky can’t be shifted over to Jung, no matter what one thinks of Jung’s theories.

Part of the problem is that the kind of evidence Jung dealt with remains to be explained. It’s simply a fact that certain repeating patterns are found in human experience, across place and time. That is evidence to be considered, not dismissed, however one wishes to interpret it. Not even most respectable nativist thinkers want to confront this kind of evidence that challenges conventional understandings on all sides. Maybe Jungian theories of archetypes, personality types, etc are incorrect. But how do we study and test such things, going from direct observation to scientific research? And how is the frame of nativism/non-nativism helpful at all?

Maybe there are patterns, not unlike gravity and other natural laws, that are simply native to the world humans inhabit and so might not be entirely or at all native to the human mind, which is to say not in the way that Chomsky makes nativist claims about universal grammar. Rather, these patterns would be native to humans in the way and to the extent that humans are native to the world. This could be made to fit into Everett’s own theorizing, as he is attempting to situate the human within larger contexts of culture, environment, and such.

Consider an example from psychedelic studies. It has been found that people under the influence of particular psychedelics often have similar experiences. This is why shamanic cultures speak of psychedelic plants as having spirits that reside within or are expressed through them.

Let me be more specific. DMT is the most common psychedelic in the world, found in numerous plants and even produced in small quantities by the human brain. It’s an example of interspecies co-evolution, plants and humans having chemicals in common. Plants are chemistry factories, and they use chemicals for various purposes, including communication with other plants (e.g., chemically telling nearby plants that something is nibbling on their leaves, so put up your chemical defenses) and communication with non-plants (e.g., sending out bitter chemicals to inform the nibbler that it might want to eat elsewhere). Animals didn’t just co-evolve with edible plants but also with psychedelic plants. And humans aren’t the only species to imbibe. Maybe chemicals like DMT serve a purpose. And maybe there is a reason so many humans tripping on DMT experience what some describe as self-replicating machine elves or self-transforming fractal elves. Humans have been tripping on DMT for longer than civilization has existed.

DMT is far from being the only psychedelic plant chemical like this. It’s just one of the more common. The reason plant psychedelics do what they do to our brains is that our brains were shaped by evolution to interact with chemicals like these. These chemicals almost seem designed for animal brains, especially DMT, which our own brains produce.

That brings up some issues about the whole nativism/non-nativism conflict. Is a common experience many humans have with a psychedelic plant native to humans, native to the plant, or native to the inter-species relationship between human and plant? Where do the machine/fractal elves live, in the plant or in our brain? My tendency is to say that they in some sense ‘exist’ in the relationship between plants and humans, an experiential expression of that relationship, as immaterial and ephemeral as the love felt by two humans. These weird psychedelic beings are a plant-human hybrid, a shared creation of our shared evolution. They are native to our humanity to the extent that we are native to the ecosystems we share with those psychedelic plants.

Other areas of human experience lead down similar strange avenues. Take as another example the observations of Jacques Vallée. When he was a practicing astronomer, he became interested in UFOs as some of his fellow astronomers would destroy rather than investigate anomalous observational data. This led him to look into the UFO field and that led to his studying those claiming alien abduction experiences. What he noted was that the stories told were quite similar to fairy abduction folktales and shamanic accounts of initiation. There seemed to be a shared pattern of experience that was interpreted differently according to culture but that in a large number of cases the basic pattern held.

Or take yet another example. Judith Weissman has noted patterns among the stated experiences of voice-hearers. Another researcher on voice-hearing, Tanya Luhrmann, has studied how voice-hearing both has commonalities and differences across cultures. John Geiger has shown how common voice-hearing can be, even if for most people it is usually only elicited during times of stress. Based on this and the work of others, it is obvious that voice-hearing is a normal capacity existing within all humans. It is actually quite common among children and some theorize it was more common for adults in other societies. Is pointing out the surprisingly common experience of voice-hearing an argument for nativism?

These aspects of our humanity are plain weird. It was the kind of thing that always fascinated Jung. But what do we do with such evidence? It doesn’t prove a universal human nature that is inborn and predetermined. Not everyone has these experiences. But it appears everyone is capable of having these experiences.

This is where mainstream thinking in the field of linguistics shows its limitations. Going by Everett’s descriptions of the Pirahã, it seems likely that voice-hearing is common among them, although they wouldn’t interpret it that way. For them, voice-hearing appears to manifest as full possession and what, to Western outsiders, seems like a shared state of dissociation. It’s odd that, as a linguist, it didn’t occur to Everett to study the way of speaking of those who were possessed or to think more deeply about the experiential significance of the use of language indicating dissociation. Maybe it was too far outside of his own cultural biases, the same cultural biases that cause many Western voice-hearers to be medicated and institutionalized.

And if we’re going to talk about voice-hearing, we have to bring up Julian Jaynes. Everett probably doesn’t realize it, but his views seem to be in line with the bicameral theory or at least not in explicit contradiction with it on conceptual grounds. He seems to be coming out of the cultural school of thought within anthropology, the same influence on Jaynes. It is precisely Everett’s anthropological field research that distinguishes him from a theoretical linguist like Chomsky who has never formally studied any foreign language nor gone out into the field to test his theories. It was from studying the Pirahã firsthand over many years that the power of culture was impressed upon him. Maybe that is a commonality with Jaynes who began his career doing scientific research, not theorizing.

As I was reading the book, I kept being reminded of Jaynes, despite Everett never mentioning him or related thinkers. It’s largely how he talks about individuals situated in a world and worldview, along with his mentioning of Bourdieu’s habitus. This fits into his emphasis on the culture and nurture side of influences, arguing that people (and languages) are products of their environments. Also, when Everett wrote that his view was that there is “nothing to an individual but one’s body” (Kindle Location 328), it occurred to me how this fit the proposed experience of hypothetical ancient bicameral humans. My thought was confirmed when he stated that his own understanding was most in line with the Buddhist anatman, ‘non-self’. Just a week ago, I wrote the following in reference to Jaynes’ bicameral theory:

“We modern Westerners identify ourselves with our thoughts, the internalized voice of egoic consciousness. And we see this as the greatest prize of civilization, the hard-won rights and freedoms of the heroic individual. It’s the story we tell. But in other societies, such as in the East, there are traditions that teach the self is distinct from thought. From the Buddhist perspective of dependent (co-)origination, it is a much less radical notion that the self arises out of thought, instead of the other way around, and that thought itself simply arises. A Buddhist would have a much easier time intuitively grasping the theory of bicameralism, that thoughts are greater than and precede the self.”

Jaynes considered self-consciousness and self-identity to be products of thought, rather than the other way around. Like Everett’s view, this is an argument against the old Western belief in a human soul that is eternal and immortal, one that Platonically precedes individual corporeality. But notions like Chomsky’s universal grammar feel like an attempt to revamp the soul for a scientific era: a universal human nature that precedes any individual, a soul as the spark of God, the divine expressed as a language imprinted on the soul. If I must believe in something existing within me that pre-exists me, then I’d rather go with alien-fairy-elves hiding out in the tangled undergrowth of my neurons.

Anyway, how might Everett’s views of nativism/non-nativism have been different if he had been more familiar with the work of these other researchers and thinkers? The problem is that the nativism/non-nativism framework is itself culturally biased. It’s related to the problem of anthropologists who try to test the color perception of other cultures using tests that are based on Western color perception. Everett’s observations of the Pirahã, by the way, have also challenged that field of study, as he has claimed that the Pirahã have no color terms and no particular use for discriminating colors. That deals with the relationship of language to cognition and perception. Does language limit our minds? If so, how and to what extent? If not, are we to assume that such things as ‘colors’ are native to how the human brain functions? Would an individual born into and raised in a completely dark room still ‘see’ colors in their mind’s eye?

Maybe the fractal elves produce the colors, consuming the DMT and defecating rainbows. Maybe the alien-fairies abduct us in our sleep and use advanced technology to implant the colors into our brains. Maybe without the fractal elves and alien-fairies, we would finally all be colorblind and our society would be free from racism. Just some alternative theories to consider.

Speaking of cultural biases, I was fascinated by some of the details he threw out about the Pirahã, the tribe he had spent the most years studying. He wrote that (Kindle Locations 147-148), “Looking back, I can identify many of the hidden problems it took me years to recognize, problems based in contrasting sets of tacit assumptions held by the Pirahãs and me.” He then lists some of the tacit assumptions held by these people he came to know.

They don’t appear to have any concepts, language, or interest concerning God or gods, religion, or anything spiritual/supernatural that wasn’t personally experienced by them or by someone they personally know. Their language is very direct and precise about all experience and about the source of claims. But they don’t feel spiritually lost or somehow lacking. In fact, Everett describes them as extremely happy and easygoing, except on the rare occasion when a trader gives them alcohol.

They have no concern or fear about death, the dead, ancestral spirits, or the afterlife, nor do they seek out or talk about such things. They apparently are entirely focused on present experience. They don’t speculate, worry, or even show curiosity about what is outside their experience. Foreign cultures are irrelevant to them, this being an indifference toward and not a hatred of foreigners. It’s just that foreign culture is thought of as good for foreigners, as Pirahã culture is good for Pirahã. Generally, they seem to lack the standard anxiety that is typical of our society, despite living in, and walking around barefoot in, one of the most dangerous environments on the planet, surrounded by poisonous and deadly creatures. It’s actually malaria that tends to cut their lives short. But they don’t do much comparing and so don’t think of their lives as cut short.

Their society is based on personal relationships, and they “do not like for any individual to tell another individual how to live” (Kindle Locations 149-150). They don’t have governments or, as far as I know, governing councils. They don’t practice social coercion, community-mandated punishments, or enforced norms. They are a very small tribe living in isolation, with a way of life that has likely remained basically the same for millennia. Their culture and lifestyle are well-adapted to their environmental niche, and so they don’t tend to encounter many new problems that require them to act differently than in the past. They also don’t practice or comprehend incarceration, torture, capital punishment, mass war, genocide, etc. It’s not that violence never happens in their society, but I get the sense that it’s rare.

In the early years of life, infants and young toddlers live in near constant proximity to their mothers and other adults. They are given near ownership rights over their mothers’ bodies, freely suckling whenever they want without asking permission or being denied. But once weaned, Pirahã children are the opposite of coddled. Their mothers simply cut them off from their bodies, and the toddlers go through a tantrum period that is ignored by adults. They learn from experience and get little supervision in the process. They quickly become extremely knowledgeable and capable in living in and navigating the world around them. The parents have little fear about their children, and that lack of fear seems well-founded, as the children prove themselves quick to learn self-sufficiency and willing to contribute. It reminded me of Jean Liedloff’s continuum concept.

Then, once they become teenagers, they don’t go through a rebellious phase. It seems a smooth transition into adulthood. As he described it in his first book (Don’t Sleep, There Are Snakes, pp. 99-100):

“I did not see Pirahã teenagers moping, sleeping in late, refusing to accept responsibility for their own actions, or trying out what they considered to be radically new approaches to life. They in fact are highly productive and conformist members of their community in the Pirahã sense of productivity (good fishermen, contributing generally to the security, food needs, and other aspects of the physical survival of the community). One gets no sense of teenage angst, depression, or insecurity among the Pirahã youth. They do not seem to be searching for answers. They have them. And new questions rarely arise.

“Of course, this homeostasis can stifle creativity and individuality, two important Western values. If one considers cultural evolution to be a good thing, then this may not be something to emulate, since cultural evolution likely requires conflict, angst, and challenge. But if your life is unthreatened (so far as you know) and everyone in your society is satisfied, why would you desire change? How could things be improved? Especially if the outsiders you came into contact with seemed more irritable and less satisfied with life than you. I asked the Pirahãs once during my early missionary years if they knew why I was there. “You are here because this is a beautiful place. The water is pretty. There are good things to eat here. The Pirahãs are nice people.” That was and is the Pirahãs’ perspective. Life is good. Their upbringing, everyone learning early on to pull their own weight, produces a society of satisfied members. That is hard to argue against.”

The strangest, even shocking, aspect of Pirahã life is their sexuality. Kids quickly learn about sex. It’s not that people have sex out in the open, but theirs is a lifestyle that provides limited privacy. Sexual activity isn’t considered a strictly adult activity, and children aren’t protected from it. Quite the opposite (Kindle Locations 2736-2745):

“Sexual behavior is another behavior distinguishing Pirahãs from most middle-class Westerners early on. A young Pirahã girl of about five years came up to me once many years ago as I was working and made crude sexual gestures, holding her genitalia and thrusting them at me repeatedly, laughing hysterically the whole time. The people who saw this behavior gave no sign that they were bothered. Just child behavior, like picking your nose or farting. Not worth commenting about.

“But the lesson is not that a child acted in a way that a Western adult might find vulgar. Rather, the lesson, as I looked into this, is that Pirahã children learn a lot more about sex early on, by observation, than most American children. Moreover, their acquisition of carnal knowledge early on is not limited to observation. A man once introduced me to a nine- or ten-year-old girl and presented her as his wife. “But just to play,” he quickly added. Pirahã young people begin to engage sexually, though apparently not in full intercourse, from early on. Touching and being touched seem to be common for Pirahã boys and girls from about seven years of age on. They are all sexually active by puberty, with older men and women frequently initiating younger girls and boys, respectively. There is no evidence that the children then or as adults find this pedophilia the least bit traumatic.”

This seems plain wrong to most Westerners. Then again, to the Pirahã, much of what Westerners do would seem plain wrong or simply incomprehensible. Which is worse, Pirahã pedophilia or Western mass violence and systematic oppression?

What is most odd is that, like death for adults, sexuality for children isn’t considered a traumatizing experience and they don’t act traumatized. It’s apparently not part of their culture to be traumatized. They aren’t a society based on and enmeshed in a worldview of violence, fear, and anxiety. That isn’t how they think about any aspect of their lifeworld. I would assume that, like most tribal people, they don’t have high rates of depression and other mental illnesses. Everett pointed out that in the thirty years he knew the Pirahã there never was a suicide. And when he told them about his stepmother killing herself, they burst out in laughter because it made absolutely no sense to them that someone would take their own life.

That demonstrates the power of culture, environment, and lifestyle. According to Everett, it also demonstrates the power of language, inseparable from the society that shapes and is shaped by it, and demonstrates how little we understand the dark matter of the mind.

* * *

The Amazon’s Pirahã People’s Secret to Happiness: Never Talk of the Past or Future
by Dominique Godrèche, Indian Country

Being Pirahã Means Never Having to Say You’re Sorry
by Christopher Ryan, Psychology Today

The Myth of Teenage Rebellion
by Suzanne Calulu, Patheos

The Suicide Paradox: Full Transcript
from Freakonomics

How do we make the strange familiar?

I’ve been simultaneously looking at two books: This Is Your Brain on Parasites by Kathleen McAuliffe and Stranger Than We Can Imagine by John Higgs. The two relate, with the latter offering a larger context for the former. The theme of both might well be summed up with the word ‘strange’. The world is strange and becoming ever stranger. We are becoming aware of how utterly bizarre the world is, both within us and all around us.

The first is not only about parasites, despite the catchy title. It goes far beyond just that. After all, most of the genetic material we carry around with us, including within our brains, is non-human. It’s not merely that we are part of environments, for we are environments. We are mobile ecosystems with boundaries that are fluid and permeable.

For a popular science book, it covers a surprising amount of territory, and does so with more depth than one might expect. Much of the research discussed is preliminary and exploratory, as the various scientific fields have been slow to emerge. This might be because of how much they challenge the world as we know it and society as it is presently ordered. The author also details other psychological factors, such as the resistance humans have to dealing with topics of perceived disgust.

To summarize the book, McAuliffe explores the conclusions and implications of research involving parasitism and microbiomes in terms of neurocognitive functioning, behavioral tendencies, personality traits, political ideologies, population patterns, social structures, and culture. She relays some speculations from those involved in these fields, and what makes the speculations interesting is how they demonstrate the potential challenges of these new understandings. Whether or not we wish to take the knowledge and speculations seriously, the real-world consequences will remain to be dealt with somehow.

The most obvious line of thought is the powerful influence of environments. The world around us doesn’t just affect us. It shapes who we are at a deep level and so shapes our entire society. There is no way to separate the social world from the natural world. This isn’t fatalism, since we also shape our environments. The author points to the possibility that Western societies have been liberalized at least partly because of the creation of healthier conditions that allow human flourishing. Not that long ago, all of the West was dominated by fairly extreme forms of social conservatism, violent ethnocentrism, authoritarian systems, etc. Yet in the generations following the creation of sewer systems, clean water, environmental regulations, and improved healthcare, there was a revolution in Western social values along with vast improvements in human development.

In terms of intelligence, some call this the Moral Flynn Effect, a convergence of diverse improvements. And there is no reason to assume it will stop or won’t spread further. We know the problems we face. We basically understand what those problems are, what causes them, and what alleviates them, even if it doesn’t entirely eliminate them. So, we know what we should do, assuming we actually wanted to create a better world. Most importantly, we have the monetary wealth, natural resources, and human capacity to implement what needs to be done. It’s not a mystery, not beyond our comprehension and ability. But the general public has so far lacked this knowledge, for it takes a while for new information and understandings to spread — e.g., Enlightenment ideas developed over centuries, and it wasn’t until the movable-type printing press became common that revolutions began. The ruling elite, as in the past, will join in solving these problems when fear of the masses forces them to finally act. Or else the present ruling elite will itself be eliminated, as happened in previous societies.

What makes this book compelling are the many causal links and correlations shown. It matches closely with what is seen from other fields, forming a picture that can’t be ignored. It’s probably no accident that ethnocentric populations, socially conservative societies, authoritarian governments, and strict religions all happen to be found where there are high rates of disease, parasites, toxins, malnutrition, stress, poverty, inequality, etc. — all the conditions that stunt and/or alter physical, neurocognitive, and psychological development.

For anti-democratic ruling elites, there is probably an intuitive or even conscious understanding that the only way to maintain social control is by keeping the masses to some degree unhealthy and stunted. If you let people develop more of their potential, they will start demanding more. If you let intelligence increase and education improve, individuals will start thinking for themselves and the public will start imagining new possibilities.

Maybe it’s unsurprising that American conservatives have seen the greatest threat not just in public education but, more importantly, in public health. The political right doesn’t fear the failures of the political left, the supposed wasted use of tax money. No, what they fear is that key leftist policies have been proven to work. The healthier, smarter, and better educated people become, the more they develop attitudes of social liberalism and anti-authoritarianism, which leads toward the possibility of radical imagination and radical action. Until people are free to more fully develop their potentials, freedom is a meaningless and empty abstraction. The last thing the political right wants, and sadly this includes many mainstream ‘liberals’, is a genuinely free population.

This creates a problem. The trajectory of Western civilization for centuries has been the improvement of all these conditions, which seems to near inevitably create a progressive society. That isn’t to say the West is perfect. Far from it. There is nothing special about the West, and even in the West there are still large parts of the population living in severe deprivation and oppression. But imagine what kind of world it would be if universal healthcare and education were provided to every person on the planet. This is within the realm of possibility at this very moment, if we so choose to invest our resources in this way. In a single generation, we could transform civilization and solve (or at least shrink to manageable size) the worst social problems. There is absolutely nothing stopping us but ourselves. Instead, Western governments have been using their vast wealth and power to dominate other countries, making the world a worse place in the process and helping to create the very conditions that further undermine any hope for freedom and democracy. Blowing up hospitals, destroying infrastructure, and banning trade won’t lead to healthier and more peaceful populations; if anything, the complete opposite.

A thought occurred to me. If environmental conditions are so important to how individuals and societies form, then maybe political ideologies are less key than we think, or else not as important in the way we normally think about them. Our beliefs about our society might be more result than cause (maybe the limited availability of healthcare in the American South has been a central factor in maintaining its historical conservatism and authoritarianism). We have a hard time thinking outside of the conditions that have shaped our very minds.

That isn’t to say there is no feedback loop where ideology can reinforce the conditions that made it possible. The point is that free individuals aren’t fully possible in an unfree society, where individuals aren’t free on a practical level to develop toward optimal health and ability. As such, fights over ideology miss an important point. The actual fight needs to be over the conditions that precede any particular ideological framing and conflict. On a practical level, we would be better off investing money and resources where they are needed most and in ways that practically improve lives, rather than simply imprisoning populations into submission and bombing entire societies into oblivion, either of which worsens the problems for those people and for everyone else as well. The best way to fight crime and terrorism would be to improve the lives of all people. Imagine that!

The only reason we can have a public debate now is that we have finally come to the point where conditions have improved just enough that these issues are comprehensible, as we have begun to see their real-world impact in improving society. It would have been fruitless trying to have a public debate about public goods such as public healthcare and public education in centuries past, when even the notion of a ‘public’ still seemed radical. The conditions for a public with a voice to be heard had to first be created. Once that was in place, it is unsurprising that it required radicals like socialists to take it to the next level in suggesting the creation of public sanitation and public bakeries, based on the idea that health was a priority, if not an individual right then a social responsibility. Now, these kinds of socialist policies have become the norm in Western societies, the most basic level of a social safety net.

As I began reading McAuliffe’s book, I came across Higgs’ book. It wasn’t immediately apparent that there was a connection between the two. Reading some reviews and interviews showed the importance Higgs placed on the role (hyper-)individualism has played this past century. And upon perusing the book, I saw that he understood how this goes beyond philosophy and politics, touching upon every aspect of our society, most certainly including science.

It was useful thinking about the issue of micro-organisms in a larger historical context. McAuliffe doesn’t shy away from the greater implications, but her writing is focused on a single area of study. To both of these books, we could also add such things as the research on epigenetics, which might further help transform our entire understanding of humanity. Taken together, it is clear that we are teetering on the edge of a paradigm shift, of an extent only seen a few times before. We live in a transitional era, but it isn’t a smooth transition. As Higgs argues, the 20th century has been a rupture, what developed out of it not being fully explicable by what came before.

We are barely beginning to scratch the surface of our own ignorance, which is to say our potential new knowledge. We know just enough to realize how wrong mainstream views have been in the past. Our society was built upon and has been operating according to beliefs that have been proven partial, inaccurate, and false. The world is more complex and fascinating than we previously acknowledged.

Realizing we have been so wrong, how do we make it right going forward? What will it take for us to finally confront what we’ve ignored for so long? How do we make the strange familiar?

* * *

Donald Trump: Stranger Than We Can Imagine?
by David McConkey

Why Jeremy Corbyn makes sense in the age of the selfie
By John Higgs

Stranger Than We Can Imagine:
Making Sense of the Twentieth Century
by John Higgs
pp. 308-310

In the words of the American social physicist Alex Pentland, “It is time that we dropped the fiction of individuals as the unit of rationality, and recognised that our rationality is largely determined by the surrounding social fabric. Instead of being actors in markets, we are collaborators in determining the public good.” Pentland and his team distributed smartphones loaded with tracking software to a number of communities in order to study the vast amount of data the daily interactions of large groups generated. They found that the overriding factor in a whole range of issues, from income to weight gain and voting intentions, was not individual free will but the influence of others. The most significant factor deciding whether you would eat a doughnut was not willpower or good intentions, but whether everyone else in the office took one. As Pentland discovered, “The single biggest factor driving adoption of new behaviours was the behaviour of peers. Put another way, the effects of this implicit social learning were roughly the same size as the influence of your genes on your behaviour, or your IQ on your academic performance.”

A similar story is told by the research into child development and neuroscience. An infant is not born with language, logic and an understanding of how to behave in society. They are instead primed to acquire these skills from others. Studies of children who have been isolated from the age of about six months, such as those abandoned in the Romanian orphanages under the dictatorship of Nicolae Ceauşescu, show that they can never recover from the lost social interaction at that crucial age. We need others, it turns out, in order to develop to the point where we’re able to convince ourselves that we don’t need others.

Many aspects of our behaviour only make sense when we understand their social role. Laughter, for example, creates social bonding and strengthens ties within a group. Evolution did not make us make those strange noises for our own benefit. In light of this, it is interesting that there is so much humour on the internet.

Neuroscientists have come to view our sense of “self,” the idea that we are a single entity making rational decisions, as no more than a quirk of the mind. Brain-scanning experiments have shown that the mental processes that lead to an action, such as deciding to press a button, occur a significant period before the conscious brain believes it makes the decision to press the button. This does not indicate a rational individual exercising free will. It portrays the conscious mind as more of a spin doctor than a decision maker, rationalising the actions of the unconscious mind after the fact. As the Canadian-British psychologist Bruce Hood writes, “Our brain creates the experience of our self as a model – a cohesive, integrated character – to make sense of the multitude of experiences that assault our senses throughout our lifetime.”

In biology an “individual” is an increasingly complicated word to define. A human body, for example, contains ten times more non-human bacteria than it does human cells. Understanding the interaction between the two, from the immune system to the digestive organs, is necessary to understand how we work. This means that the only way to study a human is to study something more than that human.

Individualism trains us to think of ourselves as isolated, self-willed units. That description is not sufficient, either biologically, socially, psychologically, emotionally or culturally. This can be difficult to accept if you were raised in the twentieth century, particularly if your politics use the idea of a free individual as your primary touchstone. The promotion of individualism can become a core part of a person’s identity, and something that must be defended. This is ironic, because where did that idea come from? Was it created by the person who defends their individualism? Does it belong to them? In truth, that idea was, like most ideas, just passing through.

* * *

Social Conditions of an Individual’s Condition

Uncomfortable Questions About Ideology

To Put the Rat Back in the Rat Park

Rationalizing the Rat Race, Imagining the Rat Park

Social Disorder, Mental Disorder

The Desperate Acting Desperately

Homelessness and Mental Illness

It’s All Your Fault, You Fat Loser!

Morality-Punishment Link

Denying the Agency of the Subordinate Class

Freedom From Want, Freedom to Imagine

Ideological Realism & Scarcity of Imagination

The Unimagined: Capitalism and Crappiness

Neoliberalism: Dream & Reality

Moral Flynn Effect?

Racists Losing Ground: Moral Flynn Effect?

Immoral/Amoral Flynn Effect?

Of Mice and Men and Environments

What do we inherit? And from whom?

Radical & Moderate Enlightenments: Revolution & Reaction, Science & Religion

No One Knows

The Psychology and Anthropology of Consciousness

“There is in my opinion no tenable argument against the hypothesis that psychic functions which today seem conscious to us were once unconscious and yet worked as if they were conscious. We could also say that all the psychic phenomena to be found in man were already present in the natural unconscious state. To this it might be objected that it would then be far from clear why there is such a thing as consciousness at all.”
~ Carl Jung, On the Nature of the Psyche 

An intriguing thought by Jung. Many have considered this possibility. It leads to questions about what consciousness is and what purpose it serves. A recent exploration of this is The User Illusion by Tor Nørretranders, in which the author proposes that consciousness doesn’t determine what we do but chooses what we don’t do, a final veto before action is taken, though action itself requires no consciousness. As such, consciousness is useful and advantageous, just not absolutely necessary. It keeps you from eating that second cookie or saying something cruel.

Another related perspective is that of Julian Jaynes’ bicameral mind theory. I say related because Jaynes influenced Nørretranders. As for Jung, Jaynes was aware of his writings and stated his disagreement with some of the ideas: “Jung had many insights indeed, but the idea of the collective unconscious and of the archetypes has always seemed to me to be based on the inheritance of acquired characteristics, a notion not accepted by biologists or psychologists today.” (Quoted by Philip Ardery in “Ramifications of Julian Jaynes’s theory of consciousness for traditional general semantics.”) What these three thinkers agree about is that the unconscious mind is much more expansive and capable, more primary and important, than is normally assumed. There is so much more to our humanity than the limits of interiorized self-awareness.

What interested me was the anthropological angle. Here is something I wrote earlier:

“Julian Jaynes had written about the comparison of shame and guilt cultures. He was influenced in this by E. R. Dodds (and Bruno Snell). Dodds in turn based some of his own thinking about the Greeks on the work of Ruth Benedict, who originated the shame and guilt culture comparison in her writings on Japan and the United States. Benedict, like Margaret Mead, had been taught by Franz Boas. Boas developed some of the early anthropological thinking that saw societies as distinct cultures.”

Boas founded a school of thought about the primacy of culture, the first major challenge to race realism and eugenics. He gave the anthropology field new direction and inspired a generation of anthropologists. This was the same era during which Jung was formulating his own views.

As with Jung before him, Jaynes drew upon the work of anthropologists. Both also influenced anthropologists, but Jung’s influence of course came earlier. Even though some of these early anthropologists were wary of Jungian psychology, such as archetypes and collective unconscious, they saw personality typology as a revolutionary framework (those influenced also included the likes of Edward Sapir and Benjamin Lee Whorf). Through personality types, it was possible to begin understanding what fundamentally made one mind different from another, a necessary factor in distinguishing one culture from another.

In Jung and the Making of Modern Psychology, Sonu Shamdasani describes this meeting of minds (Kindle Locations 4706-4718):

“The impact of Jung’s typology on Ruth Benedict may be found in her concept of Apollonian and Dionysian culture patterns which she first put forward in 1928 in “Psychological Types in the cultures of the Southwest,” and subsequently elaborated in Patterns of Culture. Mead recalled that their conversations on this topic had in part been shaped by Sapir and Oldenweiser’s discussion of Jung’s typology in Toronto in 1924 as well as by Seligman’s article cited above (1959, 207). In Patterns of Culture, Benedict discussed Wilhelm Worringer’s typification of empathy and abstraction, Oswald Spengler’s of the Apollonian and the Faustian and Friedrich Nietzsche’s of the Apollonian and the Dionysian. Conspicuously, she failed to cite Jung explicitly, though while criticizing Spengler, she noted that “It is quite as convincing to characterize our cultural type as thoroughly extravert … as it is to characterize it as Faustian” (1934, 54-55). One gets the impression that Benedict was attempting to distance herself from Jung, despite drawing some inspiration from his Psychological Types.

“In her autobiography, Mead recalls that in the period that led up to her Sex and Temperament, she had a great deal of discussion with Gregory Bateson concerning the possibility that aside from sex difference, there were other types of innate differences which “cut across sex lines” (1973, 216). She stated that: “In my own thinking I drew on the work of Jung, especially his fourfold scheme for grouping human beings as psychological types, each related to the others in a complementary way” (217). Yet in her published work, Mead omitted to cite Jung’s work. A possible explanation for the absence of citation of Jung by Benedict and Mead, despite the influence of his typological model, was that they were developing diametrically opposed concepts of culture and its relation to the personality to Jung’s. Ironically, it is arguably through such indirect and half-acknowledged conduits that Jung’s work came to have its greatest impact upon modern anthropology and concepts of culture. This short account of some anthropological responses to Jung may serve to indicate that when Jung’s work was engaged with by the academic community, it was taken to quite different destinations, and underwent a sea change.”

It was Benedict’s Patterns of Culture that was a major source of influence on Jaynes. It created a model for comparing and contrasting different kinds of societies. Benedict was studying two modern societies, but Dodds came to see how it could be applied to different societies across time, even into the ancient world. That was a different way of thinking and opened up new possibilities of understanding. It set the stage for Jaynes’ radical proposal, that consciousness itself was built on culture. From types of personalities to types of cultures.

All of that is just something that caught my attention. I find fascinating such connections, how ideas get passed on and develop. None of that was the original reason for this post, though. I was doing my regular perusing of the web and came across some stuff of interest. This post is simply an excuse to share some of it.

This topic is always on my mind. The human psyche is amazing. It’s easy to forget what a miracle it is to be conscious, and to forget the power of the unconscious that underlies it. There is so much more to our humanity than we can begin to comprehend. Such things as dissociation and voice-hearing aren’t limited to crazy people or, if they are, then we’re all a bit crazy.

* * *

Other Multiplicity
by Mark and Rana Manning, Legion Theory

When the corpus callosum is severed in adults, we create separate consciousnesses which can act together cooperatively within a single body. In Multiple Personality Disorder (MPD), or Dissociative Identity Disorder (DID), as it is now known, psychological trauma to the developing mind also creates separate consciousnesses which can act together cooperatively within a single body. And in both cases, in most normal social situations, the individual would provide no reason for someone to suspect that they were not dealing with someone with a unitary consciousness.

The Third Man Factor: Surviving the Impossible
by John Geiger
pp. 161-162

For modern humans generally, however, the stress threshold for triggering a bicameral hallucination is much higher, according to Jaynes: “Most of us need to be over our heads in trouble before we would hear voices.” 10 Yet, he said, “contrary to what many an ardent biological psychiatrist wishes to think, they occur in normal individuals also.” 11 Recent studies have supported him, with some finding that a large minority of the general population, between 30 and 40 percent, report having experienced auditory hallucinations. These often involve hearing one’s own name, but also phrases spoken from the rear of a car, and the voices of absent friends or dead relatives. 12 Jaynes added that it is “absolutely certain that such voices do exist and that experiencing them is just like hearing actual sound.” Even today, though they are loath to admit it, completely normal people hear voices, he said, “often in times of stress.”

Jaynes pointed to an example in which normally conscious individuals have experienced vestiges of bicameral mentality, notably, “shipwrecked sailors during the war who conversed with an audible God for hours in the water until they were saved.” 13 In other words, it emerges in normal people confronting high stress and stimulus reduction in extreme environments. A U.S. study of combat veterans with post-traumatic stress disorder found a majority (65 percent) reported hearing voices, sometimes “command hallucinations to which individuals responded with a feeling of automatic obedience.”

Gods, voice-hearing and the bicameral mind
by Jules Evans, Philosophy for Life

Although humans evolved into a higher state of subjective consciousness, vestiges of the bicameral mind still remain, most obviously in voice-hearing. As much as 10% of the population hear voices at some point in their lives, much higher than the clinical incidence of schizophrenia (1%). For many people, voice-hearing is not debilitating and can be positive and encouraging.

Sensing a voice or presence often emerges in stressful situations – anecdotally, it’s relatively common for the dying to see the spirits of dead loved ones, likewise as many as 35% of people who have recently lost a loved one say they have a sense of the departed’s continued presence. Mountaineers in extreme conditions often report a sensed presence guiding them (known as the Third Man Factor).

And around 65% of children say they have had ‘imaginary friends’ or toys that play a sort of guardian-angel role in their lives – Jaynes thought children evolve from bicameral to conscious, much as Piaget thought young children are by nature animist.

Earslips: Of Mishearings and Mondegreens
by Steven Connor, personal blog

The processing of the sounds of the inanimate world as voices may strike us as a marginal or anomalous phenomenon. However, some recent work designed to explain why THC, the active component of cannabis, might sometimes trigger schizophrenia, points in another direction. Zerrin Atakan of London’s Institute of Psychiatry conducted experiments which suggest that subjects who had been given small doses of THC were much less able to inhibit involuntary actions. She suggests that THC may induce psychotic hallucinations, especially the auditory hallucinations which are classically associated with paranoid delusion, by suppressing the response inhibition which would normally prevent us from reacting to nonvocal sounds as though they were voices. The implications of this argument are intriguing; for it seems to imply that, far from only occasionally or accidentally hearing voices in sounds, we have in fact continuously and actively to inhibit this tendency. Perhaps, without this filter, the wind would always and for all of us be whispering ‘Mary’, or ‘Malcolm’.

Hallucinations and Sensory Overrides
by T. M. Luhrmann, Stanford University

Meanwhile, the absence of cultural categories to describe inner experience does limit the kinds of psychotic phenomena people experience. In the West, those who are psychotic sometimes experience symptoms that are technically called “thought insertion” and “thought withdrawal”, the sense that some external force has placed thoughts in one’s mind or taken them out. Thought insertion and withdrawal are standard items in symptoms checklists. Yet when Barrett (2004) attempted to translate the item in Borneo, he could not. The Iban do not have an elaborated idea of the mind as a container, and so the idea that someone could experience external thoughts as placed within the mind or removed from it was simply not available to them.

Hallucinatory ‘voices’ shaped by local culture, Stanford anthropologist says
by Clifton B. Parker, Stanford University

Why the difference? Luhrmann offered an explanation: Europeans and Americans tend to see themselves as individuals motivated by a sense of self identity, whereas outside the West, people imagine the mind and self interwoven with others and defined through relationships.

“Actual people do not always follow social norms,” the scholars noted. “Nonetheless, the more independent emphasis of what we typically call the ‘West’ and the more interdependent emphasis of other societies has been demonstrated ethnographically and experimentally in many places.”

As a result, hearing voices in a specific context may differ significantly for the person involved, they wrote. In America, the voices were an intrusion and a threat to one’s private world – the voices could not be controlled.

However, in India and Africa, the subjects were not as troubled by the voices – they seemed on one level to make sense in a more relational world. Still, differences existed between the participants in India and Africa; the former’s voice-hearing experience emphasized playfulness and sex, whereas the latter more often involved the voice of God.

The religiosity or urban nature of the culture did not seem to be a factor in how the voices were viewed, Luhrmann said.

“Instead, the difference seems to be that the Chennai (India) and Accra (Ghana) participants were more comfortable interpreting their voices as relationships and not as the sign of a violated mind,” the researchers wrote.

Tanya Luhrmann, hearing voices in Accra and Chennai
by Greg Downey, Neuroanthropology

local theory of mind—the features of perception, intention, and inference that the community treats as important—and local practices of mental cultivation will affect both the kinds of unusual sensory experiences that individuals report and the frequency of those experiences. Hallucinations feel unwilled. They are experienced as spontaneous and uncontrolled. But hallucinations are not the meaningless biological phenomena they are understood to be in much of the psychiatric literature. They are shaped by explicit and implicit learning around the ways that people pay attention with their senses. This is an important anthropological finding because it demonstrates that cultural ideas and practices can affect mental experience so deeply that they lead to the override of ordinary sense perception.

How Universal Is The Mind?
by Salina Golonka, Notes from Two Scientific Psychologists

To the extent that you agree that the modern conception of “cognition” is strongly related to the Western, English-speaking view of “the mind”, it is worth asking what cognitive psychology would look like if it had developed in Japan or Russia. Would text-books have chapter headings on the ability to connect with other people (kokoro) or feelings or morality (dusa) instead of on decision-making and memory? This possibility highlights the potential arbitrariness of how we’ve carved up the psychological realm – what we take for objective reality is revealed to be shaped by culture and language.

A puppet is a magical object. It is not a toy, is it? Here they see it as puppet theatre, as puppets for kids. But it’s just not like that. These native tribes — in Africa or Oceania, etc. — the shamans use puppets in communication not only with the upper world, with the gods, but even in relation when they treat a sick person. Those shamans, when they dress as some demon or some deity, they incarnate genuinely. They are either the totem animal or the demon. (via Matt Cardin)

The Chomsky Problem

Somehow I’ve ended up reading books on linguistics.

It started years ago with my reading books by such thinkers as E. R. Dodds and Julian Jaynes. Their main focus was on language usage in the ancient world. For entirely different reasons, I ended up interested in Daniel L. Everett, who became famous for his study of the Pirahã, an Amazonian tribe with a unique culture and language. A major figure I have had an interest in for a long time, Noam Chomsky, is also in the linguistics field, but I had never previously been interested in his linguistic writings.

It turns out that Everett and Chomsky are on two sides of the central debate within linguistics. That debate has overshadowed all other issues in the field since what is known as the cognitive revolution. I was peripherally aware of this, but some recent books have forced me to try to make sense of it. Two books I read, though, come at the debate from an entirely different angle.

The first book I read isn’t one I’d recommend: The Kingdom of Speech by Tom Wolfe. I’ve never looked at much of his writing, despite having seen his books around for decades. The only prior book I even opened was The Electric Kool-Aid Acid Test, a catchy title if there ever was one. Maybe he is getting old enough that he isn’t as great a writer as he once was. I don’t know. This latest publication wasn’t that impressive, even though I think I understood and agreed with the central conclusion of his argument, posed as it was as a confused, angry rant.

It’s possible that such a book might serve a purpose, if reading it led one to read better books on the topic. Tom Wolfe does have a journalistic flair about him that makes the debate seem entertaining to those who might otherwise find it boring — a melodramatic clashing of minds and ideas, sometimes a battle of wills with charisma winning the day. His portrayal of Chomsky definitely gets one thinking, but I wasn’t quite sure what to think of it. Fortunately, another book by an entirely different kind of author, Chris Knight’s Decoding Chomsky, takes a similar view of Chomsky’s linguistics career and does so with more scholarly care.

Both books helped me put my finger on something that has been bothering me about Chomsky. Like Knight, I highly respect Chomsky’s political activism and his being a voice for truth and justice. Yet there was a disconnect I sensed. I remember being disappointed by a video I saw of him being asked what should be done; his response was that he couldn’t tell anyone what to do and that everyone had to figure it out for themselves. The problem is that no one has ever figured out any major problem by themselves in all of human existence. Chomsky knows full well the challenges we face and still, when push comes to shove, the best he has to offer is to tell people to vote for the Democratic presidential candidate once again. That is plain depressing.

Knight gives one possible explanation for why that disconnection exists and why it matters. It’s not just a disconnection. After reading Knight’s book, I came to the conclusion that there is a dissociation involved, a near complete divide within Chomsky’s psyche. Because of his career and his activism, he felt compelled to split himself in two. He admits that this is what he has done and states that he has a remarkable talent for doing so, but he doesn’t seem to grasp the potentially severe consequences. Knight shows that Chomsky should understand this, as it relates to key social problems Chomsky has written about involving the disconnects of the knowing mind — between what we know, what we think we know, what we don’t know, and what we don’t know we know. It relates to Knight’s discussion of Orwell’s problem and Plato’s problem:

He shows no appetite for dwelling on contradictions: ‘Plato’s problem . . . is to explain how we know so much, given that the evidence available to us is so sparse. Orwell’s problem is to explain why we know and understand so little, even though the evidence available to us is so rich.’[36]

How do we know so little? That’s Orwell’s problem. How do we know so much? That’s Plato’s. Chomsky makes no attempt to reconcile these two problems, leaving the contradiction between their flatly opposed assumptions unresolved. Which problem is chosen depends on who is speaking, whether activist or scientist. Chomsky’s ‘two problems’ seem not only different but utterly unconnected with one another, as if to deliberately illustrate the gulf between the two compartments of his brain.

I’m not sure I fully understand what this division is and what the fundamental issue might be. I do sense how this goes far beyond Chomsky and linguistics. Knight points out that this kind of splitting is common in academia. I’d go further. It is common throughout our society.

Dissociation is not an unusual response, but when taken to extremes the results can be problematic. An even more extreme example than that of Chomsky, as used by Derrick Jensen, is the Nazi doctors who experimented on children and then went home to play with their own children. The two parts of their lives never crossed, neither in their experience nor in their minds. This is something most people learn to do, if never to such a demented degree. Our lives become splintered in endless ways, a near inevitability in such a large complex society as this. Our society maybe couldn’t operate without such dissociation, a possibility that concerns me.

This brings my mind back around to the more basic problem of linguistics itself. What is linguistics a study of, and toward what end? That relates to a point Knight makes, arguing that Chomsky has split theory from meaning, science from humanity. Between the Pentagon-funded researcher and the anti-Pentagon anarchist, the twain shall never meet. Two people live in Chomsky’s mind, and they are fundamentally opposed, according to Knight. Maybe there is something to this.

Considering the larger-than-life impact Chomsky has had on the linguistics field, what does this mean for our understanding of our own humanity? Why has the Pentagon backed Chomsky’s side and what do they get for their money?

Blue on Blue

“Abstract words are ancient coins whose concrete images in the busy give-and-take of talk have worn away with use.”
~ Julian Jaynes, The Origin of Consciousness in the
Breakdown of the Bicameral Mind

“This blue was the principle that transcended principles. This was the taste, the wish, the Binah that understands, the dainty fingers of personality and the swirling fingerprint lines of individuality, this sigh that returns like a forgotten and indescribable scent that never dies but only you ever knew, this tingle between familiar and strange, this you that never there was word for, this identifiable but untransmittable sensation, this atmosphere without reason, this illicit fairy kiss for which you are more fool than sinner, this only thing that God and Satan mistakenly left you for your own and which both (and everyone else besides) insist to you is worthless— this, your only and invisible, your peculiar— this secret blue.”
~ Quentin S. Crisp, Blue on Blue

Perception is as much cognition as sensation. Colors don’t exist in the world; they are our brain’s way of processing light waves detected by the eyes. Someone unable to see from birth will never be able to see colors normally, even if they gain sight as an adult. The brain has to learn how to see the world, and that is a process that happens primarily in infancy and childhood.

Radical questions follow from this insight. Do we experience blue, forgiveness, individuality, etc., before our culture has the language for it? And, conversely, does the language we use and how we use it indicate our actual experience? Or does it filter and shape experience? Did the ancients lack not only perceived blueness but also individuated/interiorized consciousness and artistic perspective because they had no way of communicating and expressing them? If they possessed such things as their human birthright, why did they not communicate them in their texts and show them in their art?

The most ancient people would refer to the sky as black. Some isolated people in more recent times have also been observed offering this same description. This apparently isn’t a strange exception. Guy Deutscher mentions that, in an informal color experiment, his young daughter once pointed to the “pitch-black sky late at night” and declared it blue—that was at the age of four, long after having learned the color names for blue and black. She had the language to make the distinction and yet she made a similar ‘mistake’ as some isolated island people. How could that be? Aren’t ‘black’ and ‘blue’ obviously different?

The ancients described physical appearances in some ways that seem bizarre to the modern sensibility. Homer says the sea appears something like wine, and so do sheep. Or else the sea is violet, just as are oxen and iron. Even more strangely, green is the color of honey and the color human faces turn under emotional distress. Yet nowhere in the ancient world is anything blue, for no word for it existed. Things that seem blue to us are either green, black, or simply dark in ancient texts.

It has been argued that Homer’s language, such as his word for ‘bronze’, might not have referred to color at all. But that just adds to the strangeness. We can’t determine what colors he might have been referring to, or even whether he was describing colors at all. There weren’t abstractly generalized color terms exclusively dedicated to colors; the same words also described other physical features, psychological experiences, and symbolic values. This might imply that synesthesia once was a more common experience, related to the greater capacity preliterate individuals had for memorizing vast amounts of information (see Knowledge and Power in Prehistoric Societies by Lynne Kelly).

The paucity and confusion of ancient color language indicates color wasn’t perceived as all that significant, to the degree it was consciously perceived at all, at least not in the way we moderns think about it. Color hue might not have seemed all that relevant in an ancient world that was mostly lacking artificially colored objects and entirely lacking bright garden flowers. Besides the ancient Egyptians, no one in the earliest civilizations had developed blue pigment, and hence no word to describe it. Blue is a rare color in nature. Even water and sky are rarely a bright clear blue, when blue at all.

This isn’t just about color. There is something extremely bizarre going on, according to what we moderns assume to be the case about the human mind and perception.

Consider the case of the Pirahã, as studied by Daniel L. Everett (a man who personally understands the power of their cultural worldview). The Pirahã have no color terms, not as single words, although they are able to describe colors using multiple words and concrete comparisons—such as red described as being like blood, or green as like not yet ripe. Of course, they’ve been in contact with non-Pirahã for a while now, and so no one knows how they would’ve talked about colors before interaction with outsiders.

From a Western perspective, there are many other odd things about the Pirahã. Their language does not fit the expectations of what many have thought universal to all human language. They have no terms for numbers and counting, as well as no “quantifiers like all, each, every, and so on” (Everett, Don’t Sleep, There Are Snakes, p. 119). Originally, they had no pronouns, and the pronouns they borrowed from other languages are used limitedly. They refer to ‘say’ in place of ‘think’, which makes one wonder what this indicates about their experience—is their thought an act of speaking?

Along with lacking ancestor worship, they don’t even have words to refer to family members they never personally knew. Also, there are no creation stories or myths or fiction, nor any apparent notion of the world having been created or of another supernatural world existing. They don’t think in those terms nor, one might presume, perceive reality in those terms. They are epistemological agnostics about anything that neither they nor someone they personally know has experienced, and their language is extremely precise in knowledge claims, making early Western philosophers seem simpleminded in comparison. Everett was put in the unfortunate position of having tried to convert them to Christianity, but instead they converted him to atheism. Yet the Pirahã live in a world they perceive as filled with spirits. These aren’t otherworldly spirits. They are very much in this world, and when a Pirahã speaks as a spirit, they are that spirit. To put it another way, the world is full of diverse and shifting selves.

Color terms refer to abstract unchanging categories, the very thing that seems least relevant to the Pirahã. They favor a subjective mentality, but that doesn’t mean they possess a subjective self similar to that of Western culture. Like many hunter-gatherers, they have a fluid sense of identity that changes along with their names, the former self treated as no longer existing whatsoever, just gone. There is no evidence of belief in a constant self that would survive death, as there is no belief in gods nor in a heaven and hell. Instead of being obsessed with what is beyond, they are endlessly fascinated by what is at the edge of experience, what appears and disappears. In Cultural Constraints on Grammar and Cognition in Pirahã, Everett explains this:

“After discussions and checking of many examples of this, it became clearer that the Pirahã are talking about liminality—situations in which an item goes in and out of the boundaries of their experience. This concept is found throughout Pirahã culture. Pirahã’s excitement at seeing a canoe go around a river bend is hard to describe; they see this almost as traveling into another dimension. It is interesting, in light of the postulated cultural constraint on grammar, that there is an important Pirahã term and cultural value for crossing the border between experience and nonexperience.”

To speak of colors is to speak of particular kinds of perceptions and experiences. Pirahã culture is practically incomprehensible to us, as the Pirahã represent an alien view of the world. Everett, in concluding, writes:

“Pirahã thus provides striking evidence for the influence of culture on major grammatical structures, contradicting Newmeyer’s (2002:361) assertion (citing “virtually all linguists today”), that “there is no hope of correlating a language’s gross grammatical properties with sociocultural facts about its speakers.” If I am correct, Pirahã shows that gross grammatical properties are not only correlated with sociocultural facts but may be determined by them.”

Even so, Everett is not arguing for a strong Whorfian position of linguistic determinism. Then again, Vyvyan Evans states that not even Benjamin Lee Whorf made this argument. In Language, Thought and Reality, Whorf wrote (as quoted by Evans in The Language Myth):

“The tremendous importance of language cannot, in my opinion, be taken to mean necessarily that nothing is back of it of the nature of what has traditionally been called ‘mind’. My own studies suggest, to me, that language, for all its kingly role, is in some sense a superficial embroidery upon deeper processes of consciousness, which are necessary before any communication, signalling, or symbolism whatsoever can occur.”

Anyway, Everett observed that the Piraha demonstrated a pattern in how they linguistically treated certain hues of color. It’s just that there was much diversity and complexity in how they described colors (a dark brown object being described differently than a dark-skinned person) and no consistency across Piraha speakers in which phrases they’d use to describe which colors. Still, like any other humans, they had the capacity for color perception, whether or not their color cognition matches that of other cultures.

To emphasize the point, the following is a similar example, as presented by Vyvyan Evans in The Language Myth (pp. 207-8):

“The colour system in Yélî Dnye has been studied extensively by linguistic anthropologist Stephen Levinson. 38 Levinson argues that the lesson from Rossel Island is that each of the following claims made by Berlin and Kay is demonstrably false:

  • Claim 1: All languages have basic colour terms
  • Claim 2: The colour spectrum is so salient a perceptual field that all cultures must systematically and exhaustively name the colour space
  • Claim 3: For those basic colour terms that exist in any given language, there are corresponding focal colours – there is an ideal hue that is the prototypical shade for a given basic colour term
  • Claim 4: The emergence of colour terms follows a universal evolutionary pattern

“A noteworthy feature of Rossel Island culture is this: there is little interest in colour. For instance, there is no native artwork or handiwork in colour. The exception to this is hand-woven patterned baskets, which are usually uncoloured, or, if coloured, are black or blue. Moreover, the Rossel language doesn’t have a word that corresponds to the English word colour: the domain of colour appears not to be a salient conceptual category independent of objects. For instance, in Yélî, it is not normally possible to ask what colour something is, as one can in English. Levinson reports that the equivalent question would be: U pââ ló nté? This translates as “Its body, what is it like?” Furthermore, colours are not usually associated with objects as a whole, but rather with surfaces.”

Evans goes into greater detail. Suffice it to say, he makes a compelling argument that this example contradicts and falsifies the main claims of the conventional theory, specifically that of Berlin and Kay. This culture defies expectations. It’s one of the many exceptions that appear to disprove the hypothetical rule.

Part of the challenge is that we can’t study other cultures as neutral observers. Researchers end up influencing the cultures they study, or else simply projecting their own cultural biases onto them and interpreting the results accordingly. Even the tests used to analyze color perception across cultures are themselves culturally biased. They don’t just measure how people divide up hues. In the process of being tested, the design of the test teaches the subjects a particular way of thinking about color perception. The test can’t tell us how people think about colors prior to the test itself. And obviously, even if the test could accomplish this impossible feat, we have no way of traveling back in time to apply it to ancient people.

We are left with a mystery and no easy way to explore it.

* * *

Here are a few related posts of mine. And below that are other sources of info, including a video at the very bottom.

Radical Human Mind: From Animism to Bicameralism and Beyond

Folk Psychology, Theory of Mind & Narrative

Self, Other, & World

Does Your Language Shape How You Think?
by Guy Deutscher

SINCE THERE IS NO EVIDENCE that any language forbids its speakers to think anything, we must look in an entirely different direction to discover how our mother tongue really does shape our experience of the world. Some 50 years ago, the renowned linguist Roman Jakobson pointed out a crucial fact about differences between languages in a pithy maxim: “Languages differ essentially in what they must convey and not in what they may convey.” This maxim offers us the key to unlocking the real force of the mother tongue: if different languages influence our minds in different ways, this is not because of what our language allows us to think but rather because of what it habitually obliges us to think about. […]

For many years, our mother tongue was claimed to be a “prison house” that constrained our capacity to reason. Once it turned out that there was no evidence for such claims, this was taken as proof that people of all cultures think in fundamentally the same way. But surely it is a mistake to overestimate the importance of abstract reasoning in our lives. After all, how many daily decisions do we make on the basis of deductive logic compared with those guided by gut feeling, intuition, emotions, impulse or practical skills? The habits of mind that our culture has instilled in us from infancy shape our orientation to the world and our emotional responses to the objects we encounter, and their consequences probably go far beyond what has been experimentally demonstrated so far; they may also have a marked impact on our beliefs, values and ideologies. We may not know as yet how to measure these consequences directly or how to assess their contribution to cultural or political misunderstandings. But as a first step toward understanding one another, we can do better than pretending we all think the same.

Why Isn’t the Sky Blue?
by Tim Howard, Radiolab

Is the Sky Blue?
by Lisa Wade, PhD, Sociological Images

Even things that seem objectively true may only seem so if we’ve been given a framework with which to see it; even the idea that a thing is a thing at all, in fact, is partly a cultural construction. There are other examples of this phenomenon. What we call “red onions” in the U.S., for another example, are seen as blue in parts of Germany. Likewise, optical illusions that consistently trick people in some cultures — such as the Müller-Lyer illusion — don’t often trick people in others.

Could our ancestors see blue?
by Ellie Zolfagharifard, Daily Mail

But it’s not just about lighting conditions or optical illusions – evidence is mounting that until we have a way to describe something, we may not see that it’s there.

Fathoming the wine-dark sea
by Christopher Howse, The Spectator

It wasn’t just the ‘wine-dark sea’. That epithet oinops, ‘wine-looking’ (the version ‘wine-dark’ came from Andrew Lang’s later translation) was applied both to the sea and to oxen, and it was accompanied by other colours just as nonsensical. ‘Violet’, ioeis, (from the flower) was used by Homer of the sea too, but also of wool and iron. Chloros, ‘green’, was used of honey, faces and wood. By far the most common colour words in his reticent vocabulary were black (170 times) and white (100), followed distantly by red (13).

What could account for this alien colour-sense? It wasn’t that Homer (if Homer existed) was blind, for there are parallel usages in other Greek authors.

A Winelike Sea
by Caroline Alexander, Lapham’s Quarterly

The image Homer hoped to conjure with his winelike sea greatly depended upon what wine meant to his audience. While the Greeks likely knew of white wine, most ancient wine was red, and in the Homeric epics, red wine is the only wine specifically described. Drunk at feasts, poured onto the earth in sacred rituals, or onto the ashes around funeral pyres, Homeric wine is often mélas, “dark,” or even “black,” a term with broad application, used of a brooding spirit, anger, death, ships, blood, night, and the sea. It is also eruthrós, meaning “red” or the tawny-red hue of bronze; and aíthops, “bright,” “gleaming,” a term also used of bronze and of smoke in firelight. While these terms notably have more to do with light, and the play of light, than with color proper, Homeric wine was clearly dark and red and would have appeared especially so when seen in the terracotta containers in which it was transported. “Winelike sea” cannot mean clear seawater, nor the white splash of sea foam, nor the pale color of a clear sea lapping the shallows of a sandy shore. […]

Homer’s sea, whether háls, thálassa, or póntos, is described as misty, darkly troubled, black-dark, and grayish, as well as bright, deep, clashing, tumultuous, murmuring, and tempestuous—but it is never blue. The Greek word for blue, kuáneos, was not used of the sea until the late sixth or early fifth century BC, in a poem by the lyric poet Simonides—and even here, it is unclear if “blue” is strictly meant, and not, again, “dark”:

the fish straight up from the
dark/blue water leapt
at the beautiful song

After Simonides, the blueness of kuáneos was increasingly asserted, and by the first century, Pliny the Elder was using the Latin form of the word, cyaneus, to describe the cornflower, whose modern scientific name, Centaurea cyanus, still preserves this lineage. But for Homer kuáneos is “dark,” possibly “glossy-dark” with hints of blue, and is used of Hector’s lustrous hair, Zeus’ eyebrows, and the night.

Ancient Greek words for color in general are notoriously baffling: In The Iliad, “chlorós fear” grips the armies at the sound of Zeus’ thunder. The word, according to R. J. Cunliffe’s Homeric lexicon, is “an adjective of color of somewhat indeterminate sense” that is “applied to what we call green”—which is not the same as saying it means “green.” It is also applied “to what we call yellow,” such as honey or sand. The pale green, perhaps, of vulnerable shoots struggling out of soil, the sickly green of men gripped with fear? […]

Rather than being ignorant of color, it seems that the Greeks were less interested in and attentive to hue, or tint, than they were to light. As late as the fourth century BC, Plato named the four primary colors as white, black, red, and bright, and in those cases where a Greek writer lists colors “in order,” they are arranged not by the Newtonian colors of the rainbow—red, orange, yellow, green, blue, indigo, violet—but from lightest to darkest. And The Iliad contains a broad, specialized vocabulary for describing the movement of light: argós meaning “flashing” or “glancing white”; aiólos, “glancing, gleaming, flashing,” or, according to Cunliffe’s Lexicon, “the notion of glancing light passing into that of rapid movement,” and the root of Hector’s most defining epithet, koruthaíolos—great Hector “of the shimmering helm.” Thus, for Homer, the sky is “brazen,” evoking the glare of the Aegean sun and more ambiguously “iron,” perhaps meaning “burnished,” but possibly our sense of a “leaden” sky. Significantly, two of the few unambiguous color terms in The Iliad, and which evoke the sky in accordance with modern sensibilities, are phenomena of light: “Dawn robed in saffron” and dawn shining forth in “rosy fingers of light.”

So too, on close inspection, Homeric terms that appear to describe the color of the sea, have more to do with light. The sea is often glaukós or mélas. In Homer, glaukós (whence glaucoma) is color neutral, meaning “shining” or “gleaming,” although in later Greek it comes to mean “gray.” Mélas (whence melancholy) is “dark in hue, dark,” sometimes, perhaps crudely, translated as “black.” It is used of a range of things associated with water—ships, the sea, the rippled surface of the sea, “the dark hue of water as seen by transmitted light with little or no reflection from the surface.” It is also, as we have seen, commonly used of wine.

So what color is the sea? Silver-pewter at dawn; gray, gray-blue, green-blue, or blue depending on the particular day; yellow or red at sunset; silver-black at dusk; black at night. In other words, no color at all, but rather a phenomenon of reflected light. The phrase “winelike,” then, had little to do with color but must have evoked some attribute of dark wine that would resonate with an audience familiar with the sea—with the póntos, the high sea, that perilous path to distant shores—such as the glint of surface light on impenetrable darkness, like wine in a terracotta vessel. Thus, when Achilles, “weeping, quickly slipping away from his companions, sat/on the shore of the gray salt sea,” stretches forth his hands toward the oínopa pónton, he looks not on the enigmatic “wine-dark sea,” but, more explicitly, and possibly with more weight of melancholy, on a “sea as dark as wine.”

Ancient Greek Color Vision
by Ananda Triulzi

In his writings Homer surprises us by his use of color. His color descriptive palette was limited to metallic colors, black, white, yellowish green and purplish red, and those colors he often used oddly, leaving us with some questions as to his actual ability to see colors properly (1). He calls the sky “bronze” and describes the sea and sheep as the color of wine, and he applies the adjective chloros (meaning green with our understanding) to honey and to a nightingale (2). Chloros is not the only color that Homer uses in this unusual way. He also uses kyanos oddly, “Hector was dragged, his kyanos hair was falling about him” (3). Here it would seem, to our understanding, that Hector’s hair was blue, as we associate the term kyanos with the semi-precious stone lapis lazuli; in our thinking kyanos means cyan (4). But we cannot assume that Hector’s hair was blue; rather, in light of the way that Homer consistently uses color adjectives, we must think about his meaning: did he indeed see honey as green, did he not see the ocean as blue, and how does his perception of color reflect on himself, his people, and his world?

Homer’s odd color description usage was a cultural phenomenon and not simply color blindness on his part: Pindar describes the dew as chloros, and in Euripides chloros describes blood and tears (5). Empedocles, one of the earliest Ancient Greek color theorists, described color as falling into four areas: light or white, black or dark, red, and yellow; Xenophanes described the rainbow as having three bands of color: purple, green/yellow, and red (6). These colors are fairly consistent with the four colors used by Homer in his color description, which leads us to the conclusion that the Ancient Greeks saw color only in terms of Empedocles’ colors; in some way they lacked the ability to perceive the whole color spectrum. […]

This inability to perceive something because of linguistic restriction is called linguistic relativity (7). Because the Ancient Greeks were not really conscious of seeing, and did not have the words to describe what they unconsciously saw, they simply did not see the full spectrum of color, they were limited by linguistic relativity.

The color spectrum aside, it remains to explain the loose and unconventional application of Homer’s and others’ limited color descriptions; for an answer we look to the work of Eleanor Irwin. In her work, Irwin suggests that besides perceiving less chromatic distinction, the Ancient Greeks perceived less division between color, texture, and shadow; chroma may have been difficult for them to isolate (8). For the Ancient Greeks, the term chloros has been suggested to mean moistness, fluidity, freshness and living (9). It also seems likely that Ancient Greek perception of color was influenced by the qualities that they associated with colors; for instance, the different temperaments being associated with colors probably affected the way they applied color descriptions to things. They didn’t simply see color as a surface, they saw it as a spirited thing, and the word to describe it was often fittingly applied as an adjective meaning something related to the color itself but different from the simplicity of a refined color.

The Wine-Dark Sea: Color and Perception in the Ancient World
by Erin Hoffman

Homer’s descriptions of color in The Iliad and The Odyssey, taken literally, paint an almost psychedelic landscape: in addition to the sea, sheep were also the color of wine; honey was green, as were the fear-filled faces of men; and the sky is often described as bronze. […]

The conspicuous absence of blue is not limited to the Greeks. The color “blue” appears not once in the New Testament, and its appearance in the Torah is questioned (there are two words argued to be types of blue, sappir and tekeleth, but the latter appears to be arguably purple, and neither color is used, for instance, to describe the sky). Ancient Japanese used the same word for blue and green (青 Ao), and even modern Japanese describes, for instance, thriving trees as being “very blue,” retaining this artifact (青々とした: meaning “lush” or “abundant”). […]

Blue certainly existed in the world, even if it was rare, and the Greeks must have stumbled across it occasionally even if they didn’t name it. But the thing is, if we don’t have a word for something, it turns out that to our perception—which becomes our construction of the universe—it might as well not exist. Specifically, neuroscience suggests that it might not just be “good or bad” for which “thinking makes it so,” but quite a lot of what we perceive.

The malleability of our color perception can be demonstrated with a simple diagram, shown here as figure six, “Afterimages”. The more our photoreceptors are exposed to the same color, the more fatigued they become, eventually giving out entirely and creating a reversed “afterimage” (yellow becomes blue, red becomes green). This is really just a parlor trick of sorts, and more purely physical, but it shows how easily shifted our vision is; other famous demonstrations like this selective attention test (its name gives away the trick) emphasize the power our cognitive functions have to suppress what we see. Our brains are pattern-recognizing engines, built around identifying things that are useful to us and discarding the rest of what we perceive as meaningless noise. (And a good thing that they do; deficiencies in this filtering, called sensory gating, are some of what cause neurological dysfunctions such as schizophrenia and autism.)

This suggests the possibility that not only did Homer lack a word for what we know as “blue”—he might never have perceived the color itself. To him, the sky really was bronze, and the sea really was the same color as wine. And because he lacked the concept “blue”—therefore its perception—to him it was invisible, nonexistent. This notion of concepts and language limiting cognitive perception is called linguistic relativism, and is typically used to describe the ways in which various cultures can have difficulty recalling or retaining information about objects or concepts for which they lack identifying language. Very simply: if we don’t have a word for it, we tend to forget it, or sometimes not perceive it at all. […]

So, if we’re all synesthetes, and our minds are extraordinarily plastic, capable of reorienting our entire perception around the addition of a single new concept (“there is a color between green and violet,” “schizophrenia is much more common than previously believed”), the implications of Homer’s wine-dark sea are rich indeed.

We are all creatures of our own time, our realities framed not by the limits of our knowledge but by what we choose to perceive. Do we yet perceive all the colors there are? What concepts are hidden from us by the convention of our language? When a noblewoman of Syracuse looked out across the Mare Siculum, did she see waves of Bacchanalian indigo beneath a sunset of hammered bronze? If a seagull flew east toward Thapsus, did she think of Venus and the fall of Troy?

The myriad details that define our everyday existence may define also the boundaries of our imagination, and with it our dreams, our ethics. We are lenses moving through time, beings of color and shadow.
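
The afterimage demonstration Hoffman mentions has a tidy mechanistic gloss in opponent-process terms, which is why the rebound pairs are yellow/blue and red/green rather than simple inversions of RGB values. Here is a minimal sketch of that idea (my own toy model, not anything from Hoffman’s article; the channel formulas are crude illustrative approximations):

```python
# Toy opponent-process model of afterimages. Fatigued photoreceptors rebound,
# which is modeled here by negating the two chromatic opponent channels.
def opponent_channels(r, g, b):
    """RGB in [0, 1] -> (luminance, red-green, blue-yellow), a crude opponent code."""
    luminance = (r + g + b) / 3
    red_green = r - g                 # positive = reddish, negative = greenish
    blue_yellow = b - (r + g) / 2     # positive = bluish, negative = yellowish
    return luminance, red_green, blue_yellow

def afterimage_hue(r, g, b):
    """Name the hue of the rebound afterimage by flipping the chromatic channels."""
    _, red_green, blue_yellow = opponent_channels(r, g, b)
    red_green, blue_yellow = -red_green, -blue_yellow
    if abs(blue_yellow) >= abs(red_green):
        return "bluish" if blue_yellow > 0 else "yellowish"
    return "reddish" if red_green > 0 else "greenish"

print(afterimage_hue(1, 1, 0))  # staring at yellow -> bluish afterimage
print(afterimage_hue(1, 0, 0))  # staring at red -> greenish afterimage
```

Note that a naive RGB inversion would pair red with cyan, not green; it takes something like opponent channels to get the yellow-blue and red-green pairings the excerpt describes.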

Evolution of the Color Blue
by Dov Michaeli MD, PhD, The Doctor Weighs In

Why were black, white, and red the first colors to be perceived by our forefathers? The evolutionary explanation is quite straightforward: ancient humans had to distinguish between night and day. And red is important for recognizing blood and danger. Even today, in us moderns, the color red causes an increase in skin galvanic response, a sign of tension and alarm. Green and yellow entered the vocabulary as the need to distinguish ripe fruit from unripe, grasses that are green from grasses that are wilting, etc. But what is the need for naming the color blue? Blue fruits are not very common, and the color of the sky is not really vital for survival.

The crayola-fication of the world: How we gave colors names, and it messed with our brains (part I)
by Aatish Bhatia, Empirical Zeal

Some languages have just three basic colors, others have 4, 5, 6, and so on. There’s even a debate as to whether the Pirahã tribe of the Amazon have any specialized color words at all! (If you ask a Pirahã tribe member to label something red, they’ll say that it’s blood-like).

But there’s still a pattern hidden in this diversity. […] You start with a black-and-white world of darks and lights. There are warm colors, and cool colors, but no finer categories. Next, the reds and yellows separate away from white. You can now have a color for fire, or the fiery color of the sunset. There are tribes that have stopped here. Further down, blues and greens break away from black. Forests, skies, and oceans now come of their own in your visual vocabulary. Eventually, these colors separate further. First, red splits from yellow. And finally, blue from green.
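
The sequence Bhatia describes is easier to see laid out step by step. The sketch below is just my own compact restatement of the pattern from the excerpt above, not code or data from the post itself:

```python
# The typical order in which color vocabularies grow, per the excerpt above.
# Illustrative only; individual languages stop at different stages.
color_term_stages = [
    ("darks vs. lights", "a black-and-white world of darks and lights"),
    ("warm colors split from white", "reds and yellows separate away from white"),
    ("cool colors split from black", "blues and greens break away from black"),
    ("red vs. yellow", "red splits from yellow"),
    ("blue vs. green", "finally, blue separates from green"),
]

for stage, (split, gloss) in enumerate(color_term_stages, start=1):
    print(f"Stage {stage}: {split} -- {gloss}")
```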

The crayola-fication of the world: How we gave colors names, and it messed with our brains (part II)
by Aatish Bhatia, Empirical Zeal

The researchers found that there is a real, measurable difference in how we perform on these two tasks. In general, it takes less time to identify that odd blue square compared to the odd green one. This makes sense to anyone who’s ever tried looking for a tennis ball in the grass. It’s not that hard, but I’d rather the ball be blue. In one case you are jumping categories (blue versus green), and in the other, staying within a category (green versus green).

However, and this is where things start to get a bit strange, this result only holds if the differently colored square was in the right half of the circle. If it was in the left half (as in the example images above), then there’s no difference in reaction times – it takes just as long to spot the odd blue as the odd green. It seems that color categories only matter in the right half of your visual field! […]

The crucial point is that everything that we see in the right half of our vision is processed in the left hemisphere of our brain, and everything we see in the left half is processed by the right hemisphere. And for most of us, the left brain is stronger at processing language. So perhaps the language savvy half of our brain is helping us out. […]

But how do we know that language is the key here? Back to the previous study. The researchers repeated the color circle experiment, but this time threw in a verbal distraction. The subjects were asked to memorize a word before each color test. The idea was to keep their language circuits distracted. And at the same time, other subjects were shown an image to memorize, not a word. In this case, it’s a visual distraction, and the language part of the brain needn’t be disturbed.

They found that when you’re verbally distracted, it suddenly becomes harder to separate blue from green (you’re slower at straddling color categories). In fact, the results showed that people found this more difficult than separating two shades of green. However, if the distraction is visual, not verbal, things are different. It becomes easy to spot the blue among green, so you’re faster at straddling categories.

All of this is only true for your left brain. Meanwhile, your right brain is rather oblivious to these categories (until, of course, the left brain bothers to inform it). The conclusion is that language is somehow enhancing your left brain’s ability to discern different colors with different names. Cultural forces alter our perception in ever so subtle a way, by gently tugging our visual leanings in different directions.
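
The logic of these visual-search experiments boils down to one comparison. The sketch below is a toy illustration of that analysis (the trial records and reaction times are invented for the example; this is not the researchers’ data or code): mean reaction time for between-category versus within-category targets, split by visual field, where the studies report the between-category advantage mainly for the right visual field.

```python
from statistics import mean

# Hypothetical trials: (category_relation, visual_field, reaction_time_ms).
# Numbers are made up to mimic the reported pattern: a between-category
# advantage in the right visual field (left hemisphere), little or none on the left.
trials = [
    ("between", "right", 420), ("between", "right", 435),
    ("within",  "right", 470), ("within",  "right", 465),
    ("between", "left",  455), ("between", "left",  460),
    ("within",  "left",  458), ("within",  "left",  452),
]

for field in ("right", "left"):
    rts = {
        relation: mean(rt for rel, vf, rt in trials if rel == relation and vf == field)
        for relation in ("between", "within")
    }
    advantage = rts["within"] - rts["between"]  # positive = between-category advantage
    print(f"{field} visual field: between-category advantage = {advantage:.1f} ms")
```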

Color categories: Confirmation of the relativity hypothesis.
by Debi Roberson, Jules Davidoff, Ian R. L. Davies, & Laura R. Shapiro

In a category-learning paradigm, there was no evidence that Himba participants perceived the blue–green region of color space in a categorical manner. Like Berinmo speakers, they did not find this division easier to learn than an arbitrary one in the center of the green category. There was also a significant advantage for learning the dumbu–burou division over the yellow–green division. It thus appears that CP for color category boundaries is tightly linked to the linguistic categories of the participant.

Knowing color terms enhances recognition: Further evidence from English and Himba
by Julie Goldstein, Jules B. Davidoff, & Debi Roberson, JECP

Two experiments attempted to reconcile discrepant recent findings relating to children’s color naming and categorization. In a replication of Franklin and colleagues (Journal of Experimental Child Psychology, 90 (2005) 114–141), Experiment 1 tested English toddlers’ naming and memory for blue–green and blue–purple colors. It also found advantages for between-category presentations that could be interpreted as support for universal color categories. However, a different definition of knowing color terms led to quite different conclusions in line with the Whorfian view of Roberson and colleagues (Journal of Experimental Psychology: General, 133 (2004) 554–571). Categorical perception in recognition memory was now found only for children with a fuller understanding of the relevant terms. It was concluded that color naming can both underestimate and overestimate toddlers’ knowledge of color terms. Experiment 2 replicated the between-category recognition superiority found in Himba children by Franklin and colleagues for the blue–purple range. But Himba children, whose language does not have separate terms for green and blue, did not show an across-category advantage for that set; rather, they behaved like English children who did not know their color terms.

The Effects of Color Names on Color Concepts, or Like Lazarus Raised from the Tomb
by Chris, ScienceBlogs

It’s interesting that the Berinmo and Himba tribes have the same number of color terms, as well, because that rules out one possible alternative explanation of their data. It could be that as languages develop, they develop a more sophisticated color vocabulary, which eventually approximates the color categories that are actually innately present in our visual systems. We would expect, then, that two languages that are at similar levels of development (in other words, they both have the same number of color categories) would exhibit similar effects, but the speakers of the two languages remembered and perceived the colors differently. Thus it appears that languages do not develop towards any single set of universal color categories. In fact, Roberson et al. (2004) reported a longitudinal study that implies that exactly the opposite may be the case. 4 They found that children in the Himba tribe, and English-speaking children in the U.S., initially categorized color chips in a similar way, but as they grew older and more familiar with the color terms of their languages, their categorizations diverged and became more consistent with their color names. This is particularly strong evidence that color names affect color concepts.

Forget the dress; what color did early Israelites see when they looked up to the sky?
by David Streever, Episcopal Cafe

The children of the Himba were able to differentiate between many more shades of green than their English counterparts, but did not recognize the color blue as being distinct from green. The research found that the 11 basic English colors have no basis in the visual system, lending further credence to the linguistic theories of Deutscher, Geiger, Gladstone, and other academics.

Colour Categories as Cultural Constructs
by Jules Davidoff, Artbrain

This is a group of people in Namibia who were asked to do some color matching and similarity judgments for us. It’s a remote part of the world, but not quite so remote that somebody hasn’t got the t-shirt, but it’s pretty remote. That’s the sort of environment they live in, and these are the youngsters that I’m going to show you some particular data on. They are completely monolingual in their own language, which has a tremendous richness in certain types of terms, in cattle terms (I can’t talk about that now), but has a dramatic lack in color terms. They’ve only got five color terms. So all of the particular colors of the world, and this is an illustration which can go from white to black at the top, red to yellow, green, blue, purple, back to red again, if this was shown in terms of the whole colors of the spectrum, but they only have five terms. So they see the world as, perhaps differently than us, perhaps slightly plainer.

So we looked at these young children, and we showed them a navy blue color at the top and we asked them to point to the same color again from another group of colors. And those colors included the correct color, but of course sometimes the children made mistakes. What I want to show was that the English children and the Himba children, these people are the Himba of Northwest Namibia, start out from the same place, they have this undefined color space in which, at the beginning of the testing, T1, they make errors in choosing the navy blue, sometimes they’ll choose the blue, sometimes they’ll choose the black, sometimes they’ll choose the purple. Now the purple one, actually if you did a spectral analysis, the blue and the purple, the one on the right, are the closest. And as you can see, as the children got older, the most common error, both for English children and the Himba children, is the increase (that’s going up on the graph) of the purple mistakes.

But, their language, the Himba language, has the same word for blue as for black. We, of course, have the same word for the navy blue as the blue on the left, only as the children get older, three or four, the English children only ever confuse the navy blue with the blue on the left, whereas the Himba children confuse the navy blue with the black.

So, what’s happening? Someone asked yesterday whether the Sapir-Whorf Hypothesis had any currency. Well, if it has a little bit of currency, it has it certainly here, in that what is happening, because the names of colors mean different things in the different cultures, because blue and black are the same in the Himba language, the actual similarity does seem to have been altered in the pictorial register. So, the blues that we call blue, and the claim is that there is no natural category called blue, they were just sensations we want to group together, those natural categories don’t exist. But because we have constructed these categories, blues look more similar to us in the pictorial register, whereas to these people in Northwest Namibia, the blues and the blacks look more similar.

So, in brief, I’d like to further add more evidence or more claim that we are constructing the world of colors and in some way at least our memory structures do alter, to a modest extent at least, what we’re seeing.

Hues and views
A cross-cultural study reveals how language shapes color perception.
by Rachel Adelson, APA

Not only has no evidence emerged to link the 11 basic English colors to the visual system, but the English-Himba data support the theory that color terms are learned relative to language and culture.

First, for children who didn’t know color terms at the start of the study, the pattern of memory errors in both languages was very similar. Crucially, their mistakes were based on perceptual distances between colors rather than a given set of predetermined categories, arguing against an innate origin for the 11 basic color terms of English. The authors write that an 11-color organization may have become common because it efficiently serves cultures with a greater need to communicate more precisely. Still, they write, “even if [it] were found to be optimal and eventually adopted by all cultures, it need not be innate.”

Second, the children in both cultures didn’t acquire color terms in any particular, predictable order–such as the universalist idea that the primary colors of red, blue, green and yellow are learned first.

Third, the authors say that as both Himba and English children started learning their cultures’ color terms, the link between color memory and color language increased. Their rapid perceptual divergence once they acquired color terms strongly suggests that cognitive color categories are learned rather than innate, according to the authors.

The study also spotlights the power of psychological research conducted outside the lab, notes Barbara Malt, PhD, a cognitive psychologist who studies language and thought and also chairs the psychology department at Lehigh University.

“To do this kind of cross-cultural work at all requires a rather heroic effort, [which] psychologists have traditionally left to linguists and anthropologists,” says Malt. “I hope that [this study] will inspire more cognitive and developmental psychologists to go into the field and pursue these kinds of comparisons, which are the only way to really find out which aspects of perception and cognition are universal and which are culture or language specific.”

Humans didn’t even see the colour blue until modern times, research suggests
by Fiona MacDonald, Science Alert

Another study by MIT scientists in 2007 showed that native Russian speakers, who don’t have one single word for blue, but instead have a word for light blue (goluboy) and dark blue (siniy), can discriminate between light and dark shades of blue much faster than English speakers.

This all suggests that, until they had a word for it, it’s likely that our ancestors didn’t see blue at all. Or, more accurately, they probably saw it as we do now, but they never really noticed it.

Blue was the Last Color Perceived by Humans
by Nancy Loyan Schuemann, Mysterious Universe

MRI experiments confirm that people who process color through their verbal left brains, where the names of colors are accessed, recognize them more quickly. Language molds us into the image of the culture in which we are born.

Categorical perception of color is lateralized to the right hemisphere in infants, but to the left hemisphere in adults
by A. Franklin, G. V. Drivonikou, L. Bevis, I. R. L. Davies, P. Kay, & T. Regier, PNAS

Both adults and infants are faster at discriminating between two colors from different categories than two colors from the same category, even when between- and within-category chromatic separation sizes are equated. For adults, this categorical perception (CP) is lateralized; the category effect is stronger for the right visual field (RVF)–left hemisphere (LH) than the left visual field (LVF)–right hemisphere (RH). Converging evidence suggests that the LH bias in color CP in adults is caused by the influence of lexical color codes in the LH. The current study investigates whether prelinguistic color CP is also lateralized to the LH by testing 4- to 6-month-old infants. A colored target was shown on a differently colored background, and time to initiate an eye movement to the target was measured. Target background pairs were either from the same or different categories, but with equal target-background chromatic separations. Infants were faster at initiating an eye movement to targets on different-category than same-category backgrounds, but only for targets in the LVF–RH. In contrast, adults showed a greater category effect when targets were presented to the RVF–LH. These results suggest that whereas color CP is stronger in the LH than RH in adults, prelinguistic CP in infants is lateralized to the RH. The findings suggest that language-driven CP in adults may not build on prelinguistic CP, but that language instead imposes its categories on a LH that is not categorically prepartitioned.

Categorical perception of colour in the left and right visual field is verbally mediated: Evidence from Korean
by Debi Roberson, Hyensou Pak, & J. Richard Hanley

In this study we demonstrate that Korean (but not English) speakers show Categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is unique to Korean, these results are not consistent with a suggestion made by Drivonikou [Drivonikou, G. V., Kay, P., Regier, T., Ivry, R. B., Gilbert, A. L., Franklin, A. et al. (2007) Further evidence that Whorfian effects are stronger in the right visual field than in the left. Proceedings of the National Academy of Sciences 104, 1097–1102] that CP effects in the left visual field provide evidence for the existence of a set of universal colour categories. Dividing Korean participants into fast and slow responders demonstrated that fast responders show CP only in the right visual field while slow responders show CP in both visual fields. We argue that this finding is consistent with the view that CP in both visual fields is verbally mediated by the left hemisphere language system.

Linguistic Fossils of the Mind’s Eye
by Keith, UMMAGUMMA blog

The other, The Unfolding of Language (2005), deals with the actual evolution of language. […]

Yet, while erosion occurs there is also a creative force in the human development of language. That creativity is revealed in our unique capacity for metaphor. “…metaphor is the indispensable element in the thought-process of every one of us.” (page 117) “It transpired that metaphor is an essential tool of thought, an indispensable conceptual mechanism which allows us to think of abstract concepts in terms of simpler concrete things. It is, in fact, the only way we have of dealing with abstraction.” (page 142) […]

The use of what can be called ‘nouns’ and not just ‘things’ is a fairly recent occurrence in language, reflecting a shift in human experience. This is a ‘fossil’ of linguistics. “The flow from concrete to abstract has created many words for concepts that are no longer physical objects, but nonetheless behave like thing-words in the sentence. The resulting abstract concepts are no longer thing-words, but they inherit their distribution from the thing-words that gave rise to them. A new category of words has thus emerged…which we can now call ‘noun’.” (page 246)

The way language is used, its accepted uses by people through understood rules of grammar, is the residue of collective human experience. “The grammar of a language thus comes to code most compactly and efficiently those constructions that are used most frequently…grammar codes best what it does most often.” (page 261) This is centrally why I hold the grammar of language to be almost a sacred portal into human experience.

In the 2010 work, Deutscher’s emphasis shifts to why different languages reveal that humans actually experience life differently. We do not all feel and act the same way about the things of life. My opinion is that it is a mistake to believe “humanity” thinks, feels and experiences to a high degree of similarity. The fact is language shows that, as it diversified across the earth, humanity has a multitude of diverse ways of experiencing.

First of all, “…a growing body of reliable scientific research provides solid evidence that our mother tongue can affect how we think and perceive the world.” (page 7) […]

The author does not go as far as me, nor is he as blunt; I am interjecting much of my personal beliefs in here. Still, “…fundamental aspects of our thought are influenced by cultural conventions of our society, to a much greater extent than is fashionable to admit today….what we find ‘natural’ depends largely on the conventions we have been brought up on.” (page 233) There are clear echoes of Nietzsche in here.

The conclusion is that “habits of speech can create habits of mind.” So, language affects culture fundamentally. But, this is a reciprocal arrangement. Language changes due to cultural experience yet cultural experience is affected by language.

Guy Deutscher’s Through the Language Glass
by Stuart Hindmarsh, Philosophical Overview

In Through the Language Glass, Guy Deutscher addresses the question as to whether the natural language we speak will have an influence on our thought and our perception. He focuses on perceptions, and specifically the perceptions of colours and perceptions of spatial relations. He is very dismissive of the Sapir-Whorf hypothesis and varieties of linguistic relativity which would say that if the natural language we speak is of a certain sort then we cannot have certain types of concepts or experiences. For example, a proponent of this type of linguistic relativity might say that if your language does not have a word for the colour blue then you cannot perceive something as blue. Nonetheless, Deutscher argues that the natural language we speak will have some influence on how we think and see the world, giving several examples, many of which are fascinating. However, I believe that several of his arguments that dismiss views like the Sapir-Whorf hypothesis are based on serious misunderstandings.

The view that language is the medium in which conceptual thought takes place has a long history in philosophy, and this is the tradition out of which the Sapir-Whorf hypothesis was developed. […]

It is important to note that in this tradition the relation between language and conceptual thought is not seen as one in which the ability to speak a language is one capacity and the ability to think conceptually a completely separate faculty, and in which the first merely has a causal influence on the other. It is rather the view that the ability to speak a language makes it possible to think conceptually and that the ability to speak a language makes it possible to have perceptions of certain kinds, such as those in which what is perceived is subsumed under a concept. For example, it might be said that without language it is possible to see a rabbit but not possible to see it as a rabbit (as opposed to a cat, a dog, a squirrel, or any other type of thing). Thus conceptual thinking and perceptions of these types are seen not as separate from language and incidentally influenced by it but dependent on language and taking their general form from language. This does not mean that speech or writing must be taking place every time a person thinks in concepts or has these types of perception, though. To think that it must is a misunderstanding essentially the same as a common misinterpretation of Kant, which I will discuss in more detail in a later post.

While I take this to be the idea behind the Sapir-Whorf hypothesis, Deutscher evidently interprets that hypothesis as a very different kind of view. According to this view, the ability to speak a language is separate from the ability to think conceptually and from the ability to have the kinds of perceptions described above and it merely influences such thought and perception from without. Furthermore, it is not a relation in which language makes these types of thought and perception possible but one in which thought and perception are actually constrained by language. This interpretation runs through all of Deutscher’s criticisms of linguistic relativity. […]

Certainly many questionable assertions have been made based on the premise that language conditions the way that we think. Whorf apparently made spurious claims about Hopi conceptions of time. Today a great deal of dubious material is being written about the supposed influence of the internet and hypertext media on the way that we think. This is mainly inspired by Marshall McLuhan but generally lacking his originality and creativity. Nevertheless, there have been complex and sophisticated versions of the idea that the natural language that we speak conditions our thought and our perceptions, and these deserve serious attention. There are certainly more complex and sophisticated versions of these ideas than the crude caricature that Deutscher sets up and knocks down. Consequently, I don’t believe that he has given convincing reasons for seeing the relations between language and thought as limited to the types of relations in the examples he gives, interesting though they may be. For instance, he notes that the aboriginal tribes in question would have to always keep in mind where the cardinal directions were and consequently in this sense the language would require them to think a certain way.

The History and Science Behind the Color Blue
by staff, Dunn-Edwards Paints

If you think about it, there is not a lot of blue in nature. Most people do not have blue eyes, blue flowers do not occur naturally without human intervention, and blue animals are rare — bluebirds and bluejays only live in isolated areas. The sky is blue — or is it? One theory suggests that before humans had words for the color blue, they actually saw the sky as another color. This theory is supported by the fact that if you never describe the color of the sky to a child, and then ask them what color it is, they often struggle to describe its color. Some describe it as colorless or white. It seems that only after being told that the sky is blue, and after seeing other blue objects over a period of time, does one start seeing the sky as blue. […]

Scientists generally agree that humans began to see blue as a color when they started making blue pigments. Cave paintings from 20,000 years ago lack any blue color, since as previously mentioned, blue is rarely present in nature. About 6,000 years ago, humans began to develop blue colorants. Lapis, a semiprecious stone mined in Afghanistan, became highly prized among the Egyptians. They adored the bright blue color of this mineral. They used chemistry to combine the rare lapis with other ingredients, such as calcium and limestone, and generate other saturated blue pigments. It was at this time that an Egyptian word for “blue” emerged.

Slowly, the Egyptians spread their blue dyes throughout the world, passing them on to the Persians, Mesoamericans and Romans. The dyes were expensive — only royalty could afford them. Thus, blue remained rare for many centuries, though it slowly became popular enough to earn its own name in various languages.

Cognitive Variations:
Reflections on the Unity and Diversity of the Human Mind
by Geoffrey Lloyd
Kindle Locations 178-208

Standard colour charts and Munsell chips were, of course, used in the research in order to ensure comparability and to discount local differences in the colours encountered in the natural environment. But their use carried major risks, chiefly that of circularity. The protocols of the enquiry presupposed the differences that were supposed to be under investigation and to that extent and in that regard the investigators just got out what they had put in. That is to say, the researchers presented their interviewees with materials that already incorporated the differentiations the researchers themselves were interested in. Asked to identify, name, or group different items, the respondents’ replies were inevitably matched against those differentiations. Of course the terms in which the replies were made – in the natural languages the respondents used – must have borne some relation to the differences perceived, otherwise they would not have been used in replying to the questions (assuming, as we surely may, that the questions were taken seriously and that the respondents were doing their honest best). But it was assumed that what the respondents were using in their replies were essentially colour terminologies, distinguishing hues, and that assumption was unfounded in general, and in certain cases can be shown to be incorrect.

It was unfounded in general because there are plenty of natural languages in which the basic discrimination relates not to hues, but to luminosities. Ancient Greek is one possible example. Greek colour classifications are rich and varied and were, as we shall see, a matter of dispute among the Greeks themselves. They were certainly capable of drawing distinctions between hues. I have already given one example. When Aristotle analyses the rainbow, where it is clearly hue that separates one end of the spectrum from the other, he identifies three colours using terms that correspond, roughly, to ‘red’, ‘green’, and ‘blue’, with a fourth, corresponding to ‘yellow’, which he treats (as noted) as a mere ‘appearance’ between ‘red’ and ‘green’. But the primary contrariety that figures in ancient Greek (including in Aristotle) is between leukon and melan, which usually relate not to hues so much as to luminosity. Leukos, for instance, is used of the sun and of water, where it is clearly not the case that they share, or were thought to share, the same hue. So the more correct translation of that pair is often ‘bright’ or ‘light’ and ‘dark’, rather than ‘white’ and ‘black’. Berlin and Kay (1969: 70) recognized the range of application of leukon, yet still glossed the term as ‘white’. Even more strangely, they interpreted glaukon as ‘black’. That term is particularly context-dependent, but when Aristotle (On the Generation of Animals 779a26, b34 ff.) tells us that the eyes of babies are glaukon, that corresponds to ‘blue’, where melan, the usual term for ‘black’ or rather ‘dark’, is represented as its antonym, rather than its synonym, as Berlin and Kay would need it to be.

So one possible source of error in the Berlin and Kay methodology was the privileging of hue over luminosity. But that still does not get to the bottom of the problem, which is that in certain cases the respondents were answering in terms whose primary connotations were not colours at all. The Hanunoo had been studied before Berlin and Kay in a pioneering article by Conklin (1955), and Lyons (1995; 1999) has recently reopened the discussion of this material. 7 First Conklin observed that the Hanunoo have no word for colour as such. But (as noted) that does not mean, of course, that they are incapable of discriminating between different hues or luminosities. To do so they use four terms, mabiru, malagti, marara, and malatuy, which may be thought to correspond, roughly, to ‘black’, ‘white’, ‘red’, and ‘green’. Hanunoo was then classified as a stage 3 language, in Berlin and Kay’s taxonomy, one that discriminates between four basic colour terms, indeed those very four.

7 Cf. also Lucy 1992: ch. 5, who similarly criticizes taking purported colour terms out of context.

Yet, according to Conklin, chromatic variation was not the primary basis for differentiation of those four terms at all. Rather the two principal dimensions of variation are (1) lightness versus darkness, and (2) wetness versus dryness, or freshness (succulence) versus desiccation. A third differentiating factor is indelibility versus fadedness, referring to permanence or impermanence, rather than to hue as such.

Berlin and Kay only got to their cross-cultural universals by ignoring (they may even sometimes have been unaware of) the primary connotations of the vocabulary in which the respondents expressed their answers to the questions put to them. That is not to say, of course, that the members of the societies concerned are incapable of distinguishing colours, whether as hues or as luminosities. That would be to make the mistake that my first philosophical observation was designed to forestall. You do not need colour terms to register colour differences. Indeed Berlin and Kay never encountered – certainly they never reported – a society where the respondents simply had nothing to say when questioned about how their terms related to what they saw on the Munsell chips. But the methodology was flawed in so far as it was assumed that the replies given always gave access to a classification of colour, when sometimes colours were not the primary connotations of the vocabulary used at all.

The Language Myth:
Why Language Is Not an Instinct
by Vyvyan Evans
pp. 204-206

The neo-Whorfians have made four main criticisms of this research tradition as it relates to linguistic relativity. 33 First off, the theoretical construct of the ‘basic colour term’ is based on English. It is then assumed that basic colour terms – based on English – correspond to an innate biological specification. But the assumption that basic colour terms – based on English – correspond to universal semantic constraints, due to our common biology, biases the findings in advance. The ‘finding’ that other languages also have basic colour terms is a consequence of a self-fulfilling prophecy: as English has been ‘found’ to exhibit basic colour terms, all other languages will too. But this is no way to investigate putative cross-linguistic universals; it assumes, much like Chomsky did, that colour in all of the world’s languages will be, underlyingly, English-like. And as we shall see, other languages often do things in startlingly different ways.

Second, the linguistic analysis Berlin and Kay conducted was not very rigorous – to say the least. For most of the languages they ‘examined’, Berlin and Kay relied on second-hand sources, as they had no first-hand knowledge of the languages they were hoping to find basic colour terms in. To give you a sense of the problem, it is not even clear whether many of the putative basic colour terms Berlin and Kay ‘uncovered’, were from the same lexical class; for instance, in English, the basic colour terms – white, black, red and so on – are all adjectives. Yet, for many of the world’s languages, colour expressions often come from different lexical classes. As we shall see shortly, one language, Yélî Dnye, draws its colour terms from several lexical classes, none of which is adjectives. And the Yélî language is far from exceptional in this regard. The difficulty here is that, without a more detailed linguistic analysis, there is relatively little basis for the assumption that what is being compared involves comparable words. And, that being the case, can we still claim that we are dealing with basic colour terms?

Third, many other languages do not conceptualise colour as an abstract domain independent of the objects that colour happens to be a property of. For instance, some languages do not even have a word corresponding to the English word colour – as we shall see later. This shows that colour is often not conceptualised as a stand-alone property in the way that it is in English. In many languages, colour is treated in combination with other surface properties. For English speakers this might sound a little odd. But think about the English ‘colour’ term roan: this encodes a surface pattern, rather than strictly colour – in this case, brown interspersed with white, as when we describe a horse as ‘roan’. Some languages combine colour with other properties, such as desiccation, as in the Old Germanic word saur, which meant yellow and dry. The problem, then, is that in languages with relatively simple colour technology – arguably the majority of the world’s languages – lexical systems that combine colour with other aspects of an object’s appearance are artificially excluded from being basic colour terms – as English is being used as the reference point. And this, then, distorts the true picture of how colour is represented in language, as the analysis only focuses on those linguistic features that correspond to the ‘norm’ derived from English. 34

And finally, the ‘basic colour term’ project is flawed, in so far as it constitutes a riposte to linguistic relativity; as John Lucy has tellingly observed, linguistic relativity is the thesis that language influences non-linguistic aspects of thought: one cannot demonstrate that it is wrong by investigating the effect of our innate colour sense on language. 35 In fact, one has to demonstrate the reverse: that language doesn’t influence psychophysics (in the domain of colour). Hence, the theory of basic colour terms cannot be said to refute the principle of linguistic relativity as ironically, it wasn’t in fact investigating it.

The neo-Whorfian critique, led by John Lucy and others, argued that, at its core, the approach taken by Berlin and Kay adopted an unwarranted ethnocentric approach that biased findings in advance. And, in so doing, it failed to rule out the possibility that what other languages and cultures were doing was developing divergent semantic systems – rather than there being a single universal system – in the domain of colour, albeit an adaptation to a common human set of neurobiological constraints. By taking the English language in general, and in particular the culture of the English-speaking peoples – the British Isles, North America and the Antipodes – as its point of reference, it not only failed to establish what different linguistic systems – especially in non-western cultures – were doing, but led, inevitably, to the conclusion that all languages, even when strikingly diverse in terms of their colour systems, were essentially English-like. 36

The Master and His Emissary: The Divided Brain and the Making of the Western World
by Iain McGilchrist
pp. 221-222

Consciousness is not the same as inwardness, although there can be no inwardness without consciousness. To return to Patricia Churchland’s statement that it is reasonable to identify the blueness of an object with its disposition to scatter electromagnetic waves preferentially at about 0.46μm, 52 to see it like this, as though from the outside, excluding the ‘subjective’ experience of the colour blue – as though to get the inwardness of consciousness out of the picture – requires a very high degree of consciousness and self-consciousness. The polarity between the ‘objective’ and ‘subjective’ points of view is a creation of the left hemisphere’s analytic disposition. In reality there can be neither absolutely, only a choice between a betweenness which acknowledges itself, and one which denies its own nature. By identifying blueness solely with the behaviour of electromagnetic particles one is not avoiding value, not avoiding betweenness, not avoiding one’s shadow being cast across the picture. One is using the inwardness of consciousness in a very specialised way to strive to empty itself as much as possible of value, of the self. The paradoxical result is an extremely partial, fragmented version of the colour blue, which is neither value-free nor independent of the self’s disposition towards its object.

p. 63

Another thought-provoking detail about sadness and the right hemisphere involves the perception of colour. Brain regions involved in conscious identification of colour are probably left-sided, perhaps because it involves a process of categorisation and naming; 288 however, it would appear that the perception of colour in mental imagery under normal circumstances activates only the right fusiform area, not the left, 289 and imaging studies, lesion studies and neuropsychological testing all suggest that the right hemisphere is more attuned to colour discrimination and perception. 290 Within this, though, there are hints that the right hemisphere prefers the colour green and the left hemisphere prefers the colour red (as the left hemisphere may prefer horizontal orientation, and the right hemisphere vertical – a point I shall return to in considering the origins of written language in Chapter 8). 291 The colour green has traditionally been associated not just with nature, innocence and jealousy but with – melancholy: ‘She pined in thought, / And with a green and yellow melancholy / She sat like Patience on a monument, / Smiling at grief’. 292

Is there some connection between the melancholy tendencies of the right hemisphere and the mediaeval belief that the left side of the body was dominated by black bile? Black bile was, of course, associated with melancholy (literally, Greek melan-, black + chole, bile) and was thought to be produced by the spleen, a left-sided organ. For the same reasons the term spleen itself was, from the fourteenth century to the seventeenth century, applied to melancholy; though, as if intuiting that melancholy, passion, and sense of humour all came from the same place (in fact the right hemisphere, associated with the left side of the body), ‘spleen’ could also refer to each or any of these.

Note 291

‘There are hints from many sources that the left hemisphere may innately prefer red over green, just as it may prefer horizontal over vertical. I have already discussed the language-horizontal connection. The connection between the left hemisphere and red is also indirect, but is supported by a remarkable convergence of observations from comparative neurology, which has shown appropriate asymmetries between both the hemispheres and even between the eyes (cone photoreceptor differences between the eyes of birds are consistent with a greater sensitivity to movement and to red on the part of the right eye (Hart, 2000)) and from introspective studies over the millennia in three great religions that have all converged in the same direction on an association between action, heat, red, horizontal, far etc and the right side of the body (i.e. the left cerebral hemisphere, given the decussation between cerebral hemisphere and output) compared with inaction, cold, green, vertical, near etc and the left side/right hemisphere respectively’ (Pettigrew, 2001, p. 94).

Louder Than Words:
The New Science of How the Mind Makes Meaning
by Benjamin K. Bergen
pp. 57-58

We perceive objects in the real world in large part through their color. Are the embodied simulations we construct while understanding language in black and white, or are they in color? It seems like the answer should be obvious. When you imagine a yellow trucker hat, you feel the subjective experience of yellowness that looks a lot like yellow as you would perceive it in the real world. But it turns out that color is actually a comparatively fickle visual property of both perceived and imagined objects. Children can’t use color to identify objects until about a year of age, much later than they can use shape. And even once they acquire this ability, as adults, people’s memory for color is substantially less accurate than their memory for shape, and they have to pay closer attention to detect changes in the color of objects than in their shape or location.

And yet, with all this going against it, color still seeps into our embodied simulations, at least briefly. One study looking at color used the same sentence-picture matching method we’ve been talking about. People read sentences that implied particular colors for objects. For instance, ‘John looked at the steak on his plate’ implies a cooked and therefore appropriately brown steak, while ‘John looked at the steak in the butcher’s window’ implies an uncooked and therefore red steak. In the key trials, participants then saw a picture of the same object, which could either match or mismatch the color implied by the sentence— that is, the steak could be red or brown. Once again, this method produced an interaction. Curiously, though, the result was slower reactions to matching-color images (unlike the faster reactions to matching shape and orientation images in the previous studies). One explanation for why this effect appears in the opposite direction is that perhaps people processing sentences only mentally simulate color briefly and then suppress color to represent shape and orientation. This might lead to slower responses to a matching color when an image is subsequently presented.
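To make the logic of that sentence-picture matching design concrete, here is a minimal Python sketch. The trials and reaction times below are invented for illustration; they are not the study’s stimuli or data, only the match/mismatch bookkeeping the paradigm relies on.

# A toy sketch of the sentence-picture matching analysis. Every value
# here is hypothetical; the point is the structure: each trial pairs a
# sentence-implied color with a pictured color, and mean reaction times
# are compared across match and mismatch trials.
from statistics import mean

# (implied_color, pictured_color, reaction_time_ms)
trials = [
    ("brown", "brown", 620),  # plate sentence + cooked steak: match
    ("brown", "red",   591),  # plate sentence + raw steak: mismatch
    ("red",   "red",   618),  # butcher sentence + raw steak: match
    ("red",   "brown", 588),  # butcher sentence + cooked steak: mismatch
]

match_rts = [rt for implied, shown, rt in trials if implied == shown]
mismatch_rts = [rt for implied, shown, rt in trials if implied != shown]

# In the color study described above, matches came out SLOWER than
# mismatches, the reverse of the shape and orientation results.
print(f"match: {mean(match_rts):.0f} ms, mismatch: {mean(mismatch_rts):.0f} ms")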

pp. 190-192

Another example of how languages make people think differently comes from color perception. Languages have different numbers of color categories, and those categories have different boundaries. For instance, in English, we make a categorical distinction between reds and pinks— we have different names for them, and we judge colors to be one or the other (we don’t think of pinks as a type of red or vice versa— they’re really different categories). And because our language makes this distinction, when we use English and we want to identify something by its color, we have to attend to where in the pink-red range it falls. But other languages don’t make this distinction. For instance, Wobé, a language spoken in Ivory Coast, only has one color category that spans English pinks and reds. So to speak Wobé, you don’t need to pay as close attention to colors in the pink-red range to identify them; all you have to do is recognize that they’re in that range, retrieve the right color term, and you’re set.

We can see this phenomenon in reverse when we look at the blue range. For the purposes of English, light blues and dark blues are all blues; perceptibly different shades, no doubt, but all blues nonetheless. Russian, however, splits blue apart in the way that we separate red and pink. There are two distinct color categories in Russian for our blues: goluboy (light blues) and siniy (dark blues). For the purposes of English, you don’t have to worry about what shade of blue something is to describe it successfully. Of course you can be more specific if you want; you can describe a shade as powder blue or deep blue, or any variety of others. But you don’t have to. In Russian, however, you do. To describe the colors of Cal or UCLA, for example, there would be no way in Russian to say they’re both blue; you’d have to say that Cal is siniy and UCLA is goluboy. And to say that, you’d need to pay attention to the shades of blue that each school wears. The words the language makes available mandate that you pay attention to particular perceptual details in order to speak.
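The pink/red and siniy/goluboy contrasts can be summed up in a few lines of code. Here is a toy Python sketch; the boundary values and the Wobé placeholder label are crude stand-ins I have invented, not real linguistic data, but they show how the same shade demands different amounts of attention depending on the lexicon.

# Illustrative only: lightness thresholds and the "pink-red" label are
# invented; real category boundaries are an empirical matter.
def english_term(hue, lightness):
    if hue == "blue":
        return "blue"  # one obligatory category for all blues
    if hue == "red":
        return "pink" if lightness > 0.6 else "red"  # two categories
    return hue

def russian_blue_term(lightness):
    # Russian obligatorily splits the blue range
    return "goluboy" if lightness > 0.5 else "siniy"

def wobe_term(hue, lightness):
    # Wobé (per Bergen) uses one category spanning English pinks and
    # reds; "pink-red" is a stand-in label, not the actual word
    return "pink-red" if hue == "red" else hue

# Cal's dark blue vs UCLA's light blue: English can stop at "blue",
# while Russian must first attend to lightness.
print(english_term("blue", 0.2), english_term("blue", 0.8))  # blue blue
print(russian_blue_term(0.2), russian_blue_term(0.8))        # siniy goluboy
print(wobe_term("red", 0.2), wobe_term("red", 0.8))          # pink-red pink-red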

The flip side of thinking for speaking is thinking for understanding. Each time someone describes something as siniy or goluboy in Russian, there’s a little bit more information there than when the same things are described as blue in English. So if you think about it, saying that the sky is blue in English is actually less specific than its equivalent would be in Russian— some languages provide more information about certain things each time you read or hear about them.

The fact that different languages encode different information in everyday words could have a variety of effects on how people understand those languages. For one, when a language systematically encodes something, that might lead people to regularly encode that detail as part of their embodied simulations. Russian comprehenders might construct more detailed representations of the shades of blue things than their English-comprehending counterparts. Pormpuraawans might understand language about locations by mentally representing cardinal directions in space while their English-comprehending counterparts use ego-centered mental representations to do the same thing.

Or an alternative possibility is that people might ultimately understand language about the given domain in the same way, regardless of the language, but, in order to get there, they might have to do more mental gymnastics. To get from the word blue in English to the color of the sky might take longer than to go there directly from goluboy in Russian. Or, to take another example, to construct an egocentric idea of where the bay windows are relative to you might be easier when you hear ‘on your right’ than ‘to your north’.

A third possibility, and one that has caught a lot of people’s interest, is that there may be longer-term and more pervasive effects of linguistic differences on people’s cognition, even outside of language. Perhaps, for instance, Pormpuraawan speakers, by dint of years and years of having to pay attention to the cardinal directions, learn to constantly monitor them, even when they’re not using language; perhaps more so than English speakers. Likewise, perhaps the color categories your language provides affect not merely what you attend to and think about when using color words but also what differences you perceive among colors and how easily you distinguish between colors. This is the idea of linguistic relativism, that the language you speak can affect the way you think. The debate about linguistic relativism is a hot one, but the jury is still out on how and when language affects nonlinguistic thought.

All of this is to say that individual languages are demanding of their speakers. To speak and understand a language, you have to think, and languages, to some extent, dictate what things you ought to think, what things you ought to pay attention to, and how you should break the world up into categories. As a result, the routine patterns of thought that an English speaker engages in will differ from those of a Russian or Wobé or Pormpuraaw speaker. Native speakers of these languages are also native thinkers of these languages.

The First Signs: Unlocking the Mysteries of the World’s Oldest Symbols
by Genevieve von Petzinger
Kindle Locations 479-499

Not long after the people of Sima de los Huesos began placing their dead in their final resting place, another group of Homo heidelbergensis, this time in Zambia, began collecting colored minerals from the landscape around them. They not only preferred the color red, but also collected minerals ranging in hue from yellow and brown to black and even to a purple shade with sparkling flecks in it. Color symbolism— associating specific colors with particular qualities, ideas, or meanings— is widely recognized among modern human groups. The color red, in particular, seems to have almost universal appeal. These pieces of rock show evidence of grinding and scraping, as though they had been turned into a powder.

This powdering of colors took place in a hilltop cave called Twin Rivers in what is present-day Zambia between 260,000 and 300,000 years ago. 10 At that time, the environment in the region was very similar to what we find there today: humid and semitropical with expansive grasslands broken by stands of short bushy trees. Most of the area’s colorful rocks, which are commonly known as ochre, contain iron oxide, which is the mineral pigment later used to make the red paint on the walls of caves across Ice Age Europe and beyond. In later times, ochre is often associated with nonutilitarian activities, but since the people of Twin Rivers lived before the emergence of modern humans (Homo sapiens, at 200,000 years ago), they were not quite us yet. If this site were, say, 30,000 years old, most anthropologists would agree that the collection and preparation of these colorful minerals had a symbolic function, but because this site is at least 230,000 years older, there is room for debate.

Part of this uncertainty is owing to the fact that ground ochre is also useful for utilitarian reasons. It can act as an adhesive, say, for gluing together parts of a tool. It also works as an insect repellent and in the tanning of hides, and may even have been used for medicinal purposes, such as stopping the bleeding of wounds.

If the selection of the shades of ochre found at this site were for some mundane purpose, then the color shouldn’t matter, right? Yet the people from the Twin Rivers ranged out across the landscape to find these minerals, often much farther afield than necessary if they just required something with iron oxide in it. Instead, they returned to very specific mineral deposits, especially ones containing bright-red ochre, then carried the ochre with them back to their home base. This use of ochre, and the preference for certain colors, particularly bright red, may have been part of a much earlier tradition, and it is currently one of the oldest examples we have of potential symbolism in an ancestral human species.

Kindle Locations 669-683

Four pieces of bright-red ochre collected from a nearby mineral source were also found in the cave. 6 Three of the four pieces had been heated to at least 575°F in order to convert them from yellow to red. The inhabitants of Skhul had prospected the landscape specifically for yellowish ochre with the right chemical properties to convert into red pigment. The selective gathering of materials and their probable heat-treatment almost certainly indicates a symbolic aspect to this practice, possibly similar to what we saw with the people at Pinnacle Point about 30,000 years earlier. […]
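For readers who think in Celsius, that heat-treatment figure works out to roughly 300°C; a quick sketch of the arithmetic:

# Convert the heat-treatment temperature quoted above to Celsius.
f = 575
c = (f - 32) * 5 / 9
print(f"{f}°F = {c:.0f}°C")  # 575°F = 302°C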

The combination of the oldest burial with grave goods; the preference for bright-red ochre and the apparent ability to heat-treat pigments to achieve it; and what are likely some of the earliest pieces of personal adornment— all these details make the people from Skhul good candidates for being our cognitive equals. And they appear at least 60,000 years before the traditional timing of the “creative explosion.”

Kindle Locations 1583-1609

There is something about the color red. It can represent happiness, anger, good luck, danger, blood, heat, sun, life, and death. Many cultures around the world attach a special significance to red. Its importance is also reflected in many of the languages spoken today. Not all languages include words for a range of colors, and the simplest systems recognize only white and black, or light and dark, but whenever they do include a third color word in their language, it is always red.

This attachment to red seems to be embedded deep within our collective consciousness. Not only did the earliest humans have a very strong preference for brilliant red ochre (except for the inhabitants of Sai Island, in Sudan, who favored yellow), but even earlier ancestral species were already selecting red ochre over other shades. It may also be significant (although we don’t know how) that the pristine quartzite stone tool found in the Pit of Bones in Spain was of an unusual red hue.

This same preference for red is evident on the walls of caves across Europe during the Ice Age. But by this time, artists had added black to their repertoire and the vast majority of paintings were done in one or both of these colors. I find it intriguing that two of the three most common colors recognized and named across all languages are also the ones most often used to create the earliest art. The third shade, though well represented linguistically, is noticeably absent from Ice Age art. Of all the rock art sites currently known in Europe, only a handful have any white paint in them. Since many of the cave walls are a fairly light gray or a translucent yellowy white, it’s possible that the artists saw the background as representing this shade, or that its absence could have been due to the difficulty in obtaining white pigment: the small number of sites that do have white images all used kaolin clay to create this color. (Since kaolin clay was not as widely available as the materials for making red and black paint, it is certainly possible that scarcity was a factor in color choice.)

While the red pigment was created using ochre, the black paint was made using either ground charcoal or the mineral manganese oxide. The charcoal was usually sourced from burnt wood, though in some instances burnt bone was used instead. Manganese is found in mineral deposits, sometimes in the same vicinity as ochre. Veins of manganese can also occasionally be seen embedded right in the rock at some cave sites. Several other colors do appear on occasion— yellow and brown are the most common— though they appear at only about 10 percent of sites.

There is also a deep purple color that I’ve only ever seen in cave art in northern Spain, and even there it’s rare. La Pasiega (the site where I saw the grinding stone) has a series of paintings in this shade of violet in one section of the cave. Mixed in with more common red paintings, there are several purple signs— dots, stacked lines, rectangular grills— along with a single purple bison that was rendered in great detail (see fig. 4 in insert). Eyes, muzzle, horns— all have been carefully depicted, and yet the purple shade is not an accurate representation of a bison’s coloring. Did the artist use this color simply because it’s what he or she had at hand? Or could it be that the color of the animal was being dictated by something other than a need for this creature to be true to life? We know these artists had access to brown and black pigments, but at many sites they chose to paint animals in shades of red or yellow, or even purple, like the bison here at La Pasiega. These choices are definitely suggestive of there being some type of color symbolism at work, and it could even be that creating accurate replicas of real-life animals was not the main goal of these images.