Useful Fictions Becoming Less Useful

Humanity has long lived under the shadow of the Axial Age, no less today than in centuries past. But what has this meant for our self-understanding and for the kind of societies we have created? Ideas, as memes, can survive and even dominate for millennia. This can happen even when they are wrong, as long as they are useful to the social order.

One such idea involves nativism and essentialism, made possible through highly developed abstract thought. This notion of something inherent went along with the notion of division, from mind-body dualism to brain modules (what is inherent in one area being separate from what is inherent elsewhere). It goes back at least to the ancient Greeks, as with Platonic idealism (each ideal an abstract thing unto itself), although abstract thought required two millennia of development before it gained its most powerful form through modern science. As Elisa J. Sobo noted, “Ironically, prior to the industrial revolution and the rise of the modern university, most thinkers took a very comprehensive view of the human condition. It was only afterward that fragmented, factorial, compartmental thinking began to undermine our ability to understand ourselves and our place in— and connection with— the world.”

Maybe we are finally coming around to more fully questioning these useful fictions because they have become less useful as the social order changes, as the entire world shifts around us with globalization, climate change, mass immigration, etc. We essentialized emotions to the point of starting a war against one of them with the War on Terror, as if this emotion were definitive of our shared reality (and a great example of metonymy, by the way), but obviously fighting wars against a reified abstraction isn’t an optimal strategy for societal progress. Maybe we need new ways of thinking.

The main problem with useful fictions isn’t necessarily that they are false, partial, or misleading. A useful fiction wouldn’t last for millennia if it weren’t, first and foremost, useful (especially true in relation to the views of human nature found in folk psychology). It is true that our seeing these fictions for what they are is a major change, but more importantly, what led us to question their validity is that some of them have stopped being as useful as they once were. The nativists, essentialists, and modularists argued that such things as emotional experience, color perception, and language learning were inborn abilities and natural instincts: genetically determined, biologically constrained, and neurocognitively formed. On the basis of that theory, immense amounts of time, energy, and resources were invested in the promises made.

This motivated the entire search to connect everything observable in humans back to a gene, a biological structure, or an evolutionary trait (with the brain getting outsized attention). Yet reality has turned out to be much more complex, with environmental factors, epigenetics, brain plasticity, etc. The original quest hasn’t been as fruitful as hoped, partly because of problems in the conceptual frameworks and in the scientific research itself, and this has led some to give up on the search. Consider how, when one part of the brain is missing or damaged, other parts of the brain often compensate and take over the correlated function. There have been examples of people lacking most of their brain matter who are still able to function with what appears to be outwardly normal behavior. The whole is greater than the sum of the parts, such that the whole can maintain its integrity even without all of the parts.

The past view of the human mind and body has been simplistic in the extreme. This is because we’ve lacked the capacity to see most of what goes on in making them possible. Our conscious minds, including our rational thought, are far more limited than many assumed. And the unconscious mind, the dark matter of the mind, is so much more amazing in what it accomplishes. In discussing what they call conceptual blending, Gilles Fauconnier and Mark Turner write (The Way We Think, p. 18):

“It might seem strange that the systematicity and intricacy of some of our most basic and common mental abilities could go unrecognized for so long. Perhaps the forming of these important mechanisms early in life makes them invisible to consciousness. Even more interestingly, it may be part of the evolutionary adaptiveness of these mechanisms that they should be invisible to consciousness, just as the backstage labor involved in putting on a play works best if it is unnoticed. Whatever the reason, we ignore these common operations in everyday life and seem reluctant to investigate them even as objects of scientific inquiry. Even after training, the mind seems to have only feeble abilities to represent to itself consciously what the unconscious mind does easily. This limit presents a difficulty to professional cognitive scientists, but it may be a desirable feature in the evolution of the species. One reason for the limit is that the operations we are talking about occur at lightning speed, presumably because they involve distributed spreading activation in the nervous system, and conscious attention would interrupt that flow.”

As they argue, conceptual blending helps us understand why a language module or instinct isn’t necessary. Research has shown that there is no single part of the brain nor any single gene that is solely responsible for much of anything. The constituent functions and abilities that form language likely evolved separately for other reasons that were advantageous to survival and social life. Language isn’t built into the brain as an evolutionary leap; rather, it was an emergent property that couldn’t have been predicted from any prior neurocognitive development, which is to say language was built on abilities that by themselves would not have been linguistic in nature.

Of course, Fauconnier and Turner are far from being the only proponents of such theories, as this perspective has become increasingly attractive. Another example is Mark Changizi’s theory, presented in Harnessed, where he argues that (p. 11), “Speech and music culturally evolved over time to be simulacra of nature.” Whatever theory one goes with, what is required is to explain the research challenging and undermining earlier models of cognition, affect, linguistics, and related areas.

Another book I was reading is How Emotions are Made by Lisa Feldman Barrett. She is covering similar territory, despite her focus being on something so seemingly simple as emotions. We rarely give emotions much thought, taking them for granted, but we shouldn’t. How we understand our experience and expression of emotion is part and parcel of a deeper view that our society holds about human nature, a view that also goes back millennia. This ancient lineage of inherited thought is what makes it problematic: it feels intuitively true because it is so entrenched within our culture (Kindle Locations 91-93):

“And yet . . . despite the distinguished intellectual pedigree of the classical view of emotion, and despite its immense influence in our culture and society, there is abundant scientific evidence that this view cannot possibly be true. Even after a century of effort, scientific research has not revealed a consistent, physical fingerprint for even a single emotion.”

“So what are they, really?” Barrett asks about emotions (Kindle Locations 99-104):

“When scientists set aside the classical view and just look at the data, a radically different explanation for emotion comes to light. In short, we find that your emotions are not built-in but made from more basic parts. They are not universal but vary from culture to culture. They are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment. Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real— that is, hardly an illusion, but a product of human agreement.”

This goes along with an area of thought that arose out of philology, classical studies, consciousness studies, Jungian psychology, and anthropology. As always, I’m particularly thinking of the bicameral mind theory of Julian Jaynes. In the most ancient civilizations, there were no monetary systems, nor, according to Jaynes, was there consciousness as we know it. He argues that individual self-consciousness was built on an abstract metaphorical space that was internalized and narratized. This privatization of personal space led to the possibility of self-ownership, the later basis of capitalism (and hence capitalist realism). It’s abstractions upon abstractions, until all of modern civilization bootstrapped itself into existence.

The initial potentials within human nature could be and have been used to build diverse cultures, but modern society has genocidally wiped out most of this once existing diversity, leaving behind a near total dominance of WEIRD monoculture. This allows us modern Westerners to mistake our own culture for universal human nature. Our imaginations are constrained by a reality tunnel, which further strengthens the social order (control of the mind is the basis for control of society). Maybe this is why certain abstractions have been so central in conflating our social reality with physical reality, as Barrett explains (Kindle Locations 2999-3002):

“Essentialism is the culprit that has made the classical view supremely difficult to set aside. It encourages people to believe that their senses reveal objective boundaries in nature. Happiness and sadness look and feel different, the argument goes, so they must have different essences in the brain. People are almost always unaware that they essentialize; they fail to see their own hands in motion as they carve dividing lines in the natural world.”

We make the world in our own image. And then we force this social order on everyone, imprinting it not just onto the culture but onto biology itself. With epigenetics, brain plasticity, microbiomes, etc., biology readily accepts this imprinting of the social order (Kindle Locations 5499-5503):

“By virtue of our values and practices, we restrict options and narrow possibilities for some people while widening them for others, and then we say that stereotypes are accurate. They are accurate only in relation to a shared social reality that our collective concepts created in the first place. People aren’t a bunch of billiard balls knocking one another around. We are a bunch of brains regulating each other’s body budgets, building concepts and social reality together, and thereby helping to construct each other’s minds and determine each other’s outcomes.”

There are clear consequences to humans as individuals and communities. But there are other costs as well (Kindle Locations 129-132):

“Not long ago, a training program called SPOT (Screening Passengers by Observation Techniques) taught those TSA agents to detect deception and assess risk based on facial and bodily movements, on the theory that such movements reveal your innermost feelings. It didn’t work, and the program cost taxpayers $900 million. We need to understand emotion scientifically so government agents won’t detain us— or overlook those who actually do pose a threat— based on an incorrect view of emotion.”

This is one of the ways in which our fictions have become less than useful. As long as societies were relatively isolated, they could maintain their separate fictions and treat them as reality. But in a global society, these fictions end up clashing with each other in ways that are not just unhelpful but wasteful and dangerous. If TSA agents were only trying to observe people who shared a common culture of social constructs, the standard set of WEIRD emotional behaviors would apply. The problem is that TSA agents have to deal with people from diverse cultures that have different ways of experiencing, processing, perceiving, and expressing what we call emotions. It would be like trying to understand world cuisine, diet, and eating habits by studying the American patrons of fast food restaurants.

Barrett points to the historical record of ancient societies and to studies done on non-WEIRD cultures. What was assumed to be true based on WEIRD scientists studying WEIRD subjects turns out not to be true for the rest of the world. But there is an interesting catch to the research, the reason so much confusion prevailed for so long. It is easy to teach people cultural categories of emotion and how to identify them. Some of the initial research on non-WEIRD populations unintentionally taught the subjects the very WEIRD emotions that they were attempting to study. The structure of the studies themselves had WEIRD biases built into them. It was only with later research that they were able to filter out these biases and observe the actual non-WEIRD responses of non-WEIRD populations.

Researchers only came to understand this problem quite recently. Noam Chomsky, for example, thought it unnecessary to study actual languages in the field. Based on his own theorizing, he believed that studying a single language such as English would tell us everything we needed to know about the basic workings of all languages in the world. This belief proved massively wrong, as field research demonstrated. There was also an idealism in the early Cold War era that led to false optimism, as Americans felt on top of the world. Chris Knight made this point in Decoding Chomsky (from the Preface):

“Pentagon’s scientists at this time were in an almost euphoric state, fresh from victory in the recent war, conscious of the potential of nuclear weaponry and imagining that they held ultimate power in their hands. Among the most heady of their dreams was the vision of a universal language to which they held the key. […] Unbelievable as it may nowadays sound, American computer scientists in the late 1950s really were seized by the dream of restoring to humanity its lost common tongue. They would do this by designing and constructing a machine equipped with the underlying code of all the world’s languages, instantly and automatically translating from one to the other. The Pentagon pumped vast sums into the proposed ‘New Tower’.”

Chomsky’s modular theory dominated linguistics for more than a half century. It still is held in high esteem, even as the evidence is increasingly stacked against it. This wasn’t just a waste of an immense amount of funding. It derailed an entire field of research and stunted the development of a more accurate understanding. Generations of linguists went chasing after a mirage. No brain module of language has been found, nor is there any hope of ever finding one. Many researchers wasted their entire careers on a theory that proved false, and many of these researchers continue to defend it, maybe in the hope that another half century of research will finally prove it to be true after all.

There is no doubt that Chomsky has a brilliant mind. He is highly skilled in debate and persuasion. He won the battle of ideas, at least for a time. Through the sheer power of his intellect, he was able to overwhelm his academic adversaries. His ideas came to dominate the field of linguistics, in what came to be known as the cognitive revolution. But Daniel Everett has stated that “it was not a revolution in any sense, however popular that narrative has become” (Dark Matter of the Mind, Kindle Location 306). If anything, Chomsky’s version of essentialism caused the temporary suppression of a revolution that was initiated by linguistic relativists and social constructionists, among others. The revolution was strangled in the crib, partly because it was fighting against an entrenched ideological framework that was millennia old. The initial attempts at research struggled to offer a competing ideological framework and they lost that struggle. Then they were quickly forgotten, as if the evidence they brought forth were irrelevant.

Barrett explains the tragedy of this situation. She is speaking of essentialism in terms of emotions, but it applies to the entire scientific project of essentialism. It has been a failed project that refuses to accept its failure, a paradigm that refuses to die in order to make way for something else. She laments all of the waste and lost opportunities (Kindle Locations 3245-3293):

“Now that the final nails are being driven into the classical view’s coffin in this era of neuroscience, I would like to believe that this time, we’ll actually push aside essentialism and begin to understand the mind and brain without ideology. That’s a nice thought, but history is against it. The last time that construction had the upper hand, it lost the battle anyway and its practitioners vanished into obscurity. To paraphrase a favorite sci-fi TV show, Battlestar Galactica, “All this has happened before and could happen again.” And since the last occurrence, the cost to society has been billions of dollars, countless person-hours of wasted effort, and real lives lost. […]

“The official history of emotion research, from Darwin to James to behaviorism to salvation, is a byproduct of the classical view. In reality, the alleged dark ages included an outpouring of research demonstrating that emotion essences don’t exist. Yes, the same kind of counterevidence that we saw in chapter 1 was discovered seventy years earlier . . . and then forgotten. As a result, massive amounts of time and money are being wasted today in a redundant search for fingerprints of emotion. […]

“It’s hard to give up the classical view when it represents deeply held beliefs about what it means to be human. Nevertheless, the facts remain that no one has found even a single reliable, broadly replicable, objectively measurable essence of emotion. When mountains of contrary data don’t force people to give up their ideas, then they are no longer following the scientific method. They are following an ideology. And as an ideology, the classical view has wasted billions of research dollars and misdirected the course of scientific inquiry for over a hundred years. If people had followed evidence instead of ideology seventy years ago, when the Lost Chorus pretty solidly did away with emotion essences, who knows where we’d be today regarding treatments for mental illness or best practices for rearing our children.”

 

Instinct For Pride

Of all the emotions, pride perplexes me the most.

People are proud of all kinds of things; apparently nothing can escape it. People feel proud of their inborn talents, their looks, their perceived race, their ethnicity, their nationality, their family and kin, their ancestry, their religions, their jobs, their houses, their cars, the nice outfit they bought recently, and on and on. Some people feel proud for just being human, instead of some other animal… or else just proud for existing, as if surviving rather than dying is a great accomplishment of personal merit.

Pride is such a natural and simple emotion. No one has to learn how to be proud. 

A kid draws a picture, builds a tower of blocks, climbs a tree, or whatever. The unselfconscious response of the kid is to be proud and the kid will beam with a confidence in his or her greatness… which will then make the kid’s parents proud as well, and so it is a whole lovefest of pride.

Even when kids are away from adults, pride is a driving force in peer behavior. Oh, the things a kid will do to gain social standing in a group or to prove themselves to prospective friends.

There is nothing inherently wrong about any of this. I don’t even intend personal criticism. I just find it an odd attribute of human behavior.

Most things people take pride in have little to do with choices they’ve made. People are born smart or beautiful. People are born into good families or good communities. People are born lucky with the right circumstances and opportunities. People simply do whatever it is in them to do.

There is nothing particularly amazing about any of this, yet all of civilization would probably collapse without pride, for it seems to be at the center of what motivates people. Shame is the opposite and necessary corollary of pride. People avoid shame like the plague, for it is social death and sometimes leads to physical death via suicide (or else, in honor societies, via homicide). Pride and shame make the world go round.

Violence, Dark Thoughts, Righteousness, Collective Mood, Contingent Love, Public Opinion

Here are some articles from The New York Times that caught my interest (I do look at other news sources such as The Wall Street Journal, but for whatever reason The New York Times seems to have more articles on subjects of interest to me).  Anyone who is familiar with my blog will notice that these articles relate to subjects I often write about.

 – – –

Memorial Held for Slain Anti-Abortion Protester by Damien Cave

About 300 people attended a memorial service Wednesday for James Pouillon, who was slain Friday while protesting abortion.

I’m always saddened by killings based on ideology whether or not I agree with the ideology of either side.  A random killing by a gang or a crazy person seems less evil.  Ideological killings seem so evil because the killer often rationalizes their actions as good.

There was nothing particularly interesting about this article except for one line.

His killing is believed to be the first of someone protesting abortion, and at the memorial and a vigil later outside a Planned Parenthood office, he was praised as a symbol of dedicated action.

That is utterly amazing.  He was the first anti-abortion protester to be killed.  On the other hand, anti-abortion protesters regularly kill abortion doctors.  Why did Damien Cave leave that important detail out?  There are two extensive Wikipedia articles about anti-abortion violence.

http://en.wikipedia.org/wiki/Anti-abortion_violence

http://en.wikipedia.org/wiki/Anti-abortion_violence_in_the_United_States

Why is this one murdered anti-abortion protester a symbol of dedicated action?  Are all of the doctors, nurses, receptionists, and security guards who died in supporting abortion (or simply doing their jobs) also symbols of dedicated action?  Going by the Wikipedia articles, anti-abortion protesters have committed hundreds of incidents of violent attacks, death threats, murders, attempted murders, kidnappings, bioterror threats, property crimes, bomb threats, bombings, arsons, vandalism, and trespassing.  Most of those, of course, were committed in the US.

This reminds me of protesters who try to protect nature and animals, but the situation is in reverse.  Environmentalists and those against animal testing have never killed anyone in US history.  However, these protesters have been the target of numerous threats and acts of violence leading to many deaths and injuries.  Why is that?  Why are conservatives (social conservatives in the case of anti-abortion protesters) more prone to violence than liberals?  The most violent liberal protesters ever in US history were the Weather Underground and even they never killed anyone.  The Weather Underground used bombs, but were always careful that people wouldn’t be harmed.  Contrast that to anti-abortion bombers who specifically target people.

What is interesting is that liberal protesters are often threatened, harmed and killed by people working for the government or large corporations.  The reason for this is that liberals are more likely than conservatives to clash with authority, probably because conservatives are by nature more subservient to authority (which can be explained by the research into boundary types, which shows that thick boundary types are more likely to be promoted in hierarchical institutions).  Maybe I’m being unfair, but it seems to me that conservatives for whatever reason are more likely to turn their aggression towards private citizens (i.e., those they perceive as being below them rather than those they perceive as being above them).

Actually, I wonder how true it is that conservative protesters are less likely to confront and conflict with authority.  There are some conservative protesters who are aggressively confrontational toward the powers that be, and they tend to be libertarians, especially of the religious variety, but maybe that says more about religious extremism than conservatism.  I was also thinking about how libertarians (such as farmers and other landowners) will support environmentalists against the government and big business (such as when the government wants to take or otherwise use their land).

The odd thing is that Fox News was so critical of protesters during the Bush administration.  But now that a Democrat is in power, they support and actively promote protest.  However, the protesters of Bush were often libertarians.  Why does the conservative party have such an uncertain relationship with libertarianism?  When it comes to protesting, libertarians became identified with liberals because it’s often impossible to tell them apart, and even the protesters don’t necessarily make this differentiation.

So, there are two questions.  Why are conservatives reluctant to become involved in protesting and often critical of protesters?  Why are conservatives the most violent protesters when they do become involved?

 – – –

Stumbling Blocks on the Path of Righteousness by Benedict Carey


I really loved this article.  It goes against commonsense, but I must admit it’s the type of thing that has always made sense to me.  I’m just happy when research supports my own intuition.  🙂  However, I have no special power of intuitive knowing.  If you’ve studied widely the subject of psychology, I doubt you’d be surprised by this research.

In recent years, social psychologists have begun to study what they call the holier-than-thou effect. They have long known that people tend to be overly optimistic about their own abilities and fortunes — to overestimate their standing in class, their discipline, their sincerity.

But this self-inflating bias may be even stronger when it comes to moral judgment, and it can greatly influence how people judge others’ actions, and ultimately their own.

Heck, you don’t even need to study psychology.  Just observe people and this holier-than-thou effect is fairly obvious.  There really is nothing surprising about the fact that moral judgment has a personal bias.  That’s just basic human nature.  However, self-awareness of one’s own human nature isn’t inherently of human nature… or, to put it simply, most people are oblivious to their own biases.

A quote from the social psychologist David Dunning is more intriguing.

“But the point is that many types of behavior are driven far more by the situation than by the force of personality. What someone else did in that situation is a very strong warning about what you yourself would do.”

That is something that is so important that it can’t be over-emphasized.  Social conservatives always worry about moral relativism, but what their ideology misses is the actual psychology of moral behavior.  People should think twice before judging someone else.  If you had the same experiences and were in the same situation as another person, you’d probably make the same choices.  In this light, righteousness isn’t very moral in and of itself.  Compassionate awareness and humility are more likely to lead to tangible moral results.  I would guess that the more righteous someone is, the more likely they are to act against their own stated beliefs.  This is partly why outspoken evangelists become involved in socially unacceptable sexual activities.

“The problem with these holier-than-thou assessments is not only that we overestimate how we would have behaved,” Dr. Epley said. “It’s also that we blame every crisis or scandal on failure of character — you know, if we just fire all the immoral Wall Street bankers and replace them with moral ones, we’ll solve the problem.”

And that is exactly what moral conservatives believe.  This attitude comes up all of the time in the comments of the local news website.  The more different someone is, the more likely they are to be judged harshly for their failings.  It’s easy to dismiss the situation of another person when you’ve never lived in that situation.  Also, people tend to want to take credit for the advantages they were given in life and claim it as “moral character”.

In experiments as in life, the holier-than-thou effect diminishes quickly when people have actually had the experience they are judging: dubious accounting practices will appear less shady to the person who has had to put a good face on a failing company. And the effect is apparently less pronounced in cultures that emphasize interdependence over individual achievement, like China and Spain.

It’s hard to be humble and compassionate if you’ve never experienced difficulties and suffering, and even then you’ll tend to only sympathize with the specific difficulties and sufferings that you’ve experienced.  I always get irritated by people who judge others for something they’ve never personally experienced.  That is one of my pet peeves.

I appreciated the last comment about “cultures that emphasize interdependence”.  I’d assume that those cultures also emphasize sympathy because it’s through sympathy that interdependence is encouraged.  On the other hand, I should point out that research also shows that interdependent cultures tend to isolate individuals and so the sympathy that is encouraged might be very narrow.  Anyways, an interdependent culture would certainly value personal humility over personal righteousness.

One practice that can potentially temper feelings of moral superiority is religion. All major faiths emphasize the value of being humble and the perils of hubris. “In humility count others as better than yourself,” St. Paul advises in his letter to the Philippians.

Yet for some people, religion appears to amplify the instinct to feel like a moral beacon. In a 2002 study, [ . . . ] the students in this highly religious group considered themselves, on average, almost twice as likely as their peers to adhere to such biblical commandments as “Love your neighbor as yourself.” The study also found that the most strictly fundamentalist of the students were at the highest end of the scale. “It reminds me of one of my favorite bumper stickers,” said Dr. Epley, of Chicago. “ ‘Jesus loves you, but I’m his favorite.’ ”

This reminds me of a long post I wrote trying to come to terms with Christians’ relationship with morality (Morality: Christians vs. Jesus).  I was comparing research done on the type of person who supports torture with the teachings of Jesus, who was tortured.  The extremely interesting fact was that Christians were largely in favor of torture.  This seems rather odd until you consider the larger context of Christian history and modern fundamentalism.  This article adds even further data to explain this situation.  The more ideologically religious one is, the more one is likely to judge oneself favorably and presumably the more likely to judge others less favorably.  This might be explained partially by the way a religion creates a clear sense of an in-crowd and an out-crowd.  And the person not a part of the group is inherently less worthy (and this attitude is probably responsible for a fair amount of the violence in the world).

For all that, an abiding feeling of moral superiority is intrinsic to what some psychologists call self-enhancement. So-called self-enhancers think that they’re blessed, that they’re highly appreciated by others and that they’ll come out on top. And sometimes they do, studies suggest — especially in life-or-death crises like 9/11 and the Bosnian war.

“Self-enhancers do very well, across the board, on measures of mental health in these situations,” said George Bonanno, a psychologist at Columbia.

But in the mundane ebb and flow of life, an inflated sense of personal virtue can also be a minefield. “Overconfident stock traders tend to do worse; people buy too many gym memberships,” said Dr. Dunning, of Cornell. “In the economic realm, the outcomes are not so good.”

This reminds me of research done on pessimism and optimism.  Optimists are more successful in many fields and there are many advantages to being an optimist such as better health.  However, pessimists have a more realistic assessment of the actual facts and also a more realistic assessment of themselves.  A pessimist may sound like a cynic, but they might be more likely to consistently act according to their own sense of morality.

 – – –

Why the Imp in Your Brain Gets Out by Benedict Carey


An important point I’ve read about before is the following.

But a vast majority of people rarely, if ever, act on such urges, and their susceptibility to rude fantasies in fact reflects the workings of a normally sensitive, social brain, argues a paper published last week in the journal Science.

It’s normal to have “abnormal” thoughts and fantasies.  It’s because people worry about these kinds of things that they become so prominent in the workings of our minds.  The person who acts on such horrible thoughts may actually think and fantasize about it less than normal.  However, these thoughts do have influence.

The empirical evidence of this influence has been piling up in recent years, as Dr. Wegner documents in the new paper. In the lab, psychologists have people try to banish a thought from their minds — of a white bear, for example — and find that the thought keeps returning, about once a minute. Likewise, people trying not to think of a specific word continually blurt it out during rapid-fire word-association tests.

The same “ironic errors,” as Dr. Wegner calls them, are just as easy to evoke in the real world. Golfers instructed to avoid a specific mistake, like overshooting, do it more often when under pressure, studies find. Soccer players told to shoot a penalty kick anywhere but at a certain spot of the net, like the lower right corner, look at that spot more often than any other.

[ . . . ]

The researchers had about half the students try to suppress bad stereotypes of black males as they read and, later, judged Donald’s character on measures like honesty, hostility and laziness. These students rated Donald as significantly more hostile — but also more honest — than did students who were not trying to suppress stereotypes.

In short, the attempt to banish biased thoughts worked, to some extent. But the study also provided “a strong demonstration that stereotype suppression leads stereotypes to become hyperaccessible,” the authors concluded.

None of this is exactly new insight, but the point is that research is starting to prove it.  Psychologists and parenting gurus have been telling people for a long time to state things in the positive because the mind doesn’t understand a negative.  To the subconscious mind, the phrase “don’t think” simply translates to “think”.  Any self-aware person realizes the truth of this.

The point of taking this type of research into consideration is that it can be helpful to give people perspective.  People shouldn’t be so hard on themselves.  There is nothing wrong with you for having strange thoughts.  If you’re worried about acting on dark fantasies, your worrying demonstrates that you’re unlikely to act on them.  However, if those urges become too strong, I’d recommend seeking help.  When the voices tell you to kill someone, please get a second opinion.

 – – –

When a Parent’s ‘I Love You’ Means ‘Do as I Say’ by Alfie Kohn


I was just recently writing about this topic and this author in my blog (Punishment/Reward, Good/Evil, Victim/Victimizer).  This article is about contingent love as a method of parenting (and I think this topic has direct bearing on the above article about moral righteousness).  One can question the morality of contingent parenting, but the practical side of it is simply whether it works or not.

This raises the intriguing possibility that the problem with praise isn’t that it is done the wrong way — or handed out too easily, as social conservatives insist. Rather, it might be just another method of control, analogous to punishment. The primary message of all types of conditional parenting is that children must earn a parent’s love. A steady diet of that, Rogers warned, and children might eventually need a therapist to provide the unconditional acceptance they didn’t get when it counted.

 Any reward always implies a potential punishment.  Even if the punishment isn’t overt or even intentional per se, what is the effect of this contingent love?

It turned out that children who received conditional approval were indeed somewhat more likely to act as the parent wanted. But compliance came at a steep price. First, these children tended to resent and dislike their parents. Second, they were apt to say that the way they acted was often due more to a “strong internal pressure” than to “a real sense of choice.” Moreover, their happiness after succeeding at something was usually short-lived, and they often felt guilty or ashamed. [ . . . ] Those mothers who, as children, sensed that they were loved only when they lived up to their parents’ expectations now felt less worthy as adults. Yet despite the negative effects, these mothers were more likely to use conditional affection with their own children.

[In another study] giving more approval when children did what parents wanted was carefully distinguished from giving less when they did not.

The studies found that both positive and negative conditional parenting were harmful, but in slightly different ways. The positive kind sometimes succeeded in getting children to work harder on academic tasks, but at the cost of unhealthy feelings of “internal compulsion.” Negative conditional parenting didn’t even work in the short run; it just increased the teenagers’ negative feelings about their parents.

 I’m a fan of research.  Most people ground their opinions in ideology rather than facts.  Of course, the data has to be interpreted.   There are always other interpretations, but even so an interpretation is only as good as the data it’s based on.  I don’t believe parents should simply submit to experts to tell them what to do any more than they should blindly submit to any other authority figure.  Parents should trust their own experience to an extent, but research can help us to understand the larger context of our experiences.  Any parent should take this kind of research very seriously.

In practice, according to an impressive collection of data by Dr. Deci and others, unconditional acceptance by parents as well as teachers should be accompanied by “autonomy support”: explaining reasons for requests, maximizing opportunities for the child to participate in making decisions, being encouraging without manipulating, and actively imagining how things look from the child’s point of view.

The last of these features is important with respect to unconditional parenting itself. Most of us would protest that of course we love our children without any strings attached. But what counts is how things look from the perspective of the children — whether they feel just as loved when they mess up or fall short.

I liked these ending comments.  This answers the criticisms of those who would oppose unconditional parenting.  It doesn’t simply mean letting kids do whatever they want; it means having a sympathetic understanding of one’s child.  The idea is that if you want respect from your children then you should treat them with respect.  If you want to teach your children how to be loving, how to be open and trusting, then you should teach by example.  One has to decide about one’s priorities.  Is it more important to force a child through fear (or withholding of love) to respect one’s authority or is it more important to raise a happy and well-balanced child?

  – – –

Does a Nation’s Mood Lurk in Its Songs and Blogs? by Benedict Carey


This is the type of research that fascinates me.

In a new paper, a pair of statisticians at the University of Vermont argue that linguistic analysis — not just of song lyrics but of blogs and speeches — could add a new and valuable dimension to a growing area of mass psychology: the determination of national well-being.

“We argue that you can use this data as a kind of remote sensor of well-being,” said Peter Sheridan Dodds, a co-author of the new paper, with Christopher M. Danforth; both are in the department of mathematics and statistics.

“It’s information people are volunteering; they’re not being surveyed in the usual way,” Dr. Dodds went on. “You mess with people when you ask them questions about happiness. You’re not sure if they’re trying to make you happy, or have no idea whether they’re happy. It’s reactive.”
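
To make the “remote sensor” idea concrete, here is a minimal sketch of the kind of word-level analysis Dodds and Danforth describe: score a piece of text by averaging per-word happiness ratings, then track that average over time across many songs or blog posts. The word list, ratings, helper name, and example snippets below are invented for illustration; the actual research relied on a large, psychologically validated set of word ratings, not a toy dictionary.

```python
import re
from typing import Optional

# Hypothetical happiness ratings on a 1-9 scale (higher = happier).
# These values are made up for illustration; real studies use a large,
# validated word list rather than a handful of hand-picked words.
HAPPINESS = {
    "love": 8.7, "happy": 8.5, "free": 7.9, "home": 7.5,
    "rain": 5.0, "alone": 3.3, "hate": 2.3, "war": 1.8,
}

def text_happiness(text: str) -> Optional[float]:
    """Average the ratings of all rated words in a text; None if none are rated."""
    words = re.findall(r"[a-z']+", text.lower())
    scores = [HAPPINESS[w] for w in words if w in HAPPINESS]
    return sum(scores) / len(scores) if scores else None

# Two invented snippets standing in for song lyrics or blog posts.
print(text_happiness("I love being happy and free at home"))        # ~8.15
print(text_happiness("I hate this war and I am alone in the rain")) # ~3.10
```

Averaged over millions of lyrics or blog posts by date, numbers like these become the time series that researchers read as a collective mood signal.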

But I do have some criticisms.  Emotional expression may not be equivalent to emotional well-being.  The ways of expressing emotion may change, but I’m unconvinced that the basic level of emotion has changed.  Even so, I wouldn’t be surprised if such a change has occurred.  I do share the excitement of these researchers but I also share the opinions of the skeptics.

“The new approach that these researchers are taking is part of movement that is really exciting, a cross-pollination of computer science, engineering and psychology,” said James W. Pennebaker, a psychologist at the University of Texas. “And it’s going to change the social sciences; that to me is very clear.”

Researchers who specialize in analyzing mass measures of well-being are skeptical about what a content analysis of pop culture can really say, at least as a stand-alone measure.

“The approach is interesting, but I don’t see any evidence that the method produces a valid population-based measure of well-being,” Uli Schimmack, a psychologist at the University of Toronto, wrote in an e-mail message.

One issue is that pop culture and mainstream media have changed, and this apparent change in emotional well-being might actually be a result of that shift.  Media was more controlled and self-censored in the past.  There are more indie musicians who get their music out now than in the past.  There are more people voicing their opinions through non-traditional media.  So, maybe this only demonstrates a shift in censorship of emotional expression.

 – – –

‘Athens’ on the Net by Anand Giridharadas


I’m impressed by the quality of journalism in this article.  The subject matter is a bit different from the other articles in this post, but it’s related.  It’s about how the common person participates (or not) in US democracy, and how this could change.  So, it’s about human relationships.  More importantly, it’s about challenging the hierarchical territory of politics where democracy only exists in name (btw I see this issue of hierarchical politics as loosely related to the hierarchical style of parenting that promotes contingent love).  It’s a serious issue to consider whether democracy is doomed to be forever controlled and manipulated by the money and power of corporations and special interest groups.  It’s hard to imagine what a real democracy would even look like.  Some people claim a direct democracy where the average person’s opinion actually counts is an impossibility… or even dangerous, as the general population, if given power, would supposedly just turn into a mobocracy.

PERHAPS the biggest big idea to gather speed during the last millennium was that we humans might govern ourselves. But no one really meant it.

 Exactly!  Ideals are always nice.  They make for good political fodder and an effective method for subduing the masses… as long as they forever remain just ideals.

The headlines from Washington today blare of bailouts, stimulus, clunkers, Afpak, health care. But it is possible that future historians, looking back, will fixate on a quieter project of Barack Obama’s White House: its exploration of how government might be opened to greater public participation in the digital age, of how to make self-government more than a metaphor.

I’ve been of the opinion for some time that we are in the midst of a major socio-political shift in our culture and probably in the world in general.  Technology is utterly transforming the world and we’ve only seen the tip of the iceberg.  With the technological generations coming into power and taking over the workforce, we are going to see a massive jump in technological innovation the likes of which hasn’t been seen in recent decades.  The industrial age and the modernist ideals it fostered are still very powerful, but a new paradigm has finally gained enough power to challenge it.  It’s been a long time coming, but the massive size of the Boomer generation slowed down this shift.  Gen Xers have been working in the background building the infrastructure of the Information Age and now we have our first Gen X president.  Obama won by appealing to the youth, which offers us a glimpse of what we’re going to see in the near future when the Millennials dominate the 2012 presidential election.  The US is no longer controlled by the Boomers, but the Boomers are far from being out of the game.  There will be some major generational clashing in the next decade.

President Obama declared during the campaign that “we are the ones we’ve been waiting for.” That messianic phrase held the promise of a new style of politics in this time of tweets and pokes. But it was vague, a paradigm slipped casually into our drinks. To date, the taste has proven bittersweet.

I’m not sure it matters whether Obama lives up to his promise.  The important point is that the promise was made.  The sweetness of it may be undermined by the bitterness of politics as usual, but still, the sweetness once tasted creates a hunger.  Any promising ideal will usually fail when it’s first proposed.  If one looks to history, it can take centuries for a good idea to really catch on and succeed.  Without a revolution to overthrow the government, it takes time to change established politics.  However, technology may speed up this process.

Federal agencies have been directed to release online information that was once sealed; reporters from Web-only publications have been called on at news conferences; the new portal Data.gov is allowing citizens to create their own applications to analyze government data. But the most revealing efforts have been in “crowdsourcing”: in soliciting citizens’ policy ideas on the Internet and allowing them to vote on one another’s proposals.

During the transition, the administration created an online “Citizen’s Briefing Book” for people to submit ideas to the president. “The best-rated ones will rise to the top, and after the Inauguration, we’ll print them out and gather them into a binder like the ones the president receives every day from experts and advisors,” Valerie Jarrett, a senior adviser to Mr. Obama, wrote to supporters.

 It sounds good in theory.  LOL  The author describes the results of this gathering of public opinion.  It may not seem inspiring, but I’d rather hear people’s actual opinions no matter what they are.  Even if the average person’s opinion is completely stupid, that is still a good thing to know.  Maybe the public isn’t capable of more serious opinions until their collective opinion is taken seriously.

There is a lively debate in progress about what some call Gov 2.0. One camp sees in the Internet an unprecedented opportunity to bring back Athenian-style direct democracy. [ . . . ] The people in this camp point to information technology’s aid to grassroots movements from Moldova to Iran. They look at India, where voters can now access, via text message, information on the criminal records of parliamentary candidates, and Africa, where cellphones are improving election monitoring. They note the new ease of extending reliable scientific and scholarly knowledge to a broad audience. They observe how the Internet, in democratizing access to facts and figures, encourages politician and citizen alike to base decisions on more than hunches.

But their vision of Internet democracy is part of a larger cultural evolution toward the expectation that we be consulted about everything, all the time. Increasingly, the best articles to read are the most e-mailed ones, the music worth buying belongs to singers we have just text-voted into stardom, the next book to read is one bought by other people who bought the last book you did, and media that once reported to us now publish whatever we tweet.

Yes, it’s a strange new world.  The question is whether this actually opens debate.  Do people just listen to the crowd and follow along?  Do people just get stuck in their own self-created niche where everything caters to their biases?  There are definite dangers.

Another camp sees the Internet less rosily. Its members tend to be enthusiastic about the Web and enthusiastic about civic participation; they are skeptical of the Internet as a panacea for politics. They worry that it creates a falsely reassuring illusion of equality, openness, universality. [ . . . ] “Many methods and technologies can be used to give voice to the public will. But some give a picture of public opinion as if through a fun-house mirror.”

True, it creates an illusion, but politics at present just creates another kind of illusion.  Choose your illusion, as they say.  From my viewpoint, the risk is worth it because the opportunity is increased (as are the stakes).

Because it is so easy to filter one’s reading online, extreme views dominate the discussion. Moderates are underrepresented, so citizens seeking better health care may seem less numerous than poker fans. The Internet’s image of openness and equality belies its inequities of race, geography and age.

Now, there is a criticism that resonates deeply with me.  I get annoyed by how few moderates choose to voice their opinions and I get annoyed that so many ideologues feel it’s necessary to announce their every thought.  The internet is a specific medium that attracts a specific type of person.  The internet is Social Darwinism in action where thoughtful debate isn’t always fostered.  It takes effort to encourage people to relate well, but the ease of the internet doesn’t lend itself to people going to this effort.  People often make their quick rude comments and the people running the site are too busy or lazy to moderate such trolling and other anti-social behavior.

Lies spread like wildfire on the Web; Eric Schmidt, the chief executive of Google, no Luddite, warned last October that if the great brands of trusted journalism died, the Internet would become a “cesspool” of bad information. Wikipedia plans to add a layer of editing — remember editing? — for articles on living people.

This sounds like fear-mongering to me.  The great brands of trusted journalism aren’t going to entirely die out.  The ones that do die out will be replaced by new ones.  People want good journalism and anyways the quality of journalism was suspect long before the internet.  People have been looking for alternative journalism for much of this past century and now the opportunity is here for alternative journalism on a large-scale.  It will take time for all of this to develop, but it will develop because the demand is there.

Perhaps most menacingly, the Internet’s openness allows well-organized groups to simulate support, to “capture and impersonate the public voice,” as Mr. Fishkin wrote in an e-mail exchange.

Ah, yes.  This very well may be the biggest danger of them all.  The new technologies allow for manipulation and propaganda on a scale never before possible.  The workings of the internet are so subtle that most people don’t even notice the inherent biases of search engines.  Also, it’s hard to tell if a website is trustworthy or even who is running and funding it.  Even so, there is more info than there ever has been.  The difference of today’s technology is that it allows people to research something if they want to.  However, the average person has little desire (not to mention time and energy) to research most things.  If manipulation succeeds in today’s world, it’s because of willful ignorance.  As long as people are willing to unquestioningly accept lies and deception, then there will always be those willing to supply it.  But this has always been true no matter what kind of technology is used.

There is no turning back the clock. We now have more public opinion exerting pressure on politics than ever before. The question is how it may be channeled and filtered to create freer, more successful societies, because simply putting things online is no cure-all.

Damn straight!  There is no turning back.  Full speed ahead, be it utopia or dystopia.  It’s a brave new world, baby.  However, I don’t see too much reason to worry about it, mainly because worry won’t alter the change that is happening.  We all might as well go along with the flow.  Instead of struggling against the inevitable, let’s save our energies and keep our eyes open.  Democracy needs to be able to adapt and that is true now more than ever.  Also, democracy needs vigilance.

To end on a humorous note, I shall reward anyone who made it all the way down to the bottom of this post.