Introspective Illusion

On split-brain research, Susan Blackmore observed: “In this way, the verbal left brain covered up its ignorance by confabulating.” This relates to the theory of introspective illusion (see also change blindness, choice blindness, and bias blind spot). In both cases, the conscious mind turns to confabulation to explain what it has no access to and so doesn’t understand.

This is how we maintain a sense of being in control. Our egoic minds have an immense talent for rationalization, and it can happen instantly, with total confidence in the reason(s) given. That indicates that consciousness is a lot less conscious than it seems… or rather that consciousness isn’t what we think it is.

Our theory of mind, as such, is highly theoretical in the speculative sense. That is to say it isn’t particularly reliable in most cases. First and foremost, what matters is that the story told is compelling, to both us and others (self-justification, in its role within consciousness, is close to Jaynesian self-authorization). We are ruled by our need for meaning, even as our body-minds don’t require meaning to enact behaviors and take actions. We get through our lives just fine mostly on automatic.

According to Julian Jaynes’s theory of the bicameral mind, the purpose of consciousness is to create an internal stage upon which we play out narratives. As this interiorized and narratized space is itself confabulated, that is to say psychologically and socially constructed, this space allows all further confabulations of consciousness. We imaginatively bootstrap our individuality into existence, and that requires a lot of explaining.

* * *

Introspection illusion
Wikipedia

A 1977 paper by psychologists Richard Nisbett and Timothy D. Wilson challenged the directness and reliability of introspection, thereby becoming one of the most cited papers in the science of consciousness.[8][9] Nisbett and Wilson reported on experiments in which subjects verbally explained why they had a particular preference, or how they arrived at a particular idea. On the basis of these studies and existing attribution research, they concluded that reports on mental processes are confabulated. They wrote that subjects had “little or no introspective access to higher order cognitive processes”.[10] They distinguished between mental contents (such as feelings) and mental processes, arguing that while introspection gives us access to contents, processes remain hidden.[8]

Although some other experimental work followed from the Nisbett and Wilson paper, difficulties with testing the hypothesis of introspective access meant that research on the topic generally stagnated.[9] A ten-year-anniversary review of the paper raised several objections, questioning the idea of “process” they had used and arguing that unambiguous tests of introspective access are hard to achieve.[3]

Updating the theory in 2002, Wilson admitted that the 1977 claims had been too far-reaching.[10] He instead relied on the theory that the adaptive unconscious does much of the moment-to-moment work of perception and behaviour. When people are asked to report on their mental processes, they cannot access this unconscious activity.[7] However, rather than acknowledge their lack of insight, they confabulate a plausible explanation, and “seem” to be “unaware of their unawareness”.[11]

The idea that people can be mistaken about their inner functioning is one applied by eliminative materialists. These philosophers suggest that some concepts, including “belief” and “pain”, will turn out to be quite different from what is commonly expected as science advances.

The faulty guesses that people make to explain their thought processes have been called “causal theories”.[1] The causal theories provided after an action will often serve only to justify the person’s behaviour in order to relieve cognitive dissonance. That is, a person may not have noticed the real reasons for their behaviour, even when trying to provide explanations. The result is an explanation that mostly just makes them feel better. An example might be a man who discriminates against homosexuals because he is embarrassed that he himself is attracted to other men. He may not admit this to himself, instead claiming his prejudice is because he believes that homosexuality is unnatural.

2017 Report on Consciousness and Moral Patienthood
Open Philanthropy Project

Physicalism and functionalism are fairly widely held among consciousness researchers, but are often debated and far from universal.58 Illusionism seems to be an uncommon position.59 I don’t know how widespread or controversial “fuzziness” is.

I’m not sure what to make of the fact that illusionism seems to be endorsed by a small number of theorists, given that illusionism seems to me to be “the obvious default theory of consciousness,” as Daniel Dennett argues.60 In any case, the debates about the fundamental nature of consciousness are well-covered elsewhere,61 and I won’t repeat them here.

A quick note about “eliminativism”: the physical processes which instantiate consciousness could turn out to be so different from our naive guesses about their nature that, for pragmatic reasons, we might choose to stop using the concept of “consciousness,” just as we stopped using the concept of “phlogiston.” Or, we might find a collection of processes that are similar enough to those presumed by our naive concept of consciousness that we choose to preserve the concept of “consciousness” and simply revise our definition of it, as happened when we eventually decided to identify “life” with a particular set of low-level biological features (homeostasis, cellular organization, metabolism, reproduction, etc.) even though life turned out not to be explained by any élan vital or supernatural soul, as many people throughout history62 had assumed.63 But I consider this only a possibility, not an inevitability.

59. I’m not aware of surveys indicating how common illusionist approaches are, though Frankish (2016a) remarks that:

The topic of this special issue is the view that phenomenal consciousness (in the philosophers’ sense) is an illusion — a view I call illusionism. This view is not a new one: the first wave of identity theorists favoured it, and it currently has powerful and eloquent defenders, including Daniel Dennett, Nicholas Humphrey, Derk Pereboom, and Georges Rey. However, it is widely regarded as a marginal position, and there is no sustained interdisciplinary research programme devoted to developing, testing, and applying illusionist ideas. I think the time is ripe for such a programme. For a quarter of a century at least, the dominant physicalist approach to consciousness has been a realist one. Phenomenal properties, it is said, are physical, or physically realized, but their physical nature is not revealed to us by the concepts we apply to them in introspection. This strategy is looking tired, however. Its weaknesses are becoming evident…, and some of its leading advocates have now abandoned it. It is doubtful that phenomenal realism can be bought so cheaply, and physicalists may have to accept that it is out of their price range. Perhaps phenomenal concepts don’t simply fail to represent their objects as physical but misrepresent them as phenomenal, and phenomenality is an introspective illusion…

[Keith Frankish, Editorial Introduction, Journal of Consciousness Studies, Volume 23, Numbers 11-12, 2016, pp. 9-10(2)]

“…consciousness is itself the result of learning.”

As above, so below
by Axel Cleeremans

A central aspect of the entire hierarchical predictive coding approach, though this is not readily apparent in the corresponding literature, is the emphasis it puts on learning mechanisms. In other works (Cleeremans, 2008, 2011), I have defended the idea that consciousness is itself the result of learning. From this perspective, agents become conscious in virtue of learning to redescribe their own activity to themselves. Taking the proposal that consciousness is inherently dynamical seriously opens up the mesmerizing possibility that conscious awareness is itself a product of plasticity-driven dynamics. In other words, from this perspective, we learn to be conscious. To dispel possible misunderstandings of this proposal right away, I am not suggesting that consciousness is something that one learns like one would learn about the Hundred Years War, that is, as an academic endeavour, but rather that consciousness is the result (vs. the starting point) of continuous and extended interaction with the world, with ourselves, and with others. The brain, from this perspective, continuously (and unconsciously) learns to anticipate the consequences of its own activity on itself, on the environment, and on other brains, and it is from the practical knowledge that accrues in such interactions that conscious experience is rooted. This perspective, in short, endorses the enactive approach introduced by O’Regan and Noë (2001), but extends it both inwards (the brain learning about itself) and further outwards (the brain learning about other brains), so connecting with the central ideas put forward by the predictive coding approach to cognition. In this light, the conscious mind is the brain’s (implicit, enacted) theory about itself, expressed in a language that other minds can understand.

The theory rests on several assumptions and is articulated over three core ideas. A first assumption is that information processing as carried out by neurons is intrinsically unconscious. There is nothing in the activity of individual neurons that makes it so that their activity should produce conscious experience. Important consequences of this assumption are (1) that conscious and unconscious processing must be rooted in the same set of representational systems and neural processes, and (2) that tasks in general will always involve both conscious and unconscious influences, for awareness cannot be “turned off” in normal participants.

A second assumption is that information processing as carried out by the brain is graded and cascades (McClelland, 1979) in a continuous flow (Eriksen & Schultz, 1979) over the multiple levels of a heterarchy (Fuster, 2008) extending from posterior to anterior cortex as evidence accumulates during an information processing episode. An implication of this assumption is that consciousness takes time.

The third assumption is that plasticity is mandatory: The brain learns all the time, whether we intend to or not. Each experience leaves a trace in the brain (Kreiman, Fried, & Koch, 2002).

The social roots of consciousness
by Axel Cleeremans

How does this ability to represent the mental states of other agents get going? While there is considerable debate about this issue, it is probably fair to say that one crucial mechanism involves learning about the consequences of the actions that one directs towards other agents. In this respect, interactions with the natural world are fundamentally different from interactions with other agents, precisely because other agents are endowed with unobservable internal states. If I let a spoon drop on a hard floor, the sound that results will always be the same, within certain parameters that only vary in a limited range. The consequences of my action are thus more or less entirely predictable. But if I smile to someone, the consequences that may result are many. Perhaps the person will smile back to me, but it may also be the case that the person will ignore me or that she will display puzzlement, or even that she will be angry at me. It all depends on the context and on the unobservable mental states that the person currently entertains. Of course, there is a lot I can learn about the space of possible responses based on my knowledge of the person, my history of prior interactions with her, and on the context in which my interactions take place. But the point is simply to say that in order to successfully predict the consequences of the actions that I direct towards other agents, I have to build a model of how these agents work. And this is complex because, unlike what is the case for interactions with the natural world, it is an inverse problem: The same action may result in many different reactions, and those different reactions can themselves be caused by many different internal states.

Based on these observations, one provocative claim about the relationships between self-awareness and one’s ability to represent the mental states of other agents (“theory of mind”, as it is called) is thus that theory of mind comes first, as the philosopher Peter Carruthers has defended. That is, it is in virtue of my learning to correctly anticipate the consequences of the actions that I direct towards other agents that I end up developing models of the internal states of such agents, and it is in virtue of the existence of such models that I become able to gain insight about myself (more specifically: about my self). Thus, by this view, self-awareness, and perhaps subjective experience itself, is a consequence of theory of mind as it develops over extended periods of social intercourse.

The Shallows of the Mainstream Mind

The mainstream mindset always seems odd to me.

There can be an issue, event, or whatever that was reported in the alternative media, was written about by independent investigative journalists, was the target of leaks and whistleblowers, was researched by academics, and was released in an official government document. But if the mainstream media didn’t recently, widely, extensively, and thoroughly report on it, those in the mainstream can act as if they don’t know about it, as if it never happened and isn’t real.

There is partly a blind faith in the mainstream media, but it goes beyond that. Even the mainstream news reporting that happened in the past quickly disappears from memory. There is no connection in the mainstream mind between what happened in the past and what happens in the present, much less between what happens in other (specifically non-Western) countries and what happens in the US.

It’s not mere ignorance, willful or passive. Many people in the mainstream are highly educated and relatively well informed, but even in what they know there is a superficiality and lack of insight. They can’t quite connect one thing to another, to do their own research and come to their own conclusions. It’s a permanent and vast state of dissociation. It’s a conspiracy of silence where the first casualty is self-awareness, where individuals silence their own critical thought, their own doubts and questions.

There is also an inability to imagine the real. Even when those in the mainstream see hard data, it never quite connects on a psychological and visceral level. It is never quite real. It remains simply info that quickly slips from the mind.

Most Americans Know What is True

There is one topic I return to more often than most, a topic that has been on my mind for about a decade now. This topic has to do with the confluence of ideology, labels, and social science. I’ve written about this topic more than I care to remember.

I’m about equally interested in conservatism and liberalism (along with other ideological labels). But liberalism in some ways has intrigued me more because of all the massive confusion surrounding the label. Most Americans hold fairly strong left-leaning views on many of the most important major issues.

There are a number of facts that have become permanently caught in my craw. I considered two of these in a post from not too long ago, Wirthlin Effect & Symbolic Conservatism. In that post, I pointed out that most Americans are more in agreement with one another than they are with the more right-leaning political elites who claim to speak for and represent them. But there is a complicating factor involving the odd mixture of liberalism and conservatism in the American Mind (I never get tired of quoting this fascinating explanation):

Since the time of the pioneering work of Free & Cantril (1967), scholars of public opinion have distinguished between symbolic and operational aspects of political ideology (Page & Shapiro 1992, Stimson 2004). According to this terminology, “symbolic” refers to general, abstract ideological labels, images, and categories, including acts of self-identification with the left or right. “Operational” ideology, by contrast, refers to more specific, concrete, issue-based opinions that may also be classified by observers as either left or right. Although this distinction may seem purely academic, evidence suggests that symbolic and operational forms of ideology do not coincide for many citizens of mass democracies. For example, Free & Cantril (1967) observed that many Americans were simultaneously “philosophical conservatives” and “operational liberals,” opposing “big government” in the abstract but supporting the individual programs comprising the New Deal welfare and regulatory state. More recent studies have obtained impressively similar results; Stimson (2004) found that more than two-thirds of American respondents who identify as symbolic conservatives are operational liberals with respect to the issues (see also Page & Shapiro 1992, Zaller 1992). However, rather than demonstrating that ideological belief systems are multidimensional in the sense of being irreducible to a single left-right continuum, these results indicate that, in the United States at least, leftist/liberal ideas are more popular when they are manifested in specific, concrete policy solutions than when they are offered as ideological abstractions. The notion that most people like to think of themselves as conservative despite the fact that they hold a number of liberal opinions on specific issues is broadly consistent with system-justification theory, which suggests that most people are motivated to look favorably upon the status quo in general and to reject major challenges to it (Jost et al. 2004a).

What the heck is symbolic conservatism? I’m not quite sure. I don’t know if anyone has that one figured out yet.

I also pointed out that even most Southerners are on the left side of the spectrum. It’s just that most Southerners are disenfranchised. If most Southerners voted, Republicans would never be able to win another election in the South without completely altering what they campaign on.

The claim of a polarized population is overstated. This brings me to a new angle. I came across another piece of data that now can be permanently caught in my craw with the rest. It is from a book by Cass R. Sunstein, not an author I normally read, but the book looked intriguing. He wrote (How to Humble a Wingnut and Other Lessons from Behavioral Economics, Kindle Locations 249-253):

Recent studies by Yale University’s John Bullock and his co-authors suggest that with respect to facts, Democrats and Republicans disagree a lot less than we might think.

True, surveys reveal big differences. But if people are given economic rewards for giving the right answer, the partisan divisions start to become a lot smaller. Here’s the kicker: With respect to facts, there is a real difference between what people say they believe and what they actually believe.

This was from a fairly short essay that ends with this conclusion (Kindle Locations 271-282):

What’s going on here? Bullock and his colleagues think that when people answer factual questions about politics, they engage in a degree of cheerleading, even at the expense of the truth. In a survey setting, there is no cost to doing that.

With economic incentives, of course, the calculus is altered. If you stand to earn some money with an accurate answer, cheerleading becomes much less attractive. And if you will lose real money with an inaccurate answer, you will put a higher premium on accuracy.

What is especially striking is that Bullock and his colleagues were able to slash polarization with very modest monetary rewards. If the incentives were greater (say, $100 for a correct answer and $25 for “I don’t know”), there is every reason to expect that partisan differences would diminish still more.

It might seem disturbing to find such a divergence between what people say and what they actually believe, but in a way, these findings are immensely encouraging. They suggest that with respect to facts, partisan differences are much less sharp than they seem—and that political polarization is often an artifact of the survey setting.

When Democrats and Republicans claim to disagree, they might be reporting which side they are on, not what they really think. Whatever they say in response to survey questions, they know, in their heart of hearts, that while they are entitled to their own opinions, they are not entitled to their own facts.

Incentives can make people honest. And when honest, people agree a lot more. This reminds me of research showing that, by doing word jumble puzzles and such, people can be primed for rational thought and indeed they do think more rationally under those conditions. Between incentives and priming, we could have a much higher quality public debate and political action.

This also reminds me of implicit knowledge (see here and here). Many writers have observed the strange phenomenon of people simultaneously knowing and not knowing. Maybe this directly relates to incentives and similar factors. It might not just be an issue of incentives to be honest, but also incentives to be self-aware, to admit to themselves what they already know, even when such truths might be uncomfortable and inconvenient.

A further confounding factor, as research also shows, is that the political elites and the political activists are very much polarized. Those with the most power and influence are the stumbling blocks for democracy or any other moral and effective political process. This plays straight into the cheerleading of the masses. Too many people will simply go along with what the pundits and politicians tell them, unless some other motivation causes them to think more carefully and become more self-aware.

One wonders what the public debate would be like about issues from global warming to economic inequality, if the incentives were different. A single honest public debate could transform our society. It would be a shock to the entire social, political, and economic system.

My Bumper Car Philosophy of Life

I sometimes find myself complaining about a particular person or group or criticizing a type of person. It’s amusing. Everyone feels this way sometimes. In being who we are, we inevitably can’t fully understand (emotionally or cognitively) others who are very different from us. It’s perfectly normal, but most often we don’t think about how odd this is.

None of us really knows why we are the way we are or even exactly how we became that way. We all have our own stories that explain our lives, but these really are just rationalizations to explain away the uncomfortable fact that we are mostly shaped by and motivated by things of which we are unaware. The factors that go into making a human are infinite, beyond comprehension. Maybe what bothers us about not understanding others is that we ultimately don’t even understand ourselves.

In life, we are driving blind. We learn of the world by running into things. This is my bumper car philosophy of life.

When Stupid People Don’t Know They’re Stupid

http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

http://www.nytimes.com/2000/01/18/health/among-the-inept-researchers-discover-ignorance-is-bliss.html?pagewanted=1

http://www.psychologytoday.com/blog/evolved-primate/201006/when-ignorance-begets-confidence-the-classic-dunning-kruger-effect

[Calvin and Hobbes cartoon on ignorance]