Technological Fears and Media Panics

“One of the first characteristics of the first era of any new form of communication is that those who live through it usually have no idea what they’re in.”
~Mitchell Stephens

“Almost every new medium of communication or expression that has appeared since the dawn of history has been accompanied by doomsayers and critics who have confidently predicted that it would bring about The End of the World as We Know It by weakening the brain or polluting our precious bodily fluids.”
~New Media Are Evil, from TV Tropes

“The internet may appear new and fun…but it’s really a porn highway to hell. If your children want to get on the internet, don’t let them. It’s only a matter of time before they get sucked into a vortex of shame, drugs, and pornography from which they’ll never recover. The internet…it’s just not worth it.”
~Grand Theft Auto: Liberty City Stories

“It’s the same old devil with a new face.”
~Rev. George Bender, Harry Potter book burner

Media technology is hard to ignore. This goes beyond it being pervasive. Our complaints and fears, our fascination and optimism are mired in far greater things. It is always about something else. Media technology is not only the face of some vague cultural change but the embodiment of new forms of power that seem uncontrollable. Our lives are no longer fully our own, a constant worry in an individualistic society. With globalization, it’s as if the entire planet has become a giant company town.

I’m not one for giving in to doom and gloom about technology. That response is as old as civilization and doesn’t offer anything useful. But I’m one of the first to admit to the dire situation we are facing. It’s just that, in some sense, the situation has always been dire; the world has always been ending. We never know if this finally will be the apocalypse that has been predicted for millennia, an ending to end it all with no new beginning. One way or another, the world as we know it is ending. There probably isn’t much reason to worry about it. Whatever the future holds, it is beyond our imagining, as our present world was beyond the imagining of past generations.

One thing is clear. There is no point in getting into a moral panic over it. The young who embrace what is new always get blamed for it, even though they are simply inheriting what others have created. The youth today aren’t any worse off than any prior generation was at the same age. Still, it’s possible that these younger generations might take us into a future that we old fogies won’t be able to understand. History shows how shocking innovations can be. Speaking of panics, think of Orson Welles’s radio broadcast of The War of the Worlds. The voice of radio back then had a power that we can no longer appreciate. Yet here we are, radio now so much background noise added to the rest.

Part of what got me thinking about this were two posts by Matt Cardin, at The Teeming Brain blog. In one post, he shares some of Nathaniel Rich’s review, Roth Agonistes, of Philip Roth’s Why Write?: Collected Nonfiction 1960–2013. There is a quote from Roth in 1960:

“The American writer in the middle of the twentieth century has his hands full in trying to understand, describe, and then make credible much of American reality. It stupefies, it sickens, it infuriates, and finally it is even a kind of embarrassment to one’s own meager imagination. The actuality is continually outdoing our talents, and the culture tosses up figures almost daily that are the envy of any novelist.”

Rich comments that, “Roth, despite writing before the tumult of the Sixties, went farther, suggesting that a radically destabilized society had made it difficult to discriminate between reality and fiction. What was the point of writing or reading novels when reality was as fantastic as any fiction? Such apprehensions may seem quaint when viewed from the comic-book hellscape of 2018, though it is perversely reassuring that life in 1960 felt as berserk as it does now.”

We are no more post-truth now than back then. It’s always been this way. But it is easy to lose context. Rich notes that, “Toward the end of his career, in his novels and public statements, Roth began to prophesy the extinction of a literary culture — an age-old pastime for aging writers.” It is the ever-present fear that the strangeness and stresses of the unknown will replace the comfort of the familiar. We all grow attached to the world we experienced in childhood, as it forms the foundation of our identity. But every now and then something comes along to threaten it all. And the post-World War II era was definitely a time of dramatic and, for some, traumatic change — despite all of the nostalgia that has accrued to its memories like flowers on a gravestone.

The technological world we presently live in took its first form during that earlier era. Since then, the book as an art form has come nowhere near extinction. More books have been printed in recent decades than ever before in history. New technology has oddly led us to read even more books, in both their old and new technological forms. My young niece, of the so-called Internet Generation, prefers physical books… not that she is likely to read Philip Roth. Literacy, along with education and IQ, is on the rise. There is more of everything right now, which is what makes it overwhelming. Technologies of the past for the most part aren’t being replaced but incorporated into a different world. This Borg-like process of assimilation might be more disturbing to the older generations than simply becoming obsolete.

The other post by Matt Cardin shares an excerpt from an NPR piece by Laura Sydell, The Father Of The Internet Sees His Invention Reflected Back Through A ‘Black Mirror’. It is about the optimism of inventors and the consequences of inventions, unforeseen except by a few. One of those who did see the long-term implications was William Gibson: “The first people to embrace a technology are the first to lose the ability to see it objectively.” Maybe so, but that is true for just about everyone, including most of those who don’t embrace it or go so far as to fear it. It’s not in human nature to see much of anything objectively.

Gibson did see the immediate realities of ‘cyberspace’, the term he coined. We do seem to be moving in that general direction of cyberpunk dystopia, at least here in this country. I’m less certain about the even longer-term developments, as Gibson’s larger vision is as fantastical as many others. But it is the immediate realities that always concern people because they can be seen and felt, if not always acknowledged for what they are, often not even by the fear-mongers.

I share his being more “interested in how people behave around new technologies.” In reference to “how TV changed New York City neighborhoods in the 1940s,” Gibson states that, “Fewer people sat out on the stoops at night and talked to their neighbors, and it was because everyone was inside watching television. No one really noticed it at the time as a kind of epochal event, which I think it was.”

I would make two points about this.

First, there is what I already said. It is always an epochal event when a major technology is invented, going back through the many inventions that came before, not only media technologies (radio, film, telegraph, printing press, bound book, etc.) but also other technologies (assembly lines, the cotton gin, the compass, etc.). Did the Chinese individual who assembled the first firework imagine the carnage of bombs that made castles easy targets and led to two world wars that transformed all of human existence? Of course not. Even the simplest of technologies can turn civilization on its head, which has happened multiple times over the past few millennia and often with destructive results.

The second point is to look at something specific like television. It happened along with the building of the interstate highway system, the rise of car culture, and the spread of suburbia. Television became a portal for the outside world to invade the fantasyland of home life that took hold after the war. Similar fears about radio and the telephone were transferred to the television set and those fears were directed at the young. The first half of the 20th century was constant technological wonder and uncertainty. The social order was thrown askew.

We like to imagine the 1940s and 1950s as a happy time of social conformity and social order, a time of prosperity and a well-behaved population, but that fantasy didn’t match the reality. It was an era of growing concerns about adolescent delinquency, violent crime, youth gangs, sexual deviancy, teen pregnancy, loose morals, and rock ‘n roll — and the data bears out that a large number in that generation were caught up in the criminal system, whether because they were genuinely a bad generation or because the criminal system had become more punitive, although others have argued that it was merely a side effect of the baby boom, with youth making up a greater proportion of society. Whatever was involved, the sense of social angst got mixed up with lingering wartime trauma and emerging Cold War paranoia. The policing, arrests, and detention of wayward youth became a priority to the point of oppressive obsession. Besides youth problems, veterans from World War II did not come home content and happy (listen to Audible’s “The Home Front”). It was a tumultuous time, quite the opposite of the perfect world portrayed in those family sitcoms of the 1940s and 1950s.

The youth during that era had a lot in common with their grandparents, the wild and unruly Lost Generation corrupted by family and community breakdown from early mass immigration, urbanization, industrialization, consumerism, etc. Starting in the late 1800s, youth gangs and hooliganism became rampant, and moral panic became widespread. As romance novels had been blamed earlier and comic books would be blamed later, the popular media most feared around the turn of the century were the violent penny dreadfuls and dime novels that, so it seemed to the moral reformers and authority figures, targeted tender young minds with portrayals of lawlessness and debauchery.

It was the same old fear rearing its ugly head. This pattern has repeated on a regular basis. What new technology does is give an extra push to the swings of generational cycles. So, as change occurs, much remains the same. For all that William Gibson got right, no one can argue that the world has been balkanized into anarcho-corporatist city-states (the world of Neal Stephenson’s Snow Crash), although it sure is a plausible near future. The general point is true, though. We are a changed society. Yet the same old patterns of fear-mongering and moral panic continue. What is cyclical and what is trend is hard to differentiate as it happens; it is easier to see clearly in hindsight.

I might add that vast technological and social transformations have occurred every century for the past half millennium. The ending of feudalism was far more devastating. Much earlier, the technological advancement of written text and the end of oral culture had greater consequences than even Socrates could have predicted. And it can’t be forgotten that movable type printing presses ushered in centuries of mass civil unrest, populist movements, religious wars, and revolution across numerous countries.

Our own time so far doesn’t compare, one could argue. The present relative peace and stability will continue until maybe World War III or climate catastrophe forces a technological realignment and restructuring of civilization. Anyway, the internet corrupting the youth and smartphones rotting away people’s brains should be the least of our worries.

Even the social media meddling that Russia is accused of in manipulating the American population is simply a continuation of techniques that go back to before the internet existed. The game has changed a bit, but nations and corporations are pretty much acting in the devious ways they always have, except they are collecting a lot more info. Admittedly, technology does increase the effectiveness of their deviousness. But it also increases the potential methods for resisting and revolting against oppression.

I do see major changes coming. My doubts are more about how that change will happen. Modern civilization is massively dysfunctional. That we use new technologies less than optimally might have more to do with pre-existing conditions of general crappiness. For example, television along with air conditioning likely did contribute to people not sitting outside and talking to their neighbors, but an equal or greater contribution probably came from diverse social and economic forces driving shifts in urbanization and suburbanization, with the dying of small towns and the exodus from ethnic enclaves. Though technology was mixed into these changes, we may give technology too much credit and blame for changes that were already in motion.

It is similar to the shift away from a biological explanation of addiction. It’s less that certain substances create uncontrollable cravings. Such destructive behavior is only possible and probable when particular conditions are set in place. There already has to be breakdown of relationships of trust and support. But rebuild those relationships and the addictive tendencies will lessen.

Similarly, there is nothing inevitable about William Gibson’s vision of the future or rather his predictions might be more based on patterns in our society than anything inherent to the technology itself. We retain the choice and responsibility to create the world we want or, failing that, to fall into self-fulfilling prophecies.

The question is what is the likelihood of our acting with conscious intention and wise forethought. All in all, self-fulfilling prophecy appears to be the most probable outcome. It is easy to be cynical, considering the track record of the present superpower that dominates the world and the present big biz corporatism that dominates the economy. Still, I hold out for the chance that conditions could shift for various reasons, altering what otherwise could be taken as near inevitable.

* * *

6/13/21 – Here is an additional thought that could be made into a new separate post, but for now we’ll leave it here as a note. There is evidence that new media technology does have an effect on thought, perception, and behavior. This is measurable in brain scans. But other research shows it even alters personality, or rather suppresses its expression. In a scientific article about testing for the Big Five personality traits, Tim Blumer and Nicola Döring offer an intriguing conclusion:

“To sum up, we conclude that for four of the five factors the data indicates a decrease of personality expression online, which is most probably due to the specification of the situational context. With regard to the trait of neuroticism, however, an additional effect occurs: The emotional stability increases on the computer and the Internet. This trend is likely, as has been described in previous studies, due to the typical features of computer-mediated communication (see Rice & Markey, 2009)” (Are we the same online? The expression of the five factor personality traits on the computer and the Internet).

This makes one wonder what it actually means. These personality tests are self-reports and so have that bias. Still, that is useful info in indicating what people are experiencing and perceiving about themselves. It also gives some evidence of what people are expressing, even when they aren’t conscious of it, since these tests are designed with carefully phrased questions, including decoy questions.

It is quite likely that the personality is genuinely being suppressed when people engage with the internet. Online experience eliminates so many normal behavioral and biological cues (tone of voice, facial expressions, eye gaze, hand gestures, bodily posture, rate of breathing, pheromones, etc). It would be unsurprising if this induces at least mild psychosis in many people, in that individuals literally become disconnected from most aspects of normal reality and human relating. If nothing else, it would surely increase anxiety and agitation.

When online, we really aren’t ourselves. That is because we are cut off from the social mirroring that allows self-awareness. There is a theory that theory of mind and hence cognitive empathy is developed in childhood first through observing others. It’s in becoming aware that others have minds behind their behavior that we then develop a sense of our own mind as separate from others and the world.

The new media technologies remove the ability to easily sense others as actual people. This creates a strange familiarity and intimacy as, in a way, all of the internet is inside of you. What is induced can at times be akin to psychological solipsism. We’ve noticed how often people don’t seem to recognize the humanity of others online in the way they would if a living-and-breathing person were right in front of them. Most of the internet is simply words. And even pictures and videos are isolated from all real-world context and physical immediacy.

Yet it’s not clear we can make a blanket accusation about the risks of new media. Not all aspects are the same. In fact, one study indicated “a positive association between general Internet use, general use of social platforms and Facebook use, on the one hand, and self-esteem, extraversion, narcissism, life satisfaction, social support and resilience, on the other hand.” It’s not merely being on the internet that is the issue but specific platforms of interaction and how they shape human experience and behavior.

The effects varied greatly, as the researchers found: “Use of computer games was found to be negatively related to these personality and mental health variables. The use of platforms that focus more on written interaction (Twitter, Tumblr) was assumed to be negatively associated with positive mental health variables and significantly positively with depression, anxiety, and stress symptoms. In contrast, Instagram use, which focuses more on photo-sharing, correlated positively with positive mental health variables” (Julia Brailovskaia et al, What does media use reveal about personality and mental health? An exploratory investigation among German students).

That is good evidence. The video game result is perplexing, though. It is image-based, as is Instagram. Why does the former lead to less optimal outcomes and the latter not? It might have to do with Instagram being more socially oriented, whereas video games can be played in isolation. Would that still be true of video games played with friends and/or in multi-user online worlds? Anyway, it is unsurprising that text-based social media is clearly a net loss for mental health. That would definitely fit with the theory that it’s particularly something about the disconnecting effect of words alone on a screen.

This fits our own experience even when interacting with people we’ve personally known for most or all of our lives. It’s common to write something to a family member or an old friend on email, text message, FB messenger, etc; and, knowing they received it and read it, receive no response; not even a brief acknowledgement. No one in the “real world” would act that way to “real people”. No one who wanted to maintain a relationship would stand near you while you spoke to them to their face, not look at you or say anything, and then walk away like you weren’t there. But that is the point of the power of textual derealization.

Part of this is the newness of new media. We simply have not yet fully adapted to it. People have freaked out about every media innovation that came along. And no doubt the fears and anxiety were often based on genuine concerns and direct observations. When media changes, it does have a profound effect on people and society. People probably do act deranged for a period of time. It might take generations or centuries for society to settle down after each period of change. That is the problem with the modern world where we’re hit by such a vast number of innovations in such quick succession. The moment we regain our senses enough to try to stand back up again we are hit by the next onslaught.

We are in the middle of what one could call New Media Derangement Syndrome (NMDS). It’s not merely one thing but a thousand things, combined with a total technological overhaul of society. This past century has turned all of civilization on its head. Over a few generations, most of it occurring within a single lifespan, humanity went from mostly rural communities, a farm-based economy, horse-and-buggies, books, and newspapers to mass urbanization, skyscrapers, factories, trains, cars, trucks, ocean liners, airplanes and jets, rocket ships, electricity, light bulbs, telegraphs, movies, radio, television, telephones, smartphones, the internet, radar, x-rays, air conditioning, etc.

The newest of new media is bringing in a whole other aspect. We are now living not just in a banana republic but under inverted totalitarianism. Unlike in the past, the everyday experience of our lives is defined more by corporations than by churches, communities, or governments. Think of how most people spend most of their waking hours regularly checking into various corporate media technologies, platforms, and networks, including while at work, with a smartphone next to the bed delivering instant notifications.

Think about how almost all media that Americans now consume is owned and controlled by a handful of transnational corporations. Yet not that long ago, most media was owned and operated locally by small companies, non-profit organizations, churches, etc. Most towns had multiple independently-run newspapers. Likewise, most radio and TV shows were locally or regionally produced in the mid-20th century. The movable type printing press made possible the first era of mass media, but that was small-time change compared to the nationalization and globalization of mass media over the past century.

Part of NMDS is that we consumer-citizens have become commodified products. Social media and the like are not products that we are buying. No, we and our data are the product being sold. The crazification factor is how everything has become manipulated by data-gathering and algorithms. Corporations now have larger files on American citizens than the FBI did during the height of the Cold War. New media technology is one front of the corporate war on democracy and the public good. Economics is now the dominant paradigm of everything. In her article Social Media Is a Borderline Personality Disorder, Kasia Chojecka wrote:

“Social media, as Tristan Harris said, undermined human weaknesses and contributed to what can be called a collective depression. I would say it’s more than that — a borderline personality disorder with an emotional rollercoaster, lack of trust, and instability. We can’t stay sane in this world right now, Harris said. Today the world is dominated not only by surveillance capitalism based on commodification and commercialization of personal data (Sh. Zuboff), but also by a pandemic, which caused us to shut ourselves at home and enter a full lockdown mode. We were forced to move to social media and are now doomed to instant messaging, notifications, the urge to participate. The scale of exposure to social media has grown incomparably (the percentages of growth vary — some estimate it would be circa ten percent or even several dozen percent in comparison to 2019).”

The result of this media-induced mass insanity can be seen in conspiracy theories (QAnon, Pizzagate, etc.) and related mass psychosis and mass hallucinations: Jewish lasers in space starting wildfires, global child prostitution rings that operate on moon bases, vast secret tunnel systems that connect empty Walmarts, and on and on. That paranoia, emerging from the dark corners of the web, helped launch an insurrection against the government and led to attempted kidnappings and assassinations of politicians. Plus, it got a bizarre media personality cult elected as president. If anyone doubted the existence of NMDS in the past, it has since become the undeniable reality we all now live in.

* * *

Fear of the new - a techno panic timeline

11 Examples of Fear and Suspicion of New Technology
by Len Wilson

New communications technologies don’t come with user’s manuals. They are primitive, while old tech is refined. So critics attack. The critic’s job is easier than the practitioner’s: they score with the fearful by comparing the infancy of the new medium with the perfected medium it threatens. But of course, the practitioner wins. In the end, we always assimilate to the new technology.

“Writing is a step backward for truth.”
~Plato, c. 370 BC

“Printed book will never be the equivalent of handwritten codices.”
~Trithemius of Sponheim, 1492

“The horrible mass of books that keeps growing might lead to a fall back into barbarism.”
~Gottfried Wilhelm Leibniz, 1680

“Few students will study Homer or Virgil when they can read Tom Jones or a thousand inferior or more dangerous novels.”
~Rev. Vicesimus Knox, 1778

“The most powerful of ignorance’s weapons is the dissemination of printed matter.”
~Count Leo Tolstoy, 1869

“We will soon be nothing but transparent heaps of jelly to each other.”
~New York Times 1877 Editorial, on the advent of the telephone

“[The telegraph is] a constant diffusion of statements in snippets.”
~Spectator Magazine, 1889

“Have I done the world good, or have I added a menace?”
~Guglielmo Marconi, inventor of radio, 1920

“The cinema is little more than a fad. It’s canned drama. What audiences really want to see is flesh and blood on the stage.”
~Charlie Chaplin, 1916

“There is a world market for about five computers.”
~Thomas J. Watson, IBM Chairman and CEO, 1943

“Television won’t be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night.”
~Darryl Zanuck, 20th Century Fox CEO, 1946

Media Hysteria: An Epidemic of Panic
by Jon Katz

MEDIA HYSTERIA OCCURS when tectonic plates shift and the culture changes – whether from social changes or new technology.

It manifests itself when seemingly new fears, illnesses, or anxieties – recovered memory, chronic fatigue syndrome, alien abduction, seduction by Internet molesters, electronic theft – are described as epidemic disorders in need of urgent recognition, redress, and attention.

For those of us who live, work, message, or play in new media, this is not an abstract offshoot of the information revolution, but a topic of some urgency: We are the carriers of these contagious ideas. We bear some of the responsibility and suffer many of the consequences.

Media hysteria is part of what causes the growing unease many of us feel about the toxic interaction between technology and information.

Moral Panics Over Youth Culture and Video Games
by Kenneth A. Gagne

Several decades of the past century have been marked by forms of entertainment that were not available to the previous generation. The comic books of the Forties and Fifties, rock ‘n roll music of the Fifties, Dungeons & Dragons in the Seventies and Eighties, and video games of the Eighties and Nineties were each part of the popular culture of that era’s young people. Each of these entertainment forms, which is each a medium unto itself, have also fallen under public scrutiny, as witnessed in journalistic media such as newspapers and journals – thus creating a “moral panic.”

The Smartphone’s Impact is Nothing New
by Rabbi Jack Abramowitz

Any invention that we see as a benefit to society was once an upstart disruption to the status quo. Television was terrible because when we listened to the radio, we used our imaginations instead of being spoon-fed. Radio was terrible because families used to sit around telling stories. Moveable type was terrible because if books become available to the masses, the lower classes will become educated beyond their level. Here’s a newsflash: Socrates objected to writing! In The Phaedrus (by his disciple Plato), Socrates argues that “this discovery…will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. … (Y)ou give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”

When the Internet and the smartphone evolved, society did what we always do: we adapted. Every new technology has this effect. Do you know why songs on the radio are about 3½ minutes long? Because that’s what a 45-rpm record would hold. Despite the threat some perceived in this radical format, we adapted. (As it turns out, 45s are now a thing of the past but the pop song endures. Turns out we like 3½-minute songs!)

Is the Internet Making Us Crazy? What the New Research Says
by Tony Dokoupil

The first good, peer-reviewed research is emerging, and the picture is much gloomier than the trumpet blasts of Web utopians have allowed. The current incarnation of the Internet—portable, social, accelerated, and all-pervasive—may be making us not just dumber or lonelier but more depressed and anxious, prone to obsessive-compulsive and attention-deficit disorders, even outright psychotic. Our digitized minds can scan like those of drug addicts, and normal people are breaking down in sad and seemingly new ways. […]

And don’t kid yourself: the gap between an “Internet addict” and John Q. Public is thin to nonexistent. One of the early flags for addiction was spending more than 38 hours a week online. By that definition, we are all addicts now, many of us by Wednesday afternoon, Tuesday if it’s a busy week. Current tests for Internet addiction are qualitative, casting an uncomfortably wide net, including people who admit that yes, they are restless, secretive, or preoccupied with the Web and that they have repeatedly made unsuccessful efforts to cut back. But if this is unhealthy, it’s clear many Americans don’t want to be well. […]

The Gold brothers—Joel, a psychiatrist at New York University, and Ian, a philosopher and psychiatrist at McGill University—are investigating technology’s potential to sever people’s ties with reality, fueling hallucinations, delusions, and genuine psychosis, much as it seemed to do in the case of Jason Russell, the filmmaker behind “Kony 2012.” The idea is that online life is akin to life in the biggest city, stitched and sutured together by cables and modems, but no less mentally real—and taxing—than New York or Hong Kong. “The data clearly support the view that someone who lives in a big city is at higher risk of psychosis than someone in a small town,” Ian Gold writes via email. “If the Internet is a kind of imaginary city,” he continues, “it might have some of the same psychological impact.”

What parallels do you see between the invention of the internet – the ‘semantic web’ and the invention of the printing press?
answer by Howard Doughty

Technology, and especially the technology of communication, has tremendous consequences for human relations – social, economic and political.

Socrates raged against the written word, insisting that it was the end of philosophy which, in his view, required two or more people in direct conversation. Anything else, such as a text, was at least one step removed from the real thing and, like music and poetry which he also despised, represented a pale imitation (or bastardization) of authentic life. (Thank goodness Plato wrote it all down.)

From an oral to a written society was one thing, but as Marshall McLuhan so eruditely explained in his book, The Gutenberg Galaxy, the printing press altered fundamental cultural patterns again – making reading matter more easily available and, in the process, enabling the Protestant Reformation and its emphasis on isolated individual interpretations of whatever people imagined their god to be.

In time, the telegraph and the telephone began the destruction of space, time and letter writing, making it possible to have disembodied conversations over thousands of miles.

Don’t Touch That Dial!
by Vaughan Bell

A respected Swiss scientist, Conrad Gessner, might have been the first to raise the alarm about the effects of information overload. In a landmark book, he described how the modern world overwhelmed people with data and that this overabundance was both “confusing and harmful” to the mind. The media now echo his concerns with reports on the unprecedented risks of living in an “always on” digital environment. It’s worth noting that Gessner, for his part, never once used e-mail and was completely ignorant about computers. That’s not because he was a technophobe but because he died in 1565. His warnings referred to the seemingly unmanageable flood of information unleashed by the printing press.

Worries about information overload are as old as information itself, with each generation reimagining the dangerous impacts of technology on mind and brain. From a historical perspective, what strikes home is not the evolution of these social concerns, but their similarity from one century to the next, to the point where they arrive anew with little having changed except the label.

 These concerns stretch back to the birth of literacy itself. In parallel with modern concerns about children’s overuse of technology, Socrates famously warned against writing because it would “create forgetfulness in the learners’ souls, because they will not use their memories.” He also advised that children can’t distinguish fantasy from reality, so parents should only allow them to hear wholesome allegories and not “improper” tales, lest their development go astray. The Socratic warning has been repeated many times since: The older generation warns against a new technology and bemoans that society is abandoning the “wholesome” media it grew up with, seemingly unaware that this same technology was considered to be harmful when first introduced.

Gessner’s anxieties over psychological strain arose when he set about the task of compiling an index of every available book in the 16th century, eventually published as the Bibliotheca universalis. Similar concerns arose in the 18th century, when newspapers became more common. The French statesman Malesherbes railed against the fashion for getting news from the printed page, arguing that it socially isolated readers and detracted from the spiritually uplifting group practice of getting news from the pulpit. A hundred years later, as literacy became essential and schools were widely introduced, the curmudgeons turned against education for being unnatural and a risk to mental health. An 1883 article in the weekly medical journal the Sanitarian argued that schools “exhaust the children’s brains and nervous systems with complex and multiple studies, and ruin their bodies by protracted imprisonment.” Meanwhile, excessive study was considered a leading cause of madness by the medical community.

When radio arrived, we discovered yet another scourge of the young: The wireless was accused of distracting children from reading and diminishing performance in school, both of which were now considered to be appropriate and wholesome. In 1936, the music magazine the Gramophone reported that children had “developed the habit of dividing attention between the humdrum preparation of their school assignments and the compelling excitement of the loudspeaker” and described how the radio programs were disturbing the balance of their excitable minds. The television caused widespread concern as well: Media historian Ellen Wartella has noted how “opponents voiced concerns about how television might hurt radio, conversation, reading, and the patterns of family living and result in the further vulgarization of American culture.”

Demonized Smartphones Are Just Our Latest Technological Scapegoat
by Zachary Karabell

AS IF THERE wasn’t enough angst in the world, what with the Washington soap opera, #MeToo, false nuclear alerts, and a general sense of apprehension, now we also have a growing sense of alarm about how smartphones and their applications are impacting children.

In the past days alone, The Wall Street Journal ran a long story about the “parents’ dilemma” of when to give kids a smartphone, citing tales of addiction, attention deficit disorder, social isolation, and general malaise. Said one parent, “It feels a little like trying to teach your kid how to use cocaine, but in a balanced way.” The New York Times ran a lead article in its business section titled “It’s Time for Apple to Build a Less Addictive iPhone,” echoing a rising chorus in Silicon Valley about designing products and programs that are purposely less addictive.

All of which begs the question: Are these new technologies, which are still in their infancy, harming a rising generation and eroding some basic human fabric? Is today’s concern about smartphones any different than other generations’ anxieties about new technology? Do we know enough to make any conclusions?

Alarm at the corrosive effects of new technologies is not new. Rather, it is deeply rooted in our history. In ancient Greece, Socrates cautioned that writing would undermine the ability of children and then adults to commit things to memory. The advent of the printing press in the 15th century led Church authorities to caution that the written word might undermine the Church’s ability to lead (which it did) and that rigor and knowledge would vanish once manuscripts no longer needed to be copied manually.

Now, consider this question: “Does the telephone make men more active or more lazy? Does [it] break up home life and the old practice of visiting friends?” Topical, right? In fact, it’s from a 1926 survey by the Knights of Columbus about old-fashioned landlines.

 The pattern of technophobia recurred with the gramophone, the telegraph, the radio, and television. The trope that the printing press would lead to loss of memory is very much the same as the belief that the internet is destroying our ability to remember. The 1950s saw reports about children glued to screens, becoming more “aggressive and irritable as a result of over-stimulating experiences, which leads to sleepless nights and tired days.” Those screens, of course, were televisions.

Then came fears that rock-n-roll in the 1950s and 1960s would fray the bonds of family and undermine the ability of young boys and girls to become productive members of society. And warnings in the 2000s that videogames such as Grand Theft Auto would, in the words of then-Senator Hillary Rodham Clinton, “steal the innocence of our children, … making the difficult job of being a parent even harder.”

Just because these themes have played out benignly time and again does not, of course, mean that all will turn out fine this time. Information technologies from the printed book onward have transformed societies and upended pre-existing mores and social order.

Protruding Breasts! Acidic Pulp! #*@&!$% Senators! McCarthyism! Commies! Crime! And Punishment!
by R.C. Baker

In his medical practice, Wertham saw some hard cases—juvenile muggers, murderers, rapists. In Seduction, he begins with a gardening metaphor for the relationship between children and society: “If a plant fails to grow properly because attacked by a pest, only a poor gardener would look for the cause in that plant alone.” He then observes, “To send a child to a reformatory is a serious step. But many children’s-court judges do it with a light heart and a heavy calendar.” Wertham advocated a holistic approach to juvenile delinquency, but then attacked comic books as its major cause. “All comics with their words and expletives in balloons are bad for reading.” “What is the social meaning of these supermen, super women … super-ducks, super-mice, super-magicians, super-safecrackers? How did Nietzsche get into the nursery?” And although the superhero, Western, and romance comics were easily distinguishable from the crime and horror genres that emerged in the late 1940s, Wertham viewed all comics as police blotters. “[Children] know a crime comic when they see one, whatever the disguise”; Wonder Woman is a “crime comic which we have found to be one of the most harmful”; “Western comics are mostly just crime comic books in a Western setting”; “children have received a false concept of ‘love’ … they lump together ‘love, murder, and robbery.’” Some crimes are said to directly imitate scenes from comics. Many are guilty by association—millions of children read comics, ergo, criminal children are likely to have read comics. When listing brutalities, Wertham throws in such asides as, “Incidentally, I have seen children vomit over comic books.” Such anecdotes illuminate a pattern of observation without sourcing that becomes increasingly irritating. “There are quite a number of obscure stores where children congregate, often in back rooms, to read and buy secondhand comic books … in some parts of cities, men hang around these stores which sometimes are foci of childhood prostitution. Evidently comic books prepare the little girls well.” Are these stores located in New York? Chicago? Sheboygan? Wertham leaves us in the dark. He also claimed that powerful forces were arrayed against him because the sheer number of comic books was essential to the health of the pulp-paper manufacturers, forcing him on a “Don Quixotic enterprise … fighting not windmills, but paper mills.”

When Pac-Man Started a National “Media Panic”
by Michael Z. Newman

This moment in the history of pop culture and technology might have seemed unprecedented, as computerized gadgets were just becoming part of the fabric of everyday life in the early ‘80s. But we can recognize it as one in a predictable series of overheated reactions to new media that go back all the way to the invention of writing (which ancients thought would spell the end of memory). There is a particularly American tradition of becoming enthralled with new technologies of communication, identifying their promise of future prosperity and renewed community. It is matched by a related American tradition of freaking out about the same objects, which are also figured as threats to life as we know it.

The emergence of the railroad and the telegraph in the 19th century and of novel 20th century technologies like the telephone, radio, cinema, television, and the internet were all similarly greeted by a familiar mix of high hopes and dark fears. In Walden, published in 1854, Henry David Thoreau warned that, “we do not ride on the railroad; it rides upon us.” Technologies of both centuries were imagined to unite a vast and dispersed nation and edify citizens, but they also were suspected of trivializing daily affairs, weakening local bonds, and worse yet, exposing vulnerable children to threats and hindering their development into responsible adults.

These expressions are often a species of moral outrage known as media panic, a reaction of adults to the perceived dangers of an emerging culture popular with children, which the parental generation finds unfamiliar and threatening. Media panics recur in a dubious cycle of lathering outrage, with grownups seeming not to realize that the same excessive alarmism has arisen in every generation. Eighteenth and 19th century novels might have caused confusion to young women about the difference between fantasy and reality, and excited their passions too much. In the 1950s, rock and roll was “the devil’s music,” feared for inspiring lust and youthful rebellion, and encouraging racial mixing. Dime novels, comic books, and camera phones have all been objects of frenzied worry about “the kids these days.”

The popularity of video games in the ‘80s prompted educators, psychotherapists, local government officeholders, and media commentators to warn that young players were likely to suffer serious negative effects. The games would influence their aficionados in all the wrong ways. They would harm children’s eyes and might cause “Space Invaders Wrist” and other physical ailments. Like television, they would be addictive, like a drug. Games would inculcate violence and aggression in impressionable youngsters. Their players would do badly in school and become isolated and desensitized. A reader wrote to The New York Times to complain that video games were “cultivating a generation of mindless, ill-tempered adolescents.”

The arcades where many teenagers preferred to play video games were imagined as dens of vice, of illicit trade in drugs and sex. Kids who went to play Tempest or Donkey Kong might end up seduced by the lowlifes assumed to hang out in arcades, spiraling into lives of substance abuse, sexual depravity, and crime. Children hooked on video games might steal to feed their habit. Reports at the time claimed that video kids had vandalized cigarette machines, pocketing the quarters and leaving behind the nickels and dimes. […]

Somehow, a generation of teenagers from the 1980s managed to grow up despite the dangers, real or imagined, from video games. The new technology could not have been as powerful as its detractors or its champions imagined. It’s easy to be captivated by novelty, but it can force us to miss the cyclical nature of youth media obsessions. Every generation fastens onto something that its parents find strange, whether Elvis or Atari. In every moment in media history, intergenerational tension accompanies the emergence of new forms of culture and communication. Now we have sexting, cyberbullying, and smartphone addiction to panic about.

But while the gadgets keep changing, our ideas about youth and technology, and our concerns about young people’s development in an uncertain and ever-changing modern world, endure.

Why calling screen time ‘digital heroin’ is digital garbage
by Rachel Becker

The supposed danger of digital media made headlines over the weekend when psychotherapist Nicholas Kardaras published a story in the New York Post called “It’s ‘digital heroin’: How screens turn kids into psychotic junkies.” In the op-ed, Kardaras claims that “iPads, smartphones and XBoxes are a form of digital drug.” He stokes fears about the potential for addiction and the ubiquity of technology by referencing “hundreds of clinical studies” that show “screens increase depression, anxiety and aggression.”

We’ve seen this form of scaremongering before. People are frequently uneasy with new technology, after all. The problem is, screens and computers aren’t actually all that new. There’s already a whole generation — millennials — who grew up with computers. They appear, mostly, to be fine, selfies aside. If computers were “digital drugs,” wouldn’t we have already seen warning signs?

No matter. Kardaras opens with a little boy who was so hooked on Minecraft that his mom found him in his room in the middle of the night, in a “catatonic stupor” — his iPad lying next to him. This is an astonishing use of “catatonic,” and is almost certainly not medically correct. It’s meant to scare parents.

by Alison Gopnik

My own childhood was dominated by a powerful device that used an optical interface to transport the user to an alternate reality. I spent most of my waking hours in its grip, oblivious of the world around me. The device was, of course, the book. Over time, reading hijacked my brain, as large areas once dedicated to processing the “real” world adapted to processing the printed word. As far as I can tell, this early immersion didn’t hamper my development, but it did leave me with some illusions—my idea of romantic love surely came from novels.

English children’s books, in particular, are full of tantalizing food descriptions. At some point in my childhood, I must have read about a honeycomb tea. Augie, enchanted, agreed to accompany me to the grocery store. We returned with a jar of honeycomb, only to find that it was an inedible, waxy mess.

Many parents worry that “screen time” will impair children’s development, but recent research suggests that most of the common fears about children and screens are unfounded. (There is one exception: looking at screens that emit blue light before bed really does disrupt sleep, in people of all ages.) The American Academy of Pediatrics used to recommend strict restrictions on screen exposure. Last year, the organization examined the relevant science more thoroughly, and, as a result, changed its recommendations. The new guidelines emphasize that what matters is content and context, what children watch and with whom. Each child, after all, will have some hundred thousand hours of conscious experience before turning sixteen. Those hours can be like the marvellous ones that Augie and I spent together bee-watching, or they can be violent or mindless—and that’s true whether those hours are occupied by apps or TV or books or just by talk.

New tools have always led to panicky speculation. Socrates thought that reading and writing would have disastrous effects on memory; the novel, the telegraph, the telephone, and the television were all declared to be the End of Civilization as We Know It, particularly in the hands of the young. Part of the reason may be that adult brains require a lot of focus and effort to learn something new, while children’s brains are designed to master new environments spontaneously. Innovative technologies always seem distracting and disturbing to the adults attempting to master them, and transparent and obvious—not really technology at all—to those, like Augie, who encounter them as children.

The misguided moral panic over Slender Man
by Adam Possamai

Sociologists argue that rather than simply being created stories, urban legends represent the fear and anxieties of current time, and in this instance, the internet culture is offering a global and a more participatory platform in the story creation process.

New technology is also allowing urban legends to be transmitted at a faster pace than before the invention of the printing press, and giving more people the opportunity to shape folk stories that blur the line between fiction and reality. Commonly, these stories take a life of their own and become completely independent from what the original creator wanted to achieve.

Yet if we were to listen to social commentary this change in the story creation process is opening the door to deviant acts.

Last century, people were already anxious about children accessing VHS and Betamax tapes and being exposed to violence and immorality. We are now likely to face a similar moral panic with regards to the internet.

Sleepwalking Through Our Dreams

In The Secret Life of Puppets, Victoria Nelson makes some useful observations of reading addiction, specifically in terms of formulaic genres. She discusses Sigmund Freud’s repetition compulsion and Lenore Terr’s post-traumatic games. She sees genre reading as a ritual-like enactment that can’t lead to resolution, and so the addictive behavior becomes entrenched. This would apply to many other forms of entertainment and consumption. And it fits into Derrick Jensen’s discussion of abuse, trauma, and the victimization cycle.

I would broaden her argument in another way. People have feared the written text ever since it was invented. In the 18th century, there took hold a moral panic about reading addiction in general and that was before any fiction genres had developed (Frank Furedi, The Media’s First Moral Panic; full text available at Wayback Machine). The written word is unchanging and so creates the conditions for repetition compulsion. Every time a text is read, it is the exact same text.

That is far different from oral societies. And it is quite telling that oral societies have a much more fluid sense of self. The Piraha, for example, don’t cling to their sense of self nor that of others. When a Piraha individual is possessed by a spirit or meets a spirit who gives them a new name, the self that was there is no longer there. When asked where is that person, the Piraha will say that he or she isn’t there, even if the same body of the individual is standing right there in front of them. They also don’t have a storytelling tradition or concern for the past.

Another thing that the Piraha apparently lack is mental illness, specifically depression along with suicidal tendencies. According to Barbara Ehrenreich from Dancing in the Streets, there wasn’t much written about depression even in the Western world until the suppression of religious and public festivities, such as Carnival. One of the most important aspects of Carnival and similar festivities was the masking, shifting, and reversal of social identities. Along with this, there was the losing of individuality within the group. And during the Middle Ages, an amazing number of days in the year were dedicated to communal celebrations. The ending of this era coincided with numerous societal changes, including the increase of literacy with the spread of the movable type printing press.

The Media’s First Moral Panic
by Frank Furedi

When cultural commentators lament the decline of the habit of reading books, it is difficult to imagine that back in the 18th century many prominent voices were concerned about the threat posed by people reading too much. A dangerous disease appeared to afflict the young, which some diagnosed as reading addiction and others as reading rage, reading fever, reading mania or reading lust. Throughout Europe reports circulated about the outbreak of what was described as an epidemic of reading. The behaviours associated with this supposedly insidious contagion were sensation-seeking and morally dissolute and promiscuous behaviour. Even acts of self-destruction were associated with this new craze for the reading of novels.

What some described as a craze was actually a rise in the 18th century of an ideal: the ‘love of reading’. The emergence of this new phenomenon was largely due to the growing popularity of a new literary genre: the novel. The emergence of commercial publishing in the 18th century and the growth of an ever-widening constituency of readers was not welcomed by everyone. Many cultural commentators were apprehensive about the impact of this new medium on individual behaviour and on society’s moral order.

With the growing popularity of novel reading, the age of the mass media had arrived. Novels such as Samuel Richardson’s Pamela, or Virtue Rewarded (1740) and Rousseau’s Julie, or the New Heloise (1761) became literary sensations that gripped the imagination of their European readers. What was described as ‘Pamela-fever’ indicated the powerful influence novels could exercise on the imagination of the reading public. Public deliberation on these ‘fevers’ focused on what was a potentially dangerous development, which was the forging of an intense and intimate interaction between the reader and literary characters. The consensus that emerged was that unrestrained exposure to fiction led readers to lose touch with reality and identify with the novel’s romantic characters to the point of adopting their behaviour. The passionate enthusiasm with which European youth responded to the publication of Johann Wolfgang von Goethe’s novel The Sorrows of Young Werther (1774) appeared to confirm this consensus. […]

What our exploration of the narrative of Werther fever suggests is that it acquired a life of its own to the point that it mutated into a taken-for-granted rhetorical idiom, which accounted for the moral problems facing society. Warnings about an epidemic of suicide said more about the anxieties of their authors than the behaviour of the readers of the novels. An inspection of the literature circulating these warnings indicates a striking absence of empirical evidence. The constant allusion to Miss. G., to nameless victims and to similarly framed death scenes suggests that these reports had little factual content to draw on. Stories about an epidemic of suicide were as fictional as the demise of Werther in Goethe’s novel.

It is, however, likely that readers of Werther were influenced by the controversy surrounding the novel. Goethe himself was affected by it and in his autobiography lamented that so many of his readers felt called upon to ‘re-enact the novel, and possibly shoot themselves’. Yet, despite the sanctimonious scaremongering, it continued to attract a large readership. While there is no evidence that Werther was responsible for the promotion of a wave of copycat suicides, it evidently succeeded in inspiring a generation of young readers. The emergence of what today would be described as a cult of fans with some of the trappings of a youth subculture is testimony to the novel’s powerful appeal.

The association of the novel with the disorganisation of the moral order represented an early example of a media panic. The formidable, sensational and often improbable effects attributed to the consequences of reading in the 18th century provided the cultural resources on which subsequent reactions to the cinema, television or the Internet would draw. In that sense Werther fever anticipated the media panics of the future.

Curiously, the passage of time has not entirely undermined the association of Werther fever with an epidemic of suicide. In 1974 the American sociologist David Phillips coined the term the ‘Werther Effect’ to describe media-stimulated imitation of suicidal behaviour. But the durability of the Werther myth notwithstanding, contemporary media panics are rarely focused on novels. In the 21st century the simplistic cause and effect model of the ‘Werther Effect’ is more likely to be expressed through moral anxieties about the danger of cybersuicide, copycat online suicide.

The Better Angels of Our Nature
by Steven Pinker
Kindle Locations 13125-13143
(see To Imagine and Understand)

It would be surprising if fictional experiences didn’t have similar effects to real ones, because people often blur the two in their memories. 65 And a few experiments do suggest that fiction can expand sympathy. One of Batson’s radio-show experiments included an interview with a heroin addict who the students had been told was either a real person or an actor. 66 The listeners who were asked to take his point of view became more sympathetic to heroin addicts in general, even when the speaker was fictitious (though the increase was greater when they thought he was real). And in the hands of a skilled narrator, a fictitious victim can elicit even more sympathy than a real one. In his book The Moral Laboratory, the literary scholar Jèmeljan Hakemulder reports experiments in which participants read similar facts about the plight of Algerian women through the eyes of the protagonist in Malike Mokkeddem’s novel The Displaced or from Jan Goodwin’s nonfiction exposé Price of Honor. 67 The participants who read the novel became more sympathetic to Algerian women than those who read the true-life account; they were less likely, for example, to blow off the women’s predicament as a part of their cultural and religious heritage. These experiments give us some reason to believe that the chronology of the Humanitarian Revolution, in which popular novels preceded historical reform, may not have been entirely coincidental: exercises in perspective-taking do help to expand people’s circle of sympathy.

The science of empathy has shown that sympathy can promote genuine altruism, and that it can be extended to new classes of people when a beholder takes the perspective of a member of that class, even a fictitious one. The research gives teeth to the speculation that humanitarian reforms are driven in part by an enhanced sensitivity to the experiences of living things and a genuine desire to relieve their suffering. And as such, the cognitive process of perspective-taking and the emotion of sympathy must figure in the explanation for many historical reductions in violence. They include institutionalized violence such as cruel punishments, slavery, and frivolous executions; the everyday abuse of vulnerable populations such as women, children, homosexuals, racial minorities, and animals; and the waging of wars, conquests, and ethnic cleansings with a callousness to their human costs.

Innocent Weapons:
The Soviet and American Politics of Childhood in the Cold War

by Margaret E. Peacock
pp. 88-89

As a part of their concern over American materialism, politicians and members of the American public turned their attention to the rising influence of media and popular culture upon the next generation.69 Concerns over uncontrolled media were not new in the United States in the 1950s. They had a way of erupting whenever popular culture underwent changes that seemed to differentiate the generations. This was the case during the silent film craze of the 1920s and when the popularity of dime novels took off in the 1930s.70 Yet, for many in the postwar era, the press, the radio, and the television presented threats to children that the country had never seen before. As members of Congress from across the political spectrum would argue throughout the 1950s, the media had the potential to present a negative image of the United States abroad, and it ran the risk of corrupting the minds of the young at a time when shoring up national patriotism and maintaining domestic order were more important than ever. The impact of media on children was the subject of Fredric Wertham’s 1953 best-selling book Seduction of the Innocent, in which he chronicled his efforts over the course of three years to “trace some of the roots of the modern mass delinquency.”71 Wertham’s sensationalist book documented case after case of child delinquents who seemed to be mimicking actions that they had seen on the television or, in particular, in comic strips. Horror comics, which were popular from 1948 until 1954, showed images of children killing their parents and peers, sometimes in gruesome ways—framing them for murder—being cunning and devious, even cannibalistic. A commonly cited story was that of “Bloody Mary,” published by Farrell Comics, which told the story of a seven-year-old girl who strangles her mother, sends her father to the electric chair for the murder, and then kills a psychiatrist who has learned that the girl committed these murders and that she is actually a dwarf in disguise.72 Wertham’s crusade against horror comics was quickly joined by two Senate subcommittees in 1954, at the heads of which sat Estes Kefauver and Robert Hendrickson. They argued to their colleagues that the violence and destruction of the family in these comic books symbolized “a terrible twilight zone between sanity and madness.”73 They contended that children found in these comic books violent models of behavior and that they would otherwise be law abiding. J. Edgar Hoover chimed in to comment that “a comic which makes lawlessness attractive . . . may influence the susceptible boy or girl.”74

Such depictions carried two layers of threat. First, as Wertham, Hoover, and Kefauver argued, they reflected the seeming potential of modern media to transform “average” children into delinquents.75 Alex Drier, popular NBC newscaster, argued in May 1954 that “this continuous flow of filth [is] so corruptive in its effects that it has actually obliterated decent instincts in many of our children.”76 Yet perhaps more telling, the comics, as well as the heated response that they elicited, also reflected larger anxieties about what identities children should assume in contemporary America. As in the case of Bloody Mary, these comics presented an image of apparently sweet youths who were in fact driven by violent impulses and were not children at all. “How can we expose our children to this and then expect them to run the country when we are gone?” an agitated Hendrickson asked his colleagues in 1954.77 Bloody Mary, like the uneducated dolts of the Litchfield report and the spoiled boys of Wylie’s conjuring, presented an alternative identity for American youth that seemed to embody a new and dangerous future.

In the early months of 1954, Robert Hendrickson argued to his colleagues that “the strained international and domestic situation makes it impossible for young people of today to look forward with certainty to higher education, to entering a trade or business, to plans for marriage, a home, and family. . . . Neither the media, nor modern consumerism, nor the threat from outside our borders creates a problem child. But they do add to insecurity, to loneliness, to fear.”78 For Hendrickson these domestic trends, along with what he called “deficient adults,” seemed to have created a new population of troubled and victimized children who were “beyond the pale of our society.”79

The End of Victory Culture:
Cold War America and the Disillusioning of a Generation

by Tom Engelhardt
Kindle Locations 2872-2910

WORRY, BORDERING ON HYSTERIA, about the endangering behaviors of “youth” has had a long history in America, as has the desire of reformers and censors to save “innocent” children from the polluting effects of commercial culture. At the turn of the century, when middle-class white adolescents first began to take their place as leisure-time trendsetters, fears arose that the syncopated beat of popular “coon songs” and ragtime music would demonically possess young listeners, who might succumb to the “evils of the Negro soul.” Similarly, on-screen images of crime, sensuality, and violence in the earliest movies, showing in “nickel houses” run by a “horde of foreigners,” were decried by reformers. They were not just “unfit for children’s eyes,” but a “disease” especially virulent to young (and poor) Americans, who were assumed to lack all immunity to such spectacles. 1 […]

To many adults, a teen culture beyond parental oversight had a remarkably alien look to it. In venues ranging from the press to Senate committees, from the American Psychiatric Association to American Legion meetings, sensational and cartoonlike horror stories about the young or the cultural products they were absorbing were told. Tabloid newspaper headlines reflected this: “Two Teen Thrill Killings Climax City Park Orgies. Teen Age Killers Pose a Mystery— Why Did They Do It?… 22 Juveniles Held in Gang War. Teen Age Mob Rips up BMT Train. Congressmen Stoned, Cops Hunt Teen Gang.” After a visit to the movies in 1957 to watch two “teenpics,” Rock All Night and Dragstrip Girl, Ruth Thomas of Newport, Rhode Island’s Citizen’s Committee on Literature expressed her shock in words at least as lurid as those of any tabloid: “Isn’t it a form of brain-washing? Brain-washing the minds of the people and especially the youth of our nation in filth and sadistic violence. What enemy technique could better lower patriotism and national morale than the constant presentation of crime and horror both as news and recreation.” 3

You did not have to be a censor, a right-wing anti-Communist, or a member of the Catholic Church’s Legion of Decency, however, to hold such views. Dr. Frederick Wertham, a liberal psychiatrist, who testified in the landmark Brown v. Board of Education desegregation case and set up one of the first psychiatric clinics in Harlem, publicized the idea that children viewing commercially produced acts of violence and depravity, particularly in comic books, could be transformed into little monsters. The lurid title of his best-selling book, Seduction of the Innocent, an assault on comic books as “primers for crime,” told it all. In it, Dr. Wertham offered copious “horror stories” that read like material from Tales from the Crypt: “Three boys, six to eight years old, took a boy of seven, hanged him nude from a tree, his hands tied behind him, then burned him with matches. Probation officers investigating found that they were re-enacting a comic-book plot.… A boy of thirteen committed a lust murder of a girl of six. After his arrest, in jail, he asked for comicbooks” 4

Kindle Locations 2927-2937

The two— hood and performer, lower-class white and taboo black— merged in the “pelvis” of a Southern “greaser” who dressed like a delinquent, used “one of black America’s favorite products, Royal Crown Pomade hair grease” (meant to give hair a “whiter” look), and proceeded to move and sing “like a negro.” Whether it was because they saw a white youth in blackface or a black youth in whiteface, much of the media grew apoplectic and many white parents alarmed. In the meantime, swiveling his hips and playing suggestively with the microphone, Elvis Presley broke into the lives of millions of teens in 1956, bringing with him an element of disorder and sexuality associated with darkness. 6†

The second set of postwar fears involved the “freedom” of the commercial media— record and comic book companies, radio stations, the movies, and television— to concretize both the fantasies of the young and the nightmarish fears of grown-ups into potent products. For many adults, this was abundance as betrayal, the good life not as a vision of Eden but as an unexpected horror story.

Kindle Locations 2952-2979

Take comic books. Even before the end of World War II, a new kind of content was creeping into them as they became the reading matter of choice for the soldier-adolescent. […] Within a few years, “crime” comics like Crime Does Not Pay emerged from the shadows, displaying a wide variety of criminal acts for the delectation of young readers. These were followed by horror and science fiction comics, purchased in enormous numbers. By 1953, more than 150 horror comics were being produced monthly, featuring acts of torture often of an implicitly sexual nature, murders and decapitations of various bloody sorts, visions of rotting flesh, and so on. 9

Miniature catalogs of atrocities, their feel was distinctly assaultive. In their particular version of the spectacle of slaughter, they targeted the American family, the good life, and revered institutions. Framed by sardonic detective narrators or mocking Grand Guignol gatekeepers, their impact was deconstructive. Driven by a commercial “hysteria” as they competed to attract buyers with increasingly atrocity-ridden covers and stories, they both partook of and mocked the hysteria about them. Unlike radio or television producers, the small publishers of the comic book business were neither advertiser driven nor corporately controlled.

Unlike the movies, comics were subject to no code. Unlike the television networks, comics companies had no Standards and Practices departments. No censoring presence stood between them and whoever would hand over a dime at a local newsstand. Their penny-ante ads and pathetic pay scale ensured that writing and illustrating them would be a job for young men in their twenties (or even teens). Other than early rock and roll, comics were the only cultural form of the period largely created by the young for those only slightly younger. In them, uncensored, can be detected the dismantling voice of a generation that had seen in the world war horrors beyond measure.

The hysterical tone of the response to these comics was remarkable. Comics publishers were denounced for conspiring to create a delinquent nation. Across the country, there were publicized comic book burnings like one in Binghamton, New York, where 500 students were dismissed from school early in order to torch 2,000 comics and magazines. Municipalities passed ordinances prohibiting the sale of comics, and thirteen states passed legislation to control their publication, distribution, or sale. Newspapers and magazines attacked the comics industry. The Hartford Courant decried “the filthy stream that flows from the gold-plated sewers of New York.” In April 1954, the Senate Subcommittee to Investigate Juvenile Delinquency convened in New York to look into links between comics and teen crime. 10

Kindle Locations 3209-3238

If sponsors and programmers recognized the child as an independent taste center, the sight of children glued to the TV, reveling in their own private communion with the promise of America, proved unsettling to some adults. The struggle to control the set, the seemingly trancelike quality of TV time, the soaring number of hours spent watching, could leave a parent feeling challenged by some hard-to-define force released into the home under the aegis of abundance, and the watching child could gain the look of possession, emptiness, or zombification.

Fears of TV’s deleterious effects on the child were soon widespread. The medical community even discovered appropriate new childhood illnesses. There was “TV squint” or eyestrain, “TV bottom,” “bad feet” (from TV-induced inactivity), “frogitis” (from a viewing position that put too much strain on inner-leg ligaments), “TV tummy” (from TV-induced overexcitement), “TV jaw” or “television malocclusion” (from watching while resting on one’s knuckles, said to force the eyeteeth inward), and “tired child syndrome” (chronic fatigue, loss of appetite, headaches, and vomiting induced by excessive viewing).

However, television’s threat to the child was more commonly imagined to lie in the “violence” of its programming. Access to this “violence” and the sheer number of hours spent in front of the set made the idea that this new invention was acting in loco parentis seem chilling to some; and it was true that via westerns, crime shows, war and spy dramas, and Cold War-inspired cartoons TV was indiscriminately mixing a tamed version of the war story with invasive Cold War fears. Now, children could endlessly experience the thrill of being behind the barrel of a gun. Whether through the Atom Squad’s three government agents, Captain Midnight and his Secret Squadron, various FBI men, cowboys, or detectives, they could also encounter “an array of H-bomb scares, mad Red scientists, [and] plots to rule the world,” as well as an increasing level of murder and mayhem that extended from the six-gun frontier of the “adult” western to the blazing machine guns of the crime show. 30

Critics, educators, and worried parents soon began compiling TV body counts as if the statistics of victory were being turned on young Americans. “Frank Orme, an independent TV watchdog, made a study of Los Angeles television in 1952 and noted, in one week, 167 murders, 112 justifiable homicides, and 356 attempted murders. Two-thirds of all the violence he found occurred in children’s shows. In 1954, Orme said violence on kids’ shows had increased 400 percent since he made his first report.” PTAs organized against TV violence, and Senate hearings searched for links between TV programming and juvenile delinquency.

Such “violence,” though, was popular. In addition, competition for audiences among the three networks had the effect of ratcheting up the pressures for violence, just as it had among the producers of horror comics. At The Untouchables, a 1960 hit series in which Treasury agent Eliot Ness took on Chicago’s gangland (and weekly reached 5-8 million young viewers), ABC executives would push hard for more “action.” Producer Quinn Martin would then demand the same of his subordinates, “or we are all going to get clobbered.” In a memo to one of the show’s writers, he asked: “I wish you would come up with a different device than running the man down with a car, as we have done this now in three different shows. I like the idea of sadism, but I hope we can come up with another approach to it.” 31

Fantasyland, An American Tradition

“The American experiment, the original embodiment of the great Enlightenment idea of intellectual freedom, every individual free to believe anything she wishes, has metastasized out of control. From the start, our ultra-individualism was attached to epic dreams, sometimes epic fantasies—every American one of God’s chosen people building a custom-made utopia, each of us free to reinvent himself by imagination and will. In America those more exciting parts of the Enlightenment idea have swamped the sober, rational, empirical parts.”
~ Kurt Andersen, Fantasyland

It’s hard to have public debate in the United States for a number of reasons. The most basic reason is that Americans are severely uninformed and disinformed. We also tend to lack a larger context for knowledge. Historical amnesia is rampant and scientific literacy is limited, exacerbated by centuries-old strains of anti-intellectualism and dogmatic idealism, hyper-individualism and sectarian groupthink, public distrust and authoritarian demagoguery.

This doesn’t seem as common elsewhere. Part of the reason is that Americans are less aware and informed about other countries than the citizens of other countries are about the United States. Living anywhere else in the world, it is nearly impossible not to know in great detail about the United States and the other Western powers, since the entire world lives in the long shadow they cast: colonial imperialism, neoliberal globalization, transnational corporations, mass media, monocultural dominance, soft power, international propaganda campaigns during the Cold War, military interventionism, and so on. The rest of the world can’t afford the luxury of ignorance that Americans enjoy.

Earlier in the last century, when the United States was a rising global superpower competing against other rising global superpowers, the US was known for having one of the better education systems in the world. International competition motivated us to invest in education. Now we are famous for how poorly recent generations of students compare to those of many other developed countries. But even the brief moment of seeming American greatness following World War II might have had more to do with the wide-scale decimation of Europe, a temporary lowering of other developed countries rather than a vast improvement in the United States.

There has also been a failure of big biz mass media to inform the public, and the continuing oligopolistic consolidation of corporate media into a few hands has not allowed a competitive free market to force corporations to offer something better. On top of that, Americans are one of the most propagandized and indoctrinated populations on the planet, with only a few comparable countries such as China and Russia exceeding us in this area.

See how the near unanimity of the American mass media was able, by way of beating the war drum, to change majority public opinion from being against the Iraq War to being in support of it. It just so happens that the parent companies of most of the corporate media, with ties to the main political parties and the military-industrial complex, profit immensely from the endless wars of the war state.

Corporate media is in the business of making money, which means selling a product. In late-stage capitalism, all media is entertainment and news media is infotainment. Even the viewers are sold as a product to advertisers. There is no profit in offering a public service to inform the citizenry and create the conditions for informed public debate. As part of consumerist society, we consume as we are consumed by endless fantasies, just-so stories, comforting lies, simplistic narratives, and political spectacle.

This is a dark truth that should concern and scare Americans. But that would require them to be informed first. There is the rub.

Every public debate in the United States begins with mainstream framing. It can take hours of interacting with a typical American even to get them to acknowledge their lack of knowledge, assuming they have the intellectual humility that makes that likely. Americans are so uninformed and misinformed that they don’t realize they are ignorant, so indoctrinated that they don’t realize how much their minds are manipulated and saturated in bullshit (I speak from the expertise of being an American who has been woefully ignorant for most of my life). Simply to get to the level of knowledge where debate is even within the realm of possibility is itself almost an impossible task. To say it is frustrating is an extreme understatement.

Consider how most Americans know that tough-on-crime laws, stop-and-frisk, broken-windows policing, heavy policing, and mass incarceration were the cause of decreased crime. How do they know? Because decades of political rhetoric and media narratives have told them so. Just as various authority figures in government and media told them, or implied, or remained silent while others pushed the lie that the 9/11 terrorist attack was somehow connected to Iraq, which supposedly had weapons of mass destruction, even though US intelligence agencies and foreign governments at the time knew these claims were false.

Sure, you can look to alternative media for regular reporting of information that undermines and disproves these beliefs. But few Americans get much if any of their news from alternative media. There have been at least hundreds of high-quality scientific studies, careful analyses, and scholarly books published since the violent crime decline began. This information, however, is almost entirely unknown to the average American citizen, and one suspects largely unknown to the average American mainstream news reporter, media personality, talking head, pundit, think tank hack, and politician.

That isn’t to say there isn’t ignorance found in other populations as well. Having been in the online world since the early aughts, I’ve met and talked with many people from other countries, and admittedly some of them are less than perfectly informed. Still, the level of ignorance in the United States is unique, at least in the Western world.

That much can’t be doubted. Other serious thinkers might have differing explanations for why the US has diverged so greatly from much of the rest of the world, from its level of education to its rate of violence. But one way or another, it needs to be explained in the hope of finding a remedy. Sadly, even if we could agree on a solution, those in power benefit too greatly from an easily manipulated citizenry that lacks knowledge and critical thinking skills to ever implement it.

This isn’t merely an attack on low-information voters and right-wing nut jobs. Even in dealing with highly educated Americans among the liberal class, I rarely come across someone who is deeply and widely informed across various major topics of public concern.

American society is highly insular. We Americans are not only disconnected from the rest of the world but disconnected from each other. And so we have little sense of what is going on outside of the narrow constraints of our neighborhoods, communities, workplaces, social networks, and echo chambers. The United States is psychologically and geographically segregated into separate reality tunnel enclaves defined by region and residency, education and class, race and religion, politics and media.

It’s because we so rarely step outside of our respective worlds that we so rarely realize how little we know and how much of what we think we know is not true. Most of us live in neighborhoods, go to churches and stores, attend or send our kids to schools, work and socialize with people who are exactly like ourselves. They share our beliefs and values, our talking points and political persuasion, our biases and prejudices, our social and class position. We are hermetically sealed within our safe walled-in social identities. Nothing can reach us, threaten us, or change us.

That is, until something happens like Donald Trump being elected. Then there is a panic about what has become of America in this post-fact age. The sad reality, however, is that America has always been this way. It’s just finally getting to a point where it’s harder to ignore, and that potential for public awakening offers some hope.

* * *

Fantasyland
by Kurt Andersen
pp. 10-14

Why are we like this?

. . . The short answer is because we’re Americans, because being American means we can believe any damn thing we want, that our beliefs are equal or superior to anyone else’s, experts be damned. Once people commit to that approach, the world turns inside out, and no cause-and-effect connection is fixed. The credible becomes incredible and the incredible credible.

The word mainstream has recently become a pejorative, shorthand for bias, lies, oppression by the elites. Yet that hated Establishment, the institutions and forces that once kept us from overdoing the flagrantly untrue or absurd—media, academia, politics, government, corporate America, professional associations, respectable opinion in the aggregate—has enabled and encouraged every species of fantasy over the last few decades.

A senior physician at one of America’s most prestigious university hospitals promotes miracle cures on his daily TV show. Major cable channels air documentaries treating mermaids, monsters, ghosts, and angels as real. A CNN anchor speculated on the air that the disappearance of a Malaysian airliner was a supernatural event. State legislatures and one of our two big political parties pass resolutions to resist the imaginary impositions of a New World Order and Islamic law. When a political scientist attacks the idea that “there is some ‘public’ that shares a notion of reality, a concept of reason, and a set of criteria by which claims to reason and rationality are judged,” colleagues just nod and grant tenure. A white woman felt black, pretended to be, and under those fantasy auspices became an NAACP official—and then, busted, said, “It’s not a costume…not something that I can put on and take off anymore. I wouldn’t say I’m African American, but I would say I’m black.” Bill Gates’s foundation has funded an institute devoted to creationist pseudoscience. Despite his nonstop lies and obvious fantasies—rather, because of them—Donald Trump was elected president. The old fringes have been folded into the new center. The irrational has become respectable and often unstoppable. As particular fantasies get traction and become contagious, other fantasists are encouraged by a cascade of out-of-control tolerance. It’s a kind of twisted Golden Rule unconsciously followed: If those people believe that, then certainly we can believe this.

Our whole social environment and each of its overlapping parts—cultural, religious, political, intellectual, psychological—have become conducive to spectacular fallacy and make-believe. There are many slippery slopes, leading in various directions to other exciting nonsense. During the last several decades, those naturally slippery slopes have been turned into a colossal and permanent complex of interconnected, crisscrossing bobsled tracks with no easy exit. Voilà: Fantasyland. . . .

When John Adams said in the 1700s that “facts are stubborn things,” the overriding American principle of personal freedom was not yet enshrined in the Declaration or the Constitution, and the United States of America was itself still a dream. Two and a half centuries later the nation Adams cofounded has become a majority-rule de facto refutation of his truism: “our wishes, our inclinations” and “the dictates of our passions” now apparently do “alter the state of facts and evidence,” because extreme cognitive liberty and the pursuit of happiness rule.

This is not unique to America, people treating real life as fantasy and vice versa, and taking preposterous ideas seriously. We’re just uniquely immersed. In the developed world, our predilection is extreme, distinctly different in the breadth and depth of our embrace of fantasies of many different kinds. Sure, the physician whose fraudulent research launched the antivaccine movement was a Brit, and young Japanese otaku invented cosplay, dressing up as fantasy characters. And while there are believers in flamboyant supernaturalism and prophecy and religious pseudoscience in other developed countries, nowhere else in the rich world are such beliefs central to the self-identities of so many people. We are Fantasyland’s global crucible and epicenter.

This is American exceptionalism in the twenty-first century. America has always been a one-of-a-kind place. Our singularity is different now. We’re still rich and free, still more influential and powerful than any nation, practically a synonym for developed country. But at the same time, our drift toward credulity, doing our own thing, and having an altogether uncertain grip on reality has overwhelmed our other exceptional national traits and turned us into a less-developed country as well.

People tend to regard the Trump moment—this post-truth, alternative facts moment—as some inexplicable and crazy new American phenomenon. In fact, what’s happening is just the ultimate extrapolation and expression of attitudes and instincts that have made America exceptional for its entire history—and really, from its prehistory. . . .

America was created by true believers and passionate dreamers, by hucksters and their suckers—which over the course of four centuries has made us susceptible to fantasy, as epitomized by everything from Salem hunting witches to Joseph Smith creating Mormonism, from P. T. Barnum to Henry David Thoreau to speaking in tongues, from Hollywood to Scientology to conspiracy theories, from Walt Disney to Billy Graham to Ronald Reagan to Oprah Winfrey to Donald Trump. In other words: mix epic individualism with extreme religion; mix show business with everything else; let all that steep and simmer for a few centuries; run it through the anything-goes 1960s and the Internet age; the result is the America we inhabit today, where reality and fantasy are weirdly and dangerously blurred and commingled.

I hope we’re only on a long temporary detour, that we’ll manage somehow to get back on track. If we’re on a bender, suffering the effects of guzzling too much fantasy cocktail for too long, if that’s why we’re stumbling, manic and hysterical, mightn’t we somehow sober up and recover? You would think. But first you need to understand how deeply this tendency has been encoded in our national DNA.

Fake News: It’s as American as George Washington’s Cherry Tree
by Hanna Rosin

Fake news. Post-truth. Alternative facts. For Andersen, these are not momentary perversions but habits baked into our DNA, the ultimate expressions of attitudes “that have made America exceptional for its entire history.” The country’s initial devotion to religious and intellectual freedom, Andersen argues, has over the centuries morphed into a fierce entitlement to custom-made reality. So your right to believe in angels and your neighbor’s right to believe in U.F.O.s and Rachel Dolezal’s right to believe she is black lead naturally to our president’s right to insist that his crowds were bigger.

Andersen’s history begins at the beginning, with the first comforting lie we tell ourselves. Each year we teach our children about Pilgrims, those gentle robed creatures who landed at Plymouth Rock. But our real progenitors were the Puritans, who passed the weeks on the trans-Atlantic voyage preaching about the end times and who, when they arrived, vowed to hang any Quaker or Catholic who landed on their shores. They were zealots and also well-educated British gentlemen, which set the tone for what Andersen identifies as a distinctly American endeavor: propping up magical thinking with elaborate scientific proof.

While Newton and Locke were ushering in an Age of Reason in Europe, over in America unreason was taking new seductive forms. A series of mystic visionaries were planting the seeds of extreme entitlement, teaching Americans that they didn’t have to study any book or old English theologian to know what to think, that whatever they felt to be true was true. In Andersen’s telling, you can easily trace the line from the self-appointed 17th-century prophet Anne Hutchinson to Kanye West: She was, he writes, uniquely American “because she was so confident in herself, in her intuitions and idiosyncratic, subjective understanding of reality,” a total stranger to self-doubt.

What happens next in American history, according to Andersen, happens without malevolence, or even intention. Our national character gels into one that’s distinctly comfortable fogging up the boundary between fantasy and reality in nearly every realm. As soon as George Washington dies fake news is born — the story about the cherry tree, or his kneeling in prayer at Valley Forge. Enterprising businessmen quickly figure out ways to make money off the Americans who gleefully embrace untruths.

Proteus Effect and Mediated Experience

The Proteus effect is how the way we appear in media ends up mediating our experience, perception, and identity. It also shapes how we relate to others and how others relate to us. Most of this happens unconsciously.

There are many ways this might relate to other psychological phenomena. And there are real-world equivalents to it. Consider how the way we dress influences how we act; wearing a black uniform, for example, will increase aggressive behavior. Another powerful example is that children who imagine themselves as a superhero while doing a task will exceed the ability they would otherwise have.

Most interesting is how the Proteus effect might begin to overlap with so much else as immersive media comes to dominate our lives. We already see the power of such influences by way of the placebo effect, the Pygmalion/Rosenthal effect, the golem effect, stereotype threat, and much else. I’ve been particularly interested in the placebo effect, as for most people the efficacy of antidepressants is statistically indistinguishable from that of a placebo, demonstrating how something can allow us to imagine ourselves into a different state of mind. Or consider how simply interacting with a doctor, or someone acting like a doctor, brings relief without any actual procedure having been involved.

Our imaginations are powerful: our imagination both of ourselves and of others, along with others’ imagination of us. Tell a doctor or a teacher something about a patient or student, even if it is not true, and they will respond as if it were true, with measurable real-world effects. New media could have similar effects, even when we know it isn’t ‘real’ but merely virtual. Imagination doesn’t necessarily concern itself with the constraints of supposed rationality, as shown by how people viscerally react to a fake arm being cut after they’ve come to identify with that fake arm, despite consciously knowing it is not actually their arm.

Our minds are highly plastic and our experience easily influenced. The implications are immense, from education to mental health, from advertising to propaganda. The Proteus effect could play a transformative role in the further development of the modern mind, either through potential greater self-control or greater social control.

* * *

Virtual Worlds Are Real
by Nick Yee

Meet Virtual You: How Your VR Self Influences Your Real-Life Self
by Amy Cuddy

The Proteus Effect: How Our Avatar Changes Online Behavior
by John M. Grohol

Enhancing Our Lives with Immersive Virtual Reality
by Mel Slater & Maria V. Sanchez-Vives

Virtual Reality and Social Networks Will Be a Powerful Combination
by Jeremy N. Bailenson and Jim Blascovich

Promoting motivation with virtual agents and avatars
by Amy L. Baylor

Avatars and the Mirrorbox: Can Humans Hack Empathy?
by Danna Staaf

The Proteus effect: How gaming may revolutionise peacekeeping
by Gordon Hunt

Can virtual reality convince Americans to save for retirement?
by The Week Staff

When Reason Falters, It’s Age-Morphing Apps and Virtual Reality to the Rescue
by David Berreby

Give Someone a Virtual Avatar and They Adopt Stereotype Behavior
by Colin Schultz

Wii, Myself, and Size
by Li BJ, Lwin MO, & Jung Y

Would Being Forced to Use This ‘Obese’ Avatar Affect Your Physical Fitness?
by Esther Inglis-Arkell

The Proteus Effect and Self-Objectification via Avatars
by Big Think editors

The Proteus Effect in Dyadic Communication: Examining the Effect of Avatar Appearance in Computer-Mediated Dyadic Interaction
by Brandon Van Der Heide, Erin M. Schumaker, Ashley M. Peterson, & Elizabeth B. Jones

Reading Voices Into Our Minds

Each of us is a multitude. There is no single unified self. Our thoughts are a conversation. The voices of family echo in our minds when we first leave home and long after our loved ones have died. Then there are all the television, movie, and commercial characters that invade our consciousness with their catchphrases, slogans, and taglines. And we can’t forget how songs get stuck on cognitive repeat or emerge as a compulsion to sing.

Yet another example is the intimate voices imagined as you read novels, a form of inner speech that can carry on after you have put down a book. These can be the most powerful voices. There is nothing that compares to the long periods of time spent with compelling fiction. The voices of characters in a novel are heard within your own head as you read. You can return to this experience again and again, until the characters have become internalized and their words inscribed upon your psyche. Their voices become your own.

This chorus of voices is constantly playing in the background, a cacophony of thoughts vying for your attention. But occasionally they rise into the spotlight of your consciousness. Even then, it rarely occurs to any of us how strange those voices are, except when some particular voice insistently refuses to go away and maybe even seems to have a mind of its own. Then we might begin to question the distinction between them and us, and question what kind of being we are that can contain both.

There is an argument that novels help us develop theory of mind. But maybe in the process novels, along with certain other modern media, result in a particular kind of mind or minds. We come to identify with or otherwise incorporate what we empathize with. The worlds we inhabit long enough eventually inhabit us. And what we’ve heard throughout our lives can have a way of continuing to speak to us, layers upon layers of voices that for some of us can speak clearly.

* * *

Fictional characters make ‘experiential crossings’ into real life, study finds
by Richard Lea

It’s a cliche to claim that a novel can change your life, but a recent study suggests almost a fifth of readers report that fiction seeps into their daily existence.

Researchers at Durham University conducted a survey of more than 1,500 readers, with about 400 providing detailed descriptions of their experiences with books. Nineteen per cent of those respondents said the voices of fictional characters stayed with them even when they weren’t reading, influencing the style and tone of their thoughts – or even speaking to them directly. For some participants it was as if a character “had started to narrate my world”, while others heard characters talking, or imagined them reacting to things going on in everyday life.

The study, which was carried out in collaboration with the Guardian at the 2014 Edinburgh international book festival, also found that more than half of the 1,500 respondents said that they heard the voices of characters while reading most or all of the time, while 48% reported a similar frequency of visual or other sensory experiences during reading.

According to one of the paper’s authors, the writer and psychologist Charles Fernyhough, the survey illustrates how readers of fiction are doing more than just processing words for meaning – they are actively recreating the worlds and characters being described.

“For many of us, this can involve experiencing the characters in a novel as people we can interact with,” Fernyhough said. “One in seven of our respondents, for example, said they heard the voices of fictional characters as clearly as if there was someone in the room with them.”

When they asked readers to describe what was happening in detail, the researchers found people who described fictional characters remaining active in their minds after they had put the book down, and influencing their thoughts as they went about their daily business – a phenomenon Fernyhough called “experiential crossing”.

The term covers a wide range of experiences, from hearing a character’s voice to feeling one’s own thoughts shaped by a character’s ideas, sensibility or presence, he continued. “One respondent, for example, described ‘feeling enveloped’ by [Virginia Woolf’s] character Clarissa Dalloway – hearing her voice and imagining her response to particular situations, such as walking into a Starbucks. Sometimes the experience seemed to be triggered by entering a real-world setting similar to one in the novel; in other situations, it felt like seeing the world through a particular character’s eyes, and judging events as the character would.”

The characters who make the leap into readers’ lives are typically “powerful, vivid characters and narrators”, Fernyhough added, “but this will presumably vary hugely from person to person”.

* * *

 

Hyperobjects and Individuality

We live in a liberal age and the liberal paradigm dominates, not just for liberals but for everyone. Our society consists of nothing other than liberalism and reactions to liberalism. And at the heart of it all is individualism. But through the cracks, other possibilities can be glimpsed.

One challenging perspective is that of hyperobjects, a concept proposed by Timothy Morton — as he writes: “Hyperobjects pose numerous threats to individualism, nationalism, anti-intellectualism, racism, speciesism, anthropocentrism, you name it. Possibly even capitalism itself.”

Evander Price summarizes the origin of the theory and the traits of hyperobjects (Hyperobjects & Dark Ecology). He breaks it down into seven points. The last three refer to individuality — here they are (with some minor editing):

5) Individuality is lost. We are not separate from other things. (This is Object Oriented Ontology) — Morton calls this entangledness. “Knowing more about hyperobjects is knowing more about how we are hopelessly fastened to them.” A little bit like Ahab all tangled up in the lines of Moby-Dick.

6) “Utilitarianism is deeply flawed when it comes to working with hyperobjects. The simple reason why is that hyperobjects are profoundly futural.” (135) <–I’ve been arguing against utilitarianism for a while now within this line of thinking; this is because utilitarianism, the idea that moral goodness is measured by whether an action or idea increases the overall happiness of a given community, is always embedded within a temporal framework, outside of which the collective ‘happiness’ of a given individual or community is not considered. Fulfilling the greatest happiness for the current generation is always dependent on taking resources now [from] future generations. What is needed is chronocritical utilitarianism, but that is anathema to the radical individuality of utilitarianism.

7) Undermining — the opposite of hyperobjecting. From Harman. “Undermining is when things are reduced to smaller things that are held to be more real. The classic form of undermining in contemporary capitalism is individualism: ‘There are only individuals and collective decisions are ipso facto false.’” <– focusing on how things affect me because I am the most important is essentially undermining that I exist as part of a community, and a planet.

And from the book on the topic:

Hyperobjects:
Philosophy and Ecology after the End of the World

by Timothy Morton
Kindle Locations 427-446

The ecological thought that thinks hyperobjects is not one in which individuals are embedded in a nebulous overarching system, or conversely, one in which something vaster than individuals extrudes itself into the temporary shapes of individuals. Hyperobjects provoke irreductionist thinking, that is, they present us with scalar dilemmas in which ontotheological statements about which thing is the most real (ecosystem, world, environment, or conversely, individual) become impossible. 28 Likewise, irony qua absolute distance also becomes inoperative. Rather than a vertiginous antirealist abyss, irony presents us with intimacy with existing nonhumans.

The discovery of hyperobjects and OOO are symptoms of a fundamental shaking of being, a being-quake. The ground of being is shaken. There we were, trolling along in the age of industry, capitalism, and technology, and all of a sudden we received information from aliens, information that even the most hardheaded could not ignore, because the form in which the information was delivered was precisely the instrumental and mathematical formulas of modernity itself. The Titanic of modernity hits the iceberg of hyperobjects. The problem of hyperobjects, I argue, is not a problem that modernity can solve. Unlike Latour then, although I share many of his basic philosophical concerns, I believe that we have been modern, and that we are only just learning how not to be.

Because modernity banks on certain forms of ontology and epistemology to secure its coordinates, the iceberg of hyperobjects thrusts a genuine and profound philosophical problem into view. It is to address these problems head on that this book exists. This book is part of the apparatus of the Titanic, but one that has decided to dash itself against the hyperobject. This rogue machinery— call it speculative realism, or OOO— has decided to crash the machine, in the name of a social and cognitive configuration to come, whose outlines are only faintly visible in the Arctic mist of hyperobjects. In this respect, hyperobjects have done us a favor. Reality itself intervenes on the side of objects that from the prevalent modern point of view— an emulsion of blank nothingness and tiny particles— are decidedly medium-sized. It turns out that these medium-sized objects are fascinating, horrifying, and powerful.

For one thing, we are inside them, like Jonah in the Whale. This means that every decision we make is in some sense related to hyperobjects. These decisions are not limited to sentences in texts about hyperobjects.

Kindle Locations 467-472

Hyperobjects are a good candidate for what Heidegger calls “the last god,” or what the poet Hölderlin calls “the saving power” that grows alongside the dangerous power. 31 We were perhaps expecting an eschatological solution from the sky, or a revolution in consciousness— or, indeed, a people’s army seizing control of the state. What we got instead came too soon for us to anticipate it. Hyperobjects have dispensed with two hundred years of careful correlationist calibration. The panic and denial and right-wing absurdity about global warming are understandable. Hyperobjects pose numerous threats to individualism, nationalism, anti-intellectualism, racism, speciesism, anthropocentrism, you name it. Possibly even capitalism itself.

Kindle Locations 2712-2757

Marxists will argue that huge corporations are responsible for ecological damage and that it is self-destructive to claim that we are all responsible. Marxism sees the “ethical” response to the ecological emergency as hypocrisy. Yet according to many environmentalists and some anarchists, in denying that individuals have anything to do with why Exxon pumps billions of barrels of oil, Marxists are displacing the blame away from humans. This view sees the Marxist “political” response to the ecological emergency as hypocrisy. The ethics– politics binary is a true differend: an opposition so radical that it is in some sense insuperable. Consider this. If I think ethics, I seem to want to reduce the field of action to one-on-one encounters between beings. If I think politics, I hold that one-on-one encounters are never as significant as the world (of economic, class, moral, and so on, relations) in which they take place. These two ways of talking form what Adorno would have called two halves of a torn whole, which nonetheless don’t add up together. Some nice compromise “between” the two is impossible. Aren’t we then hobbled when it comes to issues that affect society as a whole— nay the biosphere as a whole— yet affect us all individually (I have mercury in my blood, and ultraviolet rays affect me unusually strongly)?

Yet the deeper problem is that our (admittedly cartoonish) Marxist and anarchist see the problem as hypocrisy. Hypocrisy is denounced from the standpoint of cynicism. Both the Marxist and the anti-Marxist are still wedded to the game of modernity, in which she who grabs the most cynical “meta” position is the winner: Anything You Can Do, I Can Do Meta. Going meta has been the intellectual gesture par excellence for two centuries. I am smarter than you because I can see through you. You are smarter than they are because you ground their statements in conditions of possibility. From a height, I look down on the poor fools who believe what they think. But it is I who believes, more than they. I believe in my distance, I believe in the poor fools, I believe they are deluded. I have a belief about belief: I believe that belief means gripping something as tightly as possible with my mind. Cynicism becomes the default mode of philosophy and of ideology. Unlike the poor fool, I am undeluded— either I truly believe that I have exited from delusion, or I know that no one can, including myself, and I take pride in this disillusionment.

This attitude is directly responsible for the ecological emergency, not the corporation or the individual per se, but the attitude that inheres both in the corporation and in the individual, and in the critique of the corporation and of the individual. Philosophy is directly embodied in the size and shape of a paving stone, the way a Coca Cola bottle feels to the back of my neck, the design of an aircraft, or a system of voting. The overall guiding view, the “top philosophy,” has involved a cynical distance. It is logical to suppose that many things in my world have been affected by it— the way a shopping bag looks, the range of options on the sports channel, the way I think Nature is “over yonder.” By thinking rightness and truth as the highest possible elevation, as cynical transcendence, I think Earth and its biosphere as the stage set on which I prance for the amusement of my audience. Indeed, cynicism has already been named in some forms of ideology critique as the default mode of contemporary ideology. 48 But as we have seen, cynicism is only hypocritical hypocrisy.

Cynicism is all over the map: left, right, green, indifferent. Isn’t Gaian holism a form of cynicism? One common Gaian assertion is that there is something wrong with humans. Nonhumans are more Natural. Humans have deviated from the path and will be wiped out (poor fools!). No one says the same about dolphins, but it’s just as true. If dolphins go extinct, why worry? Dolphins will be replaced. The parts are greater than the whole. A mouse is not a mouse if it is not in the network of Gaia. 49 The parts are replaceable. Gaia will replace humans with a less defective component. We are living in a gigantic machine— a very leafy one with a lot of fractals and emergent properties to give it a suitably cool yet nonthreatening modern aesthetic feel.

It is fairly easy to discern how refusing to see the big picture is a form of what Harman calls undermining. 50 Undermining is when things are reduced to smaller things that are held to be more real. The classic form of undermining in contemporary capitalism is individualism: “There are only individuals and collective decisions are ipso facto false.” But this is a problem that the left, and environmentalism more generally, recognize well.

The blind spot lies in precisely the opposite direction: in how common ideology tends to think that bigger is better or more real. Environmentalism, the right, and the left seem to have one thing in common: they all hold that incremental change is a bad thing. Yet doesn’t the case against incrementalism, when it comes to things like global warming, amount to a version of what Harman calls overmining, in the domain of ethics and politics? Overmining is when one reduces a thing “upward” into an effect of some supervenient system (such as Gaia or consciousness). 51 Since bigger things are more real than smaller things, incremental steps will never accomplish anything. The critique of incrementalism laughs at the poor fools who are trying to recycle as much as possible or drive a Prius. By postponing ethical and political decisions into an idealized future, the critique of incrementalism leaves the world just as it is, while maintaining a smug distance toward it. In the name of the medium-sized objects that coexist on Earth (aspen trees, polar bears, nematode worms, slime molds, coral, mitochondria, Starhawk, and Glenn Beck), we should forge a genuinely new ethical view that doesn’t reduce them or dissolve them.

 

Hyperballad and Hyperobjects

Morton’s use of the term ‘hyperobjects’ was inspired by Björk’s 1996 single ‘Hyperballad’
(Wikipedia)

Björk
by Timothy Morton

Björk and I think that there is a major cultural shift going on around the world towards something beyond cynical reason and nihilism, as more and more it becomes impossible not to have consideration for nonhumans in everything we do. Hopefully this piece we made contributes to that somehow.

I was so lucky to be doing this while she was mixing her album with some of the nicest and most incredible musicians/producers I’ve ever met…great examples of this shift beyond cynical reason…

Here is something I think is so so amazing, the Subtle Abuse mix of “Hyperballad.” Car parts, bottles, cutlery–all the objects, right? Not to mention Björk’s body “slamming against those rocks.” It’s a veritable Latour Litany… And the haunting repetition…

Dark Ecological Chocolate
by Timothy Morton

This being-an-object is intimately related with the Kantian beauty experience, wherein I find experiential evidence without metaphysical positing that at least one other being exists. The Sadness is the attunement of coexistence stripped of its conceptual content. Since the rigid anthropocentric standard of taste with its refined distances has collapsed, it becomes at this level impossible to rebuild the distinction we lost in The Ethereal between being interested or concerned with (this painting, this polar bear) and being fascinated by… Being interested means I am in charge. Being fascinated means that something else is. Beauty starts to show the subscendent wiring under the board.

Take Björk. Her song “Hyperballad” is a classic example of what I’m trying to talk about here. She shows you the wiring under the board of an emotion, the way a straightforward feeling like I love you is obviously not straightforward at all, so don’t write a love song like that, write one that says you’re sitting on top of this cliff, and you’re dropping bits and pieces of the edge like car parts, bottles and cutlery, all kinds of not-you nonhuman prosthetic bits that we take to be extensions of our totally integrated up to date shiny religious holistic selves, and then you picture throwing yourself off, and what would you look like—to the you who’s watching you still on the edge of the cliff—as you fell, and when you hit the bottom would you be alive or dead, would you look awake or asleep, would your eyes be closed, or open?

When you experience beauty you experience evidence in your inner space that at least one thing that isn’t you exists. An evanescent footprint in your inner space—you don’t need to prove that things are real by hitting them or eating them. A nonviolent coexisting without coercion. There is an undecidability between two entities—me and not-me, the thing. Beauty is sad because it is ungraspable; there is an elegiac quality to it. When we grasp it withdraws, like putting my hand into water. Yet it appears.

Beauty is virtual: I am unable to tell whether the beauty resides in me or in the thing—it is as if it were in the thing, but impossible to pin down there. The subjunctive, floating “as if” virtual reality of beauty is a little queasy—the thing emits a tractor beam in whose vortex I find myself; I veer towards it. The aesthetic dimension says something true about causality in a modern age: I can’t tell for sure what the causes and effects are without resorting to illegal metaphysical moves.[14] Something slightly sinister is afoot—there is a basic entanglement such that I can’t tell who or what started it.

Beauty is the givenness of data. A thing impinges on me before I can contain it or use it or think it. It is as if I hear the thing breathing right next to me. From the standpoint of agricultural white patriarchy, something slightly “evil” is happening: something already has a grip on us, and this is demonic insofar as it is “from elsewhere.” This “saturated” demonic proximity is the essential ingredient of ecological being and ecological awareness, not some Nature over yonder.[15]

Interdependence, which is ecology, is sad and contingent. Because of interdependence, when I’m nice to a bunny rabbit I’m not being nice to bunny rabbit parasites. Amazing violence would be required to try to fit a form over everything all at once. If you try then you basically undermine the bunnies and everything else into components of a machine, replaceable components whose only important aspect is their existence

12 Rules for Potential School Shooters

In response to the Parkland school shooting, Jared Sichel takes a different perspective. He puts it into the context of Jordan Peterson’s 12 Rules for Life.

I’m not sure of my opinion on this article, as the title of this post might suggest. We should take Peterson seriously, but it is hard to know how his message applies to the extreme cases of young males who are struggling to such an extent that they might commit mass violence. That is a lot to ask of a public intellectual, even one with clinical psychology experience, as Peterson has.

The article begins with a general description of American school shooters, the implication being that by knowing who these people are we could help them before they turn violent (via the sage advice of a professorial father figure):

“He, like every one of America’s other young, school shooters since Columbine, is male. And like many, he grew up without a father present, is not socialized, is a loner, is not religious, sees himself as a victim, is angry and depressed, wants to get even, is attracted to violence, and meticulously planned his final, redemptive, act of chaos.”

I wanted to break this down a bit (although in less detail than I did with an earlier post on the demographics of violence). Let me start with the religious component. Whatever they identify or don’t identify as, many and maybe most school shooters were raised Christian and one wonders if that plays a role in their often expressing a loss of meaning, an existential crisis, etc. Birgit Pfeifer and Ruard R. Ganzevoort focus on the religious-like concerns that obsess so many school shooters and note that many of them had religious backgrounds:

“Traditionally, religion offers answers to existential concerns. Interestingly, school shootings have occurred more frequently in areas with a strong conservative religious population (Arcus 2002). Michael Carneal (Heath High School shooting, 1997, Kentucky) came from a family of devoted members of the Lutheran Church. Mitchell Johnson (Westside Middle School shooting, 1998, Arkansas) sang in the Central Baptist Church youth choir (Newman et al. 2004). Dylan Klebold (Columbine shooting, 1999, Colorado) attended confirmation classes in accordance with Lutheran tradition. However, not all school shooters have a Christian background. Some of them declare themselves atheists…” (The Implicit Religion of School Shootings).

Princeton sociologist Katherine Newman, in studying school shootings, has noted that, “School rampage shootings tend to happen in small, isolated or rural communities. There isn’t a very direct connection between where violence typically happens, especially gun violence in the United States, and where rampage shootings happen” (Common traits of all school shooters in the U.S. since 1970).

It is quite significant that these American mass atrocities are concentrated in “small, isolated or rural communities” that are “frequently in areas with a strong conservative religious population”. That might more precisely indicate who these school shooters are and what they are reacting to. One might also note that rural areas in general, and the South specifically, have high rates of gun-related deaths, although many are listed as ‘accidental’; in any case, most rural shootings involve people who know each other, which is also true of school shootings.

Sichel goes on to focus on one particular descriptor of so many school shooters, that they are young males: “America is in the midst of a well-documented crisis of young males. […] “Boys are suffering in the modern world,” Peterson writes. […] Thus “toxic masculinity.” […] The combination of a toxic culture and broken homes has produced millions of anxious, confused, and even angry, teenage and young adult males.”

I’d put it more simply. America is in the midst of a well-documented crisis. Full stop. The crisis is found in every demographic and area of American society, even though it impacts people differently and manifests in diverse forms of dysfunction and unhappiness.

But I might emphasize the point that some of the worst harmed are GenXers, which is to say the middle-aged, especially certain demographics of white males in the lower classes and rural areas (harmed most in the relative sense of having experienced the greatest decline, being the only demographic with worsening mortality rates). And I could add that GenXers — a generation with high rates of violent crime and social problems — are probably the majority of the parents of today’s youth, which is to say the parents of these school shooters and other struggling young men (and young women).

James Gilligan, among other issues, discusses this in terms of loss of status and loss of hope. This leads to stress and short term thinking, shame and frustration. Et cetera. It’s a whole mess of factors, across multiple generations, especially in considering how GenXers were the first mass incarcerated generation which caused much devastation, from absentee fathers to impoverished communities. It’s not only individuals who are hurting but also families and communities.

It just so happens that yesterday I wrote a post about American violence. Based on extensive data, I made the argument that lead toxicity and inequality are the most strongly proven explanations of violence, both in its increase and decrease. And it must be noted that violence in the US has been on a steep decline since the 1990s.

School shootings and other mass murders, however, are a different kind of action. Lead toxicity has been dealt with to some degree, though less so in poor communities (both inner-city and rural communities full of old lead pipes and old buildings with lead paint), but inequality keeps rising, and that is one of the clearest predictors of a wide variety of social problems. And James Gilligan, as I discuss in that post, goes into great detail about why high inequality affects males differently than females. As part of his analysis of the stresses of inequality, he speaks of a culture of shame that particularly hits young men hard. To his credit, Jordan Peterson does admit that inequality is an important factor (as I recall, he discusses it in an interview with Russell Brand).

I’ll respond to one last bit from Sichel’s article: “And the aim of your life, Peterson argues, should not be happiness. He does not belittle it as a worthy pursuit among many, but “in a crisis, the inevitable suffering that life entails can rapidly make a mockery of the idea that happiness is the proper pursuit of the individual.””

I wonder what Jordan Peterson would think of Anu Partanen’s The Nordic Theory of Everything. She clarifies the issue of individualism. The United States is simultaneously too individualistic and not individualistic enough. Or one could say that few Americans take seriously what individualism means or could mean, instead being mired in the pseudo-individualistic rhetoric that authoritarianism hides behind.

Partanen argues that the Nordic social democracies, on a practical level, have greater individual rights and freedoms. Individual happiness can only be attained through the public good. Nordic laws treat people as individuals, whereas US law prioritizes the family, which Partanen sees as oppressive in how it is used to undermine public responsibility. The American nuclear family is expected to hold up the entirety of society, an impossible task.

This relates to the difference between the cultural notion of Germanic freedom (etymologically related to ‘friend’) and the legalistic tradition of Roman liberty (simply meaning one isn’t a slave). But I don’t think Partanen discusses this distinction. Maybe American (pseudo-)individualism is too legalistic. And maybe it is precisely because individualism is less of a political football in Nordic countries that genuine individualism is possible there. Individualism can be a result of a functioning social democracy, but not a starting point as an abstract ideal.

 

* * *

Here is a relevant section from my violence post where I share a passage from Gilligan’s book. He goes into more detail about other factors influencing males, specifically young males. But this particular passage fits in with Partanen’s view. Simply put, it can’t so easily be blamed on family breakdown, missing fathers, and single mothers.

Preventing Violence
by James Gilligan
Kindle Locations 1218-1256

Single-Parent Families Another factor that correlates with rates of violence in the United States is the rate of single-parent families: children raised in them are more likely to be abused, and are more likely to become delinquent and criminal as they grow older, than are children who are raised by two parents. For example, over the past three decades those two variables—the rates of violent crime and of one-parent families—have increased in tandem with each other; the correlation is very close. For some theorists, this has suggested that the enormous increase in the rate of youth violence in the U.S. over the past few decades has been caused by the proportionately similar increase in the rate of single-parent families.

As a parent myself, I would be the first to agree that child-rearing is such a complex and demanding task that parents need all the help they can get, and certainly having two caring and responsible parents available has many advantages over having only one. In addition, children, especially boys, can be shown to benefit in many ways, including diminished risk of delinquency and violent criminality, from having a positive male role-model in the household. The adult who is most often missing in single-parent families is the father. Some criminologists have noticed that Japan, for example, has practically no single-parent families, and its murder rate is only about one-tenth as high as that of the United States.

Sweden’s rate of one-parent families, however, has grown almost to equal that in the United States, and over the same period (the past few decades), yet Sweden’s homicide rate has also been on average only about one-tenth as high as that of the U.S., during that same time. To understand these differences, we should consider another variable, namely, the size of the gap between the rich and the poor. As stated earlier, Sweden and Japan both have among the lowest degrees of economic inequity in the world, whereas the U.S. has the highest polarization of both wealth and income of any industrialized nation. And these differences exist even when comparing different family structures. For example, as Timothy M. Smeeding has shown, the rate of relative poverty is very much lower among single-parent families in Sweden than it is among those in the U.S. Even more astonishing, however, is the fact that the rate of relative poverty among single-parent families in Sweden is much lower than it is among two-parent families in the United States (“Financial Poverty in Developed Countries,” 1997). Thus, it would seem that however much family structure may influence the rate of violence in a society, the overall social and economic structure of the society—the degree to which it is or is not stratified into highly polarized upper and lower social classes and castes—is a much more powerful determinant of the level of violence.

There are other differences between the cultures of Sweden and the U.S. that may also contribute to the differences in the correlation between single-parenthood and violent crime. The United States, with its strongly Puritanical and Calvinist cultural heritage, is much more intolerant of both economic dependency and out-of-wedlock sex than Sweden. Thus, the main form of welfare support for single-parent families in the U.S. (until it was ended a year ago) A.F.D.C., Aid to Families with Dependent Children, was specifically denied to families in which the father (or any other man) was living with the mother; indeed, government agents have been known to raid the homes of single mothers with no warning in the middle of the night in order to “catch” them in bed with a man, so that they could then deprive them (and their children) of their welfare benefits. This practice, promulgated by politicians who claimed that they were supporting what they called “family values,” of course had the effect of destroying whatever family life did exist. Fortunately for single mothers in Sweden, the whole society is much more tolerant of people’s right to organize their sexual life as they wish, and as a result many more single mothers are in fact able to raise their children with the help of a man.

Another difference between Sweden and the U.S. is that fewer single mothers in Sweden are actually dependent on welfare than is true in the U.S. The main reason for this is that mothers in Sweden receive much more help from the government in getting an education, including vocational training; more help in finding a job; and access to high-quality free childcare, so that mothers can work without leaving their children uncared for. The U.S. system, which claims to be based on opposition to dependency, thus fosters more welfare dependency among single mothers than Sweden’s does, largely because it is so much more miserly and punitive with the “welfare” it does provide. Even more tragically, however, it also fosters much more violence. It is not single motherhood as such that causes the extremely high levels of violence in the United States, then; it is the intense degree of shaming to which single mothers and their children are exposed by the punitive, miserly, Puritanical elements that still constitute a powerful strain in the culture of the United States.

* * *

After writing this post, another factor occurred to me related to this.

There aren’t only the chemicals kids get accidentally exposed to — far from being limited to heavy metal toxins, these include estrogen-like chemicals in plastics and the food supply, not to mention a thousand other largely untested chemicals that surround us on a daily basis — but also those we intentionally prescribe to them, such as the ever more widely used psychiatric medications, most problematically when prescribed to still-developing youth. A relevant example is the ADD/ADHD medications that are given mostly to boys to get them, as some argue, to act less like boys and more like girls. And these drugs have been proven to permanently alter neurocognitive development, specifically affecting areas of the brain related to motivation.

Dr. Leonard Sax in Boys Adrift (see my post) argues that, by way of chemicals and drugs along with schools constraining and punishing typical male behavior, we have now raised two generations of boys who are stunted (physiologically, cognitively, and psychologically). This is seen in the growing gender disparity of sexual development, girls maturing earlier than ever before and boys reaching puberty later than in previous generations. The consequence is that more young women are now attending and graduating from college, including in many fields (e.g., business management) that used to be dominated by males.

It’s an even wider problem than this. The dramatic changes are also seen in the larger ecosystem. Males of other species are showing effects such as shrinking testicles and even egg-laying, the latter being, by definition, not something males of any species do. It appears to be a multi-species gender problem. Boys aren’t just adrift but stunted and maybe mutating. Why would anyone be surprised that there is something severely problematic going on and that it is pervasive? Simplistic psychological and social explanations, especially of the culture war variety, are next to worthless.

Take the challenges targeting the young, especially boys, and combine them with the worsening problems of inequality and segregation, classism and racism, climate change and late stage capitalism, and all the rest. The worst cases of lost young males are indicators of deeper troubles coursing through our entire society. The stresses in the world right now are immense, leaving many to feel overwhelmed. If anything, I’m surprised there isn’t far more violence. With the increasingly obvious decline and dysfunction of the United States, I keep waiting for a massive wave of social turmoil, riots, revolts, assassinations, and terrorist attacks. The occasional school shooting is maybe just a sign of what is to come.

Cultural Body-Mind

Daniel Everett is an expert on the Piraha, although he has studied other tribal cultures. It’s unsurprising, then, to find him making the same observations in different books. One particular example (seen below) is about bodily form. I bring it up because it contradicts much of the right-wing and reactionary ideology found in genetic determinism, race realism, evolutionary psychology, and present HBD (as opposed to the earlier human biodiversity theory originated by Jonathan Marks).

From the second book below, the excerpt is part of a larger section where Everett responded to the evolutionary psychologist John Tooby, the latter arguing that there is no such thing as ‘culture’ and hence everything is genetic or otherwise biological. Everett’s use of dark matter of the mind is his way of attempting to get at a more deeply complex view. This dark matter is of the mind but also of the body. But he isn’t the only person to make such physiological observations.

The same point was emphasized in reading Ron Schmid’s Primal Nutrition. On page 57, there are some photographs showing healthy individuals from traditional communities. In one set of photographs, four Melanesian boys are shown who look remarkably similar. “These four boys lived on four different islands and were not related. Each had nutrition adequate for the development of the physical pattern typical of Melanesian males; thus their similar appearance.” This demonstrates non-determinism and non-essentialism.

* * *

How Language Began:
The Story of Humanity’s Greatest Invention

by Daniel L. Everett
pp. 220-221

Culture, patterns of being – such as eating, sleeping, thinking and posture – have been cultivated. A Dutch individual will be unlike the Belgian, the British, the Japanese, or the Navajo, because of the way that their minds have been cultivated – because of the roles they play in a particular set of values and because of how they define, live out and prioritise these values, the roles of individuals in a society and the knowledge they have acquired.

It would be worth exploring further just how understanding language and culture together can enable us better to understand each. Such an understanding would also help to clarify how new languages or dialects or any other variants of speech come about. I think that this principle ‘you talk like who you talk with’ represents all human behaviour. We also eat like who we eat with, think like those we think with, etc. We take on a wide range of shared attributes – our associations shape how we live and behave and appear – our phenotype. Culture affects our gestures and our talk. It can even affect our bodies. Early American anthropologist Franz Boas studied in detail the relationship between environment, culture and bodily form. Boas made a solid case that human body types are highly plastic and change to adapt to local environmental forces, both ecological and cultural.

Less industrialised cultures show biology-culture connections. Among the Pirahã, facial features range impressionistically from slightly Negroid to East Asian, to Native American. Differences between villages or families may have a biological basis, originating in different tribes merging over the last 200 years. One sizeable group of Pirahãs (perhaps thirty to forty) – usually found occupying a single village – are descendants of the Torá, a Chapakuran-speaking group that emigrated to the Maici-Marmelos rivers as long as two centuries ago. Even today Brazilians refer to this group as Torá, though the Pirahãs refer to them as Pirahãs. They are culturally and linguistically fully integrated into the Pirahãs. Their facial features are somewhat different – broader noses, some with epicanthic folds, large foreheads – giving an overall impression of similarity to East Asian features. ‡ Yet body dimensions across all Pirahãs are constant. Men’s waists are, or were when I worked with them, uniformly 27 inches (68 cm), their average height 5 feet 2 inches (157.5 cm) and their average weight 55 kilos (121 pounds). The Pirahã phenotypes are similar not because all Pirahãs necessarily share a single genotype, but because they share a culture, including values, knowledge of what to eat and values about how much to eat, when to eat and the like.

These examples show that even the body does not escape our earlier observation that studies of culture and human social behaviour can be summed up in the slogan that ‘you talk like who you talk with’ or ‘grow like who you grow with’. And the same would have held for all our ancestors, even erectus .

Dark Matter of the Mind:
The Culturally Articulated Unconscious

by Daniel L. Everett
Kindle Locations 1499-1576

Thus while Tooby may be absolutely right that to have meaning, “culture” must be implemented in individual minds, this is no indictment of the concept. In fact, this requirement has long been insisted on by careful students of culture, such as Sapir. Yet unlike, say, Sapir, Tooby has no account of how individual minds— like ants in a colony or neurons in a brain or cells in a body— can form a larger entity emerging from multi-individual sets of knowledge, values, and roles. His own nativist views offer little insight into the unique “unconscious patterning of society” (to paraphrase Sapir) that establishes the “social set” to which individuals belong.

The idea of culture, after all, is just that certain patterns of being— eating, sleeping, thinking, posture, and so forth— have been cultivated and that minds arising from one such “field” will not be like minds cultivated in another “field.” The Dutch individual will be unlike the Belgian, the British, the Japanese, or the Navajo, because of the way that his or her mind has been cultivated— because of the roles he or she plays in a particular value grouping, because of the ranking of values that he or she has come to share, and so on.

We must be clear, of course, that the idea of “cultivation” we are speaking of here is not merely of minds, but of entire individuals— their minds a way of talking about their bodies. From the earliest work on ethnography in the US, for example, Boas showed how cultures affect even body shape. And body shape is a good indication that it is not merely cognition that is effected and affected by culture. The uses, experiences, emotions, senses, and social engagements of our bodies forge the patterns of thought we call mind. […]

Exploring this idea that understanding language can help us understand culture, consider how linguists account for the rise of languages, dialects, and all other local variants of speech. Part of their account is captured in linguistic truism that “you talk like who you talk with.” And, I argue, this principle actually impinges upon all human behavior. We not only talk like who we talk with, but we also eat like who we eat with, think like those we think with, and so on. We take on a wide range of shared attributes; our associations shape how we live and behave and appear— our phenotype. Culture can affect our gestures and many other aspects of our talk. Boas (1912a, 1912b) takes up the issue of environment, culture, and bodily form. He provides extensive evidence that human body phenotypes are highly plastic and subject to nongenetic local environmental forces (whether dietary, climatological, or social). Had Boas lived later, he might have studied a very clear and dramatic case; namely, the body height of Dutch citizens before and after World War II. This example is worth a close look because it shows that bodies— like behaviors and beliefs— are cultural products and shapers simultaneously.

The curious case of the Netherlanders fascinates me. The Dutch went from among the shortest peoples of Europe to the tallest in the world in just over one century. One account simplistically links the growth in Dutch height with the change in political system (Olson 2014): “The Dutch growth spurt of the mid-19th century coincided with the establishment of the first liberal democracy. Before this time, the Netherlands had grown rich off its colonies but the wealth had stayed in the hands of the elite. After this time, the wealth began to trickle down to all levels of society, the average income went up and so did the height.” Tempting as this single account may be, there were undoubtedly other factors involved, including gene flow and sexual selection between Dutch and other (mainly European) populations, that contribute to explain European body shape relative to the Dutch. But democracy, a new political change from strengthened and enforced cultural values, is a crucial component of the change in the average height of the Dutch, even though the Dutch genotype has not changed significantly in the past two hundred years. For example, consider figures 2.1 and 2.2. In 1825, US male median height was roughly ten centimeters (roughly four inches) taller than the average Dutch. In the 1850s, the median heights of most males in Europe and the USA were lowered. But then around 1900, they begin to rise again. Dutch male median height lagged behind that of most of the world until the late ’50s and early ’60s, when it began to rise at a faster rate than all other nations represented in the chart. By 1975 the Dutch were taller than Americans. Today, the median Dutch male height (183 cm, or roughly just above six feet) is approximately three inches more than the median American male height (177 cm, or roughly five ten). Thus an apparent biological change turns out to be largely a cultural phenomenon.

To see this culture-body connection even more clearly, consider figure 2.2. In this chart, the correlation between wealth and height emerges clearly (not forgetting that the primary determiner of height is the genome). As wealth grew, so did men (and women). This wasn’t matched in the US, however, even though wealth also grew in the US (precise figures are unnecessary). What emerges from this is that Dutch genes are implicated in the Dutch height transformation, from below average to the tallest people in the world. And yet the genes had to await the right cultural conditions before they could be so dramatically expressed. Other cultural differences that contribute to height increases are: (i) economic (e.g., “white collar”) background; (ii) size of family (more children, shorter children); (iii) literacy of the child’s mother (literate mothers provide better diets); (iv) place of residence (residents of agricultural areas tend to be taller than those in industrial environments— better and more plentiful food); and so on (Khazan 2014). Obviously, these factors all have to do with food access. But looked at from a broader angle, food access is clearly a function of values, knowledge, and social roles— that is, culture.

Just as with the Dutch, less-industrialized cultures show culture-body connections. For example, Pirahã phenotype is also subject to change. Facial features among the Pirahãs range impressionistically from slightly Negroid to East Asian to American Indian (to use terms from physical anthropology). Phenotypical differences between villages or families seem to have a biological basis (though no genetic tests have been conducted). This would be due in part to the fact Pirahã women have trysts with various non-Pirahã visitors (mainly river traders and their crews, but also government workers and contract employees on health assistance assignments, demarcating the Pirahã reservation, etc.). The genetic differences are also partly historical. One sizeable group of Pirahãs (perhaps thirty to forty)— usually found occupying a single village— are descendants of the Torá, a Chapakuran-speaking group that emigrated to the Maici-Marmelos rivers as long as two hundred years ago. Even today Brazilians refer to this group as Torá, though the Pirahãs refer to them as Pirahãs. They are culturally and linguistically fully integrated into the Pirahãs. Their facial features are somewhat different— broader noses; some with epicanthic folds; large foreheads— giving an overall impression of similarity to Cambodian features. This and other evidence show us that the Pirahã gene pool is not closed.[4] Yet body dimensions across all Pirahãs are constant. Men’s waists are or were uniformly 83 centimeters (about 32.5 inches), their average height 157.5 centimeters (five two), and their average weight 55 kilos (about 121 pounds).

I learned about the uniformity in these measurements over the past several decades as I have taken Pirahã men, women, and children to stores in nearby towns to purchase Western clothes, when they came out of their villages for medical help. (The Pirahãs always asked that I purchase Brazilian clothes for them so that they would not attract unnecessary stares and comments.) Thus I learned that the measurements for men were nearly identical. Biology alone cannot account for this homogeneity of body form; culture is implicated as well. For example, Pirahãs raised since infancy outside the village are somewhat taller and much heavier than Pirahãs raised in their culture and communities. Even the body does not escape our earlier observation that studies of culture and human social behavior can be summed up in the slogan that “you talk like who you talk with” or “grow like who you grow with.”

Connecting the Dots of Violence

When talking to people or reading articles, alternative viewpoints and interpretations often pop up in my mind. It’s easy for me to see multiple perspectives simultaneously, to hold multiple ideas. I have a creative mind, but I’m hardly a genius. So, why does this ability seem so rare?

The lack of this ability is not simply a lack of knowledge. I spent the first half of my life in an overwhelming state of ignorance because of inferior public education, exacerbated by a learning disability and depression. But I always had the ability of divergent thinking. It’s just hard to do much with divergent thinking without greater knowledge to work with. I’ve since then remedied my state of ignorance with an extensive program of self-education.

I still don’t know exactly what this ability to see what others don’t see consists of. There is an odd disconnect I regularly come across, even among the well educated. I encountered a perfect example of this from Yes! Magazine. It’s an article by Mike Males, Gun Violence Has Dropped Dramatically in 3 States With Very Different Gun Laws.

In reading that article, I immediately noticed the lack of any mention of lead toxicity. Then I went to the comments section and saw other people noticed this as well. The divergent thinking it takes to make this connection doesn’t require all that much education and brain power. I’m not particularly special in seeing what the author didn’t see. What is strange is precisely that the author didn’t see it, that the same would be true for so many like him. It is strange because the author isn’t some random person opinionating on the internet.

This became even stranger when I looked into Mike Males’ previous writing elsewhere. In the past, he himself had made this connection between violent crime and lead toxicity. Yet  somehow the connection slipped from his mind in writing this article. This more recent article was in response to an event, the Parkland school shooting in Florida. And the author seems to have gotten caught up in the short term memory of the news cycle, not only unable to connect it to other data but failing to connect it to his own previous writing on that data. Maybe it shows the power of context-dependent memory. The school shooting was immediately put into the context of gun violence and so the framing elicited certain ways of thinking while excluding others. Like so many others, the author got pulled into the media hype of the moment, entirely forgetting what he otherwise would have considered.

This is how people can simultaneously know and not know all kinds of things. The human mind is built on vast disconnections, maybe because there has been little evolutionary advantage to constantly perceive larger patterns of causation beyond immediate situations. I’m not entirely sure what to make of this. It’s all too common. The thing is when such a disconnect happens the person is unaware of it — we don’t know what we don’t know and, as bizarre as it sounds, sometimes we don’t even know what we do know. So, even if I’m better than average at divergent thinking, there is no doubt that in other areas I too demonstrate this same cognitive limitation. It’s hard to see what doesn’t fit into our preconception, our worldview.

For whatever reason, lead toxicity has struggled to become included within public debate and political framing. Lead toxicity doesn’t fit into familiar narratives and the dominant paradigm, specifically in terms of a hyper-individualistic society. Even mental health tends to fall into this attitude of emphasizing the individual level, such as how the signs of mental illness could have been detected so that intervention could have stopped an individual from committing mass murder. It’s easier to talk about someone being crazy and doing crazy things than to question what caused them to become that way, be it toxicity or something else.

As such, Males’ article focuses narrowly without even entertaining fundamental causes, and not only in its overlooking of lead toxicity. This is odd. We already know so much about what causes violence. The author himself has written multiple times on the topic, specifically in his professional capacity as a Senior Research Fellow at the Center on Juvenile and Criminal Justice (CJCJ). It’s his job to look for explanations and to communicate them, having written several hundred articles for CJCJ alone.

The human mind tends to go straight to the obvious, that is to say what is perceived as obvious within conventional thought. If the problem is gun violence, then the solution is gun control. Like most Americans (and increasingly so), I support more effective gun control. Still, that is merely dealing with the symptoms and doesn’t explain why someone wants to kill others. The views of the American public, though, don’t stop there. What the majority blames mass gun violence on is mental illness, a rather nebulous explanation. Mental illness also is a symptom.

That is what stands out about the omission I’m discussing here. Lead toxicity is one of the most strongly proven causes of neurocognitive problems: stunted brain development, lowered IQ, learning disabilities, autism and Asperger’s, ADHD, depression, impulsivity, nervousness, irritability, anger, aggression, etc. All the heavy metals mess people up in the head, along with causing physical ailments such as hearing impairment, asthma, obesity, kidney failure, and much else. And that is only one toxin among many; mercury is another widespread pollutant, and there are many beyond that — all of it directly relevant to the issue of violent behavior and crime, as with the high levels of toxins found in mass murderers:

“Three studies in the California prison system found those in prison for violent activity had significantly higher levels of hair manganese than controls, and studies of an area in Australia with much higher levels of violence as well as autopsies of several mass-murderers also found high levels of manganese to be a common factor. Such violent behavior has long been known in those with high manganese exposure. Other studies in the California prison and juvenile justice systems found that those with 5 or more essential mineral imbalances were 90% more likely to be violent and 50% more likely to be violent with two or more mineral imbalances. A study analyzing hair of 28 mass-murderers found that all had high metals and abnormal essential mineral levels.”

(See also: Lead was scourge before and after Beethoven by Kristina R. Anderson; Violent Crime, Hyperactivity and Metal Imbalance by Neil Ward; The Seeds that Give Birth to Terrorism by Kimberly Key; and An Updated Lead-Crime Roundup for 2018 by Kevin Drum)

Besides toxins, other factors have also been seriously studied. For example, high inequality is strongly correlated to increased mental illness rates along with aggressive, risky and other harmful behaviors (as written about in Keith Payne’s The Broken Ladder; an excerpt can be found at the end of this post). And indeed, even as lead toxicity has decreased overall (while remaining a severe problem among the poor), inequality has worsened.

There are multiple omissions going on here. And they are related. Where there are large disparities of wealth, there are also large disparities of health. Because of environmental classism and racism, toxic dumps are more likely to be located in poor and minority communities, along with the problem of old housing with lead paint found where poverty is concentrated, all of it related to a long history of economic and racial segregation. And I would point out that the evidence supports that, along with inequality, segregation creates a culture of distrust — as Eric Uslaner concluded: “It wasn’t diversity but segregation that led to less trust” (Segregation and Mistrust). In post-colonial countries like the United States, inequality and segregation go hand in hand, built on a socioeconomic system of ethnic/racial castes and a permanent underclass that has developed over several centuries. The fact that this is the normal condition of our country makes it all the harder for someone born here to fully sense its enormity. It’s simply the world we Americans have always known — it is our shared reality, rarely perceived for what it is and even more rarely interrogated.

These are far from being problems limited to those on the bottom of society. Lead toxicity ends up impacting a large part of the population. In reference to serious health concerns, Mark Hyman wrote “that nearly 40 percent of all Americans are estimated to have blood levels of lead high enough to cause these problems” (Why Lead Poisoning May Be Causing Your Health Problems). The same thing goes for high inequality, which creates dysfunction all across society, increasing social and health problems even among the upper classes, not to mention breeding an atmosphere of conflict and divisiveness (see James Gilligan’s Preventing Violence; an excerpt can be found at the end of this post). Everyone is worse off amidst the unhappiness and dysfunction of a highly unequal society, which shows up not only in homicides but also in suicides, addiction, and stress-related diseases.

Let’s look at the facts. Besides lead toxicity remaining a major problem in poor communities and old industrial inner cities, the United States has one of the highest rates of inequality in the world and the highest in the Western world, and this problem has been worsening for decades with present levels not seen since the Wall Street crash that led to the Great Depression. To go into the details, Florida has the fifth highest inequality in the United States, according to Mark Price and Estelle Sommeiller, with Florida being a state where “all income growth between 2009 and 2011 accrued to the top 1 percent” (Economic Policy Institute). And Parkland, where the school shooting happened, specifically has high inequality: “The income inequality of Parkland, FL (measured using the Gini index) is 0.529 which is higher than the national average” (DATA USA).
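For anyone unfamiliar with the metric, the Gini index runs from 0 (everyone has the same income) to 1 (one person has everything), so 0.529 indicates a quite lopsided distribution. As a rough illustration of what goes into that number, here is a minimal sketch in Python; the function name and the income figures are my own invention for the example and are not Parkland data.

def gini(incomes):
    """Return the Gini coefficient of a list of non-negative incomes."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard rank-based formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

equal = [50_000] * 10                    # everyone earns the same
skewed = [20_000] * 9 + [2_000_000]      # one person holds most of the income
print(round(gini(equal), 3))             # 0.0
print(round(gini(skewed), 3))            # roughly 0.82

The point is simply that the index summarizes how far a whole income distribution departs from perfect equality, which is why a single number like 0.529 can stand in for an entire local economy.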

In a sense, it is true that guns don’t kill people, that people kill people. But then again, it could be argued that people don’t kill people, that entire systems of problems trigger the violence that kills people, not to mention the immensity of slow violence that kills people in even higher numbers. Lead toxicity is a great example of slow violence because of the roughly 20-year lag time needed to fully measure its effects, which disallows the direct observation and visceral experience of causality and consequence. The topic of violence is important taken on its own terms (e.g., denying gun sales and permits to those with a history of violence would decrease gun violence), but my concern is exploring why it is so difficult to talk about violence in a larger and more meaningful way.
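To make the slow violence point concrete, here is a minimal sketch of that roughly 20-year lag, using invented numbers rather than real data: childhood exposure in one decade only shows up in the crime statistics of young adults about two decades later, so the two series look unrelated, or even inversely related, unless one of them is shifted in time.

def pearson(xs, ys):
    # Plain Pearson correlation for two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lead_index(year):
    # Hypothetical childhood lead-exposure index: rises, peaks around 1970,
    # then falls as leaded gasoline and lead paint are phased out (invented shape).
    return max(0, 30 - abs(year - 1970))

years = list(range(1950, 2000))
lead = [lead_index(y) for y in years]
crime = [lead_index(y - 20) for y in years]   # crime mirrors exposure 20 years earlier

print(round(pearson(lead, crime), 2))              # about -0.4: side by side, no apparent link
print(round(pearson(lead[:-20], crime[20:]), 2))   # 1.0 once the 20-year lag is applied

The particular numbers are made up; the point is that a cause operating on a generational delay is nearly invisible to the kind of year-by-year, news-cycle comparison that dominates public debate.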

Lead toxicity is a great example for many reasons. It has been hard for advocates to get people to pay attention and take this seriously. Lead toxicity momentarily fell under the media spotlight with the Flint, Michigan case but that was just one of thousands of places with such problems, many of them with far worse rates. As always, as the media’s short attention span turned to some new shiny object, the lead toxicity crisis was forgotten again, as the poisoning continues. You can’t see it happening because it is always happening, an ever present tragedy that even when known remains abstract data. It is in the background and so has become part of our normal experience, operating at a level outside of our awareness.

School shootings are better able to capture the public imagination and so make for compelling dramatic narratives that the media can easily spin. Unlike lead toxicity, school shootings and their victims aren’t invisible. Lead toxins are hidden in the soil of playgrounds and the bodies of children (and prisoners who, as disproportionate victims of lead toxicity, are literally hidden away), whereas a bullet leaves dead bodies, splattered blood, terrified parents, and crying students. Neither can inequality compete with such emotional imagery. People can understand poverty because you can see poor people and poor communities, but you can’t see the societal pattern of dysfunction that exists between the dynamics of extreme poverty and extreme wealth. It can’t be seen, can’t be touched or felt, can’t be concretely known in personal experience.

Whether lead toxicity or high inequality, it is yet more abstract data that never quite gets a toehold within the public mind and the moral imagination. Even for those who should know better, it’s difficult for them to put the pieces together.

* * *

Here is the comment I left at Mike Males’ article:

I was earlier noting that Mike Males doesn’t mention lead exposure/toxicity/poisoning. I’m used to this being ignored in articles like this. Still, it’s disappointing.

It is the single most well supported explanation that has been carefully studied for decades. And the same conclusions have been found in other countries. But for whatever reason, public debate has yet to fully embrace this evidence.

Out of curiosity, I decided to do a web search. Mike Males works for Center on Juvenile and Criminal Justice. He writes articles there. I was able to find two articles where he directly and thoroughly discusses this topic:
http://www.cjcj.org/news/5548
http://www.cjcj.org/news/5552

He also mentions lead toxicity in passing in another article:
http://www.cjcj.org/news/9734

And Mike Males’ work gets referenced in a piece by Kevin Drum:
https://www.motherjones.com/kevin-drum/2016/03/kids-are-becoming-less-violent-adults-not-so-much/

This makes it odd that he doesn’t even mention it in passing here in this article. It’s not because he doesn’t know about the evidence, as he has already written about it. So, what is the reason for not offering the one scientific theory that is most relevant to the data he shares?

This seems straightforward to me. Consider the details from the article.

“Over the last 25 years—though other time periods show similar results—New York, California, and Texas show massive declines in gun homicides, ones that far exceed those of any other state. These three states also show the country’s largest decreases in gun suicide and gun accident death rates.”

The specific states in question were among the most polluting and hence polluted states. This means they had high rates of lead toxicity. And that means they had the most room for improvement. It goes without saying that national regulations and local programs will have the greatest impact where there are the worst problems (similar to the reason that, as studies show, it is easier to raise the IQ of the poor than of the wealthy by improving basic conditions).

“These major states containing seven in 10 of the country’s largest cities once had gun homicide rates far above the national average; now, their rates are well below those elsewhere in the country.”

That is as obvious as obvious can be. Yeah, the largest cities are also the places of the largest concentrations of pollution. Hence, one would expect to find the highest rates and largest improvements in lead toxicity, which has been proven to directly correlate to violent crime rates (with causality demonstrated through a dose-response curve, the same methodology used to prove the efficacy of pharmaceuticals).

“The declines are most pronounced in urban young people.”

Once again, this is the complete opposite of surprising. It is exactly what we would expect. Urban areas have the heaviest and most concentrated vehicular traffic along with the pollution that goes with it. And urban areas are often old industrial centers with a century of accumulated toxins in the soil, water, and elsewhere in the environment. These old urban areas are also where the housing affordable to the poor is found, housing that unfortunately is more likely to have old lead paint that is chipping away and turning into dust.

So, problem solved. The great mystery is no more. You’re welcome.

https://www.epa.gov/air-pollution-transportation/accomplishments-and-success-air-pollution-transportation

“Congress passed the landmark Clean Air Act in 1970 and gave the newly-formed EPA the legal authority to regulate pollution from cars and other forms of transportation. EPA and the State of California have led the national effort to reduce vehicle pollution by adopting increasingly stringent standards.”

https://www1.nyc.gov/assets/doh/downloads/pdf/lead/lead-2012report.pdf

“The progress has been dramatic. For both children and adults, the number and severity of poisonings has declined. At the same time, blood lead testing rates have increased, especially in populations at high risk for lead poisoning. This public health success is due to a combination of factors, most notably commitment to lead poisoning prevention at the federal, state and city levels. New York City and New York State have implemented comprehensive policies and programs that support lead poisoning prevention. […]

“New York City’s progress in reducing childhood lead poisoning has been striking. Not only has the number of children with lead poisoning declined —a 68% drop from 2005 to 2012 — but the severity of poisonings has also declined. In 2005, there were 14 children newly identified with blood lead levels of 45 µg/dL and above, and in 2012 there were 5 children. At these levels, children require immediate medical intervention and may require hospitalization for chelation, a treatment that removes lead from the body.

“Forty years ago, tackling childhood lead poisoning seemed a daunting task. In 1970, when New York City established the Health Department’s Lead Poisoning Prevention Program, there were over 2,600 children identified with blood lead levels of 60 µg/dL or greater — levels today considered medical emergencies. Compared with other parts of the nation, New York City’s children were at higher risk for lead poisoning primarily due to the age of New York City’s housing stock, the prevalence of poverty and the associated deteriorated housing conditions. Older homes and apartments, especially those built before 1950, are most likely to contain lead-based paint. In New York City, more than 60% of the housing stock — around 2 million units — was built before 1950, compared with about 22% of housing nationwide.

“New York City banned the use of lead-based paint in residential buildings in 1960, but homes built before the ban may still have lead in older layers of paint. Lead dust hazards are created when housing is poorly maintained, with deteriorated and peeling lead paint, or when repair work in old housing is done unsafely. Young children living in such housing are especially at risk for lead poisoning. They are more likely to ingest lead dust because they crawl on the floor and put their hands and toys in their mouths.

“While lead paint hazards remain the primary source of lead poisoning in New York City children, the number and rate of newly identified cases and the associated blood lead levels have greatly declined.

“Strong Policies Aimed at Reducing Childhood Lead Exposure

“Declines in blood lead levels can be attributed largely to government regulations instituted in the 1960s, 1970s and 1980s that banned or limited the use of lead in gasoline, house paint, water pipes, solder for food cans and other consumer products. Abatement and remediation of lead-based paint hazards in housing, and increased consumer awareness of lead hazards have also contributed to lower blood lead levels.

“New York City developed strong policies to support lead poisoning prevention. Laws and regulations were adopted to prevent lead exposure before children are poisoned and to protect those with elevated blood lead levels from further exposure.”

https://www.motherjones.com/environment/2016/02/lead-exposure-gasoline-crime-increase-children-health/

“But if all of this solves one mystery, it shines a high-powered klieg light on another: Why has the lead/crime connection been almost completely ignored in the criminology community? In the two big books I mentioned earlier, one has no mention of lead at all and the other has a grand total of two passing references. Nevin calls it “exasperating” that crime researchers haven’t seriously engaged with lead, and Reyes told me that although the public health community was interested in her paper, criminologists have largely been AWOL. When I asked Sammy Zahran about the reaction to his paper with Howard Mielke on correlations between lead and crime at the city level, he just sighed. “I don’t think criminologists have even read it,” he said. All of this jibes with my own reporting. Before he died last year, James Q. Wilson—father of the broken-windows theory, and the dean of the criminology community—had begun to accept that lead probably played a meaningful role in the crime drop of the ’90s. But he was apparently an outlier. None of the criminology experts I contacted showed any interest in the lead hypothesis at all.

“Why not? Mark Kleiman, a public policy professor at the University of California-Los Angeles who has studied promising methods of controlling crime, suggests that because criminologists are basically sociologists, they look for sociological explanations, not medical ones. My own sense is that interest groups probably play a crucial role: Political conservatives want to blame the social upheaval of the ’60s for the rise in crime that followed. Police unions have reasons for crediting its decline to an increase in the number of cops. Prison guards like the idea that increased incarceration is the answer. Drug warriors want the story to be about drug policy. If the actual answer turns out to be lead poisoning, they all lose a big pillar of support for their pet issue. And while lead abatement could be big business for contractors and builders, for some reason their trade groups have never taken it seriously.

“More generally, we all have a deep stake in affirming the power of deliberate human action. When Reyes once presented her results to a conference of police chiefs, it was, unsurprisingly, a tough sell. “They want to think that what they do on a daily basis matters,” she says. “And it does.” But it may not matter as much as they think.”

* * *

The Broken Ladder:
How Inequality Affects the Way We Think, Live, and Die

by Keith Payne
pp. 69-80

How extensive are the effects of the fast-slow trade-off among humans? Psychology experiments suggest that they are much more prevalent than anyone previously suspected, influencing people’s behaviors and decisions in ways that have nothing to do with reproduction. Some of the most important now versus later trade-offs involve money. Financial advisers tell us that if we skip our daily latte and instead save that three dollars a day, we could increase our savings by more than a thousand dollars a year. But that means facing a daily choice: How much do I want a thousand dollars in the bank at the end of the year? And how great would a latte taste right now?

The same evaluations lurk behind larger life decisions. Do I invest time and money in going to college, hoping for a higher salary in the long run, or do I take a job that guarantees an income now? Do I work at a regular job and play by the rules, even if I will probably struggle financially all my life, or do I sell drugs? If I choose drugs, I might lose everything in the long run and end up broke, in jail, or dead. But I might make a lot of money today.

Even short-term feelings of affluence or poverty can make people more or less shortsighted. Recall from the earlier chapters that subjective sensations of poverty and plenty have powerful effects, and those are usually based on how we measure ourselves against other people. Psychologist Mitch Callan and colleagues combined these two principles and predicted that when people are made to feel poor, they will become myopic, taking whatever they can get immediately and ignoring the future, and that when they are made to feel rich, they will take the long view.

Their study began by asking research participants a long series of probing questions about their finances, their spending habits, and even their personality traits and personal tastes. They told participants that they needed all this detailed information because their computer program was going to calculate a personalized “Comparative Discretionary Income Index.” They were informed that the computer would give them a score that indicated how much money they had compared with other people who were similar to them in age, education level, personality traits, and so on. In reality, the computer program did none of that, but merely displayed a little flashing progress bar and the words “Calculating. Please wait . . .” Then it provided random feedback to participants, telling half that they had more money than most people like them, and the other half that they had less money than other people like them.

Next, participants were asked to make some financial decisions, and were offered a series of choices that would give them either smaller rewards received sooner or larger rewards received later. For example, they might be asked, “Would you rather have $100 today or $120 next week? How about $100 today or $150 next week?” After they answered many such questions, the researchers could calculate how much value participants placed on immediate rewards, and how much they were willing to wait for a better long-term payoff.
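To make that calculation concrete, here is a minimal sketch of how an implied discount factor can be read off such choices; the function names and example choices are hypothetical illustrations, not Callan and colleagues' actual analysis.

```python
# Minimal sketch (hypothetical, not the study's actual procedure): each question
# like "$100 today or $120 next week?" implies an indifference point. Someone who
# takes $100 now over $120 later values next-week money at less than 100/120 of
# its face value; someone who waits for $150 values it at 100/150 or more.

def indifference_factor(immediate, delayed):
    """Discount factor at which the two options are equally attractive."""
    return immediate / delayed

def interpret_choice(immediate, delayed, chose_immediate):
    f = indifference_factor(immediate, delayed)
    if chose_immediate:
        return f"values next-week money at less than {f:.2f} of face value"
    return f"values next-week money at {f:.2f} of face value or more"

# A participant who takes $100 over $120 next week, but waits for $150:
print(interpret_choice(100, 120, chose_immediate=True))
print(interpret_choice(100, 150, chose_immediate=False))
```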

The study found that, when people felt poor, they tilted to the fast end of the fast-slow trade-off, preferring immediate gratification. But when they felt relatively rich, they took the long view. To underscore the point that this was not simply some abstract decision without consequences in the real world, the researchers performed the study again with a second group of participants. This time, instead of hypothetical choices, the participants were given twenty dollars and offered the chance to gamble with it. They could decline, pocket the money, and go home, or they could play a card game against the computer and take their chances, in which case they either would lose everything or might make much more money. When participants were made to feel relatively rich, 60 percent chose to gamble. When they were made to feel poor, the number rose to 88 percent. Feeling poor made people more willing to roll the dice.

The astonishing thing about these experiments was that it did not take an entire childhood spent in poverty or affluence to change people’s level of shortsightedness. Even the mere subjective feeling of being less well-off than others was sufficient to trigger the live fast, die young approach to life.

Nothing to Lose

Most of the drug-dealing gang members that Sudhir Venkatesh followed were earning the equivalent of minimum wage and living with their mothers. If they weren’t getting rich and the job was so dangerous, then why did they choose to do it? Because there were a few top gang members who were making several hundred thousand dollars a year. They made their wealth conspicuous by driving luxury cars and wearing expensive clothes and flashy jewelry. They traveled with entourages. The rank-and-file gang members did not look at one another’s lives and conclude that this was a terrible job. They looked instead at the top and imagined what they could be. Despite the fact that their odds of success were impossibly low, even the slim chance of making it big drove them to take outrageous risks.

The live fast, die young theory explains why people would focus on the here and now and neglect the future when conditions make them feel poor. But it does not tell the whole story. The research described in Chapter 2 revealed that rates of many health and social problems were higher, even among members of the middle class, in societies where there was more inequality. One of the puzzling aspects of the rapid rise of inequality over the past three decades is that almost all of the change in fortune has taken place at the top. The incomes of the poor and the middle class are not too different from where they were in 1980, once the numbers are adjusted for inflation. But the income and wealth of the top 1 percent have soared, and those of the top one tenth of a percent dwarfed even their increases. How are the gains of the superrich having harmful effects on the health and well-being of the rest of us? […]

As Cartar suspected, when the bees received bonus nectar, they played it safe and fed in the seablush fields. But when their nectar was removed, they headed straight for the dwarf huckleberry fields.

Calculating the best option in an uncertain environment is a complicated matter; even humans have a hard time with it. According to traditional economic theories, rational decision making means maximizing your payoffs. You can calculate your “expected utility” by multiplying the size of the reward by the likelihood of getting it. So, an option that gives you a 90 percent chance of winning $500 has a greater expected utility than an option that gives you a 40 percent chance of winning $1,000 ($500 × .90 = $450 as compared with $1,000 × .40 = $400). But the kind of decision making demonstrated by the bumblebees doesn’t necessarily line up well with the expected utility model. Neither, it turns out, do the risky decisions made by the many other species that also show the same tendency to take big risks when they are needy.

Humans are one of those species. Imagine what you would do if you owed a thousand dollars in rent that was due today or you would lose your home. In a gamble, would you take the 90 percent chance of winning $500, or the 40 percent chance of winning $1,000? Most people would opt for the smaller chance of getting the $1,000, because if they won, their need would be met. Although it is irrational from the expected utility perspective, it is rational in another sense, because meeting basic needs is sometimes more important than the mathematically best deal. The fact that we see the same pattern across animal species suggests that evolution has found need-based decision making to be adaptive, too. From the humble bumblebee, with its tiny brain, to people trying to make ends meet, we do not always seek to maximize our profits. Call it Mick Jagger logic: If we can’t always get what we want, we try to get what we need. Sometimes that means taking huge risks.
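The contrast between the two decision rules is easy to verify with the numbers in the passage. Here is a minimal sketch; the function names are mine, used only to illustrate the arithmetic.

```python
# Sketch of the two decision rules described above, using the passage's numbers.
# Expected-utility logic: maximize probability * payoff.
# Need-based logic: maximize the chance of covering the $1,000 rent due today.

safe_bet = (0.90, 500)     # 90% chance of winning $500
risky_bet = (0.40, 1000)   # 40% chance of winning $1,000
need = 1000                # the rent that must be paid

def expected_value(bet):
    probability, payoff = bet
    return probability * payoff

def chance_of_meeting_need(bet, need):
    probability, payoff = bet
    return probability if payoff >= need else 0.0

print(expected_value(safe_bet), expected_value(risky_bet))    # 450.0 400.0
print(chance_of_meeting_need(safe_bet, need),
      chance_of_meeting_need(risky_bet, need))                # 0.0 0.4
# The safe bet wins on expected value, but only the risky bet can cover the rent.
```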

We saw in Chapter 2 that people judge what they need by making comparisons to others, and the impact of comparing to those at the top is much larger than comparing to those at the bottom. If rising inequality makes people feel that they need more, and higher levels of need lead to risky choices, it implies a fundamentally new relationship between inequality and risk: Regardless of whether you are poor or middle class, inequality itself might cause you to engage in riskier behavior. […]

People googling terms like “lottery tickets” and “payday loans,” for example, are probably already involved in some risky spending. To measure sexual riskiness, we counted searches for the morning-after pill and for STD testing. And to measure drug- and alcohol-related risks, we counted searches for how to get rid of a hangover and how to pass a drug test. Of course, a person might search for any of these terms for reasons unrelated to engaging in risky behaviors. But, on average, if there are more people involved in sex, drugs, and money risks, you would expect to find more of these searches.

Armed with billions of such data points from Google, we asked whether the states where people searched most often for those terms were also the states with higher levels of income inequality. To help reduce the impact of idiosyncrasies related to each search term, we averaged the six terms together into a general risk-taking index. Then we plotted that index against the degree of inequality in each state. The states with higher inequality had much higher risk taking, as estimated from their Google searches. This relationship remained strong after statistically adjusting for the average income in each state.
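For readers who want to picture the method, here is a minimal sketch of the kind of index construction and state-level adjustment described above; the data, variable names, and the simple regression standing in for "statistically adjusting" are invented placeholders, not Payne's actual dataset or analysis.

```python
# Minimal sketch of the kind of analysis described above; all data are invented
# placeholders (not the actual Google or inequality data). Idea: z-score each of
# six risky search terms across states, average them into one risk-taking index,
# then relate the index to inequality while holding average income constant.
import numpy as np

rng = np.random.default_rng(0)
n_states = 50
gini = rng.uniform(0.40, 0.52, n_states)          # hypothetical inequality per state
income = rng.uniform(45_000, 75_000, n_states)    # hypothetical average income per state
# Six hypothetical search-volume columns (lottery tickets, payday loans, etc.),
# built so that they rise with inequality purely for illustration.
searches = rng.normal(size=(n_states, 6)) + 5 * gini[:, None]

zscores = (searches - searches.mean(axis=0)) / searches.std(axis=0)
risk_index = zscores.mean(axis=1)                 # one composite score per state

# "Statistically adjusting for average income" via a simple multiple regression:
X = np.column_stack([np.ones(n_states), gini, income])
coef, *_ = np.linalg.lstsq(X, risk_index, rcond=None)
print("inequality coefficient, holding income constant:", round(coef[1], 2))
```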

If the index of risky googling tracks real-life risky behavior, then we would expect it to be associated with poor life outcomes. So we took our Google index and tested whether it could explain the link, reported in Chapter 2, between inequality and Richard Wilkinson and Kate Pickett’s index of ten major health and social problems. Indeed, the risky googling index was strongly correlated with the index of life problems. Using sophisticated statistical analyses, we found that inequality was a strong predictor of risk taking, which in turn was a strong predictor of health and social problems. These findings suggest that risky behavior is a pathway that helps explain the link between inequality and bad outcomes in everyday life. The evidence becomes much stronger still when we consider these correlations together with the evidence of cause and effect provided by the laboratory experiments.

Experiments like the ones described in this chapter are essential for understanding the effects of inequality, because only experiments can separate the effects of the environment from individual differences in character traits. Surely there were some brilliant luminaries and some dullards in each experimental group. Surely there were some hearty souls endowed with great self-control, and some irresponsible slackers, too. Because they were assigned to the experimental groups at random, it is exceedingly unlikely that the groups differed consistently in their personalities or abilities. Instead, we can be confident that the differences we see are caused by the experimental factor, in this case making decisions in a context of high or low inequality. […]

Experiments are gentle reminders that, in the words of John Bradford, “There but for the grace of God go I.” If we deeply understand behavioral experiments, they make us humble. They challenge our assumption that we are always in control of our own successes and failures. They remind us that, like John Bradford, we are not simply the products of our thoughts, our plans, or our bootstraps.

These experiments suggest that any average person, thrust into these different situations, will start behaving differently. Imagine that you are an evil scientist with a giant research budget and no ethical review board. You decide to take ten thousand newborn babies and randomly assign them to be raised by families in a variety of places. You place some with affluent, well-educated parents in the suburbs of Atlanta. You place others with single mothers in inner-city Milwaukee, and so on. The studies we’ve looked at suggest that the environments you assign them to will have major effects on their futures. The children you assign to highly unequal places, like Texas, will have poorer outcomes than those you assign to more equal places, like Iowa, even though Texas and Iowa have about the same average income.

In part, this will occur because bad things are more likely to happen to them in unequal places. And in part, it will occur because the children raised in unequal places will behave differently. All of this can transpire even though the babies you are randomly assigning begin life with the same potential abilities and values.

pp. 116-121

If you look carefully at Figure 5.1, you’ll notice that the curve comparing different countries is bent. The relatively small income advantage that India has over Mozambique, for example, translates into much longer lives in India. Once countries reach the level of development of Chile or Costa Rica, something interesting happens: The curve flattens out. Very rich countries like the United States cease to have any life expectancy advantage over moderately rich countries like Bahrain or even Cuba. At a certain level of economic development, increases in average income stop mattering much.

But within a rich country, there is no bend; the relationship between money and longevity remains linear. If the relationship was driven by high mortality rates among the very poor, you would expect to see a bend. That is, you would expect dramatically shorter lives among the very poor, and then, once above the poverty line, additional income would have little effect. This curious absence of the bend in the line suggests that the link between money and health is not actually a reflection of poverty per se, at least not among economically developed countries. If it was extreme poverty driving the effect, then there would be a big spike in mortality among the very poorest and little difference between the middle- and highest-status groups.

The linear pattern in the British Civil Service study is also striking, because the subjects in this study all have decent government jobs and the salaries, health insurance, pensions, and other benefits that are associated with them. If you thought that elevated mortality rates were only a function of the desperately poor being unable to meet their basic needs, this study would disprove that, because it did not include any desperately poor subjects and still found elevated mortality among those with lower status.

Psychologist Nancy Adler and colleagues have found that where people place themselves on the Status Ladder is a better predictor of health than their actual income or education. In fact, in collaboration with Marmot, Adler’s team revisited the study of British civil servants and asked the research subjects to rate themselves on the ladder. Their subjective assessments of where they stood compared with others proved to be a better predictor of their health than their occupational status. Adler’s analyses suggest that occupational status shapes subjective status, and this subjective feeling of one’s standing, in turn, affects health.

If health and longevity in developed countries are more closely linked to relative comparisons than to income, then you would expect that societies with greater inequality would have poorer health. And, in fact, they do. Across the developed nations surveyed by Wilkinson and Pickett, those with greater income equality had longer life expectancies (see Figure 5.3). Likewise, in the United States, people who lived in states with greater income equality lived longer (see Figure 5.4). Both of these relationships remain once we statistically control for average income, which means that inequality in incomes, not just income itself, is responsible.

But how can something as abstract as inequality or social comparisons cause something as physical as health? Our emergency rooms are not filled with people dropping dead from acute cases of inequality. No, the pathways linking inequality to health can be traced through specific maladies, especially heart disease, cancer, diabetes, and health problems stemming from obesity. Abstract ideas that start as macroeconomic policies and social relationships somehow get expressed in the functioning of our cells.

To understand how that expression happens, we have to first realize that people from different walks of life die different kinds of deaths, in part because they live different kinds of lives. We saw in Chapter 2 that people in more unequal states and countries have poor outcomes on many health measures, including violence, infant mortality, obesity and diabetes, mental illness, and more. In Chapter 3 we learned that inequality leads people to take greater risks, and uncertain futures lead people to take an impulsive, live fast, die young approach to life. There are clear connections between the temptation to enjoy immediate pleasures versus denying oneself for the benefit of long-term health. We saw, for example, that inequality was linked to risky behaviors. In places with extreme inequality, people are more likely to abuse drugs and alcohol, more likely to have unsafe sex, and so on. Other research suggests that living in a high-inequality state increases people’s likelihood of smoking, eating too much, and exercising too little.

Taken together, this evidence implies that inequality leads to illness and shorter lives in part because it gives rise to unhealthy behaviors. That conclusion has been very controversial, especially on the political left. Some argue that it blames the victim because it implies that the poor and those who live in high-inequality areas are partly responsible for their fates by making bad choices. But I don’t think it’s assigning blame to point out the obvious fact that health is affected by smoking, drinking too much, poor diet and exercise, and so on. It becomes a matter of blaming the victim only if you assume that these behaviors are exclusively the result of the weak characters of the less fortunate. On the contrary, we have seen plenty of evidence that poverty and inequality have effects on the thinking and decision making of people living in those conditions. If you or I were thrust into such situations, we might well start behaving in more unhealthy ways, too.

The link between inequality and unhealthy behaviors helps shed light on a surprising trend discovered in a 2015 paper by economists Anne Case and Angus Deaton. Death rates have been steadily declining in the United States and throughout the economically developed world for decades, but these authors noticed a glaring exception: Since the 1990s, the death rate for middle-aged white Americans has been rising. The increase is concentrated among men and whites without a college degree. The death rate for black Americans of the same age remains higher, but is trending slowly downward, like that of all other minority groups.

The wounds in this group seem to be largely self-inflicted. They are not dying from higher rates of heart disease or cancer. They are dying of cirrhosis of the liver, suicide, and a cycle of chronic pain and overdoses of opiates and painkillers.

The trend itself is striking because it speaks to the power of subjective social comparisons. This demographic group is dying of violated expectations. Although high school–educated whites make more money on average than similarly educated blacks, the whites expect more because of their history of privilege. Widening income inequality and stagnant social mobility, Case and Deaton suggest, mean that this generation is likely to be the first in American history that is not more affluent than its parents.

Unhealthy behaviors among those who feel left behind can explain part of the link between inequality and health, but only part. The best estimates have found that such behavior accounts for about one third of the association between inequality and health. Much of the rest is a function of how the body itself responds to crises. Just as our decisions and actions prioritize short-term gains over longer-term interests when in a crisis, the body has a sophisticated mechanism that adopts the same strategy. This crisis management system is specifically designed to save you now, even if it has to shorten your life to do so.

* * *

Preventing Violence
by James Gilligan
Kindle Locations 552-706

The Social Cause of Violence

In order to understand the spread of contagious disease so that one can prevent epidemics, it is just as important to know the vector by which the pathogenic organism that causes the disease is spread throughout the population as it is to identify the pathogen itself. In the nineteenth century, for example, the water supply and the sewer system were discovered to be vectors through which some diseases became epidemic. What is the vector by which shame, the pathogen that causes violence, is spread to its hosts, the people who succumb to the illness of violence? There is a great deal of evidence, which I will summarize here, that shame is spread via the social and economic system. This happens in two ways. The first is through what we might call the “vertical” division of the population into a hierarchical ranking of upper and lower status groups, chiefly classes, castes, and age groups, but also other means by which people are divided into in-groups and out-groups, the accepted and the rejected, the powerful and the weak, the rich and the poor, the honored and the dishonored. For people are shamed on a systematic, wholesale basis, and their vulnerability to feelings of humiliation is increased when they are assigned an inferior social or economic status; and the more inferior and humble it is, the more frequent and intense the feelings of shame, and the more frequent and intense the acts of violence. The second way is by what we could call the “horizontal” asymmetry of social roles, or gender roles, to which the two sexes are assigned in patriarchal cultures, one consequence of which is that men are shamed or honored for different and in some respects opposite behavior from that which brings shame or honor to women. That is, men are shamed for not being violent enough (called cowards or even shot as deserters), and are more honored the more violent they are (with medals, promotions, titles, and estates)—violence for men is successful as a strategy. Women, however, are shamed for being too active and aggressive (called bitches or unfeminine) and honored for being passive and submissive—violence is much less likely to protect them against shame.

Relative Poverty and Unemployment

The most powerful predictor of the homicide rate in comparisons of the different nations of the world, the different states in the United States, different counties, and different cities and census tracts, is the size of the disparities in income and wealth between the rich and the poor. Some three dozen studies, at least, have found statistically significant correlations between the degree of absolute as well as relative poverty and the incidence of homicide. Hsieh and Pugh in 1993 did a meta-analysis of thirty-four such studies and found strong statistical support for these findings, as have several other reviews of this literature: two on homicide by Smith and Zahn in 1999; Chasin in 1998; Short in 1997; James in 1995; and individual studies, such as Braithwaite in 1979 and Messner in 1980.

On a worldwide basis, the nations with the highest inequities in wealth and income, such as many Third World countries in Latin America, Africa, and Asia, have the highest homicide rates (and also the most collective or political violence). Among the developed nations, the United States has the highest inequities in wealth and income, and also has by far the highest homicide rates, five to ten times larger than the other First World nations, all of which have the lowest levels of inequity and relative poverty in the world, and the lowest homicide rates. Sweden and Japan, for example, have had the lowest degree of inequity in the world in recent years, according to the World Bank’s measures; but in fact, all the other countries of western Europe, including Ireland and the United Kingdom, as well as Canada, Australia, and New Zealand, have a much more equal sharing of their collective wealth and income than either the United States or virtually any of the Second or Third World countries, as well as the lowest murder rates.

Those are cross-sectional studies, which analyze the populations being studied at one point in time. Longitudinal studies find the same result: violence rates climb and fall over time as the disparity in income rises and decreases, both in the less violent and the more violent nations. For example, in England and Wales, as Figures 1 and 2 show, there was an almost perfect fit between the rise in several different measures of the size of the gap between the rich and the poor, and the number of serious crimes recorded by the police between 1950 and 1990. Figure 1 shows two measures of the gradual widening of income differences, which accelerated dramatically from 1984 and 1985. Figure 2 shows the increasing percentage of households and families living in relative poverty, a rate that has been particularly rapid since the late 1970s, and also the number of notifiable offences recorded by the police during the same years. As you can see, the increase in crime rates follows the increase in rates of relative poverty almost perfectly. As both inequality and crime accelerated their growth rates simultaneously, the annual increases in crime from one year to the next became larger than the total crime rate had been in the early 1950s. If we examine the rates for murder alone during the same period, as reported by the Home Office, we find the same pattern, namely a progression from a murder rate that averaged 0.6 per 100,000 between 1946 and 1970, to 0.9 from 1971–78, and yet again to an average of 1.1 between 1979 and 1997 (with a range of 1.0 to 1.3). To put it another way, rates of 1.2 and 1.3, the five highest levels since the end of World War II, were recorded in 1987, 1991, 1994, 1995, and 1997, all twice as high as the 1946–70 average.

The same correlation between violence and relative poverty has been found in the United States. The economist James Galbraith in Created Unequal (1997) has used inequity in wages as one measure of the size and history of income inequity between the rich and the poor from 1920 to 1992. If we correlate this with fluctuations in the American homicide rate during the same period, we find that both wage inequity and the homicide rate increased sharply in the slump of 1920–21, and remained at those historically high levels until the Great Crash of 1929, when they both jumped again, literally doubling together and suddenly, to the highest levels ever observed up to that time. These record levels of economic inequality (which increase, as Galbraith shows, when unemployment increases) were accompanied by epidemic violence; both murder rates and wage inequity remained twice as high as they had previously been, until the economic leveling effects of Roosevelt’s New Deal, beginning in 1933, and the Second World War a few years later, combined to bring both violence and wage inequity down by the end of the war to the same low levels as at the end of the First World War, and they both remained at those low levels for the next quarter of a century, from roughly 1944 to 1968.

That was the modern turning point. In 1968 the median wage began falling, after having risen steadily for the previous three decades, and “beginning in 1969 inequality started to rise, and continued to increase sharply for fifteen years,” (J. K. Galbraith). The homicide rate soon reached levels twice as high as they had been during the previous quarter of a century (1942–66). Both wage inequality and homicide rates remained at those relatively high levels for the next quarter of a century, from 1973 to 1997. That is, the murder rate averaged 5 per 100,000 population from 1942 to 1966, and 10 per 100,000 from 1970 to 1997. Finally, by 1998 unemployment dropped to the lowest level since 1970; both the minimum wage and the median wage began increasing again in real terms for the first time in thirty years; and the poverty rate began dropping. Not surprisingly, the homicide rate also fell, for the first time in nearly thirty years, below the range in which it had been fluctuating since 1970–71 (though both rates, of murder and of economic inequality, are still higher than they were from the early 1940s to the mid-1960s).

As mentioned before, unemployment rates are also relevant to rates of violence. M. H. Brenner found that every one per cent rise in the unemployment rate is followed within a year by a 6 per cent rise in the homicide rate, together with similar increases in the rates of suicide, imprisonment, mental hospitalization, infant mortality, and deaths from natural causes such as heart attacks and strokes (Mental Illness and the Economy, 1973, and “Personal Stability and Economic Security,” 1977). Theodore Chiricos reviewed sixty-three American studies and concluded that while the relationship between unemployment and crime may have been inconsistent during the 1960s (some studies found a relationship, some did not), it became overwhelmingly positive in the 1970s, as unemployment changed from a brief interval between jobs to enduring worklessness (“Rates of Crime and Unemployment,” 1987). David Dickinson found an exceptionally close relationship between rates of burglary and unemployment for men under twenty-five in the U.K. in the 1980s and 1990s (“Crime and Unemployment,” 1993). Bernstein and Houston have also found statistically significant correlations between unemployment and crime rates, and negative correlations between wages and crime rates, in the U.S. between 1989 and 1998 (Crime and Work, 2000).

If we compare Galbraith’s data with U.S. homicide statistics, we find that the U.S. unemployment rate has moved in the same direction as the homicide rate from 1920 to 1992: increasing sharply in 1920–21, then jumping to even higher levels from the Crash of 1929 until Roosevelt’s reforms began in 1933, at which point the rates of both unemployment and homicide also began to fall, a trend that accelerated further with the advent of the war. Both rates then remained low (with brief fluctuations) until 1968, when they began a steady rise which kept them both at levels higher than they had been in any postwar period, until the last half of 1997, when unemployment fell below that range and has continued to decline ever since, followed closely by the murder rate.

Why do economic inequality and unemployment both stimulate violence? Ultimately, because both increase feelings of shame (Gilligan, Violence). For example, we speak of the poor as the lower classes, who have lower social and economic status, and the rich as the upper classes who have higher status. But the Latin for lower is inferior, and the word for the lower classes in Roman law was the humiliores. Even in English, the poor are sometimes referred to as the humbler classes. Our language itself tells us that to be poor is to be humiliated and inferior, which makes it more difficult not to feel inferior. The word for upper or higher was superior, which is related to the word for pride, superbia (the opposite of shame), also the root of our word superb (another antonym of inferior). And a word for the upper classes, in Roman law, was the honestiores (related to the word honor, also the opposite of shame and dishonor).

Inferiority and superiority are relative concepts, which is why it is relative poverty, not absolute poverty, that exposes people to feelings of inferiority. When everyone is on the same level, there is no shame in being poor, for in those circumstances the very concept of poverty loses its meaning. Shame is also a function of the gap between one’s level of aspiration and one’s level of achievement. In a society with extremely rigid caste or class hierarchies, it may not feel so shameful to be poor, since it is a matter of bad luck rather than of any personal failing. Under those conditions, lower social status may be more likely to motivate apathy, fatalism, and passivity (or “passive aggressiveness”), and to inhibit ambition and the need for achievement, as Gunnar Myrdal noted in many of the caste-ridden peasant cultures that he studied in Asian Drama (1968). Caste-ridden cultures, however, may have the potential to erupt into violence on a revolutionary or even genocidal scale, once they reject the notion that the caste or class one is born into is immutable, and replace it with the notion that one has only oneself to blame if one remains poor while others are rich. This we have seen repeatedly in the political and revolutionary violence that has characterized the history of Indonesia, Kampuchea, India, Ceylon, China, Vietnam, the Philippines, and many other areas throughout Asia during the past half-century.

All of which is another way of saying that one of the costs people pay for the benefits associated with belief in the “American Dream,” the myth of equal opportunity, is an increased potential for violence. In fact, the social and economic system of the United States combines almost every characteristic that maximizes shame and hence violence. First, there is the “Horatio Alger” myth that everyone can get rich if they are smart and work hard (which means that if they are not rich they must be stupid or lazy, or both). Second, we are not only told that we can get rich, we are also stimulated to want to get rich. For the whole economic system of mass production depends on whetting people’s appetites to consume the flood of goods that are being produced (hence the flood of advertisements). Third, the social and economic reality is the opposite of the Horatio Alger myth, since social mobility is actually less likely in the U.S. than in the supposedly more rigid social structures of Europe and the U.K. As Mishel, Bernstein and Schmitt have noted:

Contrary to widely held perceptions, the U.S. offers less economic mobility than other rich countries. In one study, for example, low-wage workers in the U.S. were more likely to remain in the low-wage labor market five years longer than workers in Germany, France, Italy, the United Kingdom, Denmark, Finland, and Sweden (all the other countries studied in this analysis). In another study, poor households in the U.S. were less likely to leave poverty from one year to the next than were poor households in Canada, Germany, the Netherlands, Sweden, and the United Kingdom (all the countries included in this second analysis).
(The State of Working America 2000–2001, 2001)

Fourth, as they also mention, “the U.S. has the most unequal income distribution and the highest poverty rates among all the advanced economies in the world. The U.S. tax and benefit system is also one of the least effective in reducing poverty.” The net effect of all these features of U.S. society is to maximize the gap between aspiration and attainment, which maximizes the frequency and intensity of feelings of shame, which maximizes the rates of violent crimes.

It is difficult not to feel inferior if one is poor when others are rich, especially in a society that equates self-worth with net worth; and it is difficult not to feel rejected and worthless if one cannot get or hold a job while others continue to be employed. Of course, most people who lose jobs or income do not commit murders as a result; but there are always some men who are just barely maintaining their self-esteem at minimally tolerable levels even when they do have jobs and incomes. And when large numbers of them lose those sources of self-esteem, the number who explode into homicidal rage increases as measurably, regularly, and predictably as any epidemic does when the balance between pathogenic forces and the immune system is altered.

And those are not just statistics. I have seen many individual men who have responded in exactly that way under exactly these circumstances. For example, one African-American man was sent to the prison mental hospital I directed in order to have a psychiatric evaluation before his murder trial. A few months before that, he had had a good job. Then he was laid off at work, but he was so ashamed of this that he concealed the fact from his wife (who was a schoolteacher) and their children, going off as if to work every morning and returning at the usual time every night. Finally, after two or three months of this, his wife noticed that he was not bringing in any money. He had to admit the truth, and then his wife fatally said, “What kind of man are you? What kind of man would behave this way?” To prove that he was a man, and to undo the feeling of emasculation, he took out his gun and shot his wife and children. (Keeping a gun is, of course, also a way that some people reassure themselves that they are really men.) What I was struck by, in addition to the tragedy of the whole story, was the intensity of the shame he felt over being unemployed, which led him to go to such lengths to conceal what had happened to him.

Caste Stratification

Caste stratification also stimulates violence, for the same reasons. The United States, perhaps even more than the other Western democracies, has a caste system that is just as real as that of India, except that it is based on skin color and ethnicity more than on hereditary occupation. The fact that it is a caste system similar to India’s is registered by the fact that in my home city, Boston, members of the highest caste are called “Boston Brahmins” (a.k.a. “WASPs,” or White Anglo-Saxon Protestants). The lowest rung on the caste ladder, corresponding to the “untouchables,” or Harijan, of India, is occupied by African-Americans, Native Americans, and some Hispanic-Americans. To be lower caste is to be rejected, socially and vocationally, by the upper castes, and regarded and treated as inferior. For example, whites often move out of neighborhoods when blacks move in; blacks are “the last to be hired and the first to be fired,” so that their unemployment rate has remained twice as high as the white rate ever since it began being measured; black citizens are arrested and publicly humiliated under circumstances in which no white citizen would be; respectable white authors continue to write books and articles claiming that blacks are intellectually inferior to whites; and so on and on, ad infinitum. It is not surprising that the constant shaming and attributions of inferiority to which the lower caste groups are subjected would cause members of those groups to feel shamed, insulted, disrespected, disdained, and treated as inferior—because they have been, and because many of their greatest writers and leaders have told us that this is how they feel they have been treated by whites. Nor is it surprising that this in turn would give rise to feelings of resentment if not rage, nor that the most vulnerable, those who lacked any non-violent means of restoring their sense of personal dignity, such as educational achievements, success, and social status, might well see violence as the only way of expressing those feelings. And since one of the major disadvantages of lower-caste status is lack of equal access to educational and vocational opportunities, it is not surprising that the rates of homicide and other violent crimes among all the lower-caste groups mentioned are many times higher, year after year, than those of the upper-caste groups.

Kindle Locations 1218-1256

Single-Parent Families

Another factor that correlates with rates of violence in the United States is the rate of single-parent families: children raised in them are more likely to be abused, and are more likely to become delinquent and criminal as they grow older, than are children who are raised by two parents. For example, over the past three decades those two variables—the rates of violent crime and of one-parent families—have increased in tandem with each other; the correlation is very close. For some theorists, this has suggested that the enormous increase in the rate of youth violence in the U.S. over the past few decades has been caused by the proportionately similar increase in the rate of single-parent families.

As a parent myself, I would be the first to agree that child-rearing is such a complex and demanding task that parents need all the help they can get, and certainly having two caring and responsible parents available has many advantages over having only one. In addition, children, especially boys, can be shown to benefit in many ways, including diminished risk of delinquency and violent criminality, from having a positive male role-model in the household. The adult who is most often missing in single-parent families is the father. Some criminologists have noticed that Japan, for example, has practically no single-parent families, and its murder rate is only about one-tenth as high as that of the United States.

Sweden’s rate of one-parent families, however, has grown almost to equal that in the United States, and over the same period (the past few decades), yet Sweden’s homicide rate has also been on average only about one-tenth as high as that of the U.S., during that same time. To understand these differences, we should consider another variable, namely, the size of the gap between the rich and the poor. As stated earlier, Sweden and Japan both have among the lowest degrees of economic inequity in the world, whereas the U.S. has the highest polarization of both wealth and income of any industrialized nation. And these differences exist even when comparing different family structures. For example, as Timothy M. Smeeding has shown, the rate of relative poverty is very much lower among single-parent families in Sweden than it is among those in the U.S. Even more astonishing, however, is the fact that the rate of relative poverty among single-parent families in Sweden is much lower than it is among two-parent families in the United States (“Financial Poverty in Developed Countries,” 1997). Thus, it would seem that however much family structure may influence the rate of violence in a society, the overall social and economic structure of the society—the degree to which it is or is not stratified into highly polarized upper and lower social classes and castes—is a much more powerful determinant of the level of violence.

There are other differences between the cultures of Sweden and the U.S. that may also contribute to the differences in the correlation between single-parenthood and violent crime. The United States, with its strongly Puritanical and Calvinist cultural heritage, is much more intolerant of both economic dependency and out-of-wedlock sex than Sweden. Thus, the main form of welfare support for single-parent families in the U.S. (until it was ended a year ago) A.F.D.C., Aid to Families with Dependent Children, was specifically denied to families in which the father (or any other man) was living with the mother; indeed, government agents have been known to raid the homes of single mothers with no warning in the middle of the night in order to “catch” them in bed with a man, so that they could then deprive them (and their children) of their welfare benefits. This practice, promulgated by politicians who claimed that they were supporting what they called “family values,” of course had the effect of destroying whatever family life did exist. Fortunately for single mothers in Sweden, the whole society is much more tolerant of people’s right to organize their sexual life as they wish, and as a result many more single mothers are in fact able to raise their children with the help of a man.

Another difference between Sweden and the U.S. is that fewer single mothers in Sweden are actually dependent on welfare than is true in the U.S. The main reason for this is that mothers in Sweden receive much more help from the government in getting an education, including vocational training; more help in finding a job; and access to high-quality free childcare, so that mothers can work without leaving their children uncared for. The U.S. system, which claims to be based on opposition to dependency, thus fosters more welfare dependency among single mothers than Sweden’s does, largely because it is so much more miserly and punitive with the “welfare” it does provide. Even more tragically, however, it also fosters much more violence. It is not single motherhood as such that causes the extremely high levels of violence in the United States, then; it is the intense degree of shaming to which single mothers and their children are exposed by the punitive, miserly, Puritanical elements that still constitute a powerful strain in the culture of the United States.

Kindle Locations 1310-1338

Social and Political Democracy

Since the end of the Second World War, the homicide rates of the nations of western Europe, and Japan, for example, have been only about a tenth as high as those of the United States, which is another way of saying that they have been preventing 90 per cent of the violence that the U.S. still experiences. Their rates of homicide were not lower than those in the U.S. before. On the contrary, Europe and Asia were scenes of the largest numbers of homicides ever recorded in the history of the world, both in terms of absolute numbers killed and in the death rates per 100,000 population, in the “thirty years’ war” that lasted from 1914 to 1945. Wars, and governments, have always caused far more homicides than all the individual murderers put together (Richardson, Statistics of Deadly Quarrels, 1960; Keeley, War Before Civilization, 1996). After that war ended, however, they all took two steps which have been empirically demonstrated throughout the world to prevent violence. They instituted social democracy (or “welfare states,” as they are sometimes called), and achieved an unprecedented decrease in the inequities in wealth and income between the richest and poorest groups in the population, one effect of which is to reduce the frequency of interpersonal or “criminal” violence. And Germany, Japan and Italy adopted political democracy as well, the effect of which is to reduce the frequency of international violence, or warfare (including “war crimes”).

While the United States adopted political democracy at its inception, it is the only developed nation on earth that has never adopted social democracy (a “welfare state”). The United States alone among the developed nations does not provide universal health insurance for all its citizens; it has the highest rate of relative poverty among both children and adults, and the largest gap between the rich and the poor, of any of the major economies; vastly less adequate levels of unemployment insurance and other components of shared responsibility for human welfare; and so on. Thus, it is not surprising that it also has murder rates that have been five to ten times as high as those of any other developed nation, year after year. It is also consistent with that analysis that the murder rate finally fell below the epidemic range in which it had fluctuated without exception for the previous thirty years (namely, 8 to 11 homicides per 100,000 population per year), only in 1998, after the unemployment rate reached its lowest level in thirty years and the rate of poverty among the demographic groups most vulnerable to violence began to diminish—slightly—for the first time in thirty years.

Some American politicians, such as President Eisenhower, have suggested that the nations of western Europe have merely substituted a high suicide rate for the high homicide rate that the U.S. has. In fact, the suicide rates in most of the other developed nations are also substantially lower than those of the United States, or at worst not substantially higher. The suicide rates throughout the British Isles, the Netherlands, and the southern European nations are around one-third lower than those of the U.S.; the rates in Canada, Australia, and New Zealand, as well as Norway and Luxembourg, are about the same. Only the remaining northern and central European countries and Japan have suicide rates that are higher, ranging from 30 per cent higher to roughly twice as high as the suicide rate of the U.S. By comparison, the U.S. homicide rate is roughly ten times as high as those of western Europe (including the U.K., Scandinavia, France, Germany, Switzerland, Austria), southern Europe, and Japan; and five times as high as those of Canada, Australia and New Zealand. No other developed nation has a homicide rate that is even close to that of the U.S.

Skepticism and Conspiracy

In a post about valid skepticism, Troy David Loy (Troythulu) takes up the issue of conspiracy theories. He was responding to a 2011 post by Steven Novella, or rather to its comment section. Novella seeks to differentiate between skepticism and cynicism, and he does so by way of the problems of conspiracy theory, what he refers to as conspiracy mongering.

The specific conspiracy theory he uses is of no interest to me, but there are many reasons this topic resonates. Skepticism is all the more important and all the more difficult in a paranoid society, which is inevitable under conditions of fear and anxiety such as are found with high inequality and segregation. Even the conspiracy denialists easily end up being paranoid in seeing conspiracy theories everywhere, as if the conspiracy theorists are out to get them, out to destroy their rational world of truth. And no doubt there are destructive, along with self-destructive, elements in the United States, so the paranoia is often justified. It is paranoia all around, paranoia reacting to paranoia (such as the two main parties bickering back and forth about the conspiracy theories involving the FBI, Russia, etc., that each prefers in attacking the other side). It’s amusing. Frustrating at times, but amusing.

Let me dig in. Loy writes that, “one of the commenters [Starting Here] tries very hard to prove the very thesis of cynicism the post addresses in a classic and blatant display of the Dunning-Kruger effect, by conspiracy mongering, in dishonestly ignoring or dismissing all counterarguments, attempting to assert intellectual superiority by evading questions and repeating the same talking points using glaring errors in reasoning apparent to nearly everyone else in the thread, and especially obvious to Dr. Novella.”

Maybe so or maybe not. I have little motivation to get involved in that particular debate. It doesn’t seem all that meaningful what happened to Osama bin Laden’s body or the reasons behind it. I just don’t care. Even if there was a conspiracy involved, there are so many more important conspiracies to consider, specifically proven conspiracies. Besides, I would point out that this problem goes both ways, and the two sides feed into each other. No one can doubt that there is conspiracy mongering. But just as common, if not more so, is conspiracy denialism. In any case, it appears that, in the comment section, there never was an agreement on what the fundamental issue being debated was, and so there is no clear way of determining who ‘won’ the debate.

Anyway, not all conspiracies are mere theories, something I assume both sides would agree upon, the point of disagreement being how common and how well hidden they are. “I believe in facts about conspiracies,” Julian Assange explained and with insightful common sense added that, “Any time people with power plan in secret, they are conducting a conspiracy. So there are conspiracies everywhere. There are also crazed conspiracy theories. It’s important not to confuse these two. Generally, when there’s enough facts about a conspiracy we simply call this news. . . I’m constantly annoyed that people are distracted by false conspiracies such as 9/11, when all around we provide evidence of real conspiracies, for war or mass financial fraud.” The problem is that conspiracy mongers and conspiracy denialists alike are fond of obsessing over the extreme possibilities while ignoring what is right in front of their faces, although that could simply be the nature of any ideological debate that polarizes people.

The thousands of known and surely thousands more unknown covert operations the US has committed were, by definition, conspiracies, and many of them, before being proven as conspiracy facts, were dismissed as conspiracy theories. Every time a corporation from big tobacco to big oil hid information (including their own scientific research, as happened over a period of decades) from the government and the public, it was a conspiracy. The three biggest recent sex scandals (Harvey Weinstein, Larry Nassar, and now Sean Hutchison) involved numerous people covering up the abuse, also over a period of decades, often implicating institutions and large numbers of complicit actors, even to the point of direct efforts to shut down investigations (reminiscent of the Catholic sex abuse cases, which may have involved thousands of victims, victimizers, co-conspirators, and colluding authority figures across numerous churches, communities, and countries over a period of generations, probably centuries, and yet the Vatican was able to successfully conspire in keeping it hushed up).

This form of conspiracy, favored by the Vatican and corporations alike, can even take advantage of the legal system to enforce secrecy — as explained by Eviatar Zerubavel:

“Needless to say, although victims certainly benefit from them financially and sometimes also reputationally, it is almost always the perpetrators of those wrongdoings who “insist on inserting confidentiality clauses in [secret] settlements— never the victims.” 27 Furthermore, the fact that the very existence of those settlements is often kept secret actually allows such wrongdoing to continue! Such secrecy implicitly empowers repeat offenders by sanctioning the isolation of their victims from one another, victims who are often unaware that those perpetrators have previously been accused of similar offenses: “The main loser in secret settlements is the public. Consumers are deprived of information they need to protect themselves from unsafe products. Workers are kept in the dark about unsafe working conditions … In 1933 the Johns Manville company settled a lawsuit by 11 employees who had been made sick by asbestos. If that settlement had not been kept secret for 45 years, thousands of other workers might not have contracted respiratory diseases.” 28 Similarly, when such settlements are used, for example, to protect a pedophile priest, his victims are unlikely to know that they are part of a larger general pattern of abuse. Instead, believing that they are alone, they view their own victimization as highly idiosyncratic and may even blame themselves in part for what happened.” (The Elephant in the Room, pp. 42-43)

He followed that up with two quotes from articles on the topic:

“One of the most troubling … aspects of the child sexual abuse scandal now roiling the Roman Catholic Church is the enabling role played by the court system. In case after case, judges have signed off on secret settlements of child-molestation suits, freeing the offending priests to molest again … One Boston judge who sealed court records in a priest molestation case [said] that she might not have done so “if I had been aware of how widespread this issue was.” It was, of course, rulings like hers … that helped hide just how big a problem sex abuse was in the church.”
~Ending Legal Secrecy

“[T]here is palpable unease … about the cumulative effect of so many secret agreements. “I’m ashamed I took their money now,” said Raymond P. Sinibaldi, who won a settlement from the church in 1995 after allegedly being abused by a priest … “I should have … filed a lawsuit and called a press conference to announce it. If we had done that, this problem would have been exposed long ago.””
~Walter V. Robinson, Scores of priests involved in sex abuse cases

It is through entirely legal maneuvers that conspiracies can be covered up; or rather, the conspiring to cover up is itself the conspiracy. But this doesn’t exclude the use of extralegal, whether or not explicitly illegal, means as well (e.g., Harvey Weinstein hiring former intelligence agents to shut down news stories about his sex abuse). A combination of tactics can allow multiple generations to be victimized while keeping the victims silenced and isolated. It’s a good example of how money is power and how far that power can extend.

Sadly, these kinds of cases happen all the time. We are constantly surrounded by conspiracies. And the ignorance among the public, both in terms of mongering and denialism, is itself pervasive. The ignorance of the other side is no proof of one’s own truth claims. In any given debate, it very well might be that both sides are wrong or else that each side only has part of the truth. Conspiracy theories, in particular, need to be taken on a case by case basis.

I could list dozens of horrendous US covert operations that most Americans still don’t know about and, assuming they would even acknowledge them, would shock them. The human experimentation tests by the US government alone are numerous, including cases where radioactive or poisonous material was spread over US populations. Better known are MKUltra and the Tuskegee syphilis experiment, but other examples could be included. This is the kind of thing that most Americans at the time, and many Americans still today, have a hard time believing their own government would do… and yet, in many cases, the government has admitted to them and released documents proving it, albeit sometimes so long after the fact that the key actors are dead.

A great example of a known conspiracy is the CIA orchestration of the 1953 Iranian coup, finally proven last year by a declassified document after more than a half century of conspiracy theories about it. Another example was the assassination of Fred Hampton by local police in cahoots with the FBI, Hampton having been intentionally drugged by an informer right before the raid so that he could be shot in his sleep, a blatant assassination that has yet to be officially acknowledged. One of the darker examples is the CIA involvement in drug trafficking which, when one tenacious journalist tried to reveal it, led to his career being destroyed and contributed to his suicide (discussed further down).

Maybe more disturbing would be such things as the FBI’s COINTELPRO (part of a long history of Red Squads; the letter to MLK being a standard tactic similarly used against the Black Panthers), the CIA’s Operation Mockingbird (only declassified in 2007), the CIA’s Operation CHAOS (related to other projects from the Office of Security: Project MERRIMAC, Project 2, Project RESISTANCE, etc.; similar to work done by COINTELPRO in targeting domestic individuals and groups), and the CIA-related Congress for Cultural Freedom (maybe the largest propaganda program in US history). To push a right-winger into full paranoia, just mention the fact that some Ivy League professors from the past have since been outed as spymasters who worked to promote propagandistic American studies and recruit students as new agents while creating lists of activists and dissenters, not to mention the various US citizens in the arts and media (including journalists) who were on the payroll of the CIA.

Certainly, during the Cold War, few were aware of what was going on and the corporate media rarely investigated it because that would have been unpatriotic and un-American. My parents were in college during the height of this activity and they were completely oblivious because, as conspiracies go, these were highly successful operations. They didn’t become public knowledge until recent history. Most Americans alive during that time are still ignorant of those conspiracies and most of the conspirators have taken their secrets to the grave. Similarly, few people know what covert operations the FBI and CIA are involved in these days, although COINTELPRO-style practices have reemerged with the War on Terror, such as entrapment being used to incite mentally unstable people toward planning terrorist acts.

Many argue that conspiracies can’t happen because someone will always speak or somehow find out, such as the heroic investigative journalists portrayed in Hollywood movies. That occasionally happens, but not very often. It’s a romantic vision of a fully functioning democratic society, which is to say it is a fantasy, a popular genre in America.

As an interesting twist, conspiracy theories themselves have been used as political weapons. During the Cold War, it wasn’t only common for major governments like the US to be involved in conspiracies. They also would sometimes invent and promote conspiracy theories for various agendas, as part of disinformation campaigns. This could be useful to create doubt, mistrust, paranoia, and outrage in targeted populations. Or else it was used to muddy the water, maybe even to help hide or distract from actual conspiracies. So, sometimes there are real conspiracies behind the conspiring to spread fake conspiracy theories, a tangle of conspiracy actions and theories.

Russia recently conspired to push conspiracy theories along with fake news on social media in order to agitate and divide the American public, along with at times simultaneously promoting rallies and counter-rallies in the same cities. The US has a long history of doing similar things in other countries and maybe in the US as well (it’s not always clear what many known domestic programs and projects were intended to accomplish and to what degree they were successful). Corporations also get involved in this kind of thing such as using front groups and astroturf, as has been well documented typically by way of investigative journalism done in alternative media (a recent example is that of drug companies bribing patient groups with millions of dollars to push opioids).

We are all being manipulated in various ways. It doesn’t take a paranoiac to realize this. Kathryn S. Olmstead, a history professor at UC Davis, concluded that (Real Enemies, pp. 239-240, 2011):

“Citizens of a democracy must be wary of official and alternative conspiracists alike, demanding proof for the theories. Yet Americans should be most skeptical of official theorists, because the most dangerous conspiracies and conspiracy theories flow from the center of American government, not from the margins of society.

“Since the First World War, officials of the U.S. government have encouraged conspiracy theories, sometimes inadvertently, sometimes intentionally. They have engaged in conspiracies and used the cloak of national security to hide their actions from the American people. With cool calculation, they have promoted official conspiracy theories, sometimes demonstrably false ones, for their own purposes. They have assaulted civil liberties by spying on their domestic enemies. If antigovernment conspiracy theorists get the details wrong—and they often do—they get the basic issue right: it is the secret actions of the government that are the real enemies of democracy.”

A lot of weird stuff happened over the past century and, as conspiracies are rarely discovered in real time, surely is still going on. No doubt about that. Sometimes, there is good reason behind paranoia. That is the problem. When there is a long history of lies and disinfo, obfuscation and propaganda, it becomes difficult to know the truth and to trust claims of truth. And once paranoia has taken hold of a society, it can make public debate almost impossible. That can be seen with recent leaks that showed how closely some in the media were working with political party leaders, going so far as not only to give them debate questions but also to allow them to edit articles before publishing. And without these leaks, we probably never would have learned about any of this.

This leaves many of us in a paranoid state of not knowing what hasn’t yet been leaked and may never be leaked, just an occasional peek behind the grand wizard’s curtain. But if such leaked info doesn’t make you paranoid, then maybe you’re not paying attention or you’ve grown cynical, apathetic, and indifferent. The question is what to do with that info once we have it. It would be one thing if this was limited to the fantasies of conspiracy theorists. That isn’t the case, though. Various documents, released and leaked, and various investigations have shown how common conspiracies are in diverse institutions within our society. It is almost a full-time job trying to keep up with it all.

There is some press that has helped to uncover this info, but we would know a lot less if not for the rare brave souls who succeed, with everything against them, in forcing the truth into the light. It’s probably safe to assume that even these leaks barely scratch the surface of what goes on… or at least there is no rational reason to assume the opposite. Of course, that doesn’t justify conspiracy mongering, especially as taken advantage of by right-wing pundits and demagogues. Yet neither does it warrant uninformed and thoughtless dismissals.

If you wait long enough, a few of the worst conspiracies might eventually be exposed — partly because the top secret documents, unless destroyed, sometimes come out one way or another, not always and maybe not usually but sometimes. The problem isn’t that there is a total lack of a free press, but that corporate media has profit as its main motivation. Having a press that is theoretically free to report the truth is not the same thing as its possessing a moral and legal responsibility, much less a self-interested incentive, to report the truth, since the freedom to seek profit is overarching. In the end, there is little profit in exposing dark secrets and ugly truths that will anger powerful actors who can derail your career and do you much worse harm; it is unprofitable except as superficial infotainment, portrayed in a way not to be taken seriously.

In passive complicity, most news reporters simply quote the official statements of governments and corporations. Hard-hitting investigative journalism is rare because it is difficult and expensive, not to mention it might repel certain advertisers who don’t want to be associated with it for various reasons, along with strings being pulled behind the scenes. This leads most news reporting to be safe and bland, the profitable middle ground between competing forces.

No far-fetched speculation is required to explain this. Still, one should keep in mind that most of corporate media has become consolidated into a handful of transnational mega-corporations. These have direct corporate links to other areas, such as their parent companies also owning highly profitable energy and defense corporations, not to mention how these corporations fund various think tanks, lobbyist groups, etc. that have direct ties to politicians and political parties (involving revolving doors where politicians are bribed with lucrative lobbyist positions and corporate hacks engineer regulatory capture). Talk about an extreme and blatant conflict of interest, similar to the police investigating the police, which unsurprisingly leads to few police ever being prosecuted. By the way, it should be noted that the defense industry is both heavily government-funded, often by no-bid contracts, and represents the single largest sector of the economy. It doesn’t take a conspiracy theorist to acknowledge that humans are easily influenced by the incentives, connections, relationships, and life experiences that shape their personal and professional worldviews.

There are many vested interests involved that slant attitudes and actions without need of overt and intentional conspiracy; much of the influence happens unconsciously and by way of social pressure (especially among peers and close associates), as the desire to fit in is powerful. Also, people in positions of power and authority, both in the public and private sector, tend to live in the same world and to share the same social circles, even living in the same neighborhoods, going to the same churches, sending their kids to the same schools. This biases their thinking, no different than it does for any other group of people. People conspire all the time, often without thinking about it that way, simply because they share the same biases and have an incentive to promote a shared worldview toward shared interests, agendas, and goals.

Most people are simply trying to accomplish what is important to them and don’t always stop to consider how it could be perceived by outsiders. Richard Nixon, for all his own tendencies toward conspiracies and conspiracy theorizing, showed little evidence of being self-aware enough to see clearly his own behavior and actions. Those in positions of power and authority are fallible humans like the rest of us — some might argue even more fallible in how, as studies have shown, those in the upper class have less ability to correctly read the emotions of others and the highly educated have higher rates of the smart idiot effect.

Uncomfortable knowledge doesn’t always get acknowledged easily, even when there are a few journalists investigating it. Consider Gary Webb who, in trying to expose the CIA conspiracy of drug trafficking, was attacked by other journalists working in the mainstream media and his life was made into a living hell. He dared to speak truth to power and that doesn’t always lead to someone being celebrated as heroic. Some of those who attacked him apologized later on after it was proven he was right, but such vindication was too late since he was already dead. It requires immense naivete to believe investigative journalism is easy and that it doesn’t take much effort to prove a conspiracy within mainstream debate.

Ryan Devereaux wrote:

“Looking back on the weeks immediately following the publication of “Dark Alliance,” the document offers a unique window into the CIA’s internal reaction to what it called “a genuine public relations crisis” while revealing just how little the agency ultimately had to do to swiftly extinguish the public outcry. Thanks in part to what author Nicholas Dujmovic, a CIA Directorate of Intelligence staffer at the time of publication, describes as “a ground base of already productive relations with journalists,” the CIA’s Public Affairs officers watched with relief as the largest newspapers in the country rescued the agency from disaster, and, in the process, destroyed the reputation of an aggressive, award-winning reporter.”

And Ryan Grim wrote:

“It did not end well for Webb, however. Major media, led by The New York Times, Washington Post and Los Angeles Times, worked to discredit his story. Under intense pressure, Webb’s top editor abandoned him. Webb was drummed out of journalism. One LA Times reporter recently apologized for his leading role in the assault on Webb, but it came too late. Webb died in 2004 from an apparent suicide. Obituaries referred to his investigation as “discredited.””

Or consider the more recent situation of the Iraq War. Studies have since shown that the Bush administration told 935 proven lies in the run-up to the war. Many in the intelligence agencies, as later was shown, knew these were lies and remained silent. Even when some documents got released to news organizations, the reporting was minimal and superficial. Some reporting was even delayed without explanation or, in particular cases as has since been revealed, at the behest of the government. Whether or not you think of this as conspiracy, it clearly indicates various levels of complicity. There was a push for war and high pressure to justify it.

Jon Schwarz brought up that, “This lie should have been easily caught by the U.S. media, given Kamel’s 1995 CNN interview. Moreover, there were public documents sitting on the IAEA website stating that Kamel had told the agency “all nuclear weapons related activities had effectively ceased” in 1991” (Trump is Right, Bush Lied). And Robin Andersen writes of “one of the most curious media failures regarding coverage of the war in Iraq, about a secret meeting finally brought to the light of day, but not by US media” (Bush, Blair and the Lies That Justified the Illegal Iraq War). Andersen notes that even some media figures admitted that it was extremely odd that this was being omitted from reporting, with one of them, CNN’s Jackie Schechner, observing that it wasn’t for lack of interest as it was well covered in the blogosphere. At around the same time, “Washington Post ombud Michael Getler noted that readers had complained about the lack of coverage, though no explanation for the omission was offered.”

The typical American doesn’t look to the blogosphere for breaking news about world-shattering events such as a war that has led to millions of dead innocents and trillions of dollars of costs. The mainstream (corporate) media remains the primary source of media consumption, but even when readers complained about this the media silence continued. It was far from being a single failure of media. Andersen goes on to write that, “At this point, another opportunity presented itself for thorough coverage of the British documents, yet the American media again missed a chance to expose the falsities that led to war and correct the historical record. The delayed coverage of the memo that finally “burst into the White House” reveals the current complexities of media failures. With the Iraq invasion, we see the reinvention of a war’s history even before it has ended.” But not all of the media was like this. In When Media Goes to War, Anthony DiMaggio makes a useful comparison (p. 41):

“[B]oth the New York Times and Independent closely quote politicians commensurate with their percentage of seats in government. In the United States, the New York Times made significant efforts to split coverage evenly between Democratic and Republican sources, while devoting little attention to antiwar protestors. Similarly, the Independent molds its reporting to reflect the power distribution among the United Kingdom’s three major parties. However, the Independent is twice as likely to quote antiwar protestors than the New York Times, suggesting that the British coverage is less reliant on official sources in dissenting against the war.”

This slanted reporting happened in complete opposition to the largest protest movement in world history and in opposition to the majority of Americans who initially opposed the war (the majority only shifting after near unanimous promotion of the war by the corporate media). The New York Times is as mainstream as it gets in US media. And whatever one may think of it, one is forced to admit that there has never been an opportunity lost by the New York Times to beat the war drum, no matter which party controls Congress and the presidency. Why even the supposedly liberal corporate media has so often been war hungry is a question one must ask, even if one denies all possibility of political conspiracy and corporate conflicts of interest. The silence among many in never asking speaks volumes.

About influence from above, David Dadge explored how corporate media can be made to fall in line with official doctrine or at least to not speak out against it too loudly (The War in Iraq and why the Media Failed Us, p. 146):

“On the internet, Yellowtimes.org was briefly closed down by its Internet Service Provider (ISP) for showing pictures of American fatalities and there were pressures on Hollywood stars such as Martin Sheen who vigorously protested against the war. Perhaps the worst decision made by a broadcaster was CBS’s decision to hold back on the publication of pictures showing the abuse of Iraqi prisoners at the hands of American soldiers. The decision came after the Pentagon warned the broadcaster that such pictures might inflame tensions in Iraq. Given the importance of the story, CBS’s decision was a blatant disregard for objective and independent news reporting.

“While many of these censorious acts were at arm’s length from the government, it is hard not to see them as part of the environment created by the Bush administration. These acts point to a subtle manipulation of the media environment by calling on the public’s patriotism and making commercial enterprises extremely nervous about the impact of unpopular dissent on share prices. The comments by the Bush administration also encouraged a strong conservative media that channeled the public’s displeasure at dissent and unleashed it on the media. As a result, in late 2002 and early 2003, journalists began to feel extremely uncomfortable about taking on the Bush administration.

“The manipulation of the media environment, therefore, contained three vital elements: comments by senior administration officials showing that dissent is unpatriotic; mobilization of the public’s support for those comments; and pressure on journalists from other elements of the media and private commerce to support the administration’s actions. However, adding to these pressures, and perhaps for the first time in the history of the United States, the Bush administration also sharply questioned the media’s role within American society: a tactical decision that further damaged the media’s ability to challenge the government.

“President Bush’s admission to a journalist that he disputes the idea that the media reflects what the public is thinking is prejudicial to the media’s role. Although it is not necessarily wrong to confront the media’s own assumptions about itself, when this comment is seen in conjunction with the comments of other senior Bush administration officials, such as Andrew Card, who is on record as saying he does not believe the media have a check and balance function, it is disturbing. Accepting these comments at face value, it would appear that before and during the Iraq war the Bush administration either sought to use the mainstream media as an information delivery system or simply bypassed them altogether.”

Much of the corporate media has since offered better reporting as the Iraq War winds down, some journalists even having admitted failure in not challenging the Bush administration, but it’s always easy to see more clearly years later when the fear of dissent has lessened. It reminds me of the corporate media’s failure to fully and honestly report on the stolen 2000 election and the peculiarities of the 2004 election (a conspiracy of silence about the conspiracy itself, based on equal parts open secret and willful ignorance), the difference being that I’ve yet to hear anyone apologize for this failure. Am I a ‘conspiracy monger’ because my views don’t fall in line with the mainstream narrative fed to the American public by the bipartisan system of power and the plutocratic-owned corporate media?

(See also: News Incorporated ed. by Elliot D. Cohen, Mass Media, Mass Propaganda by Anthony R. DiMaggio, Constructing America’s War Culture ed. by Thomas J. Conroy & Jarice Hanson, Media Spectacle and the Crisis of Democracy by Douglas Kellner, Whitewashing War by Christopher R. Leahey, Anatomy of Deceit by Marcy Wheeler, and When the Press Fails by W. Lance Bennett, Regina G. Lawrence & Steven Livingston; the kind of serious scholarship typically ignored by conspiracy denialists.)

That is how oppressive groupthink operates, under conditions of national duress exploited by psychopathic and authoritarian power mongers. Social science studies have shown how people become increasingly conservative-minded during times of fear, anxiety, and stress. One study showed that liberals who early on saw repeated footage of the 9/11 attacks were more supportive of Bush’s War on Terror than those who heard about it over the radio (and one might consider that almost anyone working in media would be included in that group of repeated video watchers on 9/11). Many Americans, including within the media, suddenly became uber-patriotic and dissent wasn’t tolerated.

Does anyone remember how oppressive the public atmosphere became during that time? Major media figures were fired for having politically incorrect views in opposing war. Matt Taibbi pointed out that presently “people like Chris Matthews are giving people a hard time about their positions on Iraq. Where was MSNBC on Iraq back in the day? I mean, they were letting go of people like Phil Donahue and Jesse Ventura for having, you know, unpatriotic positions on the Iraq War. Everybody was in on this thing, except for maybe this program and a few other scattered journalists.”

Plus, there have been endless studies showing a wide variety of biases in media, which is part and parcel of the whole manufacturing of consent (with or without any intended conspiracy, as manufacturing consent simply requires a systemic shutting down of debate by how the forum of debate is structured). Even without these biases being proof of conspiracy, it is because of these biases that conspiracies so often can fly under the radar, sometimes for decades, as official narratives too often go unchallenged (e.g., the myths surrounding the Vietnam War). How many journalists are there who are actually brave enough to go through the potentially career-destroying despair that Gary Webb experienced? Probably not many.

I’m in no way supporting conspiracy mongers. But I’m well enough informed about proven conspiracies to not fall into the equally ignorant trap of denialism. I’m an agnostic about such things. I don’t affirm or deny what I don’t know, even as I do base my opinions on the evidence and patterns seen in past known cases. If there isn’t always a conspiracy of politics and power, there is most definitely a conspiracy of ignorance in American society (e.g., the propaganda wars over school textbooks). I’m all for skepticism, but skepticism is only as good as the knowledge it is based on and the public debate within which it operates. How many self-identified skeptics of conspiracy theories could honestly claim to be widely read and well informed about the US history of proven conspiracies? What do we do if the Dunning-Kruger effect applies equally to many on both sides of the debate?

In Novella’s comment section, someone going by the username rezistnzisfutl says that, “We all know that there’s funny business that goes on with the government. The same can be said really about any organization out there. I think the point of this article is that skeptics hold out for evidence for whatever is being claimed, while cynics will often assume a lot whether there’s evidence or not. It’s not to say that cynics are necessarily wrong, but typically for skeptics, disbelief or withholding of judgment is the default position until actual legitimate evidence is presented for a claim.” Demonstrating confused thought, he goes on to say that, “It’s more likely that news outlets are more interested in ratings and advertising dollars, than being the lapdogs of the government or corporations.” He is talking as if news outlets were not also corporations, which indicates a bizarre if maybe common psychological disconnect.

He then throws out what he considers to be a clincher: “There are many competing news organization and independent news sources that would jump at the opportunity to blow conspiracies wide open given the chance, if actual evidence of these things surfaced. Those kinds of things would make fortunes and put small orgs on the map.” In that case, show me the immense wealth that Gary Webb accrued. Show me the high life of luxury exhibited by Julian Assange, Edward Snowden, Chelsea Manning, etc. Or show me the fortune made by Raymond Lemme, who died mysteriously while investigating the 2000 election in Florida.

In discussing that last one among much else, I offered this thought: “I don’t know what to do with this kind of thing. To most people, this is the territory of conspiracy theorists, ya know crazy paranoiacs. It should, therefore, be dismissed from thought and banished from public debate. The problem is that I’m psychologically incapable of ignoring inconvenient and uncomfortable facts. Call it depressive realism. I just can’t turn away, as if it doesn’t matter.”

Amusingly, the two sides in that comment section debate mostly seem to be talking past one another. On the other hand, I saw good points made on both sides. In the end, the most reasonable conclusion was made by someone with the username Kobra — he simply stated that, “This conversation is moot because you cannot translate the scientific skeptical model into other domains, like business, or politics.” What does skepticism mean toward systems of power that seek to manipulate our beliefs and doubts about what is true, not to mention ideological and cultural worldviews that bias our thoughts and experiences at fundamental levels of our being?

Both sides assume they are the rational skeptics and those on the other side are the irrational fools. But in being intellectually humble, how do you prove you aren’t the one being an irrational fool or simply misinformed and misguided? How can you know what you don’t know, know that what you think you know isn’t false or partial, and know that there isn’t something else you really should know? We should be skeptical toward skepticism itself.

* * *

3/1/18 – Further thought:

There is one thought that kept bugging me. People assume that conspiracy means total secrecy. In a simplistic sense it does, as conspiracies begin in secrecy. But they don’t always remain in secrecy, not entirely. Conspiracies can continue and be successful even when they are open secrets.

An example of this is the dark money that runs American society. There has been some good investigative reporting on it, published in scholarly books and in the alternative media. Dark Money by Jane Mayer is one example. Another one is Buzzfeed’s in-depth report on how the Koch brothers and Mercer family funneled money to Steve Bannon, Breitbart News, Project Veritas, etc. Both of these probably only scratched the surface of what goes on behind the scenes, as they are just two examples among many.

Typically, such revelations are only briefly reported on in the mainstream news and once again are lost in silence or lost amidst the most recent spectacle in the news cycle. The majority of Americans probably never read about any of it or, if they did, they have forgotten about it. This is how open secrets operate. Most people don’t know and don’t want to know. The number of Americans that follow the news closely is small. Among those who do pay attention, fewer still want to consider any of it in terms of conspiracy. It’s easier to assume that these are exceptions to the norm, just a few bad apples and not an indicator of a larger pattern of wrongdoing.

So, after some partial and momentary exposure, the powerful plutocrats go back to their scheming and nothing changes. The Kochs and Mercers of the world surely have numerous conspiracies going on, with money being funneled into all sorts of shell companies, front groups, astroturf, media operations, etc. If you pay much attention to right-wing ‘alternative’ media, you’ll notice that when certain articles come out they simultaneously appear on dozens of websites, and most of those websites, likely all backed by the same funding sources, seem to serve no other purpose than to push such pieces into newsfeeds, social media, and web search results. Certainly, there is an agenda behind it all.

The conspiratorial attempts to manipulate and influence us are common and mostly happen in the background. This isn’t some grand insight and doesn’t require much in the way of paranoia, just ordinary awareness of the larger world and a willingness to pay attention. But that is the key issue. Most ignorance on this matter is willful. It’s simply depressing to think about. That is why even government conspiracies that are proven through leaks and admissions rarely break into public awareness and become a permanent part of public knowledge. I mentioned some of them in this post. Those who lived during the period of those major conspiracies didn’t know about them while they were happening and most still don’t know.

The fact of the matter is that most people don’t want to know, which makes it easy for those with devious intentions. As a society, we never seem to learn because the process of becoming better informed can be quite demoralizing. We’d prefer to not think that we live in a society where bad things regularly happen. That is understandable, but that attitude is why bad people are so often able to get away without consequences. Of all the proven conspiracies I’ve mentioned, very few people involved were ever held accountable in legal terms or on a personal level. Without consequences, there is no deterrence.

Instead, most people go on denying that conspiracies happen. Only crazy people think that way.

* * *

From an earlier post:
Conspiracy Theory And Fact

We have voluminous official documentation and other evidence about conspiracies that weren’t known while they were happening, often only becoming verified decades later. Even when evidence shows the official story doesn’t make sense, any alternative explanation is a conspiracy theory by default, until some damning evidence finally comes forth. But even deathbed confessions by insiders (spymasters, covert operation agents, etc.) are regularly dismissed, for the type of people who get involved in conspiracies are those with reputations for secrecy and deceit.

Probably most of what militaries, alphabet soup agencies, organized crime, corporations, etc. do in secret never comes to light. Conspiracies, if successful, are designed to be hard to prove, with few paper trails and a surfeit of plausible deniability.

I’m not sure why anyone should find this surprising. It’s not hard to keep a secret when all involved have a vested interest in keeping it secret or, like soldiers, are trained to be subservient by maintaining silence. Conspirators, in particular, are legally complicit and so have little motive to admit anything. If all else fails, there are endless means to keep people silent, from blackmail to assassination (when one pays attention, one finds an amazingly improbable number of alleged conspirators, subpoenaed witnesses, and investigators who end up dying by mysterious accidents and unforeseen suicides).

Take something like the 1964 Gulf of Tonkin incident—if not a proven false flag operation, then at least a conspiracy to hide the truth. Far from being a minor incident, it justified the US entering into the Vietnam War. It just so happens that those in power had been looking for an excuse to officially declare war, although illegal covert military operations had been going on for a while. Anyway, it turns out that parts of the official account never happened, or did not happen the way it was officially stated, but the evidence didn’t come out in mainstream reporting until after the war was already over, and government documents were only declassified in 2005.

That was decades later! And that was a situation with multiple naval ships and naval crews from multiple countries, and so involved numerous potential eye witnesses. Declassified records show that even US Senators at the time knew the official story was false. Certainly, officials in the other involved governments also had information about what actually happened and didn’t happen. Few conspiracies have ever involved so many.

The Gulf of Tonkin is not much different than the WMDs that got us into the Iraq War. Even the CIA didn’t believe Iraq had WMDs (not unlike when the CIA knew that the Soviet Union posed no threat when politicians were pushing to start the Cold War and not unlike when the CIA knew John F. Kennedy was lying during his presidential campaign about the weaponry the Soviet Union possessed, both incidents of CIA collusion by inaction not known until long after the historical era had passed). Besides, those in the Bush administration knew they were misleading the public in connecting Saddam Hussein to the 9/11 terrorists. It was a conspiracy and one that operated right out in the open, for those who had eyes to see. All it took was a servile mainstream media and a submissive public. Too many people don’t want to know the truth, even when the truth is obvious. That is what can make conspiracies so easy to commit. Most people want to believe whatever they’re told, especially when the person telling it to them is an authority figure.

It’s the same reason the Vatican was able to hush up the sex abuse for decades, as most people simply don’t want to talk about it, what is called a conspiracy of silence. Netflix’s documentary “The Keepers” focuses on a Catholic school where this happened. It goes into great detail about how an offender could sexually abuse so many children while so many people around him remained oblivious or else refused to see. Even most of the victims never talked about it and the few that did were ignored. The one person, a nun, who seriously challenged the conspiracy of silence apparently was murdered. And more damning, there is strong evidence the police were involved in shutting down investigations, because the priest who was molesting children had family ties to the police.

The documentary finally managed to put the pieces together almost a half century later. That is to the credit of one tenacious investigator, but it is hardly evidence of a fully functioning free press that it took so long for the depravity to be revealed. So, don’t feed me any bullshit about there being no way conspiracies can be kept secret.

Consider another example from the private sector. Recent investigative reporting from an alternative media organization (Inside Climate News) found that Exxon and other major oil/gas corporations knew about man-made climate change since the 1970s.

Numerous people in these corporations, from scientists to upper management, were aware of this knowledge. There were even internal documents showing this knowledge. This was and is a problem that not only has threatened the earth’s biosphere and global population but has also been a national threat to powerful countries like the US. Yet a successful campaign of lies, obfuscation, and disinformation (involving not just PR but also powerful political lobbyist organizations, think tanks, and front groups) lasted for decades apparently without any of the conspirators coming forward to speak out about the conspiracy or, if they did, it never received much MSM news coverage.

According to some, conspiracies like this are highly implausible. Yet these particular implausible conspiracies have been proven true. Conspiracy theorists jumped on the Tonkin story early on as they noticed the unexplained discrepancies. And for a long time many have written about the tactics of oil/gas corporations. But until documents are released or discovered, conspiracy theories can be almost impossible to prove as conspiracy facts. The problem is that documents usually only come out after massive private investigation has already indicated conspiracy and long after any involved could be held accountable. Overwhelming proof can take a generation or generations to accumulate. Even so, most of what governments and corporations do in secret is never disclosed by those responsible, as the wealthy and powerful have little incentive to do so. The government alone has mountains of top secret documents, only a fraction of which have ever been made public by way of leaks or freedom of information requests.

* * *

Let me finish this post by taking it into a different direction. What makes a conspiracy possible? It’s not just secrecy and corruption but what these represent. It is a culture of distrust dependent on a culture of silence and hence a conspiracy of silence. In this mix, individual and collective shame, fear, and outrage drive a cycle of victimization.

In discussing the Tulsa race war, James S. Hirsch says that speaking of a “culture of silence” would have been more appropriate than a “conspiracy of silence” (Riot and Remembrance, p. 326). Conspiracies would never happen without silence. As Tim Madigan put it, a “culture of silence” breeds “cultural amnesia” (The Burning). And if you don’t understand the power of silence, it is understandable that conspiracies will seem absurd or else highly improbable.

I would add that this is neither ancient history nor limited to a single place. There has been a collective amnesia about racial issues all across our society. My grandmother grew up near Tulsa when the race war happened, she spent her young adulthood in a Klan center, and then she eventually moved her own family, including my father, into a sundown town — yet my father doesn’t recall any discussions in his family about race and racism, a refusal to speak in one generation creating ignorance in the next, a complete silencing such that my father would also move his own family to a sundown community with total unawareness, probably because on an unconscious level it felt comfortable to him.

This relates to what some, myself included, refer to as “the perplexing issue of simultaneously knowing and not knowing. The study of ignorance, agnotology, would also be the study of what is hidden, both to public and private awareness. All of this connects to ideas I first came across in the writings of Derrick Jensen, ideas about the victimization cycle, silencing, dissociation, splitting, doubling, etc.”

This is where social science and historical scholarship would aid skeptics in better understanding the world around them — linked to why Kobra was correct in saying that, “This conversation is moot because you cannot translate the scientific skeptical model into other domains, like business, or politics.” The skeptical attitude we need has to go much deeper into what it means to be human, specifically in the kind of society we find ourselves in.

It could be argued that the heart of the issue is shame. Whether or not a conspiracy originates in shame, it creates the conditions for shame, which further entrenches the conspiratorial mindset of distrust, fear, and anxiety. And shame has immense power in silencing victimizers and victims alike. That is what happens where trauma ripples outward, leaving silence in its wake. In communities that have experienced some collective trauma, there is a resistance to speaking that will be enforced by social pressure, if not by law. This has been seen in cities that have experienced racial violence, sometimes with the victims expelled from the community as in sundown towns and sometimes with public records expunged of evidence. This can leave a mere residue of the event(s) that occurred, often an absence rather than a presence, such as all or nearly all of the black population disappearing from one census to the next, but when asked about it few, if any, remember or will talk. Tulsa was a rare case in eventually having been formally investigated, although not until 1997, which was more than three quarters of a century later.

For whatever reason, the 1990s was the time when the multi-generational shadow of a conspiracy of silence began to lift, the time period in which James W. Loewen wrote his groundbreaking book on sundown towns. Though I attended high school in the 1990s, it wasn’t until recent years that I learned that one of the places I grew up in was a sundown suburb; of course, no one talked about it at the time.

The conspiracy of silence can operate in an odd way. It’s a sense of collective guilt, whether or not anyone was actually guilty. Loewen spoke of how, “Recent events in Martinsville, Indiana, provide an eerie example of cognitive dissonance at work” (Sundown Towns, p. 327). It was a known sundown town when a black woman, having transgressed the sundown code by remaining in town after dark, was murdered in 1968: “So most people (correctly) assumed the motive to be rage at Jenkins as a black person for being in the city after dark,” wrote Loewen, continuing that:

“In the aftermath of the murder, NAACP leaders and reporters from outside the town levied criticism at the city’s police department, alleging lack of interest in solving the crime. Martinsville residents responded by appearing to define the situation as “us” against “them,” “them” being outsiders and nonwhites. The community seemed to close ranks behind the murderer and refused to turn him in, whoever he was. “The town became a clam,” said an Indianapolis newspaper reporter.65 Now Martinsville came to see itself not just as a sundown town—it already defined itself as that—but as a community that united in silence to protect the murderer of a black woman who had innocently violated its sundown taboo. To justify this behavior required still more extreme racism, which in turn prompted additional racist behaviors and thus festered further. […]

“Ironically, it turned out that no one from Martinsville murdered Carol Jenkins. On May 8, 2002, police arrested Kenneth Richmond, a 70-year-old who had never lived in Martinsville, based on the eyewitness account of his daughter, who sat in his car and watched while he did it when she was seven years old. Although many people inside as well as outside Martinsville believed its residents had been sheltering the murderer these 34 years, in fact no one in the town had known who did it. No matter: cognitive dissonance kicked in anyway. Again, if situations are defined as real, they are real in their consequences. Because everyone thought the community had closed ranks in defense of the murderer, additional acts of racism in the aftermath seemed all the more appropriate. Today, having intensified its racism for more than three decades in defense of its imagined refusal to turn over the murderer, Martinsville is finding it hard to reverse course.” (p. 328-329)

It was a conspiracy of silence based on nothing other than an imagined shared past that became an imagined shared identity. No one would tell the secret of this horrific crime, going to the grave with it if necessary, but it turns out there was no secret other than a sense of collective guilt. Successful conspiracies always draw people in psychologically, the oppressive sense of secrecy sometimes keeping people from even questioning its validity. Keeping secrets is normal human behavior and humans are quite talented at it. This is why it is so easy for conspiracies to happen, in particular when the stakes are so much higher.

Also, there is usually no one who has anything to gain by bringing attention to a conspiracy. In towns with a history of racial violence or exclusion, it’s rare for anyone to talk and, as such towns are so common across the country, such places rarely gain much public attention. When a conspiracy of silence becomes a norm within a country, breaking that norm is difficult and can be costly. At the local level, there is more often than not no mention of the history of racism by local historians, historical societies, historical markers, and history books; by local newspapers, chambers of commerce, authority figures, and residents; or even by professors working in local colleges.

In the sundown town my father grew up in, there was a sundown sign on a road coming into town, and it was still there when my father was growing up, but as I said, no one talked about it. My grandfather was a respected local minister and was racist, and it seems he played a role, like so many others, in suppressing this dark reality. This was standard behavior, as Loewen notes: “One might imagine that priests and preachers might chide their congregations about their un-Christian attitude toward people of color, but clergy, like local historians, avoid controversy by not saying anything bad about their town” (p. 199).

People in a town can successfully conspire not to talk about what everyone knows and even the living memory can be quickly suppressed, such as my father’s convenient inability to remember anything out of the ordinary. Well, it wasn’t out of the ordinary, as many communities in Indiana and across the country were sundown towns: “Outside the traditional South—states historically dominated by slavery, where sundown towns are rare—probably a majority of all incorporated places kept out African Americans” (p. 4). It was the social reality that was so pervasive that it didn’t need to be acknowledged — racism was the air everyone breathed.

This ability to suppress dark secrets, even when they are open secrets, is not some magical ability limited to racists in racist towns. This is basic human nature. Any group of people can act this way: churches, sports organizations, corporations, etc — this would be even more true for intelligence agencies that carefully select their employees, highly train them, and enforce protocols of secrecy with severe punishments to those who leak (e.g., almost any other CIA, NSA, etc employee that was as careless with classified documents as was Hillary Clinton would already be in prison).

If intelligence agencies weren’t highly talented at implementing successful conspiracies that rarely were exposed, they would be complete failures at their job. That isn’t to suggest most conspiracies represent hidden evil, for most conspiracies are of no grand consequence, simply ordinary covert operations (heck, something as simple as a surprise birthday party is a conspiracy), and even those that are of greater importance probably are by and large well-intentioned according to the purposes and public mandate that the officials involved believe themselves to be serving. The entire design of intelligence organizations is conspiracy, to conspire (i.e., covertly plan and enact activities, theoretically in service of national security and law enforcement). If the US government was as incompetent as conspiracy denialists believe, we would have lost World War II and the Cold War. It seems too many people like to imagine absurd caricatures of conspiracies and conspiracy theorists.

That isn’t to deny there are conspiracy mongers, even some that fit the caricatures, although I would re-emphasize the point that at least some conspiracy mongers are likely disinformation agents, agent provocateurs, and controlled opposition. Consider the Breitbart News Network, the single largest and most influential conspiracy-mongering operation in the country; it just so happens to have been heavily funded by, and to serve the interests of, the Mercer family, one of the wealthiest and most powerful plutocratic families in the United States and the world. Certainly, the Mercer family’s pushing of conspiracy theories serves a self-interested political purpose. That is to say, the conspiracy mongering obscures the real conspiracy of corporatism, the tight grip big biz has over big gov.

None of this is exactly a shocking revelation to anyone who has paid attention to what American society has become since the Gilded Age. We shouldn’t ignore the actual psychopaths, social dominators, and authoritarians involved. But more importantly, we shouldn’t forget that the potential for secrecy and silence is within us all. Even when people commit wrongdoing in collaboration with others (i.e., conspiracy), they rarely think of themselves as bad people, much less evil conspirators. What is disturbing about some conspiracies is how normal they are, most people simply going through the motions, going along to get along, giving into pressure and doing what is expected, and then of course rationalizing it all in their own mind. Conspiracy is one of the easiest things in the world. Breaking silence and revealing secrets is immensely more difficult. It feels bad to confront what is bad and it is even more challenging to simply acknowledge that there is something that needs to be confronted, especially when the response you will get is to be treated as a troublemaker or even a threat, possibly with harsh consequences following such as ostracism, career destruction, and/or imprisonment.

Conspiracies, once set into motion, can be maintained with little effort for all that is required is to do nothing or to do what one has always done, just keep your head down. And once you have been made into a collaborator or made to perceive yourself that way, immense guilt, shame, and fear will powerfully keep most people in line. Besides, most conspiracies operate by few people knowing all that is involved or to what end, making it all the easier to rationalize one’s actions.

* * *

Systemic-Conspiracy as Social Pathology
by Brent Cooper

Agnotology

What you don’t know can kill you. Agnotology is a nascent field that emphasizes the cultural production of ignorance as an anti-epistemological force. It provides great insight into state secrecy and social epistemology. In Agnotology: The Making and Unmaking of Ignorance, the authors (also editors) of the volume, Proctor and Schiebinger, make the case that ignorance is under-theorized and that their new term goes a long way in rectifying it.

Ironically, we are ignorant of our own ignorance. In the book, this observation is empirically reflected by Peter Galison’s estimate that the amount of data classified is 5–10 times greater than the open literature publicly accessible. Moreover, Proctor echoes the Socratic wisdom that knowledge of one’s own ignorance is a necessary prerequisite for intellectual enlightenment.

Considering this, Agnotology contributes substantial and needed insight to theories of knowledge. Proctor divides ignorance into three types: native state (common form; innocence; naiveté), lost realm (forgotten; selective; missed), and strategic ploy (“strategies to deceive”). It is primarily the last type that concerns us when dealing with conspiracy, although not exclusively.

Examples of strategic ploy ignorance are found in trade secrets, the tobacco industry, and military secrecy. Trade secrecy is legitimized because it is concerned with intellectual property as capital that drives business and economics. However, other forms of secrecy are more nefarious as they obscure truths vital to the public in order to advance their own interests. The truth about the lethality of smoking was stalled for nearly half a century through the concerted “manufacture of doubt.” Likewise, for even longer the scientific consensus on climate change has been marginalized and obstructed by conservatives.

Post 9/11, the Bush regime has also implemented an array of draconian legislative measures including the Patriot Acts resulting in scandals such as NSA wiretapping and extraordinary rendition, among others. Another potentially dangerous form of strategic ploy ignorance production is the state-secrets privilege. The executive can annul any lawsuits or investigations if disclosure of information pertaining to the case can potentially threaten national security.

A notable case involved FBI whistleblower Sibel Edmonds, whose appeals to expose evidence of FBI internal security breaches and a cover-up were blocked by the invocation of the state secrets privilege. Similarly, the disclosure of ‘Top Secret’ information, such as Daniel Ellsberg’s Pentagon Papers, is extremely dangerous and considered treasonous. However, exposing the crimes also reveals how dysfunctional the government really is.

It is marginally productive to describe these issues outside the context of systemic-conspiracy. The cases of climate change and tobacco-harm denial plainly reveal the systematic distortion of science and policy, and the manipulation of public opinion. And with agnotology, denial is truly the operative word.

The successful suppression of these cases from the dominant news narrative is yet another testament to the power of a systemic faceless enemy. The systemic-conspiracy is not one that explicitly engages a few people, but one that implicitly engages many people in subtle and banal ways. Awareness in a new broader context makes it more salient.

Riot and Remembrance:
The Tulsa Race War and Its Legacy
by James S. Hirsch
pp. 168-171

THE RIOT disappeared from sight. There were no memorials to honor the dead, no public ceremonies to observe an anniversary or express regret. Tulsans, black and white, made no public acknowledgment of the riot. Greenwood’s damaged buildings were evidence of the assault, but in time they too were toppled or rebuilt. The riot was not mentioned in Oklahoma’s history books from the 1920s and 1930s, including Oklahoma: A History of the State and Its People, The Story of Oklahoma, Readings in Oklahoma History, Oklahoma: Its Origins and Development, Our Oklahoma, and Oklahoma: A Guide to the Sooner State. Angie Debo was a fearless Oklahoma historian— she was known as a “warrior scholar”— who chronicled how federal government agencies and business interests swindled land from the Indians. In 1943 she published Tulsa: From Creek Town to Oil Capital, but even this popular history made only brief and superficial reference to the riot. The Chronicles of Oklahoma, a quarterly journal on state history published by the Oklahoma Historical Society, has never run a story on the riot. It began publication in 1921.

Efforts to cover up the riot were rare but unmistakable. The most egregious example was the Tribune’s decision to excise from its bound volumes the front-page story of May 31, “Nab Negro for Attacking Girl in Elevator.” Equally irresponsible was the shredding of that day’s editorial page. Years later, scholars discovered that police and state militia documents associated with the riot were also missing.

These efforts to suppress information, however, do not account for the lack of serious scrutiny given the riot. Any scholar, journalist, or interested citizen could piece together the incident through court records, newspaper articles, photographs, and interviews. But such an investigation rarely happened. For most white Tulsans, the disaster was as isolated as Greenwood itself. One of America’s most distinguished historians, Daniel J. Boorstin, grew up in Tulsa and was six years old at the time of the riot. He graduated from Central High School and devoted his professional life to studying history, writing some twenty books and winning a Pulitzer Prize for The Discoverers, about man’s quest to know the world. But Boorstin never wrote about what may have been the greatest race riot in American history, even though his own father might have been a rich source of information. In 1921 Sam Boorstin was the lawyer for the Tulsa Tribune. In an essay about the optimistic ethos of Tulsa in Cleopatra’s Nose (1994), Daniel Boorstin mentioned the city’s “dark shadows— such as the relentless segregation, the brutal race riots of the 1920s, and the Ku Klux Klan. But these were not visible or prominent in my life.” *

The white Tulsans’ response to the riot has been called “a conspiracy of silence” or “a culture of silence.” The subject was certainly ignored in schools, newspapers, and churches. During the middle 1930s, the Tribune ran a daily feature on its editorial page describing what had happened in Tulsa on that date fifteen years earlier; but on the fifteenth anniversary of the riot, the paper ran a series of frivolous items. “Central high school’s crowning social event of the term just closed was the senior prom in the gymnasium with about 200 guests in attendance,” the Tribune dutifully reported. “The grand march was led by Miss Sara Little and Seth Hughes.”

Many whites viewed the riot as one of those inexplicable events, an act of nature. A brief article in the Tulsa World on November 7, 1949, proclaimed the incident as the “top horror of city history . . . Mass murder of whites and Negroes began on June 1. No one knew then or remembers now how the shooting began.”

But the incident survived as a kind of underground phenomenon, a memory quietly passed along and enhanced by the city’s pioneers at picnics, church suppers, and other gatherings. In time, the riot acquired new shades of meaning: it was viewed as a healing event in the city’s history, a catalyst for progress between the races, and an opportunity for magnanimous outreach.

This revisionism was captured in Oklahoma: A Guide to the Sooner State, written for the Federal Writers’ Project around 1940. (Its reports became the American Guide travel series.) The report said that vigilantes invaded Greenwood and laid it waste by fire, but after two days of martial law, “The whites organized a systematic rehabilitation program for the devastated Negro section and gave generous aid to the Negroes left homeless by the fires. Nationwide publicity of the most lurid sort naturally followed the tragedy, and Tulsa’s whites and Negroes joined in an effort to live down the incident by working diligently— and on the whole successfully— for a better mutual understanding.”

Nothing could be further from the truth. Whites not only avoided rehabilitation but were also engaged in systematic discrimination in the 1930s (when the Guide was researched). Most southern and southwestern cities routinely assigned public service jobs to African Americans, but not Tulsa. Eight black policemen patrolled Greenwood, but the city otherwise did not have a single black employee. Tulsa and its private utility companies hired only whites as meter readers in black neighborhoods. Tulsa was also one of the few cities to have only white carriers deliver mail in the black community. The city not only segregated its schools but used different-colored checks to pay white and black teachers. In the federal building, the U.S. government had 425 employees, only 8 of whom were black: 4 men swept the floors during the day, and 4 women scrubbed them at night. The Mid-Continent Petroleum Corporation operated the world’s largest inland refinery in Tulsa, employing more than 3,000 people. It had no Negro employees. There were also no Negro Girl Scouts. A director for the organization explained, “If the Negro girls wore Scout uniforms, the white girls would take theirs off.”

Sundown Towns:
A Hidden Dimension of American Racism

by James W. Loewen
pp. 210-213

Academic historians have long put down what they call “local history,” deploring its shallow boosterism. But silence about sundown towns is hardly confined to local historians; professional historians and social scientists have also failed to notice them. Most Americans—historians and social scientists included—like to dwell on good things. Speaking to a conference of social studies teachers in Indiana, Tim Long, an Indiana teacher, noted how this characteristic can mislead: Today if you ask Hoosiers, “How many of you know of an Underground Railroad site in Indiana?” everyone raises their hands. “How many of you know of a Ku Klux Klan member in Indiana?” Few raise their hands. Yet Indiana had a million KKK members and few abolitionists. The same holds for sundown towns: Indiana had many more sundown towns after 1890 than it had towns that helped escaping slaves before 1860. Furthermore, Indiana’s sundown towns kept out African Americans throughout most of the twentieth century, some of them to this day, while its towns that aided slaves did so for about ten years a century and a half ago. Nevertheless, historians, popular writers, and local historical societies in Indiana have spent far more time researching and writing about Underground Railroad sites than sundown towns. The Underground Railroad shows us at our best. Sundown towns show us at our worst.37 Authors have written entire books on sundown towns without ever mentioning their racial policies.38 I am reminded of the Hindi scene of the elephant in the living room: everyone in the room is too polite to mention the elephant, but nevertheless, it dominates the living room. Some city planners seem particularly oblivious to race. […]

Two anthropologists, Carl Withers and Art Gallaher, each wrote an entire book on Wheatland, Missouri, a sundown town in a sundown county. Gallaher never mentioned race, and Withers’s entire treatment is one sentence in a footnote, “However, no Negroes live now in the county.” Penologist James Jacobs wrote “The Politics of Corrections” about the correctional center in Vienna, Illinois, but even though its subtitle focused upon “Town/Prison Relations,” he never mentioned that Vienna was a sundown town, while most of the prisoners were black and Latino. This pattern of evasion continues: most entries on sundown suburbs in the Encyclopedia of Chicago, for instance, published in 2004, do not mention their striking racial composition, let alone explain how it was achieved. […]

Journalists, too, have dropped the ball. We have seen how business interests sometimes stop local newspapers from saying anything bad about a town. Propensities within journalism also minimize coverage of racial exclusion. Occasionally a race riot or a heinous crime relates to sundown towns and has caused the topic to become newsworthy. […]

Reporters for the New Yorker and People covered the 2002 arrest of the man who killed African American Carol Jenkins for being in Martinsville, Indiana, after dark, but the result was to demonize Martinsville as distinctive. As a result, I could not get an official of the Indiana Historical Bureau to address how general sundown towns might be in Indiana; instead, she repeated, “Martinsville is an entity unto itself—a real redneck town.” But Martinsville is not unusual. For the most part, precisely what is so alarming about sundown towns—their astonishing prevalence across the country—is what has made them not newsworthy, except on special occasions. Murders sell newspapers. Chronic social pathology does not.42

Journalism has been called the “first draft of history,” and the lack of coverage of sundown towns in the press, along with the reluctance of local historians to write anything revealing about their towns, has made it easy for professional historians and social scientists to overlook racial exclusion when they write about sundown communities. Most white writers of fiction similarly leave out race. In White Diaspora, Catherine Jurca notes that suburban novelists find the racial composition of their communities “so unremarkable” that they never think about it.43

So far as I can tell, only a handful of books on individual sundown towns has ever seen print, and this is the first general treatment of the topic.44 That is an astounding statement, given the number of sundown towns across the United States and across the decades. Social scientists and historians may also have failed to write about sundown towns because they have trouble thinking to include those who aren’t there. “People find it very difficult to learn that the absence of a feature is informative,” note psychologists Frank Kardes and David Sanbonmatsu. Writers who don’t notice the absence of people of color see nothing to explain and pay the topic no attention at all. Where does the subject even fit? Is this book African American history? Assuredly not—most of the towns it describes have not had even one African American resident for decades. It is white history . . . but “white history” is not a subject heading in college course lists, the Library of Congress catalog, or most people’s minds. Perhaps the new but growing field of “whiteness studies” will provide a home for sundown town research.45

I don’t mean to excuse these omissions. The absence of prior work on sundown towns is troubling. Omitted events usually signify hidden fault lines in our culture. If a given community has not admitted on its landscape to having been a sundown town in the past, that may be partly because it has not yet developed good race relations in the present. It follows that America may not have admitted to having sundown towns in its history books because it has not yet developed good race relations as a society. Optimistically, ending this cover-up now may be both symptom and cause of better race relations.

p. 424

Once we know what happened, we can start to reconcile. Publicizing a town’s racist actions can bring shame upon the community, but recalling and admitting them is the first step in redressing them. In every sundown town live potential allies—people who care about justice and welcome the truth. As a white man said in Corbin, Kentucky, on camera in 1990, “Forgetting just continues the wrong.” “Recovering sundown towns” (or wider metropolitan areas or states) might set up truth and reconciliation commissions modeled after South Africa’s to reveal the important historical facts that underlie their continuing whiteness, reconcile with African Americans in nearby communities, and thus set in motion a new more welcoming atmosphere. 8

The next step after learning and publicizing the truth is an apology, preferably by an official of the sundown town itself. In 2003, Bob Reynolds, mayor of Harrison, Arkansas, which has been all-white ever since it drove out its African Americans in race riots in 1905 and 1909, met with other community leaders to draw up a collective statement addressing the problem. It says in part, “The perception that hangs over our city is the result of two factors: one, unique evils resulting from past events, and two, the silence of the general population toward those events of 1905 and 1909.” The group, “United Christian Leaders,” is trying to change Harrison, and it knows that truth is the starting place. “98 years is long enough to be silent,” said Wayne Kelly, one of the group’s members. George Holcomb, a retiree who is also a reporter for the Harrison Daily Times, supports a grand jury investigation into the race riots: “Get the records, study them, give the people an account of what happened. Who lost property, what they owned, who had it stolen from them and who ended up with it.”

A Language Older Than Words
by Derrick Jensen
p. 4

We don’t stop these atrocities, because we don’t talk about them. We don’t talk about them, because we don’t think about them. We don’t think about them, because they’re too horrific to comprehend. As trauma expert Judith Herman writes, “The ordinary response to atrocities is to banish them from consciousness. Certain violations of the social compact are too terrible to utter aloud: this is the meaning of the word unspeakable.”

pp. 262-263

I’m not saying that Dave’s condition as a wage slave is the same as the condition of a woman about to be shot by a Nazi police officer. Nor am I saying that to grow up in a violent household is the same as to be murdered and mutilated by a United States Cavalry trooper. Nor am I saying that the Holocaust is the same as the destruction of indigenous peoples, nor am I saying that clearcuts are the same as rape. To make any of these claims would be absurd. Underlying the different forms of coercion is a unifying factor: Silence. The necessity of silencing victims before, during, and after exploitation or annihilation, and the necessity at these same times of silencing one’s own conscience and one’s conscious awareness of relationship is undeniable. These radically different atrocities share mechanisms of silencing; […]

pp. 346-348

If we have become so inured to the coercion that engulfs, forms, and deforms us that we no longer perceive it for the aberration it is, how much more is this true for our ignorance of the trauma that characterizes our way of life? Salmon are going extinct? Pass the toast, man, I’m hungry. A quarter of a million dead in Iraq? Damnit, I’m gonna be late for work. If coercion is our habitat, then trauma is the food we daily take into our bodies.

I spoke with Dr. Judith Herman, one of the world’s experts on the effects of psychological trauma. I asked her about the relationship between atrocity and silence.

She said, “Atrocities are actions so horrifying they go beyond words. For people who witness or experience atrocities, there is a kind of silencing that comes from not knowing how to put these experiences into speech. At the same time, atrocities are the crimes perpetrators most want to hide. This creates a powerful convergence of interest: no one wants to speak about them. No one wants to remember them. Everyone wants to pretend they didn’t happen.”

I asked her about a line she once wrote: “In order to escape accountability the perpetrator does everything in his power to promote forgetting.”

“This is something with which we are all familiar. It seems that the more extreme the crimes, the more determined the efforts to deny the crimes happened. So we have, for example, almost a hundred years after the fact, an active and apparently state-sponsored effort on the part of the Turkish government to deny there was ever an Armenian genocide. We still have a whole industry of Holocaust denial. I just came back from Bosnia where, because there hasn’t been an effective medium for truth-telling and for establishing a record of what happened, you have the nationalist governmental entities continuing to insist that ethnic cleansing didn’t happen, that the various war crimes and atrocities committed in that war simply didn’t occur.”

“How does this happen?”

“On the most blatant level, it’s a matter of denying the crimes took place. Whether it’s genocide, military aggression, rape, wife beating, or child abuse, the same dynamic plays itself out, beginning with an indignant, almost rageful denial, and the suggestion that the person bringing forward the information— whether it’s the victim or another informant— is lying, crazy, malicious, or has been put up to it by someone else. Then of course there are a number of fallback positions to which perpetrators can retreat if the evidence is so overwhelming and irrefutable it cannot be ignored, or rather, suppressed. This, too, is something we’re familiar with: the whole raft of predictable rationalizations used to excuse everything from rape to genocide: the victim exaggerates; the victim enjoyed it; the victim provoked or otherwise brought it on herself; the victim wasn’t really harmed; and even if some slight damage has been done, it’s now time to forget the past and get on with our lives: in the interests of preserving peace— or in the case of domestic violence, preserving family harmony— we need to draw a veil over these matters. The incidents should never be discussed, and preferably should be forgotten altogether.”

The Elephant in the Room:
Silence and Denial in Everyday Life

by Eviatar Zerubavel
pp. 13-16

As one might expect, what we ignore or avoid socially is often also ignored or avoided academically, 40 and conspiracies of silence are therefore still a somewhat undertheorized as well as understudied phenomenon. Furthermore, they typically consist of nonoccurrences, which, by definition, are rather difficult to observe. After all, it is much easier to study what people do discuss than what they do not (not to mention the difficulty of telling the difference between simply not talking about something and specifically avoiding it). 41

Yet despite all these difficulties, there have been a number of attempts to study conspiracies of silence. To date, those studies have, without exception, been focally confined to the way we collectively avoid specific topics such as race, homosexuality, the threat of nuclear annihilation, or the Holocaust. But no attempt has yet been made to transcend their specificity in an effort to examine such conspiracies as a general phenomenon. 42 Unfortunately, there is a lack of dialogue between those who study family secrets and those who study state secrets, and feminist writings on silence are virtually oblivious to its nongendered aspects. That naturally prevents us from noticing the strikingly similar manner in which couples, organizations, and even entire nations collectively deny the presence of “elephants” in their midst. Identifying these similarities, however, requires that we ignore the specific contents of conspiracies of silence and focus instead on their formal properties.

The formal features of such conspiracies are revealed when we examine the dynamics of denial at the level of families that ignore a member’s drinking problem as well as of nations that refuse to acknowledge the glaring incompetence of their leaders. […]

“The best way to disrupt moral behavior,” notes political theorist C. Fred Alford, “is not to discuss it and not to discuss not discussing it.” “Don’t talk about ethical issues,” he facetiously proposes, “and don’t talk about our not talking about ethical issues.” 45 As moral beings we cannot keep on non-discussing “undiscussables.” Breaking this insidious cycle of denial calls for an open discussion of the very phenomenon of undiscussability.

pp. 26-27

Furthermore, there are certain things that are never supposed to be discussed, or sometimes even mentioned, at all.

Consider here also the strong taboo, so memorably depicted in films like Prince of the City, Mississippi Burning, In the Heat of the Night, A Few Good Men, Bad Day at Black Rock, or Serpico, against washing one’s community’s “dirty laundry” in public. Particularly noteworthy in this regard are informal codes of silence such as the omerta, the traditional Sicilian code of honor that prohibits Mafia members from “ratting” on fellow members, or the infamous “blue wall of silence” that, ironically enough, similarly prevents police officers from reporting corrupt fellow officers, not to mention the actual secrecy oaths people must take in order to become members of secret societies or underground movements. Equally prohibitive are the “cultures of silence” that prevent oil workers from reporting oil spills and fraternity members from testifying against fellow brothers facing rape charges, and that have led senior tobacco company executives to suppress the findings of studies showing the incontrovertible health risks involved in smoking, and prevented the typically sensationalist, gossipy British and American press from publicizing the imminent abdication of King Edward VIII in 1936, or the sexual indiscretions of President John F. Kennedy. 29

A most effective way to make sure that people would actually stay away from conversational “no-go zones” 30 is to keep the tabooed object nameless, as when Catholic preachers, for example, carefully avoid mentioning sodomy (the “nameless sin”) by name. 31 It is as if refraining from talking about something will ultimately make it virtually unthinkable, as in the famous dystopian world of George Orwell’s Nineteen Eighty-Four, where it was practically impossible “to follow a heretical thought further than the perception that it was heretical; beyond that point the necessary words were nonexistent.”

pp. 47-50

It only takes one person to produce speech, but it requires the cooperation of all to produce silence. —Robert E. Pittenger et al., The First Five Minutes

The Double Wall of Silence

As we approach denial from a sociological rather than a more traditional psychological perspective, we soon realize that it usually involves more than just one person and that we are actually dealing with “co-denial,” a social phenomenon involving more than just individuals. 1 In order to study conspiracies of silence we must first recognize, therefore, that, whether it is only a couple of friends or a large organization, they always involve an entire social system.

Co-denial presupposes mutual avoidance. Only when the proverbial elephant in the room is jointly avoided by everyone around it, indeed, are we actually dealing with a “conspiracy” of silence.

As the foremost expression of co-denial, silence is a collective endeavor, and it involves a collaborative effort on the parts of both the potential generator and recipient of a given piece of information to stay away from it. “Unlike the activity of speech, which does not require more than a single actor, silence demands collaboration.” 2 A conspiracy of silence presupposes discretion on the part of the non-producer of the information as well as inattention on the part of its non-consumers. It is precisely the collaborative efforts of those who avoid mentioning the elephant in the room and those who correspondingly refrain from asking about it that make it a conspiracy. […]

The “equal protection” provided to those who show no evil as well as to those who see no evil is the result of the symmetrical nature of the relations between the opposing social forces underlying conspiracies of silence. Such symmetry is evident even in highly asymmetrical relations, as so perfectly exemplified by the reluctance of both children and parents to discuss sexual matters with one another, the former feeling uncomfortable asking (and later telling) and the latter feeling equally uncomfortable telling (and later asking). Consider also the remarkable symmetry between someone’s wish to keep some atrocity secret and another’s urge to deny its reality even to oneself, as exemplified by the symbiotic relations between the politically incurious Alicia and her ever-evasive husband Roberto in the film The Official Story. Or note the chillingly symmetrical dynamics of silence between the fearsome perpetrators and the fearful witnesses of these atrocities, as exemplified by the Nazis’ efforts to hide the horrors of their concentration camps from nearby residents who in turn willingly turned a blind eye to their existence. 7

By collaboratively seeing and showing, or hearing and speaking, no evil we thus construct a “double wall” of silence, originally theorized by psychologist Dan Bar-On in the context of the relations between former Nazi perpetrators and their children yet, ironically, equally central to the dynamics between their victims and their children. After all, the heavy silence hanging over many Holocaust survivors’ homes is a product of “the interweaving of two kinds of conflicted energy: on the part of the survivor, [the] suppression of telling; on the part of the descendant, [the] fear of finding out.” (As one child of survivors recalls, talking about the Holocaust “was never overtly forbidden. By no means was I or my brother ever shushed when we attempted to steer the conversation [there]. We simply never made such attempts.”) That explains how someone may indeed remain forever unclear as to who actually prevented her mother from telling her how her grandmother was killed: “I don’t know whether the stopping of the conversation was my own doing or hers.” It was most likely both.

pp. 54-57

As we might expect, the likelihood of participating in a conspiracy of silence is greatly affected by one’s proximity to the proverbial elephant. The closer one gets to it, the more pressure one feels to deny its presence. Indeed, it is the people standing in the street and watching the royal procession rather than those who are actually part of it who are the first ones to break through the wall of denial and publicly acknowledge that the emperor has in fact no clothes. 18

Just as significant is the effect of social proximity among those standing around the elephant. After all, the socially “closer” we are, the more we tend to trust, and therefore the less likely we are to refrain from talking more openly with, one another. Formal relations and the social environments that foster them (such as bureaucracy), on the other hand, are more likely to discourage openness and thereby promote silence.

Equally significant is the political “distance” between us. We generally tend to trust our equals more than our superiors. Social systems with particularly hierarchical structures and thus more pronounced power differences therefore produce greater reluctance toward openness and candor.

Yet the one structural factor that most dramatically affects the likelihood of participating in conspiracies of silence is the actual number of conspirators involved. In marked contrast to ordinary secrets, the value of which is a direct function of their exclusivity (that is, of the paucity of people who share them), 19 open secrets actually become more tightly guarded as more, rather than fewer, people are “in the know.” Indeed, the larger the number of participants in the conspiracy, the “heavier” and more “resounding” the silence. Prohibiting strictly one-on-one encounters such as Winston and Julia’s illicit rendezvous in Nineteen Eighty-Four may thus be the most effective way for a dystopian police state to ensure that certain things are never openly discussed.

As famously demonstrated by one of the founding fathers of modern sociology, Georg Simmel, one only needs to compare social interactions among three as opposed to two persons to appreciate the extent to which the dynamics of social interactions are affected by the number of participants involved in them. And indeed, unlike two-person conspiracies of silence, even ones involving only three conspirators already presuppose the potential presence of a new key player in the social organization of denial, namely the silent bystander. […]

Silent bystanders act as enablers because watching others ignore something encourages one to deny its presence. As evident from studies that show how social pressure affects our perception, it is psychologically much more difficult to trust one’s senses and remain convinced that what one sees or hears is actually there when no one else around one seems to notice it. The discrepancy between others’ apparent inability to notice it and one’s own sensory experience creates a sense of ambiguity that further increases the likelihood that one would ultimately succumb to the social pressure and opt for denial. 22

Such pressure is further compounded as the number of silent bystanders increases. […] Moreover, the actual experience of watching several other people ignore the elephant together is significantly different from watching each of them ignore it by himself, because it involves the added impact of observing each of them watch the others ignore it as well! Instead of several isolated individuals in denial, one is thus surrounded by a group of people who are obviously all participating in one and the same conspiracy. Furthermore, moving from two- to three-person, let alone wider, conspiracies of silence involves a significant shift from a strictly interpersonal kind of social pressure to the collective kind we call group pressure, whereby breaking the silence actually violates not only some individuals’ personal sense of comfort, but a collectively sacred social taboo, thereby evoking a heightened sense of fear.

pp. 80-82

Inherently delusional, denial inevitably distorts one’s sense of reality, a problem further exacerbated when others collude in it through their silence. After all, it is hard to remain convinced that one is actually seeing and not just imagining the elephant in the room when no one else seems to acknowledge its presence. […] Lacking a firm basis for authenticating one’s perceptual experience, one may thus come to distrust one’s own senses and, as so chillingly portrayed in the film Gaslight, slowly lose one’s grip on reality.

The fact that no one else around us acknowledges the presence of “elephants” also tends to make them seem more frightening. Indeed, silence is not just a product, but also a major source, of fear (which also explains why it impedes the recovery of persons who have been traumatized). 7 To overcome fear we therefore often need to discuss the undiscussables that help produce it in the first place. 8

As so poignantly portrayed in “The Emperor’s New Clothes,” conspiracies of silence always involve some dissonance between what one inwardly experiences and what one outwardly expresses: “‘What!’ thought the emperor. ‘I can’t see a thing!’ [But] aloud he said, ‘It is very lovely’ … All the councilors, ministers, and men of great importance … saw no more than the emperor had seen [but] they said the same thing that he had said … ‘It is magnificent! Beautiful! Excellent!’ All of their mouths agreed, though none of their eyes had seen anything.” 9 As one can tell from these bitingly satirical descriptions, such dissonance involves the kind of duplicity associated by Orwell in Nineteen Eighty-Four with “doublethink”: “His mind slid away into the labyrinthine world of doublethink. To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions … knowing them to be contradictory.” 10 Such duplicity presupposes a certain amount of cynicism. As a former Nazi doctor explains the inherently perverse logic of doublethink, “I couldn’t ask [Dr.] Klein ‘Don’t send this man to the gas chamber,’ because I didn’t know that he went to the gas chamber. You see, that was a secret. Everybody [knew] the secret, but it was a secret.” It also requires, however, a certain denial of one’s feelings. Although those Nazi doctors certainly knew that Jews “were not being resettled but killed, and that the ‘Final Solution’ meant killing all of them,” the fact that they could use such inherently anesthetic euphemistic expressions nevertheless meant that “killing … need[ed] not be experienced … as killing,” and the more they used such language, the deeper they entered the “realm [of] nonfeeling,” increasingly becoming emotionally numb. 11

Needless to say, such denial of one’s feelings is psychologically exhausting. “Don’t think about it,” Harrison tells herself as she tries to ignore her feelings about her incestuous relationship with her father; yet denying those feelings, she slowly comes to realize, “seems to require an enormous effort.” 12

Conspiracies of silence may also trigger feelings of loneliness. The discrepancy between what one actually notices and what others around one acknowledge noticing undermines the quest for intersubjectivity, the very essence of sociality, 13 and often generates a deep sense of isolation. Whereas open communication brings us closer, silence makes us feel more distant from one another. “The word, even the most contradictious word,” notes Thomas Mann, “preserves contact —it is silence which isolates.”