Technological Fears and Media Panics

“One of the first characteristics of the first era of any new form of communication is that those who live through it usually have no idea what they’re in.”
~Mitchell Stephens

“Almost every new medium of communication or expression that has appeared since the dawn of history has been accompanied by doomsayers and critics who have confidently predicted that it would bring about The End of the World as We Know It by weakening the brain or polluting our precious bodily fluids.”
~New Media Are Evil, from TV Tropes

“The internet may appear new and fun…but it’s really a porn highway to hell. If your children want to get on the internet, don’t let them. It’s only a matter of time before they get sucked into a vortex of shame, drugs, and pornography from which they’ll never recover. The internet…it’s just not worth it.”
~Grand Theft Auto: Liberty City Stories

“It’s the same old devil with a new face.”
~Rev. George Bender, Harry Potter book burner

Media technology is hard to ignore. This goes beyond it being pervasive. Our complaints and fears, our fascination and optimism are mired in far greater things. It is always about something else. Media technology is not only the face of some vague cultural change but the embodiment of new forms of power that seem uncontrollable. Our lives are no longer fully our own, a constant worry in an individualistic society. With globalization, it’s as if the entire planet has become a giant company town.

I’m not one for giving into doom and gloom about technology. That response is as old as civilization and doesn’t offer anything useful. But I’m one of the first to admit to the dire situation we are facing. It’s just that in some sense the situation has always been dire, the world has always been ending. We never know if this finally will be the apocalypse that has been predicted for millennia, an ending to end it all with no new beginning. One way or another, the world as we know it is ending. There probably isn’t much reason to worry about it. Whatever the future holds, it is beyond our imagining as our present world was beyond the imagining of past generations.

One thing is clear. There is no point in getting into a moral panic over it. The young who embrace what is new always get blamed for it, even though they are simply inheriting what others have created. The youth today aren’t any worse off than any prior generation at the same age. Still, it’s possible that these younger generations might take us into a future that we old fogies won’t be able to understand. History shows how shocking innovations can be. Speaking of panics, think about Orson Welles’s radio show, War of the Worlds. The voice of radio back then had a power that we can no longer appreciate. Yet here we are, with radio reduced to so much background noise added to the rest.

Part of what got me thinking about this were two posts by Matt Cardin, at The Teeming Brain blog. In one post, he shares some of Nathaniel Rich’s review, Roth Agonistes, of Philip Roth’s Why Write?: Collected Nonfiction 1960–2013. There is a quote from Roth in 1960:

“The American writer in the middle of the twentieth century has his hands full in trying to understand, describe, and then make credible much of American reality. It stupefies, it sickens, it infuriates, and finally it is even a kind of embarrassment to one’s own meager imagination. The actuality is continually outdoing our talents, and the culture tosses up figures almost daily that are the envy of any novelist.”

Rich comments that, “Roth, despite writing before the tumult of the Sixties, went farther, suggesting that a radically destabilized society had made it difficult to discriminate between reality and fiction. What was the point of writing or reading novels when reality was as fantastic as any fiction? Such apprehensions may seem quaint when viewed from the comic-book hellscape of 2018, though it is perversely reassuring that life in 1960 felt as berserk as it does now.”

We are no more post-truth now than back then. It’s always been this way. But it is easy to lose context. Rich notes that, “Toward the end of his career, in his novels and public statements, Roth began to prophesy the extinction of a literary culture — an age-old pastime for aging writers.” There is the ever-present fear that the strangeness and stresses of the unknown will replace the comfort of the familiar. We all grow attached to the world we experienced in childhood, as it forms the foundation of our identity. But every now and then something comes along to threaten it all. And the post-World War era was definitely a time of dramatic and, for some, traumatic change — despite all of the nostalgia that has accrued to its memories like flowers on a gravestone.

The technological world we presently live in took its first form during that earlier era. Since then, the book as an art form has come nowhere near extinction. More books have been printed in recent decades than ever before in history. New technology has oddly led us to read even more books, in both their old and new technological forms. My young niece, of the so-called Internet Generation, prefers physical books… not that she is likely to read Philip Roth. Literacy, along with education and IQ, is on the rise. There is more of everything right now, which is what makes it overwhelming. Technologies of the past for the most part aren’t being replaced but incorporated into a different world. This Borg-like process of assimilation might be more disturbing to the older generations than simply becoming obsolete.

The other post by Matt Cardin shares an excerpt from an NPR piece by Laura Sydell, The Father Of The Internet Sees His Invention Reflected Back Through A ‘Black Mirror’. It is about the optimism of inventors and the consequences of inventions, unforeseen except by a few. One of those who did see the long-term implications was William Gibson: “The first people to embrace a technology are the first to lose the ability to see it objectively.” Maybe so, but that is true for just about everyone, including most of those who don’t embrace it or go so far as to fear it. It’s not in human nature to see much of anything objectively.

Gibson did see the immediate realities of ‘cyberspace’, the term he coined. We do seem to be moving in that general direction of cyberpunk dystopia, at least here in this country. I’m less certain about the even longer-term developments, as Gibson’s larger vision is as fantastical as many others. But it is the immediate realities that always concern people because they can be seen and felt, if not always acknowledged for what they are, often not even by the fear-mongers.

I share his being more “interested in how people behave around new technologies.” In reference to “how TV changed New York City neighborhoods in the 1940s,” Gibson states that, “Fewer people sat out on the stoops at night and talked to their neighbors, and it was because everyone was inside watching television. No one really noticed it at the time as a kind of epochal event, which I think it was.”

I would make two points about this.

First, there is what I already said. It is always an epochal event when a major technology is invented, as it was for the many inventions that came before: media technologies (radio, film, the telegraph, the printing press, the bound book, etc.) but also other technologies (the assembly line, the cotton gin, the compass, etc.). Did the Chinese individual who assembled the first firework imagine the carnage of bombs that made castles easy targets and led to two world wars that transformed all of human existence? Of course not. Even the simplest of technologies can turn civilization on its head, which has happened multiple times over the past few millennia and often with destructive results.

The second point is to look at something specific like television. It arrived along with the building of the interstate highway system, the rise of car culture, and the spread of suburbia. Television became a portal for the outside world to invade the fantasyland of home life that took hold after the war. Similar fears about radio and the telephone were transferred to the television set, and those fears were directed at the young. The first half of the 20th century was a time of constant technological wonder and uncertainty. The social order was thrown askew.

We like to imagine the 1940s and 1950s as a happy time of social conformity and social order, a time of prosperity and a well-behaved population, but that fantasy didn’t match the reality. It was an era of growing concerns about adolescent delinquency, violent crime, youth gangs, sexual deviancy, teen pregnancy, loose morals, and rock ‘n roll. The data bears out that a large number in that generation were caught up in the criminal system, whether because they were genuinely a bad generation or because the criminal system had become more punitive, although others have argued that it was merely a side effect of the baby boom, with youth making up a greater proportion of society. Whatever was involved, the sense of social angst got mixed up with lingering wartime trauma and emerging Cold War paranoia. The policing, arrests, and detention of wayward youth became a priority to the point of oppressive obsession. Besides youth problems, veterans from World War II did not come home content and happy (listen to Audible’s “The Home Front”). It was a tumultuous time, quite the opposite of the perfect world portrayed in the family sitcoms of the 1940s and 1950s.

The youth during that era had a lot in common with their grandparents, the wild and unruly Lost Generation, corrupted by the family and community breakdown that came with early mass immigration, urbanization, industrialization, consumerism, etc. Starting in the late 1800s, youth gangs and hooliganism became rampant, and moral panic became widespread. As romance novels had been blamed earlier and comic books would be blamed later, the popular media most feared around the turn of the century were the violent penny dreadfuls and dime novels that targeted tender young minds with portrayals of lawlessness and debauchery, or so it seemed to the moral reformers and authority figures.

It was the same old fear rearing its ugly head. This pattern has repeated on a regular basis. What new technology does is give an extra push to the swings of generational cycles. So, as change occurs, much remains the same. For all that William Gibson and his fellow cyberpunks got right, no one can argue that the world has been balkanized into anarcho-corporatist city-states (as in Neal Stephenson’s Snow Crash), although it sure is a plausible near future. The general point is true, though. We are a changed society. Yet the same old patterns of fear-mongering and moral panic continue. What is cyclical and what is trend is hard to differentiate as it happens; it is easier to see clearly in hindsight.

I might add that vast technological and social transformations have occurred every century for the past half millennium. The ending of feudalism was far more devastating. Much earlier, the technological advancement of written text and the end of oral culture had greater consequences than even Socrates could have predicted. And it can’t be forgotten that the movable type printing press ushered in centuries of mass civil unrest, populist movements, religious wars, and revolution across numerous countries.

Our own time so far doesn’t compare, one could argue. The present relative peace and stability will continue until, perhaps, World War III or climate change catastrophe forces a technological realignment and restructuring of civilization. Anyway, the internet corrupting the youth and smartphones rotting away people’s brains should be the least of our worries.

Even the social media meddling that Russia is accused of in manipulating the American population is simply a continuation of techniques that go back to before the internet existed. The game has changed a bit, but nations and corporations are pretty much acting in the devious ways they always have, except they are collecting a lot more info. Admittedly, technology does increase the effectiveness of their deviousness. But it also increases the potential methods for resisting and revolting against oppression.

I do see major changes coming. My doubts are more about how that change will happen. Modern civilization is massively dysfunctional. That we use new technologies less than optimally might have more to do with pre-existing conditions of general crappiness. For example, television along with air conditioning likely did contribute to people not sitting outside and talking to their neighbors, but an equal or greater contribution probably came from diverse social and economic forces driving shifts in urbanization and suburbanization, with the dying of small towns and the exodus from ethnic enclaves. Though technology was mixed into these changes, we maybe give technology too much credit and blame for changes that were already in motion.

It is similar to the shift away from a biological explanation of addiction. It’s less that certain substances create uncontrollable cravings. Such destructive behavior is only possible and probable when particular conditions are set in place. There already has to be a breakdown of relationships of trust and support. But rebuild those relationships and the addictive tendencies will lessen.

Similarly, there is nothing inevitable about William Gibson’s vision of the future; his predictions might be based more on patterns in our society than on anything inherent to the technology itself. We retain the choice and responsibility to create the world we want or, failing that, to fall into self-fulfilling prophecies.

The question is what is the likelihood of our acting with conscious intention and wise forethought. All in all, self-fulfilling prophecy appears to be the most probable outcome. It is easy to be cynical, considering the track record of the present superpower that dominates the world and the present big biz corporatism that dominates the economy. Still, I hold out for the chance that conditions could shift for various reasons, altering what otherwise could be taken as near inevitable.

* * *

6/13/21 – Here is an additional thought that could be made into a separate post, but for now we’ll leave it here as a note. There is evidence that new media technology does have an effect on thought, perception, and behavior. This is measurable in brain scans. But other research shows it even alters personality, or rather suppresses its expression. In a scientific article about testing for the Big Five personality traits, Tim Blumer and Nicola Döring offer an intriguing conclusion:

“To sum up, we conclude that for four of the five factors the data indicates a decrease of personality expression online, which is most probably due to the specification of the situational context. With regard to the trait of neuroticism, however, an additional effect occurs: The emotional stability increases on the computer and the Internet. This trend is likely, as has been described in previous studies, due to the typical features of computer-mediated communication (see Rice & Markey, 2009)” (Are we the same online? The expression of the five factor personality traits on the computer and the Internet).

This makes one wonder what it actually means. These personality tests are self-reports and so carry that bias. Still, that is useful information about what people are experiencing and perceiving about themselves. It also gives some evidence of what people are expressing, even when they aren’t conscious of it, as that is how these tests are designed, with carefully phrased questions and decoy questions included.

It is quite likely that the personality is genuinely being suppressed when people engage with the internet. Online experience eliminates so many normal behavioral and biological cues (tone of voice, facial expressions, eye gaze, hand gestures, bodily posture, rate of breathing, pheromones, etc). It would be unsurprising if this induces at least mild psychosis in many people, in that individuals literally become disconnected from most aspects of normal reality and human relating. If nothing else, it would surely increase anxiety and agitation.

When online, we really aren’t ourselves. That is because we are cut off from the social mirroring that allows self-awareness. There is a theory that theory of mind and hence cognitive empathy is developed in childhood first through observing others. It’s in becoming aware that others have minds behind their behavior that we then develop a sense of our own mind as separate from others and the world.

The new media technologies remove the ability to easily sense others as actual people. This creates a strange familiarity and intimacy as, in a way, all of the internet is inside of you. What is induced can at times be akin to psychological solipsism. We’ve noticed how often people don’t seem to recognize the humanity of others online in the way they would if a living-and-breathing person were right in front of them. Most of the internet is simply words. And even pictures and videos are isolated from all real-world context and physical immediacy.

Yet it’s not clear we can make a blanket accusation about the risks of new media. Not all aspects are the same. In fact, one study indicated “a positive association between general Internet use, general use of social platforms and Facebook use, on the one hand, and self-esteem, extraversion, narcissism, life satisfaction, social support and resilience, on the other hand.” It’s not merely being on the internet that is the issue but specific platforms of interaction and how they shape human experience and behavior.

The effects varied greatly, as the researchers found: “Use of computer games was found to be negatively related to these personality and mental health variables. The use of platforms that focus more on written interaction (Twitter, Tumblr) was assumed to be negatively associated with positive mental health variables and significantly positively with depression, anxiety, and stress symptoms. In contrast, Instagram use, which focuses more on photo-sharing, correlated positively with positive mental health variables” (Julia Brailovskaia et al, What does media use reveal about personality and mental health? An exploratory investigation among German students).

That is good evidence. The video game result is perplexing, though. Video games are image-based, as is Instagram. Why does the former lead to less optimal outcomes and the latter not? It might have to do with Instagram being more socially-oriented, whereas video games can be played in isolation. Would that still be true of video games that are played with friends and/or in multi-user online worlds? Anyway, it is unsurprising that text-based social media is clearly a net loss for mental health. That would definitely fit with the theory that it’s particularly something about the disconnecting effect of words alone on a screen.

This fits our own experience even when interacting with people we’ve personally known for most or all of our lives. It’s common to write something to a family member or an old friend by email, text message, FB messenger, etc., and, knowing they received it and read it, receive no response, not even a brief acknowledgement. No one in the “real world” would act that way to “real people”. No one who wanted to maintain a relationship would stand near you while you spoke to them to their face, not look at you or say anything, and then walk away like you weren’t there. But such is the power of textual derealization.

Part of this is the newness of new media. We simply have not yet fully adapted to it. People have freaked out about every media innovation that came along. And no doubt the fears and anxiety were often based on genuine concerns and direct observations. When media changes, it does have a profound effect on people and society. People probably do act deranged for a period of time. It might take generations or centuries for society to settle down after each period of change. That is the problem with the modern world, where we’re hit by such a vast number of innovations in such quick succession. The moment we regain our senses enough to try to stand back up again, we are hit by the next onslaught.

We are in the middle of what one could call New Media Derangement Syndrome (NMDS). It’s not merely one thing but a thousand things, combined with a total technological overhaul of society. This past century has turned all of civilization on its head. Over a few generations, most of it occurring within a single lifespan, humanity went from mostly rural communities, a farm-based economy, horse-and-buggies, books, and newspapers to mass urbanization, skyscrapers, factories, trains, cars, trucks, ocean liners, airplanes and jets, rocket ships, electricity, light bulbs, telegraphs, movies, radio, television, telephones, smartphones, the internet, radar, x-rays, air conditioning, etc.

The newest of new media is bringing in a whole other aspect. We are now living in not just a banana republic but inverted totalitarianism. Unlike in the past, the everyday experience of our lives is defined more by corporations than by churches, communities, or governments. Think of how most people spend most of their waking hours regularly checking into various corporate media technologies, platforms, and networks, including while at work, with a smartphone next to the bed delivering instant notifications.

Think about how almost all media that Americans now consume is owned and controlled by a handful of transnational corporations. Yet not that long ago, most media was owned and operated locally by small companies, non-profit organizations, churches, etc. Most towns had multiple independently-run newspapers. Likewise, most radio and TV shows were locally or regionally produced in the mid-20th century. The movable type printing press made possible the first era of mass media, but that was small-time change compared to the nationalization and globalization of mass media over the past century.

Part of NMDS is that we consumer-citizens have become commodified products. Social media and the like are not products that we are buying. No, we and our data are the product being sold. The crazification factor is how everything has become manipulated by data-gathering and algorithms. Corporations now have larger files on American citizens than the FBI did during the height of the Cold War. New media technology is one front of the corporate war on democracy and the public good. Economics is now the dominant paradigm of everything. In her article Social Media Is a Borderline Personality Disorder, Kasia Chojecka wrote:

“Social media, as Tristan Harris said, undermined human weaknesses and contributed to what can be called a collective depression. I would say it’s more than that — a borderline personality disorder with an emotional rollercoaster, lack of trust, and instability. We can’t stay sane in this world right now, Harris said. Today the world is dominated not only by surveillance capitalism based on commodification and commercialization of personal data (Sh. Zuboff), but also by a pandemic, which caused us to shut ourselves at home and enter a full lockdown mode. We were forced to move to social media and are now doomed to instant messaging, notifications, the urge to participate. The scale of exposure to social media has grown incomparably (the percentages of growth vary — some estimate it would be circa ten percent or even several dozen percent in comparison to 2019).”

The result of this media-induced mass insanity can be seen in conspiracy theories (QAnon, Pizzagate, etc.) and related mass psychosis and mass hallucinations: Jewish lasers in space starting wildfires, global child prostitution rings that operate on moon bases, vast secret tunnel systems that connect empty Walmarts, and on and on. That paranoia, emerging from the dark corners of the web, helped launch an insurrection against the government and led to attempted kidnappings and assassinations of politicians. Plus, it got a bizarre media personality cult elected as president. If anyone doubted the existence of NMDS in the past, it has since become the undeniable reality we all now live in.

* * *

Fear of the new - a techno panic timeline

11 Examples of Fear and Suspicion of New Technology
by Len Wilson

New communications technologies don’t come with user’s manuals. They are primitive, while old tech is refined. So critics attack. The critic’s job is easier than the practitioner’s: they score with the fearful by comparing the infancy of the new medium with the perfected medium it threatens. But of course, the practitioner wins. In the end, we always assimilate to the new technology.

“Writing is a step backward for truth.”
~Plato, c. 370 BC

“Printed book will never be the equivalent of handwritten codices.”
~Trithemius of Sponheim, 1492

“The horrible mass of books that keeps growing might lead to a fall back into barbarism.”
~Gottfried Wilhelm Leibniz, 1680

“Few students will study Homer or Virgil when they can read Tom Jones or a thousand inferior or more dangerous novels.”
~Rev. Vicesimus Knox, 1778

“The most powerful of ignorance’s weapons is the dissemination of printed matter.”
~Count Leo Tolstoy, 1869

“We will soon be nothing but transparent heaps of jelly to each other.”
~New York Times 1877 Editorial, on the advent of the telephone

“[The telegraph is] a constant diffusion of statements in snippets.”
~Spectator Magazine, 1889

“Have I done the world good, or have I added a menace?”
~Guglielmo Marconi, inventor of radio, 1920

“The cinema is little more than a fad. It’s canned drama. What audiences really want to see is flesh and blood on the stage.”
~Charlie Chaplin, 1916

“There is a world market for about five computers.”
~Thomas J. Watson, IBM Chairman and CEO, 1943

“Television won’t be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night.”
~Darryl Zanuck, 20th Century Fox CEO, 1946

Media Hysteria: An Epidemic of Panic
by Jon Katz

MEDIA HYSTERIA OCCURS when tectonic plates shift and the culture changes – whether from social changes or new technology.

It manifests itself when seemingly new fears, illnesses, or anxieties – recovered memory, chronic fatigue syndrome, alien abduction, seduction by Internet molesters, electronic theft – are described as epidemic disorders in need of urgent recognition, redress, and attention.

For those of us who live, work, message, or play in new media, this is not an abstract offshoot of the information revolution, but a topic of some urgency: We are the carriers of these contagious ideas. We bear some of the responsibility and suffer many of the consequences.

Media hysteria is part of what causes the growing unease many of us feel about the toxic interaction between technology and information.

Moral Panics Over Youth Culture and Video Games
by Kenneth A. Gagne

Several decades of the past century have been marked by forms of entertainment that were not available to the previous generation. The comic books of the Forties and Fifties, rock ‘n roll music of the Fifties, Dungeons & Dragons in the Seventies and Eighties, and video games of the Eighties and Nineties were each part of the popular culture of that era’s young people. Each of these entertainment forms, which is each a medium unto itself, have also fallen under public scrutiny, as witnessed in journalistic media such as newspapers and journals – thus creating a “moral panic.”

The Smartphone’s Impact is Nothing New
by Rabbi Jack Abramowitz

Any invention that we see as a benefit to society was once an upstart disruption to the status quo. Television was terrible because when we listened to the radio, we used our imaginations instead of being spoon-fed. Radio was terrible because families used to sit around telling stories. Moveable type was terrible because if books become available to the masses, the lower classes will become educated beyond their level. Here’s a newsflash: Socrates objected to writing! In The Phaedrus (by his disciple Plato), Socrates argues that “this discovery…will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. … (Y)ou give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”

When the Internet and the smartphone evolved, society did what we always do: we adapted. Every new technology has this effect. Do you know why songs on the radio are about 3½ minutes long? Because that’s what a 45-rpm record would hold. Despite the threat some perceived in this radical format, we adapted. (As it turns out, 45s are now a thing of the past but the pop song endures. Turns out we like 3½-minute songs!)

Is the Internet Making Us Crazy? What the New Research Says
by Tony Dokoupil

The first good, peer-reviewed research is emerging, and the picture is much gloomier than the trumpet blasts of Web utopians have allowed. The current incarnation of the Internet—portable, social, accelerated, and all-pervasive—may be making us not just dumber or lonelier but more depressed and anxious, prone to obsessive-compulsive and attention-deficit disorders, even outright psychotic. Our digitized minds can scan like those of drug addicts, and normal people are breaking down in sad and seemingly new ways. […]

And don’t kid yourself: the gap between an “Internet addict” and John Q. Public is thin to nonexistent. One of the early flags for addiction was spending more than 38 hours a week online. By that definition, we are all addicts now, many of us by Wednesday afternoon, Tuesday if it’s a busy week. Current tests for Internet addiction are qualitative, casting an uncomfortably wide net, including people who admit that yes, they are restless, secretive, or preoccupied with the Web and that they have repeatedly made unsuccessful efforts to cut back. But if this is unhealthy, it’s clear many Americans don’t want to be well. […]

The Gold brothers—Joel, a psychiatrist at New York University, and Ian, a philosopher and psychiatrist at McGill University—are investigating technology’s potential to sever people’s ties with reality, fueling hallucinations, delusions, and genuine psychosis, much as it seemed to do in the case of Jason Russell, the filmmaker behind “Kony 2012.” The idea is that online life is akin to life in the biggest city, stitched and sutured together by cables and modems, but no less mentally real—and taxing—than New York or Hong Kong. “The data clearly support the view that someone who lives in a big city is at higher risk of psychosis than someone in a small town,” Ian Gold writes via email. “If the Internet is a kind of imaginary city,” he continues, “it might have some of the same psychological impact.”

What parallels do you see between the invention of the internet – the ‘semantic web’ and the invention of the printing press?
answer by Howard Doughty

Technology, and especially the technology of communication, has tremendous consequences for human relations – social, economic and political.

Socrates raged against the written word, insisting that it was the end of philosophy which, in his view, required two or more people in direct conversation. Anything else, such as a text, was at least one step removed from the real thing and, like music and poetry which he also despised, represented a pale imitation (or bastardization) of authentic life. (Thank goodness Plato wrote it all down.)

From an oral to a written society was one thing, but as Marshall McLuhan so eruditely explained in his book, The Gutenberg Galaxy, the printing press altered fundamental cultural patterns again – making reading matter more easily available and, in the process, enabling the Protestant Reformation and its emphasis on isolated individual interpretations of whatever people imagined their god to be.

In time, the telegraph and the telephone began the destruction of space, time and letter writing, making it possible to have disembodied conversations over thousands of miles.

Don’t Touch That Dial!
by Vaughan Bell

A respected Swiss scientist, Conrad Gessner, might have been the first to raise the alarm about the effects of information overload. In a landmark book, he described how the modern world overwhelmed people with data and that this overabundance was both “confusing and harmful” to the mind. The media now echo his concerns with reports on the unprecedented risks of living in an “always on” digital environment. It’s worth noting that Gessner, for his part, never once used e-mail and was completely ignorant about computers. That’s not because he was a technophobe but because he died in 1565. His warnings referred to the seemingly unmanageable flood of information unleashed by the printing press.

Worries about information overload are as old as information itself, with each generation reimagining the dangerous impacts of technology on mind and brain. From a historical perspective, what strikes home is not the evolution of these social concerns, but their similarity from one century to the next, to the point where they arrive anew with little having changed except the label.

 These concerns stretch back to the birth of literacy itself. In parallel with modern concerns about children’s overuse of technology, Socrates famously warned against writing because it would “create forgetfulness in the learners’ souls, because they will not use their memories.” He also advised that children can’t distinguish fantasy from reality, so parents should only allow them to hear wholesome allegories and not “improper” tales, lest their development go astray. The Socratic warning has been repeated many times since: The older generation warns against a new technology and bemoans that society is abandoning the “wholesome” media it grew up with, seemingly unaware that this same technology was considered to be harmful when first introduced.

Gessner’s anxieties over psychological strain arose when he set about the task of compiling an index of every available book in the 16th century, eventually published as the Bibliotheca universalis. Similar concerns arose in the 18th century, when newspapers became more common. The French statesman Malesherbes railed against the fashion for getting news from the printed page, arguing that it socially isolated readers and detracted from the spiritually uplifting group practice of getting news from the pulpit. A hundred years later, as literacy became essential and schools were widely introduced, the curmudgeons turned against education for being unnatural and a risk to mental health. An 1883 article in the weekly medical journal the Sanitarian argued that schools “exhaust the children’s brains and nervous systems with complex and multiple studies, and ruin their bodies by protracted imprisonment.” Meanwhile, excessive study was considered a leading cause of madness by the medical community.

When radio arrived, we discovered yet another scourge of the young: The wireless was accused of distracting children from reading and diminishing performance in school, both of which were now considered to be appropriate and wholesome. In 1936, the music magazine the Gramophone reported that children had “developed the habit of dividing attention between the humdrum preparation of their school assignments and the compelling excitement of the loudspeaker” and described how the radio programs were disturbing the balance of their excitable minds. The television caused widespread concern as well: Media historian Ellen Wartella has noted how “opponents voiced concerns about how television might hurt radio, conversation, reading, and the patterns of family living and result in the further vulgarization of American culture.”

Demonized Smartphones Are Just Our Latest Technological Scapegoat
by Zachary Karabell

AS IF THERE wasn’t enough angst in the world, what with the Washington soap opera, #MeToo, false nuclear alerts, and a general sense of apprehension, now we also have a growing sense of alarm about how smartphones and their applications are impacting children.

In the past days alone, The Wall Street Journal ran a long story about the “parents’ dilemma” of when to give kids a smartphone, citing tales of addiction, attention deficit disorder, social isolation, and general malaise. Said one parent, “It feels a little like trying to teach your kid how to use cocaine, but in a balanced way.” The New York Times ran a lead article in its business section titled “It’s Time for Apple to Build a Less Addictive iPhone,” echoing a rising chorus in Silicon Valley about designing products and programs that are purposely less addictive.

All of which begs the question: Are these new technologies, which are still in their infancy, harming a rising generation and eroding some basic human fabric? Is today’s concern about smartphones any different than other generations’ anxieties about new technology? Do we know enough to make any conclusions?

Alarm at the corrosive effects of new technologies is not new. Rather, it is deeply rooted in our history. In ancient Greece, Socrates cautioned that writing would undermine the ability of children and then adults to commit things to memory. The advent of the printing press in the 15th century led Church authorities to caution that the written word might undermine the Church’s ability to lead (which it did) and that rigor and knowledge would vanish once manuscripts no longer needed to be copied manually.

Now, consider this question: “Does the telephone make men more active or more lazy? Does [it] break up home life and the old practice of visiting friends?” Topical, right? In fact, it’s from a 1926 survey by the Knights of Columbus about old-fashioned landlines.

 The pattern of technophobia recurred with the gramophone, the telegraph, the radio, and television. The trope that the printing press would lead to loss of memory is very much the same as the belief that the internet is destroying our ability to remember. The 1950s saw reports about children glued to screens, becoming more “aggressive and irritable as a result of over-stimulating experiences, which leads to sleepless nights and tired days.” Those screens, of course, were televisions.

Then came fears that rock-n-roll in the 1950s and 1960s would fray the bonds of family and undermine the ability of young boys and girls to become productive members of society. And warnings in the 2000s that videogames such as Grand Theft Auto would, in the words of then-Senator Hillary Rodham Clinton, “steal the innocence of our children, … making the difficult job of being a parent even harder.”

Just because these themes have played out benignly time and again does not, of course, mean that all will turn out fine this time. Information technologies from the printed book onward have transformed societies and upended pre-existing mores and social order.

Protruding Breasts! Acidic Pulp! #*@&!$% Senators! McCarthyism! Commies! Crime! And Punishment!
by R.C. Baker

In his medical practice, Wertham saw some hard cases—juvenile muggers, murderers, rapists. In Seduction, he begins with a gardening metaphor for the relationship between children and society: “If a plant fails to grow properly because attacked by a pest, only a poor gardener would look for the cause in that plant alone.” He then observes, “To send a child to a reformatory is a serious step. But many children’s-court judges do it with a light heart and a heavy calendar.” Wertham advocated a holistic approach to juvenile delinquency, but then attacked comic books as its major cause. “All comics with their words and expletives in balloons are bad for reading.” “What is the social meaning of these supermen, super women … super-ducks, super-mice, super-magicians, super-safecrackers? How did Nietzsche get into the nursery?” And although the superhero, Western, and romance comics were easily distinguishable from the crime and horror genres that emerged in the late 1940s, Wertham viewed all comics as police blotters. “[Children] know a crime comic when they see one, whatever the disguise”; Wonder Woman is a “crime comic which we have found to be one of the most harmful”; “Western comics are mostly just crime comic books in a Western setting”; “children have received a false concept of ‘love’ … they lump together ‘love, murder, and robbery.’” Some crimes are said to directly imitate scenes from comics. Many are guilty by association—millions of children read comics, ergo, criminal children are likely to have read comics. When listing brutalities, Wertham throws in such asides as, “Incidentally, I have seen children vomit over comic books.” Such anecdotes illuminate a pattern of observation without sourcing that becomes increasingly irritating. “There are quite a number of obscure stores where children congregate, often in back rooms, to read and buy secondhand comic books … in some parts of cities, men hang around these stores which sometimes are foci of childhood prostitution. Evidently comic books prepare the little girls well.” Are these stores located in New York? Chicago? Sheboygan? Wertham leaves us in the dark. He also claimed that powerful forces were arrayed against him because the sheer number of comic books was essential to the health of the pulp-paper manufacturers, forcing him on a “Don Quixotic enterprise … fighting not windmills, but paper mills.”

When Pac-Man Started a National “Media Panic”
by Michael Z. Newman

This moment in the history of pop culture and technology might have seemed unprecedented, as computerized gadgets were just becoming part of the fabric of everyday life in the early ‘80s. But we can recognize it as one in a predictable series of overheated reactions to new media that go back all the way to the invention of writing (which ancients thought would spell the end of memory). There is a particularly American tradition of becoming enthralled with new technologies of communication, identifying their promise of future prosperity and renewed community. It is matched by a related American tradition of freaking out about the same objects, which are also figured as threats to life as we know it.

The emergence of the railroad and the telegraph in the 19th century and of novel 20th century technologies like the telephone, radio, cinema, television, and the internet were all similarly greeted by a familiar mix of high hopes and dark fears. In Walden, published in 1854, Henry David Thoreau warned that, “we do not ride on the railroad; it rides upon us.” Technologies of both centuries were imagined to unite a vast and dispersed nation and edify citizens, but they also were suspected of trivializing daily affairs, weakening local bonds, and worse yet, exposing vulnerable children to threats and hindering their development into responsible adults.

These expressions are often a species of moral outrage known as media panic, a reaction of adults to the perceived dangers of an emerging culture popular with children, which the parental generation finds unfamiliar and threatening. Media panics recur in a dubious cycle of lathering outrage, with grownups seeming not to realize that the same excessive alarmism has arisen in every generation. Eighteenth and 19th century novels might have caused confusion to young women about the difference between fantasy and reality, and excited their passions too much. In the 1950s, rock and roll was “the devil’s music,” feared for inspiring lust and youthful rebellion, and encouraging racial mixing. Dime novels, comic books, and camera phones have all been objects of frenzied worry about “the kids these days.”

The popularity of video games in the ‘80s prompted educators, psychotherapists, local government officeholders, and media commentators to warn that young players were likely to suffer serious negative effects. The games would influence their aficionados in all the wrong ways. They would harm children’s eyes and might cause “Space Invaders Wrist” and other physical ailments. Like television, they would be addictive, like a drug. Games would inculcate violence and aggression in impressionable youngsters. Their players would do badly in school and become isolated and desensitized. A reader wrote to The New York Times to complain that video games were “cultivating a generation of mindless, ill-tempered adolescents.”

The arcades where many teenagers preferred to play video games were imagined as dens of vice, of illicit trade in drugs and sex. Kids who went to play Tempest or Donkey Kong might end up seduced by the lowlifes assumed to hang out in arcades, spiraling into lives of substance abuse, sexual depravity, and crime. Children hooked on video games might steal to feed their habit. Reports at the time claimed that video kids had vandalized cigarette machines, pocketing the quarters and leaving behind the nickels and dimes. […]

Somehow, a generation of teenagers from the 1980s managed to grow up despite the dangers, real or imagined, from video games. The new technology could not have been as powerful as its detractors or its champions imagined. It’s easy to be captivated by novelty, but it can force us to miss the cyclical nature of youth media obsessions. Every generation fastens onto something that its parents find strange, whether Elvis or Atari. In every moment in media history, intergenerational tension accompanies the emergence of new forms of culture and communication. Now we have sexting, cyberbullying, and smartphone addiction to panic about.

But while the gadgets keep changing, our ideas about youth and technology, and our concerns about young people’s development in an uncertain and ever-changing modern world, endure.

Why calling screen time ‘digital heroin’ is digital garbage
by Rachel Becker

The supposed danger of digital media made headlines over the weekend when psychotherapist Nicholas Kardaras published a story in the New York Post called “It’s ‘digital heroin’: How screens turn kids into psychotic junkies.” In the op-ed, Kardaras claims that “iPads, smartphones and XBoxes are a form of digital drug.” He stokes fears about the potential for addiction and the ubiquity of technology by referencing “hundreds of clinical studies” that show “screens increase depression, anxiety and aggression.”

We’ve seen this form of scaremongering before. People are frequently uneasy with new technology, after all. The problem is, screens and computers aren’t actually all that new. There’s already a whole generation — millennials — who grew up with computers. They appear, mostly, to be fine, selfies aside. If computers were “digital drugs,” wouldn’t we have already seen warning signs?

No matter. Kardaras opens with a little boy who was so hooked on Minecraft that his mom found him in his room in the middle of the night, in a “catatonic stupor” — his iPad lying next to him. This is an astonishing use of “catatonic,” and is almost certainly not medically correct. It’s meant to scare parents.

by Alison Gopnik

My own childhood was dominated by a powerful device that used an optical interface to transport the user to an alternate reality. I spent most of my waking hours in its grip, oblivious of the world around me. The device was, of course, the book. Over time, reading hijacked my brain, as large areas once dedicated to processing the “real” world adapted to processing the printed word. As far as I can tell, this early immersion didn’t hamper my development, but it did leave me with some illusions—my idea of romantic love surely came from novels.

English children’s books, in particular, are full of tantalizing food descriptions. At some point in my childhood, I must have read about a honeycomb tea. Augie, enchanted, agreed to accompany me to the grocery store. We returned with a jar of honeycomb, only to find that it was an inedible, waxy mess.

Many parents worry that “screen time” will impair children’s development, but recent research suggests that most of the common fears about children and screens are unfounded. (There is one exception: looking at screens that emit blue light before bed really does disrupt sleep, in people of all ages.) The American Academy of Pediatrics used to recommend strict restrictions on screen exposure. Last year, the organization examined the relevant science more thoroughly, and, as a result, changed its recommendations. The new guidelines emphasize that what matters is content and context, what children watch and with whom. Each child, after all, will have some hundred thousand hours of conscious experience before turning sixteen. Those hours can be like the marvellous ones that Augie and I spent together bee-watching, or they can be violent or mindless—and that’s true whether those hours are occupied by apps or TV or books or just by talk.

New tools have always led to panicky speculation. Socrates thought that reading and writing would have disastrous effects on memory; the novel, the telegraph, the telephone, and the television were all declared to be the End of Civilization as We Know It, particularly in the hands of the young. Part of the reason may be that adult brains require a lot of focus and effort to learn something new, while children’s brains are designed to master new environments spontaneously. Innovative technologies always seem distracting and disturbing to the adults attempting to master them, and transparent and obvious—not really technology at all—to those, like Augie, who encounter them as children.

The misguided moral panic over Slender Man
by Adam Possamai

Sociologists argue that rather than simply being created stories, urban legends represent the fear and anxieties of current time, and in this instance, the internet culture is offering a global and a more participatory platform in the story creation process.

New technology is also allowing urban legends to be transmitted at a faster pace than before the invention of the printing press, and giving more people the opportunity to shape folk stories that blur the line between fiction and reality. Commonly, these stories take a life of their own and become completely independent from what the original creator wanted to achieve.

Yet if we were to listen to social commentary this change in the story creation process is opening the door to deviant acts.

Last century, people were already anxious about children accessing VHS and Betamax tapes and being exposed to violence and immorality. We are now likely to face a similar moral panic with regards to the internet.

Sleepwalking Through Our Dreams

In The Secret Life of Puppets, Victoria Nelson makes some useful observations of reading addiction, specifically in terms of formulaic genres. She discusses Sigmund Freud’s repetition compulsion and Lenore Terr’s post-traumatic games. She sees genre reading as a ritual-like enactment that can’t lead to resolution, and so the addictive behavior becomes entrenched. This would apply to many other forms of entertainment and consumption. And it fits into Derrick Jensen’s discussion of abuse, trauma, and the victimization cycle.

I would broaden her argument in another way. People have feared the written text ever since it was invented. In the 18th century, there took hold a moral panic about reading addiction in general and that was before any fiction genres had developed (Frank Furedi, The Media’s First Moral Panic; full text available at Wayback Machine). The written word is unchanging and so creates the conditions for repetition compulsion. Every time a text is read, it is the exact same text.

That is far different from oral societies. And it is quite telling that oral societies have a much more fluid sense of self. The Piraha, for example, don’t cling to their sense of self nor that of others. When a Piraha individual is possessed by a spirit or meets a spirit who gives them a new name, the self that was there is no longer there. When asked where is that person, the Piraha will say that he or she isn’t there, even if the same body of the individual is standing right there in front of them. They also don’t have a storytelling tradition or concern for the past.

Another thing that the Piraha apparently lack is mental illness, specifically depression along with suicidal tendencies. According to Barbara Ehrenreich from Dancing in the Streets, there wasn’t much written about depression even in the Western world until the suppression of religious and public festivities, such as Carnival. One of the most important aspects of Carnival and similar festivities was the masking, shifting, and reversal of social identities. Along with this, there was the losing of individuality within the group. And during the Middle Ages, an amazing number of days in the year were dedicated to communal celebrations. The ending of this era coincided with numerous societal changes, including the increase of literacy with the spread of the movable type printing press.

The Media’s First Moral Panic
by Frank Furedi

When cultural commentators lament the decline of the habit of reading books, it is difficult to imagine that back in the 18th century many prominent voices were concerned about the threat posed by people reading too much. A dangerous disease appeared to afflict the young, which some diagnosed as reading addiction and others as reading rage, reading fever, reading mania or reading lust. Throughout Europe reports circulated about the outbreak of what was described as an epidemic of reading. The behaviours associated with this supposedly insidious contagion were sensation-seeking and morally dissolute and promiscuous behaviour. Even acts of self-destruction were associated with this new craze for the reading of novels.

What some described as a craze was actually a rise in the 18th century of an ideal: the ‘love of reading’. The emergence of this new phenomenon was largely due to the growing popularity of a new literary genre: the novel. The emergence of commercial publishing in the 18th century and the growth of an ever-widening constituency of readers was not welcomed by everyone. Many cultural commentators were apprehensive about the impact of this new medium on individual behaviour and on society’s moral order.

With the growing popularity of novel reading, the age of the mass media had arrived. Novels such as Samuel Richardson’s Pamela, or Virtue Rewarded (1740) and Rousseau’s Julie, or the New Heloise (1761) became literary sensations that gripped the imagination of their European readers. What was described as ‘Pamela-fever’ indicated the powerful influence novels could exercise on the imagination of the reading public. Public deliberation on these ‘fevers’ focused on what was a potentially dangerous development, which was the forging of an intense and intimate interaction between the reader and literary characters. The consensus that emerged was that unrestrained exposure to fiction led readers to lose touch with reality and identify with the novel’s romantic characters to the point of adopting their behaviour. The passionate enthusiasm with which European youth responded to the publication of Johann Wolfgang von Goethe’s novel The Sorrows of Young Werther (1774) appeared to confirm this consensus. […]

What our exploration of the narrative of Werther fever suggests is that it acquired a life of its own to the point that it mutated into a taken-for-granted rhetorical idiom, which accounted for the moral problems facing society. Warnings about an epidemic of suicide said more about the anxieties of their authors than the behaviour of the readers of the novels. An inspection of the literature circulating these warnings indicates a striking absence of empirical evidence. The constant allusion to Miss. G., to nameless victims and to similarly framed death scenes suggests that these reports had little factual content to draw on. Stories about an epidemic of suicide were as fictional as the demise of Werther in Goethe’s novel.

It is, however, likely that readers of Werther were influenced by the controversy surrounding the novel. Goethe himself was affected by it and in his autobiography lamented that so many of his readers felt called upon to ‘re-enact the novel, and possibly shoot themselves’. Yet, despite the sanctimonious scaremongering, it continued to attract a large readership. While there is no evidence that Werther was responsible for the promotion of a wave of copycat suicides, it evidently succeeded in inspiring a generation of young readers. The emergence of what today would be described as a cult of fans with some of the trappings of a youth subculture is testimony to the novel’s powerful appeal.

The association of the novel with the disorganisation of the moral order represented an early example of a media panic. The formidable, sensational and often improbable effects attributed to the consequences of reading in the 18th century provided the cultural resources on which subsequent reactions to the cinema, television or the Internet would draw. In that sense Werther fever anticipated the media panics of the future.

Curiously, the passage of time has not entirely undermined the association of Werther fever with an epidemic of suicide. In 1974 the American sociologist David Phillips coined the term the ‘Werther Effect’ to describe media-stimulated imitation of suicidal behaviour. But the durability of the Werther myth notwithstanding, contemporary media panics are rarely focused on novels. In the 21st century the simplistic cause and effect model of the ‘Werther Effect’ is more likely to be expressed through moral anxieties about the danger of cybersuicide, copycat online suicide.

The Better Angels of Our Nature
by Steven Pinker
Kindle Locations 13125-13143
(see To Imagine and Understand)

It would be surprising if fictional experiences didn’t have similar effects to real ones, because people often blur the two in their memories. 65 And a few experiments do suggest that fiction can expand sympathy. One of Batson’s radio-show experiments included an interview with a heroin addict who the students had been told was either a real person or an actor. 66 The listeners who were asked to take his point of view became more sympathetic to heroin addicts in general, even when the speaker was fictitious (though the increase was greater when they thought he was real). And in the hands of a skilled narrator, a fictitious victim can elicit even more sympathy than a real one. In his book The Moral Laboratory, the literary scholar Jèmeljan Hakemulder reports experiments in which participants read similar facts about the plight of Algerian women through the eyes of the protagonist in Malika Mokeddem’s novel The Displaced or from Jan Goodwin’s nonfiction exposé Price of Honor. 67 The participants who read the novel became more sympathetic to Algerian women than those who read the true-life account; they were less likely, for example, to blow off the women’s predicament as a part of their cultural and religious heritage. These experiments give us some reason to believe that the chronology of the Humanitarian Revolution, in which popular novels preceded historical reform, may not have been entirely coincidental: exercises in perspective-taking do help to expand people’s circle of sympathy.

The science of empathy has shown that sympathy can promote genuine altruism, and that it can be extended to new classes of people when a beholder takes the perspective of a member of that class, even a fictitious one. The research gives teeth to the speculation that humanitarian reforms are driven in part by an enhanced sensitivity to the experiences of living things and a genuine desire to relieve their suffering. And as such, the cognitive process of perspective-taking and the emotion of sympathy must figure in the explanation for many historical reductions in violence. They include institutionalized violence such as cruel punishments, slavery, and frivolous executions; the everyday abuse of vulnerable populations such as women, children, homosexuals, racial minorities, and animals; and the waging of wars, conquests, and ethnic cleansings with a callousness to their human costs.

Innocent Weapons:
The Soviet and American Politics of Childhood in the Cold War

by Margaret E. Peacock
pp. 88-89

As a part of their concern over American materialism, politicians and members of the American public turned their attention to the rising influence of media and popular culture upon the next generation.69 Concerns over uncontrolled media were not new in the United States in the 1950s. They had a way of erupting whenever popular culture underwent changes that seemed to differentiate the generations. This was the case during the silent film craze of the 1920s and when the popularity of dime novels took off in the 1930s.70 Yet, for many in the postwar era, the press, the radio, and the television presented threats to children that the country had never seen before. As members of Congress from across the political spectrum would argue throughout the 1950s, the media had the potential to present a negative image of the United States abroad, and it ran the risk of corrupting the minds of the young at a time when shoring up national patriotism and maintaining domestic order were more important than ever. The impact of media on children was the subject of Fredric Wertham’s 1953 best-selling book Seduction of the Innocent, in which he chronicled his efforts over the course of three years to “trace some of the roots of the modern mass delinquency.”71 Wertham’s sensationalist book documented case after case of child delinquents who seemed to be mimicking actions that they had seen on the television or, in particular, in comic strips. Horror comics, which were popular from 1948 until 1954, showed images of children killing their parents and peers, sometimes in gruesome ways—framing them for murder—being cunning and devious, even cannibalistic. A commonly cited story was that of “Bloody Mary,” published by Farrell Comics, which told the story of a seven-year-old girl who strangles her mother, sends her father to the electric chair for the murder, and then kills a psychiatrist who has learned that the girl committed these murders and that she is actually a dwarf in disguise.72 Wertham’s crusade against horror comics was quickly joined by two Senate subcommittees in 1954, at the heads of which sat Estes Kefauver and Robert Hendrickson. They argued to their colleagues that the violence and destruction of the family in these comic books symbolized “a terrible twilight zone between sanity and madness.”73 They contended that children found in these comic books violent models of behavior and that they would otherwise be law abiding. J. Edgar Hoover chimed in to comment that “a comic which makes lawlessness attractive . . . may influence the susceptible boy or girl.”74

Such depictions carried two layers of threat. First, as Wertham, Hoover, and Kefauver argued, they reflected the seeming potential of modern media to transform “average” children into delinquents.75 Alex Drier, popular NBC newscaster, argued in May 1954 that “this continuous flow of filth [is] so corruptive in its effects that it has actually obliterated decent instincts in many of our children.”76 Yet perhaps more telling, the comics, as well as the heated response that they elicited, also reflected larger anxieties about what identities children should assume in contemporary America. As in the case of Bloody Mary, these comics presented an image of apparently sweet youths who were in fact driven by violent impulses and were not children at all. “How can we expose our children to this and then expect them to run the country when we are gone?” an agitated Hendrickson asked his colleagues in 1954.77 Bloody Mary, like the uneducated dolts of the Litchfield report and the spoiled boys of Wylie’s conjuring, presented an alternative identity for American youth that seemed to embody a new and dangerous future.

In the early months of 1954, Robert Hendrickson argued to his colleagues that “the strained international and domestic situation makes it impossible for young people of today to look forward with certainty to higher education, to entering a trade or business, to plans for marriage, a home, and family. . . . Neither the media, nor modern consumerism, nor the threat from outside our borders creates a problem child. But they do add to insecurity, to loneliness, to fear.”78 For Hendrickson these domestic trends, along with what he called “deficient adults,” seemed to have created a new population of troubled and victimized children who were “beyond the pale of our society.”79

The End of Victory Culture:
Cold War America and the Disillusioning of a Generation

by Tom Engelhardt
Kindle Locations 2872-2910

WORRY, BORDERING ON HYSTERIA, about the endangering behaviors of “youth” has had a long history in America, as has the desire of reformers and censors to save “innocent” children from the polluting effects of commercial culture. At the turn of the century, when middle-class white adolescents first began to take their place as leisure-time trendsetters, fears arose that the syncopated beat of popular “coon songs” and ragtime music would demonically possess young listeners, who might succumb to the “evils of the Negro soul.” Similarly, on-screen images of crime, sensuality, and violence in the earliest movies, showing in “nickel houses” run by a “horde of foreigners,” were decried by reformers. They were not just “unfit for children’s eyes,” but a “disease” especially virulent to young (and poor) Americans, who were assumed to lack all immunity to such spectacles. 1 […]

To many adults, a teen culture beyond parental oversight had a remarkably alien look to it. In venues ranging from the press to Senate committees, from the American Psychiatric Association to American Legion meetings, sensational and cartoonlike horror stories about the young or the cultural products they were absorbing were told. Tabloid newspaper headlines reflected this: “Two Teen Thrill Killings Climax City Park Orgies. Teen Age Killers Pose a Mystery— Why Did They Do It?… 22 Juveniles Held in Gang War. Teen Age Mob Rips up BMT Train. Congressmen Stoned, Cops Hunt Teen Gang.” After a visit to the movies in 1957 to watch two “teenpics,” Rock All Night and Dragstrip Girl, Ruth Thomas of Newport, Rhode Island’s Citizen’s Committee on Literature expressed her shock in words at least as lurid as those of any tabloid: “Isn’t it a form of brain-washing? Brain-washing the minds of the people and especially the youth of our nation in filth and sadistic violence. What enemy technique could better lower patriotism and national morale than the constant presentation of crime and horror both as news and recreation.” 3

You did not have to be a censor, a right-wing anti-Communist, or a member of the Catholic Church’s Legion of Decency, however, to hold such views. Dr. Frederick Wertham, a liberal psychiatrist, who testified in the landmark Brown v. Board of Education desegregation case and set up one of the first psychiatric clinics in Harlem, publicized the idea that children viewing commercially produced acts of violence and depravity, particularly in comic books, could be transformed into little monsters. The lurid title of his best-selling book, Seduction of the Innocent, an assault on comic books as “primers for crime,” told it all. In it, Dr. Wertham offered copious “horror stories” that read like material from Tales from the Crypt: “Three boys, six to eight years old, took a boy of seven, hanged him nude from a tree, his hands tied behind him, then burned him with matches. Probation officers investigating found that they were re-enacting a comic-book plot.… A boy of thirteen committed a lust murder of a girl of six. After his arrest, in jail, he asked for comicbooks” 4

Kindle Locations 2927-2937

The two— hood and performer, lower-class white and taboo black— merged in the “pelvis” of a Southern “greaser” who dressed like a delinquent, used “one of black America’s favorite products, Royal Crown Pomade hair grease” (meant to give hair a “whiter” look), and proceeded to move and sing “like a negro.” Whether it was because they saw a white youth in blackface or a black youth in whiteface, much of the media grew apoplectic and many white parents alarmed. In the meantime, swiveling his hips and playing suggestively with the microphone, Elvis Presley broke into the lives of millions of teens in 1956, bringing with him an element of disorder and sexuality associated with darkness. 6†

The second set of postwar fears involved the “freedom” of the commercial media— record and comic book companies, radio stations, the movies, and television— to concretize both the fantasies of the young and the nightmarish fears of grown-ups into potent products. For many adults, this was abundance as betrayal, the good life not as a vision of Eden but as an unexpected horror story.

Kindle Locations 2952-2979

Take comic books. Even before the end of World War II, a new kind of content was creeping into them as they became the reading matter of choice for the soldier-adolescent. […] Within a few years, “crime” comics like Crime Does Not Pay emerged from the shadows, displaying a wide variety of criminal acts for the delectation of young readers. These were followed by horror and science fiction comics, purchased in enormous numbers. By 1953, more than 150 horror comics were being produced monthly, featuring acts of torture often of an implicitly sexual nature, murders and decapitations of various bloody sorts, visions of rotting flesh, and so on. 9

Miniature catalogs of atrocities, their feel was distinctly assaultive. In their particular version of the spectacle of slaughter, they targeted the American family, the good life, and revered institutions. Framed by sardonic detective narrators or mocking Grand Guignol gatekeepers, their impact was deconstructive. Driven by a commercial “hysteria” as they competed to attract buyers with increasingly atrocity-ridden covers and stories, they both partook of and mocked the hysteria about them. Unlike radio or television producers, the small publishers of the comic book business were neither advertiser driven nor corporately controlled.

Unlike the movies, comics were subject to no code. Unlike the television networks, comics companies had no Standards and Practices departments. No censoring presence stood between them and whoever would hand over a dime at a local newsstand. Their penny-ante ads and pathetic pay scale ensured that writing and illustrating them would be a job for young men in their twenties (or even teens). Other than early rock and roll, comics were the only cultural form of the period largely created by the young for those only slightly younger. In them, uncensored, can be detected the dismantling voice of a generation that had seen in the world war horrors beyond measure.

The hysterical tone of the response to these comics was remarkable. Comics publishers were denounced for conspiring to create a delinquent nation. Across the country, there were publicized comic book burnings like one in Binghamton, New York, where 500 students were dismissed from school early in order to torch 2,000 comics and magazines. Municipalities passed ordinances prohibiting the sale of comics, and thirteen states passed legislation to control their publication, distribution, or sale. Newspapers and magazines attacked the comics industry. The Hartford Courant decried “the filthy stream that flows from the gold-plated sewers of New York.” In April 1954, the Senate Subcommittee to Investigate Juvenile Delinquency convened in New York to look into links between comics and teen crime. 10

Kindle Locations 3209-3238

If sponsors and programmers recognized the child as an independent taste center, the sight of children glued to the TV, reveling in their own private communion with the promise of America, proved unsettling to some adults. The struggle to control the set, the seemingly trancelike quality of TV time, the soaring number of hours spent watching, could leave a parent feeling challenged by some hard-to-define force released into the home under the aegis of abundance, and the watching child could gain the look of possession, emptiness, or zombification.

Fears of TV’s deleterious effects on the child were soon widespread. The medical community even discovered appropriate new childhood illnesses. There was “TV squint” or eyestrain, “TV bottom,” “bad feet” (from TV-induced inactivity), “frogitis” (from a viewing position that put too much strain on inner-leg ligaments), “TV tummy” (from TV-induced overexcitement), “TV jaw” or “television malocclusion” (from watching while resting on one’s knuckles, said to force the eyeteeth inward), and “tired child syndrome” (chronic fatigue, loss of appetite, headaches, and vomiting induced by excessive viewing).

However, television’s threat to the child was more commonly imagined to lie in the “violence” of its programming. Access to this “violence” and the sheer number of hours spent in front of the set made the idea that this new invention was acting in loco parentis seem chilling to some; and it was true that via westerns, crime shows, war and spy dramas, and Cold War-inspired cartoons TV was indiscriminately mixing a tamed version of the war story with invasive Cold War fears. Now, children could endlessly experience the thrill of being behind the barrel of a gun. Whether through the Atom Squad’s three government agents, Captain Midnight and his Secret Squadron, various FBI men, cowboys, or detectives, they could also encounter “an array of H-bomb scares, mad Red scientists, [and] plots to rule the world,” as well as an increasing level of murder and mayhem that extended from the six-gun frontier of the “adult” western to the blazing machine guns of the crime show. 30

Critics, educators, and worried parents soon began compiling TV body counts as if the statistics of victory were being turned on young Americans. “Frank Orme, an independent TV watchdog, made a study of Los Angeles television in 1952 and noted, in one week, 167 murders, 112 justifiable homicides, and 356 attempted murders. Two-thirds of all the violence he found occurred in children’s shows. In 1954, Orme said violence on kids’ shows had increased 400 percent since he made his first report.” PTAs organized against TV violence, and Senate hearings searched for links between TV programming and juvenile delinquency.

Such “violence,” though, was popular. In addition, competition for audiences among the three networks had the effect of ratcheting up the pressures for violence, just as it had among the producers of horror comics. At The Untouchables, a 1960 hit series in which Treasury agent Eliot Ness took on Chicago’s gangland (and weekly reached 5-8 million young viewers), ABC executives would push hard for more “action.” Producer Quinn Martin would then demand the same of his subordinates, “or we are all going to get clobbered.” In a memo to one of the show’s writers, he asked: “I wish you would come up with a different device than running the man down with a car, as we have done this now in three different shows. I like the idea of sadism, but I hope we can come up with another approach to it.” 31

Who were the Phoenicians?

In modern society, we are obsessed with identity, specifically with categorizing and labeling. This encourages a tendency to essentialize identity, but essentialism isn’t supported by the evidence. The only thing we are born as is members of a particular species, Homo sapiens.

What stands out is that other societies have entirely different experiences of collective identity. The most common distinctions, contrary to ethnic and racial ideologies, are those we perceive in the people most similar to us — the (too often violent) narcissism of small differences.

We not only project our own cultural assumptions onto other societies; we also read anachronisms into the past as a way of rationalizing the present. But if we study closely what we know from history and archaeology, there isn’t any clear evidence for ethnic and racial ideology in the ancient world.

The ancient world is more complex than our simple notions allow. A good example of this is the people (or peoples) who have been called Phoenicians.

* * *

In Search of the Phoenicians
by Josephine Quinn
pp. 13-17

However, my intention here is not simply to rescue the Phoenicians from their undeserved obscurity. Quite the opposite, in fact: I’m going to start by making the case that they did not in fact exist as a self-conscious collective or “people.” The term “Phoenician” itself is a Greek invention, and there is no good evidence in our surviving ancient sources that these Phoenicians saw themselves, or acted, in collective terms above the level of the city or in many cases simply the family. The first and so far the only person known to have called himself a Phoenician in the ancient world was the Greek novelist Heliodorus of Emesa (modern Homs in Syria) in the third or fourth century CE, a claim made well outside the traditional chronological and geographical boundaries of Phoenician history, and one that I will in any case call into question later in this book.

Instead, then, this book explores the communities and identities that were important to the ancient people we have learned to call Phoenicians, and asks why the idea of being Phoenician has been so enthusiastically adopted by other people and peoples—from ancient Greece and Rome, to the emerging nations of early modern Europe, to contemporary Mediterranean nation-states. It is these afterlives, I will argue, that provide the key to the modern conception of the Phoenicians as a “people.” As Ernest Gellner put it, “Nationalism is not the awakening of nations to self-consciousness: it invents nations where they do not exist.” 7 In the case of the Phoenicians, I will suggest, modern nationalism invented and then sustained an ancient nation.

Identities have attracted a great deal of scholarly attention in recent years, serving as the academic marginalia to a series of crucially important political battles for equality and freedom. 8 We have learned from these investigations that identities are not simple and essential truths into which we are born, but that they are constructed by the social and cultural contexts in which we live, by other people, and by ourselves—which is not to say that they are necessarily freely chosen, or that they are not genuinely and often fiercely felt: to describe something as imagined is not to dismiss it as imaginary. 9 Our identities are also multiple: we identify and are identified by gender, class, age, religion, and many other things, and we can be more than one of any of those things at once, whether those identities are compatible or contradictory. 10 Furthermore, identities are variable across both time and space: we play—and we are assigned—different roles with different people and in different contexts, and they have differing levels of importance to us in different situations. 11

In particular, the common assumption that we all define ourselves as a member of a specific people or “ethnic group,” a collective linked by shared origins, ancestry, and often ancestral territory, rather than simply by contemporary political, social, or cultural ties, remains just that—an assumption. 12 It is also a notion that has been linked to distinctive nineteenth-century European perspectives on nationalism and identity, 13 and one that sits uncomfortably with counterexamples from other times and places. 14

The now-discredited categorization and labeling of African “tribes” by colonial administrators, missionaries, and anthropologists of the nineteenth and twentieth centuries provides many well-known examples, illustrating the way in which the “ethnic assumption” can distort interpretations of other people’s affiliations and self-understanding. 15 The Banande of Zaire, for instance, used to refer to themselves simply as bayira (“cultivators” or “workers”), and it was not until the creation of a border between the British Protectorate of Uganda and the Belgian Congo in 1885 that they came to be clearly delineated from another group of bayira now called Bakonzo. 16 Even more strikingly, the Tonga of Zambia, as they were named by outsiders, did not regard themselves as a unified group differentiated from their neighbors, with the consequence that they tended to disperse and reassimilate among other groups. 17 Where such groups do have self-declared ethnic identities, they were often first imposed from without, by more powerful regional actors. The subsequent local adoption of those labels, and of the very concepts of ethnicity and tribe in some African contexts, illustrates the effects that external identifications can have on internal affiliations and self-understandings. 18 Such external labeling is not of course a phenomenon limited to Africa or to Western colonialism: other examples include the ethnic categorization of the Miao and the Yao in Han China, and similar processes carried out by the state in the Soviet Union. 19

Such processes can be dangerous. When Belgian colonial authorities encountered the central African kingdom of Rwanda, they redeployed labels used locally at the time to identify two closely related groups occupying different positions in the social and political hierarchy to categorize the population instead into two distinct “races” of Hutus (identified as the indigenous farmers) and Tutsis (thought to be a more civilized immigrant population). 20 This was not easy to do, and in 1930 a Belgian census attempting to establish which classification should be recorded on the identity cards of their subjects resorted in some cases to counting cows: possession of ten or more made you a Tutsi. 21 Between April and July 1994, more than half a million Tutsis were killed by Hutus, sometimes using their identity cards to verify the “race” of their victims.

The ethnic assumption also raises methodological problems for historians. The fundamental difficulty with labels like “Phoenician” is that they offer answers to questions about historical explanation before they have even been asked. They assume an underlying commonality between the people they designate that cannot easily be demonstrated; they produce new identities where they did not to our knowledge exist; and they freeze in time particular identities that were in fact in a constant process of construction, from inside and out. As Paul Gilroy has argued, “ethnic absolutism” can homogenize what are in reality significant differences. 22 These labels also encourage historical explanation on a very large and abstract scale, focusing attention on the role of the putative generic identity at the expense of more concrete, conscious, and interesting communities and their stories, obscuring in this case the importance of the family, the city, and the region, not to mention the marking of other social identities such as gender, class, and status. In sum, they provide too easy a way out of actually reading the historical evidence.

As a result, recent scholarship tends to see ethnicity not as a timeless fact about a region or group, but as an ideology that emerges at certain times, in particular social and historical circumstances, and, especially at moments of change or crisis: at the origins of a state, for instance, or after conquest, or in the context of migration, and not always even then. 23 In some cases, we can even trace this development over time: James C. Scott cites the example of the Cossacks on Russia’s frontiers, people used as cavalry by the tsars, Ottomans, and Poles, who “were, at the outset, nothing more and nothing less than runaway serfs from all over European Russia, who accumulated at the frontier. They became, depending on their locations, different Cossack “hosts”: the Don (for the Don River basin) Cossacks, the Azov (Sea) Cossacks, and so on.” 24

Ancient historians and archaeologists have been at the forefront of these new ethnicity studies, emphasizing the historicity, flexibility, and varying importance of ethnic identity in the ancient Mediterranean. 25 They have described, for instance, the emergence of new ethnic groups such as the Moabites and Israelites in the Near East in the aftermath of the collapse of the Bronze Age empires and the “crystallisation of commonalities” among Greeks in the Archaic period. 26 They have also traced subsequent changes in the ethnic content and formulation of these identifications: in relation to “Hellenicity,” for example, scholars have delineated a shift in the fifth century BCE from an “aggregative” conception of Greek identity founded largely on shared history and traditions to a somewhat more oppositional approach based on distinction from non-Greeks, especially Persians, and then another in the fourth century BCE, when Greek intellectuals themselves debated whether Greekness should be based on a shared past or on shared culture and values in the contemporary world. 27 By the Hellenistic period, at least in Egypt, the term “Hellene” (Greek) was in official documents simply an indication of a privileged tax status, and those so labeled could be Jews, Thracians—or, indeed, Egyptians. 28

Despite all this fascinating work, there is a danger that the considerable recent interest in the production, mechanisms, and even decline of ancient ethnicity has obscured its relative rarity. Striking examples of the construction of ethnic groups in the ancient world do not of course mean that such phenomena became the norm. 29 There are good reasons to suppose in principle that without modern levels of literacy, education, communication, mobility, and exchange, ancient communal identities would have tended to form on much smaller scales than those at stake in most modern discussions of ethnicity, and that without written histories and genealogies people might have placed less emphasis on the concepts of ancestry and blood-ties that at some level underlie most identifications of ethnic groups. 30 And in practice, the evidence suggests that collective identities throughout the ancient Mediterranean were indeed largely articulated at the level of city-states and that notions of common descent or historical association were rarely the relevant criterion for constructing “groupness” in these communities: in Greek cities, for instance, mutual identification tended to be based on political, legal, and, to a limited extent, cultural criteria, 31 while the Romans famously emphasized their mixed origins in their foundation legends and regularly manumitted their foreign slaves, whose descendants then became full Roman citizens. 32

This means that some of the best-known “peoples” of antiquity may not actually have been peoples at all. Recent studies have shown that such familiar groups as the Celts of ancient Britain and Ireland and the Minoans of ancient Crete were essentially invented in the modern period by the archaeologists who first studied or “discovered” them, 33 and even the collective identity of the Greeks can be called into question. As S. Rebecca Martin has recently pointed out, “there is no clear recipe for the archetypal Hellene,” and despite our evidence for elite intellectual discussion of the nature of Greekness, it is questionable how much “being Greek” meant to most Greeks: less, no doubt, than to modern scholars. 34 The Phoenicians, I will suggest in what follows, fall somewhere in the middle—unlike the Minoans or the Atlantic Celts, there is ancient evidence for a conception of them as a group, but unlike the Greeks, this evidence is entirely external—and they provide another good case study of the extent to which an assumption of a collective identity in the ancient Mediterranean can mislead. 35

pp. 227-230

In all the exciting work that has been done on “identity” in the past few decades, there has been too little attention paid to the concept of identity itself. We tend to ask how identities are made, vary, and change, not whether they exist at all. But Rogers Brubaker and Frederick Cooper have pinned down a central difficulty with recent approaches: “it is not clear why what is routinely characterized as multiple, fragmented, and fluid should be conceptualized as ‘identity’ at all.” 1 Even personal identity, a strong sense of one’s self as a distinct individual, can be seen as a relatively recent development, perhaps related to a peculiarly Western individualism. 2 Collective identities, furthermore, are fundamentally arbitrary: the artificial ways we choose to organize the world, ourselves, and each other. However strong the attachments they provoke, they are not universal or natural facts. Roger Rouse has pointed out that in medieval Europe, the idea that people fall into abstract social groupings by virtue of common possession of a certain attribute, and occupy autonomous and theoretically equal positions within them, would have seemed nonsensical: instead, people were assigned their different places in the interdependent relationships of a concrete hierarchy. 3

The truth is that although historians are constantly apprehending the dead and checking their pockets for identity, we do not know how people really thought of themselves in the past, or in how many different ways, or indeed how much. I have argued here that the case of the Phoenicians highlights the extent to which the traditional scholarly perception of a basic sense of collective identity at the level of a “people,” “culture,” or “nation” in the cosmopolitan, entangled world of the ancient Mediterranean has been distorted by the traditional scholarly focus on a small number of rather unusual, and unusually literate, societies.

My starting point was that we have no good evidence for the ancient people that we call Phoenician identifying themselves as a single people or acting as a stable collective. I do not conclude from this absence of evidence that the Phoenicians did not exist, nor that nobody ever called her- or himself a Phoenician under any circumstances: Phoenician-speakers undoubtedly had a larger repertoire of self-classifications than survives in our fragmentary evidence, and it would be surprising if, for instance, they never described themselves as Phoenicians to the Greeks who invented that term; indeed, I have drawn attention to several cases where something very close to that is going on. Instead, my argument is that we should not assume that our “Phoenicians” thought of themselves as a group simply by analogy with models of contemporary identity formation among their neighbors—especially since those neighbors do not themselves portray the Phoenicians as a self-conscious or strongly differentiated collective. Instead, we should accept the gaps in our knowledge and fill the space instead with the stories that we can tell.

The stories I have looked at in this book include the ways that the people of the northern Levant did in fact identify themselves—in terms of their cities, but even more of their families and occupations—as well as the formation of complex social, cultural, and economic networks based on particular cities, empires, and ideas. These could be relatively small and closed, like the circle of the tophet, or on the other hand, they could, like the network of Melqart, create shared religious and political connections throughout the Mediterranean—with other Levantine settlements, with other settlers, and with local populations. Identifications with a variety of social and cultural traditions is one recurrent characteristic of the people and cities we call Phoenician, and this continued into the Hellenistic and Roman periods, when “being Phoenician” was deployed as a political and cultural tool, although it was still not claimed as an ethnic identity.

Another story could go further, to read a lack of collective identity, culture, and political organization among Phoenician-speakers as a positive choice, a form of resistance against larger regional powers. James C. Scott has recently argued in The Art of Not Being Governed (2009) that self-governing people living on the peripheries and borders of expansionary states in that region tend to adopt strategies to avoid incorporation and to minimize taxation, conscription, and forced labor. Scott’s focus is on the highlands of Southeast Asia, an area now sometimes known as Zomia, and its relationship with the great plains states of the region such as China and Burma. He describes a series of tactics used by the hill people to avoid state power, including “their physical dispersion in rugged terrain, their mobility, their cropping practices, their kinship structure, their pliable ethnic identities . . . their flexible social structure, their religious heterodoxy, their egalitarianism and even the nonliterate, oral cultures.” The constant reconstruction of identity is a core theme in his work: “ethnic identities in the hills are politically crafted and designed to position a group vis-à-vis others in competition for power and resources.” 4 Political integration in Zomia, when it has happened at all, has usually consisted of small confederations: such alliances, he points out, are common but short-lived, and are often preserved in local place names such as “Twelve Tai Lords” (Sipsong Chutai) or “Nine Towns” (Ko Myo)—information that throws new light on the federal meetings recorded in fourth-century BCE Tripolis (“Three Cities”). 5

In fact, many aspects of Scott’s analysis feel familiar in the world of the ancient Mediterranean, on the periphery of the great agricultural empires of Mesopotamia and Iran, and despite all its differences from Zomia, another potential candidate for the label of “shatterzone.” The validity of Scott’s model for upland Southeast Asia itself —a matter of considerable debate since the book’s publication—is largely irrelevant for our purposes; 6 what is interesting here is how useful it might be for thinking about the mountainous region of the northern Levant, and the places of refuge in and around the Mediterranean.

In addition to outright rebellion, we could argue that the inhabitants of the Levant employed a variety of strategies to evade the heaviest excesses of imperial power. 7 One was to organize themselves in small city-states with flimsy political links and weak hierarchies, requiring larger powers to engage in multiple negotiations and arrangements, and providing the communities involved with multiple small and therefore obscure opportunities for the evasion of taxation and other responsibilities—“divide that ye be not ruled,” as Scott puts it. 8 A cosmopolitan approach to culture and language in those cities would complement such an approach, committing to no particular way of doing or being or even looking, keeping loyalties vague and options open. One of the more controversial aspects of Scott’s model could even explain why there is no evidence for Phoenician literature despite earlier Near Eastern traditions of myth and epic. He argues that the populations he studies are in some cases not so much nonliterate as postliterate: “Given the considerable advantages in plasticity of oral over written histories and genealogies, it is at least conceivable to see the loss of literacy and of written texts as a more or less deliberate adaptation to statelessness.” 9

Another available option was to take to the sea, a familiar but forbidding terrain where the experience and knowledge of Levantine sailors could make them and their activities invisible and unaccountable to their overlords further east. The sea also offered an escape route from more local sources of power, and the stories we hear of the informal origins of western settlements such as Carthage and Lepcis, whether or not they are true, suggest an appreciation of this point. A distaste even for self-government could also explain a phenomenon I have drawn attention to throughout the book: our “Phoenicians” not only fail to visibly identify as Phoenician, they often omit to identify at all.

It is striking in this light that the first surviving visible expression of an explicitly “Phoenician” identity was imposed by the Carthaginians on their subjects as they extended state power to a degree unprecedented among Phoenician-speakers, that it was then adopted by Tyre as a symbol of colonial success, and that it was subsequently exploited by Roman rulers in support of their imperial activities. This illustrates another uncomfortable aspect of identity formation: it is often a cultural bullying tactic, and one that tends to benefit those already in power more than those seeking self-empowerment. Modern European examples range from the linguistic and cultural education strategies that turned “peasants into Frenchmen” in the late nineteenth century, 10 to the eugenic Lebensborn program initiated by the Nazis in mid-twentieth-century central Europe to create more Aryan children through procreation between German SS officers and “racially pure” foreign women. 11 Such examples also underline the difficulty of distinguishing between internal and external conceptions of identity when apparently internal identities are encouraged from above, or even from outside, just as the developing modern identity as Phoenician involved the gradual solidification of the identity of the ancient Phoenicians.

It seems to me that attempts to establish a clear distinction between “emic” and “etic” identity are part of a wider tendency to treat identities as ends rather than means, and to focus more on how they are constructed than on why. Identity claims are always, however, a means to another end, and being “Phoenician” is in all the instances I have surveyed here a political rather than a personal statement. It is sometimes used to resist states and empires, from Roman Africa to Hugh O’Donnell’s Ireland, but more often to consolidate them, lending ancient prestige and authority to later regimes, a strategy we can see in Carthage’s Phoenician coinage, the emperor Elagabalus’s installation of a Phoenician sun god at Rome, British appeals to Phoenician maritime power, and Hannibal Qadhafi’s cruise ship.

In the end, it is modern nationalism that has created the Phoenicians, along with much else of our modern idea of the ancient Mediterranean. Phoenicianism has served nationalist purposes since the early modern period: the fully developed notion of Phoenician ethnicity may be a nineteenth-century invention, a product of ideologies that sought to establish ancient peoples or “nations” at the heart of new nation-states, but its roots, like those of nationalism itself, are deeper. As origin myth or cultural comparison, aggregative or oppositional, imperialist and anti-imperialist, Phoenicianism supported the expansion of the early modern nation of Britain, as well as the position of the nation of Ireland as separate and respected within that empire; it helped to consolidate the nation of Lebanon under French imperial mandate, premised on a regional Phoenician identity agreed on between local and French intellectuals, but it also helped to construct the nation of Tunisia in opposition to European colonialism.

Outpost of Humanity

Certain thoughts have been on my mind. I’ve been focused on the issue of who I want to be, in terms of what I do with my time and how I relate to others. To phrase it in the negative, I don’t want to waste time or promote frustration, for myself or for others.

I’ve come to the conclusion that we humans tend to consciously focus on what matters least. We are easily drawn in and distracted. Those in power understand this and use it to create political conflicts and charades to manipulate us. Sadly, the distance between Hollywood and the District of Columbia is nearly non-existent within the public mind. Americans worry about the separation of church and state, or of business and state, when what they should worry about most is the separation of entertainment and state, the nexus of spectacle and propaganda. I’m looking at you, mainstream media.

A notion I’ve had is that maybe politics, like economics, is more a result than a cause (until recent times, few would have seriously considered politics and economics the primary cause of much of anything; even as late as the 19th century, public debate about such things was often thought unseemly). We focus on what is easy to see, which is to say the paradigm that defines our society and so dominates our minds. Politics and economics are ways of simplistically framing what in reality is complex. We don’t know how to deal with that complex reality, confusing and discomforting as it is, and so we mostly ignore it. Besides, politics and economics make for a more entertaining narrative that plays well on mass media.

It’s like the joke about the man looking for car keys under a streetlamp. When asked if he lost his car keys by the streetlamp, he explains he lost them elsewhere but the lighting is better there. Still, people will go on looking under that streetlamp, no matter what anybody else says. There is no point in arguing about it. Just wish them well on their fool’s errand. I guess we all have to keep ourselves preoccupied somehow.

Here is an even more basic point. It appears that rationality and facts have almost nothing to do with much of anything of significance, outside the precise constraints of particular activities such as scientific research or philosophical analysis. I’m specifically thinking of the abovementioned frames of politics and economics. Rationality must operate within a frame, but it can’t precede the act of framing. That is as true for the political left as for the political right, as true for me as for the rest of humanity. Critical thinking is not what centrally motivates people, and not what, on those rare occasions when it occurs, allows for genuine change. Our ability to think well based on valid information matters and is useful as a tool, but it isn’t what drives human behavior.

By the time an issue gets framed as politics or economics, it is already beyond the point of much influence and improvement. Arguing about such things won’t change anything. Even activism by itself won’t change anything. They are results and not causes. Or at best, they are tools and not the hand that wields the tool nor the mind that determines its use. I’m no longer in the mood to bash my head against the brick wall of public debate. It’s not about feeling superior. Rather, it’s about focusing on what matters.

I barely know what motivates me, and I’m not likely to figure out what makes other people tick. It’s not a lack of curiosity on my part, nor a lack of effort in trying to understand. This isn’t to say I plan on ending my obsessive focus on human nature and society. But I realize that focusing on politics, economics, etc. doesn’t make me happy, or anyone else happy either, much less make the world a better place. It seems like the wrong way to look at things, distracting us from the possibilities of genuine insight and understanding, the point of leverage where the world might be moved. These dominant frames can’t give us the inspiration and vision necessary for profound change, the only game that interests me in these times when profound change is desperately needed.

There is another avenue of thought I’ve been following. To find what intrigues and interests you is one of the most important things in the world. Without it, even the best life can feel without meaning or purpose. And with it, even the worst can be tolerable. It’s having something of value to focus upon, to look toward with hope and excitement, to give life direction.

I doubt politics or economics plays this role for anyone. What we care about is always beyond that superficial level. The inspiring pamphleteers of the American Revolution weren’t offering mere political change and economic ideas but an entirely new vision of humanity and society. Some of the American founders even admitted that their own official activities bored them. They’d rather have pursued other interests—to have read edifying books, done scientific research, invented something of value, contributed to their communities, spent more time with their families, or whatever. Something like politics (or economics) was a means, not an end. But too often it gets portrayed as an end, a purpose it is ill-suited to serve.

We spend too little time getting clear in our hearts and minds about what it is we want. We use words and throw out ideals while rarely wrestling with what they mean. To shift our focus would require a soul-searching far beyond any election campaigning, political activism, career development, financial investment strategy, or whatever. That isn’t to argue for apathy and disinterest, much less cynicism and fatalism. Let me point to some real-world examples. You can hear that kind of deeper engagement in the words of someone like Martin Luther King Jr. or, upon King’s death, in the speech given by Robert F. Kennedy. Sometime, really listen to speeches like that and feel the resonance of emotion beyond the words.

Politics matters most when it stops being about politics, when our shared humanity peeks through. In brief moments of stark human reality, as with the tank man at Tiananmen Square, our minds are brought up short and a space opens for something new. Then the emptiness of ideological rhetoric and campaign slogans becomes painfully apparent. And we ache for something more.

Yet I realize that what I present here is not what you’ll see in the mainstream media, not what you’ll hear from any politician or pundit, not what your career guidance counselor or financial adviser is going to offer. I suspect most people would understand what I’m saying, at least on some level, but it’s not what we normally talk about in our society. It touches a raw nerve. In writing these words, I might not be telling most people what they want to hear. I’m offering no comforting rationalizations, no easy narrative, no plausible deniability. Instead, I’m suggesting people think for themselves and do so as honestly as possible.

I’ve only come to this view myself after a lifetime of struggle. Questioning and wondering this deeply comes easily to no one. But once one has come to such a view, what does one do with it? All I know to do is give voice to it, as best I can, however limited my audience. I have no desire to try to force anyone to understand. This is my view and my voice. Others will understand it, maybe even embrace it and find common bond in it, or they won’t. My only purpose is to open up a quiet space amidst the rattling noise and flashing lights. All who can meet me as equals in this understanding are welcome. As for those who see it differently, they are free to go elsewhere in the free market of opinions.

I know that I’m a freak, according to mainstream society. I know there are those who don’t understand my views and don’t agree. That is fine. I’ll leave them alone, if they leave me alone. But here in my space, I will let my freak flag fly. It might even turn out that there are more freaks than some have assumed, which is to say maybe people like me are more normal than those in power would like to let on. One day the silenced majority might find its collective voice. We all might be surprised when we finally hear what they have to say.

Until then, I’ll go on doing my own thing in my own way, here at this outpost of humanity.

What kind of trust? And to what end?

A common argument about the success of certain societies is that it couldn’t be replicated in the United States. What makes them work well, it is claimed, is their lack of diversity. Sometimes it is added that they are small countries, which is meant to imply ‘tribalistic’. Compared to actual tribes, these countries are rather diverse and large. But I get the point being made, and I’m not one to dismiss it out of hand.

Still, not all the data agrees with this conclusion. One example comes from comparisons of education systems: in the successful social democracies, even schools with higher rates of diversity and immigrant students tend to have higher test scores than their counterparts in a country like the US. There is also one book that seriously challenges the tribal argument: Segregation and Mistrust by Eric M. Uslaner. Looking at the data, he determined that “It wasn’t diversity but segregation that led to less trust” (Kindle Locations 72-73).

Segregation tends to go along with various forms of inequality: social position, economic class and mobility, political power and representation, access to resources, quality of education, systemic and institutional racism, environmental racism, ghettoization, etc. And around inequality there is, unsurprisingly, a constellation of other social and health problems that harm the segregated most of all but also the entire society: increases in food deserts, obesity, stunted neurocognitive development (including brain damage from neurotoxins), mental illness, violent crime, teen pregnancies, STDs, high school dropouts, child and spousal abuse, bullying, and the list goes on.

Obviously, none of that creates the conditions for a culture of trust. Segregation and inequality undermine everything that allows for a healthy society. Therefore, lessen inequality and, in proportion, a healthy society will follow. That is even true with high levels of diversity.

Related to this, I recall a study showing that children raised in diverse communities tended to grow up to be socially liberal adults, with greater tolerance and acceptance, fundamental traits of social trust.

On the opposite end, a small tribe has high trust within the community but little if any trust of anyone outside of it. Is such a small community really more trusting in the larger sense? I don’t know if that has ever been researched.

Such people in tight-knit communities may be willing to do anything for those within their tribe, while a stranger might be killed for no reason other than being an outsider. Take the Puritans as an example. Theirs was a high-trust society, and from early on they had collectivist tendencies, being community-oriented with a strong shared vision. Yet anyone who didn’t quite fit in would be banished, tortured, or killed.

Maybe there are many kinds of trust, as there are many kinds of social capital, social cohesion, and social order. There are probably few if any societies that excel in all forms of trust. Some forms of trust might even be diametrically opposed to others. Besides, trust isn’t necessarily a good thing in every case, such as under an authoritarian regime. Low-diversity societies such as Russia, Germany, Japan, and China have their own kinds of potential problems, ones that can endanger the lives of people far outside their own societies.

Trust is complex. What kind of trust? And to what end?

* * *

Does Diversity Erode Social Cohesion?
Social Capital and Race in British Neighbourhoods
by Natalia Letki

The debate on causes and consequences of social capital has been recently complemented with an investigation into factors that erode it. Various scholars concluded that diversity, and racial heterogeneity in particular, is damaging for the sense of community, interpersonal trust and formal and informal interactions. However, most of this research does not adequately account for the negative effect of a community’s low socio-economic status on neighbourhood interactions and attitudes. This paper is the first to date empirical examination of the impact of racial context on various dimensions of social capital in British neighbourhoods. Findings show that the low neighbourhood status is the key element undermining all dimensions of social capital, while eroding effect of racial diversity is limited.

Racism learned
James H. Burnett III

children exposed to racism tend to accept and embrace it as young as age 3, and in just a matter of days.

Can Racism Be Stopped in the Third Grade?
by Lisa Miller

At no developmental age are children less racist than in elementary school. But that’s not innocence, exactly, since preschoolers are obsessed with race. At ages 3 and 4, children are mapping their world, putting things and people into categories: size, shape, color. Up, down; day, night; in, out; over, under. They see race as a useful sorting measure and ask their parents to give them words for the differences they see, generally rejecting the adult terms “black” and “white,” and preferring finer (and more accurate) distinctions: “tan,” “brown,” “chocolate,” “pinkish.” They make no independent value judgments about racial difference, obviously, but by 4 they are already absorbing the lessons of a racist culture. All of them know reflexively which race it is preferable to be. Even today, almost three-quarters of a century since the Doll Test, made famous in Brown v. Board of Education, experiments by CNN and Margaret Beale Spencer have found that black and white children still show a bias toward people with lighter skin.

But by the time they have entered elementary school, they are in a golden age. At 7 or 8, children become very concerned with fairness and responsive to lessons about prejudice. This is why the third, fourth, and fifth grades are good moments to teach about slavery and the Civil War, suffrage and the civil-rights movement. Kids at that age tend to be eager to wrestle with questions of inequality, and while they are just beginning to form a sense of racial identity (this happens around 7 for most children, though for some white kids it takes until middle school), it hasn’t yet acquired much tribal force. It’s the closest humans come to a racially uncomplicated self. The psychologist Stephen Quintana studies Mexican-American kids. At 6 to 9 years old, they describe their own racial realities in literal terms and without value judgments. When he asks what makes them Mexican-American, they talk about grandparents, language, food, skin color. When he asks them why they imagine a person might dislike Mexican-Americans, they are baffled. Some can’t think of a single answer. This is one reason cross-racial friendships can flourish in elementary school — childhood friendships that researchers cite as the single best defense against racist attitudes in adulthood. The paradise is short-lived, though. Early in elementary school, kids prefer to connect in twos and threes over shared interests — music, sports, Minecraft. Beginning in middle school, they define themselves through membership in groups, or cliques, learning and performing the fraught social codes that govern adult interactions around race. As early as 10, psychologists at Tufts have shown, white children are so uncomfortable discussing race that, when playing a game to identify people depicted in photos, they preferred to undermine their own performance by staying silent rather than speak racial terms aloud.

Being Politically Correct Can Actually Boost Creativity
by Marissa Fessenden

The researchers assessed the ideas each group generated after 10 minutes of brainstorming. In same-sex groups, they found, political correctness priming produced less creative ideas. In the mixed groups however, creativity got a boost. “They generated more ideas, and those ideas were more novel,” Duguid told NPR. “Whether it was two men and one woman or two women and one man, the results were consistent.” The creativity of each group’s ideas was assessed by independent, blind raters.

Is Diversity the Source of America’s Genius?
by Gregory Rodriguez

Despite the fact that diversity is so central to the American condition, scholars who’ve studied the cognitive effects of diversity have long made the mistake of treating homogeneity as the norm. Only this year did a group of researchers from MIT, Columbia University, and Northwestern University publish a paper questioning the conventional wisdom that homogeneity represents some kind of objective baseline for comparison or “neutral indicator of the ideal response in a group setting.”

To bolster their argument, the researchers cite a previous study that found that members of homogenous groups tasked with solving a mystery tend to be more confident in their problem-solving skills than their performance actually merits. By contrast, the confidence level of individuals in diverse groups corresponds better with how well their group actually performs. The authors concluded that homogenous groups “were actually further than diverse groups from an objective index of accuracy.”

The researchers also refer to a 2006 experiment showing that homogenous juries made “more factually inaccurate statements and considered a narrower range of information” than racially diverse juries. What these and other findings suggest, wrote the researchers, is that people in diverse groups “are more likely to step outside their own perspective and less likely to instinctively impute their own knowledge onto others” than people in homogenous groups.

Multicultural Experience Enhances Creativity
by Leung, Maddux, Galinsky, & Chiu

Many practices aimed at cultivating multicultural competence in educational and organizational settings (e.g., exchange programs, diversity education in college, diversity management at work) assume that multicultural experience fosters creativity. In line with this assumption, the research reported in this article is the first to empirically demonstrate that exposure to multiple cultures in and of itself can enhance creativity. Overall, the authors found that extensiveness of multicultural experiences was positively related to both creative performance (insight learning, remote association, and idea generation) and creativity-supporting cognitive processes (retrieval of unconventional knowledge, recruitment of ideas from unfamiliar cultures for creative idea expansion). Furthermore, their studies showed that the serendipitous creative benefits resulting from multicultural experiences may depend on the extent to which individuals open themselves to foreign cultures, and that creativity is facilitated in contexts that deemphasize the need for firm answers or existential concerns. The authors discuss the implications of their findings for promoting creativity in increasingly global learning and work environments.

The Evidence That White Children Benefit From Integrated Schools
by Anya Kamenetz

For example, there’s evidence that corporations with better gender and racial representation make more money and are more innovative. And many higher education groups have collected large amounts of evidence on the educational benefits of diversity in support of affirmative action policies.

In one set of studies, Phillips gave small groups of three people a murder mystery to solve. Some of the groups were all white and others had a nonwhite member. The diverse groups were significantly more likely to find the right answer.

Sundown Towns: A Hidden Dimension Of American Racism
by James W. Loewen
pp. 360-2

In addition to discouraging new people, hypersegregation may also discourage new ideas. Urban theorist Jane Jacobs has long held that the mix of peoples and cultures found in successful cities prompts creativity. An interesting study by sociologist William Whyte shows that sundown suburbs may discourage out-of-the-box thinking. By the 1970s, some executives had grown weary of the long commutes with which they had saddled themselves so they could raise their families in elite sundown suburbs. Rather than move their families back to the city, they moved their corporate headquarters out to the suburbs. Whyte studied 38 companies that left New York City in the 1970s and ’80s, allegedly “to better [the] quality-of-life needs of their employees.” Actually, they moved close to the homes of their CEOs, cutting their average commute to eight miles; 31 moved to the Greenwich-Stamford, Connecticut, area. These are not sundown towns, but adjacent Darien was, and Greenwich and Stamford have extensive formerly sundown neighborhoods that are also highly segregated on the basis of social class. Whyte then compared those 38 companies to 36 randomly chosen comparable companies that stayed in New York City. Judged by stock price, the standard way to measure how well a company is doing, the suburbanized companies showed less than half the stock appreciation of the companies that chose to remain in the city.7 […]

Research suggests that gay men are also important members of what Richard Florida calls “the creative class”—those who come up with or welcome new ideas and help drive an area economically.11 Metropolitan areas with the most sundown suburbs also show the lowest tolerance for homosexuality and have the lowest concentrations of “out” gays and lesbians, according to Gary Gates of the Urban Institute. He lists Buffalo, Cleveland, Detroit, Milwaukee, and Pittsburgh as examples. Recently, some cities—including Detroit—have recognized the important role that gay residents can play in helping to revive problematic inner-city neighborhoods, and now welcome them.12 The distancing from African Americans embodied by all-white suburbs intensifies another urban problem: sprawl, the tendency for cities to become more spread out and less dense. Sprawl can decrease creativity and quality of life throughout the metropolitan area by making it harder for people to get together for all the human activities—from think tanks to complex commercial transactions to opera—that cities make possible in the first place. Asked in 2000, “What is the most important problem facing the community where you live?” 18% of Americans replied sprawl and traffic, tied for first with crime and violence. Moreover, unlike crime, sprawl is increasing. Some hypersegregated metropolitan areas like Detroit and Cleveland are growing larger geographically while actually losing population.13

How Diversity Makes Us Smarter
by Katherine W. Phillips

Research on large, innovative organizations has shown repeatedly that this is the case. For example, business professors Cristian Deszö of the University of Maryland and David Ross of Columbia University studied the effect of gender diversity on the top firms in Standard & Poor’s Composite 1500 list, a group designed to reflect the overall U.S. equity market. First, they examined the size and gender composition of firms’ top management teams from 1992 through 2006. Then they looked at the financial performance of the firms. In their words, they found that, on average, “female representation in top management leads to an increase of $42 million in firm value.” They also measured the firms’ “innovation intensity” through the ratio of research and development expenses to assets. They found that companies that prioritized innovation saw greater financial gains when women were part of the top leadership ranks.

Racial diversity can deliver the same kinds of benefits. In a study conducted in 2003, Orlando Richard, a professor of management at the University of Texas at Dallas, and his colleagues surveyed executives at 177 national banks in the U.S., then put together a database comparing financial performance, racial diversity and the emphasis the bank presidents put on innovation. For innovation-focused banks, increases in racial diversity were clearly related to enhanced financial performance.

Evidence for the benefits of diversity can be found well beyond the U.S. In August 2012 a team of researchers at the Credit Suisse Research Institute issued a report in which they examined 2,360 companies globally from 2005 to 2011, looking for a relationship between gender diversity on corporate management boards and financial performance. Sure enough, the researchers found that companies with one or more women on the board delivered higher average returns on equity, lower gearing (that is, net debt to equity) and better average growth. […]

In 2006 Margaret Neale of Stanford University, Gregory Northcraft of the University of Illinois at Urbana-Champaign and I set out to examine the impact of racial diversity on small decision-making groups in an experiment where sharing information was a requirement for success. Our subjects were undergraduate students taking business courses at the University of Illinois. We put together three-person groups—some consisting of all white members, others with two whites and one nonwhite member—and had them perform a murder mystery exercise. We made sure that all group members shared a common set of information, but we also gave each member important clues that only he or she knew. To find out who committed the murder, the group members would have to share all the information they collectively possessed during discussion. The groups with racial diversity significantly outperformed the groups with no racial diversity. Being with similar others leads us to think we all hold the same information and share the same perspective. This perspective, which stopped the all-white groups from effectively processing the information, is what hinders creativity and innovation.

Other researchers have found similar results. In 2004 Anthony Lising Antonio, a professor at the Stanford Graduate School of Education, collaborated with five colleagues from the University of California, Los Angeles, and other institutions to examine the influence of racial and opinion composition in small group discussions. More than 350 students from three universities participated in the study. Group members were asked to discuss a prevailing social issue (either child labor practices or the death penalty) for 15 minutes. The researchers wrote dissenting opinions and had both black and white members deliver them to their groups. When a black person presented a dissenting perspective to a group of whites, the perspective was perceived as more novel and led to broader thinking and consideration of alternatives than when a white person introduced that same dissenting perspective. The lesson: when we hear dissent from someone who is different from us, it provokes more thought than when it comes from someone who looks like us.

This effect is not limited to race. For example, last year professors of management Denise Lewin Loyd of the University of Illinois, Cynthia Wang of Oklahoma State University, Robert B. Lount, Jr., of Ohio State University and I asked 186 people whether they identified as a Democrat or a Republican, then had them read a murder mystery and decide who they thought committed the crime. Next, we asked the subjects to prepare for a meeting with another group member by writing an essay communicating their perspective. More important, in all cases, we told the participants that their partner disagreed with their opinion but that they would need to come to an agreement with the other person. Everyone was told to prepare to convince their meeting partner to come around to their side; half of the subjects, however, were told to prepare to make their case to a member of the opposing political party, and half were told to make their case to a member of their own party.

The result: Democrats who were told that a fellow Democrat disagreed with them prepared less well for the discussion than Democrats who were told that a Republican disagreed with them. Republicans showed the same pattern. When disagreement comes from a socially different person, we are prompted to work harder. Diversity jolts us into cognitive action in ways that homogeneity simply does not.

For this reason, diversity appears to lead to higher-quality scientific research. This year Richard Freeman, an economics professor at Harvard University and director of the Science and Engineering Workforce Project at the National Bureau of Economic Research, along with Wei Huang, a Harvard economics Ph.D. candidate, examined the ethnic identity of the authors of 1.5 million scientific papers written between 1985 and 2008 using Thomson Reuters’s Web of Science, a comprehensive database of published research. They found that papers written by diverse groups receive more citations and have higher impact factors than papers written by people from the same ethnic group. Moreover, they found that stronger papers were associated with a greater number of author addresses; geographical diversity, and a larger number of references, is a reflection of more intellectual diversity. […]

In a 2006 study of jury decision making, social psychologist Samuel Sommers of Tufts University found that racially diverse groups exchanged a wider range of information during deliberation about a sexual assault case than all-white groups did. In collaboration with judges and jury administrators in a Michigan courtroom, Sommers conducted mock jury trials with a group of real selected jurors. Although the participants knew the mock jury was a court-sponsored experiment, they did not know that the true purpose of the research was to study the impact of racial diversity on jury decision making.

Sommers composed the six-person juries with either all white jurors or four white and two black jurors. As you might expect, the diverse juries were better at considering case facts, made fewer errors recalling relevant information and displayed a greater openness to discussing the role of race in the case. These improvements did not necessarily happen because the black jurors brought new information to the group—they happened because white jurors changed their behavior in the presence of the black jurors. In the presence of diversity, they were more diligent and open-minded.

Social Disorder, Mental Disorder

“It is no measure of health to be well adjusted to a profoundly sick society.”
~ Jiddu Krishnamurti

“The opposite of addiction is not sobriety. The opposite of addiction is connection.”
~ Johann Hari

On Staying Sane in a Suicidal Culture
by Dahr Jamail

Our situation so often feels hopeless. So much has spun out of control, and pathology surrounds us. At least one in five Americans are taking psychiatric medications, and the number of children taking adult psychiatric drugs is soaring.

From the perspective of Macy’s teachings, it seems hard to argue that this isn’t, at least in part, active denial of what is happening to the world and how challenging it is for both adults and children to deal with it emotionally, spiritually and psychologically.

These disturbing trends, which are increasing, are something she is very mindful of. As she wrote in World as Lover, World as Self, “The loss of certainty that there will be a future is, I believe, the pivotal psychological reality of our time.”

What does depression feel like? Trust me – you really don’t want to know
by Tim Lott

Admittedly, severely depressed people can connect only tenuously with reality, but repeated studies have shown that mild to moderate depressives have a more realistic take on life than most “normal” people, a phenomenon known as “depressive realism”. As Neel Burton, author of The Meaning of Madness, put it, this is “the healthy suspicion that modern life has no meaning and that modern society is absurd and alienating”. In a goal-driven, work-oriented culture, this is deeply threatening.

This viewpoint can have a paralysing grip on depressives, sometimes to a psychotic extent – but perhaps it haunts everyone. And therefore the bulk of the unafflicted population may never really understand depression. Not only because they (understandably) lack the imagination, and (unforgivably) fail to trust in the experience of the sufferer – but because, when push comes to shove, they don’t want to understand. It’s just too … well, depressing.

The Mental Disease of Late-Stage Capitalism
by Joe Brewer

A great irony of this deeply corrupt system of wealth hoarding is that the “weapon of choice” is how we feel about ourselves as we interact with our friends. The elites don’t have to silence us. We do that ourselves by refusing to talk about what is happening to us. Fake it until you make it. That’s the advice we are given by the already successful who have pigeon-holed themselves into the tiny number of real opportunities society had to offer. Hold yourself accountable for the crushing political system that was designed to divide us against ourselves.

This great lie that we whisper to ourselves is how they control us. Our fear that other impoverished people (which is most of us now) will look down on us for being impoverished too. This is how we give them the power to keep humiliating us.

I say no more of this emotional racket. If I am going to be responsible for my fate in life, let it be because I chose to stand up and fight — that I helped dismantle the global architecture of wealth extraction that created this systemic corruption of our economic and political systems.

Now more than ever, we need spiritual healing. As this capitalist system destroys itself, we can step aside and find healing by living honestly and without fear. They don’t get to tell us how to live. We can share our pain with family and friends. We can post it on social media. Shout it from the rooftops if we feel like it. The pain we feel is capitalism dying. It hurts us because we are still in it.

Neoliberalism – the ideology at the root of all our problems
by George Monbiot

So pervasive has neoliberalism become that we seldom even recognise it as an ideology. We appear to accept the proposition that this utopian, millenarian faith describes a neutral force; a kind of biological law, like Darwin’s theory of evolution. But the philosophy arose as a conscious attempt to reshape human life and shift the locus of power.

Neoliberalism sees competition as the defining characteristic of human relations. It redefines citizens as consumers, whose democratic choices are best exercised by buying and selling, a process that rewards merit and punishes inefficiency. It maintains that “the market” delivers benefits that could never be achieved by planning.

Attempts to limit competition are treated as inimical to liberty. Tax and regulation should be minimised, public services should be privatised. The organisation of labour and collective bargaining by trade unions are portrayed as market distortions that impede the formation of a natural hierarchy of winners and losers. Inequality is recast as virtuous: a reward for utility and a generator of wealth, which trickles down to enrich everyone. Efforts to create a more equal society are both counterproductive and morally corrosive. The market ensures that everyone gets what they deserve.

We internalise and reproduce its creeds. The rich persuade themselves that they acquired their wealth through merit, ignoring the advantages – such as education, inheritance and class – that may have helped to secure it. The poor begin to blame themselves for their failures, even when they can do little to change their circumstances.

Never mind structural unemployment: if you don’t have a job it’s because you are unenterprising. Never mind the impossible costs of housing: if your credit card is maxed out, you’re feckless and improvident. Never mind that your children no longer have a school playing field: if they get fat, it’s your fault. In a world governed by competition, those who fall behind become defined and self-defined as losers.

Among the results, as Paul Verhaeghe documents in his book What About Me? are epidemics of self-harm, eating disorders, depression, loneliness, performance anxiety and social phobia. Perhaps it’s unsurprising that Britain, in which neoliberal ideology has been most rigorously applied, is the loneliness capital of Europe. We are all neoliberals now.

Neoliberalism has brought out the worst in us
by Paul Verhaeghe

We tend to perceive our identities as stable and largely separate from outside forces. But over decades of research and therapeutic practice, I have become convinced that economic change is having a profound effect not only on our values but also on our personalities. Thirty years of neoliberalism, free-market forces and privatisation have taken their toll, as relentless pressure to achieve has become normative. If you’re reading this sceptically, I put this simple statement to you: meritocratic neoliberalism favours certain personality traits and penalises others.

There are certain ideal characteristics needed to make a career today. The first is articulateness, the aim being to win over as many people as possible. Contact can be superficial, but since this applies to most human interaction nowadays, this won’t really be noticed.

It’s important to be able to talk up your own capacities as much as you can – you know a lot of people, you’ve got plenty of experience under your belt and you recently completed a major project. Later, people will find out that this was mostly hot air, but the fact that they were initially fooled is down to another personality trait: you can lie convincingly and feel little guilt. That’s why you never take responsibility for your own behaviour.

On top of all this, you are flexible and impulsive, always on the lookout for new stimuli and challenges. In practice, this leads to risky behaviour, but never mind, it won’t be you who has to pick up the pieces. The source of inspiration for this list? The psychopathy checklist by Robert Hare, the best-known specialist on psychopathy today.

What About Me?: The Struggle for Identity in a Market-Based Society
by Paul Verhaeghe
Kindle Locations 2357-2428

Hypotheses such as these, however plausible, are not scientific. If we want to demonstrate the link between a neo-liberal society and, say, mental disorders, we need two things. First, we need a yardstick that indicates the extent to which a society is neo-liberal. Second, we need to develop criteria to measure the increase or decrease of psychosocial wellbeing in society. Combine these two, and you would indeed be able to see whether such a connection existed. And by that I don’t mean a causal connection, but a striking pattern; a rise in one being reflected in the other, or vice versa.

This was exactly the approach used by Richard Wilkinson, a British social epidemiologist, in two pioneering studies (the second carried out with Kate Pickett). The gauge they used was eminently quantifiable: the extent of income inequality within individual countries. This is indeed a good yardstick, as neo-liberal policy is known to cause a spectacular rise in such inequality. Their findings were unequivocal: an increase of this kind has far-reaching consequences for nearly all health criteria. Its impact on mental health (and consequently also mental disorders) is by no means an isolated phenomenon. This finding is just as significant as the discovery that mental disorders are increasing.

As social epidemiologists, Wilkinson and Pickett studied the connection between society and health in the broad sense of the word. Stress proves to be a key factor here. Research has revealed its impact, both on our immune systems and our cardiovascular systems. Tracing the causes of stress is difficult, though, especially given that we live in the prosperous and peaceful West. If we take a somewhat broader view, most academics agree on the five factors that determine our health: early childhood; the fears and cares we experience; the quality of our social relationships; the extent to which we have control over our lives; and, finally, our social status. The worse you score in these areas, the worse your health and the shorter your life expectancy are likely to be.

In his first book, The Impact of Inequality: how to make sick societies healthier, Wilkinson scrutinises the various factors involved, rapidly coming to what would be the central theme of his second book — that is, income inequality. A very striking conclusion is that in a country, or even a city, with high income inequality, the quality of social relationships is noticeably diminished: there is more aggression, less trust, more fear, and less participation in the life of the community. As a psychoanalyst, I was particularly interested in his quest for the factors that play a role at individual level. Low social status proves to have a determining effect on health. Lack of control over one’s work is a prominent stress factor. A low sense of control is associated with poor relationships with colleagues and greater anger and hostility — a phenomenon that Richard Sennett had already described (the infantilisation of adult workers). Wilkinson discovered that this all has a clear impact on health, and even on life expectancy. Which in turn ties in with a classic finding of clinical psychology: powerlessness and helplessness are among the most toxic emotions.

Too much inequality is bad for your health

A number of conclusions are forced upon us. In a prosperous part of the world like Western Europe, it isn’t the quality of health care (the number of doctors and hospitals) that determines the health of the population, but the nature of social and economic life. The better social relationships are, the better the level of health. Excessive inequality is more injurious to health than any other factor, though this is not simply a question of differences between social classes. If anything, it seems to be more of a problem within groups that are presumed to be equal (for example, civil servants and academics). This finding conflicts with the general assumption that income inequality only hurts the underclass — the losers — while those higher up the social ladder invariably benefit. That’s not the case: its negative effects are statistically visible in all sectors of the population, hence the subtitle of Wilkinson’s second work: why more equal societies almost always do better.

In that book, Wilkinson and Pickett adopt a fairly simple approach. Using official statistics, they analyse the connection between income inequality and a host of other criteria. The conclusions are astounding, almost leaping off the page in table after table: the greater the level of inequality in a country or even region, the more mental disorders, teenage pregnancies, child mortality, domestic and street violence, crime, drug abuse, and medication. And the greater the inequality is, the worse physical health and educational performance are, the more social mobility declines, along with feelings of security, and the unhappier people are.

Both books, especially the latter, provoked quite a response in the Anglo-Saxon world. Many saw in them proof of what they already suspected. Many others were more negative, questioning everything from the collation of data to the statistical methods used to reach conclusions. Both authors refuted the bulk of the criticism — which, given the quality of their work, was not a very difficult task. Much of it targeted what was not in the books: the authors were not urging a return to some kind of ‘all animals are equal’ Eastern-bloc state. What critics tended to forget was that their analysis was of relative differences in income, with negative effects becoming most manifest in the case of extreme inequality. Moreover, it is not income inequality itself that produces these effects, but the stress factors associated with it.

Roughly the same inferences can be drawn from Sennett’s study, though it is more theoretical and less underpinned with figures. His conclusion is fairly simple, and can be summed up in the title of what I regard as his best book: Respect in a World of Inequality. Too much inequality leads to a loss of respect, including self-respect — and, in psychosocial terms, this is about the worst thing that can happen to anyone.

This emerges very powerfully from a single study of the social determinants of health, which is still in progress. Nineteen eighty-six saw the start of the second ‘Whitehall Study’ that systematically monitored over 10,000 British civil servants, to establish whether there was a link between their health and their work situations. At first sight, this would seem to be a relatively homogenous group, and one that definitely did not fall in the lowest social class. The study’s most striking finding is that the lower the rank and status of someone within that group, the lower their life expectancy, even when taking account of such factors as smoking, diet, and physical exercise. The most obvious explanation is that the lowest-ranked people experienced the most stress. Medical studies confirm this: individuals in this category have higher cortisol levels (increased stress) and more coagulation-factor deficiencies (and thus are at greater risk of heart attacks).

My initial question was, ‘Is there a demonstrable connection between today’s society and the huge rise in mental disorders?’ As all these studies show, the answer is yes. Even more important is the finding that this link goes beyond mental health. The same studies show highly negative effects on other health parameters. As so often is the case, a parallel can be found in fiction — in this instance, in Alan Lightman’s novel The Diagnosis. During an interview, the author posed the following rhetorical question: ‘Who, experiencing for years the daily toll of intense corporate pressure, could truly escape severe anxiety?’* (I think it may justifiably be called rhetorical, when you think how many have had to find out its answer for themselves.)

A study by a research group at Heidelberg University very recently came to similar conclusions, finding that people’s brains respond differently to stress according to whether they have had an urban or rural upbringing. 3 What’s more, people in the former category prove more susceptible to phobias and even schizophrenia. So our brains are differently shaped by the environment in which we grow up, making us potentially more susceptible to mental disorders. Another interesting finding emerged from the way the researchers elicited stress. While the subjects of the experiment were wrestling with the complex calculations they had been asked to solve, some of them were told (falsely) that their scores were lagging behind those of the others, and asked to hurry up because the experiments were expensive. All the neo-liberal factors were in place: emphasis on productivity, evaluation, competition, and cost reduction.

Capitalist Realism: Is there no alternative?
by Mark Fisher
pp. 19-22

Mental health, in fact, is a paradigm case of how capitalist realism operates. Capitalist realism insists on treating mental health as if it were a natural fact, like weather (but, then again, weather is no longer a natural fact so much as a political-economic effect). In the 1960s and 1970s, radical theory and politics (Laing, Foucault, Deleuze and Guattari, etc.) coalesced around extreme mental conditions such as schizophrenia, arguing, for instance, that madness was not a natural, but a political, category. But what is needed now is a politicization of much more common disorders. Indeed, it is their very commonness which is the issue: in Britain, depression is now the condition that is most treated by the NHS. In his book The Selfish Capitalist, Oliver James has convincingly posited a correlation between rising rates of mental distress and the neoliberal mode of capitalism practiced in countries like Britain, the USA and Australia. In line with James’s claims, I want to argue that it is necessary to reframe the growing problem of stress (and distress) in capitalist societies. Instead of treating it as incumbent on individuals to resolve their own psychological distress, instead, that is, of accepting the vast privatization of stress that has taken place over the last thirty years, we need to ask: how has it become acceptable that so many people, and especially so many young people, are ill? The ‘mental health plague’ in capitalist societies would suggest that, instead of being the only social system that works, capitalism is inherently dysfunctional, and that the cost of it appearing to work is very high. […]

By contrast with their forebears in the 1960s and 1970s, British students today appear to be politically disengaged. While French students can still be found on the streets protesting against neoliberalism, British students, whose situation is incomparably worse, seem resigned to their fate. But this, I want to argue, is a matter not of apathy, nor of cynicism, but of reflexive impotence. They know things are bad, but more than that, they know they can’t do anything about it. But that ‘knowledge’, that reflexivity, is not a passive observation of an already existing state of affairs. It is a self-fulfilling prophecy.

Reflexive impotence amounts to an unstated worldview amongst the British young, and it has its correlate in widespread pathologies. Many of the teenagers I worked with had mental health problems or learning difficulties. Depression is endemic. It is the condition most dealt with by the National Health Service, and is afflicting people at increasingly younger ages. The number of students who have some variant of dyslexia is astonishing. It is not an exaggeration to say that being a teenager in late capitalist Britain is now close to being reclassified as a sickness. This pathologization already forecloses any possibility of politicization. By privatizing these problems – treating them as if they were caused only by chemical imbalances in the individual’s neurology and/ or by their family background – any question of social systemic causation is ruled out.

Many of the teenage students I encountered seemed to be in a state of what I would call depressive hedonia. Depression is usually characterized as a state of anhedonia, but the condition I’m referring to is constituted not by an inability to get pleasure so much as it is by an inability to do anything else except pursue pleasure. There is a sense that ‘something is missing’ – but no appreciation that this mysterious, missing enjoyment can only be accessed beyond the pleasure principle. In large part this is a consequence of students’ ambiguous structural position, stranded between their old role as subjects of disciplinary institutions and their new status as consumers of services. In his crucial essay ‘Postscript on Societies of Control’, Deleuze distinguishes between the disciplinary societies described by Foucault, which were organized around the enclosed spaces of the factory, the school and the prison, and the new control societies, in which all institutions are embedded in a dispersed corporation.

pp. 32-38

The ethos espoused by McCauley is the one which Richard Sennett examines in The Corrosion of Character: The Personal Consequences of Work in the New Capitalism, a landmark study of the affective changes that the post-Fordist reorganization of work has brought about. The slogan which sums up the new conditions is ‘no long term’. Where formerly workers could acquire a single set of skills and expect to progress upwards through a rigid organizational hierarchy, now they are required to periodically re-skill as they move from institution to institution, from role to role. As the organization of work is decentralized, with lateral networks replacing pyramidal hierarchies, a premium is put on ‘flexibility’. Echoing McCauley’s mockery of Hanna in Heat (‘ How do you expect to keep a marriage?’), Sennett emphasizes the intolerable stresses that these conditions of permanent instability put on family life. The values that family life depends upon – obligation, trustworthiness, commitment – are precisely those which are held to be obsolete in the new capitalism. Yet, with the public sphere under attack and the safety nets that a ‘Nanny State’ used to provide being dismantled, the family becomes an increasingly important place of respite from the pressures of a world in which instability is a constant. The situation of the family in post-Fordist capitalism is contradictory, in precisely the way that traditional Marxism expected: capitalism requires the family (as an essential means of reproducing and caring for labor power; as a salve for the psychic wounds inflicted by anarchic social-economic conditions), even as it undermines it (denying parents time with children, putting intolerable stress on couples as they become the exclusive source of affective consolation for each other). […]

The psychological conflict raging within individuals cannot but have casualties. Marazzi is researching the link between the increase in bi-polar disorder and post-Fordism and, if, as Deleuze and Guattari argue, schizophrenia is the condition that marks the outer edges of capitalism, then bi-polar disorder is the mental illness proper to the ‘interior’ of capitalism. With its ceaseless boom and bust cycles, capitalism is itself fundamentally and irreducibly bi-polar, periodically lurching between hyped-up mania (the irrational exuberance of ‘bubble thinking’) and depressive come-down. (The term ‘economic depression’ is no accident, of course). To a degree unprecedented in any other social system, capitalism both feeds on and reproduces the moods of populations. Without delirium and confidence, capital could not function.

It seems that with post-Fordism, the ‘invisible plague’ of psychiatric and affective disorders that has spread, silently and stealthily, since around 1750 (i.e. the very onset of industrial capitalism) has reached a new level of acuteness. Here, Oliver James’s work is important. In The Selfish Capitalist, James points to significant rises in the rates of ‘mental distress’ over the last 25 years. ‘By most criteria’, James reports,

rates of distress almost doubled between people born in 1946 (aged thirty-six in 1982) and 1970 (aged thirty in 2000). For example, 16 per cent of thirty-six-year-old women in 1982 reported having ‘trouble with nerves, feeling low, depressed or sad’, whereas 29 per cent of thirty year-olds reported this in 2000 (for men it was 8 per cent in 1982, 13 per cent in 2000).

Another British study James cites compared levels of psychiatric morbidity (which includes neurotic symptoms, phobias and depression) in samples of people in 1977 and 1985. ‘Whereas 22 per cent of the 1977 sample reported psychiatric morbidity, this had risen to almost a third of the population (31 per cent) by 1986’. Since these rates are much higher in countries that have implemented what James calls ‘selfish’ capitalism than in other capitalist nations, James hypothesizes that it is selfish (i.e. neoliberalized) capitalist policies and culture that are to blame. […]

James’s conjectures about aspirations, expectations and fantasy fit with my own observations of what I have called ‘hedonic depression’ in British youth.

It is telling, in this context of rising rates of mental illness, that New Labour committed itself, early in its third term in government, to removing people from Incapacity Benefit, implying that many, if not most, claimants are malingerers. In contrast with this assumption, it doesn’t seem unreasonable to infer that most of the people claiming Incapacity Benefit – and there are well in excess of two million of them – are casualties of Capital. A significant proportion of claimants, for instance, are people psychologically damaged as a consequence of the capitalist realist insistence that industries such as mining are no longer economically viable. (Even considered in brute economic terms, though, the arguments about ‘viability’ seem rather less than convincing, especially once you factor in the cost to taxpayers of incapacity and other benefits.) Many have simply buckled under the terrifyingly unstable conditions of post-Fordism.

The current ruling ontology denies any possibility of a social causation of mental illness. The chemico-biologization of mental illness is of course strictly commensurate with its de-politicization. Considering mental illness an individual chemico-biological problem has enormous benefits for capitalism. First, it reinforces Capital’s drive towards atomistic individualization (you are sick because of your brain chemistry). Second, it provides an enormously lucrative market in which multinational pharmaceutical companies can peddle their pharmaceuticals (we can cure you with our SSRIs). It goes without saying that all mental illnesses are neurologically instantiated, but this says nothing about their causation. If it is true, for instance, that depression is constituted by low serotonin levels, what still needs to be explained is why particular individuals have low levels of serotonin. This requires a social and political explanation; and the task of repoliticizing mental illness is an urgent one if the left wants to challenge capitalist realism.

It does not seem fanciful to see parallels between the rising incidence of mental distress and new patterns of assessing workers’ performance. We will now take a closer look at this ‘new bureaucracy’.

The Opposite of Addiction is Connection
by Robert Weiss LCSW, CSAT-S

Not for Alexander. He was bothered by the fact that the cages in which the rats were isolated were small, with no potential for stimulation beyond the heroin. Alexander thought: Of course they all got high. What else were they supposed to do? In response to this perceived shortcoming, Alexander created what we now call “the rat park,” a cage approximately 200 times larger than the typical isolation cage, with Hamster wheels and multi-colored balls to play with, plenty of tasty food to eat, and spaces for mating and raising litters.[ii] And he put not one rat, but 20 rats (of both genders) into the cage. Then, and only then, did he mirror the old experiments, offering one bottle of pure water and one bottle of heroin water. And guess what? The rats ignored the heroin. They were much more interested in typical communal rat activities such as playing, fighting, eating, and mating. Essentially, with a little bit of social stimulation and connection, addiction disappeared. Heck, even rats who’d previously been isolated and sucking on the heroin water left it alone once they were introduced to the rat park.

The Human Rat Park

One of the reasons that rats are routinely used in psychological experiments is that they are social creatures in many of the same ways that humans are social creatures. They need stimulation, company, play, drama, sex, and interaction to stay happy. Humans, however, add an extra layer to this equation. We need to be able to trust and to emotionally attach.

This human need for trust and attachment was initially studied and developed as a psychological construct in the 1950s, when John Bowlby tracked the reactions of small children when they were separated from their parents.[iii] In a nutshell, he found that infants, toddlers, and young children have an extensive need for safe and reliable caregivers. If children have that, they tend to be happy in childhood and well-adjusted (emotionally healthy) later in life. If children don’t have that, it’s a very different story. In other words, it is clear from Bowlby’s work and the work of later researchers that the level and caliber of trust and connection experienced in early childhood carries forth into adulthood. Those who experience secure attachment as infants, toddlers, and small children nearly always carry that with them into adulthood, and they are naturally able to trust and connect in healthy ways. Meanwhile, those who don’t experience secure early-life attachment tend to struggle with trust and connection later in life. In other words, securely attached individuals tend to feel comfortable in and to enjoy the human rat park, while insecurely attached people typically struggle to fit in and connect.

The Opposite Of Addiction is Connection
By Jonathan Davis

If connection is the opposite of addiction, then an examination of the neuroscience of human connection is in order. Published in 2000, A General Theory Of Love is a collaboration between three professors of psychiatry at the University of California in San Francisco. A General Theory Of Love reveals that humans require social connection for optimal brain development, and that babies cared for in a loving environment are psychologically and neurologically ‘immunised’ by love. When things get difficult in adult life, the neural wiring developed from a love-filled childhood leads to increased emotional resilience in adult life. Conversely, those who grow up in an environment where loving care is unstable or absent are less likely to be resilient in the face of emotional distress.

How does this relate to addiction? Gabor Maté observes an extremely high rate of childhood trauma in the addicts he works with and trauma is the extreme opposite of growing up in a consistently safe and loving environment. He asserts that it is extremely common for people with addictions to have a reduced capacity for dealing with emotional distress, hence an increased risk of drug-dependence.

How Our Ability To Connect Is Impaired By Trauma

Trauma is well-known to cause interruption to healthy neural wiring, in both the developing and mature brain. A deeper issue here is that people who have suffered trauma, particularly children, can be left with an underlying sense that the world is no longer safe, or that people can no longer be trusted. This erosion (or complete destruction) of a sense of trust, that our family, community and society will keep us safe, results in isolation – leading to the very lack of connection Johann Hari suggests is the opposite of addiction. People who use drugs compulsively do so to avoid the pain of past trauma and to replace the absence of connection in their life.

Social Solutions To Addiction

The solution to the problem of addiction on a societal level is both simple and fairly easy to implement. If a person is born into a life that is lacking in love and support on a family level, or if due to some other trauma they have become isolated and suffer from addiction, there must be a cultural response to make sure that person knows that they are valued by their society (even if they don’t feel valued by their family). Portugal has demonstrated this with a 50% drop in addiction thanks to programs that are specifically designed to re-create connection between the addict and their community.

The real cause of addiction has been discovered – and it’s not what you think
by Johann Hari

This has huge implications for the one hundred year old war on drugs. This massive war – which, as I saw, kills people from the malls of Mexico to the streets of Liverpool – is based on the claim that we need to physically eradicate a whole array of chemicals because they hijack people’s brains and cause addiction. But if drugs aren’t the driver of addiction – if, in fact, it is disconnection that drives addiction – then this makes no sense.

Ironically, the war on drugs actually increases all those larger drivers of addiction: for example, I went to a prison in Arizona – ‘Tent City’ – where inmates are detained in tiny stone isolation cages (“The Hole”) for weeks and weeks on end, to punish them for drug use. It is as close to a human recreation of the cages that guaranteed deadly addiction in rats as I can imagine. And when those prisoners get out, they will be unemployable because of their criminal record – guaranteeing they will be cut off ever more. I watched this playing out in the human stories I met across the world.

There is an alternative. You can build a system that is designed to help drug addicts to reconnect with the world – and so leave behind their addictions.

This isn’t theoretical. It is happening. I have seen it. Nearly fifteen years ago, Portugal had one of the worst drug problems in Europe, with 1 percent of the population addicted to heroin. They had tried a drug war, and the problem just kept getting worse. So they decided to do something radically different. They resolved to decriminalize all drugs, and transfer all the money they used to spend on arresting and jailing drug addicts, and spend it instead on reconnecting them – to their own feelings, and to the wider society. The most crucial step is to get them secure housing, and subsidized jobs – so they have a purpose in life, and something to get out of bed for. I watched as they are helped, in warm and welcoming clinics, to learn how to reconnect with their feelings, after years of trauma and stunning them into silence with drugs.

One example I learned about was a group of addicts who were given a loan to set up a removals firm. Suddenly, they were a group, all bonded to each other, and to the society, and responsible for each other’s care.

The results of all this are now in. An independent study by the British Journal of Criminology found that since total decriminalization, addiction has fallen, and injecting drug use is down by 50 percent. I’ll repeat that: injecting drug use is down by 50 percent. Decriminalization has been such a manifest success that very few people in Portugal want to go back to the old system. The main campaigner against the decriminalization back in 2000 was Joao Figueira – the country’s top drug cop. He offered all the dire warnings that we would expect from the Daily Mail or Fox News. But when we sat together in Lisbon, he told me that everything he predicted had not come to pass – and he now hopes the whole world will follow Portugal’s example.

This isn’t only relevant to addicts. It is relevant to all of us, because it forces us to think differently about ourselves. Human beings are bonding animals. We need to connect and love. The wisest sentence of the twentieth century was E.M. Forster’s: “only connect.” But we have created an environment and a culture that cut us off from connection, or offer only the parody of it offered by the Internet. The rise of addiction is a symptom of a deeper sickness in the way we live–constantly directing our gaze towards the next shiny object we should buy, rather than the human beings all around us.

The writer George Monbiot has called this “the age of loneliness.” We have created human societies where it is easier for people to become cut off from all human connections than ever before. Bruce Alexander, the creator of Rat Park, told me that for too long, we have talked exclusively about individual recovery from addiction. We need now to talk about social recovery—how we all recover, together, from the sickness of isolation that is sinking on us like a thick fog.

But this new evidence isn’t just a challenge to us politically. It doesn’t just force us to change our minds. It forces us to change our hearts.

* * *

Social Conditions of an Individual’s Condition

Society and Dysfunction

It’s All Your Fault, You Fat Loser!

Liberal-mindedness, Empathetic Imagination, and Capitalist Realism

Ideological Realism & Scarcity of Imagination

The Unimagined: Capitalism and Crappiness

To Put the Rat Back in the Rat Park

Rationalizing the Rat Race, Imagining the Rat Park

The Desperate Acting Desperately

To Grow Up Fast

Morality-Punishment Link

An Invisible Debt Made Visible

Trends in Depression and Suicide Rates

From Bad to Worse: Trends Across Generations

Republicans: Party of Despair

Rate And Duration of Despair

“We have met the enemy and he is us.”

We blame society, but we are society.

That is such a simple truth and for that reason it is easy to ignore or not fully grasp. It slips past us, as if it were just a nice saying. Yet it is the literal and most basic truth of our entire existence. We are social creatures, at the very core of our being.

Living in a dysfunctional society gives us plenty of opportunities to think about what this means. I realize most people would rather not think about it because then they’d feel a sense of moral responsibility to do something about it. That is all the more reason for the rest of us, unable to ignore it, to force this issue into public attention. Again and again and again.

People say we have no choice but to choose what society offers us. This is regularly seen during the campaign season. Just hold your nose, eat the plate of shit given you, and try to keep it down.

It’s the saddest thing in the world to see the abused voter returning to the two-party system that abuses them, as if they deserve the abuse. You try to argue with them, but the victim predictably defends the abuser: he’s not so bad, he really loves me, I couldn’t live without him, etc. Even though the victim is physically free to leave, they can’t imagine a life that is different or rather can’t imagine that they deserve anything else.

All of society is about relationships. These relationships don’t exist outside of us. We are our relationships in a fundamental sense. It is what defines us. As such, we should choose our relationships carefully and when necessary choose new relationships.

We don’t live in an overtly violent and oppressive militarized police state. If we speak our minds or act independently, we aren’t likely to be arbitrarily imprisoned or executed. Despite our society being a banana republic, we still do have basic freedoms, even with the elections being rigged. Besides, democracy isn’t an election. Nor is it the government. No, to find democracy look in a mirror or, better yet, look into the face of your neighbor. We are democracy.

If we don’t like the choices within our democracy, we need to act differently. No one is going to give us democracy. No one can give us permission to be free and to act freely. Voting for the right candidate is not the issue, much less the solution.

We will have a functioning democracy if and only when we act as functioning democratic citizens. We’ve allowed ourselves to be fooled. Yet all that it would take for us to see clearly is to remove the blindfold and open our eyes. And all that it would take for us to act freely is to loosen the shackles, once we realized they were never locked.

Some on the political left would like to entirely blame the rich for our failed democracy. Others on the political right would blame the poor. But both sides are wrong. The rich are too few in number to stop what the public demands, if the public ever were to demand actual democracy. And the poor, for various reasons, vote at too low a rate.

We have a welfare state because that maintains the social order, not because anyone wants a welfare state. It’s just the other side of the corporatocracy. The welfare state just keeps the masses comfortable enough that they will neither vote for reform nor start a revolution. As I’ve said many times before, it is the bread part of the bread and circus.

No one, rich or poor, is necessarily happy with our society. Yet we lack the collective ability to envision anything better. We’re trapped by our own demoralized apathy and crippled imagination. We are dominated by fear, but we forget that we are what we fear. The dysfunction we see is the expression of our own behavior, the results of our own choices. It is fear that holds our society together.

So, the only way to reclaim our society and our democracy is by claiming that fear. That is the source of the power we’ve given away.

Social Conditions of an Individual’s Condition

A paradigm change has been happening. The shift began long ago, but it’s starting to gain traction in the mainstream. Here is one recent example, an article from Psychology Today—Anxiety and Depression Are Symptoms, Not Diseases by Gregg Henriques Ph.D.:

“Depression is a way the emotional system signals that things are not working and that one is not getting one’s relational needs met. If you are low on relational value in the key domains of family, friends, lovers, group and self, feeling depressed in this context is EXACTLY like feeling pain from a broken arm, feeling cold being outside in the cold, and feeling hungry after going 24 hours without food.

“It is worth noting that, given the current structure of society, depression often serves not to help reboot the system and enlist social support, but instead contributes to the further isolation of the individual, which creates a nasty, vicious spiral of shutting down, doing less, feeling more isolated, turning against the self, and thus getting even more depressed. As such, depressive symptoms often do contribute to the problem, and folks do suffer from Negative Affect Syndromes, where extreme negative moods are definitely part of the problem.

“BUT, everyone should be clear, first and foremost, that anxiety and depression are symptoms of psychosocial needs and threats. They should NOT be, first and foremost, considered alien feelings that need to be eliminated or fixed, any more than we would treat pain from a broken arm, coldness and hunger primarily with pills that takes away the feelings, as opposed to fixing the arm, getting warmer or feeding the hungry individual.”

It’s a pretty good article. The focus on symptoms seems like the right way to frame it, and it touches upon larger issues. I’d widen the scope even further. Once we consider the symptoms, it opens up a whole slew of possibilities.

There is the book Chasing the Scream by Johann Hari. The author discusses the rat park research, showing that addiction isn’t an individual disease but a social problem. Change the conditions and the results change. Basically, people are healthier, happier, and more well-adjusted in environments that are conducive to satisfying basic needs.

Then there is James Gilligan’s Why Some Politicians Are More Dangerous Than Others, an even more hard-hitting book. It shows (among other things) that suicide rates go up when Republicans are elected. As I recall, other data shows the same pattern in other societies when conservatives are elected.

There are other factors that are directly correlated to depression rates and other mental health issues.

Some are purely physical. Toxoplasmosis is one example, as is parasitic load more generally, which can stunt brain development. Many examples could be added, from malnutrition to lack of healthcare.

Plus, there are problems that involve both the physical environment and the social environment. Lead toxicity causes mental health problems, including depression. Rates of lead toxicity depend on how strong and effective regulations are, which in turn depends on the type of government and who is in power.

A wide variety of research and data points to a basic conclusion. Environmental conditions (physical, social, political, and economic) are of paramount importance. So why do we treat those who suffer the externalized costs of society as sick individuals?

Here is the sticking point. Systemic and collective problems are in some ways the easiest to deal with. Once understood, the problems are essentially simple and their solutions tend to be straightforward. Even so, the sheer scale of these problems makes them hard for us to confront. We want someone to blame. But who do we blame when the entire society is dysfunctional?

If we recognize the problems as symptoms, we are forced to acknowledge our collective agency and shared fate. Those who understand this are up against countervailing forces that maintain the status quo. Even if a psychiatrist realizes that a patient is experiencing the symptoms of larger social issues, how is that psychiatrist supposed to help the patient? Who is going to diagnose the entire society and demand it seek rehabilitation?

No One Knows

Here is a thought experiment. What if almost everything you think you know is wrong? It isn’t just a thought experiment. In all likelihood, it is true.

Almost everything people thought they knew in the past has turned out to be wrong, partly or entirely. There is no reason to think the same isn’t still the case. We are constantly learning new things that add to or alter prior fields of knowledge.

We live in a scientific age. Even so, there are more things we don’t know than we do know. Our scientific knowledge remains narrow and shallow. The universe is vast. Even the earth is vast. Heck, human nature is vast, in its myriad expressions and potentials.

In some ways, science gives a false sense of how much we know. We end up taking many things as scientific that aren’t actually so. Take the examples of consciousness and free will, both areas about which we have little scientific knowledge.

We have no more reason to believe consciousness is limited to the brain than to believe that consciousness is inherent to matter. We have no more reason to believe that free will exists than to believe it doesn’t. These are non-falsifiable hypotheses, which is to say we don’t know how to test them in order to prove them one way or another.

Yet we go about our lives as if these are decided facts, that we are conscious free agents in a mostly non-conscious world. This is what we believe based on our cultural biases. Past societies had different beliefs about consciousness and agency. Future societies likely will have different beliefs than our own and they will look at us as oddly as we look at ancient people. Our present hyper-individualism may one day seem as bizarre as the ancient bicameral mind.

We forget how primitive our society still is. In many ways, not much has changed over the past centuries or even across recent millennia. Humans still live their lives in basically the same way. For as long as civilization has existed, people have lived in houses and ridden on wheeled vehicles. When we have health conditions, invasively cutting into people is still often standard procedure, just as it has been for a long, long time. Political and military power hasn’t really changed either, except in scale. The most fundamental aspects of our lives are remarkably unchanged.

At the same time, we are on the edge of vast changes. Just in my life, technology has leapt far beyond the imaginings of most people in the generations before mine. Our knowledge of genetics, climate change, and even biblical studies has been irrevocably altered, turning much of the earlier consensus on its head.

We can’t comprehend what any of it means or where it is heading. All that we can be certain of is that paradigms are going to be shattered over the next century. What will replace them, no one knows.

Origins of Ritual Behavior

Here is something from Scientific American. It’s an article by Laura Kehoe, Mysterious Chimpanzee Behavior May Be Evidence of “Sacred” Rituals:

“Even more intriguing than this, maybe we found the first evidence of chimpanzees creating a kind of shrine that could indicate sacred trees. Indigenous West African people have stone collections at “sacred” trees and such man-made stone collections are commonly observed across the world and look eerily similar to what we have discovered here.”

Apparently, this has never before been observed and documented. It is an amazing discovery. Along with tool use, it points toward a central building block of primate society.

I immediately thought of the first evidence of settled civilization. Before humans built homes for themselves in settlements, they built homes for their gods. These first temples likely began quite simply, maybe even as simple as a pile of rocks.

Human society, as we know it, developed around ritual sites. This may have begun much earlier with the common ancestor of both humans and chimpanzees.

Human Condition

Human nature and the human condition
by The Philosopher’s Beard Blog

“The distinction between human nature and the human condition has implications that go beyond whether some academic sub-fields are built on fundamental error and thus a waste of time (hardly news). The foundational mistake of assuming that certain features prominent among contemporary human beings are true of H. sapiens and therefore true of all of us has implications for how we think about ourselves now. There is a lack of adequate critical reflection – of a true scientific spirit of inquiry – in much of the naturalising project. It fits all too easily with our natural desire for a convenient truth: that the way the world seems is the way it has to be.

“For example, many people believe that to be human is to be religious – or at least to have a ‘hunger for religion’ – and argue as a result that religion should be accorded special prominence and autonomy in our societies – in our education, civil, and political institutions. American ‘secularism’ for example might be said to be built on this principle: hence all religions are engaged in a similar project of searching for the divine and deserve equal respect. The pernicious implication is that the non-religious (who are not the same as atheists, by the way) are somehow lacking in an essential human capability, and should be pitied or perhaps given help to overcome the gaping hole in their lives.

“Anatomically modern humans have been around in our current form for around 200,000 years but while our physiological capacities have scarcely changed we are cognitively very different. Human beings operate in a human world of our own creation, as well as in the natural, biological world that we are given. In the human world people create new inventions – like religion or war or slavery – that do something for them. Those inventions succeed and spread in so far as they are amenable to our human nature and our other inventions, and by their success they condition us to accept the world they create until it seems like it could not have been otherwise.

“Recognising the fact that the human condition is human-made offers us the possibility to scrutinise it, to reflect, and perhaps even to adopt better inventions. Slavery was once so dominant in our human world that even Aristotle felt obliged to give an account of its naturalness (some people are just naturally slavish). But we discovered a better invention – market economies – that has made inefficient slavery obsolete and now almost extinct (which is not to say that this invention is perfect either). The human condition concerns humans as we are, but not as we have to be.”

The Final Rhapsody of Charles Bowden
A visit with the famed journalist just before his death.
by Scott Carrier, Mother Jones Magazine

“Postscript from Bowden’s Blood Orchid, 1995: Imagine the problem is not physical. Imagine the problem has never been physical, that it is not biodiversity, it is not the ozone layer, it is not the greenhouse effect, the whales, the old-growth forest, the loss of jobs, the crack in the ghetto, the abortions, the tongue in the mouth, the diseases stalking everywhere as love goes on unconcerned. Imagine the problem is not some syndrome of our society that can be solved by commissions or laws or a redistribution of what we call wealth. Imagine that it goes deeper, right to the core of what we call our civilization and that no one outside of ourselves can affect real change, that our civilization, our governments are sick and that we are mentally ill and spiritually dead and that all our issues and crises are symptoms of this deeper sickness … then what are we to do?”