The Moral Axis of the Axial Age

Where is the world heading and upon what might it all be revolving? One would be unwise to speculate too much or offer strong predictions, but one well-established general trend must be noted. Over time, Americans have been moving further and further to the political ‘left’. The majority of Americans are strongly liberal and progressive on nearly every major issue — political, social, economic, environmental, etc. But this is also happening on the political ‘right’, even among the religious. It’s interesting that, even as the elite have often pushed the Overton window to the ‘right’, the political ‘right’ has generally gone ‘left’, following the rest of the American population. The whole spectrum shifts leftward.

Only a minority of right-wingers have become increasingly extreme in the other direction. The problem is that this small demographic, what I call the ‘Ferengi’ (the overlap of propagandized Fox News viewers, white Evangelicals, and partisan Republicans), has had an outsized voice in the corporate media and an outsized influence in corporatocratic politics. This ideological shift, to a large extent, is a generational divide or rather an age-related gradation. Each generation becomes steadily more liberal and progressive, sometimes outright left-wing on certain issues compared to how those issues were perceived in the past.

This conflict of views has less relevance in the Democratic Party but is quite stark in the Republican Party. It’s also seen among Evangelicals. Old Evangelicals, at least among whites, are part of the Ferengi extremists. But young Evangelicals identify with the ‘progressive’ label and support such things as same sex marriage while no longer seeing abortion as an important issue, much less feeling drawn to politicized religiosity. The Ferengi are the opposite of the majority of Americans, often the opposite of a large number of moderate Republicans and conservatives, and definitely the opposite of the young.

Yet the Ferengi are held up as equivalent to these much larger demographics, creating a false narrative of polarization and division. The ideological gap, though, is in some sense real. The Ferengi fringe are disproportionately represented among those who are most politically active with high voter turnout, specifically older conservatives with more money and influence. Even as they are a shrinking minority, they still strongly control, or otherwise are overrepresented in, the Republican Party and right-wing media. The extremism of this minority emphasizes, by contrast, how far ‘left’ the rest of the population has gone.

This ongoing leftward pattern, what some might consider ‘progress’, isn’t exactly new. The shift hasn’t only happened over the decades and across the generations but, one might argue, goes back centuries or possibly even millennia. Being part of the political ‘left’ project has required saintly patience, prophetic vision, and heroic will. The impulses of egalitarianism and universalism were initially religious imperatives — born in the Axial Age, grown into childhood during the Middle Ages, and come to young adulthood in the Enlightenment Age, though still not having reached full maturity.

It was in the 1300s that the moral vision of Jesus, as expressed in the original Christian creed, finally captured the populist imagination as something akin to class war and sociopolitical ideology. Some of those early proto-leftists sought to overthrow the hierarchy of feudalism and church, to bring the equality of heaven down to earth. Their thinking on the matter was far from being rationally articulated as a coherent philosophy, but the demands they made were stated in no uncertain terms. They weren’t content with otherworldly promises of rewards in the afterlife. Once imagined, those ideals as demands in the here-and-now inevitably became radical in their threat to worldly power.

Yet no one back then had any notion of a political ‘left’, per se. For most of the past two millennia, it remained a moral intuition bubbling up out of the collective psyche. Even so, it was a powerful moral intuition. Those peasants, in revolting, did rampage into the cities and killed more than a few of the elite. They nearly took the king hostage, although they weren’t quite sure what to do, as the commoners had never previously gained the upper hand to that degree. It would require many more centuries for the dirty masses to figure out exactly what their demands were and to what end, and what exactly this moral intuition meant, but for damn sure it could not be denied and it would only grow stronger over time.

That ancient outrage of the commoners is what we have inherited. We’ve fancied it up with Enlightenment thought and clothed it in modern respectability, while the political ‘right’ has sought to blame it on French Jacobins and postmodern neo-Marxists or whatever, but in essence it remains that crude beating heart of moral righteousness and divine judgment, the authority of God’s command brought down like a sledgehammer to level the towers of human pride, as with Jesus throwing the moneychangers out of the temple. It’s not an intellectual argument and so, in response to it, rationality is impotent. But equally impotent are the churchly claims of fundamentalists and the delicate sensibilities of social conservatives.

Every single advance of society began as a never-before-thought idea that was imagined into existence but at first denied and attacked as heretical, dangerous, crazy, or impossible. So much of what has become established and normalized, so much of what even conservatives now accept and defend, began as terrifying radicalism, fevered dream, and ranting jeremiad. Before being written about in revolutionary pamphlets, scholarly tomes, and left-wing analyses, these obstinate demands and unrealistic ideals were originally brought forth by prophets from the desert and peasants from the countryside, the uncouth and illiterate rabble who spoke with the moral certainty of faith and of God’s intimacy.

These initially inchoate inklings and urgings of the Anglo-American and broader Western moral imagination took many unknown generations of struggle to take the shape we know now. But we act like the revolutionary zeal of the late 18th century burst forth like Athena from Zeus’ head, as if intellectuals with too much time on their hands thought it all up while getting a bit tipsy in colonial taverns and French cafes. More than a few of those rabblerousers and pamphlet scribblers began as religious dissenters, a tradition they inherited from their forefathers who fled to the colonies during the religious uprising and populist unrest of the English Civil War, an aftershock of the English Peasants’ Revolt (with similar uprisings shaping the experience of immigrants from other countries).

Thomas Paine, a man hated for claiming God was not an evil authoritarian ruling over humanity, is now largely forgotten for his later work, Agrarian Justice (1797). In offering a plan for land and tax reform, he spelled out ideas for an old age pension and a basic income, paid for by progressive taxation. The former took almost a century and a half to finally get enacted as Social Security, and the latter we’re still working toward. These kinds of radical proposals take a while to gain purchase in political action, even when they’ve been part of the political imaginary for many generations or longer. Paine himself was merely responding to a public debate that had been ongoing for centuries before him.

Even his criticisms of organized religion were largely a repetition of what others had already said. Some of those heretical thoughts had been recorded in the ancient world. Jesus, after all, was one of the greatest heretics of them all, a detail that didn’t go without notice by the many heretics who followed his example. Such religious heresy always went hand in hand with social and political heresy. The early Christians were disliked because they refused to participate in the worship and celebrations of imperial religion. And some of the first Christian communities set themselves apart by living in egalitarian communes where positions were decided through drawing lots. Their radical beliefs led to radical actions and a radical social order, not mere nice-sounding rhetoric about a distant Heaven.

So, it’s unsurprising that primitive communism, proto-socialism, and Marxist-like critiques began among religious dissenters, as heard during the Peasants’ Revolt and English Civil War. They took inspiration from Jesus and the original Christians, as those in the first century were themselves drawing upon words written down over the half millennium before that. When the full-fledged American socialists came along with their crazy dreams, as implemented in Milwaukee’s sewer socialism and labor organizing, they were doing so as part of the Judeo-Christian tradition and in carrying forward ancient ideals. Don’t forget the American “Pledge of Allegiance” was written by a Christian socialist.

Yet here we are. The radical notion of sewer socialism where everyone equally deserves clean water was once considered a threat to Western civilization by the respectable elite but now is considered an essential component of that very same ruling order. Conservatives no longer openly argue that poor people deserve to fall into horrific sickness and die from sewage and filthy water. What used to be radically left-wing has simply become the new unquestioned norm, the moral ground below which we won’t descend. Some might call that progress.

It’s the same thing with constitutional republicanism, civil rights, free markets, universal education, women’s suffrage, abolition of slavery, and on and on. In centuries past, these were dangerous notions to conservatives and traditionalists. They were condemned and violently suppressed. But now the modern right-winger has so fully embraced and become identified with this radicalism as to have forgotten it was ever radical. And this trend continues. As clean water is accepted as a universal right, in the near future same sex marriage and basic income might likewise be brought into the fold of what defines civilization; in fact, a majority of Americans already support same sex marriage, universal healthcare, women’s rights, abortion access, etc.

There is no reason to assume that this seismic shift that began so long ago is going to stop anytime soon, as long as this civilizational project continues its development. The aftershocks of an ancient cataclysm will likely continue to redefine the world from one age to the next. In a sense, we are still living in the struggle of the Axial Age (“The Empire never ended!” PKD) and no one knows when it will finally come to a close nor what will be the final result, what world will have come to fruition from the seed that was planted in that fertile soil. The Axial Age is the moral axis upon which the world we know rotates. A revolution is a turning and returning, an eternal recurrence — and in a state of disorientation with no end in sight, around and around we go.

* * *

On the Cusp of Adulthood and Facing an Uncertain Future: What We Know About Gen Z So Far
by Kim Parker and Ruth Igielnik

Within the GOP, Gen Zers have sharp differences with their elders

Among Republicans and those who lean to the Republican Party, there are striking differences between Generation Z and older generations on social and political issues. In their views on race, Gen Z Republicans are more likely than older generations of Republicans to say blacks are treated less fairly than whites in the U.S. today. Fully 43% of Republican Gen Zers say this, compared with 30% of Millennial Republicans and roughly two-in-ten Gen X, Boomer and Silent Generation Republicans. Views are much more consistent across generations among Democrats and Democratic leaners.

Similarly, the youngest Republicans stand out in their views on the role of government and the causes of climate change. Gen Z Republicans are much more likely than older generations of Republicans to desire an increased government role in solving problems. About half (52%) of Republican Gen Zers say government should do more, compared with 38% of Millennials, 29% of Gen Xers and even smaller shares among older generations. And the youngest Republicans are less likely than their older counterparts to attribute the earth’s warming temperatures to natural patterns, as opposed to human activity (18% of Gen Z Republicans say this, compared with three-in-ten or more among older generations of Republicans).

Overall, members of Gen Z look similar to Millennials in their political preferences, particularly when it comes to the upcoming 2020 election. Among registered voters, a January Pew Research Center survey found that 61% of Gen Z voters (ages 18 to 23) said they were definitely or probably going to vote for the Democratic candidate for president in the 2020 election, while about a quarter (22%) said they were planning to vote for Trump. Millennial voters, similarly, were much more likely to say they plan to support a Democrat in November than Trump (58% vs. 25%). Larger shares of Gen X voters (37%), Boomers (44%) and Silents (53%) said they plan to support President Trump. […]

Generations differ in their familiarity and comfort with using gender-neutral pronouns

Ideas about gender identity are rapidly changing in the U.S., and Gen Z is at the front end of those changes. Gen Zers are much more likely than those in older generations to say they personally know someone who prefers to go by gender-neutral pronouns, with 35% saying so, compared with 25% of Millennials, 16% of Gen Xers, 12% of Boomers and just 7% of Silents. This generational pattern is evident among both Democrats and Republicans.

There are also stark generational differences in views of how gender options are presented on official documents. Gen Z is by far the most likely to say that when a form or online profile asks about a person’s gender it should include options other than “man” and “woman.” About six-in-ten Gen Zers (59%) say forms or online profiles should include additional gender options, compared with half of Millennials, about four-in-ten Gen Xers and Boomers (40% and 37%, respectively) and roughly a third of those in the Silent Generation (32%).

These views vary widely along partisan lines, and there are generational differences within each party coalition. But those differences are sharpest among Republicans: About four-in-ten Republican Gen Zers (41%) think forms should include additional gender options, compared with 27% of Republican Millennials, 17% of Gen Xers and Boomers and 16% of Silents. Among Democrats, half or more in all generations say this.

Gen Zers are similar to Millennials in their comfort with using gender-neutral pronouns. Both groups express somewhat higher levels of comfort than other generations, though generational differences on this question are fairly modest. Majorities of Gen Zers and Millennials say they would feel “very” or “somewhat” comfortable using a gender-neutral pronoun to refer to someone if asked to do so. By comparison, Gen Xers and Boomers are about evenly divided: About as many say they would feel at least somewhat comfortable (49% and 50%, respectively) as say they would be uncomfortable.

Members of Gen Z are also similar to Millennials in their views on society’s acceptance of those who do not identify as a man or a woman. Roughly half of Gen Zers (50%) and Millennials (47%) think that society is not accepting enough of these individuals. Smaller shares of Gen Xers (39%), Boomers (36%) and those in the Silent Generation (32%) say the same.

Here again there are large partisan gaps, and Gen Z Republicans stand apart from other generations of Republicans in their views. About three-in-ten Republican Gen Zers (28%) say that society is not accepting enough of people who don’t identify as a man or woman, compared with two-in-ten Millennials, 15% of Gen Xers, 13% of Boomers and 11% of Silents. Democrats’ views are nearly uniform across generations in saying that society is not accepting enough of people who don’t identify as a man or a woman.

Technological Fears and Media Panics

“One of the first characteristics of the first era of any new form of communication is that those who live through it usually have no idea what they’re in.”
~Mitchell Stephens

“Almost every new medium of communication or expression that has appeared since the dawn of history has been accompanied by doomsayers and critics who have confidently predicted that it would bring about The End of the World as We Know It by weakening the brain or polluting our precious bodily fluids.”
~New Media Are Evil, from TV Tropes

“The internet may appear new and fun…but it’s really a porn highway to hell. If your children want to get on the internet, don’t let them. It’s only a matter of time before they get sucked into a vortex of shame, drugs, and pornography from which they’ll never recover. The internet…it’s just not worth it.”
~Grand Theft Auto: Liberty City Stories

“It’s the same old devil with a new face.”
~Rev. George Bender, Harry Potter book burner

Media technology is hard to ignore. This goes beyond it being pervasive. Our complaints and fears, our fascination and optimism are mired in far greater things. It is always about something else. Media technology is not only the face of some vague cultural change but the embodiment of new forms of power that seem uncontrollable. Our lives are no longer fully our own, a constant worry in an individualistic society. With globalization, it’s as if the entire planet has become a giant company town.

I’m not one for giving in to doom and gloom about technology. That response is as old as civilization and doesn’t offer anything useful. But I’m one of the first to admit to the dire situation we are facing. It’s just that, in some sense, the situation has always been dire; the world has always been ending. We never know if this finally will be the apocalypse that has been predicted for millennia, an ending to end it all with no new beginning. One way or another, the world as we know it is ending. There probably isn’t much reason to worry about it. Whatever the future holds, it is beyond our imagining, as our present world was beyond the imagining of past generations.

One thing is clear: there is no point in getting into a moral panic over it. The young who embrace what is new always get blamed for it, even though they are simply inheriting what others have created. The youth today aren’t any worse off than any prior generation at the same age. Still, it’s possible that these younger generations might take us into a future that we old fogies won’t be able to understand. History shows how shocking innovations can be. Speaking of panics, think of Orson Welles’s radio broadcast of The War of the Worlds. The voice of radio back then had a power that we can no longer appreciate. Yet here we are, with radio being so much background noise added to the rest.

Part of what got me thinking about this were two posts by Matt Cardin, at The Teeming Brain blog. In one post, he shares some of Nathaniel Rich’s review, Roth Agonistes, of Philip Roth’s Why Write?: Collected Nonfiction 1960–2013. There is a quote from Roth in 1960:

“The American writer in the middle of the twentieth century has his hands full in trying to understand, describe, and then make credible much of American reality. It stupefies, it sickens, it infuriates, and finally it is even a kind of embarrassment to one’s own meager imagination. The actuality is continually outdoing our talents, and the culture tosses up figures almost daily that are the envy of any novelist.”

Rich comments that, “Roth, despite writing before the tumult of the Sixties, went farther, suggesting that a radically destabilized society had made it difficult to discriminate between reality and fiction. What was the point of writing or reading novels when reality was as fantastic as any fiction? Such apprehensions may seem quaint when viewed from the comic-book hellscape of 2018, though it is perversely reassuring that life in 1960 felt as berserk as it does now.”

We are no more post-truth now than back then. It’s always been this way. But it is easy to lose context. Rich notes that, “Toward the end of his career, in his novels and public statements, Roth began to prophesy the extinction of a literary culture — an age-old pastime for aging writers.” There is the ever-present fear that the strangeness and stresses of the unknown will replace the comfort of the familiar. We all grow attached to the world we experienced in childhood, as it forms the foundation of our identity. But every now and then something comes along to threaten it all. And the post-World War era was definitely a time of dramatic and, for some, traumatic change — despite all of the nostalgia that has accrued to its memories like flowers on a gravestone.

The technological world we presently live in took its first form during that earlier era. Since then, the book as an art form has come nowhere near extinction. More books have been printed in recent decades than ever before in history. New technology has oddly led us to read even more books, in both their old and new technological forms. My young niece, of the so-called Internet Generation, prefers physical books… not that she is likely to read Philip Roth. Literacy, along with education and IQ, is on the rise. There is more of everything right now, which is what makes it overwhelming. Technologies of the past, for the most part, aren’t being replaced but incorporated into a different world. This Borg-like process of assimilation might be more disturbing to the older generations than simply becoming obsolete.

The other post by Matt Cardin shares an excerpt from an NPR piece by Laura Sydell, The Father Of The Internet Sees His Invention Reflected Back Through A ‘Black Mirror’. It is about the optimism of inventors and the consequences of inventions, unforeseen except by a few. One of those who did see the long term implications was William Gibson: “The first people to embrace a technology are the first to lose the ability to see it objectively.” Maybe so, but that is true of about everyone, including most of those who don’t embrace it or who go so far as to fear it. It’s not in human nature to see much of anything objectively.

Gibson did see the immediate realities of what he coined ‘cyberspace’. We do seem to be moving in that general direction of cyberpunk dystopia, at least here in this country. I’m less certain about the even longer term developments, as Gibson’s larger vision is as fantastical as many others. But it is the immediate realities that always concern people, because they can be seen and felt, if not always acknowledged for what they are, often not even by the fear-mongers.

Like Gibson, I am more “interested in how people behave around new technologies.” In reference to “how TV changed New York City neighborhoods in the 1940s,” Gibson states that, “Fewer people sat out on the stoops at night and talked to their neighbors, and it was because everyone was inside watching television. No one really noticed it at the time as a kind of epochal event, which I think it was.”

I would make two points about this.

First, there is what I already said. It is always an epochal event when a major technology is invented, as it was for the many inventions that came before: media technologies (radio, film, the telegraph, the printing press, the bound book, etc.) but also other technologies (assembly lines, the cotton gin, the compass, etc.). Did the Chinese individual who assembled the first firework imagine the carnage of bombs that made castles easy targets and led to two world wars that transformed all of human existence? Of course not. Even the simplest of technologies can turn civilization on its head, which has happened multiple times over the past few millennia, often with destructive results.

The second point is to look at something specific, like television. It arrived along with the building of the interstate highway system, the rise of car culture, and the spread of suburbia. Television became a portal for the outside world to invade the fantasyland of home life that took hold after the war. Similar fears about radio and the telephone were transferred to the television set, and those fears were directed at the young. The first half of the 20th century was constant technological wonder and uncertainty. The social order was thrown askew.

We like to imagine the 1940s and 1950s as a happy time of social conformity and social order, a time of prosperity and a well-behaved population, but that fantasy didn’t match the reality. It was an era of growing concern about adolescent delinquency, violent crime, youth gangs, sexual deviancy, teen pregnancy, loose morals, and rock ‘n’ roll. The data bears out that a large number in that generation were caught up in the criminal system, whether because they were genuinely a bad generation or because the criminal system had become more punitive; others have argued it was merely a side effect of the baby boom, with youth making up a greater proportion of society. Whatever was involved, the sense of social angst got mixed up with lingering wartime trauma and emerging Cold War paranoia. The policing, arrest, and detention of wayward youth became a priority to the point of oppressive obsession. Besides youth problems, veterans of World War II did not come home content and happy (listen to Audible’s “The Home Front”). It was a tumultuous time, quite the opposite of the perfect world portrayed in the family sitcoms of the 1940s and 1950s.

The youth of that era had a lot in common with their grandparents, the wild and unruly Lost Generation, corrupted by family and community breakdown amid early mass immigration, urbanization, industrialization, consumerism, etc. Starting in the late 1800s, youth gangs and hooliganism became rampant, and moral panic became widespread. As romance novels had been blamed earlier and comic books would be blamed later, around the turn of the century the popular media most feared were the violent penny dreadfuls and dime novels that targeted tender young minds with portrayals of lawlessness and debauchery, or so it seemed to the moral reformers and authority figures.

It was the same old fear rearing its ugly head. This pattern has repeated on a regular basis. What new technology does is give an extra push to the swings of generational cycles. So, even as change occurs, much remains the same. For all that the cyberpunk writers got right, no one can argue that the world has been balkanized into anarcho-corporatist city-states (as in Neal Stephenson’s Snow Crash), although that sure is a plausible near future. The general point is true, though. We are a changed society. Yet the same old patterns of fear-mongering and moral panic continue. What is cyclical and what is trend is hard to differentiate as it happens; it is easier to see clearly in hindsight.

I might add that vast technological and social transformations have occurred every century for the past half millennium. The ending of feudalism was far more devastating. Much earlier, the technological advance of written text and the end of oral culture had greater consequences than even Socrates could have predicted. And it can’t be forgotten that the movable type printing press ushered in centuries of mass civil unrest, populist movements, religious wars, and revolution across numerous countries.

Our own time so far doesn’t compare, one could argue. The present relative peace and stability will continue until, maybe, World War III and climate change catastrophe force a technological realignment and restructuring of civilization. Anyway, the internet corrupting the youth and smartphones rotting away people’s brains should be the least of our worries.

Even the social media meddling that Russia is accused of in manipulating the American population is simply a continuation of techniques that predate the internet. The game has changed a bit, but nations and corporations are pretty much acting in the devious ways they always have, except that they are collecting a lot more info. Admittedly, technology does increase the effectiveness of their deviousness. But it also increases the potential methods for resisting and revolting against oppression.

I do see major changes coming. My doubts are more about how that change will happen. Modern civilization is massively dysfunctional. That we use new technologies less than optimally might have more to do with pre-existing conditions of general crappiness. For example, television, along with air conditioning, likely did contribute to people no longer sitting outside and talking to their neighbors, but an equal or greater contribution probably came from the diverse social and economic forces driving shifts in urbanization and suburbanization, with the dying of small towns and the exodus from ethnic enclaves. Though technology was mixed into these changes, we maybe give technology too much credit and blame for changes that were already in motion.

It is similar to the shift away from a biological explanation of addiction. It’s less that certain substances create uncontrollable cravings. Such destructive behavior is only possible and probable when particular conditions are set in place. There already has to be breakdown of relationships of trust and support. But rebuild those relationships and the addictive tendencies will lessen.

Similarly, there is nothing inevitable about William Gibson’s vision of the future or rather his predictions might be more based on patterns in our society than anything inherent to the technology itself. We retain the choice and responsibility to create the world we want or, failing that, to fall into self-fulfilling prophecies.

The question is what is the likelihood of our acting with conscious intention and wise forethought. All in all, self-fulfilling prophecy appears to be the most probable outcome. It is easy to be cynical, considering the track record of the present superpower that dominates the world and the present big biz corporatism that dominates the economy. Still, I hold out for the chance that conditions could shift for various reasons, altering what otherwise could be taken as near inevitable.

* * *

6/13/21 – Here is an additional thought that could be made into a new separate post, but for now we’ll leave it here as a note. There is evidence that new media technology does have an effect on thought, perception, and behavior. This is measurable in brain scans. But other research shows it even alters personality, or rather suppresses its expression. In a scientific article about testing for the Big Five personality traits, Tim Blumer and Nicola Döring offer an intriguing conclusion:

“To sum up, we conclude that for four of the five factors the data indicates a decrease of personality expression online, which is most probably due to the specification of the situational context. With regard to the trait of neuroticism, however, an additional effect occurs: The emotional stability increases on the computer and the Internet. This trend is likely, as has been described in previous studies, due to the typical features of computer-mediated communication (see Rice & Markey, 2009)” (Are we the same online? The expression of the five factor personality traits on the computer and the Internet).

This makes one wonder what it actually means. These personality tests are self-reports and so carry that bias. Still, that is useful info in indicating what people are experiencing and perceiving about themselves. It also gives some evidence of what people are expressing, even when they aren’t conscious of it, as these tests are designed with carefully phrased questions, including decoy questions.

It is quite likely that the personality is genuinely being suppressed when people engage with the internet. Online experience eliminates so many normal behavioral and biological cues (tone of voice, facial expressions, eye gaze, hand gestures, bodily posture, rate of breathing, pheromones, etc). It would be unsurprising if this induces at least mild psychosis in many people, in that individuals literally become disconnected from most aspects of normal reality and human relating. If nothing else, it would surely increase anxiety and agitation.

When online, we really aren’t ourselves. That is because we are cut off from the social mirroring that allows self-awareness. There is a theory that theory of mind and hence cognitive empathy is developed in childhood first through observing others. It’s in becoming aware that others have minds behind their behavior that we then develop a sense of our own mind as separate from others and the world.

The new media technologies remove the ability to easily sense others as actual people. This creates a strange familiarity and intimacy as, in a way, all of the internet is inside of you. What is induced can at times be akin to psychological solipsism. We’ve noticed how often people don’t seem to recognize the humanity of others online in the way they would if a living-and-breathing person were right in front of them. Most of the internet is simply words. And even pictures and videos are isolated from all real-world context and physical immediacy.

Yet it’s not clear we can make a blanket accusation about the risks of new media. Not all aspects are the same. In fact, one study indicated “a positive association between general Internet use, general use of social platforms and Facebook use, on the one hand, and self-esteem, extraversion, narcissism, life satisfaction, social support and resilience, on the other hand.” It’s not merely being on the internet that is the issue but specific platforms of interaction and how they shape human experience and behavior.

The effects varied greatly, as the researchers found: “Use of computer games was found to be negatively related to these personality and mental health variables. The use of platforms that focus more on written interaction (Twitter, Tumblr) was assumed to be negatively associated with positive mental health variables and significantly positively with depression, anxiety, and stress symptoms. In contrast, Instagram use, which focuses more on photo-sharing, correlated positively with positive mental health variables” (Julia Brailovskaia et al, What does media use reveal about personality and mental health? An exploratory investigation among German students).

This is good evidence, though the video game result is perplexing. Gaming is image-based, as is Instagram. Why does the former lead to less optimal outcomes and the latter not? It might have to do with Instagram being more socially-oriented, whereas video games can be played in isolation. Would that still be true of video games that are played with friends and/or on multi-user online worlds? Anyway, it is unsurprising that text-based social media is clearly a net loss for mental health. That would definitely fit with the theory that it’s particularly something about the disconnecting effect of words alone on a screen.

This fits our own experience even when interacting with people we’ve personally known for most or all of our lives. It’s common to write something to a family member or an old friend on email, text message, FB messenger, etc.; and, despite knowing they received it and read it, to get no response, not even a brief acknowledgement. No one in the “real world” would act that way toward “real people”. No one who wanted to maintain a relationship would stand near you while you spoke to them to their face, not look at you or say anything, and then walk away like you weren’t there. But such is the power of textual derealization.

Part of this is the newness of new media. We simply have not yet fully adapted to it. People freaked out about every media innovation that came along. And no doubt the fears and anxiety were often based on genuine concerns and direct observations. When media changes, it has a profound effect on people and society. People probably do act deranged for a period of time. It might take generations or centuries for society to settle down after each period of change. That is the problem with the modern world where we’re hit by such a vast number of innovations in such quick succession. The moment we regain our senses enough to try to stand back up again we are hit by the next onslaught.

We are in the middle of what one could call the New Media Derangement Syndrome (NMDS). It’s not merely one thing but a thousand things, combined with a total technological overhaul of society. This past century has turned all of civilization on its head. Over a few generations, most of it occurring within a single lifespan, humanity went from mostly rural communities, farm-based economy, horse-and-buggies, books, and newspapers to mass urbanization, skyscrapers, factories, trains, cars, trucks, ocean liners, airplanes and jets, rocket ships, electricity, light bulbs, telegraphs, movies, radio, television, telephones, smartphones, internet, radar, x-rays, air conditioning, etc.

The newest of new media is bringing in a whole other aspect. We are now living in not just a banana republic but inverted totalitarianism. Unlike the past, the everyday experience of our lives is more defined by corporations than by churches, communities, or governments. Think of how most people spend most of their waking hours regularly checking into various corporate media technologies, platforms, and networks; including while at work, with a smartphone next to the bed delivering instant notifications.

Think about how almost all media that Americans now consume is owned and controlled by a handful of transnational corporations. Yet not that long ago, most media was owned and operated locally by small companies, non-profit organizations, churches, etc. Most towns had multiple independently-run newspapers. Likewise, most radio shows and tv shows were locally or regionally produced in the mid-20th century. The moveable type printing press made possible the first era of mass media, but that was small-time change compared to the nationalization and globalization of mass media over the past century.

Part of NMDS is that we consumer-citizens have become commodified products. Social media and the like are not products that we are buying. No, we and our data are the product being sold. The crazification factor is how everything has become manipulated by data-gathering and algorithms. Corporations now keep larger files on American citizens than the FBI did during the height of the Cold War. New media technology is one front of the corporate war on democracy and the public good. Economics is now the dominant paradigm of everything. In her article Social Media Is a Borderline Personality Disorder, Kasia Chojecka wrote:

“Social media, as Tristan Harris said, undermined human weaknesses and contributed to what can be called a collective depression. I would say it’s more than that — a borderline personality disorder with an emotional rollercoaster, lack of trust, and instability. We can’t stay sane in this world right now, Harris said. Today the world is dominated not only by surveillance capitalism based on commodification and commercialization of personal data (Sh. Zuboff), but also by a pandemic, which caused us to shut ourselves at home and enter a full lockdown mode. We were forced to move to social media and are now doomed to instant messaging, notifications, the urge to participate. The scale of exposure to social media has grown incomparably (the percentages of growth vary — some estimate it would be circa ten percent or even several dozen percent in comparison to 2019).”

The result of this media-induced mass insanity can be seen in conspiracy theories (QAnon, Pizzagate, etc.) and related mass psychosis and mass hallucinations: Jewish lasers in space starting wildfires, global child prostitution rings that operate on moon bases, vast secret tunnel systems that connect empty Walmarts, and on and on. That paranoia, emerging from the dark corners of the web, helped launch an insurrection against the government and caused the attempted kidnappings and assassinations of politicians. Plus, it got a bizarre media personality cult elected as president. If anyone doubted the existence of NMDS in the past, it has since become the undeniable reality we all now live in.

* * *

Fear of the new - a techno panic timeline

11 Examples of Fear and Suspicion of New Technology
by Len Wilson

New communications technologies don’t come with user’s manuals. They are primitive, while old tech is refined. So critics attack. The critic’s job is easier than the practitioner’s: they score with the fearful by comparing the infancy of the new medium with the perfected medium it threatens. But of course, the practitioner wins. In the end, we always assimilate to the new technology.

“Writing is a step backward for truth.”
~Plato, c. 370 BC

“Printed books will never be the equivalent of handwritten codices.”
~Trithemius of Sponheim, 1492

“The horrible mass of books that keeps growing might lead to a fall back into barbarism.”
~Gottfried Wilhelm Leibniz, 1680

“Few students will study Homer or Virgil when they can read Tom Jones or a thousand inferior or more dangerous novels.”
~Rev. Vicesimus Knox, 1778

“The most powerful of ignorance’s weapons is the dissemination of printed matter.”
~Count Leo Tolstoy, 1869

“We will soon be nothing but transparent heaps of jelly to each other.”
~New York Times 1877 Editorial, on the advent of the telephone

“[The telegraph is] a constant diffusion of statements in snippets.”
~Spectator Magazine, 1889

“Have I done the world good, or have I added a menace?”
~Guglielmo Marconi, inventor of radio, 1920

“The cinema is little more than a fad. It’s canned drama. What audiences really want to see is flesh and blood on the stage.”
~Charlie Chaplin, 1916

“There is a world market for about five computers.”
~Thomas J. Watson, IBM Chairman and CEO, 1943

“Television won’t be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night.”
~Darryl Zanuck, 20th Century Fox CEO, 1946

Media Hysteria: An Epidemic of Panic
by Jon Katz

MEDIA HYSTERIA OCCURS when tectonic plates shift and the culture changes – whether from social changes or new technology.

It manifests itself when seemingly new fears, illnesses, or anxieties – recovered memory, chronic fatigue syndrome, alien abduction, seduction by Internet molesters, electronic theft – are described as epidemic disorders in need of urgent recognition, redress, and attention.

For those of us who live, work, message, or play in new media, this is not an abstract offshoot of the information revolution, but a topic of some urgency: We are the carriers of these contagious ideas. We bear some of the responsibility and suffer many of the consequences.

Media hysteria is part of what causes the growing unease many of us feel about the toxic interaction between technology and information.

Moral Panics Over Youth Culture and Video Games
by Kenneth A. Gagne

Several decades of the past century have been marked by forms of entertainment that were not available to the previous generation. The comic books of the Forties and Fifties, rock ‘n roll music of the Fifties, Dungeons & Dragons in the Seventies and Eighties, and video games of the Eighties and Nineties were each part of the popular culture of that era’s young people. Each of these entertainment forms, which is each a medium unto itself, have also fallen under public scrutiny, as witnessed in journalistic media such as newspapers and journals – thus creating a “moral panic.”

The Smartphone’s Impact is Nothing New
by Rabbi Jack Abramowitz

Any invention that we see as a benefit to society was once an upstart disruption to the status quo. Television was terrible because when we listened to the radio, we used our imaginations instead of being spoon-fed. Radio was terrible because families used to sit around telling stories. Moveable type was terrible because if books become available to the masses, the lower classes will become educated beyond their level. Here’s a newsflash: Socrates objected to writing! In The Phaedrus (by his disciple Plato), Socrates argues that “this discovery…will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. … (Y)ou give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”

When the Internet and the smartphone evolved, society did what we always do: we adapted. Every new technology has this effect. Do you know why songs on the radio are about 3½ minutes long? Because that’s what a 45-rpm record would hold. Despite the threat some perceived in this radical format, we adapted. (As it turns out, 45s are now a thing of the past but the pop song endures. Turns out we like 3½-minute songs!)

Is the Internet Making Us Crazy? What the New Research Says
by Tony Dokoupil

The first good, peer-reviewed research is emerging, and the picture is much gloomier than the trumpet blasts of Web utopians have allowed. The current incarnation of the Internet—portable, social, accelerated, and all-pervasive—may be making us not just dumber or lonelier but more depressed and anxious, prone to obsessive-compulsive and attention-deficit disorders, even outright psychotic. Our digitized minds can scan like those of drug addicts, and normal people are breaking down in sad and seemingly new ways. […]

And don’t kid yourself: the gap between an “Internet addict” and John Q. Public is thin to nonexistent. One of the early flags for addiction was spending more than 38 hours a week online. By that definition, we are all addicts now, many of us by Wednesday afternoon, Tuesday if it’s a busy week. Current tests for Internet addiction are qualitative, casting an uncomfortably wide net, including people who admit that yes, they are restless, secretive, or preoccupied with the Web and that they have repeatedly made unsuccessful efforts to cut back. But if this is unhealthy, it’s clear many Americans don’t want to be well. […]

The Gold brothers—Joel, a psychiatrist at New York University, and Ian, a philosopher and psychiatrist at McGill University—are investigating technology’s potential to sever people’s ties with reality, fueling hallucinations, delusions, and genuine psychosis, much as it seemed to do in the case of Jason Russell, the filmmaker behind “Kony 2012.” The idea is that online life is akin to life in the biggest city, stitched and sutured together by cables and modems, but no less mentally real—and taxing—than New York or Hong Kong. “The data clearly support the view that someone who lives in a big city is at higher risk of psychosis than someone in a small town,” Ian Gold writes via email. “If the Internet is a kind of imaginary city,” he continues. “It might have some of the same psychological impact.”

What parallels do you see between the invention of the internet – the ‘semantic web’ and the invention of the printing press?
answer by Howard Doughty

Technology, and especially the technology of communication, has tremendous consequences for human relations – social, economic and political.

Socrates raged against the written word, insisting that it was the end of philosophy which, in his view, required two or more people in direct conversation. Anything else, such as a text, was at least one step removed from the real thing and, like music and poetry which he also despised, represented a pale imitation (or bastardization) of authentic life. (Thank goodness Plato wrote it all down.)

From an oral to a written society was one thing, but as Marshall McLuhan so eruditely explained in his book, The Gutenberg Galaxy, the printing press altered fundamental cultural patterns again – making reading matter more easily available and, in the process, enabling the Protestant Reformation and its emphasis on isolated individual interpretations of whatever people imagined their god to be.

In time, the telegraph and the telephone began the destruction of space, time and letter writing, making it possible to have disembodied conversations over thousands of miles.

Don’t Touch That Dial!
by Vaughan Bell

A respected Swiss scientist, Conrad Gessner, might have been the first to raise the alarm about the effects of information overload. In a landmark book, he described how the modern world overwhelmed people with data and that this overabundance was both “confusing and harmful” to the mind. The media now echo his concerns with reports on the unprecedented risks of living in an “always on” digital environment. It’s worth noting that Gessner, for his part, never once used e-mail and was completely ignorant about computers. That’s not because he was a technophobe but because he died in 1565. His warnings referred to the seemingly unmanageable flood of information unleashed by the printing press.

Worries about information overload are as old as information itself, with each generation reimagining the dangerous impacts of technology on mind and brain. From a historical perspective, what strikes home is not the evolution of these social concerns, but their similarity from one century to the next, to the point where they arrive anew with little having changed except the label.

These concerns stretch back to the birth of literacy itself. In parallel with modern concerns about children’s overuse of technology, Socrates famously warned against writing because it would “create forgetfulness in the learners’ souls, because they will not use their memories.” He also advised that children can’t distinguish fantasy from reality, so parents should only allow them to hear wholesome allegories and not “improper” tales, lest their development go astray. The Socratic warning has been repeated many times since: The older generation warns against a new technology and bemoans that society is abandoning the “wholesome” media it grew up with, seemingly unaware that this same technology was considered to be harmful when first introduced.

Gessner’s anxieties over psychological strain arose when he set about the task of compiling an index of every available book in the 16th century, eventually published as the Bibliotheca universalis. Similar concerns arose in the 18th century, when newspapers became more common. The French statesman Malesherbes railed against the fashion for getting news from the printed page, arguing that it socially isolated readers and detracted from the spiritually uplifting group practice of getting news from the pulpit. A hundred years later, as literacy became essential and schools were widely introduced, the curmudgeons turned against education for being unnatural and a risk to mental health. An 1883 article in the weekly medical journal the Sanitarian argued that schools “exhaust the children’s brains and nervous systems with complex and multiple studies, and ruin their bodies by protracted imprisonment.” Meanwhile, excessive study was considered a leading cause of madness by the medical community.

When radio arrived, we discovered yet another scourge of the young: The wireless was accused of distracting children from reading and diminishing performance in school, both of which were now considered to be appropriate and wholesome. In 1936, the music magazine the Gramophone reported that children had “developed the habit of dividing attention between the humdrum preparation of their school assignments and the compelling excitement of the loudspeaker” and described how the radio programs were disturbing the balance of their excitable minds. The television caused widespread concern as well: Media historian Ellen Wartella has noted how “opponents voiced concerns about how television might hurt radio, conversation, reading, and the patterns of family living and result in the further vulgarization of American culture.”

Demonized Smartphones Are Just Our Latest Technological Scapegoat
by Zachary Karabell

AS IF THERE wasn’t enough angst in the world, what with the Washington soap opera, #MeToo, false nuclear alerts, and a general sense of apprehension, now we also have a growing sense of alarm about how smartphones and their applications are impacting children.

In the past days alone, The Wall Street Journal ran a long story about the “parents’ dilemma” of when to give kids a smartphone, citing tales of addiction, attention deficit disorder, social isolation, and general malaise. Said one parent, “It feels a little like trying to teach your kid how to use cocaine, but in a balanced way.” The New York Times ran a lead article in its business section titled “It’s Time for Apple to Build a Less Addictive iPhone,” echoing a rising chorus in Silicon Valley about designing products and programs that are purposely less addictive.

All of which begs the question: Are these new technologies, which are still in their infancy, harming a rising generation and eroding some basic human fabric? Is today’s concern about smartphones any different than other generations’ anxieties about new technology? Do we know enough to make any conclusions?

Alarm at the corrosive effects of new technologies is not new. Rather, it is deeply rooted in our history. In ancient Greece, Socrates cautioned that writing would undermine the ability of children and then adults to commit things to memory. The advent of the printing press in the 15th century led Church authorities to caution that the written word might undermine the Church’s ability to lead (which it did) and that rigor and knowledge would vanish once manuscripts no longer needed to be copied manually.

Now, consider this question: “Does the telephone make men more active or more lazy? Does [it] break up home life and the old practice of visiting friends?” Topical, right? In fact, it’s from a 1926 survey by the Knights of Columbus about old-fashioned landlines.

The pattern of technophobia recurred with the gramophone, the telegraph, the radio, and television. The trope that the printing press would lead to loss of memory is very much the same as the belief that the internet is destroying our ability to remember. The 1950s saw reports about children glued to screens, becoming more “aggressive and irritable as a result of over-stimulating experiences, which leads to sleepless nights and tired days.” Those screens, of course, were televisions.

Then came fears that rock-n-roll in the 1950s and 1960s would fray the bonds of family and undermine the ability of young boys and girls to become productive members of society. And warnings in the 2000s that videogames such as Grand Theft Auto would, in the words of then-Senator Hillary Rodham Clinton, “steal the innocence of our children, … making the difficult job of being a parent even harder.”

Just because these themes have played out benignly time and again does not, of course, mean that all will turn out fine this time. Information technologies from the printed book onward have transformed societies and upended pre-existing mores and social order.

Protruding Breasts! Acidic Pulp! #*@&!$% Senators! McCarthyism! Commies! Crime! And Punishment!
by R.C. Baker

In his medical practice, Wertham saw some hard cases—juvenile muggers, murderers, rapists. In Seduction, he begins with a gardening metaphor for the relationship between children and society: “If a plant fails to grow properly because attacked by a pest, only a poor gardener would look for the cause in that plant alone.” He then observes, “To send a child to a reformatory is a serious step. But many children’s-court judges do it with a light heart and a heavy calendar.” Wertham advocated a holistic approach to juvenile delinquency, but then attacked comic books as its major cause. “All comics with their words and expletives in balloons are bad for reading.” “What is the social meaning of these supermen, super women … super-ducks, super-mice, super-magicians, super-safecrackers? How did Nietzsche get into the nursery?” And although the superhero, Western, and romance comics were easily distinguishable from the crime and horror genres that emerged in the late 1940s, Wertham viewed all comics as police blotters. “[Children] know a crime comic when they see one, whatever the disguise”; Wonder Woman is a “crime comic which we have found to be one of the most harmful”; “Western comics are mostly just crime comic books in a Western setting”; “children have received a false concept of ‘love’ … they lump together ‘love, murder, and robbery.’” Some crimes are said to directly imitate scenes from comics. Many are guilty by association—millions of children read comics, ergo, criminal children are likely to have read comics. When listing brutalities, Wertham throws in such asides as, “Incidentally, I have seen children vomit over comic books.” Such anecdotes illuminate a pattern of observation without sourcing that becomes increasingly irritating. 
“There are quite a number of obscure stores where children congregate, often in back rooms, to read and buy secondhand comic books … in some parts of cities, men hang around these stores which sometimes are foci of childhood prostitution. Evidently comic books prepare the little girls well.” Are these stores located in New York? Chicago? Sheboygan? Wertham leaves us in the dark. He also claimed that powerful forces were arrayed against him because the sheer number of comic books was essential to the health of the pulp-paper manufacturers, forcing him on a “Don Quixotic enterprise … fighting not windmills, but paper mills.”

When Pac-Man Started a National “Media Panic”
by Michael Z. Newman

This moment in the history of pop culture and technology might have seemed unprecedented, as computerized gadgets were just becoming part of the fabric of everyday life in the early ‘80s. But we can recognize it as one in a predictable series of overheated reactions to new media that go back all the way to the invention of writing (which ancients thought would spell the end of memory). There is a particularly American tradition of becoming enthralled with new technologies of communication, identifying their promise of future prosperity and renewed community. It is matched by a related American tradition of freaking out about the same objects, which are also figured as threats to life as we know it.

The emergence of the railroad and the telegraph in the 19th century and of novel 20th century technologies like the telephone, radio, cinema, television, and the internet were all similarly greeted by a familiar mix of high hopes and dark fears. In Walden, published in 1854, Henry David Thoreau warned that, “we do not ride on the railroad; it rides upon us.” Technologies of both centuries were imagined to unite a vast and dispersed nation and edify citizens, but they also were suspected of trivializing daily affairs, weakening local bonds, and worse yet, exposing vulnerable children to threats and hindering their development into responsible adults.

These expressions are often a species of moral outrage known as media panic, a reaction of adults to the perceived dangers of an emerging culture popular with children, which the parental generation finds unfamiliar and threatening. Media panics recur in a dubious cycle of lathering outrage, with grownups seeming not to realize that the same excessive alarmism has arisen in every generation. Eighteenth and 19th century novels might have caused confusion to young women about the difference between fantasy and reality, and excited their passions too much. In the 1950s, rock and roll was “the devil’s music,” feared for inspiring lust and youthful rebellion, and encouraging racial mixing. Dime novels, comic books, and camera phones have all been objects of frenzied worry about “the kids these days.”

The popularity of video games in the ‘80s prompted educators, psychotherapists, local government officeholders, and media commentators to warn that young players were likely to suffer serious negative effects. The games would influence their aficionados in all the wrong ways. They would harm children’s eyes and might cause “Space Invaders Wrist” and other physical ailments. Like television, they would be addictive, like a drug. Games would inculcate violence and aggression in impressionable youngsters. Their players would do badly in school and become isolated and desensitized. A reader wrote to The New York Times to complain that video games were “cultivating a generation of mindless, ill-tempered adolescents.”

The arcades where many teenagers preferred to play video games were imagined as dens of vice, of illicit trade in drugs and sex. Kids who went to play Tempest or Donkey Kong might end up seduced by the lowlifes assumed to hang out in arcades, spiraling into lives of substance abuse, sexual depravity, and crime. Children hooked on video games might steal to feed their habit. Reports at the time claimed that video kids had vandalized cigarette machines, pocketing the quarters and leaving behind the nickels and dimes. […]

Somehow, a generation of teenagers from the 1980s managed to grow up despite the dangers, real or imagined, from video games. The new technology could not have been as powerful as its detractors or its champions imagined. It’s easy to be captivated by novelty, but it can force us to miss the cyclical nature of youth media obsessions. Every generation fastens onto something that its parents find strange, whether Elvis or Atari. In every moment in media history, intergenerational tension accompanies the emergence of new forms of culture and communication. Now we have sexting, cyberbullying, and smartphone addiction to panic about.

But while the gadgets keep changing, our ideas about youth and technology, and our concerns about young people’s development in an uncertain and ever-changing modern world, endure.

Why calling screen time ‘digital heroin’ is digital garbage
by Rachel Becker

The supposed danger of digital media made headlines over the weekend when psychotherapist Nicholas Kardaras published a story in the New York Post called “It’s ‘digital heroin’: How screens turn kids into psychotic junkies.” In the op-ed, Kardaras claims that “iPads, smartphones and XBoxes are a form of digital drug.” He stokes fears about the potential for addiction and the ubiquity of technology by referencing “hundreds of clinical studies” that show “screens increase depression, anxiety and aggression.”

We’ve seen this form of scaremongering before. People are frequently uneasy with new technology, after all. The problem is, screens and computers aren’t actually all that new. There’s already a whole generation — millennials — who grew up with computers. They appear, mostly, to be fine, selfies aside. If computers were “digital drugs,” wouldn’t we have already seen warning signs?

No matter. Kardaras opens with a little boy who was so hooked on Minecraft that his mom found him in his room in the middle of the night, in a “catatonic stupor” — his iPad lying next to him. This is an astonishing use of “catatonic,” and is almost certainly not medically correct. It’s meant to scare parents.

by Alison Gopnik

My own childhood was dominated by a powerful device that used an optical interface to transport the user to an alternate reality. I spent most of my waking hours in its grip, oblivious of the world around me. The device was, of course, the book. Over time, reading hijacked my brain, as large areas once dedicated to processing the “real” world adapted to processing the printed word. As far as I can tell, this early immersion didn’t hamper my development, but it did leave me with some illusions—my idea of romantic love surely came from novels.

English children’s books, in particular, are full of tantalizing food descriptions. At some point in my childhood, I must have read about a honeycomb tea. Augie, enchanted, agreed to accompany me to the grocery store. We returned with a jar of honeycomb, only to find that it was an inedible, waxy mess.

Many parents worry that “screen time” will impair children’s development, but recent research suggests that most of the common fears about children and screens are unfounded. (There is one exception: looking at screens that emit blue light before bed really does disrupt sleep, in people of all ages.) The American Academy of Pediatrics used to recommend strict restrictions on screen exposure. Last year, the organization examined the relevant science more thoroughly, and, as a result, changed its recommendations. The new guidelines emphasize that what matters is content and context, what children watch and with whom. Each child, after all, will have some hundred thousand hours of conscious experience before turning sixteen. Those hours can be like the marvellous ones that Augie and I spent together bee-watching, or they can be violent or mindless—and that’s true whether those hours are occupied by apps or TV or books or just by talk.

New tools have always led to panicky speculation. Socrates thought that reading and writing would have disastrous effects on memory; the novel, the telegraph, the telephone, and the television were all declared to be the End of Civilization as We Know It, particularly in the hands of the young. Part of the reason may be that adult brains require a lot of focus and effort to learn something new, while children’s brains are designed to master new environments spontaneously. Innovative technologies always seem distracting and disturbing to the adults attempting to master them, and transparent and obvious—not really technology at all—to those, like Augie, who encounter them as children.

The misguided moral panic over Slender Man
by Adam Possamai

Sociologists argue that rather than simply being created stories, urban legends represent the fears and anxieties of the current time, and in this instance, internet culture is offering a global and more participatory platform for the story creation process.

New technology is also allowing urban legends to be transmitted at a faster pace than before the invention of the printing press, and giving more people the opportunity to shape folk stories that blur the line between fiction and reality. Commonly, these stories take on a life of their own and become completely independent from what the original creator wanted to achieve.

Yet if we were to listen to social commentary, this change in the story creation process is opening the door to deviant acts.

Last century, people were already anxious about children accessing VHS and Betamax tapes and being exposed to violence and immorality. We are now likely to face a similar moral panic with regards to the internet.

Sleepwalking Through Our Dreams

In The Secret Life of Puppets, Victoria Nelson makes some useful observations about reading addiction, specifically in terms of formulaic genres. She discusses Sigmund Freud’s repetition compulsion and Lenore Terr’s post-traumatic games. She sees genre reading as a ritual-like enactment that can’t lead to resolution, and so the addictive behavior becomes entrenched. This would apply to many other forms of entertainment and consumption. And it fits into Derrick Jensen’s discussion of abuse, trauma, and the victimization cycle.

I would broaden her argument in another way. People have feared the written text ever since it was invented. In the 18th century, there took hold a moral panic about reading addiction in general, and that was before any fiction genres had developed (Frank Furedi, The Media’s First Moral Panic; full text available at the Wayback Machine). The written word is unchanging and so creates the conditions for repetition compulsion. Every time a text is read, it is the exact same text.

That is far different from oral societies. And it is quite telling that oral societies have a much more fluid sense of self. The Piraha, for example, don’t cling to their sense of self nor that of others. When a Piraha individual is possessed by a spirit or meets a spirit who gives them a new name, the self that was there is no longer there. When asked where that person is, the Piraha will say that he or she isn’t there, even though the individual’s body is standing right there in front of them. They also don’t have a storytelling tradition or concern for the past.

Another thing that the Piraha apparently lack is mental illness, specifically depression along with suicidal tendencies. According to Barbara Ehrenreich from Dancing in the Streets, there wasn’t much written about depression even in the Western world until the suppression of religious and public festivities, such as Carnival. One of the most important aspects of Carnival and similar festivities was the masking, shifting, and reversal of social identities. Along with this, there was the losing of individuality within the group. And during the Middle Ages, an amazing number of days in the year were dedicated to communal celebrations. The ending of this era coincided with numerous societal changes, including the increase of literacy with the spread of the movable type printing press.

The Media’s First Moral Panic
by Frank Furedi

When cultural commentators lament the decline of the habit of reading books, it is difficult to imagine that back in the 18th century many prominent voices were concerned about the threat posed by people reading too much. A dangerous disease appeared to afflict the young, which some diagnosed as reading addiction and others as reading rage, reading fever, reading mania or reading lust. Throughout Europe reports circulated about the outbreak of what was described as an epidemic of reading. The behaviours associated with this supposedly insidious contagion were sensation-seeking and morally dissolute and promiscuous behaviour. Even acts of self-destruction were associated with this new craze for the reading of novels.

What some described as a craze was actually a rise in the 18th century of an ideal: the ‘love of reading’. The emergence of this new phenomenon was largely due to the growing popularity of a new literary genre: the novel. The emergence of commercial publishing in the 18th century and the growth of an ever-widening constituency of readers was not welcomed by everyone. Many cultural commentators were apprehensive about the impact of this new medium on individual behaviour and on society’s moral order.

With the growing popularity of novel reading, the age of the mass media had arrived. Novels such as Samuel Richardson’s Pamela, or Virtue Rewarded (1740) and Rousseau’s Julie, or the New Heloise (1761) became literary sensations that gripped the imagination of their European readers. What was described as ‘Pamela-fever’ indicated the powerful influence novels could exercise on the imagination of the reading public. Public deliberation on these ‘fevers’ focused on what was a potentially dangerous development, which was the forging of an intense and intimate interaction between the reader and literary characters. The consensus that emerged was that unrestrained exposure to fiction led readers to lose touch with reality and identify with the novel’s romantic characters to the point of adopting their behaviour. The passionate enthusiasm with which European youth responded to the publication of Johann Wolfgang von Goethe’s novel The Sorrows of Young Werther (1774) appeared to confirm this consensus. […]

What our exploration of the narrative of Werther fever suggests is that it acquired a life of its own to the point that it mutated into a taken-for-granted rhetorical idiom, which accounted for the moral problems facing society. Warnings about an epidemic of suicide said more about the anxieties of their authors than the behaviour of the readers of the novels. An inspection of the literature circulating these warnings indicates a striking absence of empirical evidence. The constant allusion to Miss. G., to nameless victims and to similarly framed death scenes suggests that these reports had little factual content to draw on. Stories about an epidemic of suicide were as fictional as the demise of Werther in Goethe’s novel.

It is, however, likely that readers of Werther were influenced by the controversy surrounding the novel. Goethe himself was affected by it and in his autobiography lamented that so many of his readers felt called upon to ‘re-enact the novel, and possibly shoot themselves’. Yet, despite the sanctimonious scaremongering, it continued to attract a large readership. While there is no evidence that Werther was responsible for the promotion of a wave of copycat suicides, it evidently succeeded in inspiring a generation of young readers. The emergence of what today would be described as a cult of fans with some of the trappings of a youth subculture is testimony to the novel’s powerful appeal.

The association of the novel with the disorganisation of the moral order represented an early example of a media panic. The formidable, sensational and often improbable effects attributed to the consequences of reading in the 18th century provided the cultural resources on which subsequent reactions to the cinema, television or the Internet would draw. In that sense Werther fever anticipated the media panics of the future.

Curiously, the passage of time has not entirely undermined the association of Werther fever with an epidemic of suicide. In 1974 the American sociologist David Phillips coined the term the ‘Werther Effect’ to describe media-stimulated imitation of suicidal behaviour. But the durability of the Werther myth notwithstanding, contemporary media panics are rarely focused on novels. In the 21st century the simplistic cause and effect model of the ‘Werther Effect’ is more likely to be expressed through moral anxieties about the danger of cybersuicide, copycat online suicide.

The Better Angels of Our Nature
by Steven Pinker
Kindle Locations 13125-13143
(see To Imagine and Understand)

It would be surprising if fictional experiences didn’t have similar effects to real ones, because people often blur the two in their memories. 65 And a few experiments do suggest that fiction can expand sympathy. One of Batson’s radio-show experiments included an interview with a heroin addict who the students had been told was either a real person or an actor. 66 The listeners who were asked to take his point of view became more sympathetic to heroin addicts in general, even when the speaker was fictitious (though the increase was greater when they thought he was real). And in the hands of a skilled narrator, a fictitious victim can elicit even more sympathy than a real one. In his book The Moral Laboratory, the literary scholar Jèmeljan Hakemulder reports experiments in which participants read similar facts about the plight of Algerian women through the eyes of the protagonist in Malika Mokeddem’s novel The Displaced or from Jan Goodwin’s nonfiction exposé Price of Honor. 67 The participants who read the novel became more sympathetic to Algerian women than those who read the true-life account; they were less likely, for example, to blow off the women’s predicament as a part of their cultural and religious heritage. These experiments give us some reason to believe that the chronology of the Humanitarian Revolution, in which popular novels preceded historical reform, may not have been entirely coincidental: exercises in perspective-taking do help to expand people’s circle of sympathy.

The science of empathy has shown that sympathy can promote genuine altruism, and that it can be extended to new classes of people when a beholder takes the perspective of a member of that class, even a fictitious one. The research gives teeth to the speculation that humanitarian reforms are driven in part by an enhanced sensitivity to the experiences of living things and a genuine desire to relieve their suffering. And as such, the cognitive process of perspective-taking and the emotion of sympathy must figure in the explanation for many historical reductions in violence. They include institutionalized violence such as cruel punishments, slavery, and frivolous executions; the everyday abuse of vulnerable populations such as women, children, homosexuals, racial minorities, and animals; and the waging of wars, conquests, and ethnic cleansings with a callousness to their human costs.

Innocent Weapons:
The Soviet and American Politics of Childhood in the Cold War

by Margaret E. Peacock
pp. 88-89

As a part of their concern over American materialism, politicians and members of the American public turned their attention to the rising influence of media and popular culture upon the next generation.69 Concerns over uncontrolled media were not new in the United States in the 1950s. They had a way of erupting whenever popular culture underwent changes that seemed to differentiate the generations. This was the case during the silent film craze of the 1920s and when the popularity of dime novels took off in the 1930s.70 Yet, for many in the postwar era, the press, the radio, and the television presented threats to children that the country had never seen before. As members of Congress from across the political spectrum would argue throughout the 1950s, the media had the potential to present a negative image of the United States abroad, and it ran the risk of corrupting the minds of the young at a time when shoring up national patriotism and maintaining domestic order were more important than ever. The impact of media on children was the subject of Fredric Wertham’s 1953 best-selling book Seduction of the Innocent, in which he chronicled his efforts over the course of three years to “trace some of the roots of the modern mass delinquency.”71 Wertham’s sensationalist book documented case after case of child delinquents who seemed to be mimicking actions that they had seen on the television or, in particular, in comic strips. Horror comics, which were popular from 1948 until 1954, showed images of children killing their parents and peers, sometimes in gruesome ways—framing them for murder—being cunning and devious, even cannibalistic. 
A commonly cited story was that of “Bloody Mary,” published by Farrell Comics, which told the story of a seven-year-old girl who strangles her mother, sends her father to the electric chair for the murder, and then kills a psychiatrist who has learned that the girl committed these murders and that she is actually a dwarf in disguise.72 Wertham’s crusade against horror comics was quickly joined by two Senate subcommittees in 1954, at the heads of which sat Estes Kefauver and Robert Hendrickson. They argued to their colleagues that the violence and destruction of the family in these comic books symbolized “a terrible twilight zone between sanity and madness.”73 They contended that children found in these comic books violent models of behavior and that they would otherwise be law abiding. J. Edgar Hoover chimed in to comment that “a comic which makes lawlessness attractive . . . may influence the susceptible boy or girl.”74

Such depictions carried two layers of threat. First, as Wertham, Hoover, and Kefauver argued, they reflected the seeming potential of modern media to transform “average” children into delinquents.75 Alex Drier, popular NBC newscaster, argued in May 1954 that “this continuous flow of filth [is] so corruptive in its effects that it has actually obliterated decent instincts in many of our children.”76 Yet perhaps more telling, the comics, as well as the heated response that they elicited, also reflected larger anxieties about what identities children should assume in contemporary America. As in the case of Bloody Mary, these comics presented an image of apparently sweet youths who were in fact driven by violent impulses and were not children at all. “How can we expose our children to this and then expect them to run the country when we are gone?” an agitated Hendrickson asked his colleagues in 1954.77 Bloody Mary, like the uneducated dolts of the Litchfield report and the spoiled boys of Wylie’s conjuring, presented an alternative identity for American youth that seemed to embody a new and dangerous future.

In the early months of 1954, Robert Hendrickson argued to his colleagues that “the strained international and domestic situation makes it impossible for young people of today to look forward with certainty to higher education, to entering a trade or business, to plans for marriage, a home, and family. . . . Neither the media, nor modern consumerism, nor the threat from outside our borders creates a problem child. But they do add to insecurity, to loneliness, to fear.”78 For Hendrickson these domestic trends, along with what he called “deficient adults,” seemed to have created a new population of troubled and victimized children who were “beyond the pale of our society.”79

The End of Victory Culture:
Cold War America and the Disillusioning of a Generation

by Tom Engelhardt
Kindle Locations 2872-2910

WORRY, BORDERING ON HYSTERIA, about the endangering behaviors of “youth” has had a long history in America, as has the desire of reformers and censors to save “innocent” children from the polluting effects of commercial culture. At the turn of the century, when middle-class white adolescents first began to take their place as leisure-time trendsetters, fears arose that the syncopated beat of popular “coon songs” and ragtime music would demonically possess young listeners, who might succumb to the “evils of the Negro soul.” Similarly, on-screen images of crime, sensuality, and violence in the earliest movies, showing in “nickel houses” run by a “horde of foreigners,” were decried by reformers. They were not just “unfit for children’s eyes,” but a “disease” especially virulent to young (and poor) Americans, who were assumed to lack all immunity to such spectacles. 1 […]

To many adults, a teen culture beyond parental oversight had a remarkably alien look to it. In venues ranging from the press to Senate committees, from the American Psychiatric Association to American Legion meetings, sensational and cartoonlike horror stories about the young or the cultural products they were absorbing were told. Tabloid newspaper headlines reflected this: “Two Teen Thrill Killings Climax City Park Orgies. Teen Age Killers Pose a Mystery— Why Did They Do It?… 22 Juveniles Held in Gang War. Teen Age Mob Rips up BMT Train. Congressmen Stoned, Cops Hunt Teen Gang.” After a visit to the movies in 1957 to watch two “teenpics,” Rock All Night and Dragstrip Girl, Ruth Thomas of Newport, Rhode Island’s Citizen’s Committee on Literature expressed her shock in words at least as lurid as those of any tabloid: “Isn’t it a form of brain-washing? Brain-washing the minds of the people and especially the youth of our nation in filth and sadistic violence. What enemy technique could better lower patriotism and national morale than the constant presentation of crime and horror both as news and recreation.” 3

You did not have to be a censor, a right-wing anti-Communist, or a member of the Catholic Church’s Legion of Decency, however, to hold such views. Dr. Frederick Wertham, a liberal psychiatrist, who testified in the landmark Brown v. Board of Education desegregation case and set up one of the first psychiatric clinics in Harlem, publicized the idea that children viewing commercially produced acts of violence and depravity, particularly in comic books, could be transformed into little monsters. The lurid title of his best-selling book, Seduction of the Innocent, an assault on comic books as “primers for crime,” told it all. In it, Dr. Wertham offered copious “horror stories” that read like material from Tales from the Crypt: “Three boys, six to eight years old, took a boy of seven, hanged him nude from a tree, his hands tied behind him, then burned him with matches. Probation officers investigating found that they were re-enacting a comic-book plot.… A boy of thirteen committed a lust murder of a girl of six. After his arrest, in jail, he asked for comicbooks” 4

Kindle Locations 2927-2937

The two— hood and performer, lower-class white and taboo black— merged in the “pelvis” of a Southern “greaser” who dressed like a delinquent, used “one of black America’s favorite products, Royal Crown Pomade hair grease” (meant to give hair a “whiter” look), and proceeded to move and sing “like a negro.” Whether it was because they saw a white youth in blackface or a black youth in whiteface, much of the media grew apoplectic and many white parents alarmed. In the meantime, swiveling his hips and playing suggestively with the microphone, Elvis Presley broke into the lives of millions of teens in 1956, bringing with him an element of disorder and sexuality associated with darkness. 6†

The second set of postwar fears involved the “freedom” of the commercial media— record and comic book companies, radio stations, the movies, and television— to concretize both the fantasies of the young and the nightmarish fears of grown-ups into potent products. For many adults, this was abundance as betrayal, the good life not as a vision of Eden but as an unexpected horror story.

Kindle Locations 2952-2979

Take comic books. Even before the end of World War II, a new kind of content was creeping into them as they became the reading matter of choice for the soldier-adolescent. […] Within a few years, “crime” comics like Crime Does Not Pay emerged from the shadows, displaying a wide variety of criminal acts for the delectation of young readers. These were followed by horror and science fiction comics, purchased in enormous numbers. By 1953, more than 150 horror comics were being produced monthly, featuring acts of torture often of an implicitly sexual nature, murders and decapitations of various bloody sorts, visions of rotting flesh, and so on. 9

Miniature catalogs of atrocities, their feel was distinctly assaultive. In their particular version of the spectacle of slaughter, they targeted the American family, the good life, and revered institutions. Framed by sardonic detective narrators or mocking Grand Guignol gatekeepers, their impact was deconstructive. Driven by a commercial “hysteria” as they competed to attract buyers with increasingly atrocity-ridden covers and stories, they both partook of and mocked the hysteria about them. Unlike radio or television producers, the small publishers of the comic book business were neither advertiser driven nor corporately controlled.

Unlike the movies, comics were subject to no code. Unlike the television networks, comics companies had no Standards and Practices departments. No censoring presence stood between them and whoever would hand over a dime at a local newsstand. Their penny-ante ads and pathetic pay scale ensured that writing and illustrating them would be a job for young men in their twenties (or even teens). Other than early rock and roll, comics were the only cultural form of the period largely created by the young for those only slightly younger. In them, uncensored, can be detected the dismantling voice of a generation that had seen in the world war horrors beyond measure.

The hysterical tone of the response to these comics was remarkable. Comics publishers were denounced for conspiring to create a delinquent nation. Across the country, there were publicized comic book burnings like one in Binghamton, New York, where 500 students were dismissed from school early in order to torch 2,000 comics and magazines. Municipalities passed ordinances prohibiting the sale of comics, and thirteen states passed legislation to control their publication, distribution, or sale. Newspapers and magazines attacked the comics industry. The Hartford Courant decried “the filthy stream that flows from the gold-plated sewers of New York.” In April 1954, the Senate Subcommittee to Investigate Juvenile Delinquency convened in New York to look into links between comics and teen crime. 10

Kindle Locations 3209-3238

If sponsors and programmers recognized the child as an independent taste center, the sight of children glued to the TV, reveling in their own private communion with the promise of America, proved unsettling to some adults. The struggle to control the set, the seemingly trancelike quality of TV time, the soaring number of hours spent watching, could leave a parent feeling challenged by some hard-to-define force released into the home under the aegis of abundance, and the watching child could gain the look of possession, emptiness, or zombification.

Fears of TV’s deleterious effects on the child were soon widespread. The medical community even discovered appropriate new childhood illnesses. There was “TV squint” or eyestrain, “TV bottom,” “bad feet” (from TV-induced inactivity), “frogitis” (from a viewing position that put too much strain on inner-leg ligaments), “TV tummy” (from TV-induced overexcitement), “TV jaw” or “television malocclusion” (from watching while resting on one’s knuckles, said to force the eyeteeth inward), and “tired child syndrome” (chronic fatigue, loss of appetite, headaches, and vomiting induced by excessive viewing).

However, television’s threat to the child was more commonly imagined to lie in the “violence” of its programming. Access to this “violence” and the sheer number of hours spent in front of the set made the idea that this new invention was acting in loco parentis seem chilling to some; and it was true that via westerns, crime shows, war and spy dramas, and Cold War-inspired cartoons TV was indiscriminately mixing a tamed version of the war story with invasive Cold War fears. Now, children could endlessly experience the thrill of being behind the barrel of a gun. Whether through the Atom Squad’s three government agents, Captain Midnight and his Secret Squadron, various FBI men, cowboys, or detectives, they could also encounter “an array of H-bomb scares, mad Red scientists, [and] plots to rule the world,” as well as an increasing level of murder and mayhem that extended from the six-gun frontier of the “adult” western to the blazing machine guns of the crime show. 30

Critics, educators, and worried parents soon began compiling TV body counts as if the statistics of victory were being turned on young Americans. “Frank Orme, an independent TV watchdog, made a study of Los Angeles television in 1952 and noted, in one week, 167 murders, 112 justifiable homicides, and 356 attempted murders. Two-thirds of all the violence he found occurred in children’s shows. In 1954, Orme said violence on kids’ shows had increased 400 percent since he made his first report.” PTAs organized against TV violence, and Senate hearings searched for links between TV programming and juvenile delinquency.

Such “violence,” though, was popular. In addition, competition for audiences among the three networks had the effect of ratcheting up the pressures for violence, just as it had among the producers of horror comics. At The Untouchables, a 1960 hit series in which Treasury agent Eliot Ness took on Chicago’s gangland (and weekly reached 5-8 million young viewers), ABC executives would push hard for more “action.” Producer Quinn Martin would then demand the same of his subordinates, “or we are all going to get clobbered.” In a memo to one of the show’s writers, he asked: “I wish you would come up with a different device than running the man down with a car, as we have done this now in three different shows. I like the idea of sadism, but I hope we can come up with another approach to it.” 31

Moral Flynn Effect?

What is causing the IQ increase over the generations?

It’s an important question, as the rise hasn’t been minor. I’m amazed every time I consider that the average IQ of just a few generations ago would, by comparison to the present, be considered extremely low intelligence, functionally retarded even. If an average person today took an IQ test designed early last century, their results, relative to the original norms, would show them as being quite brilliant.
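The cumulative size of this shift can be roughly quantified. The following is a minimal sketch, assuming the commonly cited average gain of about three IQ points per decade (Flynn’s rough figure for Western countries; actual rates vary by country, era, and test type, and the function name is mine):

```python
# Rough illustration of the Flynn effect's cumulative size.
# Assumes a steady average gain of ~3 IQ points per decade,
# the commonly cited figure; real rates vary by country and era.

GAIN_PER_DECADE = 3.0  # assumed average gain, in IQ points

def score_on_old_test(current_score: float, test_age_years: float) -> float:
    """Approximate the score a present-day test-taker would get on a
    test normed `test_age_years` ago, since the old norms were set
    against a lower-performing population."""
    return current_score + GAIN_PER_DECADE * (test_age_years / 10)

# A person scoring an average 100 today, taking a test normed 80 years ago:
print(score_on_old_test(100, 80))  # → 124.0, well above average by the old norms
```

Under that assumption, an average scorer today lands more than a standard deviation above the mean of a test normed eighty years earlier, which is the point being made above.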

It makes one wonder what is measured by IQ tests.

This IQ increase is called the Flynn Effect. It was named after James Flynn, who wrote a number of papers about it based on his international and cross-generational observations of test data, although Richard Lynn first observed it on a more limited scale in the Japanese population.

The Flynn Effect has been seen in both crystallized and fluid intelligence. The former is basically learned intelligence. This shows what you know and how well you are able to use it. The latter is more about how you are able to think, specifically abstract thinking and non-verbal problem-solving. It is the ability to deal with new and unique problems.

(As a side note, I realized how this applies to my own cognitive abilities. When I was younger, I was delayed in my crystallized intelligence and precocious in my fluid intelligence. I was so delayed in the one that teachers initially thought I might have been retarded, but IQ testing showed that I measured high in pattern recognition and puzzle-solving. My strengths helped me compensate for my weaknesses. But if it had been reversed, compensation would have been much more challenging.)

The greatest and most consistent IQ increases have been measured in fluid intelligence. No one knows exactly why, but explanations are diverse. Flynn sees it primarily as an increase in abstract thinking in line with the demands of modern industrialized society with all of its complexities: infrastructures, social systems, economies, technologies, visual media, video games, etc. Flynn points out that rural people even just a century ago didn’t demonstrate much predilection for abstractions (see Luria’s interviews with isolated rural Russians). With a different focus, others propose that the main change has been in health standards and environmental conditions, which have allowed greater brain development.

The reasons interest me less at the moment. I wanted to note that the changes seen across the generations are quite real and significant, whatever they might mean. They are also continuing in many countries, including the United States, although the pattern doesn’t hold everywhere. We Americans haven’t yet hit the ceiling of IQ limits, and that applies to all demographic groups, although those on the lower end of the scale are rising faster and hence the IQ disparities are shrinking.

So, about this trend, what does it represent? Where is it heading?

There are some correlations that I find intriguing. Higher average IQ correlates with greater liberal-mindedness. Many studies have shown this. It seems related to correlations found between other cognitive abilities and predispositions: openness to experience, thin boundaries, fantasy proneness, creativity, empathy, emotional sensitivity, social awareness, etc.

This probably connects to fluid intelligence, the ability to deal with new, unique, and unusual situations and problems. I’ve pointed out before that the strength and weakness of liberalism is its emphasis on abstractions, both critical thinking and wide-ranging empathy being dependent on this. There is a psychological fluidity with liberalism that appears to be linked to cognitive and intellectual fluidity. I’ve also noted this may be the reason that research has shown it easier to shift a liberal into a conservative mindset than a conservative into a liberal mindset. Liberals easily fall prey to contact highs, both psychological and ideological.

Unsurprisingly, liberalism (in particular, social liberalism) has increased in unison with rising IQ. Social democracy, too, has spread and become more dominant following the wide-scale availability of knowledge through movable type printing presses, mass publishing, public libraries, public education, etc.; and it is likely to spread further as these contributing factors are magnified by the internet and various new media technologies. Others have observed that the Axial Age began and came to fruition because of the development and popularization of alphabetic writing, scrolls and then bound books, and the formation of libraries. That beginning, uneven and shaky, took fuller hold during the Enlightenment and fuller still with industrialization.

Steven Pinker has made the argument that this corresponds to an impressive decrease of violence per capita across the centuries. This is what is called the “moral Flynn effect.” It’s not just an improvement of social and health conditions, but an actual change at the level of psychological and cognitive functioning, at least so the theory goes.

Fluid intelligence isn’t just about cold analysis, dry logic, and intellectual problem-solving. It’s more importantly about seeing patterns and connections and the ability to shift perspectives, such as ideological worldviews, ethnic cultures, and personal experiences. It’s not just abstract thinking, but it definitely involves abstract thinking. To empathize with someone far different from you requires an abstract capacity for universalizing human nature and seeking commonality in human experience. There is no way to go from concrete thinking to such inclusive extremes of empathy, no way to go from the known of one’s own experience into the unknown of imagining other viewpoints.

You can see this mindset having struggled to take hold during the Enlightenment and early modern revolutionary era, and even well into the 19th century. One of the greatest debates at that time, including among the American founders, was whether all humans had a basic human nature. Did all people, even peasants and slaves, have a common experience of self-awareness, thought, and feeling? Did all people feel pain and suffering, desire happiness and freedom? Were all humans really the same on some fundamental level or were some populations more like animals?

These seem like silly questions to many modern people in modernized societies, but that wasn’t always the case. It has only been over this past century that psychological understanding has become common, and this has been in concert with scientific thought becoming more widespread, the two being inextricably connected. To see the world through a stranger’s eyes requires a quite complex process of cognitive ability. It has to be learned and developed. No one is simply born with this capacity.

It’s amazing that we have advanced so far that we now take so much of this for granted. Still, we have much further to go. It does get me to wondering. Will we reach a tipping point when the American or global population reaches a certain level of IQ and education, specifically in terms of increasing capacity for complex thought and perspective-taking? The average American today is smarter and better educated than was the ruling elite of centuries ago. If you think the present generations of Americans are stupid, you should have seen their ancestors before most of the population was educated and literate.

On the other hand, some worry that increased abstract thought is causing a loss of concrete thought. But I doubt it is a zero-sum game. In the manner of “transcend and include,” abstract thought builds upon concrete thought more than it replaces it. That combining of cognitive abilities is what allows for ever more complex thought. At least, that is what I hope is the case. We are presently undergoing a massive social experiment to test this hypothesis.

* * *

See:

Are We Becoming Morally Smarter?
The connection between increasing IQs, decreasing violence, and economic liberalism
by Michael Shermer

Swords into Syllogisms
by Randal R. Hendrickson

Losses Outweighing Gains

I just finished reading Come Home, America: The Rise and Fall (and Redeeming Promise) of Our Country by William Greider. It’s not an awesome book, but it’s well above average. Greider is a liberal who seeks to reform the system rather than revolutionize it. He is only radical in his optimism about democracy.

What I liked about the book is that the author wasn’t afraid to dig down into the foundation of our collective problems. One particular passage, which can be found below, demonstrates this. In it, he talks about the economy and the environment, focusing on externalized costs, moral hazard, and free riders.

If you’re an informed person, most of this won’t be new to you. But it is always nice to find a clear-eyed accounting of the problems we face. I did learn one new thing from this passage. He discusses the work done by Herman E. Daly and John B. Cobb Jr., as presented in their book, For The Common Good: Redirecting the Economy toward Community, the Environment, and a Sustainable Future. Basically, if all the costs are accounted for and measured, the US economy is a net loss. That isn’t just unsustainable but self-destructive.
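
To make the accounting logic concrete, here is a minimal sketch in Python. The numbers are entirely made up for illustration; Daly and Cobb’s actual Index of Sustainable Economic Welfare draws on national accounts data and includes many more adjustments (inequality weighting, household labor, and so on). The point is only the shape of the calculation: GDP-style accounting counts all activity as gain, while a welfare index subtracts the costs GDP ignores.

```python
# Toy sketch of the idea behind an ISEW-style welfare index.
# All figures are invented placeholders, not real data.

def toy_welfare_index(consumption, public_services, defensive_costs,
                      environmental_damage, natural_capital_depletion):
    """Welfare gains minus the costs that conventional GDP ignores."""
    gains = consumption + public_services
    losses = defensive_costs + environmental_damage + natural_capital_depletion
    return gains - losses

# In GDP-style accounting, every category adds to "output",
# even cleanup spending and resource drawdown.
gdp_style = 100 + 20 + 15 + 25 + 30

welfare_style = toy_welfare_index(consumption=100, public_services=20,
                                  defensive_costs=15,
                                  environmental_damage=25,
                                  natural_capital_depletion=30)

print(gdp_style)      # 190: the economy appears to be booming
print(welfare_style)  # 50: far smaller once real costs are subtracted
```

If the loss categories grow faster than consumption, the welfare index can go negative even while GDP rises, which is the gist of Daly and Cobb’s finding for the US economy.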

There was a good review (by David E. Rockett) of Daly and Cobb’s book:

“Agrarian Localist that I am, with roots in the cultural and political Right — Daly was refreshing and often challenging from the ‘New and Improved Left. He brilliantly and repeatedly shows the ‘fallacy of misplaced concreteness’– that is the dubious use of logical abstractions which supposedly lead to good conclusions. NOT! In logic, it is similar to ‘the undistributed middle’– or in laymen’s terms — there is yet far too much we simply don’t know to conclude ‘this’. Those pegging him a traditional UN Internationalists look like blind Libertarians who are simply dead wrong, and didn’t read carefully. Daly is a modest Decentralists/Federalists’ in calling for a ‘return to the Local’. His call is for a federalism with far more attention to Local and Regional markets and development than we’ve had in this country since Lincoln. Yet Daly still uncomfortablly allows for some heiarchialism at national and international levels. Suprisingly, he uncritically buys all the status-quo environmental hysteria as ‘Fact’, indeed ‘wild facts’ he calls them. Thus, you have a mixd book — full of brilliant and insightful critique — and sullied by a good bit of carried-over authoritarian leftism.”

I just wanted to share that to show that this isn’t a right vs. left issue. Even someone like this reviewer, who doesn’t appreciate the actual data about environmental damage, can still understand the basic problems facing human society.

This reviewer gets at an important point with his mention of the ‘fallacy of misplaced concreteness’. This is essentially Burkean conservatism being used to criticize laissez-faire capitalism as an ideology lost in abstractions and ungrounded in reality as it is. It’s interesting how closely aligned Burkean conservatism is to the precautionary principle that so many liberals obsess about.

Now for the passage from Greider’s Come Home, America.

* * * *

(Kindle Locations 2636-2707)

It is not people who have failed. It is the system that has failed people. The awkward secret about the American growth engine is that it thrives by wantonly wasting the noneconomic “assets” of people’s lives, the lost potential of their time on earth. These are “priceless” because they cannot be bought or sold. Their true value is unknowable, even to the individual. The growth engine wastes the future: the full range of possible experiences that ought to be commonly available in this very wealthy nation. When people look around, they see that vital elements of their surroundings have also disappeared or crumbled, and these things are not just the roads and bridges, but also essential public assets like the grace notes of community life and common verities trashed by the manic competition for growth and profit. Why must we live like this if we are so rich?

Ecologists look at the natural world and ask essentially the same question. The most dramatic and threatening price paid for economic growth, they argue, is the systematic destruction of nature. The finite capacity of the natural world to sustain life, human and otherwise, is endangered by the relentless encroachment of the industrial system. This threat means that there are indeed limits to growth, at least to growth as it is presently practiced. To put the point crudely, you can only pave over so much of the earth before nature begins to lose its life-supporting capabilities. Most mainstream economists dismissed this idea originally, but now they are more respectful, since global warming has provided frightening evidence of the collision between nature and the industrial system.

Images of the Arctic ice cap receding and polar bears stranded on floes have been seared onto our consciousness. Oceans and mountains, topsoil and rain forests, water, land, air, and the natural diversity of living things have all been laid waste. These are the things that sustain existence for all species, including ours. There are limits to growth and the world is bumping up against them, especially now that industrialization has spread to some very poor societies. Like wasted human lives, the losses to nature are not factored into the economic accounting, nor are they redeemable.

It may seem odd to accuse US business and finance of wastefulness since they are obsessed with efficiency. But the intense competition for returns among companies and investors focuses their managements on reducing their own companies’ costs, not the costs to society or nature. The gross domestic product is essentially the total of all of a country’s economic activities: the money transactions of producers and consumers and the performance and profits of enterprises and investors. Everything else is left out: human lives, society’s needs and values, the well-being of nature. In this system, even obviously negative events (a train wreck or an earthquake) are treated as positive since they will stimulate more economic activity.

Worse than that, the growth engine actively damages anything it does not itself value. Companies know how to enhance their own growth and profit by dumping their production costs onto innocent others, such as the workers stripped of their pensions and the rivers destroyed by pollution. Then the government must clean up the human injuries and environmental wreckage left behind. Some of the collateral damage businesses cause is no doubt accidental, but most of it is deliberate. In numerous ways, companies develop careful strategies for extracting profit from the assets of others. Someone does pay eventually for this antisocial wastefulness, but usually it is not the perpetrators who gained wealth from their irresponsibility.

This is more than an accounting problem. It is a deep disorder in the values that govern our country. The economy keeps output (production and consumption) expanding as measured in dollars. But the process of growth simultaneously creates a political illusion by concealing the net negative loss to society. Politicians do not have to face this contradiction since the government conveniently does not look at growth in these terms.

Herman E. Daly, a rare economist who endeavors to see the world whole, set out to unmask the illusion. He calculated the full consequences of growth by combining various indicators of social and ecological gains and losses with the standard economic measures of output and wealth creation. Daly and his collaborator, John B. Cobb Jr., called it the Index of Sustainable Economic Welfare, and its stunning results were a rebuke to narrow-minded economics.

In For the Common Good, Daly and Cobb explained that for many years, although US growth had been officially reported as positive, it was actually negative for the overall society when these other factors were included. Economists quarreled with Daly’s method of calculation, but other researchers have since confirmed his point by using different ways of weighing the losses and gains.5

[5. Among the controversial assumptions Daly made in constructing the Index of Sustainable Economic Welfare was that a measure of income inequality should be included. The premise was that society’s well-being degrades as inequality increases, a point that only hard-right conservatives would dispute today. The book was revised and the index refined in a second edition published in 1994.]

The great American economic icon, gross domestic product, is in trouble, especially as more Americans discover the truths that ecologists have been explaining for years. The familiar measure for defining progress no longer makes much sense, not for people or for society. Progress as it has been traditionally defined feels dizzying instead, like running in place, faster and faster, without getting anywhere, and even sliding backward without realizing it. Americans need to rethink the meaning of progress in broader, more realistic terms that are more consistent with the human condition. Demanding an honest accounting of reality represents a major step toward straightening out our future.

[ . . . . ]

In Steady-State Economics, Herman Daly invoked a similar analogy. To grow, he pointed out, is defined as “to spring up and develop to maturity.” People do not grow physically bigger and bigger throughout the life span (they would look like freaks if they did). At a certain point, they level off in physical size, but continue to develop the skills and qualities they need to sustain and enrich their lives. At a certain point, he suggested, wise and wealthy nations must do the same.7

The steady-state economy described by Daly and elaborated by others is radical and uncompromising. It rejects growth as we know it as a fixation on expansive accumulation that does not discriminate between good and ill consequences. This does not mean an end to “progress,” however. In terms people can recognize, the steady-state society continues to improve itself, developing and redeveloping internally, perfecting the social conditions that promote the public welfare and more fulfilling lives. The United States has the wherewithal to achieve this if it has the nerve to try.

Daly’s “steady state” is an economy in dynamic equilibrium that fulfills human needs without destroying the planet. It is an economy fully reconciled with nature’s limits and in harmony with the country’s abiding values of equality, freedom, democracy, and “life, liberty, and the pursuit of happiness.” Daly’s pioneering insights have gained a lot of ground among economic and social theorists in the intervening years, despite the hostility of orthodox doctrine. His perspective has been popularized as a foundation for sustainable development (though the meaning of sustainability is often corrupted in practice).

A concrete expression of Daly’s thinking is popularly known as the ecological footprint: a measure of how much humanity and industrialization have encroached upon and diminished nature’s capacity to replenish life. Even some leading corporations now promise to reduce their corporate “footprint.” The footprint of human activity (including the spoiling of natural resources like air, land, and water) is already overshooting nature’s carrying capacity by an estimated 25 percent, according to the Global Footprint Network. Biologists have called our era the Sixth Great Extinction, with thousands of species doomed by the shrinking habitats and failed ecosystems.

“Humanity is living off its ecological credit card,” said Mathis Wackernagel, the group’s executive director. “While this can be done for a short while, overshoot ultimately leads to liquidation of the planet’s ecological assets.”8

[8. Wackernagel said in announcing the release of the Living Planet Report 2006: “Humanity is living off its ecological credit card. While this can be done for a short while, overshoot ultimately leads to liquidation of the planet’s ecological assets, and the depletion of resources, such as the forests, oceans and agricultural land upon which our economy depends.” See Chris Hails, Jonathan Loh, and Steven Goldfinger, editors, The Living Planet Report 2006, WWF International, Institute of Zoology, and Global Footprint Network, October 24, 2006, http://www.footprintnetwork.org/newsletters/gfn_blast_0610.html.]

These are social and ecological wounds whose existence can no longer be evaded. They are defining realities that Americans must face and accept if they are to think clearly and honestly about transforming how we live and are organized as a society: the dream I describe as America the Possible. Reconstructing a promising society from the wreckage of the past is possible, though every aspect is difficult and lies beyond the usual expectations of what seems possible in politics. Failure is also possible. I won’t dwell on the consequences of failure because it means genuine decline: we would become a country that was past its best days and resigned to a dispiriting future. That is not where we are, nor where we want to go.

* * * *

For more information and discussion, see the following:

Index of Sustainable Economic Welfare

Genuine progress indicator

Ecological footprint

Green gross domestic product

Green national product

Environmental full cost accounting

Happy Planet Index

Human Development Index

Living Planet Index

Beyond GDP: Measuring and Achieving Global Genuine Progress

The Triple Bottom Line: What Is It and How Does It Work?

Herman Daly’s Ecological Economics – An Introductory Note

When Higher GDP can lead to Lower Welfare
The use of ISEW: the index of sustainable economic welfare

What is economic growth and are there limits to it?

The virtues of ignoring GDP
Dropping a bad habit

Someday Environment Costs will be factored into the GDP

Economic Growth: A Social Pathology

VALUING THE EARTH: Economics, Ecology, Ethics

This is the copy of an email sent to Mr. Herman Daly in 1996

Iowa Biking & Rural Politics

My brother, Nate, and I were bicycling on Iowa’s country roads. We were on a day outing. We met in the Waterworks Prairie Park by the Iowa River, from where we traveled to a nearby town (Hills, Iowa) and then crossed Coralville Dam. On the way we passed farmland mixed with housing, and we talked as we cruised along. Healthy exercise and fresh air.

Iowa is one of the best states in the country for bicyclists, including professionals wanting to train, which is why there are such things as Ragbrai here (my brother has been biking a lot in preparation for that particular event). The reason Iowa is great for biking is that the roads were planned in mostly square sections, a grid that goes back to the earliest settlements and farmland (Iowa was the first state, or one of the first, to be planned out in this manner). This type of planned community structure and civic infrastructure is part of the Midwestern DNA, unlike the haphazard (and litigation-prone) metes and bounds system that defined early development in the South (legal problems with land ownership were behind Daniel Boone’s and Abraham Lincoln’s families leaving Kentucky).

The other thing Iowa is known for is all these small towns regularly situated across the counties, even in rural areas. Many of these farm towns have died out or are in the process of dying, but many still survive. The surviving farm towns, besides the county seats, have often had to reinvent themselves or else become ghosts of their former selves. Even with the rural population drain, the massive road infrastructure is maintained because it is required for the movement of farm equipment and farm produce.

One of the results of the rural population drain is that many of the young have left. Those left behind are older and getting older every year. These small towns are typically filled with fewer young families, especially as the family farms have been bought up by large corporations.

This has created many small communities with little sense of the future. The old people there remember the way things used to be. They fear and resist change, but in doing so they are typically unwilling to invest in the future. Some of these towns essentially collectively commit suicide because the problems they face are so great and no one wants to take responsibility to invest in the next generation that is likely to move away. It is a vicious cycle of self-destruction or else, in some cases, an unavoidable downward spiral.

The shrinking populations have a cascade of effects, but let me first describe how it used to be.

Iowa counties were designed so that any farmer could drive his horse-drawn wagon to the county seat and back in a single day. The Amish living here still travel by wagon, and under these conditions their traditional communities thrive. Counties are relatively small, but many counties used to have a number of towns where basic needs could be regularly taken care of. In a typical town, there would be such things as grocery and farm supply stores, farmers’ markets, gas stations, repair shops, public schools, public libraries, etc. These were very civic-minded places, somewhat modeled on the New England ideal of local democracy.

The heart of each county is the county seat, where the courthouse and such are located and where parades and fairs are held, but the heart of the communities used to be the public school and the downtown. The pride of any decent-sized town was the local high school football or baseball team. However, to save money, many of the small schools have been shut down and replaced by centralized larger schools where kids are bussed in from far away. This has had a terrible impact on these traditional farm communities. All that is left of many of the remaining towns are the houses surrounding a now mostly emptied downtown with a few struggling businesses. The Walmarts have put the final nail in the coffin of most of the small downtowns.

This has been true all across the Midwest. My dad grew up in Alexandria, Indiana. It was once declared Small Town USA. In the decades following the post-war boom era, factories have closed down or laid off workers. My uncle has remained there because of his love of the place, but he struggles to make money with his dentistry business. Most of these small towns used to have dentists and doctors with offices in the downtown, but rural residents these days typically have to travel a lot further than they used to just to get basic healthcare, if they get it at all.

To make matters worse, meth addiction has become an epidemic in the rural Midwest. Meth is a drug that grows amidst desperation, spreading that desperation further like weeds that even Roundup can’t kill. The best options available in such a desperate environment are to work at the slaughterhouse or at Walmart, or else to work multiple jobs trying to support your family, which leaves you tempted by meth to give you that extra boost when you have no energy left. Of course, you can also go into the meth-making business itself, as meth can be cooked up easily in any kitchen or trailer.

Desperation also breeds politicians like Steve King. He is the congressman for a district in western Iowa that is the epitome of everything I describe. That district has the oldest average age of any population in the entire country. The reason Steve King is such an asshole is that he represents ornery old people who live in dying towns where there is absolutely no hope for the future. You have to pity a population so distraught that it would repeatedly vote for someone like Steve King. That is a group of people begging to be put out of their misery. They are angry at a world that has left them behind, and they have every reason to be angry, even though in their anger they embrace a demagogue who can’t and won’t solve their problems.

These aging rural folk mostly come from farm families that have farmed for generations. They are the last of their kind. In many cases, their family land has been sold and their children have left them. Society no longer has any use for them and so they have no use for society. The shrinking few of the next generation that have stuck around aren’t following in the family tradition of being independent farmers. Jefferson’s yeoman farmer vision of America is a thing of the past. These last rural holdouts embrace reactionary politics because they are fighting for a lost cause. Only in fiction are lost causes ever noble.

Reactionary politics doesn’t result in constructive and sustainable policies. Fiscal conservatism becomes the rallying cry of communities in retreat, communities full of people who have lost faith in anything greater that could be achieved. Building and maintaining town infrastructure is something that is done when it is seen as an investment that will one day pay off. To the reactionary mind, though, all change is seen as loss and destruction, even when it involves basic maintenance and simple improvements. Normal government functions become battles to be endlessly fought against. Local democracy can’t survive under such conditions, and so the remnants of civic-mindedness devolve into struggles for power. If the only self-respect in one’s view is resistance and refusal, it will be seized with a death grip.

Nate and I were discussing some of this on our jaunt about the countryside.

He lives in one of those small towns, West Branch. It used to be a thriving town, since the railroad passed right through the middle of it, but the tracks have been pulled up and it has partly become a bedroom community of Iowa City (where the University of Iowa is located and where I live). West Branch is the boyhood home of Herbert Hoover, the 31st US president. There is a federal park there that includes Hoover’s boyhood home and some other original buildings, and the park brings some money into the town.

Having been in West Branch for many years now, my brother has gained an inside view on what goes for small town life these days. His town lacks the desperation of some towns as there are jobs to be had in the larger, more prosperous cities nearby. I doubt there are many people left in West Branch who make a living by farming their own land. Still, the old families remain and they try to maintain their grip on their community.

This plays out in a number of ways. The older generation resists improving anything for fear that it will attract more people to move into town. They’d rather let the town crumble than risk it growing into something new, prosperity and hope for new generations be damned. Others in town, often newcomers like my brother (and all families are newcomers who haven’t been there for generations), want to improve the town for they are mostly young parents and young professionals who are hopeful about the future.

It is a battle of the old power elite against the rising of a new generation. The old power elite consists of a group of families that have had immense influence. They hold many of the political positions and they treat the volunteer fire department like a private club. These families are represented by a generation of old white guys, many of whom are in their 70s. They are known as the ‘Dinosaurs’. The mayor died last year. Some of the older city council members don’t have many years left before retiring, going senile, or dying. It is a government run by nepotism and cronyism, a typical good ol’ boys network.

West Branch is a perfect example of fiscal conservatism. There is always money for the fire department, of course. It is the pride and joy. Otherwise, there is never money for anything, and there is great resistance to raising taxes. Even when a federal grant was available, they wouldn’t take it to fix the sidewalks. Part of the reason was that the federal government has requirements about how work is contracted out, which means they couldn’t use their typical practice of cronyism. So, the sidewalks go on crumbling.

The local government is governed by those who don’t want to govern. They see their sole purpose as obstructing progress and maintaining their control, and they have been very good at achieving this end.

For some reason, the city government likes to give public property away. They gave away the city park to the Herbert Hoover National Park because they didn’t want to spend the money to maintain it, such as keeping it mowed, but now every time they want to have a public event at the former city park they have to get permission from the local representative of the federal government. They were donated a large old building and the property it was built on, a former retirement home as I recall. The land was worth $100,000 and the building was worth $100,000. They gave it to one of their cronies for $5,000, which means the city took a loss of $195,000, and that is a lot of money for such a tiny town. Their justification was that this crony sometimes did volunteer work for the city. I wish someone gave me $195,000 when I volunteered.

They don’t like the idea of public property. I guess it sounds too much like communism. If they could give the entire town away, they might consider it as long as it went to a member of one of the old families. As libertarians like to say, government is the problem. These Dinosaurs are taking seriously the idea of shrinking their local government so much that it can be drowned in a bathtub. Some might call that self-destruction, but they would consider it a victory.

Like many conservatives, they see salvation in big business. The West Branch city council gave a TIF (tax increment financing) agreement to a company on the promise that it would hire 100 residents. The company didn’t live up to its end of the deal when it laid off some people, but it gave the excuse that this wasn’t its fault because of the economy. So, the city council extended the TIF. Only after the company broke its promise a second time did the city council revoke the TIF. Meanwhile, the city lost massive amounts of money in taxes not paid.

If you add up all the money the West Branch government has given away or lost, it might add up into the millions. However, whenever any citizen group or committee seeks to do anything productive, the city council and the mayor will claim there isn’t enough money. That is fiscal conservatism for you.

The government of this town does the absolute minimum that it can get away with while keeping taxes as low as possible for town residents. They can get away with this partly because they live in the same county as the more prosperous Iowa City, which means they get more county funds than they pay in county taxes. All of this pays for the public schools in town and helps pay for other basic maintenance. All that the West Branch government has to concern itself with is that its few streets have their potholes filled and the snow plowed in the winter. The sewers probably need replacing and water pipes break every so often, but the infrastructure works well enough most of the time. The neat thing about infrastructure is that once it is built it can be neglected for decades and shunted off onto the next generation, until it becomes so big a problem that it can no longer be ignored.

A town like West Branch is a metaphor for our entire country. Like the rest of the country, it isn’t a bad place to live. The old white guys in West Branch government are like the old white guys in government everywhere else. This old political elite is part of a very large generation, the Baby Boomers (although the oldest of them are Silents or on the cusp), who have held onto power so long because they were followed by an extremely small generation, GenXers.

Many people, especially conservatives, like to idealize rural life. These days, though, rural life goes along with a lot of dysfunction, the side effects of globalized capitalism. Nearly everything is being concentrated into ever growing corporations, farms being no exception. Small independent farmers and business owners are becoming a rare species, the family farm and business rarer still. With factory farming, the land is being farmed even more intensively, which means even less sustainably. I often doubt that we are on the road to long-term prosperity. It can feel as though we are as much fighting against so-called ‘progress’ as we are looking for a progress worth fighting for. It is hard to blame old rural people for lashing out at a world they no longer understand.

I don’t think it is all doom and gloom, though. There are still plenty of small independent farmers fighting the good fight. Some have gone organic, looking for a niche market to make enough profit. There has been a growing market for locally grown produce. If the Amish can thrive in this modern society, then small farmers are far from being without hope. Besides, even small towns like West Branch have their up-and-coming young generation looking to the future rather than the past, overwhelmed as they may seem by the old folk.

We typically look to the big cities on the coasts in determining the winds of change, but a significant part of the US population still lives in rural states, and not all of them are aging reactionaries. Driving through Iowa, one still sees plenty of progress. Many rural people gladly embrace new technology, putting up wind turbines or renting their land out to those who do, and beneath these behemoths the cows graze. Iowa is a leader in wind energy, and most data shows the state doing relatively well compared to the rest of the country. The rural states west of the Mississippi fared extremely well even during the economic downturn. The economy could entirely collapse and there would still be a demand for corn, soy, and wheat.

More important to my mind, the Midwest has long been one of the breeding grounds for progressive, populist, and even radical politics. That fiercely independent spirit remains, even if the older generation has forgotten about it. If I were to look for the direction this country is turning toward, I’d probably look to a state like Wisconsin. The battles of local politics can be as inane as national politics, but I think local politics might have more impact than we realize.

Fear of the Future: Against Progress

I was wondering about why people support certain things that seem against their own interests, even their own openly stated interests. What made me think about this today is a Pew poll:

Lower-Income Republicans Say Government Does Too Little for Poor People

“Mitt Romney’s statement that he is focused solely on the problems of middle class Americans, not the poor, may not sit well with lower-income voters within his own party. Roughly a quarter of Republican and Republican-leaning registered voters have annual family incomes under $30,000, and most of them say that the government does not do enough for poor people in this country.”

The Republican Party has had a War on the Poor for decades. Republican politicians regularly attack the poor as lazy and as leeches on society. The official stance of the GOP is less money for welfare or at least less welfare money for the poor.

It seems absolutely insane that a poor person would vote Republican while hoping for more help from the government. As I’ve been studying American history, I was reminded of how some American colonists supported the British government against those who sought freedom and democracy, and of how some slaves supported the Confederacy during the Civil War.

People fear change. Republicans use the rhetoric of the fear of change. They speak of traditional values and make romanticized claims about the past. Many people fear change because they fear the perceived/imagined threats of chaos, of social disorder and instability.

That is similar to why many who would benefit from revolution supported the British Empire. To embrace change is to embrace the unknown. There is no way one can know that one will gain something greater than what one loses. The British Empire, despite all of its failings and oppression, did offer stability and protection.

The Quakers, for example, feared change because they knew they were surrounded by enemies. The Quakers had been persecuted horrifically by the Puritans. The Quakers were despised by the Southern and Tidewater elites (who saw all of Pennsylvania as a breeding ground of the lower sort). And the Quakers were constantly being threatened by the Scots-Irish in their own territory. It turns out the Quakers did benefit from revolution, but it wasn’t certain that they would benefit.

Slaves were in an even greater situation of facing the unknown, as they were intentionally kept ignorant. Most slaves had no idea what was going on in the North. And it was true that most Northerners wouldn’t welcome them as equal citizens. The fears of change that many slaves had weren’t entirely accurate, but then again they weren’t entirely inaccurate. The period of history directly following the Civil War was far from kind to African Americans. At least as slaves, their lives had stability and order. It took many generations and massive violence and oppression before African Americans would gain any civil rights victories.

It always comes down to fear of the future, fear of the unknown. That is what humans always face. Progress tends to benefit most people in the long run, but it doesn’t always benefit everyone and certainly doesn’t benefit everyone equally. The problem is that there is no other option. Civilization has set humanity on a course where we can’t just cling to the past. The world is changing whether we like it or not. We can embrace change and guide it toward our benefit, or we can resist change and allow someone else to decide our fate.

The democracy of e-books

Here is the link to a blog post by Quentin S. Crisp:

The business of books

The following are my responses. I want to be clear about one thing, though. These are my responses to my perception of Crisp’s presented view in this particular blog. My specific perceptions here, of course, may not be entirely accurate and most likely involve various biases and projections.

To speak of Crisp more generally, I like him and agree with him more than not. In this particular case, however, I found myself bewildered in trying to understand why Crisp’s ‘loathing’ was so strong, especially as his loathing seemed directed at a group of people of which I am a member, i.e., Kindle owners.

My first response:

I’m of the type who thinks change just happens and there ain’t nothin’ can be done about it. After civilization began, it was all downhill from there. I’m fatalistic about progress. I embrace it until civilization collapses. I’m curious where it will lead before then.

I bought a Kindle for various reasons, but my original reason was that I wanted something to replace my electronic dictionary. I still buy some physical books, not as much as I used to though. It’s a good thing because I was running out of room in my apartment.

By the way, why does “just sayin'” irritate you so much? I would assume it originates from American English. I’ve used the phrase “just sayin'” on occasion. I just find it amusing to say. It’s silly and stupid.

As I read your blog post, I must admit I felt some gut response to defend the world wide web. It’s ‘democracy’ in all of its beauty and ugliness. As Freck said in A Scanner Darkly, “Well, I like it.”

In early America, the government gave subsidies to presses so that it would be cheaper to publish newspapers and books. This also meant more opportunities for writers. Of course, not everyone had a newspaper column the way people now have blogs. But I’m sure the average published writing back then wasn’t all that well-edited. I was wondering about this. It would be an interesting analysis to look at first editions of books across the centuries to find out when writing was the most well-edited according to the standard grammar of the time period.

Having more writers does create more chaos. Even so, I’d point out that (since you were blogging about VALIS) I’m with PKD in having faith in chaos and the good it can safeguard. The corollary to chaos is innovation. Every age of innovation began with the crumbling of the previous age. We can’t know if it will lead to progress or destruction, but either way it can’t be avoided.

My second response:

On the whole I’m more sympathetic to chaos than order, as anyone who’s visited my flat can probably testify, but I think I’m most sympathetic of all to benign chaos – that is, self-regulating chaos of the idyllic kind which seems to be championed in the Dao De Jing, etc.

I’m also a man of much personal chaos. But there is a difference between one’s own chaos and someone else’s. And, as you say, there is a difference between benign chaos (benign to me, at least) and other varieties of chaos. I don’t know how benign PKD saw chaos as being, but he didn’t see chaos as an automatic enemy. He saw the divine as that which can’t be controlled, that which in fact will seek to avoid control. The divine sought hiding in the chaos so as not to be found by the demiurgic forces that seek to control the world for their own purposes.

The argument that is always raised with any new technology, when anyone objects, is basically that “it’s all good” or “you can’t stop change”. But the same argument is never used in the case of politics. In politics, the points themselves are generally argued, and people, however stupid their decisions may be as related to the points, hardly ever just revert to “all change is good and/or inevitable.” So why do this with technology, which is, after all, as much of human manufacture as politics?

That may be true for some or even for most, but it ain’t true for me. I see two warring tendencies in society’s progress. There are those who see all progress as good and those who see all progress as bad. An interesting middle position is that of Jeremy Rifkin in his book The Empathic Civilization. In Rifkin’s analysis, the progress of civilization is both destructive and creative. On the creative end, new technology (for traveling and communication) increases collective empathy. But it does so at a very high cost. Will our empathy for other people and other life increase quickly enough that we will find solutions to the destruction we’ve caused?

I don’t know. I just thought such a way of thinking might be applicable to this issue as well. Modern society changes ever more quickly which means much of the past gets lost. With the introduction of Western culture (including the Western invention of the book), many indigenous cultures are destroyed and lost forever. Likewise, with the introduction of new technologies, the traditions of the West can also become endangered. However, there is also a counter trend. For example, the digitization of books has saved many books from the dustbin of history. Some of these books only had one physical copy left remaining in the world, but now anyone anywhere can read them.

I think people have been hypnotised into thinking technology is inevitable and has a kind of universal objectivity to it, in other words, that it doesn’t have cultural implications or cultural bias. But all technologies have cultural implications and biases. Someone has made a decision somewhere to switch tracks to this or that thing.

Yeah, that is true. Even a simple technology like the book hypnotizes us into a certain way of looking at and being in the world. The printing press probably was the first to create, or at least widely promulgate, this perception of inevitability and universal objectivity. The vision of inevitable progress goes back at least to the Age of Enlightenment. More broadly, across societies around the world, a collective decision over generations was made to switch from oral to written, from stone and clay to scrolls, then to books, and now to e-books. No single person or even group of people is making this decision, but this isn’t to say that individual choices don’t have influence. It’s just that individuals are increasingly choosing e-books. Still, you are free to think people like me are wrong or stupid for choosing e-books.

My third response:

I think that PKD must have been at least ambivalent towards chaos. In Valis, he identifies ananke, or ‘blind chance’ (also translated as ‘necessity’, or could that be ‘you can’t stop change’?), as a symptom of evil in the universe, and generally seems to equate rationality and order with good.

Oh yeah. I’m sure PKD was ambivalent about lots of things. I should’ve clarified my thoughts. I was partly referencing PKD’s view of what he called “God in the gutter” or “God in the garbage”. PKD was fascinated with chaos, not that he idealized it.

People accept free will in politics and other areas of life, so why not in technology?

I have no clear opinion about freewill. Part of me is attracted to the view of philosophical pessimism. I don’t think individuals are all that free. We act according to our natures and our natures were formed (with genetics and early life experience) long before we had any opportunity to aspire to become self-willed agents. And, on the larger scale of society, I suspect we have even less willed influence.

As I see it, freewill is a very modern concept, if anything created and magnified by technology. Books are just one of the early technologies that have formed the modern sense of self. The book format was first used by the early Christians, and it was in that era that individualism was beginning to become what we know of it. With modern technology, people have an even stronger sense of self and of a self-willed relation to the world.

The problem you seem to be perceiving is that as the masses gain more freedom then more specific groups lose their monopoly on specific areas. When everyone can be a writer, everyone can influence the culture of writing, not just ‘professional’ published authors. Writing is no longer an elite profession. The internet and other new technologies have democratized writing and empowered the average person. For example, I’m just a parking ramp cashier and yet I’m talking to you, a published author. Online, I am equal to you and we’re both equal to everyone else. Power and authority have little meaning online, unless you’re one of the people who owns a major internet company like Google.

You have a sense, as an author, of losing power even as many people around the world are gaining power through more opportunities for reading and writing. But that isn’t how I see it. A small press author like you gets more readers from more countries for the very reason of newer technology. A century ago, you might never have been published at all, or you might have remained almost entirely unknown. There are trade-offs. You gain more ability to reach more people, but so does everyone else. Also, you have been self-publishing recently. Yes, you are more careful in editing, but because of limited funds many small press publishers (whether self-published or not) often have issues with quality editing, as it is very time-consuming. I know Mike has bought expensive small press books with many editing problems. So why blame the average person for such issues? Why should anyone get to decide who can publish or not? Who would be on this publishing board of literary oligarchs?

I know you aren’t actually promoting oligarchy or anything. But how do you think the average person would be persuaded to your position? Considering the increase of writers among average people, you’d probably have a hard time even convincing writers of your position. This, however, doesn’t mean your position is wrong. Many of my own positions seem to be in the minority, which doesn’t cause me to stop holding those positions. However, on this issue and as an American, I do have a healthy skepticism of any elite who wishes to tell the masses what they should do. Maybe if I were a part of the elite of professional published writers my views would be different… or maybe not. Matt Cardin bought a Kindle before I did. Mike is a collector of rare books and a lover of fine books. He also has been considering buying an e-reader so as not to have to read the expensive copies of books he owns.

There are a couple of factors I see.

First, there is an increase of freewill rather than a decrease. It’s just that there is greater equality of freewill (more opportunities to influence, more choices available) than ever before in all of the history of civilization. However, this creates other problems. As the ability to publish writing spreads to the lower classes, the upper classes lose control of defining correct and acceptable grammar. As the English language spreads to diverse cultures, British English becomes less dominant in defining correct and acceptable English grammar. For example, the more informal American English has become more popular because of American media.

Second, there is the development of large corporations. It’s ultimately not the average person defining writing and publishing. Large corporations (like Amazon and book publishing companies) aren’t democracies. This is probably where your insight fits in. These big businesses often promote a false sense of freedom and opportunity. What we’re experiencing is a shift in who is the elite controlling society. In the US, the founders were mostly an intellectual elite and small business owners who were actually fighting against a transnational corporation (the British East India Company). But now such transnational corporations have taken over every major country and economy, and taken over society in general. It’s the corporate elite, instead of the traditional intellectual elite, who now mostly control the publishing of books. It’s also large corporations who own most of the media companies (newspapers, TV, movies, internet, etc). It’s these companies who have the greatest power to influence language, and their main motivation is profit, not maintaining the proud tradition of literature.

Eugenics was ‘progress’ and a new idea once. Should we have accepted it merely on those terms?

There is always the question of defining ‘progress’. I would, of course, agree that not all ‘progress’ is good.

However, I would point out that eugenics as a basic idea isn’t new. Spartans supposedly threw deformed babies off of a cliff. Male cats when they become the new alpha male will often kill the kittens of the former alpha male. The only modern part is that eugenics was able to be done on a larger scale and done with more precision. I would say that eugenics isn’t progress itself, although it can be used in the service of certain visions of progress.

I think everything I’ve said still stands. If there were concomitant spiritual or social progress, technological progress would be simply useful, possibly irrelevant, probably harmless. But I don’t think that genetic modification, for instance, will represent true progress, because it will be an amplification of the steering will of a number of individuals in order to wipe from existence the possibility of certain other steering wills.

I also think everything I’ve said still stands. 😉

Actually, I don’t know to what degree we disagree. Like you, I’m not blindly for progress. Maybe less like you, I’m not against progress either. Like most issues, I’m agnostic about progress. It brings out my fatalist side. I can read someone like Derrick Jensen and find myself strongly persuaded. All of civilization (books and e-readers alike) is built on and maintained through massive dysfunction, oppression, and violence. On the other hand, nothing has yet stopped the march of civilization’s progress, despite millennia of doomsayers.

I honestly don’t think it matters whether I like e-readers or not. I loathe lots of things and yet those things continue to exist. I loathe war and yet my tax money funds wars where worse things than Kindles happen.

I own a Kindle not because I have a strong opinion in support of e-readers but because I have a strong opinion about reading. I like to read and love books, in any and all formats. An e-book if it’s public domain is free and if not it’s still usually way cheaper than a physical book. As a relatively poor person, I can get more reading material for my money with e-books. As a person living in a relatively small apartment, I can from a practical perspective own more e-books than I could physical books. Even my public library already allows the public to ‘check out’ e-books. I personally like having my opportunities and choices increased. If that happens through e-readers, it is good by me. Or, if it happens by some other format, it is also good by me.

Similarly, I don’t see Kindle as a form of real progress, since what it does is allow people who don’t care about books and literature to call the shots.

Yes, I understand you feel strongly about this. But why does any individual get to decide which people are perceived to care? I suspect many of these people do care, some to a great degree. Like many normal people, I care. Don’t I matter? Defining who cares is like defining what is or isn’t literature, what is or isn’t art. In some ways, you might be right. Literature as we know it may be in the process of being destroyed. This is just like how Socrates was right that the oral tradition as he knew it was being destroyed by written texts. The ironic part is that Socrates’ supposed words are now recorded in text. It’s also ironic that your views here are recorded on a blog.

We don’t really know what will happen, and I hope the outcome ends up being more positive than negative, but I honestly don’t see much that’s positive coming out of it at the moment.

Yep. I don’t entirely lack hope, but in the long run I think it’s all doomed. We’re all just going along for the ride. Sometimes the ride is fun, often not.

Reading is already one of the most egalitarian of cultural media. It is an open university.

It’s true that it is to an extent an open university, but not equally so. Poor people in wealthy countries have a lot less access to this “open university”. And people in poor countries have had little access to it at all until very recently. The internet and e-books have opened up this “open university” to the entire world.

Now, however, Amazon have got the thin end of their wedge into reading, and I’m rather afraid (this seems to be the direction), that before long, Amazon (with Kindle) will be saying, “All those who want to come to reading, must do so by me, and my technology. All those who want to come to writing, must do so by me, and my technology. Keep up. Plug in. Buy the next model.”

The issue of transnational corporations taking over the world isn’t the same as the issue of e-readers, although like everything in life there is overlap. Right now, there are numerous devices (computers, tablets, pads, e-readers, smart phones, etc) that anyone can use to read almost any book (or at least any book that has been digitized) and such devices are becoming cheaper and more widely available. Right now, even poor people can access some kind of device that allows them to access the entire world’s library of public domain literature. I see that as a good thing.

Yes, many plutocrats would like to use the power and wealth of corporations to take over the world. They might be successful, but don’t blame the average person who simply wants more freedom and opportunity to cheaply and easily access reading material. In time, the natural trend of things should lead to open-source e-readers being developed, just as there are open-source computers and browsers.

The difference between us, in this matter, seems to be where we direct our loathing the most. The main problem I see is a plutocratic elite rather than the democratic masses. Democracy can be messy and ugly, but I think it’s better than the alternative. You seem to be equating the plutocratic elite with the democracy-seeking masses because the former is always trying to manipulate the latter. Even if the latter is being manipulated, why blame them instead of those who manipulate? Why not try to end their being manipulated rather than trying to end their having influence?

I realize that you have an old fashioned respect for the intellectual elite. I do too in many ways. I think the demise of the intellectual elite has had major problems. Maybe there will always be an elite. If so, I’d choose an intellectual elite over a plutocratic elite. In case you’re interested, Chris Hedges writes about the loss of power and influence among the intellectual elite in his book Death of the Liberal Class.

I would emphasize that this issue is part of a larger set of issues. Reading, writing, and publishing are being democratized just as knowledge and education are being democratized. The first public libraries appeared only in recent centuries. For most of the history of ‘Great Literature’, most people had little or no access to any book besides the Bible, and often not even that. Public education is likewise very new. I think it was Jefferson who helped create the first publicly funded university. Now, starting in the mid-20th century, almost anyone in the West can go to college if they really want to and if they have basic intelligence.

There is another tidbit of history related to American and British history. Thomas Paine was a working-class craftsman. His father, a Quaker, taught him a love of learning and made sure he received a basic education. But lack of money and social position prevented Paine from following a scholarly profession. Fortunately, he went to London, where he discovered many self-educated people. The lower classes weren’t allowed into the universities, so they paid lecturers to teach them. It was the rise of democracy that first took form through knowledge and education. From the perspective of the elite, this led to what was seen as chaos challenging tradition, the masses challenging authority. It probably didn’t look like democracy as we know it. During this era, there was much rioting and violence. An old order was collapsing.

The democratization of knowledge and education has led to problems in some ways. It created a literate middle class who mostly read crappy pulp fiction, but it also created a massive publishing industry that made books available to average people. It’s this pulp fiction industry that allowed someone like PKD to make a living at writing, despite the literary elite at the time thinking his writing was worthless.

I’m far from being an optimist, but apparently I’m the one defending optimism. I suppose I’m just playing Devil’s Advocate. Maybe it’s easy for me to be an optimist as I don’t have skin in the game in the same way you do. Your livelihood is dependent on book publishing. Nonetheless, I would point out that, from a practical perspective, if you want to continue to make a living as an author, you should embrace e-readers. However, if principle is more important than profit, you are free to fight the Goliath to your dying breath. I wouldn’t hold that against you. We all have to pick our fights.